diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Acrobat Xi Pro 11.0.07 Serial Number.md b/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Acrobat Xi Pro 11.0.07 Serial Number.md
deleted file mode 100644
index 554bb0b96956ef36d1929949638c3f0769ff6a95..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Acrobat Xi Pro 11.0.07 Serial Number.md
+++ /dev/null
@@ -1,25 +0,0 @@

# How to Find Your Adobe Acrobat XI Pro 11.0.07 Serial Number

If you have purchased Adobe Acrobat XI Pro 11.0.07, you will need a serial number to activate your product and access its features. A serial number is a unique code that identifies your software license. Without it, you will not be able to use Adobe Acrobat XI Pro 11.0.07 properly.

adobe acrobat xi pro 11.0.07 serial number

Download Zip ✸✸✸ https://imgfil.com/2uxZVW

There are different ways to find your serial number depending on how you obtained your product. Here are some common scenarios and how to locate your serial number:

Once you have your serial number, you can enter it during the installation process or after launching the software for the first time. Follow the on-screen instructions to complete the activation process and enjoy using Adobe Acrobat XI Pro 11.0.07.

Adobe Acrobat XI Pro 11.0.07 is powerful and versatile software that allows you to create, edit, convert, sign, and share PDF documents. You can also use it to fill out forms, add comments, apply digital signatures, protect your files, and collaborate with others. Adobe Acrobat XI Pro 11.0.07 is compatible with Windows and Mac operating systems, and it supports various file formats, such as Word, Excel, PowerPoint, JPEG, PNG, and more.

Some of the key features of Adobe Acrobat XI Pro 11.0.07 include:

If you want to learn more about Adobe Acrobat XI Pro 11.0.07 and how to use it effectively, you can visit the official website at https://www.adobe.com/products/acrobatpro.html or check out the online tutorials at https://helpx.adobe.com/acrobat/tutorials.html. You can also contact the customer support team at https://helpx.adobe.com/contact.html if you have any questions or issues.
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Freeserialnumberencore500 __FULL__.md b/spaces/1gistliPinn/ChatGPT4/Examples/Freeserialnumberencore500 __FULL__.md
deleted file mode 100644
index 7f11654b3e95452564688b11f962b60ee08e95f5..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Freeserialnumberencore500 __FULL__.md
+++ /dev/null
@@ -1,58 +0,0 @@
freeserialnumberencore500

DOWNLOAD ---> https://imgfil.com/2uy0Gi

If other options are ordered with Encore S (e.g., quad seat configurations), the model number uses a three-letter and two-number model code.

Product description

Encore is available in three rows, eight seats, or twelve seats. The seating is fixed, meaning there are no middle or rear-facing seats. The optional driver's side seat comes with a center armrest, which is only available in the front row. The front seats use the same bench seats as the Citation III, but are more softly upholstered. There are no armrests on the driver's side of the front seats. The second row is a bench with fold-down seating for the rear passengers. There are no seats in the rear row, nor do any seats fold.

The Encore can be ordered with three engine options: 2.5 L Tigershark inline-four engines with or at 5600 rpm, 2.8 L Tigershark V6 engines with at 5200 rpm, or 3.0 L Tigershark V6 engines with at 5500 rpm. The 2.5 L Tigershark is available in two grades: and. The 2.5 Tigershark produces with the, and with the. The 2.8 Tigershark produces with the. The 3.0 Tigershark produces with the.

A navigation system with eight-inch color display, Sirius satellite radio, auxiliary audio input jack, and either AM/FM or CD/MP3 radio are available as options. Navigation system and GPS inputs are standard on the 6th Avenue, 6th Avenue S, and 6th Avenue S6 models, but not on the 6th Avenue SE6.

Engines can be ordered with either single or dual exhaust outlets.

Numerous options are available, including:

- front and rear bumper extenders
- chrome wheels
- carpet floor mats
- interior carpeting
- floor mats
- carpet
- cargo cover
- cargo cover with locking features
- DVD-Audio and CD-Audio capability
- dual fuel tanks
- auxiliary fuel cell
- safety kit
- exterior garnish kit
- grille guards
- cargo net
- side mirror covers
- power antenna
- rain-sensing wipers
- power windows
- heated windshield

diff --git a/spaces/1phancelerku/anime-remove-background/Download WIFI Driver for Lenovo 20207 - Compatible with Windows 10 (64-bit) and All Models.md b/spaces/1phancelerku/anime-remove-background/Download WIFI Driver for Lenovo 20207 - Compatible with Windows 10 (64-bit) and All Models.md
deleted file mode 100644
index 5713dffe8481a96b3d25ba4b3de8033e7b1d2911..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download WIFI Driver for Lenovo 20207 - Compatible with Windows 10 (64-bit) and All Models.md
+++ /dev/null
@@ -1,115 +0,0 @@
# How to Download and Install WiFi Driver for Lenovo 20207 Laptop

If you have a Lenovo 20207 laptop, you may need to download and install a WiFi driver to connect to wireless networks. A WiFi driver is a software program that enables your laptop to communicate with wireless devices such as routers, modems, and access points. Without a WiFi driver, you will not be able to access the internet, share files, or use online services on your laptop.

In this article, we will show you how to download and install the WiFi driver for your Lenovo 20207 laptop. We will also explain what a WiFi driver is and why you need it. By following these simple steps, you will be able to enjoy wireless connectivity on your laptop.

download driver wifi lenovo 20207

Download: https://jinyurl.com/2uNSPp

## What is a WiFi Driver and Why Do You Need It?

### A WiFi driver is a software program that enables your laptop to communicate with wireless networks

A WiFi driver is a type of device driver that acts as an interface between your laptop's hardware and software. It allows your laptop's wireless card to send and receive data packets from wireless networks. A wireless card is a component that enables your laptop to connect to wireless devices such as routers, modems, and access points.

A WiFi driver is usually specific to your laptop's model, wireless card, and operating system. It contains information about how to configure, control, and operate your wireless card. It also contains instructions on how to handle different types of wireless networks, such as public, private, or encrypted ones.

### You need a WiFi driver to access the internet, share files, and use online services on your laptop

A WiFi driver is essential for using wireless connectivity on your laptop. Without a WiFi driver, your laptop will not be able to recognize or connect to any wireless network. This means that you will not be able to access the internet, share files, or use online services on your laptop.

A WiFi driver also helps improve the performance and stability of your wireless connection. It ensures that your wireless card works properly and efficiently. It also prevents errors, crashes, or compatibility issues that may occur due to outdated or corrupted drivers.

## How to Find Out the Model and Operating System of Your Lenovo 20207 Laptop

### You can find out the model of your laptop by checking the label on the bottom or the box

The easiest way to find out the model of your Lenovo 20207 laptop is to check the label on the bottom of your laptop or the box that it came in. The label should have a sticker that shows the model name, serial number, and product key of your laptop. You can also find the model name on the top right corner of your laptop's keyboard.
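If the label is worn off or missing, the same machine-type information can be read from Windows itself. The snippet below is a minimal sketch of that approach in Python, not part of the official Lenovo instructions: it assumes a Windows system where the built-in `wmic` utility is still available (it is deprecated on recent Windows releases), and it simply prints the manufacturer-reported product name, which for this machine family should contain "20207".

```python
# Sketch: read the machine type from Windows instead of the bottom label.
# Assumes Windows with the built-in "wmic" tool available on the PATH.
import subprocess

def get_machine_model() -> str:
    # "csproduct" exposes the SMBIOS product name and vendor,
    # e.g. a Lenovo machine type such as "20207".
    result = subprocess.run(
        ["wmic", "csproduct", "get", "name,vendor"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(get_machine_model())
```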
### You can find out the operating system of your laptop by following these steps

The operating system of your laptop is the software that runs your laptop and manages its resources. It also provides the user interface and the applications that you use on your laptop. The most common operating systems for laptops are Windows 10, Windows 8.1, and Windows 7.
To find out the operating system of your Lenovo 20207 laptop, you can follow these steps:

### Windows 10: Click on Start > Settings > System > About

On the About page, you will see the edition, version, and build of your Windows 10 operating system. You will also see the system type, which indicates whether your laptop has a 32-bit or a 64-bit processor.

### Windows 8.1: Swipe in from the right edge of the screen > Settings > PC info

On the PC info page, you will see the edition and version of your Windows 8.1 operating system. You will also see the system type, which indicates whether your laptop has a 32-bit or a 64-bit processor.

### Windows 7: Click on Start > Control Panel > System and Security > System

On the System page, you will see the edition and service pack of your Windows 7 operating system. You will also see the system type, which indicates whether your laptop has a 32-bit or a 64-bit processor.
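The same details can also be read programmatically, which is convenient when gathering information for a support request. The following is a minimal sketch using only the Python standard library; the `platform` module works on all of the Windows versions above, and the values shown in the comments are illustrative examples rather than guaranteed output.

```python
# Sketch: print the OS release, build, and processor architecture.
import platform

print("System:      ", platform.system())   # e.g. "Windows"
print("Release:     ", platform.release())  # e.g. "10", "8.1", "7"
print("Version:     ", platform.version())  # build string, e.g. "10.0.19045"
print("Architecture:", platform.machine())  # e.g. "AMD64" on a 64-bit processor
```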
## How to Download the WiFi Driver for Your Lenovo 20207 Laptop

### You can download the WiFi driver from the Lenovo support website by following these steps

The Lenovo support website is the official source of drivers and software for your Lenovo 20207 laptop. You can download the WiFi driver that is compatible with your laptop's model, wireless card, and operating system from this website. To do so, you can follow these steps:

### Go to [Lenovo Support] and enter your laptop model in the search box

On the Lenovo support website, you will see a search box where you can enter your laptop model. Type in "Lenovo 20207" and hit Enter. You will be directed to the product page of your laptop.

### Select your operating system from the drop-down menu

On the product page of your laptop, you will see a drop-down menu where you can select your operating system. Choose the one that matches your laptop's operating system, such as Windows 10, Windows 8.1, or Windows 7.

### Click on Drivers & Software and then on Networking: Wireless LAN

On the product page of your laptop, you will see a tab called Drivers & Software. Click on it to see all the drivers and software available for your laptop. Then, click on Networking: Wireless LAN to see all the WiFi drivers for your laptop.

### Choose the WiFi driver that matches your wireless card and download it

On the Networking: Wireless LAN page, you will see different WiFi drivers for different wireless cards. You need to choose the one that matches your wireless card. To find out what wireless card you have, you can check the label on the bottom of your laptop or the box that it came in. You can also use a tool like [Speccy] to scan your laptop and find out the details of your wireless card.

Once you have identified your wireless card, you can choose the corresponding WiFi driver from the list and click on the download button. You will be asked to save the file to your laptop. Choose a location where you can easily find it later, such as your desktop or downloads folder.
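If you would rather not install a third-party tool, the wireless adapter and its current driver can also be listed from Windows itself. Below is a minimal sketch that shells out to the built-in `netsh` command; it assumes a Windows machine with the WLAN service available, and the fields in its report (adapter name, vendor such as Intel, Realtek, or Qualcomm, and driver version) are what you would match against the drivers listed on the Lenovo page.

```python
# Sketch: show the installed wireless adapter and driver details on Windows.
# Assumes the built-in "netsh" command and the WLAN AutoConfig service are present.
import subprocess

result = subprocess.run(
    ["netsh", "wlan", "show", "drivers"],
    capture_output=True, text=True,
)
# The report includes the interface name, vendor, driver version and date;
# use these to pick the matching download from the Lenovo support page.
print(result.stdout or result.stderr)
```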
## How to Install the WiFi Driver for Your Lenovo 20207 Laptop

### You can install the WiFi driver by following these steps

After you have downloaded the WiFi driver, you need to install it on your laptop. This will update the driver for your wireless card and enable it to work properly with wireless networks. To install the WiFi driver, you can follow these steps:

### Locate the downloaded file and double-click on it to run it

Go to the location where you saved the WiFi driver file and locate it. It should have a name like "wlanxxxx.exe" or something similar. Double-click on the file to run it. You may see a security warning asking you to confirm if you want to run the file. Click on Yes or Run to proceed.

### Follow the on-screen instructions to complete the installation process

A window will open that will guide you through the installation process. You may need to accept the license agreement, choose the installation location, and click on Next or Install to continue. Follow the on-screen instructions until the installation is complete.

### Restart your laptop and check if the WiFi is working properly

After the installation is finished, you may need to restart your laptop for the changes to take effect. Click on Finish or Restart Now to do so. When your laptop restarts, check if the WiFi icon is visible on the taskbar and if you can connect to wireless networks. If everything is working fine, you have successfully installed the WiFi driver for your Lenovo 20207 laptop.

## Conclusion

You have learned how to download and install the WiFi driver for your Lenovo 20207 laptop. By following these simple steps, you can enjoy wireless connectivity on your laptop and access the internet, share files, and use online services. If you have any questions or issues, you can contact Lenovo support for assistance.

Here are some FAQs that may help you:

## FAQs

I hope you found this article helpful and informative. If you have any feedback or suggestions, please let me know in the comments section below. Thank you for reading!
\ No newline at end of file diff --git a/spaces/2ndelement/voicevox/test/test_mock_synthesis_engine.py b/spaces/2ndelement/voicevox/test/test_mock_synthesis_engine.py deleted file mode 100644 index c06a0504a37d316c4769fcf0c658ac245f0e50d8..0000000000000000000000000000000000000000 --- a/spaces/2ndelement/voicevox/test/test_mock_synthesis_engine.py +++ /dev/null @@ -1,140 +0,0 @@ -from unittest import TestCase - -from voicevox_engine.dev.synthesis_engine import MockSynthesisEngine -from voicevox_engine.kana_parser import create_kana -from voicevox_engine.model import AccentPhrase, AudioQuery, Mora - - -class TestMockSynthesisEngine(TestCase): - def setUp(self): - super().setUp() - - self.accent_phrases_hello_hiho = [ - AccentPhrase( - moras=[ - Mora( - text="コ", - consonant="k", - consonant_length=0.0, - vowel="o", - vowel_length=0.0, - pitch=0.0, - ), - Mora( - text="ン", - consonant=None, - consonant_length=None, - vowel="N", - vowel_length=0.0, - pitch=0.0, - ), - Mora( - text="ニ", - consonant="n", - consonant_length=0.0, - vowel="i", - vowel_length=0.0, - pitch=0.0, - ), - Mora( - text="チ", - consonant="ch", - consonant_length=0.0, - vowel="i", - vowel_length=0.0, - pitch=0.0, - ), - Mora( - text="ワ", - consonant="w", - consonant_length=0.0, - vowel="a", - vowel_length=0.0, - pitch=0.0, - ), - ], - accent=5, - pause_mora=Mora( - text="、", - consonant=None, - consonant_length=None, - vowel="pau", - vowel_length=0.0, - pitch=0.0, - ), - ), - AccentPhrase( - moras=[ - Mora( - text="ヒ", - consonant="h", - consonant_length=0.0, - vowel="i", - vowel_length=0.0, - pitch=0.0, - ), - Mora( - text="ホ", - consonant="h", - consonant_length=0.0, - vowel="o", - vowel_length=0.0, - pitch=0.0, - ), - Mora( - text="デ", - consonant="d", - consonant_length=0.0, - vowel="e", - vowel_length=0.0, - pitch=0.0, - ), - Mora( - text="ス", - consonant="s", - consonant_length=0.0, - vowel="U", - vowel_length=0.0, - pitch=0.0, - ), - ], - accent=1, - pause_mora=None, - ), - ] - self.engine = MockSynthesisEngine(speakers="", supported_devices="") - - def test_replace_phoneme_length(self): - self.assertEqual( - self.engine.replace_phoneme_length( - accent_phrases=self.accent_phrases_hello_hiho, - speaker_id=0, - ), - self.accent_phrases_hello_hiho, - ) - - def test_replace_mora_pitch(self): - self.assertEqual( - self.engine.replace_mora_pitch( - accent_phrases=self.accent_phrases_hello_hiho, - speaker_id=0, - ), - self.accent_phrases_hello_hiho, - ) - - def test_synthesis(self): - self.engine.synthesis( - AudioQuery( - accent_phrases=self.accent_phrases_hello_hiho, - speedScale=1, - pitchScale=0, - intonationScale=1, - volumeScale=1, - prePhonemeLength=0.1, - postPhonemeLength=0.1, - outputSamplingRate=24000, - outputStereo=False, - kana=create_kana(self.accent_phrases_hello_hiho), - ), - speaker_id=0, - ) diff --git a/spaces/A00001/bingothoo/src/pages/api/sydney.ts b/spaces/A00001/bingothoo/src/pages/api/sydney.ts deleted file mode 100644 index 0e7bbf23d77c2e1a6635185a060eeee58b8c8e66..0000000000000000000000000000000000000000 --- a/spaces/A00001/bingothoo/src/pages/api/sydney.ts +++ /dev/null @@ -1,62 +0,0 @@ -import { NextApiRequest, NextApiResponse } from 'next' -import { WebSocket, debug } from '@/lib/isomorphic' -import { BingWebBot } from '@/lib/bots/bing' -import { websocketUtils } from '@/lib/bots/bing/utils' -import { WatchDog, createHeaders } from '@/lib/utils' - - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - const conversationContext = req.body - const headers = 
createHeaders(req.cookies) - debug(headers) - res.setHeader('Content-Type', 'text/stream; charset=UTF-8') - - const ws = new WebSocket('wss://sydney.bing.com/sydney/ChatHub', { - headers: { - ...headers, - 'accept-language': 'zh-CN,zh;q=0.9', - 'cache-control': 'no-cache', - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - pragma: 'no-cache', - } - }) - - const closeDog = new WatchDog() - const timeoutDog = new WatchDog() - ws.onmessage = (event) => { - timeoutDog.watch(() => { - ws.send(websocketUtils.packMessage({ type: 6 })) - }, 1500) - closeDog.watch(() => { - ws.close() - }, 10000) - res.write(event.data) - if (/\{"type":([367])\}/.test(String(event.data))) { - const type = parseInt(RegExp.$1, 10) - debug('connection type', type) - if (type === 3) { - ws.close() - } else { - ws.send(websocketUtils.packMessage({ type })) - } - } - } - - ws.onclose = () => { - timeoutDog.reset() - closeDog.reset() - debug('connection close') - res.end() - } - - await new Promise((resolve) => ws.onopen = resolve) - ws.send(websocketUtils.packMessage({ protocol: 'json', version: 1 })) - ws.send(websocketUtils.packMessage({ type: 6 })) - ws.send(websocketUtils.packMessage(BingWebBot.buildChatRequest(conversationContext!))) - req.socket.once('close', () => { - ws.close() - if (!res.closed) { - res.end() - } - }) -} diff --git a/spaces/AIConsultant/MusicGen/audiocraft/utils/__init__.py b/spaces/AIConsultant/MusicGen/audiocraft/utils/__init__.py deleted file mode 100644 index 75e25a0212f98e4a18d97c86c6cda225636a3215..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/utils/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
-"""Utilities.""" diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/utils.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/utils.py deleted file mode 100644 index 2a22213b627ebee77ab3d0bda3a59d1c3ade4040..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/utils.py +++ /dev/null @@ -1,73 +0,0 @@ -import importlib - -from inspect import isfunction - -import os -import soundfile as sf - -def seed_everything(seed): - import random, os - import numpy as np - import torch - - random.seed(seed) - os.environ['PYTHONHASHSEED'] = str(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed(seed) - torch.backends.cudnn.deterministic = True - torch.backends.cudnn.benchmark = True - -def save_wave(waveform, savepath, name="outwav"): - if type(name) is not list: - name = [name] * waveform.shape[0] - - for i in range(waveform.shape[0]): - path = os.path.join( - savepath, - "%s_%s.wav" - % ( - os.path.basename(name[i]) - if (not ".wav" in name[i]) - else os.path.basename(name[i]).split(".")[0], - i, - ), - ) - sf.write(path, waveform[i, 0], samplerate=16000) - -def exists(x): - return x is not None - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -def count_params(model, verbose=False): - total_params = sum(p.numel() for p in model.parameters()) - if verbose: - print(f"{model.__class__.__name__} has {total_params * 1.e-6:.2f} M params.") - return total_params - - -def get_obj_from_str(string, reload=False): - module, cls = string.rsplit(".", 1) - if reload: - module_imp = importlib.import_module(module) - importlib.reload(module_imp) - return getattr(importlib.import_module(module, package=None), cls) - - -def instantiate_from_config(config): - if not "target" in config: - if config == "__is_first_stage__": - return None - elif config == "__is_unconditional__": - return None - raise KeyError("Expected key `target` to instantiate.") - return get_obj_from_str(config["target"])(**config.get("params", dict())) - -def default_audioldm_config(): - return {'wave_file_save_path': './output', 'id': {'version': 'v1', 'name': 'default', 'root': '/mnt/fast/nobackup/users/hl01486/projects/general_audio_generation/AudioLDM-python/config/default/latent_diffusion.yaml'}, 'model': {'device': 'cuda', 'reload_from_ckpt': '/mnt/fast/nobackup/scratch4weeks/hl01486/exps/audio_generation/stablediffusion/LDM/audioverse/2023_01_14_full_F4_B_spatial_v2_v1/checkpoints/last.ckpt', 'target': 'audioldm.pipline.LatentDiffusion', 'params': {'base_learning_rate': 5e-06, 'linear_start': 0.0015, 'linear_end': 0.0195, 'num_timesteps_cond': 1, 'log_every_t': 200, 'timesteps': 1000, 'first_stage_key': 'fbank', 'cond_stage_key': 'waveform', 'latent_t_size': 256, 'latent_f_size': 16, 'channels': 8, 'cond_stage_trainable': True, 'conditioning_key': 'film', 'monitor': 'val/loss_simple_ema', 'scale_by_std': True, 'unet_config': {'target': 'audioldm.latent_diffusion.openaimodel.UNetModel', 'params': {'image_size': 64, 'extra_film_condition_dim': 512, 'extra_film_use_concat': True, 'in_channels': 8, 'out_channels': 8, 'model_channels': 128, 'attention_resolutions': [8, 4, 2], 'num_res_blocks': 2, 'channel_mult': [1, 2, 3, 5], 'num_head_channels': 32, 'use_spatial_transformer': True}}, 'first_stage_config': {'base_learning_rate': 4.5e-05, 'target': 'audioldm.variational_autoencoder.autoencoder.AutoencoderKL', 'params': {'monitor': 'val/rec_loss', 'image_key': 'fbank', 'subband': 1, 
'embed_dim': 8, 'time_shuffle': 1, 'ddconfig': {'double_z': True, 'z_channels': 8, 'resolution': 256, 'downsample_time': False, 'in_channels': 1, 'out_ch': 1, 'ch': 128, 'ch_mult': [1, 2, 4], 'num_res_blocks': 2, 'attn_resolutions': [], 'dropout': 0.0}}}, 'cond_stage_config': {'target': 'audioldm.clap.encoders.CLAPAudioEmbeddingClassifierFreev2', 'params': {'key': 'waveform', 'sampling_rate': 16000, 'embed_mode': 'audio', 'unconditional_prob': 0.1}}}}} \ No newline at end of file diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/tts/fs2_utils.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/tts/fs2_utils.py deleted file mode 100644 index 092550863d2fd72f008cc790bc6d950340e68182..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/tts/fs2_utils.py +++ /dev/null @@ -1,173 +0,0 @@ -import matplotlib - -matplotlib.use('Agg') - -import glob -import importlib -from utils.cwt import get_lf0_cwt -import os -import torch.optim -import torch.utils.data -from utils.indexed_datasets import IndexedDataset -from utils.pitch_utils import norm_interp_f0 -import numpy as np -from tasks.base_task import BaseDataset -import torch -import torch.optim -import torch.utils.data -import utils -import torch.distributions -from utils.hparams import hparams - - -class FastSpeechDataset(BaseDataset): - def __init__(self, prefix, shuffle=False): - super().__init__(shuffle) - self.data_dir = hparams['binary_data_dir'] - self.prefix = prefix - self.hparams = hparams - self.sizes = np.load(f'{self.data_dir}/{self.prefix}_lengths.npy') - self.indexed_ds = None - # self.name2spk_id={} - - # pitch stats - f0_stats_fn = f'{self.data_dir}/train_f0s_mean_std.npy' - if os.path.exists(f0_stats_fn): - hparams['f0_mean'], hparams['f0_std'] = self.f0_mean, self.f0_std = np.load(f0_stats_fn) - hparams['f0_mean'] = float(hparams['f0_mean']) - hparams['f0_std'] = float(hparams['f0_std']) - else: - hparams['f0_mean'], hparams['f0_std'] = self.f0_mean, self.f0_std = None, None - - if prefix == 'test': - if hparams['test_input_dir'] != '': - self.indexed_ds, self.sizes = self.load_test_inputs(hparams['test_input_dir']) - else: - if hparams['num_test_samples'] > 0: - self.avail_idxs = list(range(hparams['num_test_samples'])) + hparams['test_ids'] - self.sizes = [self.sizes[i] for i in self.avail_idxs] - - if hparams['pitch_type'] == 'cwt': - _, hparams['cwt_scales'] = get_lf0_cwt(np.ones(10)) - - def _get_item(self, index): - if hasattr(self, 'avail_idxs') and self.avail_idxs is not None: - index = self.avail_idxs[index] - if self.indexed_ds is None: - self.indexed_ds = IndexedDataset(f'{self.data_dir}/{self.prefix}') - return self.indexed_ds[index] - - def __getitem__(self, index): - hparams = self.hparams - item = self._get_item(index) - max_frames = hparams['max_frames'] - spec = torch.Tensor(item['mel'])[:max_frames] - energy = (spec.exp() ** 2).sum(-1).sqrt() - mel2ph = torch.LongTensor(item['mel2ph'])[:max_frames] if 'mel2ph' in item else None - f0, uv = norm_interp_f0(item["f0"][:max_frames], hparams) - phone = torch.LongTensor(item['phone'][:hparams['max_input_tokens']]) - pitch = torch.LongTensor(item.get("pitch"))[:max_frames] - # print(item.keys(), item['mel'].shape, spec.shape) - sample = { - "id": index, - "item_name": item['item_name'], - "text": item['txt'], - "txt_token": phone, - "mel": spec, - "pitch": pitch, - "energy": energy, - "f0": f0, - "uv": uv, - "mel2ph": mel2ph, - "mel_nonpadding": spec.abs().sum(-1) > 0, - } - if self.hparams['use_spk_embed']: - sample["spk_embed"] = 
torch.Tensor(item['spk_embed']) - if self.hparams['use_spk_id']: - sample["spk_id"] = item['spk_id'] - # sample['spk_id'] = 0 - # for key in self.name2spk_id.keys(): - # if key in item['item_name']: - # sample['spk_id'] = self.name2spk_id[key] - # break - if self.hparams['pitch_type'] == 'cwt': - cwt_spec = torch.Tensor(item['cwt_spec'])[:max_frames] - f0_mean = item.get('f0_mean', item.get('cwt_mean')) - f0_std = item.get('f0_std', item.get('cwt_std')) - sample.update({"cwt_spec": cwt_spec, "f0_mean": f0_mean, "f0_std": f0_std}) - elif self.hparams['pitch_type'] == 'ph': - f0_phlevel_sum = torch.zeros_like(phone).float().scatter_add(0, mel2ph - 1, f0) - f0_phlevel_num = torch.zeros_like(phone).float().scatter_add( - 0, mel2ph - 1, torch.ones_like(f0)).clamp_min(1) - sample["f0_ph"] = f0_phlevel_sum / f0_phlevel_num - return sample - - def collater(self, samples): - if len(samples) == 0: - return {} - id = torch.LongTensor([s['id'] for s in samples]) - item_names = [s['item_name'] for s in samples] - text = [s['text'] for s in samples] - txt_tokens = utils.collate_1d([s['txt_token'] for s in samples], 0) - f0 = utils.collate_1d([s['f0'] for s in samples], 0.0) - pitch = utils.collate_1d([s['pitch'] for s in samples]) - uv = utils.collate_1d([s['uv'] for s in samples]) - energy = utils.collate_1d([s['energy'] for s in samples], 0.0) - mel2ph = utils.collate_1d([s['mel2ph'] for s in samples], 0.0) \ - if samples[0]['mel2ph'] is not None else None - mels = utils.collate_2d([s['mel'] for s in samples], 0.0) - txt_lengths = torch.LongTensor([s['txt_token'].numel() for s in samples]) - mel_lengths = torch.LongTensor([s['mel'].shape[0] for s in samples]) - - batch = { - 'id': id, - 'item_name': item_names, - 'nsamples': len(samples), - 'text': text, - 'txt_tokens': txt_tokens, - 'txt_lengths': txt_lengths, - 'mels': mels, - 'mel_lengths': mel_lengths, - 'mel2ph': mel2ph, - 'energy': energy, - 'pitch': pitch, - 'f0': f0, - 'uv': uv, - } - - if self.hparams['use_spk_embed']: - spk_embed = torch.stack([s['spk_embed'] for s in samples]) - batch['spk_embed'] = spk_embed - if self.hparams['use_spk_id']: - spk_ids = torch.LongTensor([s['spk_id'] for s in samples]) - batch['spk_ids'] = spk_ids - if self.hparams['pitch_type'] == 'cwt': - cwt_spec = utils.collate_2d([s['cwt_spec'] for s in samples]) - f0_mean = torch.Tensor([s['f0_mean'] for s in samples]) - f0_std = torch.Tensor([s['f0_std'] for s in samples]) - batch.update({'cwt_spec': cwt_spec, 'f0_mean': f0_mean, 'f0_std': f0_std}) - elif self.hparams['pitch_type'] == 'ph': - batch['f0'] = utils.collate_1d([s['f0_ph'] for s in samples]) - - return batch - - def load_test_inputs(self, test_input_dir, spk_id=0): - inp_wav_paths = glob.glob(f'{test_input_dir}/*.wav') + glob.glob(f'{test_input_dir}/*.mp3') - sizes = [] - items = [] - - binarizer_cls = hparams.get("binarizer_cls", 'data_gen.tts.base_binarizerr.BaseBinarizer') - pkg = ".".join(binarizer_cls.split(".")[:-1]) - cls_name = binarizer_cls.split(".")[-1] - binarizer_cls = getattr(importlib.import_module(pkg), cls_name) - binarization_args = hparams['binarization_args'] - - for wav_fn in inp_wav_paths: - item_name = os.path.basename(wav_fn) - ph = txt = tg_fn = '' - wav_fn = wav_fn - encoder = None - item = binarizer_cls.process_item(item_name, ph, txt, tg_fn, wav_fn, spk_id, encoder, binarization_args) - items.append(item) - sizes.append(item['len']) - return items, sizes diff --git a/spaces/AP123/dreamgaussian/sh_utils.py b/spaces/AP123/dreamgaussian/sh_utils.py deleted file mode 100644 index 
bbca7d192aa3a7edf8c5b2d24dee535eac765785..0000000000000000000000000000000000000000 --- a/spaces/AP123/dreamgaussian/sh_utils.py +++ /dev/null @@ -1,118 +0,0 @@ -# Copyright 2021 The PlenOctree Authors. -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are met: -# -# 1. Redistributions of source code must retain the above copyright notice, -# this list of conditions and the following disclaimer. -# -# 2. Redistributions in binary form must reproduce the above copyright notice, -# this list of conditions and the following disclaimer in the documentation -# and/or other materials provided with the distribution. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE -# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. - -import torch - -C0 = 0.28209479177387814 -C1 = 0.4886025119029199 -C2 = [ - 1.0925484305920792, - -1.0925484305920792, - 0.31539156525252005, - -1.0925484305920792, - 0.5462742152960396 -] -C3 = [ - -0.5900435899266435, - 2.890611442640554, - -0.4570457994644658, - 0.3731763325901154, - -0.4570457994644658, - 1.445305721320277, - -0.5900435899266435 -] -C4 = [ - 2.5033429417967046, - -1.7701307697799304, - 0.9461746957575601, - -0.6690465435572892, - 0.10578554691520431, - -0.6690465435572892, - 0.47308734787878004, - -1.7701307697799304, - 0.6258357354491761, -] - - -def eval_sh(deg, sh, dirs): - """ - Evaluate spherical harmonics at unit directions - using hardcoded SH polynomials. - Works with torch/np/jnp. - ... Can be 0 or more batch dimensions. - Args: - deg: int SH deg. 
Currently, 0-3 supported - sh: jnp.ndarray SH coeffs [..., C, (deg + 1) ** 2] - dirs: jnp.ndarray unit directions [..., 3] - Returns: - [..., C] - """ - assert deg <= 4 and deg >= 0 - coeff = (deg + 1) ** 2 - assert sh.shape[-1] >= coeff - - result = C0 * sh[..., 0] - if deg > 0: - x, y, z = dirs[..., 0:1], dirs[..., 1:2], dirs[..., 2:3] - result = (result - - C1 * y * sh[..., 1] + - C1 * z * sh[..., 2] - - C1 * x * sh[..., 3]) - - if deg > 1: - xx, yy, zz = x * x, y * y, z * z - xy, yz, xz = x * y, y * z, x * z - result = (result + - C2[0] * xy * sh[..., 4] + - C2[1] * yz * sh[..., 5] + - C2[2] * (2.0 * zz - xx - yy) * sh[..., 6] + - C2[3] * xz * sh[..., 7] + - C2[4] * (xx - yy) * sh[..., 8]) - - if deg > 2: - result = (result + - C3[0] * y * (3 * xx - yy) * sh[..., 9] + - C3[1] * xy * z * sh[..., 10] + - C3[2] * y * (4 * zz - xx - yy)* sh[..., 11] + - C3[3] * z * (2 * zz - 3 * xx - 3 * yy) * sh[..., 12] + - C3[4] * x * (4 * zz - xx - yy) * sh[..., 13] + - C3[5] * z * (xx - yy) * sh[..., 14] + - C3[6] * x * (xx - 3 * yy) * sh[..., 15]) - - if deg > 3: - result = (result + C4[0] * xy * (xx - yy) * sh[..., 16] + - C4[1] * yz * (3 * xx - yy) * sh[..., 17] + - C4[2] * xy * (7 * zz - 1) * sh[..., 18] + - C4[3] * yz * (7 * zz - 3) * sh[..., 19] + - C4[4] * (zz * (35 * zz - 30) + 3) * sh[..., 20] + - C4[5] * xz * (7 * zz - 3) * sh[..., 21] + - C4[6] * (xx - yy) * (7 * zz - 1) * sh[..., 22] + - C4[7] * xz * (xx - 3 * yy) * sh[..., 23] + - C4[8] * (xx * (xx - 3 * yy) - yy * (3 * xx - yy)) * sh[..., 24]) - return result - -def RGB2SH(rgb): - return (rgb - 0.5) / C0 - -def SH2RGB(sh): - return sh * C0 + 0.5 \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dropdownlist/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dropdownlist/Factory.js deleted file mode 100644 index 9443aaf3102ae9b00989c411a1a95954bb63b779..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dropdownlist/Factory.js +++ /dev/null @@ -1,13 +0,0 @@ -import DropDownList from './DropDownList.js'; -import ObjectFactory from '../ObjectFactory.js'; -import SetValue from '../../../plugins/utils/object/SetValue.js'; - -ObjectFactory.register('dropDownList', function (config) { - var gameObject = new DropDownList(this.scene, config); - this.scene.add.existing(gameObject); - return gameObject; -}); - -SetValue(window, 'RexPlugins.UI.DropDownList', DropDownList); - -export default DropDownList; \ No newline at end of file diff --git a/spaces/AiMimicry/sovits-models/vdecoder/hifigan/models.py b/spaces/AiMimicry/sovits-models/vdecoder/hifigan/models.py deleted file mode 100644 index 9747301f350bb269e62601017fe4633ce271b27e..0000000000000000000000000000000000000000 --- a/spaces/AiMimicry/sovits-models/vdecoder/hifigan/models.py +++ /dev/null @@ -1,503 +0,0 @@ -import os -import json -from .env import AttrDict -import numpy as np -import torch -import torch.nn.functional as F -import torch.nn as nn -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from .utils import init_weights, get_padding - -LRELU_SLOPE = 0.1 - - -def load_model(model_path, device='cuda'): - config_file = os.path.join(os.path.split(model_path)[0], 'config.json') - with open(config_file) as f: - data = f.read() - - global h - json_config = json.loads(data) - h = AttrDict(json_config) - - generator = Generator(h).to(device) - - 
cp_dict = torch.load(model_path) - generator.load_state_dict(cp_dict['generator']) - generator.eval() - generator.remove_weight_norm() - del cp_dict - return generator, h - - -class ResBlock1(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.h = h - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - xt = c2(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.h = h - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -def padDiff(x): - return F.pad(F.pad(x, (0,0,-1,1), 'constant', 0) - x, (0,0,0,-1), 'constant', 0) - -class SineGen(torch.nn.Module): - """ Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__(self, samp_rate, harmonic_num=0, - sine_amp=0.1, noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - self.flag_for_pulse = flag_for_pulse - - def _f02uv(self, f0): - # generate uv signal - uv = (f0 > self.voiced_threshold).type(torch.float32) - return uv - - def _f02sine(self, f0_values): - """ f0_values: (batchsize, length, dim) - 
where dim indicates fundamental tone and overtones - """ - # convert to F0 in rad. The interger part n can be ignored - # because 2 * np.pi * n doesn't affect phase - rad_values = (f0_values / self.sampling_rate) % 1 - - # initial phase noise (no noise for fundamental component) - rand_ini = torch.rand(f0_values.shape[0], f0_values.shape[2], \ - device=f0_values.device) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - - # instantanouse phase sine[t] = sin(2*pi \sum_i=1 ^{t} rad) - if not self.flag_for_pulse: - # for normal case - - # To prevent torch.cumsum numerical overflow, - # it is necessary to add -1 whenever \sum_k=1^n rad_value_k > 1. - # Buffer tmp_over_one_idx indicates the time step to add -1. - # This will not change F0 of sine because (x-1) * 2*pi = x * 2*pi - tmp_over_one = torch.cumsum(rad_values, 1) % 1 - tmp_over_one_idx = (padDiff(tmp_over_one)) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - - sines = torch.sin(torch.cumsum(rad_values + cumsum_shift, dim=1) - * 2 * np.pi) - else: - # If necessary, make sure that the first time step of every - # voiced segments is sin(pi) or cos(0) - # This is used for pulse-train generation - - # identify the last time step in unvoiced segments - uv = self._f02uv(f0_values) - uv_1 = torch.roll(uv, shifts=-1, dims=1) - uv_1[:, -1, :] = 1 - u_loc = (uv < 1) * (uv_1 > 0) - - # get the instantanouse phase - tmp_cumsum = torch.cumsum(rad_values, dim=1) - # different batch needs to be processed differently - for idx in range(f0_values.shape[0]): - temp_sum = tmp_cumsum[idx, u_loc[idx, :, 0], :] - temp_sum[1:, :] = temp_sum[1:, :] - temp_sum[0:-1, :] - # stores the accumulation of i.phase within - # each voiced segments - tmp_cumsum[idx, :, :] = 0 - tmp_cumsum[idx, u_loc[idx, :, 0], :] = temp_sum - - # rad_values - tmp_cumsum: remove the accumulation of i.phase - # within the previous voiced segment. - i_phase = torch.cumsum(rad_values - tmp_cumsum, dim=1) - - # get the sines - sines = torch.cos(i_phase * 2 * np.pi) - return sines - - def forward(self, f0): - """ sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, - device=f0.device) - # fundamental component - fn = torch.multiply(f0, torch.FloatTensor([[range(1, self.harmonic_num + 2)]]).to(f0.device)) - - # generate sine waveforms - sine_waves = self._f02sine(fn) * self.sine_amp - - # generate uv signal - # uv = torch.ones(f0.shape) - # uv = uv * (f0 > self.voiced_threshold) - uv = self._f02uv(f0) - - # noise: for unvoiced should be similar to sine_amp - # std = self.sine_amp/3 -> max value ~ self.sine_amp - # . 
for voiced regions is self.noise_std - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - - # first: set the unvoiced part to 0 by uv - # then: additive noise - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """ SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__(self, sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - - # to produce sine waveforms - self.l_sin_gen = SineGen(sampling_rate, harmonic_num, - sine_amp, add_noise_std, voiced_threshod) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x): - """ - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - """ - # source for harmonic branch - sine_wavs, uv, _ = self.l_sin_gen(x) - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - - # source for noise branch, in the same shape as uv - noise = torch.randn_like(uv) * self.sine_amp / 3 - return sine_merge, noise, uv - - -class Generator(torch.nn.Module): - def __init__(self, h): - super(Generator, self).__init__() - self.h = h - - self.num_kernels = len(h["resblock_kernel_sizes"]) - self.num_upsamples = len(h["upsample_rates"]) - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(h["upsample_rates"])) - self.m_source = SourceModuleHnNSF( - sampling_rate=h["sampling_rate"], - harmonic_num=8) - self.noise_convs = nn.ModuleList() - self.conv_pre = weight_norm(Conv1d(h["inter_channels"], h["upsample_initial_channel"], 7, 1, padding=3)) - resblock = ResBlock1 if h["resblock"] == '1' else ResBlock2 - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(h["upsample_rates"], h["upsample_kernel_sizes"])): - c_cur = h["upsample_initial_channel"] // (2 ** (i + 1)) - self.ups.append(weight_norm( - ConvTranspose1d(h["upsample_initial_channel"] // (2 ** i), h["upsample_initial_channel"] // (2 ** (i + 1)), - k, u, padding=(k - u) // 2))) - if i + 1 < len(h["upsample_rates"]): # - stride_f0 = np.prod(h["upsample_rates"][i + 1:]) - self.noise_convs.append(Conv1d( - 1, c_cur, kernel_size=stride_f0 * 2, stride=stride_f0, padding=stride_f0 // 2)) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = h["upsample_initial_channel"] // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(h["resblock_kernel_sizes"], h["resblock_dilation_sizes"])): - self.resblocks.append(resblock(h, ch, k, d)) - - self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3)) - 
self.ups.apply(init_weights) - self.conv_post.apply(init_weights) - self.cond = nn.Conv1d(h['gin_channels'], h['upsample_initial_channel'], 1) - - def forward(self, x, f0, g=None): - # print(1,x.shape,f0.shape,f0[:, None].shape) - f0 = self.f0_upsamp(f0[:, None]).transpose(1, 2) # bs,n,t - # print(2,f0.shape) - har_source, noi_source, uv = self.m_source(f0) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - x = x + self.cond(g) - # print(124,x.shape,har_source.shape) - for i in range(self.num_upsamples): - x = F.leaky_relu(x, LRELU_SLOPE) - # print(3,x.shape) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - # print(4,x_source.shape,har_source.shape,x.shape) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, periods=None): - super(MultiPeriodDiscriminator, self).__init__() - self.periods = periods if periods is not None else [2, 3, 5, 7, 11] - self.discriminators = nn.ModuleList() - for period in self.periods: - self.discriminators.append(DiscriminatorP(period)) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 128, 15, 1, padding=7)), - norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)), - norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)), - norm_f(Conv1d(256, 512, 
41, 4, groups=16, padding=20)), - norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiScaleDiscriminator(torch.nn.Module): - def __init__(self): - super(MultiScaleDiscriminator, self).__init__() - self.discriminators = nn.ModuleList([ - DiscriminatorS(use_spectral_norm=True), - DiscriminatorS(), - DiscriminatorS(), - ]) - self.meanpools = nn.ModuleList([ - AvgPool1d(4, 2, padding=2), - AvgPool1d(4, 2, padding=2) - ]) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - if i != 0: - y = self.meanpools[i - 1](y) - y_hat = self.meanpools[i - 1](y_hat) - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - r_loss = torch.mean((1 - dr) ** 2) - g_loss = torch.mean(dg ** 2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - l = torch.mean((1 - dg) ** 2) - gen_losses.append(l) - loss += l - - return loss, gen_losses diff --git a/spaces/AlekseyCalvin/Make-Putin-Queer/app.py b/spaces/AlekseyCalvin/Make-Putin-Queer/app.py deleted file mode 100644 index 096af7a2fb6add27aa02ddacefb541e278f1d5f8..0000000000000000000000000000000000000000 --- a/spaces/AlekseyCalvin/Make-Putin-Queer/app.py +++ /dev/null @@ -1,16 +0,0 @@ - -import gradio as gr - -markdown=f''' - # - ### Use prompt "trp" or "trp person" or "trp person putin" in your prompt. - To generate custom images of a queer or/and trans alter-dimensional identities of the infamous reigning spook Vladimir Putin – use "trp" or "trp person" in your Stable Diffusion prompt during inference with this model. -Among other crucial, yet oft neglected, documentary content available in the public sphere ("Putin finally appears in drag", "Putin plays piano in Bowie wig", "femme Putin", etc...)... -This model was fine-tuned on numerous distinct variants of the classic "queer Putin" meme which had once spread like wildfiring rainbows in response to the 2018 intensification of the Russian government's ruthlessly inhumane crackdowns on LGBTQ+ persons and communities. - - It is running on cpu. Duplicate and change to GPU of choice for faster generations. 
- -''' - -gr.Interface.load("models/AlekseyCalvin/Make_Putin_Queer_Please").launch() - diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/intel_opts/textual_inversion/README.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/intel_opts/textual_inversion/README.md deleted file mode 100644 index 14e8b160fb1fb2de72cd37ddb4e4abcab83356fa..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/intel_opts/textual_inversion/README.md +++ /dev/null @@ -1,68 +0,0 @@ -## Textual Inversion fine-tuning example - -[Textual inversion](https://arxiv.org/abs/2208.01618) is a method to personalize text2image models like stable diffusion on your own images using just 3-5 examples. -The `textual_inversion.py` script shows how to implement the training procedure and adapt it for stable diffusion. - -## Training with Intel Extension for PyTorch - -Intel Extension for PyTorch provides the optimizations for faster training and inference on CPUs. You can leverage the training example "textual_inversion.py". Follow the [instructions](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion) to get the model and [dataset](https://huggingface.co/sd-concepts-library/dicoo2) before running the script. - -The example supports both single node and multi-node distributed training: - -### Single node training - -```bash -export MODEL_NAME="CompVis/stable-diffusion-v1-4" -export DATA_DIR="path-to-dir-containing-dicoo-images" - -python textual_inversion.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --train_data_dir=$DATA_DIR \ - --learnable_property="object" \ - --placeholder_token="" --initializer_token="toy" \ - --seed=7 \ - --resolution=512 \ - --train_batch_size=1 \ - --gradient_accumulation_steps=1 \ - --max_train_steps=3000 \ - --learning_rate=2.5e-03 --scale_lr \ - --output_dir="textual_inversion_dicoo" -``` - -Note: Bfloat16 is available on Intel Xeon Scalable Processors Cooper Lake or Sapphire Rapids. You may not get performance speedup without Bfloat16 support. - -### Multi-node distributed training - -Before running the scripts, make sure to install the library's training dependencies successfully: - -```bash -python -m pip install oneccl_bind_pt==1.13 -f https://developer.intel.com/ipex-whl-stable-cpu -``` - -```bash -export MODEL_NAME="CompVis/stable-diffusion-v1-4" -export DATA_DIR="path-to-dir-containing-dicoo-images" - -oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)") -source $oneccl_bindings_for_pytorch_path/env/setvars.sh - -python -m intel_extension_for_pytorch.cpu.launch --distributed \ - --hostfile hostfile --nnodes 2 --nproc_per_node 2 textual_inversion.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --train_data_dir=$DATA_DIR \ - --learnable_property="object" \ - --placeholder_token="" --initializer_token="toy" \ - --seed=7 \ - --resolution=512 \ - --train_batch_size=1 \ - --gradient_accumulation_steps=1 \ - --max_train_steps=750 \ - --learning_rate=2.5e-03 --scale_lr \ - --output_dir="textual_inversion_dicoo" -``` -The above is a simple distributed training usage on 2 nodes with 2 processes on each node. Add the right hostname or ip address in the "hostfile" and make sure these 2 nodes are reachable from each other. For more details, please refer to the [user guide](https://github.com/intel/torch-ccl). 
- - -### Reference - -We publish a [Medium blog](https://medium.com/intel-analytics-software/personalized-stable-diffusion-with-few-shot-fine-tuning-on-a-single-cpu-f01a3316b13) on how to create your own Stable Diffusion model on CPUs using textual inversion. Try it out now, if you have interests. diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/training_utils.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/training_utils.py deleted file mode 100644 index eaa9ed64554bf8830e35efd220a77bd2de207f18..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/training_utils.py +++ /dev/null @@ -1,314 +0,0 @@ -import contextlib -import copy -import random -from typing import Any, Dict, Iterable, Optional, Union - -import numpy as np -import torch - -from .utils import deprecate, is_transformers_available - - -if is_transformers_available(): - import transformers - - -def set_seed(seed: int): - """ - Args: - Helper function for reproducible behavior to set the seed in `random`, `numpy`, `torch`. - seed (`int`): The seed to set. - """ - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - # ^^ safe to call this function even if cuda is not available - - -# Adapted from torch-ema https://github.com/fadel/pytorch_ema/blob/master/torch_ema/ema.py#L14 -class EMAModel: - """ - Exponential Moving Average of models weights - """ - - def __init__( - self, - parameters: Iterable[torch.nn.Parameter], - decay: float = 0.9999, - min_decay: float = 0.0, - update_after_step: int = 0, - use_ema_warmup: bool = False, - inv_gamma: Union[float, int] = 1.0, - power: Union[float, int] = 2 / 3, - model_cls: Optional[Any] = None, - model_config: Dict[str, Any] = None, - **kwargs, - ): - """ - Args: - parameters (Iterable[torch.nn.Parameter]): The parameters to track. - decay (float): The decay factor for the exponential moving average. - min_decay (float): The minimum decay factor for the exponential moving average. - update_after_step (int): The number of steps to wait before starting to update the EMA weights. - use_ema_warmup (bool): Whether to use EMA warmup. - inv_gamma (float): - Inverse multiplicative factor of EMA warmup. Default: 1. Only used if `use_ema_warmup` is True. - power (float): Exponential factor of EMA warmup. Default: 2/3. Only used if `use_ema_warmup` is True. - device (Optional[Union[str, torch.device]]): The device to store the EMA weights on. If None, the EMA - weights will be stored on CPU. - - @crowsonkb's notes on EMA Warmup: - If gamma=1 and power=1, implements a simple average. gamma=1, power=2/3 are good values for models you plan - to train for a million or more steps (reaches decay factor 0.999 at 31.6K steps, 0.9999 at 1M steps), - gamma=1, power=3/4 for models you plan to train for less (reaches decay factor 0.999 at 10K steps, 0.9999 - at 215.4k steps). - """ - - if isinstance(parameters, torch.nn.Module): - deprecation_message = ( - "Passing a `torch.nn.Module` to `ExponentialMovingAverage` is deprecated. " - "Please pass the parameters of the module instead." 
- ) - deprecate( - "passing a `torch.nn.Module` to `ExponentialMovingAverage`", - "1.0.0", - deprecation_message, - standard_warn=False, - ) - parameters = parameters.parameters() - - # set use_ema_warmup to True if a torch.nn.Module is passed for backwards compatibility - use_ema_warmup = True - - if kwargs.get("max_value", None) is not None: - deprecation_message = "The `max_value` argument is deprecated. Please use `decay` instead." - deprecate("max_value", "1.0.0", deprecation_message, standard_warn=False) - decay = kwargs["max_value"] - - if kwargs.get("min_value", None) is not None: - deprecation_message = "The `min_value` argument is deprecated. Please use `min_decay` instead." - deprecate("min_value", "1.0.0", deprecation_message, standard_warn=False) - min_decay = kwargs["min_value"] - - parameters = list(parameters) - self.shadow_params = [p.clone().detach() for p in parameters] - - if kwargs.get("device", None) is not None: - deprecation_message = "The `device` argument is deprecated. Please use `to` instead." - deprecate("device", "1.0.0", deprecation_message, standard_warn=False) - self.to(device=kwargs["device"]) - - self.temp_stored_params = None - - self.decay = decay - self.min_decay = min_decay - self.update_after_step = update_after_step - self.use_ema_warmup = use_ema_warmup - self.inv_gamma = inv_gamma - self.power = power - self.optimization_step = 0 - self.cur_decay_value = None # set in `step()` - - self.model_cls = model_cls - self.model_config = model_config - - @classmethod - def from_pretrained(cls, path, model_cls) -> "EMAModel": - _, ema_kwargs = model_cls.load_config(path, return_unused_kwargs=True) - model = model_cls.from_pretrained(path) - - ema_model = cls(model.parameters(), model_cls=model_cls, model_config=model.config) - - ema_model.load_state_dict(ema_kwargs) - return ema_model - - def save_pretrained(self, path): - if self.model_cls is None: - raise ValueError("`save_pretrained` can only be used if `model_cls` was defined at __init__.") - - if self.model_config is None: - raise ValueError("`save_pretrained` can only be used if `model_config` was defined at __init__.") - - model = self.model_cls.from_config(self.model_config) - state_dict = self.state_dict() - state_dict.pop("shadow_params", None) - - model.register_to_config(**state_dict) - self.copy_to(model.parameters()) - model.save_pretrained(path) - - def get_decay(self, optimization_step: int) -> float: - """ - Compute the decay factor for the exponential moving average. - """ - step = max(0, optimization_step - self.update_after_step - 1) - - if step <= 0: - return 0.0 - - if self.use_ema_warmup: - cur_decay_value = 1 - (1 + step / self.inv_gamma) ** -self.power - else: - cur_decay_value = (1 + step) / (10 + step) - - cur_decay_value = min(cur_decay_value, self.decay) - # make sure decay is not smaller than min_decay - cur_decay_value = max(cur_decay_value, self.min_decay) - return cur_decay_value - - @torch.no_grad() - def step(self, parameters: Iterable[torch.nn.Parameter]): - if isinstance(parameters, torch.nn.Module): - deprecation_message = ( - "Passing a `torch.nn.Module` to `ExponentialMovingAverage.step` is deprecated. " - "Please pass the parameters of the module instead." 
- ) - deprecate( - "passing a `torch.nn.Module` to `ExponentialMovingAverage.step`", - "1.0.0", - deprecation_message, - standard_warn=False, - ) - parameters = parameters.parameters() - - parameters = list(parameters) - - self.optimization_step += 1 - - # Compute the decay factor for the exponential moving average. - decay = self.get_decay(self.optimization_step) - self.cur_decay_value = decay - one_minus_decay = 1 - decay - - context_manager = contextlib.nullcontext - if is_transformers_available() and transformers.deepspeed.is_deepspeed_zero3_enabled(): - import deepspeed - - for s_param, param in zip(self.shadow_params, parameters): - if is_transformers_available() and transformers.deepspeed.is_deepspeed_zero3_enabled(): - context_manager = deepspeed.zero.GatheredParameters(param, modifier_rank=None) - - with context_manager(): - if param.requires_grad: - s_param.sub_(one_minus_decay * (s_param - param)) - else: - s_param.copy_(param) - - def copy_to(self, parameters: Iterable[torch.nn.Parameter]) -> None: - """ - Copy current averaged parameters into given collection of parameters. - - Args: - parameters: Iterable of `torch.nn.Parameter`; the parameters to be - updated with the stored moving averages. If `None`, the parameters with which this - `ExponentialMovingAverage` was initialized will be used. - """ - parameters = list(parameters) - for s_param, param in zip(self.shadow_params, parameters): - param.data.copy_(s_param.to(param.device).data) - - def to(self, device=None, dtype=None) -> None: - r"""Move internal buffers of the ExponentialMovingAverage to `device`. - - Args: - device: like `device` argument to `torch.Tensor.to` - """ - # .to() on the tensors handles None correctly - self.shadow_params = [ - p.to(device=device, dtype=dtype) if p.is_floating_point() else p.to(device=device) - for p in self.shadow_params - ] - - def state_dict(self) -> dict: - r""" - Returns the state of the ExponentialMovingAverage as a dict. This method is used by accelerate during - checkpointing to save the ema state dict. - """ - # Following PyTorch conventions, references to tensors are returned: - # "returns a reference to the state and not its copy!" - - # https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict - return { - "decay": self.decay, - "min_decay": self.min_decay, - "optimization_step": self.optimization_step, - "update_after_step": self.update_after_step, - "use_ema_warmup": self.use_ema_warmup, - "inv_gamma": self.inv_gamma, - "power": self.power, - "shadow_params": self.shadow_params, - } - - def store(self, parameters: Iterable[torch.nn.Parameter]) -> None: - r""" - Args: - Save the current parameters for restoring later. - parameters: Iterable of `torch.nn.Parameter`; the parameters to be - temporarily stored. - """ - self.temp_stored_params = [param.detach().cpu().clone() for param in parameters] - - def restore(self, parameters: Iterable[torch.nn.Parameter]) -> None: - r""" - Args: - Restore the parameters stored with the `store` method. Useful to validate the model with EMA parameters without: - affecting the original optimization process. Store the parameters before the `copy_to()` method. After - validation (or model saving), use this to restore the former parameters. - parameters: Iterable of `torch.nn.Parameter`; the parameters to be - updated with the stored parameters. If `None`, the parameters with which this - `ExponentialMovingAverage` was initialized will be used. 
- """ - if self.temp_stored_params is None: - raise RuntimeError("This ExponentialMovingAverage has no `store()`ed weights " "to `restore()`") - for c_param, param in zip(self.temp_stored_params, parameters): - param.data.copy_(c_param.data) - - # Better memory-wise. - self.temp_stored_params = None - - def load_state_dict(self, state_dict: dict) -> None: - r""" - Args: - Loads the ExponentialMovingAverage state. This method is used by accelerate during checkpointing to save the - ema state dict. - state_dict (dict): EMA state. Should be an object returned - from a call to :meth:`state_dict`. - """ - # deepcopy, to be consistent with module API - state_dict = copy.deepcopy(state_dict) - - self.decay = state_dict.get("decay", self.decay) - if self.decay < 0.0 or self.decay > 1.0: - raise ValueError("Decay must be between 0 and 1") - - self.min_decay = state_dict.get("min_decay", self.min_decay) - if not isinstance(self.min_decay, float): - raise ValueError("Invalid min_decay") - - self.optimization_step = state_dict.get("optimization_step", self.optimization_step) - if not isinstance(self.optimization_step, int): - raise ValueError("Invalid optimization_step") - - self.update_after_step = state_dict.get("update_after_step", self.update_after_step) - if not isinstance(self.update_after_step, int): - raise ValueError("Invalid update_after_step") - - self.use_ema_warmup = state_dict.get("use_ema_warmup", self.use_ema_warmup) - if not isinstance(self.use_ema_warmup, bool): - raise ValueError("Invalid use_ema_warmup") - - self.inv_gamma = state_dict.get("inv_gamma", self.inv_gamma) - if not isinstance(self.inv_gamma, (float, int)): - raise ValueError("Invalid inv_gamma") - - self.power = state_dict.get("power", self.power) - if not isinstance(self.power, (float, int)): - raise ValueError("Invalid power") - - shadow_params = state_dict.get("shadow_params", None) - if shadow_params is not None: - self.shadow_params = shadow_params - if not isinstance(self.shadow_params, list): - raise ValueError("shadow_params must be a list") - if not all(isinstance(p, torch.Tensor) for p in self.shadow_params): - raise ValueError("shadow_params must all be Tensors") diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r18-d8_769x769_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r18-d8_769x769_80k_cityscapes.py deleted file mode 100644 index a990c076536ad9455a9203f5b6a60157f2f2f99f..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r18-d8_769x769_80k_cityscapes.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = './deeplabv3_r50-d8_769x769_80k_cityscapes.py' -model = dict( - pretrained='open-mmlab://resnet18_v1c', - backbone=dict(depth=18), - decode_head=dict( - in_channels=512, - channels=128, - ), - auxiliary_head=dict(in_channels=256, channels=64)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr48_512x1024_160k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr48_512x1024_160k_cityscapes.py deleted file mode 100644 index c094391b1dfcef2fa6278f0c181fb50c303f7a4c..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr48_512x1024_160k_cityscapes.py +++ /dev/null @@ -1,39 +0,0 @@ -_base_ = './ocrnet_hr18_512x1024_160k_cityscapes.py' -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - pretrained='open-mmlab://msra/hrnetv2_w48', - 
backbone=dict( - extra=dict( - stage2=dict(num_channels=(48, 96)), - stage3=dict(num_channels=(48, 96, 192)), - stage4=dict(num_channels=(48, 96, 192, 384)))), - decode_head=[ - dict( - type='FCNHead', - in_channels=[48, 96, 192, 384], - channels=sum([48, 96, 192, 384]), - input_transform='resize_concat', - in_index=(0, 1, 2, 3), - kernel_size=1, - num_convs=1, - norm_cfg=norm_cfg, - concat_input=False, - dropout_ratio=-1, - num_classes=19, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - dict( - type='OCRHead', - in_channels=[48, 96, 192, 384], - channels=512, - ocr_channels=256, - input_transform='resize_concat', - in_index=(0, 1, 2, 3), - norm_cfg=norm_cfg, - dropout_ratio=-1, - num_classes=19, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)) - ]) diff --git a/spaces/AngoHF/ANGO-Leaderboard/components/submit.py b/spaces/AngoHF/ANGO-Leaderboard/components/submit.py deleted file mode 100644 index 9064a2e9e0b650b755ae4f803c7581c3b46cd909..0000000000000000000000000000000000000000 --- a/spaces/AngoHF/ANGO-Leaderboard/components/submit.py +++ /dev/null @@ -1,15 +0,0 @@ -import os - -import gradio as gr - -from assets.content import SUBMIT_TEXT, TEST_SCRIPT_TEXT, TEST_SET_TEXT -from assets.path import SEASON - - -def create_submit(): - test_box = gr.Markdown(value=TEST_SET_TEXT, scale=4) - test_file = gr.File(value=os.path.join("results", SEASON["latest"], "test_dataset.json"), - label="Test Set", scale=1) - script_box = gr.Markdown(value=TEST_SCRIPT_TEXT, scale=4) - script_button = gr.File(value=os.path.join("assets/evaluation.py"), label="Test Script", scale=1) - gr.Markdown(SUBMIT_TEXT) diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/completions.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/completions.py deleted file mode 100644 index 40d96c1f0cf0a2d72cd5beb7f957a0918f06812c..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/completions.py +++ /dev/null @@ -1,637 +0,0 @@ -import time - -import tiktoken -import torch -import torch.nn.functional as F -import yaml -from extensions.openai.defaults import clamp, default, get_default_req_params -from extensions.openai.errors import InvalidRequestError -from extensions.openai.utils import debug_msg, end_line -from modules import shared -from modules.text_generation import decode, encode, generate_reply -from transformers import LogitsProcessor, LogitsProcessorList - - -# Thanks to @Cypherfox [Cypherfoxy] for the logits code, blame to @matatonic -class LogitsBiasProcessor(LogitsProcessor): - def __init__(self, logit_bias={}): - self.logit_bias = logit_bias - if self.logit_bias: - self.keys = list([int(key) for key in self.logit_bias.keys()]) - values = [self.logit_bias[str(key)] for key in self.keys] - self.values = torch.tensor(values, dtype=torch.float, device=shared.model.device) - debug_msg(f"{self})") - - def __call__(self, input_ids: torch.LongTensor, logits: torch.FloatTensor) -> torch.FloatTensor: - if self.logit_bias: - debug_msg(logits[0, self.keys], " + ", self.values) - logits[0, self.keys] += self.values - debug_msg(" --> ", logits[0, self.keys]) - debug_msg(" max/min ", float(torch.max(logits[0])), float(torch.min(logits[0]))) - return logits - - def __repr__(self): - return f"<{self.__class__.__name__}(logit_bias={self.logit_bias})>" - - -class LogprobProcessor(LogitsProcessor): 
- def __init__(self, logprobs=None): - self.logprobs = logprobs - self.token_alternatives = {} - - def __call__(self, input_ids: torch.LongTensor, logits: torch.FloatTensor) -> torch.FloatTensor: - if self.logprobs is not None: # 0-5 - log_e_probabilities = F.log_softmax(logits, dim=1) - top_values, top_indices = torch.topk(log_e_probabilities, k=self.logprobs + 1) - top_tokens = [decode(tok) for tok in top_indices[0]] - top_probs = [float(x) for x in top_values[0]] - self.token_alternatives = dict(zip(top_tokens, top_probs)) - debug_msg(repr(self)) - return logits - - def __repr__(self): - return f"<{self.__class__.__name__}(logprobs={self.logprobs}, token_alternatives={self.token_alternatives})>" - - -def convert_logprobs_to_tiktoken(model, logprobs): - # more problems than it's worth. - # try: - # encoder = tiktoken.encoding_for_model(model) - # # just pick the first one if it encodes to multiple tokens... 99.9% not required and maybe worse overall. - # return dict([(encoder.decode([encoder.encode(token)[0]]), prob) for token, prob in logprobs.items()]) - # except KeyError: - # # assume native tokens if we can't find the tokenizer - # return logprobs - - return logprobs - - -def marshal_common_params(body): - # Request Parameters - # Try to use openai defaults or map them to something with the same intent - - req_params = get_default_req_params() - - # Common request parameters - req_params['truncation_length'] = shared.settings['truncation_length'] - req_params['add_bos_token'] = shared.settings.get('add_bos_token', req_params['add_bos_token']) - req_params['seed'] = shared.settings.get('seed', req_params['seed']) - req_params['custom_stopping_strings'] = shared.settings['custom_stopping_strings'] - - # OpenAI API Parameters - # model - ignored for now, TODO: When we can reliably load a model or lora from a name only change this - req_params['requested_model'] = body.get('model', shared.model_name) - - req_params['suffix'] = default(body, 'suffix', req_params['suffix']) - req_params['temperature'] = clamp(default(body, 'temperature', req_params['temperature']), 0.01, 1.99) # fixup absolute 0.0/2.0 - req_params['top_p'] = clamp(default(body, 'top_p', req_params['top_p']), 0.01, 1.0) - n = default(body, 'n', 1) - if n != 1: - raise InvalidRequestError(message="Only n = 1 is supported.", param='n') - - if 'stop' in body: # str or array, max len 4 (ignored) - if isinstance(body['stop'], str): - req_params['stopping_strings'] = [body['stop']] # non-standard parameter - elif isinstance(body['stop'], list): - req_params['stopping_strings'] = body['stop'] - - # presence_penalty - ignored - # frequency_penalty - ignored - - # pass through unofficial params - req_params['repetition_penalty'] = default(body, 'repetition_penalty', req_params['repetition_penalty']) - req_params['encoder_repetition_penalty'] = default(body, 'encoder_repetition_penalty', req_params['encoder_repetition_penalty']) - - # user - ignored - - logits_processor = [] - logit_bias = body.get('logit_bias', None) - if logit_bias: # {str: float, ...} - # XXX convert tokens from tiktoken based on requested model - # Ex.: 'logit_bias': {'1129': 100, '11442': 100, '16243': 100} - try: - encoder = tiktoken.encoding_for_model(req_params['requested_model']) - new_logit_bias = {} - for logit, bias in logit_bias.items(): - for x in encode(encoder.decode([int(logit)]), add_special_tokens=False)[0]: - if int(x) in [0, 1, 2, 29871]: # XXX LLAMA tokens - continue - new_logit_bias[str(int(x))] = bias - debug_msg('logit_bias_map', logit_bias, 
'->', new_logit_bias) - logit_bias = new_logit_bias - except KeyError: - pass # assume native tokens if we can't find the tokenizer - - logits_processor = [LogitsBiasProcessor(logit_bias)] - - logprobs = None # coming to chat eventually - if 'logprobs' in body: - logprobs = default(body, 'logprobs', 0) # maybe cap at topk? don't clamp 0-5. - req_params['logprob_proc'] = LogprobProcessor(logprobs) - logits_processor.extend([req_params['logprob_proc']]) - else: - logprobs = None - - if logits_processor: # requires logits_processor support - req_params['logits_processor'] = LogitsProcessorList(logits_processor) - - return req_params - - -def messages_to_prompt(body: dict, req_params: dict, max_tokens): - # functions - if body.get('functions', []): # chat only - raise InvalidRequestError(message="functions is not supported.", param='functions') - if body.get('function_call', ''): # chat only, 'none', 'auto', {'name': 'func'} - raise InvalidRequestError(message="function_call is not supported.", param='function_call') - - if 'messages' not in body: - raise InvalidRequestError(message="messages is required", param='messages') - - messages = body['messages'] - - role_formats = { - 'user': 'User: {message}\n', - 'assistant': 'Assistant: {message}\n', - 'system': '{message}', - 'context': 'You are a helpful assistant. Answer as concisely as possible.\nUser: I want your assistance.\nAssistant: Sure! What can I do for you?', - 'prompt': 'Assistant:', - } - - if 'stopping_strings' not in req_params: - req_params['stopping_strings'] = [] - - # Instruct models can be much better - if shared.settings['instruction_template']: - try: - instruct = yaml.safe_load(open(f"instruction-templates/{shared.settings['instruction_template']}.yaml", 'r')) - - template = instruct['turn_template'] - system_message_template = "{message}" - system_message_default = instruct.get('context', '') # can be missing - bot_start = template.find('<|bot|>') # So far, 100% of instruction templates have this token - user_message_template = template[:bot_start].replace('<|user-message|>', '{message}').replace('<|user|>', instruct.get('user', '')) - bot_message_template = template[bot_start:].replace('<|bot-message|>', '{message}').replace('<|bot|>', instruct.get('bot', '')) - bot_prompt = bot_message_template[:bot_message_template.find('{message}')].rstrip(' ') - - role_formats = { - 'user': user_message_template, - 'assistant': bot_message_template, - 'system': system_message_template, - 'context': system_message_default, - 'prompt': bot_prompt, - } - - if 'Alpaca' in shared.settings['instruction_template']: - req_params['stopping_strings'].extend(['\n###']) - elif instruct['user']: # WizardLM and some others have no user prompt. - req_params['stopping_strings'].extend(['\n' + instruct['user'], instruct['user']]) - - debug_msg(f"Loaded instruction role format: {shared.settings['instruction_template']}") - - except Exception as e: - req_params['stopping_strings'].extend(['\nUser:', 'User:']) # XXX User: prompt here also - - print(f"Exception: When loading instruction-templates/{shared.settings['instruction_template']}.yaml: {repr(e)}") - print("Warning: Loaded default instruction-following template for model.") - - else: - req_params['stopping_strings'].extend(['\nUser:', 'User:']) # XXX User: prompt here also - print("Warning: Loaded default instruction-following template for model.") - - system_msgs = [] - chat_msgs = [] - - # You are ChatGPT, a large language model trained by OpenAI. Answer as concisely as possible. 
Knowledge cutoff: {knowledge_cutoff} Current date: {current_date} - context_msg = role_formats['system'].format(message=role_formats['context']) if role_formats['context'] else '' - context_msg = end_line(context_msg) - - # Maybe they sent both? This is not documented in the API, but some clients seem to do this. - if 'prompt' in body: - context_msg = end_line(role_formats['system'].format(message=body['prompt'])) + context_msg - - for m in messages: - if 'role' not in m: - raise InvalidRequestError(message="messages: missing role", param='messages') - if 'content' not in m: - raise InvalidRequestError(message="messages: missing content", param='messages') - - role = m['role'] - content = m['content'] - # name = m.get('name', None) - # function_call = m.get('function_call', None) # user name or function name with output in content - msg = role_formats[role].format(message=content) - if role == 'system': - system_msgs.extend([msg]) - elif role == 'function': - raise InvalidRequestError(message="role: function is not supported.", param='messages') - else: - chat_msgs.extend([msg]) - - system_msg = '\n'.join(system_msgs) - system_msg = end_line(system_msg) - - prompt = system_msg + context_msg + ''.join(chat_msgs) + role_formats['prompt'] - - token_count = len(encode(prompt)[0]) - - if token_count >= req_params['truncation_length']: - err_msg = f"This model maximum context length is {req_params['truncation_length']} tokens. However, your messages resulted in over {token_count} tokens." - raise InvalidRequestError(message=err_msg, param='messages') - - if max_tokens > 0 and token_count + max_tokens > req_params['truncation_length']: - err_msg = f"This model maximum context length is {req_params['truncation_length']} tokens. However, your messages resulted in over {token_count} tokens and max_tokens is {max_tokens}." - print(f"Warning: ${err_msg}") - # raise InvalidRequestError(message=err_msg, params='max_tokens') - - return prompt, token_count - - -def chat_completions(body: dict, is_legacy: bool = False) -> dict: - # Chat Completions - object_type = 'chat.completions' - created_time = int(time.time()) - cmpl_id = "chatcmpl-%d" % (int(time.time() * 1000000000)) - resp_list = 'data' if is_legacy else 'choices' - - # common params - req_params = marshal_common_params(body) - req_params['stream'] = False - requested_model = req_params.pop('requested_model') - logprob_proc = req_params.pop('logprob_proc', None) - req_params['top_k'] = 20 # There is no best_of/top_k param for chat, but it is much improved with a higher top_k. 
- - # chat default max_tokens is 'inf', but also flexible - max_tokens = 0 - max_tokens_str = 'length' if is_legacy else 'max_tokens' - if max_tokens_str in body: - max_tokens = default(body, max_tokens_str, req_params['truncation_length']) - req_params['max_new_tokens'] = max_tokens - else: - req_params['max_new_tokens'] = req_params['truncation_length'] - - # format the prompt from messages - prompt, token_count = messages_to_prompt(body, req_params, max_tokens) # updates req_params['stopping_strings'] - - # set real max, avoid deeper errors - if req_params['max_new_tokens'] + token_count >= req_params['truncation_length']: - req_params['max_new_tokens'] = req_params['truncation_length'] - token_count - - stopping_strings = req_params.pop('stopping_strings', []) - - # generate reply ####################################### - debug_msg({'prompt': prompt, 'req_params': req_params}) - generator = generate_reply(prompt, req_params, stopping_strings=stopping_strings, is_chat=False) - - answer = '' - for a in generator: - answer = a - - # strip extra leading space off new generated content - if answer and answer[0] == ' ': - answer = answer[1:] - - completion_token_count = len(encode(answer)[0]) - stop_reason = "stop" - if token_count + completion_token_count >= req_params['truncation_length'] or completion_token_count >= req_params['max_new_tokens']: - stop_reason = "length" - - resp = { - "id": cmpl_id, - "object": object_type, - "created": created_time, - "model": shared.model_name, # TODO: add Lora info? - resp_list: [{ - "index": 0, - "finish_reason": stop_reason, - "message": {"role": "assistant", "content": answer} - }], - "usage": { - "prompt_tokens": token_count, - "completion_tokens": completion_token_count, - "total_tokens": token_count + completion_token_count - } - } - if logprob_proc: # not official for chat yet - top_logprobs = convert_logprobs_to_tiktoken(model=requested_model, logprobs=logprob_proc.token_alternatives) - resp[resp_list][0]["logprobs"] = {'top_logprobs': [top_logprobs]} - # else: - # resp[resp_list][0]["logprobs"] = None - - return resp - - -# generator -def stream_chat_completions(body: dict, is_legacy: bool = False): - - # Chat Completions - stream_object_type = 'chat.completions.chunk' - created_time = int(time.time()) - cmpl_id = "chatcmpl-%d" % (int(time.time() * 1000000000)) - resp_list = 'data' if is_legacy else 'choices' - - # common params - req_params = marshal_common_params(body) - req_params['stream'] = True - requested_model = req_params.pop('requested_model') - logprob_proc = req_params.pop('logprob_proc', None) - req_params['top_k'] = 20 # There is no best_of/top_k param for chat, but it is much improved with a higher top_k. 
- - # chat default max_tokens is 'inf', but also flexible - max_tokens = 0 - max_tokens_str = 'length' if is_legacy else 'max_tokens' - if max_tokens_str in body: - max_tokens = default(body, max_tokens_str, req_params['truncation_length']) - req_params['max_new_tokens'] = max_tokens - else: - req_params['max_new_tokens'] = req_params['truncation_length'] - - # format the prompt from messages - prompt, token_count = messages_to_prompt(body, req_params, max_tokens) # updates req_params['stopping_strings'] - - # set real max, avoid deeper errors - if req_params['max_new_tokens'] + token_count >= req_params['truncation_length']: - req_params['max_new_tokens'] = req_params['truncation_length'] - token_count - - def chat_streaming_chunk(content): - # begin streaming - chunk = { - "id": cmpl_id, - "object": stream_object_type, - "created": created_time, - "model": shared.model_name, - resp_list: [{ - "index": 0, - "finish_reason": None, - # So yeah... do both methods? delta and messages. - "message": {'role': 'assistant', 'content': content}, - "delta": {'role': 'assistant', 'content': content}, - }], - } - - if logprob_proc: # not official for chat yet - top_logprobs = convert_logprobs_to_tiktoken(model=requested_model, logprobs=logprob_proc.token_alternatives) - chunk[resp_list][0]["logprobs"] = {'top_logprobs': [top_logprobs]} - # else: - # chunk[resp_list][0]["logprobs"] = None - return chunk - - yield chat_streaming_chunk('') - - # generate reply ####################################### - debug_msg({'prompt': prompt, 'req_params': req_params}) - - stopping_strings = req_params.pop('stopping_strings', []) - - generator = generate_reply(prompt, req_params, stopping_strings=stopping_strings, is_chat=False) - - answer = '' - seen_content = '' - completion_token_count = 0 - - for a in generator: - answer = a - - len_seen = len(seen_content) - new_content = answer[len_seen:] - - if not new_content or chr(0xfffd) in new_content: # partial unicode character, don't send it yet. - continue - - seen_content = answer - - # strip extra leading space off new generated content - if len_seen == 0 and new_content[0] == ' ': - new_content = new_content[1:] - - chunk = chat_streaming_chunk(new_content) - - yield chunk - - # to get the correct token_count, strip leading space if present - if answer and answer[0] == ' ': - answer = answer[1:] - - completion_token_count = len(encode(answer)[0]) - stop_reason = "stop" - if token_count + completion_token_count >= req_params['truncation_length'] or completion_token_count >= req_params['max_new_tokens']: - stop_reason = "length" - - chunk = chat_streaming_chunk('') - chunk[resp_list][0]['finish_reason'] = stop_reason - chunk['usage'] = { - "prompt_tokens": token_count, - "completion_tokens": completion_token_count, - "total_tokens": token_count + completion_token_count - } - - yield chunk - - -def completions(body: dict, is_legacy: bool = False): - # Legacy - # Text Completions - object_type = 'text_completion' - created_time = int(time.time()) - cmpl_id = "conv-%d" % (int(time.time() * 1000000000)) - resp_list = 'data' if is_legacy else 'choices' - - # ... encoded as a string, array of strings, array of tokens, or array of token arrays. 
- prompt_str = 'context' if is_legacy else 'prompt' - if prompt_str not in body: - raise InvalidRequestError("Missing required input", param=prompt_str) - - prompt_arg = body[prompt_str] - if isinstance(prompt_arg, str) or (isinstance(prompt_arg, list) and isinstance(prompt_arg[0], int)): - prompt_arg = [prompt_arg] - - # common params - req_params = marshal_common_params(body) - req_params['stream'] = False - max_tokens_str = 'length' if is_legacy else 'max_tokens' - max_tokens = default(body, max_tokens_str, req_params['max_new_tokens']) - req_params['max_new_tokens'] = max_tokens - requested_model = req_params.pop('requested_model') - logprob_proc = req_params.pop('logprob_proc', None) - stopping_strings = req_params.pop('stopping_strings', []) - # req_params['suffix'] = default(body, 'suffix', req_params['suffix']) - req_params['echo'] = default(body, 'echo', req_params['echo']) - req_params['top_k'] = default(body, 'best_of', req_params['top_k']) - - resp_list_data = [] - total_completion_token_count = 0 - total_prompt_token_count = 0 - - for idx, prompt in enumerate(prompt_arg, start=0): - if isinstance(prompt[0], int): - # token lists - if requested_model == shared.model_name: - prompt = decode(prompt)[0] - else: - try: - encoder = tiktoken.encoding_for_model(requested_model) - prompt = encoder.decode(prompt) - except KeyError: - prompt = decode(prompt)[0] - - token_count = len(encode(prompt)[0]) - total_prompt_token_count += token_count - - if token_count + max_tokens > req_params['truncation_length']: - err_msg = f"The token count of your prompt ({token_count}) plus max_tokens ({max_tokens}) cannot exceed the model's context length ({req_params['truncation_length']})." - # print(f"Warning: ${err_msg}") - raise InvalidRequestError(message=err_msg, param=max_tokens_str) - - # generate reply ####################################### - debug_msg({'prompt': prompt, 'req_params': req_params}) - generator = generate_reply(prompt, req_params, stopping_strings=stopping_strings, is_chat=False) - answer = '' - - for a in generator: - answer = a - - # strip extra leading space off new generated content - if answer and answer[0] == ' ': - answer = answer[1:] - - completion_token_count = len(encode(answer)[0]) - total_completion_token_count += completion_token_count - stop_reason = "stop" - if token_count + completion_token_count >= req_params['truncation_length'] or completion_token_count >= max_tokens: - stop_reason = "length" - - respi = { - "index": idx, - "finish_reason": stop_reason, - "text": answer, - "logprobs": {'top_logprobs': [logprob_proc.token_alternatives]} if logprob_proc else None, - } - - resp_list_data.extend([respi]) - - resp = { - "id": cmpl_id, - "object": object_type, - "created": created_time, - "model": shared.model_name, # TODO: add Lora info? - resp_list: resp_list_data, - "usage": { - "prompt_tokens": total_prompt_token_count, - "completion_tokens": total_completion_token_count, - "total_tokens": total_prompt_token_count + total_completion_token_count - } - } - - return resp - - -# generator -def stream_completions(body: dict, is_legacy: bool = False): - # Legacy - # Text Completions - # object_type = 'text_completion' - stream_object_type = 'text_completion.chunk' - created_time = int(time.time()) - cmpl_id = "conv-%d" % (int(time.time() * 1000000000)) - resp_list = 'data' if is_legacy else 'choices' - - # ... encoded as a string, array of strings, array of tokens, or array of token arrays. 
- prompt_str = 'context' if is_legacy else 'prompt' - if prompt_str not in body: - raise InvalidRequestError("Missing required input", param=prompt_str) - - prompt = body[prompt_str] - req_params = marshal_common_params(body) - requested_model = req_params.pop('requested_model') - if isinstance(prompt, list): - if prompt and isinstance(prompt[0], int): - try: - encoder = tiktoken.encoding_for_model(requested_model) - prompt = encoder.decode(prompt) - except KeyError: - prompt = decode(prompt)[0] - else: - raise InvalidRequestError(message="API Batched generation not yet supported.", param=prompt_str) - - # common params - req_params['stream'] = True - max_tokens_str = 'length' if is_legacy else 'max_tokens' - max_tokens = default(body, max_tokens_str, req_params['max_new_tokens']) - req_params['max_new_tokens'] = max_tokens - logprob_proc = req_params.pop('logprob_proc', None) - stopping_strings = req_params.pop('stopping_strings', []) - # req_params['suffix'] = default(body, 'suffix', req_params['suffix']) - req_params['echo'] = default(body, 'echo', req_params['echo']) - req_params['top_k'] = default(body, 'best_of', req_params['top_k']) - - token_count = len(encode(prompt)[0]) - - if token_count + max_tokens > req_params['truncation_length']: - err_msg = f"The token count of your prompt ({token_count}) plus max_tokens ({max_tokens}) cannot exceed the model's context length ({req_params['truncation_length']})." - # print(f"Warning: ${err_msg}") - raise InvalidRequestError(message=err_msg, param=max_tokens_str) - - def text_streaming_chunk(content): - # begin streaming - chunk = { - "id": cmpl_id, - "object": stream_object_type, - "created": created_time, - "model": shared.model_name, - resp_list: [{ - "index": 0, - "finish_reason": None, - "text": content, - "logprobs": {'top_logprobs': [logprob_proc.token_alternatives]} if logprob_proc else None, - }], - } - - return chunk - - yield text_streaming_chunk('') - - # generate reply ####################################### - debug_msg({'prompt': prompt, 'req_params': req_params}) - generator = generate_reply(prompt, req_params, stopping_strings=stopping_strings, is_chat=False) - - answer = '' - seen_content = '' - completion_token_count = 0 - - for a in generator: - answer = a - - len_seen = len(seen_content) - new_content = answer[len_seen:] - - if not new_content or chr(0xfffd) in new_content: # partial unicode character, don't send it yet. 
- continue - - seen_content = answer - - # strip extra leading space off new generated content - if len_seen == 0 and new_content[0] == ' ': - new_content = new_content[1:] - - chunk = text_streaming_chunk(new_content) - - yield chunk - - # to get the correct count, we strip the leading space if present - if answer and answer[0] == ' ': - answer = answer[1:] - - completion_token_count = len(encode(answer)[0]) - stop_reason = "stop" - if token_count + completion_token_count >= req_params['truncation_length'] or completion_token_count >= max_tokens: - stop_reason = "length" - - chunk = text_streaming_chunk('') - chunk[resp_list][0]["finish_reason"] = stop_reason - chunk["usage"] = { - "prompt_tokens": token_count, - "completion_tokens": completion_token_count, - "total_tokens": token_count + completion_token_count - } - - yield chunk diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/callbacks.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/callbacks.py deleted file mode 100644 index e29e397d3040d7b4b3205069d490b7eed31620f7..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/callbacks.py +++ /dev/null @@ -1,95 +0,0 @@ -import gc -import traceback -from queue import Queue -from threading import Thread - -import torch -import transformers - -import modules.shared as shared - - -class _StopEverythingStoppingCriteria(transformers.StoppingCriteria): - def __init__(self): - transformers.StoppingCriteria.__init__(self) - - def __call__(self, input_ids: torch.LongTensor, _scores: torch.FloatTensor) -> bool: - return shared.stop_everything - - -class Stream(transformers.StoppingCriteria): - def __init__(self, callback_func=None): - self.callback_func = callback_func - - def __call__(self, input_ids, scores) -> bool: - if self.callback_func is not None: - self.callback_func(input_ids[0]) - - return False - - -class Iteratorize: - - """ - Transforms a function that takes a callback - into a lazy iterator (generator). 
- - Adapted from: https://stackoverflow.com/a/9969000 - """ - - def __init__(self, func, args=None, kwargs=None, callback=None): - self.mfunc = func - self.c_callback = callback - self.q = Queue() - self.sentinel = object() - self.args = args or [] - self.kwargs = kwargs or {} - self.stop_now = False - - def _callback(val): - if self.stop_now or shared.stop_everything: - raise ValueError - self.q.put(val) - - def gentask(): - try: - ret = self.mfunc(callback=_callback, *args, **self.kwargs) - except ValueError: - pass - except: - traceback.print_exc() - pass - - clear_torch_cache() - self.q.put(self.sentinel) - if self.c_callback: - self.c_callback(ret) - - self.thread = Thread(target=gentask) - self.thread.start() - - def __iter__(self): - return self - - def __next__(self): - obj = self.q.get(True, None) - if obj is self.sentinel: - raise StopIteration - else: - return obj - - def __del__(self): - clear_torch_cache() - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_val, exc_tb): - self.stop_now = True - clear_torch_cache() - - -def clear_torch_cache(): - gc.collect() - if not shared.args.cpu: - torch.cuda.empty_cache() diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/start_linux.sh b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/start_linux.sh deleted file mode 100644 index d9d2ab0777409a29bee92feb13b2198b3aa4ea93..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/start_linux.sh +++ /dev/null @@ -1,67 +0,0 @@ -#!/bin/bash - -cd "$(dirname "${BASH_SOURCE[0]}")" - -if [[ "$(pwd)" =~ " " ]]; then echo This script relies on Miniconda which can not be silently installed under a path with spaces. && exit; fi - -# deactivate existing conda envs as needed to avoid conflicts -{ conda deactivate && conda deactivate && conda deactivate; } 2> /dev/null - -OS_ARCH=$(uname -m) -case "${OS_ARCH}" in - x86_64*) OS_ARCH="x86_64";; - arm64*) OS_ARCH="aarch64";; - aarch64*) OS_ARCH="aarch64";; - *) echo "Unknown system architecture: $OS_ARCH! This script runs only on x86_64 or arm64" && exit -esac - -# config -INSTALL_DIR="$(pwd)/installer_files" -CONDA_ROOT_PREFIX="$(pwd)/installer_files/conda" -INSTALL_ENV_DIR="$(pwd)/installer_files/env" -MINICONDA_DOWNLOAD_URL="https://repo.anaconda.com/miniconda/Miniconda3-py310_23.3.1-0-Linux-${OS_ARCH}.sh" -conda_exists="F" - -# figure out whether git and conda needs to be installed -if "$CONDA_ROOT_PREFIX/bin/conda" --version &>/dev/null; then conda_exists="T"; fi - -# (if necessary) install git and conda into a contained environment -# download miniconda -if [ "$conda_exists" == "F" ]; then - echo "Downloading Miniconda from $MINICONDA_DOWNLOAD_URL to $INSTALL_DIR/miniconda_installer.sh" - - mkdir -p "$INSTALL_DIR" - curl -Lk "$MINICONDA_DOWNLOAD_URL" > "$INSTALL_DIR/miniconda_installer.sh" - - chmod u+x "$INSTALL_DIR/miniconda_installer.sh" - bash "$INSTALL_DIR/miniconda_installer.sh" -b -p $CONDA_ROOT_PREFIX - - # test the conda binary - echo "Miniconda version:" - "$CONDA_ROOT_PREFIX/bin/conda" --version -fi - -# create the installer env -if [ ! -e "$INSTALL_ENV_DIR" ]; then - "$CONDA_ROOT_PREFIX/bin/conda" create -y -k --prefix "$INSTALL_ENV_DIR" python=3.10 -fi - -# check if conda environment was actually created -if [ ! -e "$INSTALL_ENV_DIR/bin/python" ]; then - echo "Conda environment is empty." 
- exit -fi - -# environment isolation -export PYTHONNOUSERSITE=1 -unset PYTHONPATH -unset PYTHONHOME -export CUDA_PATH="$INSTALL_ENV_DIR" -export CUDA_HOME="$CUDA_PATH" - -# activate installer env -source "$CONDA_ROOT_PREFIX/etc/profile.d/conda.sh" # otherwise conda complains about 'shell not initialized' (needed when running in a script) -conda activate "$INSTALL_ENV_DIR" - -# setup installer env -python one_click.py $@ diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/docs/faq.md b/spaces/Anonymous-sub/Rerender/ControlNet/docs/faq.md deleted file mode 100644 index 07afd7aeacb51cac4c8bac3b601fe23a2842c4d3..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/docs/faq.md +++ /dev/null @@ -1,21 +0,0 @@ -# FAQs - -**Q:** If the weight of a conv layer is zero, the gradient will also be zero, and the network will not learn anything. Why "zero convolution" works? - -**A:** This is wrong. Let us consider a very simple - -$$y=wx+b$$ - -and we have - -$$\partial y/\partial w=x, \partial y/\partial x=w, \partial y/\partial b=1$$ - -and if $w=0$ and $x \neq 0$, then - -$$\partial y/\partial w \neq 0, \partial y/\partial x=0, \partial y/\partial b\neq 0$$ - -which means as long as $x \neq 0$, one gradient descent iteration will make $w$ non-zero. Then - -$$\partial y/\partial x\neq 0$$ - -so that the zero convolutions will progressively become a common conv layer with non-zero weights. diff --git a/spaces/Arnx/MusicGenXvAKN/tests/modules/test_transformer.py b/spaces/Arnx/MusicGenXvAKN/tests/modules/test_transformer.py deleted file mode 100644 index ff7dfe4c2de05112aec55ddea9c8fd978668f80b..0000000000000000000000000000000000000000 --- a/spaces/Arnx/MusicGenXvAKN/tests/modules/test_transformer.py +++ /dev/null @@ -1,253 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from itertools import product - -import pytest -import torch - -from audiocraft.modules.transformer import ( - StreamingMultiheadAttention, StreamingTransformer, set_efficient_attention_backend) - - -def test_transformer_causal_streaming(): - torch.manual_seed(1234) - - for context, custom in product([None, 10], [False, True]): - # Test that causality and receptive fields are properly handled. - # looking at the gradients - tr = StreamingTransformer( - 16, 4, 1 if context else 2, - causal=True, past_context=context, custom=custom, - dropout=0.) - steps = 20 - for k in [0, 10, 15, 19]: - x = torch.randn(4, steps, 16, requires_grad=True) - y = tr(x) - y[:, k].abs().sum().backward() - if k + 1 < steps: - assert torch.allclose(x.grad[:, k + 1:], torch.tensor(0.)), x.grad[:, k + 1:].norm() - assert not torch.allclose(x.grad[:, :k + 1], torch.tensor(0.)), x.grad[:, :k + 1].norm() - if context is not None and k > context: - limit = k - context - 1 - assert torch.allclose(x.grad[:, :limit], - torch.tensor(0.)), x.grad[:, :limit].norm() - - # Now check that streaming gives the same result at batch eval. - x = torch.randn(4, steps, 16) - y = tr(x) - ys = [] - with tr.streaming(): - for k in range(steps): - chunk = x[:, k:k + 1, :] - ys.append(tr(chunk)) - y_stream = torch.cat(ys, dim=1) - delta = torch.norm(y_stream - y) / torch.norm(y) - assert delta < 1e-6, delta - - -def test_transformer_vs_pytorch(): - torch.manual_seed(1234) - # Check that in the non causal setting, we get the same result as - # PyTorch Transformer encoder. 
- for custom in [False, True]: - tr = StreamingTransformer( - 16, 4, 2, - causal=False, custom=custom, dropout=0., positional_scale=0.) - layer = torch.nn.TransformerEncoderLayer(16, 4, dropout=0., batch_first=True) - tr_ref = torch.nn.TransformerEncoder(layer, 2) - tr.load_state_dict(tr_ref.state_dict()) - - x = torch.randn(4, 20, 16) - y = tr(x) - y2 = tr_ref(x) - delta = torch.norm(y2 - y) / torch.norm(y) - assert delta < 1e-6, delta - - -def test_streaming_api(): - tr = StreamingTransformer(16, 4, 2, causal=True, dropout=0.) - tr.eval() - steps = 12 - x = torch.randn(1, steps, 16) - - with torch.no_grad(): - with tr.streaming(): - _ = tr(x[:, :1]) - state = {k: v.clone() for k, v in tr.get_streaming_state().items()} - y = tr(x[:, 1:2]) - tr.set_streaming_state(state) - y2 = tr(x[:, 1:2]) - assert torch.allclose(y, y2), (y - y2).norm() - assert tr.flush() is None - - -def test_memory_efficient(): - for backend in ['torch', 'xformers']: - torch.manual_seed(1234) - set_efficient_attention_backend(backend) - - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., layer_scale=0.1) - tr_mem_efficient = StreamingTransformer( - 16, 4, 2, dropout=0., memory_efficient=True, layer_scale=0.1) - tr_mem_efficient.load_state_dict(tr.state_dict()) - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - with torch.no_grad(): - y = tr(x) - y2 = tr_mem_efficient(x) - assert torch.allclose(y, y2), ((y - y2).norm(), backend) - - -def test_attention_as_float32(): - torch.manual_seed(1234) - cases = [ - {'custom': True}, - {'custom': False}, - ] - for case in cases: - tr = StreamingTransformer(16, 4, 2, dropout=0., dtype=torch.bfloat16, **case) - tr_float32 = StreamingTransformer( - 16, 4, 2, dropout=0., attention_as_float32=True, dtype=torch.bfloat16, **case) - if not case['custom']: - # we are not using autocast here because it doesn't really - # work as expected on CPU, so we have to manually cast the weights of the MHA. 
- for layer in tr_float32.layers: - layer.self_attn.mha.to(torch.float32) - tr_float32.load_state_dict(tr.state_dict()) - steps = 12 - x = torch.randn(3, steps, 16, dtype=torch.bfloat16) - - with torch.no_grad(): - y = tr(x) - y2 = tr_float32(x) - assert not torch.allclose(y, y2), (y - y2).norm() - - -@torch.no_grad() -def test_streaming_memory_efficient(): - for backend in ['torch', 'xformers']: - torch.manual_seed(1234) - set_efficient_attention_backend(backend) - tr = StreamingTransformer(16, 4, 2, causal=True, dropout=0., custom=True) - tr_mem_efficient = StreamingTransformer( - 16, 4, 2, dropout=0., memory_efficient=True, causal=True) - tr.load_state_dict(tr_mem_efficient.state_dict()) - tr.eval() - tr_mem_efficient.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - ref = tr(x) - - with tr_mem_efficient.streaming(): - outs = [] - # frame_sizes = [2] + [1] * (steps - 2) - frame_sizes = [1] * steps - - for frame_size in frame_sizes: - frame = x[:, :frame_size] - x = x[:, frame_size:] - outs.append(tr_mem_efficient(frame)) - - out = torch.cat(outs, dim=1) - delta = torch.norm(out - ref) / torch.norm(out) - assert delta < 1e-6, delta - - -def test_cross_attention(): - torch.manual_seed(1234) - for norm_first in [True, False]: - m = StreamingTransformer( - 16, 4, 2, cross_attention=False, norm_first=norm_first, dropout=0., custom=True) - m_cross = StreamingTransformer( - 16, 4, 2, cross_attention=True, norm_first=norm_first, dropout=0., custom=True) - m_cross.load_state_dict(m.state_dict(), strict=False) - x = torch.randn(2, 5, 16) - cross_x = torch.randn(2, 3, 16) - y_ref = m(x) - y_cross_zero = m_cross(x, cross_attention_src=0 * cross_x) - # With norm_first, the two should be exactly yhe same, - # but with norm_first=False, we get 2 normalization in a row - # and the epsilon value leads to a tiny change. - atol = 0. if norm_first else 1e-6 - print((y_ref - y_cross_zero).norm() / y_ref.norm()) - assert torch.allclose(y_ref, y_cross_zero, atol=atol) - - # We now expect a difference even with a generous atol of 1e-2. - y_cross = m_cross(x, cross_attention_src=cross_x) - assert not torch.allclose(y_cross, y_cross_zero, atol=1e-2) - - with pytest.raises(AssertionError): - _ = m_cross(x) - _ = m(x, cross_attention_src=cross_x) - - -def test_cross_attention_compat(): - torch.manual_seed(1234) - num_heads = 2 - dim = num_heads * 64 - with pytest.raises(AssertionError): - StreamingMultiheadAttention(dim, num_heads, causal=True, cross_attention=True) - - cross_attn = StreamingMultiheadAttention( - dim, num_heads, dropout=0, cross_attention=True, custom=True) - ref_attn = torch.nn.MultiheadAttention(dim, num_heads, dropout=0, batch_first=True) - - # We can load the regular attention state dict - # so we have compat when loading old checkpoints. - cross_attn.load_state_dict(ref_attn.state_dict()) - - queries = torch.randn(3, 7, dim) - keys = torch.randn(3, 9, dim) - values = torch.randn(3, 9, dim) - - y = cross_attn(queries, keys, values)[0] - y_ref = ref_attn(queries, keys, values)[0] - assert torch.allclose(y, y_ref, atol=1e-7), (y - y_ref).norm() / y_ref.norm() - - # Now let's check that streaming is working properly. 
- with cross_attn.streaming(): - ys = [] - for step in range(queries.shape[1]): - ys.append(cross_attn(queries[:, step: step + 1], keys, values)[0]) - y_streaming = torch.cat(ys, dim=1) - assert torch.allclose(y_streaming, y, atol=1e-7) - - -def test_repeat_kv(): - torch.manual_seed(1234) - num_heads = 8 - kv_repeat = 4 - dim = num_heads * 64 - with pytest.raises(AssertionError): - mha = StreamingMultiheadAttention( - dim, num_heads, causal=True, kv_repeat=kv_repeat, cross_attention=True) - mha = StreamingMultiheadAttention( - dim, num_heads, causal=True, kv_repeat=kv_repeat) - mha = StreamingMultiheadAttention( - dim, num_heads, causal=True, kv_repeat=kv_repeat, custom=True) - x = torch.randn(4, 18, dim) - y = mha(x, x, x)[0] - assert x.shape == y.shape - - -def test_qk_layer_norm(): - torch.manual_seed(1234) - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., qk_layer_norm=True, bias_attn=False) - steps = 12 - x = torch.randn(3, steps, 16) - y = tr(x) - - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., qk_layer_norm=True, cross_attention=True) - z = torch.randn(3, 21, 16) - y = tr(x, cross_attention_src=z) - assert y.shape == x.shape diff --git a/spaces/Artrajz/vits-simple-api/bert_vits2/bert/bert-base-japanese-v3/README.md b/spaces/Artrajz/vits-simple-api/bert_vits2/bert/bert-base-japanese-v3/README.md deleted file mode 100644 index c5b3456719f01801a2f29fef5faa8ee672391adf..0000000000000000000000000000000000000000 --- a/spaces/Artrajz/vits-simple-api/bert_vits2/bert/bert-base-japanese-v3/README.md +++ /dev/null @@ -1,53 +0,0 @@ ---- -license: apache-2.0 -datasets: -- cc100 -- wikipedia -language: -- ja -widget: -- text: 東北大学で[MASK]の研究をしています。 ---- - -# BERT base Japanese (unidic-lite with whole word masking, CC-100 and jawiki-20230102) - -This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language. - -This version of the model processes input texts with word-level tokenization based on the Unidic 2.1.2 dictionary (available in [unidic-lite](https://pypi.org/project/unidic-lite/) package), followed by the WordPiece subword tokenization. -Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective. - -The codes for the pretraining are available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/). - -## Model architecture - -The model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads. - -## Training Data - -The model is trained on the Japanese portion of [CC-100 dataset](https://data.statmt.org/cc-100/) and the Japanese version of Wikipedia. -For Wikipedia, we generated a text corpus from the [Wikipedia Cirrussearch dump file](https://dumps.wikimedia.org/other/cirrussearch/) as of January 2, 2023. -The corpus files generated from CC-100 and Wikipedia are 74.3GB and 4.9GB in size and consist of approximately 392M and 34M sentences, respectively. - -For the purpose of splitting texts into sentences, we used [fugashi](https://github.com/polm/fugashi) with [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd) dictionary (v0.0.7). - -## Tokenization - -The texts are first tokenized by MeCab with the Unidic 2.1.2 dictionary and then split into subwords by the WordPiece algorithm. -The vocabulary size is 32768. - -We used [fugashi](https://github.com/polm/fugashi) and [unidic-lite](https://github.com/polm/unidic-lite) packages for the tokenization. 
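
As a quick illustration of the tokenization and masked-language-modeling behaviour described above, the model can be loaded through the `transformers` library. This is a minimal sketch, not an official usage guide: it assumes the model is published under the `cl-tohoku/bert-base-japanese-v3` identifier and that `fugashi` and `unidic-lite` are installed alongside `transformers`.

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

# Loading the tokenizer needs fugashi + unidic-lite for the MeCab word segmentation step.
tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese-v3")
model = AutoModelForMaskedLM.from_pretrained("cl-tohoku/bert-base-japanese-v3")

# Word-level tokenization (Unidic 2.1.2) followed by WordPiece subword splitting.
print(tokenizer.tokenize("東北大学で自然言語処理の研究をしています。"))

# Fill-mask example, mirroring the widget text at the top of this card.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
for candidate in fill_mask("東北大学で[MASK]の研究をしています。"):
    print(candidate["token_str"], round(candidate["score"], 3))
```
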
- -## Training - -We trained the model first on the CC-100 corpus for 1M steps and then on the Wikipedia corpus for another 1M steps. -For training of the MLM (masked language modeling) objective, we introduced whole word masking in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once. - -For training of each model, we used a v3-8 instance of Cloud TPUs provided by [TPU Research Cloud](https://sites.research.google/trc/about/). - -## Licenses - -The pretrained models are distributed under the Apache License 2.0. - -## Acknowledgments - -This model is trained with Cloud TPUs provided by [TPU Research Cloud](https://sites.research.google/trc/about/) program. diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/sjisprober.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/sjisprober.py deleted file mode 100644 index 91df077961b6310b8e1c708b74003d5343bff6a8..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/sjisprober.py +++ /dev/null @@ -1,105 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is mozilla.org code. -# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 1998 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. 
-# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -from typing import Union - -from .chardistribution import SJISDistributionAnalysis -from .codingstatemachine import CodingStateMachine -from .enums import MachineState, ProbingState -from .jpcntx import SJISContextAnalysis -from .mbcharsetprober import MultiByteCharSetProber -from .mbcssm import SJIS_SM_MODEL - - -class SJISProber(MultiByteCharSetProber): - def __init__(self) -> None: - super().__init__() - self.coding_sm = CodingStateMachine(SJIS_SM_MODEL) - self.distribution_analyzer = SJISDistributionAnalysis() - self.context_analyzer = SJISContextAnalysis() - self.reset() - - def reset(self) -> None: - super().reset() - self.context_analyzer.reset() - - @property - def charset_name(self) -> str: - return self.context_analyzer.charset_name - - @property - def language(self) -> str: - return "Japanese" - - def feed(self, byte_str: Union[bytes, bytearray]) -> ProbingState: - assert self.coding_sm is not None - assert self.distribution_analyzer is not None - - for i, byte in enumerate(byte_str): - coding_state = self.coding_sm.next_state(byte) - if coding_state == MachineState.ERROR: - self.logger.debug( - "%s %s prober hit error at byte %s", - self.charset_name, - self.language, - i, - ) - self._state = ProbingState.NOT_ME - break - if coding_state == MachineState.ITS_ME: - self._state = ProbingState.FOUND_IT - break - if coding_state == MachineState.START: - char_len = self.coding_sm.get_current_charlen() - if i == 0: - self._last_char[1] = byte - self.context_analyzer.feed( - self._last_char[2 - char_len :], char_len - ) - self.distribution_analyzer.feed(self._last_char, char_len) - else: - self.context_analyzer.feed( - byte_str[i + 1 - char_len : i + 3 - char_len], char_len - ) - self.distribution_analyzer.feed(byte_str[i - 1 : i + 1], char_len) - - self._last_char[0] = byte_str[-1] - - if self.state == ProbingState.DETECTING: - if self.context_analyzer.got_enough_data() and ( - self.get_confidence() > self.SHORTCUT_THRESHOLD - ): - self._state = ProbingState.FOUND_IT - - return self.state - - def get_confidence(self) -> float: - assert self.distribution_analyzer is not None - - context_conf = self.context_analyzer.get_confidence() - distrib_conf = self.distribution_analyzer.get_confidence() - return max(context_conf, distrib_conf) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/common.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/common.py deleted file mode 100644 index 1859fb79cc4e78850b69742fca56698041ce59f8..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/common.py +++ /dev/null @@ -1,424 +0,0 @@ -# common.py -from .core import * -from .helpers import delimited_list, any_open_tag, any_close_tag -from datetime import datetime - - -# some other useful expressions - using lower-case class name since we are really using this as a namespace -class pyparsing_common: - """Here are some common low-level expressions that may be useful in - jump-starting parser development: - - - numeric forms (:class:`integers`, :class:`reals`, - :class:`scientific notation`) - - common :class:`programming 
identifiers` - - network addresses (:class:`MAC`, - :class:`IPv4`, :class:`IPv6`) - - ISO8601 :class:`dates` and - :class:`datetime` - - :class:`UUID` - - :class:`comma-separated list` - - :class:`url` - - Parse actions: - - - :class:`convertToInteger` - - :class:`convertToFloat` - - :class:`convertToDate` - - :class:`convertToDatetime` - - :class:`stripHTMLTags` - - :class:`upcaseTokens` - - :class:`downcaseTokens` - - Example:: - - pyparsing_common.number.runTests(''' - # any int or real number, returned as the appropriate type - 100 - -100 - +100 - 3.14159 - 6.02e23 - 1e-12 - ''') - - pyparsing_common.fnumber.runTests(''' - # any int or real number, returned as float - 100 - -100 - +100 - 3.14159 - 6.02e23 - 1e-12 - ''') - - pyparsing_common.hex_integer.runTests(''' - # hex numbers - 100 - FF - ''') - - pyparsing_common.fraction.runTests(''' - # fractions - 1/2 - -3/4 - ''') - - pyparsing_common.mixed_integer.runTests(''' - # mixed fractions - 1 - 1/2 - -3/4 - 1-3/4 - ''') - - import uuid - pyparsing_common.uuid.setParseAction(tokenMap(uuid.UUID)) - pyparsing_common.uuid.runTests(''' - # uuid - 12345678-1234-5678-1234-567812345678 - ''') - - prints:: - - # any int or real number, returned as the appropriate type - 100 - [100] - - -100 - [-100] - - +100 - [100] - - 3.14159 - [3.14159] - - 6.02e23 - [6.02e+23] - - 1e-12 - [1e-12] - - # any int or real number, returned as float - 100 - [100.0] - - -100 - [-100.0] - - +100 - [100.0] - - 3.14159 - [3.14159] - - 6.02e23 - [6.02e+23] - - 1e-12 - [1e-12] - - # hex numbers - 100 - [256] - - FF - [255] - - # fractions - 1/2 - [0.5] - - -3/4 - [-0.75] - - # mixed fractions - 1 - [1] - - 1/2 - [0.5] - - -3/4 - [-0.75] - - 1-3/4 - [1.75] - - # uuid - 12345678-1234-5678-1234-567812345678 - [UUID('12345678-1234-5678-1234-567812345678')] - """ - - convert_to_integer = token_map(int) - """ - Parse action for converting parsed integers to Python int - """ - - convert_to_float = token_map(float) - """ - Parse action for converting parsed numbers to Python float - """ - - integer = Word(nums).set_name("integer").set_parse_action(convert_to_integer) - """expression that parses an unsigned integer, returns an int""" - - hex_integer = ( - Word(hexnums).set_name("hex integer").set_parse_action(token_map(int, 16)) - ) - """expression that parses a hexadecimal integer, returns an int""" - - signed_integer = ( - Regex(r"[+-]?\d+") - .set_name("signed integer") - .set_parse_action(convert_to_integer) - ) - """expression that parses an integer with optional leading sign, returns an int""" - - fraction = ( - signed_integer().set_parse_action(convert_to_float) - + "/" - + signed_integer().set_parse_action(convert_to_float) - ).set_name("fraction") - """fractional expression of an integer divided by an integer, returns a float""" - fraction.add_parse_action(lambda tt: tt[0] / tt[-1]) - - mixed_integer = ( - fraction | signed_integer + Opt(Opt("-").suppress() + fraction) - ).set_name("fraction or mixed integer-fraction") - """mixed integer of the form 'integer - fraction', with optional leading integer, returns float""" - mixed_integer.add_parse_action(sum) - - real = ( - Regex(r"[+-]?(?:\d+\.\d*|\.\d+)") - .set_name("real number") - .set_parse_action(convert_to_float) - ) - """expression that parses a floating point number and returns a float""" - - sci_real = ( - Regex(r"[+-]?(?:\d+(?:[eE][+-]?\d+)|(?:\d+\.\d*|\.\d+)(?:[eE][+-]?\d+)?)") - .set_name("real number with scientific notation") - .set_parse_action(convert_to_float) - ) - """expression that parses a floating 
point number with optional - scientific notation and returns a float""" - - # streamlining this expression makes the docs nicer-looking - number = (sci_real | real | signed_integer).setName("number").streamline() - """any numeric expression, returns the corresponding Python type""" - - fnumber = ( - Regex(r"[+-]?\d+\.?\d*([eE][+-]?\d+)?") - .set_name("fnumber") - .set_parse_action(convert_to_float) - ) - """any int or real number, returned as float""" - - identifier = Word(identchars, identbodychars).set_name("identifier") - """typical code identifier (leading alpha or '_', followed by 0 or more alphas, nums, or '_')""" - - ipv4_address = Regex( - r"(25[0-5]|2[0-4][0-9]|1?[0-9]{1,2})(\.(25[0-5]|2[0-4][0-9]|1?[0-9]{1,2})){3}" - ).set_name("IPv4 address") - "IPv4 address (``0.0.0.0 - 255.255.255.255``)" - - _ipv6_part = Regex(r"[0-9a-fA-F]{1,4}").set_name("hex_integer") - _full_ipv6_address = (_ipv6_part + (":" + _ipv6_part) * 7).set_name( - "full IPv6 address" - ) - _short_ipv6_address = ( - Opt(_ipv6_part + (":" + _ipv6_part) * (0, 6)) - + "::" - + Opt(_ipv6_part + (":" + _ipv6_part) * (0, 6)) - ).set_name("short IPv6 address") - _short_ipv6_address.add_condition( - lambda t: sum(1 for tt in t if pyparsing_common._ipv6_part.matches(tt)) < 8 - ) - _mixed_ipv6_address = ("::ffff:" + ipv4_address).set_name("mixed IPv6 address") - ipv6_address = Combine( - (_full_ipv6_address | _mixed_ipv6_address | _short_ipv6_address).set_name( - "IPv6 address" - ) - ).set_name("IPv6 address") - "IPv6 address (long, short, or mixed form)" - - mac_address = Regex( - r"[0-9a-fA-F]{2}([:.-])[0-9a-fA-F]{2}(?:\1[0-9a-fA-F]{2}){4}" - ).set_name("MAC address") - "MAC address xx:xx:xx:xx:xx (may also have '-' or '.' delimiters)" - - @staticmethod - def convert_to_date(fmt: str = "%Y-%m-%d"): - """ - Helper to create a parse action for converting parsed date string to Python datetime.date - - Params - - - fmt - format to be passed to datetime.strptime (default= ``"%Y-%m-%d"``) - - Example:: - - date_expr = pyparsing_common.iso8601_date.copy() - date_expr.setParseAction(pyparsing_common.convertToDate()) - print(date_expr.parseString("1999-12-31")) - - prints:: - - [datetime.date(1999, 12, 31)] - """ - - def cvt_fn(ss, ll, tt): - try: - return datetime.strptime(tt[0], fmt).date() - except ValueError as ve: - raise ParseException(ss, ll, str(ve)) - - return cvt_fn - - @staticmethod - def convert_to_datetime(fmt: str = "%Y-%m-%dT%H:%M:%S.%f"): - """Helper to create a parse action for converting parsed - datetime string to Python datetime.datetime - - Params - - - fmt - format to be passed to datetime.strptime (default= ``"%Y-%m-%dT%H:%M:%S.%f"``) - - Example:: - - dt_expr = pyparsing_common.iso8601_datetime.copy() - dt_expr.setParseAction(pyparsing_common.convertToDatetime()) - print(dt_expr.parseString("1999-12-31T23:59:59.999")) - - prints:: - - [datetime.datetime(1999, 12, 31, 23, 59, 59, 999000)] - """ - - def cvt_fn(s, l, t): - try: - return datetime.strptime(t[0], fmt) - except ValueError as ve: - raise ParseException(s, l, str(ve)) - - return cvt_fn - - iso8601_date = Regex( - r"(?P\d{4})(?:-(?P\d\d)(?:-(?P\d\d))?)?" - ).set_name("ISO8601 date") - "ISO8601 date (``yyyy-mm-dd``)" - - iso8601_datetime = Regex( - r"(?P\d{4})-(?P\d\d)-(?P\d\d)[T ](?P\d\d):(?P\d\d)(:(?P\d\d(\.\d*)?)?)?(?PZ|[+-]\d\d:?\d\d)?" 
- ).set_name("ISO8601 datetime") - "ISO8601 datetime (``yyyy-mm-ddThh:mm:ss.s(Z|+-00:00)``) - trailing seconds, milliseconds, and timezone optional; accepts separating ``'T'`` or ``' '``" - - uuid = Regex(r"[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}").set_name("UUID") - "UUID (``xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx``)" - - _html_stripper = any_open_tag.suppress() | any_close_tag.suppress() - - @staticmethod - def strip_html_tags(s: str, l: int, tokens: ParseResults): - """Parse action to remove HTML tags from web page HTML source - - Example:: - - # strip HTML links from normal text - text = 'More info at the pyparsing wiki page' - td, td_end = makeHTMLTags("TD") - table_text = td + SkipTo(td_end).setParseAction(pyparsing_common.stripHTMLTags)("body") + td_end - print(table_text.parseString(text).body) - - Prints:: - - More info at the pyparsing wiki page - """ - return pyparsing_common._html_stripper.transform_string(tokens[0]) - - _commasepitem = ( - Combine( - OneOrMore( - ~Literal(",") - + ~LineEnd() - + Word(printables, exclude_chars=",") - + Opt(White(" \t") + ~FollowedBy(LineEnd() | ",")) - ) - ) - .streamline() - .set_name("commaItem") - ) - comma_separated_list = delimited_list( - Opt(quoted_string.copy() | _commasepitem, default="") - ).set_name("comma separated list") - """Predefined expression of 1 or more printable words or quoted strings, separated by commas.""" - - upcase_tokens = staticmethod(token_map(lambda t: t.upper())) - """Parse action to convert tokens to upper case.""" - - downcase_tokens = staticmethod(token_map(lambda t: t.lower())) - """Parse action to convert tokens to lower case.""" - - # fmt: off - url = Regex( - # https://mathiasbynens.be/demo/url-regex - # https://gist.github.com/dperini/729294 - r"^" + - # protocol identifier (optional) - # short syntax // still required - r"(?:(?:(?Phttps?|ftp):)?\/\/)" + - # user:pass BasicAuth (optional) - r"(?:(?P\S+(?::\S*)?)@)?" + - r"(?P" + - # IP address exclusion - # private & local networks - r"(?!(?:10|127)(?:\.\d{1,3}){3})" + - r"(?!(?:169\.254|192\.168)(?:\.\d{1,3}){2})" + - r"(?!172\.(?:1[6-9]|2\d|3[0-1])(?:\.\d{1,3}){2})" + - # IP address dotted notation octets - # excludes loopback network 0.0.0.0 - # excludes reserved space >= 224.0.0.0 - # excludes network & broadcast addresses - # (first & last IP address of each class) - r"(?:[1-9]\d?|1\d\d|2[01]\d|22[0-3])" + - r"(?:\.(?:1?\d{1,2}|2[0-4]\d|25[0-5])){2}" + - r"(?:\.(?:[1-9]\d?|1\d\d|2[0-4]\d|25[0-4]))" + - r"|" + - # host & domain names, may end with dot - # can be replaced by a shortest alternative - # (?![-_])(?:[-\w\u00a1-\uffff]{0,63}[^-_]\.)+ - r"(?:" + - r"(?:" + - r"[a-z0-9\u00a1-\uffff]" + - r"[a-z0-9\u00a1-\uffff_-]{0,62}" + - r")?" + - r"[a-z0-9\u00a1-\uffff]\." + - r")+" + - # TLD identifier name, may end with dot - r"(?:[a-z\u00a1-\uffff]{2,}\.?)" + - r")" + - # port number (optional) - r"(:(?P\d{2,5}))?" + - # resource path (optional) - r"(?P\/[^?# ]*)?" + - # query string (optional) - r"(\?(?P[^#]*))?" + - # fragment (optional) - r"(#(?P\S*))?" 
+ - r"$" - ).set_name("url") - # fmt: on - - # pre-PEP8 compatibility names - convertToInteger = convert_to_integer - convertToFloat = convert_to_float - convertToDate = convert_to_date - convertToDatetime = convert_to_datetime - stripHTMLTags = strip_html_tags - upcaseTokens = upcase_tokens - downcaseTokens = downcase_tokens - - -_builtin_exprs = [ - v for v in vars(pyparsing_common).values() if isinstance(v, ParserElement) -] diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/helpers.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/helpers.py deleted file mode 100644 index 9588b3b780159a2a2d23c7f84a4404ec350e2b65..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/pyparsing/helpers.py +++ /dev/null @@ -1,1088 +0,0 @@ -# helpers.py -import html.entities -import re -import typing - -from . import __diag__ -from .core import * -from .util import _bslash, _flatten, _escape_regex_range_chars - - -# -# global helpers -# -def delimited_list( - expr: Union[str, ParserElement], - delim: Union[str, ParserElement] = ",", - combine: bool = False, - min: typing.Optional[int] = None, - max: typing.Optional[int] = None, - *, - allow_trailing_delim: bool = False, -) -> ParserElement: - """Helper to define a delimited list of expressions - the delimiter - defaults to ','. By default, the list elements and delimiters can - have intervening whitespace, and comments, but this can be - overridden by passing ``combine=True`` in the constructor. If - ``combine`` is set to ``True``, the matching tokens are - returned as a single token string, with the delimiters included; - otherwise, the matching tokens are returned as a list of tokens, - with the delimiters suppressed. - - If ``allow_trailing_delim`` is set to True, then the list may end with - a delimiter. - - Example:: - - delimited_list(Word(alphas)).parse_string("aa,bb,cc") # -> ['aa', 'bb', 'cc'] - delimited_list(Word(hexnums), delim=':', combine=True).parse_string("AA:BB:CC:DD:EE") # -> ['AA:BB:CC:DD:EE'] - """ - if isinstance(expr, str_type): - expr = ParserElement._literalStringClass(expr) - - dlName = "{expr} [{delim} {expr}]...{end}".format( - expr=str(expr.copy().streamline()), - delim=str(delim), - end=" [{}]".format(str(delim)) if allow_trailing_delim else "", - ) - - if not combine: - delim = Suppress(delim) - - if min is not None: - if min < 1: - raise ValueError("min must be greater than 0") - min -= 1 - if max is not None: - if min is not None and max <= min: - raise ValueError("max must be greater than, or equal to min") - max -= 1 - delimited_list_expr = expr + (delim + expr)[min, max] - - if allow_trailing_delim: - delimited_list_expr += Opt(delim) - - if combine: - return Combine(delimited_list_expr).set_name(dlName) - else: - return delimited_list_expr.set_name(dlName) - - -def counted_array( - expr: ParserElement, - int_expr: typing.Optional[ParserElement] = None, - *, - intExpr: typing.Optional[ParserElement] = None, -) -> ParserElement: - """Helper to define a counted list of expressions. - - This helper defines a pattern of the form:: - - integer expr expr expr... - - where the leading integer tells how many expr expressions follow. - The matched tokens returns the array of expr tokens as a list - the - leading count token is suppressed. 
- - If ``int_expr`` is specified, it should be a pyparsing expression - that produces an integer value. - - Example:: - - counted_array(Word(alphas)).parse_string('2 ab cd ef') # -> ['ab', 'cd'] - - # in this parser, the leading integer value is given in binary, - # '10' indicating that 2 values are in the array - binary_constant = Word('01').set_parse_action(lambda t: int(t[0], 2)) - counted_array(Word(alphas), int_expr=binary_constant).parse_string('10 ab cd ef') # -> ['ab', 'cd'] - - # if other fields must be parsed after the count but before the - # list items, give the fields results names and they will - # be preserved in the returned ParseResults: - count_with_metadata = integer + Word(alphas)("type") - typed_array = counted_array(Word(alphanums), int_expr=count_with_metadata)("items") - result = typed_array.parse_string("3 bool True True False") - print(result.dump()) - - # prints - # ['True', 'True', 'False'] - # - items: ['True', 'True', 'False'] - # - type: 'bool' - """ - intExpr = intExpr or int_expr - array_expr = Forward() - - def count_field_parse_action(s, l, t): - nonlocal array_expr - n = t[0] - array_expr <<= (expr * n) if n else Empty() - # clear list contents, but keep any named results - del t[:] - - if intExpr is None: - intExpr = Word(nums).set_parse_action(lambda t: int(t[0])) - else: - intExpr = intExpr.copy() - intExpr.set_name("arrayLen") - intExpr.add_parse_action(count_field_parse_action, call_during_try=True) - return (intExpr + array_expr).set_name("(len) " + str(expr) + "...") - - -def match_previous_literal(expr: ParserElement) -> ParserElement: - """Helper to define an expression that is indirectly defined from - the tokens matched in a previous expression, that is, it looks for - a 'repeat' of a previous expression. For example:: - - first = Word(nums) - second = match_previous_literal(first) - match_expr = first + ":" + second - - will match ``"1:1"``, but not ``"1:2"``. Because this - matches a previous literal, will also match the leading - ``"1:1"`` in ``"1:10"``. If this is not desired, use - :class:`match_previous_expr`. Do *not* use with packrat parsing - enabled. - """ - rep = Forward() - - def copy_token_to_repeater(s, l, t): - if t: - if len(t) == 1: - rep << t[0] - else: - # flatten t tokens - tflat = _flatten(t.as_list()) - rep << And(Literal(tt) for tt in tflat) - else: - rep << Empty() - - expr.add_parse_action(copy_token_to_repeater, callDuringTry=True) - rep.set_name("(prev) " + str(expr)) - return rep - - -def match_previous_expr(expr: ParserElement) -> ParserElement: - """Helper to define an expression that is indirectly defined from - the tokens matched in a previous expression, that is, it looks for - a 'repeat' of a previous expression. For example:: - - first = Word(nums) - second = match_previous_expr(first) - match_expr = first + ":" + second - - will match ``"1:1"``, but not ``"1:2"``. Because this - matches by expressions, will *not* match the leading ``"1:1"`` - in ``"1:10"``; the expressions are evaluated first, and then - compared, so ``"1"`` is compared with ``"10"``. Do *not* use - with packrat parsing enabled. 
- """ - rep = Forward() - e2 = expr.copy() - rep <<= e2 - - def copy_token_to_repeater(s, l, t): - matchTokens = _flatten(t.as_list()) - - def must_match_these_tokens(s, l, t): - theseTokens = _flatten(t.as_list()) - if theseTokens != matchTokens: - raise ParseException( - s, l, "Expected {}, found{}".format(matchTokens, theseTokens) - ) - - rep.set_parse_action(must_match_these_tokens, callDuringTry=True) - - expr.add_parse_action(copy_token_to_repeater, callDuringTry=True) - rep.set_name("(prev) " + str(expr)) - return rep - - -def one_of( - strs: Union[typing.Iterable[str], str], - caseless: bool = False, - use_regex: bool = True, - as_keyword: bool = False, - *, - useRegex: bool = True, - asKeyword: bool = False, -) -> ParserElement: - """Helper to quickly define a set of alternative :class:`Literal` s, - and makes sure to do longest-first testing when there is a conflict, - regardless of the input order, but returns - a :class:`MatchFirst` for best performance. - - Parameters: - - - ``strs`` - a string of space-delimited literals, or a collection of - string literals - - ``caseless`` - treat all literals as caseless - (default= ``False``) - - ``use_regex`` - as an optimization, will - generate a :class:`Regex` object; otherwise, will generate - a :class:`MatchFirst` object (if ``caseless=True`` or ``asKeyword=True``, or if - creating a :class:`Regex` raises an exception) - (default= ``True``) - - ``as_keyword`` - enforce :class:`Keyword`-style matching on the - generated expressions - (default= ``False``) - - ``asKeyword`` and ``useRegex`` are retained for pre-PEP8 compatibility, - but will be removed in a future release - - Example:: - - comp_oper = one_of("< = > <= >= !=") - var = Word(alphas) - number = Word(nums) - term = var | number - comparison_expr = term + comp_oper + term - print(comparison_expr.search_string("B = 12 AA=23 B<=AA AA>12")) - - prints:: - - [['B', '=', '12'], ['AA', '=', '23'], ['B', '<=', 'AA'], ['AA', '>', '12']] - """ - asKeyword = asKeyword or as_keyword - useRegex = useRegex and use_regex - - if ( - isinstance(caseless, str_type) - and __diag__.warn_on_multiple_string_args_to_oneof - ): - warnings.warn( - "More than one string argument passed to one_of, pass" - " choices as a list or space-delimited string", - stacklevel=2, - ) - - if caseless: - isequal = lambda a, b: a.upper() == b.upper() - masks = lambda a, b: b.upper().startswith(a.upper()) - parseElementClass = CaselessKeyword if asKeyword else CaselessLiteral - else: - isequal = lambda a, b: a == b - masks = lambda a, b: b.startswith(a) - parseElementClass = Keyword if asKeyword else Literal - - symbols: List[str] = [] - if isinstance(strs, str_type): - symbols = strs.split() - elif isinstance(strs, Iterable): - symbols = list(strs) - else: - raise TypeError("Invalid argument to one_of, expected string or iterable") - if not symbols: - return NoMatch() - - # reorder given symbols to take care to avoid masking longer choices with shorter ones - # (but only if the given symbols are not just single characters) - if any(len(sym) > 1 for sym in symbols): - i = 0 - while i < len(symbols) - 1: - cur = symbols[i] - for j, other in enumerate(symbols[i + 1 :]): - if isequal(other, cur): - del symbols[i + j + 1] - break - elif masks(cur, other): - del symbols[i + j + 1] - symbols.insert(i, other) - break - else: - i += 1 - - if useRegex: - re_flags: int = re.IGNORECASE if caseless else 0 - - try: - if all(len(sym) == 1 for sym in symbols): - # symbols are just single characters, create range regex pattern - 
patt = "[{}]".format( - "".join(_escape_regex_range_chars(sym) for sym in symbols) - ) - else: - patt = "|".join(re.escape(sym) for sym in symbols) - - # wrap with \b word break markers if defining as keywords - if asKeyword: - patt = r"\b(?:{})\b".format(patt) - - ret = Regex(patt, flags=re_flags).set_name(" | ".join(symbols)) - - if caseless: - # add parse action to return symbols as specified, not in random - # casing as found in input string - symbol_map = {sym.lower(): sym for sym in symbols} - ret.add_parse_action(lambda s, l, t: symbol_map[t[0].lower()]) - - return ret - - except re.error: - warnings.warn( - "Exception creating Regex for one_of, building MatchFirst", stacklevel=2 - ) - - # last resort, just use MatchFirst - return MatchFirst(parseElementClass(sym) for sym in symbols).set_name( - " | ".join(symbols) - ) - - -def dict_of(key: ParserElement, value: ParserElement) -> ParserElement: - """Helper to easily and clearly define a dictionary by specifying - the respective patterns for the key and value. Takes care of - defining the :class:`Dict`, :class:`ZeroOrMore`, and - :class:`Group` tokens in the proper order. The key pattern - can include delimiting markers or punctuation, as long as they are - suppressed, thereby leaving the significant key text. The value - pattern can include named results, so that the :class:`Dict` results - can include named token fields. - - Example:: - - text = "shape: SQUARE posn: upper left color: light blue texture: burlap" - attr_expr = (label + Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join)) - print(attr_expr[1, ...].parse_string(text).dump()) - - attr_label = label - attr_value = Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join) - - # similar to Dict, but simpler call format - result = dict_of(attr_label, attr_value).parse_string(text) - print(result.dump()) - print(result['shape']) - print(result.shape) # object attribute access works too - print(result.as_dict()) - - prints:: - - [['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'light blue'], ['texture', 'burlap']] - - color: 'light blue' - - posn: 'upper left' - - shape: 'SQUARE' - - texture: 'burlap' - SQUARE - SQUARE - {'color': 'light blue', 'shape': 'SQUARE', 'posn': 'upper left', 'texture': 'burlap'} - """ - return Dict(OneOrMore(Group(key + value))) - - -def original_text_for( - expr: ParserElement, as_string: bool = True, *, asString: bool = True -) -> ParserElement: - """Helper to return the original, untokenized text for a given - expression. Useful to restore the parsed fields of an HTML start - tag into the raw tag text itself, or to revert separate tokens with - intervening whitespace back to the original matching input text. By - default, returns astring containing the original parsed text. - - If the optional ``as_string`` argument is passed as - ``False``, then the return value is - a :class:`ParseResults` containing any results names that - were originally matched, and a single token containing the original - matched text from the input string. So if the expression passed to - :class:`original_text_for` contains expressions with defined - results names, you must set ``as_string`` to ``False`` if you - want to preserve those results name values. - - The ``asString`` pre-PEP8 argument is retained for compatibility, - but will be removed in a future release. 
- - Example:: - - src = "this is test bold text normal text " - for tag in ("b", "i"): - opener, closer = make_html_tags(tag) - patt = original_text_for(opener + SkipTo(closer) + closer) - print(patt.search_string(src)[0]) - - prints:: - - [' bold text '] - ['text'] - """ - asString = asString and as_string - - locMarker = Empty().set_parse_action(lambda s, loc, t: loc) - endlocMarker = locMarker.copy() - endlocMarker.callPreparse = False - matchExpr = locMarker("_original_start") + expr + endlocMarker("_original_end") - if asString: - extractText = lambda s, l, t: s[t._original_start : t._original_end] - else: - - def extractText(s, l, t): - t[:] = [s[t.pop("_original_start") : t.pop("_original_end")]] - - matchExpr.set_parse_action(extractText) - matchExpr.ignoreExprs = expr.ignoreExprs - matchExpr.suppress_warning(Diagnostics.warn_ungrouped_named_tokens_in_collection) - return matchExpr - - -def ungroup(expr: ParserElement) -> ParserElement: - """Helper to undo pyparsing's default grouping of And expressions, - even if all but one are non-empty. - """ - return TokenConverter(expr).add_parse_action(lambda t: t[0]) - - -def locatedExpr(expr: ParserElement) -> ParserElement: - """ - (DEPRECATED - future code should use the Located class) - Helper to decorate a returned token with its starting and ending - locations in the input string. - - This helper adds the following results names: - - - ``locn_start`` - location where matched expression begins - - ``locn_end`` - location where matched expression ends - - ``value`` - the actual parsed results - - Be careful if the input text contains ```` characters, you - may want to call :class:`ParserElement.parseWithTabs` - - Example:: - - wd = Word(alphas) - for match in locatedExpr(wd).searchString("ljsdf123lksdjjf123lkkjj1222"): - print(match) - - prints:: - - [[0, 'ljsdf', 5]] - [[8, 'lksdjjf', 15]] - [[18, 'lkkjj', 23]] - """ - locator = Empty().set_parse_action(lambda ss, ll, tt: ll) - return Group( - locator("locn_start") - + expr("value") - + locator.copy().leaveWhitespace()("locn_end") - ) - - -def nested_expr( - opener: Union[str, ParserElement] = "(", - closer: Union[str, ParserElement] = ")", - content: typing.Optional[ParserElement] = None, - ignore_expr: ParserElement = quoted_string(), - *, - ignoreExpr: ParserElement = quoted_string(), -) -> ParserElement: - """Helper method for defining nested lists enclosed in opening and - closing delimiters (``"("`` and ``")"`` are the default). - - Parameters: - - ``opener`` - opening character for a nested list - (default= ``"("``); can also be a pyparsing expression - - ``closer`` - closing character for a nested list - (default= ``")"``); can also be a pyparsing expression - - ``content`` - expression for items within the nested lists - (default= ``None``) - - ``ignore_expr`` - expression for ignoring opening and closing delimiters - (default= :class:`quoted_string`) - - ``ignoreExpr`` - this pre-PEP8 argument is retained for compatibility - but will be removed in a future release - - If an expression is not provided for the content argument, the - nested expression will capture all whitespace-delimited content - between delimiters as a list of separate values. - - Use the ``ignore_expr`` argument to define expressions that may - contain opening or closing characters that should not be treated as - opening or closing characters for nesting, such as quoted_string or - a comment expression. Specify multiple expressions using an - :class:`Or` or :class:`MatchFirst`. 
The default is - :class:`quoted_string`, but if no expressions are to be ignored, then - pass ``None`` for this argument. - - Example:: - - data_type = one_of("void int short long char float double") - decl_data_type = Combine(data_type + Opt(Word('*'))) - ident = Word(alphas+'_', alphanums+'_') - number = pyparsing_common.number - arg = Group(decl_data_type + ident) - LPAR, RPAR = map(Suppress, "()") - - code_body = nested_expr('{', '}', ignore_expr=(quoted_string | c_style_comment)) - - c_function = (decl_data_type("type") - + ident("name") - + LPAR + Opt(delimited_list(arg), [])("args") + RPAR - + code_body("body")) - c_function.ignore(c_style_comment) - - source_code = ''' - int is_odd(int x) { - return (x%2); - } - - int dec_to_hex(char hchar) { - if (hchar >= '0' && hchar <= '9') { - return (ord(hchar)-ord('0')); - } else { - return (10+ord(hchar)-ord('A')); - } - } - ''' - for func in c_function.search_string(source_code): - print("%(name)s (%(type)s) args: %(args)s" % func) - - - prints:: - - is_odd (int) args: [['int', 'x']] - dec_to_hex (int) args: [['char', 'hchar']] - """ - if ignoreExpr != ignore_expr: - ignoreExpr = ignore_expr if ignoreExpr == quoted_string() else ignoreExpr - if opener == closer: - raise ValueError("opening and closing strings cannot be the same") - if content is None: - if isinstance(opener, str_type) and isinstance(closer, str_type): - if len(opener) == 1 and len(closer) == 1: - if ignoreExpr is not None: - content = Combine( - OneOrMore( - ~ignoreExpr - + CharsNotIn( - opener + closer + ParserElement.DEFAULT_WHITE_CHARS, - exact=1, - ) - ) - ).set_parse_action(lambda t: t[0].strip()) - else: - content = empty.copy() + CharsNotIn( - opener + closer + ParserElement.DEFAULT_WHITE_CHARS - ).set_parse_action(lambda t: t[0].strip()) - else: - if ignoreExpr is not None: - content = Combine( - OneOrMore( - ~ignoreExpr - + ~Literal(opener) - + ~Literal(closer) - + CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS, exact=1) - ) - ).set_parse_action(lambda t: t[0].strip()) - else: - content = Combine( - OneOrMore( - ~Literal(opener) - + ~Literal(closer) - + CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS, exact=1) - ) - ).set_parse_action(lambda t: t[0].strip()) - else: - raise ValueError( - "opening and closing arguments must be strings if no content expression is given" - ) - ret = Forward() - if ignoreExpr is not None: - ret <<= Group( - Suppress(opener) + ZeroOrMore(ignoreExpr | ret | content) + Suppress(closer) - ) - else: - ret <<= Group(Suppress(opener) + ZeroOrMore(ret | content) + Suppress(closer)) - ret.set_name("nested %s%s expression" % (opener, closer)) - return ret - - -def _makeTags(tagStr, xml, suppress_LT=Suppress("<"), suppress_GT=Suppress(">")): - """Internal helper to construct opening and closing tag expressions, given a tag name""" - if isinstance(tagStr, str_type): - resname = tagStr - tagStr = Keyword(tagStr, caseless=not xml) - else: - resname = tagStr.name - - tagAttrName = Word(alphas, alphanums + "_-:") - if xml: - tagAttrValue = dbl_quoted_string.copy().set_parse_action(remove_quotes) - openTag = ( - suppress_LT - + tagStr("tag") - + Dict(ZeroOrMore(Group(tagAttrName + Suppress("=") + tagAttrValue))) - + Opt("/", default=[False])("empty").set_parse_action( - lambda s, l, t: t[0] == "/" - ) - + suppress_GT - ) - else: - tagAttrValue = quoted_string.copy().set_parse_action(remove_quotes) | Word( - printables, exclude_chars=">" - ) - openTag = ( - suppress_LT - + tagStr("tag") - + Dict( - ZeroOrMore( - Group( - tagAttrName.set_parse_action(lambda 
t: t[0].lower()) - + Opt(Suppress("=") + tagAttrValue) - ) - ) - ) - + Opt("/", default=[False])("empty").set_parse_action( - lambda s, l, t: t[0] == "/" - ) - + suppress_GT - ) - closeTag = Combine(Literal("", adjacent=False) - - openTag.set_name("<%s>" % resname) - # add start results name in parse action now that ungrouped names are not reported at two levels - openTag.add_parse_action( - lambda t: t.__setitem__( - "start" + "".join(resname.replace(":", " ").title().split()), t.copy() - ) - ) - closeTag = closeTag( - "end" + "".join(resname.replace(":", " ").title().split()) - ).set_name("" % resname) - openTag.tag = resname - closeTag.tag = resname - openTag.tag_body = SkipTo(closeTag()) - return openTag, closeTag - - -def make_html_tags( - tag_str: Union[str, ParserElement] -) -> Tuple[ParserElement, ParserElement]: - """Helper to construct opening and closing tag expressions for HTML, - given a tag name. Matches tags in either upper or lower case, - attributes with namespaces and with quoted or unquoted values. - - Example:: - - text = 'More info at the pyparsing wiki page' - # make_html_tags returns pyparsing expressions for the opening and - # closing tags as a 2-tuple - a, a_end = make_html_tags("A") - link_expr = a + SkipTo(a_end)("link_text") + a_end - - for link in link_expr.search_string(text): - # attributes in the tag (like "href" shown here) are - # also accessible as named results - print(link.link_text, '->', link.href) - - prints:: - - pyparsing -> https://github.com/pyparsing/pyparsing/wiki - """ - return _makeTags(tag_str, False) - - -def make_xml_tags( - tag_str: Union[str, ParserElement] -) -> Tuple[ParserElement, ParserElement]: - """Helper to construct opening and closing tag expressions for XML, - given a tag name. Matches tags only in the given upper/lower case. - - Example: similar to :class:`make_html_tags` - """ - return _makeTags(tag_str, True) - - -any_open_tag: ParserElement -any_close_tag: ParserElement -any_open_tag, any_close_tag = make_html_tags( - Word(alphas, alphanums + "_:").set_name("any tag") -) - -_htmlEntityMap = {k.rstrip(";"): v for k, v in html.entities.html5.items()} -common_html_entity = Regex("&(?P" + "|".join(_htmlEntityMap) + ");").set_name( - "common HTML entity" -) - - -def replace_html_entity(t): - """Helper parser action to replace common HTML entities with their special characters""" - return _htmlEntityMap.get(t.entity) - - -class OpAssoc(Enum): - LEFT = 1 - RIGHT = 2 - - -InfixNotationOperatorArgType = Union[ - ParserElement, str, Tuple[Union[ParserElement, str], Union[ParserElement, str]] -] -InfixNotationOperatorSpec = Union[ - Tuple[ - InfixNotationOperatorArgType, - int, - OpAssoc, - typing.Optional[ParseAction], - ], - Tuple[ - InfixNotationOperatorArgType, - int, - OpAssoc, - ], -] - - -def infix_notation( - base_expr: ParserElement, - op_list: List[InfixNotationOperatorSpec], - lpar: Union[str, ParserElement] = Suppress("("), - rpar: Union[str, ParserElement] = Suppress(")"), -) -> ParserElement: - """Helper method for constructing grammars of expressions made up of - operators working in a precedence hierarchy. Operators may be unary - or binary, left- or right-associative. Parse actions can also be - attached to operator expressions. The generated parser will also - recognize the use of parentheses to override operator precedences - (see example below). - - Note: if you define a deep operator list, you may see performance - issues when using infix_notation. 
See - :class:`ParserElement.enable_packrat` for a mechanism to potentially - improve your parser performance. - - Parameters: - - ``base_expr`` - expression representing the most basic operand to - be used in the expression - - ``op_list`` - list of tuples, one for each operator precedence level - in the expression grammar; each tuple is of the form ``(op_expr, - num_operands, right_left_assoc, (optional)parse_action)``, where: - - - ``op_expr`` is the pyparsing expression for the operator; may also - be a string, which will be converted to a Literal; if ``num_operands`` - is 3, ``op_expr`` is a tuple of two expressions, for the two - operators separating the 3 terms - - ``num_operands`` is the number of terms for this operator (must be 1, - 2, or 3) - - ``right_left_assoc`` is the indicator whether the operator is right - or left associative, using the pyparsing-defined constants - ``OpAssoc.RIGHT`` and ``OpAssoc.LEFT``. - - ``parse_action`` is the parse action to be associated with - expressions matching this operator expression (the parse action - tuple member may be omitted); if the parse action is passed - a tuple or list of functions, this is equivalent to calling - ``set_parse_action(*fn)`` - (:class:`ParserElement.set_parse_action`) - - ``lpar`` - expression for matching left-parentheses; if passed as a - str, then will be parsed as Suppress(lpar). If lpar is passed as - an expression (such as ``Literal('(')``), then it will be kept in - the parsed results, and grouped with them. (default= ``Suppress('(')``) - - ``rpar`` - expression for matching right-parentheses; if passed as a - str, then will be parsed as Suppress(rpar). If rpar is passed as - an expression (such as ``Literal(')')``), then it will be kept in - the parsed results, and grouped with them. 
(default= ``Suppress(')')``) - - Example:: - - # simple example of four-function arithmetic with ints and - # variable names - integer = pyparsing_common.signed_integer - varname = pyparsing_common.identifier - - arith_expr = infix_notation(integer | varname, - [ - ('-', 1, OpAssoc.RIGHT), - (one_of('* /'), 2, OpAssoc.LEFT), - (one_of('+ -'), 2, OpAssoc.LEFT), - ]) - - arith_expr.run_tests(''' - 5+3*6 - (5+3)*6 - -2--11 - ''', full_dump=False) - - prints:: - - 5+3*6 - [[5, '+', [3, '*', 6]]] - - (5+3)*6 - [[[5, '+', 3], '*', 6]] - - -2--11 - [[['-', 2], '-', ['-', 11]]] - """ - # captive version of FollowedBy that does not do parse actions or capture results names - class _FB(FollowedBy): - def parseImpl(self, instring, loc, doActions=True): - self.expr.try_parse(instring, loc) - return loc, [] - - _FB.__name__ = "FollowedBy>" - - ret = Forward() - if isinstance(lpar, str): - lpar = Suppress(lpar) - if isinstance(rpar, str): - rpar = Suppress(rpar) - - # if lpar and rpar are not suppressed, wrap in group - if not (isinstance(rpar, Suppress) and isinstance(rpar, Suppress)): - lastExpr = base_expr | Group(lpar + ret + rpar) - else: - lastExpr = base_expr | (lpar + ret + rpar) - - for i, operDef in enumerate(op_list): - opExpr, arity, rightLeftAssoc, pa = (operDef + (None,))[:4] - if isinstance(opExpr, str_type): - opExpr = ParserElement._literalStringClass(opExpr) - if arity == 3: - if not isinstance(opExpr, (tuple, list)) or len(opExpr) != 2: - raise ValueError( - "if numterms=3, opExpr must be a tuple or list of two expressions" - ) - opExpr1, opExpr2 = opExpr - term_name = "{}{} term".format(opExpr1, opExpr2) - else: - term_name = "{} term".format(opExpr) - - if not 1 <= arity <= 3: - raise ValueError("operator must be unary (1), binary (2), or ternary (3)") - - if rightLeftAssoc not in (OpAssoc.LEFT, OpAssoc.RIGHT): - raise ValueError("operator must indicate right or left associativity") - - thisExpr: Forward = Forward().set_name(term_name) - if rightLeftAssoc is OpAssoc.LEFT: - if arity == 1: - matchExpr = _FB(lastExpr + opExpr) + Group(lastExpr + opExpr[1, ...]) - elif arity == 2: - if opExpr is not None: - matchExpr = _FB(lastExpr + opExpr + lastExpr) + Group( - lastExpr + (opExpr + lastExpr)[1, ...] - ) - else: - matchExpr = _FB(lastExpr + lastExpr) + Group(lastExpr[2, ...]) - elif arity == 3: - matchExpr = _FB( - lastExpr + opExpr1 + lastExpr + opExpr2 + lastExpr - ) + Group(lastExpr + OneOrMore(opExpr1 + lastExpr + opExpr2 + lastExpr)) - elif rightLeftAssoc is OpAssoc.RIGHT: - if arity == 1: - # try to avoid LR with this extra test - if not isinstance(opExpr, Opt): - opExpr = Opt(opExpr) - matchExpr = _FB(opExpr.expr + thisExpr) + Group(opExpr + thisExpr) - elif arity == 2: - if opExpr is not None: - matchExpr = _FB(lastExpr + opExpr + thisExpr) + Group( - lastExpr + (opExpr + thisExpr)[1, ...] - ) - else: - matchExpr = _FB(lastExpr + thisExpr) + Group( - lastExpr + thisExpr[1, ...] 
- ) - elif arity == 3: - matchExpr = _FB( - lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr - ) + Group(lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr) - if pa: - if isinstance(pa, (tuple, list)): - matchExpr.set_parse_action(*pa) - else: - matchExpr.set_parse_action(pa) - thisExpr <<= (matchExpr | lastExpr).setName(term_name) - lastExpr = thisExpr - ret <<= lastExpr - return ret - - -def indentedBlock(blockStatementExpr, indentStack, indent=True, backup_stacks=[]): - """ - (DEPRECATED - use IndentedBlock class instead) - Helper method for defining space-delimited indentation blocks, - such as those used to define block statements in Python source code. - - Parameters: - - - ``blockStatementExpr`` - expression defining syntax of statement that - is repeated within the indented block - - ``indentStack`` - list created by caller to manage indentation stack - (multiple ``statementWithIndentedBlock`` expressions within a single - grammar should share a common ``indentStack``) - - ``indent`` - boolean indicating whether block must be indented beyond - the current level; set to ``False`` for block of left-most statements - (default= ``True``) - - A valid block must contain at least one ``blockStatement``. - - (Note that indentedBlock uses internal parse actions which make it - incompatible with packrat parsing.) - - Example:: - - data = ''' - def A(z): - A1 - B = 100 - G = A2 - A2 - A3 - B - def BB(a,b,c): - BB1 - def BBA(): - bba1 - bba2 - bba3 - C - D - def spam(x,y): - def eggs(z): - pass - ''' - - - indentStack = [1] - stmt = Forward() - - identifier = Word(alphas, alphanums) - funcDecl = ("def" + identifier + Group("(" + Opt(delimitedList(identifier)) + ")") + ":") - func_body = indentedBlock(stmt, indentStack) - funcDef = Group(funcDecl + func_body) - - rvalue = Forward() - funcCall = Group(identifier + "(" + Opt(delimitedList(rvalue)) + ")") - rvalue << (funcCall | identifier | Word(nums)) - assignment = Group(identifier + "=" + rvalue) - stmt << (funcDef | assignment | identifier) - - module_body = stmt[1, ...] 
- - parseTree = module_body.parseString(data) - parseTree.pprint() - - prints:: - - [['def', - 'A', - ['(', 'z', ')'], - ':', - [['A1'], [['B', '=', '100']], [['G', '=', 'A2']], ['A2'], ['A3']]], - 'B', - ['def', - 'BB', - ['(', 'a', 'b', 'c', ')'], - ':', - [['BB1'], [['def', 'BBA', ['(', ')'], ':', [['bba1'], ['bba2'], ['bba3']]]]]], - 'C', - 'D', - ['def', - 'spam', - ['(', 'x', 'y', ')'], - ':', - [[['def', 'eggs', ['(', 'z', ')'], ':', [['pass']]]]]]] - """ - backup_stacks.append(indentStack[:]) - - def reset_stack(): - indentStack[:] = backup_stacks[-1] - - def checkPeerIndent(s, l, t): - if l >= len(s): - return - curCol = col(l, s) - if curCol != indentStack[-1]: - if curCol > indentStack[-1]: - raise ParseException(s, l, "illegal nesting") - raise ParseException(s, l, "not a peer entry") - - def checkSubIndent(s, l, t): - curCol = col(l, s) - if curCol > indentStack[-1]: - indentStack.append(curCol) - else: - raise ParseException(s, l, "not a subentry") - - def checkUnindent(s, l, t): - if l >= len(s): - return - curCol = col(l, s) - if not (indentStack and curCol in indentStack): - raise ParseException(s, l, "not an unindent") - if curCol < indentStack[-1]: - indentStack.pop() - - NL = OneOrMore(LineEnd().set_whitespace_chars("\t ").suppress()) - INDENT = (Empty() + Empty().set_parse_action(checkSubIndent)).set_name("INDENT") - PEER = Empty().set_parse_action(checkPeerIndent).set_name("") - UNDENT = Empty().set_parse_action(checkUnindent).set_name("UNINDENT") - if indent: - smExpr = Group( - Opt(NL) - + INDENT - + OneOrMore(PEER + Group(blockStatementExpr) + Opt(NL)) - + UNDENT - ) - else: - smExpr = Group( - Opt(NL) - + OneOrMore(PEER + Group(blockStatementExpr) + Opt(NL)) - + Opt(UNDENT) - ) - - # add a parse action to remove backup_stack from list of backups - smExpr.add_parse_action( - lambda: backup_stacks.pop(-1) and None if backup_stacks else None - ) - smExpr.set_fail_action(lambda a, b, c, d: reset_stack()) - blockStatementExpr.ignore(_bslash + LineEnd()) - return smExpr.set_name("indented block") - - -# it's easy to get these comment structures wrong - they're very common, so may as well make them available -c_style_comment = Combine(Regex(r"/\*(?:[^*]|\*(?!/))*") + "*/").set_name( - "C style comment" -) -"Comment of the form ``/* ... */``" - -html_comment = Regex(r"").set_name("HTML comment") -"Comment of the form ````" - -rest_of_line = Regex(r".*").leave_whitespace().set_name("rest of line") -dbl_slash_comment = Regex(r"//(?:\\\n|[^\n])*").set_name("// comment") -"Comment of the form ``// ... (to end of line)``" - -cpp_style_comment = Combine( - Regex(r"/\*(?:[^*]|\*(?!/))*") + "*/" | dbl_slash_comment -).set_name("C++ style comment") -"Comment of either form :class:`c_style_comment` or :class:`dbl_slash_comment`" - -java_style_comment = cpp_style_comment -"Same as :class:`cpp_style_comment`" - -python_style_comment = Regex(r"#.*").set_name("Python style comment") -"Comment of the form ``# ... 
(to end of line)``" - - -# build list of built-in expressions, for future reference if a global default value -# gets updated -_builtin_exprs: List[ParserElement] = [ - v for v in vars().values() if isinstance(v, ParserElement) -] - - -# pre-PEP8 compatible names -delimitedList = delimited_list -countedArray = counted_array -matchPreviousLiteral = match_previous_literal -matchPreviousExpr = match_previous_expr -oneOf = one_of -dictOf = dict_of -originalTextFor = original_text_for -nestedExpr = nested_expr -makeHTMLTags = make_html_tags -makeXMLTags = make_xml_tags -anyOpenTag, anyCloseTag = any_open_tag, any_close_tag -commonHTMLEntity = common_html_entity -replaceHTMLEntity = replace_html_entity -opAssoc = OpAssoc -infixNotation = infix_notation -cStyleComment = c_style_comment -htmlComment = html_comment -restOfLine = rest_of_line -dblSlashComment = dbl_slash_comment -cppStyleComment = cpp_style_comment -javaStyleComment = java_style_comment -pythonStyleComment = python_style_comment diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/test.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/test.py deleted file mode 100644 index 8dde513c9534eb7119aa18f4d4f480a264b239a3..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/test.py +++ /dev/null @@ -1,251 +0,0 @@ -import os -import operator -import sys -import contextlib -import itertools -import unittest -from distutils.errors import DistutilsError, DistutilsOptionError -from distutils import log -from unittest import TestLoader - -from pkg_resources import ( - resource_listdir, - resource_exists, - normalize_path, - working_set, - evaluate_marker, - add_activation_listener, - require, -) -from .._importlib import metadata -from setuptools import Command -from setuptools.extern.more_itertools import unique_everseen -from setuptools.extern.jaraco.functools import pass_none - - -class ScanningLoader(TestLoader): - def __init__(self): - TestLoader.__init__(self) - self._visited = set() - - def loadTestsFromModule(self, module, pattern=None): - """Return a suite of all tests cases contained in the given module - - If the module is a package, load tests from all the modules in it. - If the module has an ``additional_tests`` function, call it and add - the return value to the tests. - """ - if module in self._visited: - return None - self._visited.add(module) - - tests = [] - tests.append(TestLoader.loadTestsFromModule(self, module)) - - if hasattr(module, "additional_tests"): - tests.append(module.additional_tests()) - - if hasattr(module, '__path__'): - for file in resource_listdir(module.__name__, ''): - if file.endswith('.py') and file != '__init__.py': - submodule = module.__name__ + '.' + file[:-3] - else: - if resource_exists(module.__name__, file + '/__init__.py'): - submodule = module.__name__ + '.' 
+ file - else: - continue - tests.append(self.loadTestsFromName(submodule)) - - if len(tests) != 1: - return self.suiteClass(tests) - else: - return tests[0] # don't create a nested suite for only one return - - -# adapted from jaraco.classes.properties:NonDataProperty -class NonDataProperty: - def __init__(self, fget): - self.fget = fget - - def __get__(self, obj, objtype=None): - if obj is None: - return self - return self.fget(obj) - - -class test(Command): - """Command to run unit tests after in-place build""" - - description = "run unit tests after in-place build (deprecated)" - - user_options = [ - ('test-module=', 'm', "Run 'test_suite' in specified module"), - ( - 'test-suite=', - 's', - "Run single test, case or suite (e.g. 'module.test_suite')", - ), - ('test-runner=', 'r', "Test runner to use"), - ] - - def initialize_options(self): - self.test_suite = None - self.test_module = None - self.test_loader = None - self.test_runner = None - - def finalize_options(self): - - if self.test_suite and self.test_module: - msg = "You may specify a module or a suite, but not both" - raise DistutilsOptionError(msg) - - if self.test_suite is None: - if self.test_module is None: - self.test_suite = self.distribution.test_suite - else: - self.test_suite = self.test_module + ".test_suite" - - if self.test_loader is None: - self.test_loader = getattr(self.distribution, 'test_loader', None) - if self.test_loader is None: - self.test_loader = "setuptools.command.test:ScanningLoader" - if self.test_runner is None: - self.test_runner = getattr(self.distribution, 'test_runner', None) - - @NonDataProperty - def test_args(self): - return list(self._test_args()) - - def _test_args(self): - if not self.test_suite: - yield 'discover' - if self.verbose: - yield '--verbose' - if self.test_suite: - yield self.test_suite - - def with_project_on_sys_path(self, func): - """ - Backward compatibility for project_on_sys_path context. - """ - with self.project_on_sys_path(): - func() - - @contextlib.contextmanager - def project_on_sys_path(self, include_dists=[]): - self.run_command('egg_info') - - # Build extensions in-place - self.reinitialize_command('build_ext', inplace=1) - self.run_command('build_ext') - - ei_cmd = self.get_finalized_command("egg_info") - - old_path = sys.path[:] - old_modules = sys.modules.copy() - - try: - project_path = normalize_path(ei_cmd.egg_base) - sys.path.insert(0, project_path) - working_set.__init__() - add_activation_listener(lambda dist: dist.activate()) - require('%s==%s' % (ei_cmd.egg_name, ei_cmd.egg_version)) - with self.paths_on_pythonpath([project_path]): - yield - finally: - sys.path[:] = old_path - sys.modules.clear() - sys.modules.update(old_modules) - working_set.__init__() - - @staticmethod - @contextlib.contextmanager - def paths_on_pythonpath(paths): - """ - Add the indicated paths to the head of the PYTHONPATH environment - variable so that subprocesses will also see the packages at - these paths. - - Do this in a context that restores the value on exit. 
- """ - nothing = object() - orig_pythonpath = os.environ.get('PYTHONPATH', nothing) - current_pythonpath = os.environ.get('PYTHONPATH', '') - try: - prefix = os.pathsep.join(unique_everseen(paths)) - to_join = filter(None, [prefix, current_pythonpath]) - new_path = os.pathsep.join(to_join) - if new_path: - os.environ['PYTHONPATH'] = new_path - yield - finally: - if orig_pythonpath is nothing: - os.environ.pop('PYTHONPATH', None) - else: - os.environ['PYTHONPATH'] = orig_pythonpath - - @staticmethod - def install_dists(dist): - """ - Install the requirements indicated by self.distribution and - return an iterable of the dists that were built. - """ - ir_d = dist.fetch_build_eggs(dist.install_requires) - tr_d = dist.fetch_build_eggs(dist.tests_require or []) - er_d = dist.fetch_build_eggs( - v - for k, v in dist.extras_require.items() - if k.startswith(':') and evaluate_marker(k[1:]) - ) - return itertools.chain(ir_d, tr_d, er_d) - - def run(self): - self.announce( - "WARNING: Testing via this command is deprecated and will be " - "removed in a future version. Users looking for a generic test " - "entry point independent of test runner are encouraged to use " - "tox.", - log.WARN, - ) - - installed_dists = self.install_dists(self.distribution) - - cmd = ' '.join(self._argv) - if self.dry_run: - self.announce('skipping "%s" (dry run)' % cmd) - return - - self.announce('running "%s"' % cmd) - - paths = map(operator.attrgetter('location'), installed_dists) - with self.paths_on_pythonpath(paths): - with self.project_on_sys_path(): - self.run_tests() - - def run_tests(self): - test = unittest.main( - None, - None, - self._argv, - testLoader=self._resolve_as_ep(self.test_loader), - testRunner=self._resolve_as_ep(self.test_runner), - exit=False, - ) - if not test.result.wasSuccessful(): - msg = 'Test failed: %s' % test.result - self.announce(msg, log.ERROR) - raise DistutilsError(msg) - - @property - def _argv(self): - return ['unittest'] + self.test_args - - @staticmethod - @pass_none - def _resolve_as_ep(val): - """ - Load the indicated attribute value, called, as a as if it were - specified as an entry point. 
- """ - return metadata.EntryPoint(value=val, name=None, group=None).load()() diff --git a/spaces/AtomdffAI/wechatgpt4atom/bot/chatgpt/chat_gpt_bot.py b/spaces/AtomdffAI/wechatgpt4atom/bot/chatgpt/chat_gpt_bot.py deleted file mode 100644 index 1b10e102fa516c12b2b94eff7f111f306000ef76..0000000000000000000000000000000000000000 --- a/spaces/AtomdffAI/wechatgpt4atom/bot/chatgpt/chat_gpt_bot.py +++ /dev/null @@ -1,131 +0,0 @@ -# encoding:utf-8 - -from bot.bot import Bot -from config import conf -from common.log import logger -import openai -import time - -user_session = dict() - -# OpenAI对话模型API (可用) -class ChatGPTBot(Bot): - def __init__(self): - openai.api_key = conf().get('open_ai_api_key') - openai.api_base="https://apai.zyai.online/v1" - - def reply(self, query, context=None): - # acquire reply content - if not context or not context.get('type') or context.get('type') == 'TEXT': - logger.info("[OPEN_AI] query={}".format(query)) - from_user_id = context['from_user_id'] - if query == '#清除记忆': - Session.clear_session(from_user_id) - return '记忆已清除' - - new_query = Session.build_session_query(query, from_user_id) - logger.debug("[OPEN_AI] session query={}".format(new_query)) - - # if context.get('stream'): - # # reply in stream - # return self.reply_text_stream(query, new_query, from_user_id) - - reply_content = self.reply_text(new_query, from_user_id, 0) - logger.debug("[OPEN_AI] new_query={}, user={}, reply_cont={}".format(new_query, from_user_id, reply_content)) - if reply_content: - Session.save_session(query, reply_content, from_user_id) - return reply_content - - elif context.get('type', None) == 'IMAGE_CREATE': - return self.create_img(query, 0) - - def reply_text(self, query, user_id, retry_count=0): - try: - response = openai.ChatCompletion.create( - model="gpt-3.5-turbo-16k", # 对话模型的名称 - messages=query, - temperature=0.5, # 值在[0,1]之间,越大表示回复越具有不确定性 - max_tokens=1500, # 回复最大的字符数 - top_p=1, - frequency_penalty=0.5, # [-2,2]之间,该值越大则更倾向于产生不同的内容 - presence_penalty=0.5, # [-2,2]之间,该值越大则更倾向于产生不同的内容 - ) - # res_content = response.choices[0]['text'].strip().replace('<|endoftext|>', '') - logger.info(response.choices[0]['message']['content']) - # log.info("[OPEN_AI] reply={}".format(res_content)) - return response.choices[0]['message']['content'] - except openai.error.RateLimitError as e: - # rate limit exception - logger.warn(e) - if retry_count < 1: - time.sleep(5) - logger.warn("[OPEN_AI] RateLimit exceed, 第{}次重试".format(retry_count+1)) - return self.reply_text(query, user_id, retry_count+1) - else: - return "问太快了,慢点行不行" - except Exception as e: - # unknown exception - logger.exception(e) - Session.clear_session(user_id) - return "Sorry,AI也有时候出错……请再问一次。" - - def create_img(self, query, retry_count=0): - try: - logger.info("[OPEN_AI] image_query={}".format(query)) - response = openai.Image.create( - prompt=query, #图片描述 - n=1, #每次生成图片的数量 - size="1024x1024" #图片大小,可选有 256x256, 512x512, 1024x1024 - ) - image_url = response['data'][0]['url'] - logger.info("[OPEN_AI] image_url={}".format(image_url)) - return image_url - except openai.error.RateLimitError as e: - logger.warn(e) - if retry_count < 1: - time.sleep(5) - logger.warn("[OPEN_AI] ImgCreate RateLimit exceed, 第{}次重试".format(retry_count+1)) - return self.reply_text(query, retry_count+1) - else: - return "问太快了,慢点行不行" - except Exception as e: - logger.exception(e) - return None - -class Session(object): - @staticmethod - def build_session_query(query, user_id): - ''' - build query with conversation history - e.g. 
[ - {"role": "system", "content": "You are a helpful assistant,let's think step by step in multiple different ways."}, - {"role": "user", "content": "Who won the world series in 2020?"}, - {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."}, - {"role": "user", "content": "Where was it played?"} - ] - :param query: query content - :param user_id: from user id - :return: query content with conversaction - ''' - session = user_session.get(user_id, []) - if len(session) == 0: - system_prompt = conf().get("character_desc", "") - system_item = {'role': 'system', 'content': system_prompt} - session.append(system_item) - user_session[user_id] = session - user_item = {'role': 'user', 'content': query} - session.append(user_item) - return session - - @staticmethod - def save_session(query, answer, user_id): - session = user_session.get(user_id) - if session: - # append conversation - gpt_item = {'role': 'assistant', 'content': answer} - session.append(gpt_item) - - @staticmethod - def clear_session(user_id): - user_session[user_id] = [] - diff --git a/spaces/BIOML-SVM/SVM/app.py b/spaces/BIOML-SVM/SVM/app.py deleted file mode 100644 index 93e540f27c4491421498623053d8320ebd7d1cf3..0000000000000000000000000000000000000000 --- a/spaces/BIOML-SVM/SVM/app.py +++ /dev/null @@ -1,286 +0,0 @@ -# credit: https://huggingface.co/spaces/simonduerr/3dmol.js/blob/main/app.py -import os -import sys -from urllib import request - -import esm -import gradio as gr -import progres as pg -import requests -import torch -from transformers import (AutoModel, AutoModelForMaskedLM, AutoTokenizer, - EsmModel) - -import msa -import proteinbind_new - -tokenizer_nt = AutoTokenizer.from_pretrained("InstaDeepAI/nucleotide-transformer-500m-1000g") -model_nt = AutoModelForMaskedLM.from_pretrained("InstaDeepAI/nucleotide-transformer-500m-1000g") -model_nt.eval() - -tokenizer_aa = AutoTokenizer.from_pretrained("facebook/esm2_t12_35M_UR50D") -model_aa = EsmModel.from_pretrained("facebook/esm2_t12_35M_UR50D") -model_aa.eval() - -tokenizer_se = AutoTokenizer.from_pretrained('sentence-transformers/all-mpnet-base-v2') -model_se = AutoModel.from_pretrained('sentence-transformers/all-mpnet-base-v2') -model_se.eval() - -msa_transformer, msa_transformer_alphabet = esm.pretrained.esm_msa1b_t12_100M_UR50S() -msa_transformer = msa_transformer.eval() -msa_transformer_batch_converter = msa_transformer_alphabet.get_batch_converter() - -model = proteinbind_new.create_proteinbind(True) - - -def pass_through(torch_output, key: str): - device = torch.device("cpu") - input_data = { - key: torch_output.type(torch.float32).to(device) - } - output = model(input_data) - return output[key].detach().numpy() - - -def nt_embed(sequence: str): - tokens_ids = tokenizer_nt.batch_encode_plus([sequence], return_tensors="pt")["input_ids"] - attention_mask = tokens_ids != tokenizer_nt.pad_token_id - with torch.no_grad(): - torch_outs = model_nt( - tokens_ids, # .to('cuda'), - attention_mask=attention_mask, # .to('cuda'), - output_hidden_states=True - ) - last_layer_CLS = torch_outs.hidden_states[-1].detach()[:, 0, :][0] - return pass_through(last_layer_CLS, "dna") - - -def aa_embed(sequence: str): - tokens = tokenizer_aa([sequence], return_tensors="pt") - with torch.no_grad(): - torch_outs = model_aa(**tokens) - return pass_through(torch_outs[0], "aa") - - -def se_embed(sentence: str): - encoded_input = tokenizer_se([sentence], return_tensors='pt') - with torch.no_grad(): - model_output = model_se(**encoded_input) - return 
pass_through(model_output[0], "text") - - -def msa_embed(sequences: list): - inputs = msa.greedy_select(sequences, num_seqs=128) # can change this to pass more/fewer sequences - msa_transformer_batch_labels, msa_transformer_batch_strs, msa_transformer_batch_tokens = msa_transformer_batch_converter([inputs]) - msa_transformer_batch_tokens = msa_transformer_batch_tokens.to(next(msa_transformer.parameters()).device) - - with torch.no_grad(): - temp = msa_transformer(msa_transformer_batch_tokens, repr_layers=[12])['representations'] - temp = temp[12][:, :, 0, :] - temp = torch.mean(temp, (0, 1)) - return pass_through(temp, "msa") - - -def go_embed(terms): - pass - - -def download_data_if_required(): - url_base = f"https://zenodo.org/record/{pg.zenodo_record}/files" - fps = [pg.trained_model_fp] - urls = [f"{url_base}/trained_model.pt"] - # for targetdb in pre_embedded_dbs: - # fps.append(os.path.join(database_dir, targetdb + ".pt")) - # urls.append(f"{url_base}/{targetdb}.pt") - - if not os.path.isdir(pg.trained_model_dir): - os.makedirs(pg.trained_model_dir) - # if not os.path.isdir(database_dir): - # os.makedirs(database_dir) - - printed = False - for fp, url in zip(fps, urls): - if not os.path.isfile(fp): - if not printed: - print("Downloading data as first time setup (~340 MB) to ", pg.progres_dir, - ", internet connection required, this can take a few minutes", - sep="", file=sys.stderr) - printed = True - try: - request.urlretrieve(url, fp) - d = torch.load(fp, map_location="cpu") - if fp == pg.trained_model_fp: - assert "model" in d - else: - assert "embeddings" in d - except Exception: - if os.path.isfile(fp): - os.remove(fp) - print("Failed to download from", url, "and save to", fp, file=sys.stderr) - print("Exiting", file=sys.stderr) - sys.exit(1) - - if printed: - print("Data downloaded successfully", file=sys.stderr) - - -def get_pdb(pdb_code="", filepath=""): - if pdb_code is None or pdb_code == "": - try: - with open(filepath.name) as f: - return f.read() - except AttributeError: - return None - else: - return requests.get(f"https://files.rcsb.org/view/{pdb_code}.pdb").content.decode() - - -def molecule(pdb): - - x = ( - """ - - - - - - - - -
- - - """ - ) - - return f"""""" - - -def str2coords(s): - coords = [] - for line in s.split('\n'): - if (line.startswith("ATOM ") or line.startswith("HETATM")) and line[12:16].strip() == "CA": - coords.append([float(line[30:38]), float(line[38:46]), float(line[46:54])]) - elif line.startswith("ENDMDL"): - break - return coords - - -def update_st(inp, file): - pdb = get_pdb(inp, file) - new_coords = pass_through(pg.embed_coords(str2coords(pdb)), "pdb") - return (molecule(pdb), new_coords) - - -def update_nt(inp): - return str(nt_embed(inp or '')) - - -def update_aa(inp): - return str(aa_embed(inp)) - - -def update_se(inp): - return str(se_embed(inp)) - - -def update_go(inp): - return str(go_embed(inp)) - - -def update_msa(inp): - return str(msa_embed(msa.read_msa(inp.name))) - - -demo = gr.Blocks() - -with demo: - with gr.Tabs(): - with gr.TabItem("PDB Structural Embeddings"): - with gr.Row(): - with gr.Box(): - inp = gr.Textbox( - placeholder="PDB Code or upload file below", label="Input structure" - ) - file = gr.File(file_count="single") - gr.Examples(["2CBA", "6VXX"], inp) - btn = gr.Button("View structure") - gr.Markdown("# PDB viewer using 3Dmol.js") - mol = gr.HTML() - emb = gr.Textbox(interactive=False) - btn.click(fn=update_st, inputs=[inp, file], outputs=[mol, emb]) - with gr.TabItem("Nucleotide Sequence Embeddings"): - with gr.Box(): - inp = gr.Textbox( - placeholder="ATCGCTGCCCGTAGATAATAAGAGACACTGAGGCC", label="Input Nucleotide Sequence" - ) - btn = gr.Button("View embeddings") - emb = gr.Textbox(interactive=False) - btn.click(fn=update_nt, inputs=[inp], outputs=emb) - with gr.TabItem("Amino Acid Sequence Embeddings"): - with gr.Box(): - inp = gr.Textbox( - placeholder="AAGQCYRGRCSGGLCCSKYGYCGSGPAYCG", label="Input Amino Acid Sequence" - ) - btn = gr.Button("View embeddings") - emb = gr.Textbox(interactive=False) - btn.click(fn=update_aa, inputs=[inp], outputs=emb) - with gr.TabItem("Sentence Embeddings"): - with gr.Box(): - inp = gr.Textbox( - placeholder="Your text here", label="Input Sentence" - ) - btn = gr.Button("View embeddings") - emb = gr.Textbox(interactive=False) - btn.click(fn=update_se, inputs=[inp], outputs=emb) - with gr.TabItem("MSA Embeddings"): - with gr.Box(): - inp = gr.File(file_count="single", label="Input MSA") - btn = gr.Button("View embeddings") - emb = gr.Textbox(interactive=False) - btn.click(fn=update_msa, inputs=[inp], outputs=emb) - with gr.TabItem("GO Embeddings"): - with gr.Box(): - inp = gr.Textbox( - placeholder="", label="Input GO Terms" - ) - btn = gr.Button("View embeddings") - emb = gr.Textbox(interactive=False) - btn.click(fn=update_go, inputs=[inp], outputs=emb) - - -if __name__ == "__main__": - download_data_if_required() - demo.launch() diff --git a/spaces/Benson/text-generation/Examples/Descargar Angry Birds Star Wars 2 Monedas Ilimitadas.md b/spaces/Benson/text-generation/Examples/Descargar Angry Birds Star Wars 2 Monedas Ilimitadas.md deleted file mode 100644 index a6f2f66116dfa97842c1270e02e838e7ee0cc07d..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Angry Birds Star Wars 2 Monedas Ilimitadas.md +++ /dev/null @@ -1,69 +0,0 @@ -
-<br />
-<h1>How to Download Angry Birds Star Wars 2 Unlimited Coins</h1>
-<p>Angry Birds Star Wars 2 is a popular puzzle game that combines the fun and excitement of the Angry Birds and Star Wars franchises. In this game, you can join the bird side or the pig side and use various characters and powers to defeat your enemies. You can also collect coins, the game's main currency, to unlock more characters, levels, and items.</p>
-<h2>download angry birds star wars 2 unlimited coins</h2>
-<p><strong>Download</strong>: <a href="https://bltlly.com/2v6MUp">https://bltlly.com/2v6MUp</a></p>
-<p>However, collecting coins can be slow and challenging, especially if you want to get every character and item in the game. That is why some players may want unlimited coins in Angry Birds Star Wars 2, which can give them an edge over their opponents and make the game more enjoyable. But how can you get unlimited coins in Angry Birds Star Wars 2? In this article, we will show you three different methods you can use to download Angry Birds Star Wars 2 unlimited coins.</p>
-<h2>Method 1: Use a Cheat Code</h2>
-<p>One of the easiest ways to get unlimited coins in Angry Birds Star Wars 2 is to use a cheat code. A cheat code is a secret combination of letters or numbers that you enter in the game to activate certain effects or features. For example, there is a cheat code that can give you unlimited coins in Angry Birds Star Wars 2. Here is how to use it:</p>
-<ol>
-<li>Open Angry Birds Star Wars 2 on your device.</li>
-<li>Go to the settings menu and tap "Enter Code".</li>
-<li>Type "ABSWII" (without quotes) and tap "OK".</li>
-<li>You should see a message that says "Cheat activated".</li>
-<li>Return to the game and enjoy your unlimited coins.</li>
-</ol>
-<h2>Method 2: Use a Mod APK</h2>
-<p>Another way to get unlimited coins in Angry Birds Star Wars 2 is to use a mod APK. A mod APK is a modified version of the original game file that someone has altered to include extra features or functions. For example, there is a mod APK that can give you unlimited coins in Angry Birds Star Wars 2. Here is how to use it:</p>
-<ol>
-<li>Download the mod APK file from a reliable source. You can search online for "angry birds star wars 2 mod apk unlimited coins" or use this link.</li>
-<li>Before installing the mod APK, make sure you have enabled "Unknown Sources" in your device settings. This lets you install apps from sources other than the Google Play Store.</li>
-<li>Uninstall the original Angry Birds Star Wars 2 game from your device.</li>
-<li>Install the mod APK file by tapping it and following the instructions.</li>
-<li>Open Angry Birds Star Wars 2 on your device and enjoy your unlimited coins.</li>
-</ol>
-<p>The pros of using a mod APK are that it is effective, permanent, and customizable. You can get unlimited coins and other features the original game does not have. The downsides are that it is risky, illegal, and incompatible. You may download a virus or malware that can damage your device or steal your data. You may also violate the game's terms of service and be banned or sued. In addition, you may not be able to update the game or play online with players who have the original version.</p>
-<h2>Method 3: Use a Hack Tool</h2>
-<p>A third way to get unlimited coins in Angry Birds Star Wars 2 is to use a hack tool. A hack tool is a piece of software or a website that can generate coins or other resources for you in the game. For example, there is a hack tool that can give you unlimited coins in Angry Birds Star Wars 2. Here is how to use it:</p>
-<ol>
-<li>Go to the hack tool's website. You can also scan the QR code below to access it.</li>
-<li>Select your device type (Android or iOS) and your region.</li>
-<li>Enter the amount of coins you want. You can choose between 10,000 and 999,999 coins.</li>
-<li>Click "Generate" and wait a few seconds.</li>
-<li>Verify that you are not a robot by completing a short survey or offer.</li>
-<li>Check your game account and enjoy your unlimited coins.</li>
-</ol>
-<p><img alt="QR code for hack tool"></p>
-<p>The pros of using a hack tool are that it is convenient, fast, and free. You do not need to download anything or root or jailbreak your device. You can also get as many coins as you want within minutes. The downsides are that it is unreliable, unsafe, and unethical. You may not get the coins you requested, or you may only get them temporarily. You may also expose your personal information or your device to hackers or scammers. In addition, using a hack tool can ruin the balance and fairness of the game.</p>
-<h2>Conclusion</h2>
-<p>In conclusion, there are three different methods you can use to download Angry Birds Star Wars 2 unlimited coins: using a cheat code, using a mod APK, or using a hack tool. Each method has its own pros and cons, so weigh them carefully before deciding which one to use. Here are some tips and warnings for using unlimited coins in Angry Birds Star Wars 2:</p>
-<ul>
-<li>Use unlimited coins at your own risk and discretion. We do not endorse or recommend any of these methods, and we are not responsible for any consequences that may arise from their use.</li>
-<li>Be careful about the sources you download from or access. Make sure they are reliable and safe, and scan them for viruses or malware before using them.</li>
-<li>Back up your game data before using any of these methods. You may lose your progress or corrupt the game file if something goes wrong.</li>
-<li>Respect the game developers and their work. They put a lot of effort and creativity into making Angry Birds Star Wars 2, and they deserve to be supported and appreciated.</li>
-</ul>
-<p>If you want to download Angry Birds Star Wars 2 and enjoy the game without cheats or hacks, you can do so by clicking this link. May the force be with you!</p>
-<h2>Frequently Asked Questions</h2>
-<p><strong>Q: Is Angry Birds Star Wars 2 free to play?</strong></p>
-<p>A: Yes, Angry Birds Star Wars 2 is free to download and play on Android and iOS devices. However, there are some in-app purchases you can make to enhance your gaming experience.</p>
-<p><strong>Q: How many characters are there in Angry Birds Star Wars 2?</strong></p>
-<p>A: There are more than 30 playable characters in Angry Birds Star Wars 2, including birds and pigs from the Star Wars universe. You can unlock them by collecting coins, completing levels, or scanning telepods (physical toys that interact with the game).</p>
-<p><strong>Q: What are telepods in Angry Birds Star Wars 2?</strong></p>
-<p>A: Telepods are special toys that you can buy separately from the game. They are based on the Angry Birds Star Wars 2 characters and come with a base that has a QR code. You can scan the QR code with your device's camera to unlock the character in the game. You can also place the toy on your device's screen to swap the in-game character for the toy's character.</p>
-<p><strong>Q: How can I play Angry Birds Star Wars 2 online with other players?</strong></p>
-<p>A: Angry Birds Star Wars 2 has a multiplayer mode called Arena, where you can compete with players from around the world. You can access Arena by tapping the trophy icon in the main menu. You can choose to join the Bird Side or the Pig Side and then play against other players in a series of matches. You can earn coins and rewards by winning matches and climbing the leaderboards.</p>
-<p><strong>Q: How can I contact Angry Birds Star Wars 2 customer support?</strong></p>
-<p>A: You can contact customer support from inside the game:</p>
-<ol>
-<li>Go to the settings menu and tap "Help".</li>
-<li>Tap "Contact Us".</li>
-<li>Fill out the form with your name, email, subject, and message.</li>
-<li>Tap "Send".</li>
-<li>You should receive a reply within 24 hours.</li>
-</ol>
-<p>64aa2da5cf</p>
-</body>
-</html>
\ No newline at end of file diff --git a/spaces/Binguii/Venus_Proxy/README.md b/spaces/Binguii/Venus_Proxy/README.md deleted file mode 100644 index 2de36d5a0641e8eaf74878341e702baa17147017..0000000000000000000000000000000000000000 --- a/spaces/Binguii/Venus_Proxy/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Venus Proxy -emoji: 👀 -colorFrom: purple -colorTo: gray -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/CALM/Dashboard/streamlit_observable/frontend/src/index.tsx b/spaces/CALM/Dashboard/streamlit_observable/frontend/src/index.tsx deleted file mode 100644 index 82c39327ba06bc5955873f9fc901dd19c6611d9d..0000000000000000000000000000000000000000 --- a/spaces/CALM/Dashboard/streamlit_observable/frontend/src/index.tsx +++ /dev/null @@ -1,10 +0,0 @@ -import React from "react" -import ReactDOM from "react-dom" -import Observable from "./Observable" - -ReactDOM.render( - - - , - document.getElementById("root") -) diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/roi_heads/__init__.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/roi_heads/__init__.py deleted file mode 100644 index a49099aa5cfa58b55c66fe8fa85092eb26d15535..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/roi_heads/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -from .box_head import ROI_BOX_HEAD_REGISTRY, build_box_head -from .keypoint_head import ROI_KEYPOINT_HEAD_REGISTRY, build_keypoint_head, BaseKeypointRCNNHead -from .mask_head import ROI_MASK_HEAD_REGISTRY, build_mask_head, BaseMaskRCNNHead -from .roi_heads import ( - ROI_HEADS_REGISTRY, - ROIHeads, - Res5ROIHeads, - StandardROIHeads, - build_roi_heads, - select_foreground_proposals, -) -from .rotated_fast_rcnn import RROIHeads -from .fast_rcnn import FastRCNNOutputLayers - -from . import cascade_rcnn # isort:skip diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TridentNet/tridentnet/trident_backbone.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TridentNet/tridentnet/trident_backbone.py deleted file mode 100644 index 232dfaf1ca01c0395c0ceea544bfbdee0d45ce1a..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TridentNet/tridentnet/trident_backbone.py +++ /dev/null @@ -1,223 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import fvcore.nn.weight_init as weight_init -import torch -import torch.nn.functional as F - -from detectron2.layers import Conv2d, FrozenBatchNorm2d, get_norm -from detectron2.modeling import BACKBONE_REGISTRY, ResNet, ResNetBlockBase, make_stage -from detectron2.modeling.backbone.resnet import BasicStem, BottleneckBlock, DeformBottleneckBlock - -from .trident_conv import TridentConv - -__all__ = ["TridentBottleneckBlock", "make_trident_stage", "build_trident_resnet_backbone"] - - -class TridentBottleneckBlock(ResNetBlockBase): - def __init__( - self, - in_channels, - out_channels, - *, - bottleneck_channels, - stride=1, - num_groups=1, - norm="BN", - stride_in_1x1=False, - num_branch=3, - dilations=(1, 2, 3), - concat_output=False, - test_branch_idx=-1, - ): - """ - Args: - num_branch (int): the number of branches in TridentNet. 
- dilations (tuple): the dilations of multiple branches in TridentNet. - concat_output (bool): if concatenate outputs of multiple branches in TridentNet. - Use 'True' for the last trident block. - """ - super().__init__(in_channels, out_channels, stride) - - assert num_branch == len(dilations) - - self.num_branch = num_branch - self.concat_output = concat_output - self.test_branch_idx = test_branch_idx - - if in_channels != out_channels: - self.shortcut = Conv2d( - in_channels, - out_channels, - kernel_size=1, - stride=stride, - bias=False, - norm=get_norm(norm, out_channels), - ) - else: - self.shortcut = None - - stride_1x1, stride_3x3 = (stride, 1) if stride_in_1x1 else (1, stride) - - self.conv1 = Conv2d( - in_channels, - bottleneck_channels, - kernel_size=1, - stride=stride_1x1, - bias=False, - norm=get_norm(norm, bottleneck_channels), - ) - - self.conv2 = TridentConv( - bottleneck_channels, - bottleneck_channels, - kernel_size=3, - stride=stride_3x3, - paddings=dilations, - bias=False, - groups=num_groups, - dilations=dilations, - num_branch=num_branch, - test_branch_idx=test_branch_idx, - norm=get_norm(norm, bottleneck_channels), - ) - - self.conv3 = Conv2d( - bottleneck_channels, - out_channels, - kernel_size=1, - bias=False, - norm=get_norm(norm, out_channels), - ) - - for layer in [self.conv1, self.conv2, self.conv3, self.shortcut]: - if layer is not None: # shortcut can be None - weight_init.c2_msra_fill(layer) - - def forward(self, x): - num_branch = self.num_branch if self.training or self.test_branch_idx == -1 else 1 - if not isinstance(x, list): - x = [x] * num_branch - out = [self.conv1(b) for b in x] - out = [F.relu_(b) for b in out] - - out = self.conv2(out) - out = [F.relu_(b) for b in out] - - out = [self.conv3(b) for b in out] - - if self.shortcut is not None: - shortcut = [self.shortcut(b) for b in x] - else: - shortcut = x - - out = [out_b + shortcut_b for out_b, shortcut_b in zip(out, shortcut)] - out = [F.relu_(b) for b in out] - if self.concat_output: - out = torch.cat(out) - return out - - -def make_trident_stage(block_class, num_blocks, first_stride, **kwargs): - """ - Create a resnet stage by creating many blocks for TridentNet. - """ - blocks = [] - for i in range(num_blocks - 1): - blocks.append(block_class(stride=first_stride if i == 0 else 1, **kwargs)) - kwargs["in_channels"] = kwargs["out_channels"] - blocks.append(block_class(stride=1, concat_output=True, **kwargs)) - return blocks - - -@BACKBONE_REGISTRY.register() -def build_trident_resnet_backbone(cfg, input_shape): - """ - Create a ResNet instance from config for TridentNet. - - Returns: - ResNet: a :class:`ResNet` instance. - """ - # need registration of new blocks/stems? 
- norm = cfg.MODEL.RESNETS.NORM - stem = BasicStem( - in_channels=input_shape.channels, - out_channels=cfg.MODEL.RESNETS.STEM_OUT_CHANNELS, - norm=norm, - ) - freeze_at = cfg.MODEL.BACKBONE.FREEZE_AT - - if freeze_at >= 1: - for p in stem.parameters(): - p.requires_grad = False - stem = FrozenBatchNorm2d.convert_frozen_batchnorm(stem) - - # fmt: off - out_features = cfg.MODEL.RESNETS.OUT_FEATURES - depth = cfg.MODEL.RESNETS.DEPTH - num_groups = cfg.MODEL.RESNETS.NUM_GROUPS - width_per_group = cfg.MODEL.RESNETS.WIDTH_PER_GROUP - bottleneck_channels = num_groups * width_per_group - in_channels = cfg.MODEL.RESNETS.STEM_OUT_CHANNELS - out_channels = cfg.MODEL.RESNETS.RES2_OUT_CHANNELS - stride_in_1x1 = cfg.MODEL.RESNETS.STRIDE_IN_1X1 - res5_dilation = cfg.MODEL.RESNETS.RES5_DILATION - deform_on_per_stage = cfg.MODEL.RESNETS.DEFORM_ON_PER_STAGE - deform_modulated = cfg.MODEL.RESNETS.DEFORM_MODULATED - deform_num_groups = cfg.MODEL.RESNETS.DEFORM_NUM_GROUPS - num_branch = cfg.MODEL.TRIDENT.NUM_BRANCH - branch_dilations = cfg.MODEL.TRIDENT.BRANCH_DILATIONS - trident_stage = cfg.MODEL.TRIDENT.TRIDENT_STAGE - test_branch_idx = cfg.MODEL.TRIDENT.TEST_BRANCH_IDX - # fmt: on - assert res5_dilation in {1, 2}, "res5_dilation cannot be {}.".format(res5_dilation) - - num_blocks_per_stage = {50: [3, 4, 6, 3], 101: [3, 4, 23, 3], 152: [3, 8, 36, 3]}[depth] - - stages = [] - - res_stage_idx = {"res2": 2, "res3": 3, "res4": 4, "res5": 5} - out_stage_idx = [res_stage_idx[f] for f in out_features] - trident_stage_idx = res_stage_idx[trident_stage] - max_stage_idx = max(out_stage_idx) - for idx, stage_idx in enumerate(range(2, max_stage_idx + 1)): - dilation = res5_dilation if stage_idx == 5 else 1 - first_stride = 1 if idx == 0 or (stage_idx == 5 and dilation == 2) else 2 - stage_kargs = { - "num_blocks": num_blocks_per_stage[idx], - "first_stride": first_stride, - "in_channels": in_channels, - "bottleneck_channels": bottleneck_channels, - "out_channels": out_channels, - "num_groups": num_groups, - "norm": norm, - "stride_in_1x1": stride_in_1x1, - "dilation": dilation, - } - if stage_idx == trident_stage_idx: - assert not deform_on_per_stage[ - idx - ], "Not support deformable conv in Trident blocks yet." 
- stage_kargs["block_class"] = TridentBottleneckBlock - stage_kargs["num_branch"] = num_branch - stage_kargs["dilations"] = branch_dilations - stage_kargs["test_branch_idx"] = test_branch_idx - stage_kargs.pop("dilation") - elif deform_on_per_stage[idx]: - stage_kargs["block_class"] = DeformBottleneckBlock - stage_kargs["deform_modulated"] = deform_modulated - stage_kargs["deform_num_groups"] = deform_num_groups - else: - stage_kargs["block_class"] = BottleneckBlock - blocks = ( - make_trident_stage(**stage_kargs) - if stage_idx == trident_stage_idx - else make_stage(**stage_kargs) - ) - in_channels = out_channels - out_channels *= 2 - bottleneck_channels *= 2 - - if freeze_at >= stage_idx: - for block in blocks: - block.freeze() - stages.append(blocks) - return ResNet(stem, stages, out_features=out_features) diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_opaque_types.py b/spaces/CVPR/LIVE/pybind11/tests/test_opaque_types.py deleted file mode 100644 index 3f2392775d83a833457d95520648ee7e1f2aa6d5..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/tests/test_opaque_types.py +++ /dev/null @@ -1,47 +0,0 @@ -# -*- coding: utf-8 -*- -import pytest -from pybind11_tests import opaque_types as m -from pybind11_tests import ConstructorStats, UserType - - -def test_string_list(): - lst = m.StringList() - lst.push_back("Element 1") - lst.push_back("Element 2") - assert m.print_opaque_list(lst) == "Opaque list: [Element 1, Element 2]" - assert lst.back() == "Element 2" - - for i, k in enumerate(lst, start=1): - assert k == "Element {}".format(i) - lst.pop_back() - assert m.print_opaque_list(lst) == "Opaque list: [Element 1]" - - cvp = m.ClassWithSTLVecProperty() - assert m.print_opaque_list(cvp.stringList) == "Opaque list: []" - - cvp.stringList = lst - cvp.stringList.push_back("Element 3") - assert m.print_opaque_list(cvp.stringList) == "Opaque list: [Element 1, Element 3]" - - -def test_pointers(msg): - living_before = ConstructorStats.get(UserType).alive() - assert m.get_void_ptr_value(m.return_void_ptr()) == 0x1234 - assert m.get_void_ptr_value(UserType()) # Should also work for other C++ types - assert ConstructorStats.get(UserType).alive() == living_before - - with pytest.raises(TypeError) as excinfo: - m.get_void_ptr_value([1, 2, 3]) # This should not work - assert msg(excinfo.value) == """ - get_void_ptr_value(): incompatible function arguments. The following argument types are supported: - 1. 
(arg0: capsule) -> int - - Invoked with: [1, 2, 3] - """ # noqa: E501 line too long - - assert m.return_null_str() is None - assert m.get_null_str_value(m.return_null_str()) is not None - - ptr = m.return_unique_ptr() - assert "StringList" in repr(ptr) - assert m.print_opaque_list(ptr) == "Opaque list: [some value]" diff --git a/spaces/CVPR/LIVE/pybind11/tools/pybind11Common.cmake b/spaces/CVPR/LIVE/pybind11/tools/pybind11Common.cmake deleted file mode 100644 index 8f7f57b5171e12b55a7752d19d7cabdaf9085961..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/tools/pybind11Common.cmake +++ /dev/null @@ -1,296 +0,0 @@ -#[======================================================[.rst - -Adds the following targets:: - - pybind11::pybind11 - link to headers and pybind11 - pybind11::module - Adds module links - pybind11::embed - Adds embed links - pybind11::lto - Link time optimizations (manual selection) - pybind11::thin_lto - Link time optimizations (manual selection) - pybind11::python_link_helper - Adds link to Python libraries - pybind11::python2_no_register - Avoid warning/error with Python 2 + C++14/7 - pybind11::windows_extras - MSVC bigobj and mp for building multithreaded - -Adds the following functions:: - - pybind11_strip(target) - strip target after building on linux/macOS - - -#]======================================================] - -# CMake 3.10 has an include_guard command, but we can't use that yet -if(TARGET pybind11::lto) - return() -endif() - -# If we are in subdirectory mode, all IMPORTED targets must be GLOBAL. If we -# are in CONFIG mode, they should be "normal" targets instead. -# In CMake 3.11+ you can promote a target to global after you create it, -# which might be simpler than this check. -get_property( - is_config - TARGET pybind11::headers - PROPERTY IMPORTED) -if(NOT is_config) - set(optional_global GLOBAL) -endif() - -# --------------------- Shared targets ---------------------------- - -# Build an interface library target: -add_library(pybind11::pybind11 IMPORTED INTERFACE ${optional_global}) -set_property( - TARGET pybind11::pybind11 - APPEND - PROPERTY INTERFACE_LINK_LIBRARIES pybind11::headers) - -# Build a module target: -add_library(pybind11::module IMPORTED INTERFACE ${optional_global}) -set_property( - TARGET pybind11::module - APPEND - PROPERTY INTERFACE_LINK_LIBRARIES pybind11::pybind11) - -# Build an embed library target: -add_library(pybind11::embed IMPORTED INTERFACE ${optional_global}) -set_property( - TARGET pybind11::embed - APPEND - PROPERTY INTERFACE_LINK_LIBRARIES pybind11::pybind11) - -# ----------------------- no register ---------------------- - -# Workaround for Python 2.7 and C++17 (C++14 as a warning) incompatibility -# This adds the flags -Wno-register and -Wno-deprecated-register if the compiler -# is Clang 3.9+ or AppleClang and the compile language is CXX, or /wd5033 for MSVC (all languages, -# since MSVC didn't recognize COMPILE_LANGUAGE until CMake 3.11+). 
- -add_library(pybind11::python2_no_register INTERFACE IMPORTED ${optional_global}) -set(clang_4plus - "$,$,3.9>>>") -set(no_register "$>") - -if(MSVC AND CMAKE_VERSION VERSION_LESS 3.11) - set(cxx_no_register "${no_register}") -else() - set(cxx_no_register "$,${no_register}>") -endif() - -set(msvc "$") - -set_property( - TARGET pybind11::python2_no_register - PROPERTY INTERFACE_COMPILE_OPTIONS - "$<${cxx_no_register}:-Wno-register;-Wno-deprecated-register>" "$<${msvc}:/wd5033>") - -# --------------------------- link helper --------------------------- - -add_library(pybind11::python_link_helper IMPORTED INTERFACE ${optional_global}) - -if(CMAKE_VERSION VERSION_LESS 3.13) - # In CMake 3.11+, you can set INTERFACE properties via the normal methods, and - # this would be simpler. - set_property( - TARGET pybind11::python_link_helper - APPEND - PROPERTY INTERFACE_LINK_LIBRARIES "$<$:-undefined dynamic_lookup>") -else() - # link_options was added in 3.13+ - # This is safer, because you are ensured the deduplication pass in CMake will not consider - # these separate and remove one but not the other. - set_property( - TARGET pybind11::python_link_helper - APPEND - PROPERTY INTERFACE_LINK_OPTIONS "$<$:LINKER:-undefined,dynamic_lookup>") -endif() - -# ------------------------ Windows extras ------------------------- - -add_library(pybind11::windows_extras IMPORTED INTERFACE ${optional_global}) - -if(MSVC) - # /MP enables multithreaded builds (relevant when there are many files), /bigobj is - # needed for bigger binding projects due to the limit to 64k addressable sections - set_property( - TARGET pybind11::windows_extras - APPEND - PROPERTY INTERFACE_COMPILE_OPTIONS /bigobj) - - if(CMAKE_VERSION VERSION_LESS 3.11) - set_property( - TARGET pybind11::windows_extras - APPEND - PROPERTY INTERFACE_COMPILE_OPTIONS $<$>:/MP>) - else() - # Only set these options for C++ files. This is important so that, for - # instance, projects that include other types of source files like CUDA - # .cu files don't get these options propagated to nvcc since that would - # cause the build to fail. 
- set_property( - TARGET pybind11::windows_extras - APPEND - PROPERTY INTERFACE_COMPILE_OPTIONS $<$>:$<$:/MP>>) - endif() -endif() - -# ----------------------- Legacy option -------------------------- - -# Warn or error if old variable name used -if(PYBIND11_CPP_STANDARD) - string(REGEX MATCH [[..$]] VAL "${PYBIND11_CPP_STANDARD}") - if(CMAKE_CXX_STANDARD) - if(NOT CMAKE_CXX_STANDARD STREQUAL VAL) - message(WARNING "CMAKE_CXX_STANDARD=${CMAKE_CXX_STANDARD} does not match " - "PYBIND11_CPP_STANDARD=${PYBIND11_CPP_STANDARD}, " - "please remove PYBIND11_CPP_STANDARD from your cache") - endif() - else() - set(supported_standards 11 14 17 20) - if("${VAL}" IN_LIST supported_standards) - message(WARNING "USE -DCMAKE_CXX_STANDARD=${VAL} instead of PYBIND11_CPP_STANDARD") - set(CMAKE_CXX_STANDARD - ${VAL} - CACHE STRING "From PYBIND11_CPP_STANDARD") - else() - message(FATAL_ERROR "PYBIND11_CPP_STANDARD should be replaced with CMAKE_CXX_STANDARD " - "(last two chars: ${VAL} not understood as a valid CXX std)") - endif() - endif() -endif() - -# --------------------- Python specifics ------------------------- - -# Check to see which Python mode we are in, new, old, or no python -if(PYBIND11_NOPYTHON) - set(_pybind11_nopython ON) -elseif( - PYBIND11_FINDPYTHON - OR Python_FOUND - OR Python2_FOUND - OR Python3_FOUND) - # New mode - include("${CMAKE_CURRENT_LIST_DIR}/pybind11NewTools.cmake") - -else() - - # Classic mode - include("${CMAKE_CURRENT_LIST_DIR}/pybind11Tools.cmake") - -endif() - -# --------------------- LTO ------------------------------- - -include(CheckCXXCompilerFlag) - -# Checks whether the given CXX/linker flags can compile and link a cxx file. -# cxxflags and linkerflags are lists of flags to use. The result variable is a -# unique variable name for each set of flags: the compilation result will be -# cached base on the result variable. If the flags work, sets them in -# cxxflags_out/linkerflags_out internal cache variables (in addition to -# ${result}). 
-function(_pybind11_return_if_cxx_and_linker_flags_work result cxxflags linkerflags cxxflags_out - linkerflags_out) - set(CMAKE_REQUIRED_LIBRARIES ${linkerflags}) - check_cxx_compiler_flag("${cxxflags}" ${result}) - if(${result}) - set(${cxxflags_out} - "${cxxflags}" - PARENT_SCOPE) - set(${linkerflags_out} - "${linkerflags}" - PARENT_SCOPE) - endif() -endfunction() - -function(_pybind11_generate_lto target prefer_thin_lto) - if(CMAKE_CXX_COMPILER_ID MATCHES "GNU|Clang") - set(cxx_append "") - set(linker_append "") - if(CMAKE_CXX_COMPILER_ID MATCHES "Clang" AND NOT APPLE) - # Clang Gold plugin does not support -Os; append -O3 to MinSizeRel builds to override it - set(linker_append ";$<$:-O3>") - elseif(CMAKE_CXX_COMPILER_ID MATCHES "GNU") - set(cxx_append ";-fno-fat-lto-objects") - endif() - - if(CMAKE_CXX_COMPILER_ID MATCHES "Clang" AND prefer_thin_lto) - _pybind11_return_if_cxx_and_linker_flags_work( - HAS_FLTO_THIN "-flto=thin${cxx_append}" "-flto=thin${linker_append}" - PYBIND11_LTO_CXX_FLAGS PYBIND11_LTO_LINKER_FLAGS) - endif() - - if(NOT HAS_FLTO_THIN) - _pybind11_return_if_cxx_and_linker_flags_work( - HAS_FLTO "-flto${cxx_append}" "-flto${linker_append}" PYBIND11_LTO_CXX_FLAGS - PYBIND11_LTO_LINKER_FLAGS) - endif() - elseif(CMAKE_CXX_COMPILER_ID MATCHES "Intel") - # Intel equivalent to LTO is called IPO - _pybind11_return_if_cxx_and_linker_flags_work(HAS_INTEL_IPO "-ipo" "-ipo" - PYBIND11_LTO_CXX_FLAGS PYBIND11_LTO_LINKER_FLAGS) - elseif(MSVC) - # cmake only interprets libraries as linker flags when they start with a - (otherwise it - # converts /LTCG to \LTCG as if it was a Windows path). Luckily MSVC supports passing flags - # with - instead of /, even if it is a bit non-standard: - _pybind11_return_if_cxx_and_linker_flags_work(HAS_MSVC_GL_LTCG "/GL" "-LTCG" - PYBIND11_LTO_CXX_FLAGS PYBIND11_LTO_LINKER_FLAGS) - endif() - - # Enable LTO flags if found, except for Debug builds - if(PYBIND11_LTO_CXX_FLAGS) - set(not_debug "$>") - set(cxx_lang "$") - if(MSVC AND CMAKE_VERSION VERSION_LESS 3.11) - set(genex "${not_debug}") - else() - set(genex "$") - endif() - set_property( - TARGET ${target} - APPEND - PROPERTY INTERFACE_COMPILE_OPTIONS "$<${genex}:${PYBIND11_LTO_CXX_FLAGS}>") - if(CMAKE_PROJECT_NAME STREQUAL "pybind11") - message(STATUS "${target} enabled") - endif() - else() - if(CMAKE_PROJECT_NAME STREQUAL "pybind11") - message(STATUS "${target} disabled (not supported by the compiler and/or linker)") - endif() - endif() - - if(PYBIND11_LTO_LINKER_FLAGS) - if(CMAKE_VERSION VERSION_LESS 3.11) - set_property( - TARGET ${target} - APPEND - PROPERTY INTERFACE_LINK_LIBRARIES "$<${not_debug}:${PYBIND11_LTO_LINKER_FLAGS}>") - else() - set_property( - TARGET ${target} - APPEND - PROPERTY INTERFACE_LINK_OPTIONS "$<${not_debug}:${PYBIND11_LTO_LINKER_FLAGS}>") - endif() - endif() -endfunction() - -add_library(pybind11::lto IMPORTED INTERFACE ${optional_global}) -_pybind11_generate_lto(pybind11::lto FALSE) - -add_library(pybind11::thin_lto IMPORTED INTERFACE ${optional_global}) -_pybind11_generate_lto(pybind11::thin_lto TRUE) - -# ---------------------- pybind11_strip ----------------------------- - -function(pybind11_strip target_name) - # Strip unnecessary sections of the binary on Linux/Mac OS - if(CMAKE_STRIP) - if(APPLE) - set(x_opt -x) - endif() - - add_custom_command( - TARGET ${target_name} - POST_BUILD - COMMAND ${CMAKE_STRIP} ${x_opt} $) - endif() -endfunction() diff --git a/spaces/CVPR/LIVE/scene.h b/spaces/CVPR/LIVE/scene.h deleted file mode 100644 index 
e2f452dd33f139df89805967b416e21b5ffe109f..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/scene.h +++ /dev/null @@ -1,120 +0,0 @@ -#pragma once - -#include "diffvg.h" -#include "aabb.h" -#include - -struct Shape; -struct ShapeGroup; -struct Filter; -struct DFilter; - -struct BVHNode { - int child0, child1; // child1 is negative if it is a leaf - AABB box; - float max_radius; -}; - -struct Scene { - Scene(int canvas_width, - int canvas_height, - const std::vector &shape_list, - const std::vector &shape_group_list, - const Filter &filter, - bool use_gpu, - int gpu_index); - - ~Scene(); - - int canvas_width; - int canvas_height; - - uint8_t *buffer; - - Shape *shapes; - Shape *d_shapes; - ShapeGroup *shape_groups; - ShapeGroup *d_shape_groups; - Filter *filter; - DFilter *d_filter; - // For accelerating intersection - AABB *shapes_bbox; - BVHNode **path_bvhs; // Only for Path - BVHNode **shape_groups_bvh_nodes; // One BVH for each shape group - BVHNode *bvh_nodes; - - int num_shapes; - int num_shape_groups; - // shape_groups reuse shape, so the total number of shapes - // doesn't equal to num_shapes - int num_total_shapes; - bool use_gpu; - int gpu_index; - - // For edge sampling - float *shapes_length; - float *sample_shapes_cdf; - float *sample_shapes_pmf; - int *sample_shape_id; - int *sample_group_id; - float **path_length_cdf; - float **path_length_pmf; - int **path_point_id_map; - - ShapeGroup get_d_shape_group(int group_id) const; - Shape get_d_shape(int shape_id) const; - float get_d_filter_radius() const; -}; - -struct SceneData { - int canvas_width; - int canvas_height; - Shape *shapes; - Shape *d_shapes; - ShapeGroup *shape_groups; - ShapeGroup *d_shape_groups; - Filter *filter; - DFilter *d_filter; - AABB *shapes_bbox; - BVHNode **path_bvhs; // Only for Path - BVHNode **shape_groups_bvh_nodes; - BVHNode *bvh_nodes; - int num_shapes; - int num_shape_groups; - int num_total_shapes; - // For edge sampling - float *shapes_length; - float *sample_shapes_cdf; - float *sample_shapes_pmf; - int *sample_shape_id; - int *sample_group_id; - float **path_length_cdf; - float **path_length_pmf; - int **path_point_id_map; -}; - -inline SceneData get_scene_data(const Scene &scene) { - return SceneData{scene.canvas_width, - scene.canvas_height, - scene.shapes, - scene.d_shapes, - scene.shape_groups, - scene.d_shape_groups, - scene.filter, - scene.d_filter, - scene.shapes_bbox, - scene.path_bvhs, - scene.shape_groups_bvh_nodes, - scene.bvh_nodes, - scene.num_shapes, - scene.num_shape_groups, - scene.num_total_shapes, - scene.shapes_length, - scene.sample_shapes_cdf, - scene.sample_shapes_pmf, - scene.sample_shape_id, - scene.sample_group_id, - scene.path_length_cdf, - scene.path_length_pmf, - scene.path_point_id_map}; -} diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/config/simple_defines.h b/spaces/CVPR/LIVE/thrust/thrust/detail/config/simple_defines.h deleted file mode 100644 index e3ea2eb64e766ea2147ecf1de308a454a739d88e..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/config/simple_defines.h +++ /dev/null @@ -1,30 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file simple_defines.h - * \brief Primitive macros without dependencies. - */ - -#pragma once - -#define THRUST_UNKNOWN 0 -#define THRUST_FALSE 0 -#define THRUST_TRUE 1 - -#define THRUST_UNUSED_VAR(expr) do { (void)(expr); } while (0) - -#define THRUST_PREVENT_MACRO_SUBSTITUTION - diff --git a/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/distance_from_result.h b/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/distance_from_result.h deleted file mode 100644 index 2b7e0d60e5d31816ce9695444129ff8b5eed52d7..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/distance_from_result.h +++ /dev/null @@ -1,42 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include -#include - -namespace thrust -{ - -namespace detail -{ - -// since both arguments are known to be specializations of iterator_facade, -// it's legal to access IteratorFacade2::difference_type -template - struct distance_from_result - : eval_if< - is_convertible::value, - identity_, - identity_ - > -{}; - -} // end detail - -} // end thrust - diff --git a/spaces/CVPR/WALT/mmdet/datasets/cityscapes.py b/spaces/CVPR/WALT/mmdet/datasets/cityscapes.py deleted file mode 100644 index 71eead87e7f4e511c0cb59e69c3a599832ada0e4..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/datasets/cityscapes.py +++ /dev/null @@ -1,334 +0,0 @@ -# Modified from https://github.com/facebookresearch/detectron2/blob/master/detectron2/data/datasets/cityscapes.py # noqa -# and https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/evaluation/evalInstanceLevelSemanticLabeling.py # noqa - -import glob -import os -import os.path as osp -import tempfile -from collections import OrderedDict - -import mmcv -import numpy as np -import pycocotools.mask as maskUtils -from mmcv.utils import print_log - -from .builder import DATASETS -from .coco import CocoDataset - - -@DATASETS.register_module() -class CityscapesDataset(CocoDataset): - - CLASSES = ('person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle', - 'bicycle') - - def _filter_imgs(self, min_size=32): - """Filter images too small or without ground truths.""" - valid_inds = [] - # obtain images that contain annotation - ids_with_ann = set(_['image_id'] for _ in self.coco.anns.values()) - # obtain images that contain annotations of the required categories - ids_in_cat = set() - for i, class_id in enumerate(self.cat_ids): - ids_in_cat |= set(self.coco.cat_img_map[class_id]) - # merge the image id sets of the two 
conditions and use the merged set - # to filter out images if self.filter_empty_gt=True - ids_in_cat &= ids_with_ann - - valid_img_ids = [] - for i, img_info in enumerate(self.data_infos): - img_id = img_info['id'] - ann_ids = self.coco.getAnnIds(imgIds=[img_id]) - ann_info = self.coco.loadAnns(ann_ids) - all_iscrowd = all([_['iscrowd'] for _ in ann_info]) - if self.filter_empty_gt and (self.img_ids[i] not in ids_in_cat - or all_iscrowd): - continue - if min(img_info['width'], img_info['height']) >= min_size: - valid_inds.append(i) - valid_img_ids.append(img_id) - self.img_ids = valid_img_ids - return valid_inds - - def _parse_ann_info(self, img_info, ann_info): - """Parse bbox and mask annotation. - - Args: - img_info (dict): Image info of an image. - ann_info (list[dict]): Annotation info of an image. - - Returns: - dict: A dict containing the following keys: bboxes, \ - bboxes_ignore, labels, masks, seg_map. \ - "masks" are already decoded into binary masks. - """ - gt_bboxes = [] - gt_labels = [] - gt_bboxes_ignore = [] - gt_masks_ann = [] - - for i, ann in enumerate(ann_info): - if ann.get('ignore', False): - continue - x1, y1, w, h = ann['bbox'] - if ann['area'] <= 0 or w < 1 or h < 1: - continue - if ann['category_id'] not in self.cat_ids: - continue - bbox = [x1, y1, x1 + w, y1 + h] - if ann.get('iscrowd', False): - gt_bboxes_ignore.append(bbox) - else: - gt_bboxes.append(bbox) - gt_labels.append(self.cat2label[ann['category_id']]) - gt_masks_ann.append(ann['segmentation']) - - if gt_bboxes: - gt_bboxes = np.array(gt_bboxes, dtype=np.float32) - gt_labels = np.array(gt_labels, dtype=np.int64) - else: - gt_bboxes = np.zeros((0, 4), dtype=np.float32) - gt_labels = np.array([], dtype=np.int64) - - if gt_bboxes_ignore: - gt_bboxes_ignore = np.array(gt_bboxes_ignore, dtype=np.float32) - else: - gt_bboxes_ignore = np.zeros((0, 4), dtype=np.float32) - - ann = dict( - bboxes=gt_bboxes, - labels=gt_labels, - bboxes_ignore=gt_bboxes_ignore, - masks=gt_masks_ann, - seg_map=img_info['segm_file']) - - return ann - - def results2txt(self, results, outfile_prefix): - """Dump the detection results to a txt file. - - Args: - results (list[list | tuple]): Testing results of the - dataset. - outfile_prefix (str): The filename prefix of the json files. - If the prefix is "somepath/xxx", - the txt files will be named "somepath/xxx.txt". - - Returns: - list[str]: Result txt files which contains corresponding \ - instance segmentation images. - """ - try: - import cityscapesscripts.helpers.labels as CSLabels - except ImportError: - raise ImportError('Please run "pip install citscapesscripts" to ' - 'install cityscapesscripts first.') - result_files = [] - os.makedirs(outfile_prefix, exist_ok=True) - prog_bar = mmcv.ProgressBar(len(self)) - for idx in range(len(self)): - result = results[idx] - filename = self.data_infos[idx]['filename'] - basename = osp.splitext(osp.basename(filename))[0] - pred_txt = osp.join(outfile_prefix, basename + '_pred.txt') - - bbox_result, segm_result = result - bboxes = np.vstack(bbox_result) - # segm results - if isinstance(segm_result, tuple): - # Some detectors use different scores for bbox and mask, - # like Mask Scoring R-CNN. Score of segm will be used instead - # of bbox score. 
- segms = mmcv.concat_list(segm_result[0]) - mask_score = segm_result[1] - else: - # use bbox score for mask score - segms = mmcv.concat_list(segm_result) - mask_score = [bbox[-1] for bbox in bboxes] - labels = [ - np.full(bbox.shape[0], i, dtype=np.int32) - for i, bbox in enumerate(bbox_result) - ] - labels = np.concatenate(labels) - - assert len(bboxes) == len(segms) == len(labels) - num_instances = len(bboxes) - prog_bar.update() - with open(pred_txt, 'w') as fout: - for i in range(num_instances): - pred_class = labels[i] - classes = self.CLASSES[pred_class] - class_id = CSLabels.name2label[classes].id - score = mask_score[i] - mask = maskUtils.decode(segms[i]).astype(np.uint8) - png_filename = osp.join(outfile_prefix, - basename + f'_{i}_{classes}.png') - mmcv.imwrite(mask, png_filename) - fout.write(f'{osp.basename(png_filename)} {class_id} ' - f'{score}\n') - result_files.append(pred_txt) - - return result_files - - def format_results(self, results, txtfile_prefix=None): - """Format the results to txt (standard format for Cityscapes - evaluation). - - Args: - results (list): Testing results of the dataset. - txtfile_prefix (str | None): The prefix of txt files. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If not specified, a temp file will be created. Default: None. - - Returns: - tuple: (result_files, tmp_dir), result_files is a dict containing \ - the json filepaths, tmp_dir is the temporal directory created \ - for saving txt/png files when txtfile_prefix is not specified. - """ - assert isinstance(results, list), 'results must be a list' - assert len(results) == len(self), ( - 'The length of results is not equal to the dataset len: {} != {}'. - format(len(results), len(self))) - - assert isinstance(results, list), 'results must be a list' - assert len(results) == len(self), ( - 'The length of results is not equal to the dataset len: {} != {}'. - format(len(results), len(self))) - - if txtfile_prefix is None: - tmp_dir = tempfile.TemporaryDirectory() - txtfile_prefix = osp.join(tmp_dir.name, 'results') - else: - tmp_dir = None - result_files = self.results2txt(results, txtfile_prefix) - - return result_files, tmp_dir - - def evaluate(self, - results, - metric='bbox', - logger=None, - outfile_prefix=None, - classwise=False, - proposal_nums=(100, 300, 1000), - iou_thrs=np.arange(0.5, 0.96, 0.05)): - """Evaluation in Cityscapes/COCO protocol. - - Args: - results (list[list | tuple]): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. Options are - 'bbox', 'segm', 'proposal', 'proposal_fast'. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - outfile_prefix (str | None): The prefix of output file. It includes - the file path and the prefix of filename, e.g., "a/b/prefix". - If results are evaluated with COCO protocol, it would be the - prefix of output json file. For example, the metric is 'bbox' - and 'segm', then json files would be "a/b/prefix.bbox.json" and - "a/b/prefix.segm.json". - If results are evaluated with cityscapes protocol, it would be - the prefix of output txt/png files. The output files would be - png images under folder "a/b/prefix/xxx/" and the file name of - images would be written into a txt file - "a/b/prefix/xxx_pred.txt", where "xxx" is the video name of - cityscapes. If not specified, a temp file will be created. - Default: None. - classwise (bool): Whether to evaluating the AP for each class. 
- proposal_nums (Sequence[int]): Proposal number used for evaluating - recalls, such as recall@100, recall@1000. - Default: (100, 300, 1000). - iou_thrs (Sequence[float]): IoU threshold used for evaluating - recalls. If set to a list, the average recall of all IoUs will - also be computed. Default: 0.5. - - Returns: - dict[str, float]: COCO style evaluation metric or cityscapes mAP \ - and AP@50. - """ - eval_results = dict() - - metrics = metric.copy() if isinstance(metric, list) else [metric] - - if 'cityscapes' in metrics: - eval_results.update( - self._evaluate_cityscapes(results, outfile_prefix, logger)) - metrics.remove('cityscapes') - - # left metrics are all coco metric - if len(metrics) > 0: - # create CocoDataset with CityscapesDataset annotation - self_coco = CocoDataset(self.ann_file, self.pipeline.transforms, - None, self.data_root, self.img_prefix, - self.seg_prefix, self.proposal_file, - self.test_mode, self.filter_empty_gt) - # TODO: remove this in the future - # reload annotations of correct class - self_coco.CLASSES = self.CLASSES - self_coco.data_infos = self_coco.load_annotations(self.ann_file) - eval_results.update( - self_coco.evaluate(results, metrics, logger, outfile_prefix, - classwise, proposal_nums, iou_thrs)) - - return eval_results - - def _evaluate_cityscapes(self, results, txtfile_prefix, logger): - """Evaluation in Cityscapes protocol. - - Args: - results (list): Testing results of the dataset. - txtfile_prefix (str | None): The prefix of output txt file - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - - Returns: - dict[str: float]: Cityscapes evaluation results, contains 'mAP' \ - and 'AP@50'. - """ - - try: - import cityscapesscripts.evaluation.evalInstanceLevelSemanticLabeling as CSEval # noqa - except ImportError: - raise ImportError('Please run "pip install citscapesscripts" to ' - 'install cityscapesscripts first.') - msg = 'Evaluating in Cityscapes style' - if logger is None: - msg = '\n' + msg - print_log(msg, logger=logger) - - result_files, tmp_dir = self.format_results(results, txtfile_prefix) - - if tmp_dir is None: - result_dir = osp.join(txtfile_prefix, 'results') - else: - result_dir = osp.join(tmp_dir.name, 'results') - - eval_results = OrderedDict() - print_log(f'Evaluating results under {result_dir} ...', logger=logger) - - # set global states in cityscapes evaluation API - CSEval.args.cityscapesPath = os.path.join(self.img_prefix, '../..') - CSEval.args.predictionPath = os.path.abspath(result_dir) - CSEval.args.predictionWalk = None - CSEval.args.JSONOutput = False - CSEval.args.colorized = False - CSEval.args.gtInstancesFile = os.path.join(result_dir, - 'gtInstances.json') - CSEval.args.groundTruthSearch = os.path.join( - self.img_prefix.replace('leftImg8bit', 'gtFine'), - '*/*_gtFine_instanceIds.png') - - groundTruthImgList = glob.glob(CSEval.args.groundTruthSearch) - assert len(groundTruthImgList), 'Cannot find ground truth images' \ - f' in {CSEval.args.groundTruthSearch}.' 
- predictionImgList = [] - for gt in groundTruthImgList: - predictionImgList.append(CSEval.getPrediction(gt, CSEval.args)) - CSEval_results = CSEval.evaluateImgLists(predictionImgList, - groundTruthImgList, - CSEval.args)['averages'] - - eval_results['mAP'] = CSEval_results['allAp'] - eval_results['AP@50'] = CSEval_results['allAp50%'] - if tmp_dir is not None: - tmp_dir.cleanup() - return eval_results diff --git a/spaces/ChandraMohanNayal/AutoGPT/tests/unit/json_tests.py b/spaces/ChandraMohanNayal/AutoGPT/tests/unit/json_tests.py deleted file mode 100644 index 25c383377708359b5cfec28e0625343c5692f15c..0000000000000000000000000000000000000000 --- a/spaces/ChandraMohanNayal/AutoGPT/tests/unit/json_tests.py +++ /dev/null @@ -1,114 +0,0 @@ -import unittest - -from autogpt.json_utils.json_fix_llm import fix_and_parse_json - - -class TestParseJson(unittest.TestCase): - def test_valid_json(self): - # Test that a valid JSON string is parsed correctly - json_str = '{"name": "John", "age": 30, "city": "New York"}' - obj = fix_and_parse_json(json_str) - self.assertEqual(obj, {"name": "John", "age": 30, "city": "New York"}) - - def test_invalid_json_minor(self): - # Test that an invalid JSON string can be fixed with gpt - json_str = '{"name": "John", "age": 30, "city": "New York",}' - self.assertEqual( - fix_and_parse_json(json_str, try_to_fix_with_gpt=False), - {"name": "John", "age": 30, "city": "New York"}, - ) - - def test_invalid_json_major_with_gpt(self): - # Test that an invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = 'BEGIN: "name": "John" - "age": 30 - "city": "New York" :END' - self.assertEqual( - fix_and_parse_json(json_str, try_to_fix_with_gpt=True), - {"name": "John", "age": 30, "city": "New York"}, - ) - - def test_invalid_json_major_without_gpt(self): - # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = 'BEGIN: "name": "John" - "age": 30 - "city": "New York" :END' - # Assert that this raises an exception: - with self.assertRaises(Exception): - fix_and_parse_json(json_str, try_to_fix_with_gpt=False) - - def test_invalid_json_leading_sentence_with_gpt(self): - # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = """I suggest we start by browsing the repository to find any issues that we can fix. - -{ - "command": { - "name": "browse_website", - "args":{ - "url": "https://github.com/Torantulino/Auto-GPT" - } - }, - "thoughts": - { - "text": "I suggest we start browsing the repository to find any issues that we can fix.", - "reasoning": "Browsing the repository will give us an idea of the current state of the codebase and identify any issues that we can address to improve the repo.", - "plan": "- Look through the repository to find any issues.\n- Investigate any issues to determine what needs to be fixed\n- Identify possible solutions to fix the issues\n- Open Pull Requests with fixes", - "criticism": "I should be careful while browsing so as not to accidentally introduce any new bugs or issues.", - "speak": "I will start browsing the repository to find any issues we can fix." 
- } -}""" - good_obj = { - "command": { - "name": "browse_website", - "args": {"url": "https://github.com/Torantulino/Auto-GPT"}, - }, - "thoughts": { - "text": "I suggest we start browsing the repository to find any issues that we can fix.", - "reasoning": "Browsing the repository will give us an idea of the current state of the codebase and identify any issues that we can address to improve the repo.", - "plan": "- Look through the repository to find any issues.\n- Investigate any issues to determine what needs to be fixed\n- Identify possible solutions to fix the issues\n- Open Pull Requests with fixes", - "criticism": "I should be careful while browsing so as not to accidentally introduce any new bugs or issues.", - "speak": "I will start browsing the repository to find any issues we can fix.", - }, - } - # Assert that the embedded JSON is extracted and parsed correctly: - self.assertEqual( - fix_and_parse_json(json_str, try_to_fix_with_gpt=False), good_obj - ) - - def test_invalid_json_leading_sentence_with_gpt_2(self): - # Test that JSON following a longer leading sentence is extracted and parsed correctly without GPT - json_str = """I will first need to browse the repository (https://github.com/Torantulino/Auto-GPT) and identify any potential bugs that need fixing. I will use the "browse_website" command for this. - -{ - "command": { - "name": "browse_website", - "args":{ - "url": "https://github.com/Torantulino/Auto-GPT" - } - }, - "thoughts": - { - "text": "Browsing the repository to identify potential bugs", - "reasoning": "Before fixing bugs, I need to identify what needs fixing. 
I will use the 'browse_website' command to analyze the repository.", - "plan": "- Analyze the repository for potential bugs and areas of improvement", - "criticism": "I need to ensure I am thorough and pay attention to detail while browsing the repository.", - "speak": "I am browsing the repository to identify potential bugs.", - }, - } - # Assert that this raises an exception: - self.assertEqual( - fix_and_parse_json(json_str, try_to_fix_with_gpt=False), good_obj - ) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/CikeyQI/meme-api/docs/examples/test_api.py b/spaces/CikeyQI/meme-api/docs/examples/test_api.py deleted file mode 100644 index b3beaf6a5fdf0b82fd71583c4043a261b176cdb3..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/meme-api/docs/examples/test_api.py +++ /dev/null @@ -1,23 +0,0 @@ -import asyncio -import json - -import httpx - - -async def main(): - files = [("images", open("avatar.jpg", "rb"))] - texts = [] - args = {"circle": True} - data = {"texts": texts, "args": json.dumps(args)} - - url = "http://127.0.0.1:2233/memes/petpet/" - async with httpx.AsyncClient() as client: - resp = await client.post(url, files=files, data=data) - - with open("result.gif", "wb") as f: - f.write(resp.content) - - -if __name__ == "__main__": - loop = asyncio.new_event_loop() - loop.run_until_complete(main()) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-add9ad59.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-add9ad59.js deleted file mode 100644 index d9af10b645d906092bf95649e7b56286a1e73a08..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-add9ad59.js +++ /dev/null @@ -1,346 +0,0 @@ -import{S as Or,e as qr,s as Pr,J as _n,K as de,p as Xe,M as Gt,n as ut,A as $e,al as Ir,g as Gi,N as ct,B as Vi,am as Wi,a7 as Is,h as Yi,O as O0,aj as Ls,z as Re,u as y0,v as Ye,y as x0,an as Os,k as H0,o as U0,x as G0,G as q0,m as Pn,V as Hn,_ as ui,F as At,U as I0,Q as b0,P as ji,R as Xi,T as P0,a1 as $i,E as qs,ae as Ps,q as Hs,r as Us}from"./index-3370be2a.js";import{u as Gs,S as Vs}from"./ShareButton-39feba51.js";import{B as Ws}from"./Button-89624748.js";import{B as Ys}from"./BlockLabel-56db415e.js";import{n as ci}from"./ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js";import"./IconButton-abe5ede9.js";function js(v){let i,s,o;return{c(){i=_n("svg"),s=_n("path"),o=_n("path"),de(s,"fill","currentColor"),de(s,"d","M17.74 30L16 29l4-7h6a2 2 0 0 0 2-2V8a2 2 0 0 0-2-2H6a2 2 0 0 0-2 2v12a2 2 0 0 0 2 2h9v2H6a4 4 0 0 1-4-4V8a4 4 0 0 1 4-4h20a4 4 0 0 1 4 4v12a4 4 0 0 1-4 4h-4.84Z"),de(o,"fill","currentColor"),de(o,"d","M8 10h16v2H8zm0 6h10v2H8z"),de(i,"xmlns","http://www.w3.org/2000/svg"),de(i,"xmlns:xlink","http://www.w3.org/1999/xlink"),de(i,"aria-hidden","true"),de(i,"role","img"),de(i,"class","iconify iconify--carbon"),de(i,"width","100%"),de(i,"height","100%"),de(i,"preserveAspectRatio","xMidYMid meet"),de(i,"viewBox","0 0 32 32")},m(m,p){Xe(m,i,p),Gt(i,s),Gt(i,o)},p:ut,i:ut,o:ut,d(m){m&&$e(i)}}}class Xs extends Or{constructor(i){super(),qr(this,i,null,js,Pr,{})}}function Zi(){return{async:!1,baseUrl:null,breaks:!1,extensions:null,gfm:!0,headerIds:!0,headerPrefix:"",highlight:null,hooks:null,langPrefix:"language-",mangle:!0,pedantic:!1,renderer:null,sanitize:!1,sanitizer:null,silent:!1,smartypants:!1,tokenizer:null,walkTokens:null,xhtml:!1}}let w0=Zi();function 
$s(v){w0=v}const Ki=/[&<>"']/,Zs=new RegExp(Ki.source,"g"),Qi=/[<>"']|&(?!(#\d{1,7}|#[Xx][a-fA-F0-9]{1,6}|\w+);)/,Ks=new RegExp(Qi.source,"g"),Qs={"&":"&","<":"<",">":">",'"':""","'":"'"},hi=v=>Qs[v];function et(v,i){if(i){if(Ki.test(v))return v.replace(Zs,hi)}else if(Qi.test(v))return v.replace(Ks,hi);return v}const Js=/&(#(?:\d+)|(?:#x[0-9A-Fa-f]+)|(?:\w+));?/ig;function Ji(v){return v.replace(Js,(i,s)=>(s=s.toLowerCase(),s==="colon"?":":s.charAt(0)==="#"?s.charAt(1)==="x"?String.fromCharCode(parseInt(s.substring(2),16)):String.fromCharCode(+s.substring(1)):""))}const eo=/(^|[^\[])\^/g;function De(v,i){v=typeof v=="string"?v:v.source,i=i||"";const s={replace:(o,m)=>(m=m.source||m,m=m.replace(eo,"$1"),v=v.replace(o,m),s),getRegex:()=>new RegExp(v,i)};return s}const to=/[^\w:]/g,ro=/^$|^[a-z][a-z0-9+.-]*:|^[?#]/i;function mi(v,i,s){if(v){let o;try{o=decodeURIComponent(Ji(s)).replace(to,"").toLowerCase()}catch{return null}if(o.indexOf("javascript:")===0||o.indexOf("vbscript:")===0||o.indexOf("data:")===0)return null}i&&!ro.test(s)&&(s=lo(i,s));try{s=encodeURI(s).replace(/%25/g,"%")}catch{return null}return s}const Cr={},no=/^[^:]+:\/*[^/]*$/,ao=/^([^:]+:)[\s\S]*$/,io=/^([^:]+:\/*[^/]*)[\s\S]*$/;function lo(v,i){Cr[" "+v]||(no.test(v)?Cr[" "+v]=v+"/":Cr[" "+v]=Rr(v,"/",!0)),v=Cr[" "+v];const s=v.indexOf(":")===-1;return i.substring(0,2)==="//"?s?i:v.replace(ao,"$1")+i:i.charAt(0)==="/"?s?i:v.replace(io,"$1")+i:v+i}const Lr={exec:function(){}};function di(v,i){const s=v.replace(/\|/g,(p,x,w)=>{let z=!1,L=x;for(;--L>=0&&w[L]==="\\";)z=!z;return z?"|":" |"}),o=s.split(/ \|/);let m=0;if(o[0].trim()||o.shift(),o.length>0&&!o[o.length-1].trim()&&o.pop(),o.length>i)o.splice(i);else for(;o.length1;)i&1&&(s+=v),i>>=1,v+=v;return s+v}function pi(v,i,s,o){const m=i.href,p=i.title?et(i.title):null,x=v[1].replace(/\\([\[\]])/g,"$1");if(v[0].charAt(0)!=="!"){o.state.inLink=!0;const w={type:"link",raw:s,href:m,title:p,text:x,tokens:o.inlineTokens(x)};return o.state.inLink=!1,w}return{type:"image",raw:s,href:m,title:p,text:et(x)}}function uo(v,i){const s=v.match(/^(\s+)(?:```)/);if(s===null)return i;const o=s[1];return i.split(` -`).map(m=>{const p=m.match(/^\s+/);if(p===null)return m;const[x]=p;return x.length>=o.length?m.slice(o.length):m}).join(` -`)}class Un{constructor(i){this.options=i||w0}space(i){const s=this.rules.block.newline.exec(i);if(s&&s[0].length>0)return{type:"space",raw:s[0]}}code(i){const s=this.rules.block.code.exec(i);if(s){const o=s[0].replace(/^ {1,4}/gm,"");return{type:"code",raw:s[0],codeBlockStyle:"indented",text:this.options.pedantic?o:Rr(o,` -`)}}}fences(i){const s=this.rules.block.fences.exec(i);if(s){const o=s[0],m=uo(o,s[3]||"");return{type:"code",raw:o,lang:s[2]?s[2].trim().replace(this.rules.inline._escapes,"$1"):s[2],text:m}}}heading(i){const s=this.rules.block.heading.exec(i);if(s){let o=s[2].trim();if(/#$/.test(o)){const m=Rr(o,"#");(this.options.pedantic||!m||/ $/.test(m))&&(o=m.trim())}return{type:"heading",raw:s[0],depth:s[1].length,text:o,tokens:this.lexer.inline(o)}}}hr(i){const s=this.rules.block.hr.exec(i);if(s)return{type:"hr",raw:s[0]}}blockquote(i){const s=this.rules.block.blockquote.exec(i);if(s){const o=s[0].replace(/^ *>[ \t]?/gm,""),m=this.lexer.state.top;this.lexer.state.top=!0;const p=this.lexer.blockTokens(o);return this.lexer.state.top=m,{type:"blockquote",raw:s[0],tokens:p,text:o}}}list(i){let s=this.rules.block.list.exec(i);if(s){let o,m,p,x,w,z,L,G,K,ne,V,xe,ge=s[1].trim();const 
le=ge.length>1,q={type:"list",raw:"",ordered:le,start:le?+ge.slice(0,-1):"",loose:!1,items:[]};ge=le?`\\d{1,9}\\${ge.slice(-1)}`:`\\${ge}`,this.options.pedantic&&(ge=le?ge:"[*+-]");const C=new RegExp(`^( {0,3}${ge})((?:[ ][^\\n]*)?(?:\\n|$))`);for(;i&&(xe=!1,!(!(s=C.exec(i))||this.rules.block.hr.test(i)));){if(o=s[0],i=i.substring(o.length),G=s[2].split(` -`,1)[0].replace(/^\t+/,N=>" ".repeat(3*N.length)),K=i.split(` -`,1)[0],this.options.pedantic?(x=2,V=G.trimLeft()):(x=s[2].search(/[^ ]/),x=x>4?1:x,V=G.slice(x),x+=s[1].length),z=!1,!G&&/^ *$/.test(K)&&(o+=K+` -`,i=i.substring(K.length+1),xe=!0),!xe){const N=new RegExp(`^ {0,${Math.min(3,x-1)}}(?:[*+-]|\\d{1,9}[.)])((?:[ ][^\\n]*)?(?:\\n|$))`),D=new RegExp(`^ {0,${Math.min(3,x-1)}}((?:- *){3,}|(?:_ *){3,}|(?:\\* *){3,})(?:\\n+|$)`),I=new RegExp(`^ {0,${Math.min(3,x-1)}}(?:\`\`\`|~~~)`),j=new RegExp(`^ {0,${Math.min(3,x-1)}}#`);for(;i&&(ne=i.split(` -`,1)[0],K=ne,this.options.pedantic&&(K=K.replace(/^ {1,4}(?=( {4})*[^ ])/g," ")),!(I.test(K)||j.test(K)||N.test(K)||D.test(i)));){if(K.search(/[^ ]/)>=x||!K.trim())V+=` -`+K.slice(x);else{if(z||G.search(/[^ ]/)>=4||I.test(G)||j.test(G)||D.test(G))break;V+=` -`+K}!z&&!K.trim()&&(z=!0),o+=ne+` -`,i=i.substring(ne.length+1),G=K.slice(x)}}q.loose||(L?q.loose=!0:/\n *\n *$/.test(o)&&(L=!0)),this.options.gfm&&(m=/^\[[ xX]\] /.exec(V),m&&(p=m[0]!=="[ ] ",V=V.replace(/^\[[ xX]\] +/,""))),q.items.push({type:"list_item",raw:o,task:!!m,checked:p,loose:!1,text:V}),q.raw+=o}q.items[q.items.length-1].raw=o.trimRight(),q.items[q.items.length-1].text=V.trimRight(),q.raw=q.raw.trimRight();const _=q.items.length;for(w=0;w<_;w++)if(this.lexer.state.top=!1,q.items[w].tokens=this.lexer.blockTokens(q.items[w].text,[]),!q.loose){const N=q.items[w].tokens.filter(I=>I.type==="space"),D=N.length>0&&N.some(I=>/\n.*\n/.test(I.raw));q.loose=D}if(q.loose)for(w=0;w<_;w++)q.items[w].loose=!0;return q}}html(i){const s=this.rules.block.html.exec(i);if(s){const o={type:"html",block:!0,raw:s[0],pre:!this.options.sanitizer&&(s[1]==="pre"||s[1]==="script"||s[1]==="style"),text:s[0]};if(this.options.sanitize){const m=this.options.sanitizer?this.options.sanitizer(s[0]):et(s[0]);o.type="paragraph",o.text=m,o.tokens=this.lexer.inline(m)}return o}}def(i){const s=this.rules.block.def.exec(i);if(s){const o=s[1].toLowerCase().replace(/\s+/g," "),m=s[2]?s[2].replace(/^<(.*)>$/,"$1").replace(this.rules.inline._escapes,"$1"):"",p=s[3]?s[3].substring(1,s[3].length-1).replace(this.rules.inline._escapes,"$1"):s[3];return{type:"def",tag:o,raw:s[0],href:m,title:p}}}table(i){const s=this.rules.block.table.exec(i);if(s){const o={type:"table",header:di(s[1]).map(m=>({text:m})),align:s[2].replace(/^ *|\| *$/g,"").split(/ *\| */),rows:s[3]&&s[3].trim()?s[3].replace(/\n[ \t]*$/,"").split(` -`):[]};if(o.header.length===o.align.length){o.raw=s[0];let m=o.align.length,p,x,w,z;for(p=0;p({text:L}));for(m=o.header.length,x=0;x/i.test(s[0])&&(this.lexer.state.inLink=!1),!this.lexer.state.inRawBlock&&/^<(pre|code|kbd|script)(\s|>)/i.test(s[0])?this.lexer.state.inRawBlock=!0:this.lexer.state.inRawBlock&&/^<\/(pre|code|kbd|script)(\s|>)/i.test(s[0])&&(this.lexer.state.inRawBlock=!1),{type:this.options.sanitize?"text":"html",raw:s[0],inLink:this.lexer.state.inLink,inRawBlock:this.lexer.state.inRawBlock,block:!1,text:this.options.sanitize?this.options.sanitizer?this.options.sanitizer(s[0]):et(s[0]):s[0]}}link(i){const s=this.rules.inline.link.exec(i);if(s){const o=s[2].trim();if(!this.options.pedantic&&/^$/.test(o))return;const 
x=Rr(o.slice(0,-1),"\\");if((o.length-x.length)%2===0)return}else{const x=so(s[2],"()");if(x>-1){const z=(s[0].indexOf("!")===0?5:4)+s[1].length+x;s[2]=s[2].substring(0,x),s[0]=s[0].substring(0,z).trim(),s[3]=""}}let m=s[2],p="";if(this.options.pedantic){const x=/^([^'"]*[^\s])\s+(['"])(.*)\2/.exec(m);x&&(m=x[1],p=x[3])}else p=s[3]?s[3].slice(1,-1):"";return m=m.trim(),/^$/.test(o)?m=m.slice(1):m=m.slice(1,-1)),pi(s,{href:m&&m.replace(this.rules.inline._escapes,"$1"),title:p&&p.replace(this.rules.inline._escapes,"$1")},s[0],this.lexer)}}reflink(i,s){let o;if((o=this.rules.inline.reflink.exec(i))||(o=this.rules.inline.nolink.exec(i))){let m=(o[2]||o[1]).replace(/\s+/g," ");if(m=s[m.toLowerCase()],!m){const p=o[0].charAt(0);return{type:"text",raw:p,text:p}}return pi(o,m,o[0],this.lexer)}}emStrong(i,s,o=""){let m=this.rules.inline.emStrong.lDelim.exec(i);if(!m||m[3]&&o.match(/[\p{L}\p{N}]/u))return;const p=m[1]||m[2]||"";if(!p||p&&(o===""||this.rules.inline.punctuation.exec(o))){const x=m[0].length-1;let w,z,L=x,G=0;const K=m[0][0]==="*"?this.rules.inline.emStrong.rDelimAst:this.rules.inline.emStrong.rDelimUnd;for(K.lastIndex=0,s=s.slice(-1*i.length+x);(m=K.exec(s))!=null;){if(w=m[1]||m[2]||m[3]||m[4]||m[5]||m[6],!w)continue;if(z=w.length,m[3]||m[4]){L+=z;continue}else if((m[5]||m[6])&&x%3&&!((x+z)%3)){G+=z;continue}if(L-=z,L>0)continue;z=Math.min(z,z+L+G);const ne=i.slice(0,x+m.index+(m[0].length-w.length)+z);if(Math.min(x,z)%2){const xe=ne.slice(1,-1);return{type:"em",raw:ne,text:xe,tokens:this.lexer.inlineTokens(xe)}}const V=ne.slice(2,-2);return{type:"strong",raw:ne,text:V,tokens:this.lexer.inlineTokens(V)}}}}codespan(i){const s=this.rules.inline.code.exec(i);if(s){let o=s[2].replace(/\n/g," ");const m=/[^ ]/.test(o),p=/^ /.test(o)&&/ $/.test(o);return m&&p&&(o=o.substring(1,o.length-1)),o=et(o,!0),{type:"codespan",raw:s[0],text:o}}}br(i){const s=this.rules.inline.br.exec(i);if(s)return{type:"br",raw:s[0]}}del(i){const s=this.rules.inline.del.exec(i);if(s)return{type:"del",raw:s[0],text:s[2],tokens:this.lexer.inlineTokens(s[2])}}autolink(i,s){const o=this.rules.inline.autolink.exec(i);if(o){let m,p;return o[2]==="@"?(m=et(this.options.mangle?s(o[1]):o[1]),p="mailto:"+m):(m=et(o[1]),p=m),{type:"link",raw:o[0],text:m,href:p,tokens:[{type:"text",raw:m,text:m}]}}}url(i,s){let o;if(o=this.rules.inline.url.exec(i)){let m,p;if(o[2]==="@")m=et(this.options.mangle?s(o[0]):o[0]),p="mailto:"+m;else{let x;do x=o[0],o[0]=this.rules.inline._backpedal.exec(o[0])[0];while(x!==o[0]);m=et(o[0]),o[1]==="www."?p="http://"+o[0]:p=o[0]}return{type:"link",raw:o[0],text:m,href:p,tokens:[{type:"text",raw:m,text:m}]}}}inlineText(i,s){const o=this.rules.inline.text.exec(i);if(o){let m;return this.lexer.state.inRawBlock?m=this.options.sanitize?this.options.sanitizer?this.options.sanitizer(o[0]):et(o[0]):o[0]:m=et(this.options.smartypants?s(o[0]):o[0]),{type:"text",raw:o[0],text:m}}}}const me={newline:/^(?: *(?:\n|$))+/,code:/^( {4}[^\n]+(?:\n(?: *(?:\n|$))*)?)+/,fences:/^ {0,3}(`{3,}(?=[^`\n]*(?:\n|$))|~{3,})([^\n]*)(?:\n|$)(?:|([\s\S]*?)(?:\n|$))(?: {0,3}\1[~`]* *(?=\n|$)|$)/,hr:/^ {0,3}((?:-[\t ]*){3,}|(?:_[ \t]*){3,}|(?:\*[ \t]*){3,})(?:\n+|$)/,heading:/^ {0,3}(#{1,6})(?=\s|$)(.*)(?:\n+|$)/,blockquote:/^( {0,3}> ?(paragraph|[^\n]*)(?:\n|$))+/,list:/^( {0,3}bull)([ \t][^\n]+?)?(?:\n|$)/,html:"^ {0,3}(?:<(script|pre|style|textarea)[\\s>][\\s\\S]*?(?:[^\\n]*\\n+|$)|comment[^\\n]*(\\n+|$)|<\\?[\\s\\S]*?(?:\\?>\\n*|$)|\\n*|$)|\\n*|$)|)[\\s\\S]*?(?:(?:\\n 
*)+\\n|$)|<(?!script|pre|style|textarea)([a-z][\\w-]*)(?:attribute)*? */?>(?=[ \\t]*(?:\\n|$))[\\s\\S]*?(?:(?:\\n *)+\\n|$)|(?=[ \\t]*(?:\\n|$))[\\s\\S]*?(?:(?:\\n *)+\\n|$))",def:/^ {0,3}\[(label)\]: *(?:\n *)?([^<\s][^\s]*|<.*?>)(?:(?: +(?:\n *)?| *\n *)(title))? *(?:\n+|$)/,table:Lr,lheading:/^((?:.|\n(?!\n))+?)\n {0,3}(=+|-+) *(?:\n+|$)/,_paragraph:/^([^\n]+(?:\n(?!hr|heading|lheading|blockquote|fences|list|html|table| +\n)[^\n]+)*)/,text:/^[^\n]+/};me._label=/(?!\s*\])(?:\\.|[^\[\]\\])+/;me._title=/(?:"(?:\\"?|[^"\\])*"|'[^'\n]*(?:\n[^'\n]+)*\n?'|\([^()]*\))/;me.def=De(me.def).replace("label",me._label).replace("title",me._title).getRegex();me.bullet=/(?:[*+-]|\d{1,9}[.)])/;me.listItemStart=De(/^( *)(bull) */).replace("bull",me.bullet).getRegex();me.list=De(me.list).replace(/bull/g,me.bullet).replace("hr","\\n+(?=\\1?(?:(?:- *){3,}|(?:_ *){3,}|(?:\\* *){3,})(?:\\n+|$))").replace("def","\\n+(?="+me.def.source+")").getRegex();me._tag="address|article|aside|base|basefont|blockquote|body|caption|center|col|colgroup|dd|details|dialog|dir|div|dl|dt|fieldset|figcaption|figure|footer|form|frame|frameset|h[1-6]|head|header|hr|html|iframe|legend|li|link|main|menu|menuitem|meta|nav|noframes|ol|optgroup|option|p|param|section|source|summary|table|tbody|td|tfoot|th|thead|title|tr|track|ul";me._comment=/|$)/;me.html=De(me.html,"i").replace("comment",me._comment).replace("tag",me._tag).replace("attribute",/ +[a-zA-Z:_][\w.:-]*(?: *= *"[^"\n]*"| *= *'[^'\n]*'| *= *[^\s"'=<>`]+)?/).getRegex();me.paragraph=De(me._paragraph).replace("hr",me.hr).replace("heading"," {0,3}#{1,6} ").replace("|lheading","").replace("|table","").replace("blockquote"," {0,3}>").replace("fences"," {0,3}(?:`{3,}(?=[^`\\n]*\\n)|~{3,})[^\\n]*\\n").replace("list"," {0,3}(?:[*+-]|1[.)]) ").replace("html",")|<(?:script|pre|style|textarea|!--)").replace("tag",me._tag).getRegex();me.blockquote=De(me.blockquote).replace("paragraph",me.paragraph).getRegex();me.normal={...me};me.gfm={...me.normal,table:"^ *([^\\n ].*\\|.*)\\n {0,3}(?:\\| *)?(:?-+:? *(?:\\| *:?-+:? *)*)(?:\\| *)?(?:\\n((?:(?! *\\n|hr|heading|blockquote|code|fences|list|html).*(?:\\n|$))*)\\n*|$)"};me.gfm.table=De(me.gfm.table).replace("hr",me.hr).replace("heading"," {0,3}#{1,6} ").replace("blockquote"," {0,3}>").replace("code"," {4}[^\\n]").replace("fences"," {0,3}(?:`{3,}(?=[^`\\n]*\\n)|~{3,})[^\\n]*\\n").replace("list"," {0,3}(?:[*+-]|1[.)]) ").replace("html",")|<(?:script|pre|style|textarea|!--)").replace("tag",me._tag).getRegex();me.gfm.paragraph=De(me._paragraph).replace("hr",me.hr).replace("heading"," {0,3}#{1,6} ").replace("|lheading","").replace("table",me.gfm.table).replace("blockquote"," {0,3}>").replace("fences"," {0,3}(?:`{3,}(?=[^`\\n]*\\n)|~{3,})[^\\n]*\\n").replace("list"," {0,3}(?:[*+-]|1[.)]) ").replace("html",")|<(?:script|pre|style|textarea|!--)").replace("tag",me._tag).getRegex();me.pedantic={...me.normal,html:De(`^ *(?:comment *(?:\\n|\\s*$)|<(tag)[\\s\\S]+? *(?:\\n{2,}|\\s*$)|\\s]*)*?/?> *(?:\\n{2,}|\\s*$))`).replace("comment",me._comment).replace(/tag/g,"(?!(?:a|em|strong|small|s|cite|q|dfn|abbr|data|time|code|var|samp|kbd|sub|sup|i|b|u|mark|ruby|rt|rp|bdi|bdo|span|br|wbr|ins|del|img)\\b)\\w+(?!:|[^\\w\\s@]*@)\\b").getRegex(),def:/^ *\[([^\]]+)\]: *]+)>?(?: +(["(][^\n]+[")]))? 
*(?:\n+|$)/,heading:/^(#{1,6})(.*)(?:\n+|$)/,fences:Lr,lheading:/^(.+?)\n {0,3}(=+|-+) *(?:\n+|$)/,paragraph:De(me.normal._paragraph).replace("hr",me.hr).replace("heading",` *#{1,6} *[^ -]`).replace("lheading",me.lheading).replace("blockquote"," {0,3}>").replace("|fences","").replace("|list","").replace("|html","").getRegex()};const ie={escape:/^\\([!"#$%&'()*+,\-./:;<=>?@\[\]\\^_`{|}~])/,autolink:/^<(scheme:[^\s\x00-\x1f<>]*|email)>/,url:Lr,tag:"^comment|^|^<[a-zA-Z][\\w-]*(?:attribute)*?\\s*/?>|^<\\?[\\s\\S]*?\\?>|^|^",link:/^!?\[(label)\]\(\s*(href)(?:\s+(title))?\s*\)/,reflink:/^!?\[(label)\]\[(ref)\]/,nolink:/^!?\[(ref)\](?:\[\])?/,reflinkSearch:"reflink|nolink(?!\\()",emStrong:{lDelim:/^(?:\*+(?:([punct_])|[^\s*]))|^_+(?:([punct*])|([^\s_]))/,rDelimAst:/^(?:[^_*\\]|\\.)*?\_\_(?:[^_*\\]|\\.)*?\*(?:[^_*\\]|\\.)*?(?=\_\_)|(?:[^*\\]|\\.)+(?=[^*])|[punct_](\*+)(?=[\s]|$)|(?:[^punct*_\s\\]|\\.)(\*+)(?=[punct_\s]|$)|[punct_\s](\*+)(?=[^punct*_\s])|[\s](\*+)(?=[punct_])|[punct_](\*+)(?=[punct_])|(?:[^punct*_\s\\]|\\.)(\*+)(?=[^punct*_\s])/,rDelimUnd:/^(?:[^_*\\]|\\.)*?\*\*(?:[^_*\\]|\\.)*?\_(?:[^_*\\]|\\.)*?(?=\*\*)|(?:[^_\\]|\\.)+(?=[^_])|[punct*](\_+)(?=[\s]|$)|(?:[^punct*_\s\\]|\\.)(\_+)(?=[punct*\s]|$)|[punct*\s](\_+)(?=[^punct*_\s])|[\s](\_+)(?=[punct*])|[punct*](\_+)(?=[punct*])/},code:/^(`+)([^`]|[^`][\s\S]*?[^`])\1(?!`)/,br:/^( {2,}|\\)\n(?!\s*$)/,del:Lr,text:/^(`+|[^`])(?:(?= {2,}\n)|[\s\S]*?(?:(?=[\\?@\\[\\]`^{|}~";ie.punctuation=De(ie.punctuation).replace(/punctuation/g,ie._punctuation).getRegex();ie.blockSkip=/\[[^\]]*?\]\([^\)]*?\)|`[^`]*?`|<[^>]*?>/g;ie.escapedEmSt=/(?:^|[^\\])(?:\\\\)*\\[*_]/g;ie._comment=De(me._comment).replace("(?:-->|$)","-->").getRegex();ie.emStrong.lDelim=De(ie.emStrong.lDelim).replace(/punct/g,ie._punctuation).getRegex();ie.emStrong.rDelimAst=De(ie.emStrong.rDelimAst,"g").replace(/punct/g,ie._punctuation).getRegex();ie.emStrong.rDelimUnd=De(ie.emStrong.rDelimUnd,"g").replace(/punct/g,ie._punctuation).getRegex();ie._escapes=/\\([!"#$%&'()*+,\-./:;<=>?@\[\]\\^_`{|}~])/g;ie._scheme=/[a-zA-Z][a-zA-Z0-9+.-]{1,31}/;ie._email=/[a-zA-Z0-9.!#$%&'*+/=?^_`{|}~-]+(@)[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(?:\.[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)+(?![-_])/;ie.autolink=De(ie.autolink).replace("scheme",ie._scheme).replace("email",ie._email).getRegex();ie._attribute=/\s+[a-zA-Z:_][\w.:-]*(?:\s*=\s*"[^"]*"|\s*=\s*'[^']*'|\s*=\s*[^\s"'=<>`]+)?/;ie.tag=De(ie.tag).replace("comment",ie._comment).replace("attribute",ie._attribute).getRegex();ie._label=/(?:\[(?:\\.|[^\[\]\\])*\]|\\.|`[^`]*`|[^\[\]\\`])*?/;ie._href=/<(?:\\.|[^\n<>\\])+>|[^\s\x00-\x1f]*/;ie._title=/"(?:\\"?|[^"\\])*"|'(?:\\'?|[^'\\])*'|\((?:\\\)?|[^)\\])*\)/;ie.link=De(ie.link).replace("label",ie._label).replace("href",ie._href).replace("title",ie._title).getRegex();ie.reflink=De(ie.reflink).replace("label",ie._label).replace("ref",me._label).getRegex();ie.nolink=De(ie.nolink).replace("ref",me._label).getRegex();ie.reflinkSearch=De(ie.reflinkSearch,"g").replace("reflink",ie.reflink).replace("nolink",ie.nolink).getRegex();ie.normal={...ie};ie.pedantic={...ie.normal,strong:{start:/^__|\*\*/,middle:/^__(?=\S)([\s\S]*?\S)__(?!_)|^\*\*(?=\S)([\s\S]*?\S)\*\*(?!\*)/,endAst:/\*\*(?!\*)/g,endUnd:/__(?!_)/g},em:{start:/^_|\*/,middle:/^()\*(?=\S)([\s\S]*?\S)\*(?!\*)|^_(?=\S)([\s\S]*?\S)_(?!_)/,endAst:/\*(?!\*)/g,endUnd:/_(?!_)/g},link:De(/^!?\[(label)\]\((.*?)\)/).replace("label",ie._label).getRegex(),reflink:De(/^!?\[(label)\]\s*\[([^\]]*)\]/).replace("label",ie._label).getRegex()};ie.gfm={...ie.normal,esca
pe:De(ie.escape).replace("])","~|])").getRegex(),_extended_email:/[A-Za-z0-9._+-]+(@)[a-zA-Z0-9-_]+(?:\.[a-zA-Z0-9-_]*[a-zA-Z0-9])+(?![-_])/,url:/^((?:ftp|https?):\/\/|www\.)(?:[a-zA-Z0-9\-]+\.?)+[^\s<]*|^email/,_backpedal:/(?:[^?!.,:;*_'"~()&]+|\([^)]*\)|&(?![a-zA-Z0-9]+;$)|[?!.,:;*_'"~)]+(?!$))+/,del:/^(~~?)(?=[^\s~])([\s\S]*?[^\s~])\1(?=[^~]|$)/,text:/^([`~]+|[^`~])(?:(?= {2,}\n)|(?=[a-zA-Z0-9.!#$%&'*+\/=?_`{\|}~-]+@)|[\s\S]*?(?:(?=[\\.5&&(o="x"+o.toString(16)),i+="&#"+o+";";return i}class l0{constructor(i){this.tokens=[],this.tokens.links=Object.create(null),this.options=i||w0,this.options.tokenizer=this.options.tokenizer||new Un,this.tokenizer=this.options.tokenizer,this.tokenizer.options=this.options,this.tokenizer.lexer=this,this.inlineQueue=[],this.state={inLink:!1,inRawBlock:!1,top:!0};const s={block:me.normal,inline:ie.normal};this.options.pedantic?(s.block=me.pedantic,s.inline=ie.pedantic):this.options.gfm&&(s.block=me.gfm,this.options.breaks?s.inline=ie.breaks:s.inline=ie.gfm),this.tokenizer.rules=s}static get rules(){return{block:me,inline:ie}}static lex(i,s){return new l0(s).lex(i)}static lexInline(i,s){return new l0(s).inlineTokens(i)}lex(i){i=i.replace(/\r\n|\r/g,` -`),this.blockTokens(i,this.tokens);let s;for(;s=this.inlineQueue.shift();)this.inlineTokens(s.src,s.tokens);return this.tokens}blockTokens(i,s=[]){this.options.pedantic?i=i.replace(/\t/g," ").replace(/^ +$/gm,""):i=i.replace(/^( *)(\t+)/gm,(w,z,L)=>z+" ".repeat(L.length));let o,m,p,x;for(;i;)if(!(this.options.extensions&&this.options.extensions.block&&this.options.extensions.block.some(w=>(o=w.call({lexer:this},i,s))?(i=i.substring(o.raw.length),s.push(o),!0):!1))){if(o=this.tokenizer.space(i)){i=i.substring(o.raw.length),o.raw.length===1&&s.length>0?s[s.length-1].raw+=` -`:s.push(o);continue}if(o=this.tokenizer.code(i)){i=i.substring(o.raw.length),m=s[s.length-1],m&&(m.type==="paragraph"||m.type==="text")?(m.raw+=` -`+o.raw,m.text+=` -`+o.text,this.inlineQueue[this.inlineQueue.length-1].src=m.text):s.push(o);continue}if(o=this.tokenizer.fences(i)){i=i.substring(o.raw.length),s.push(o);continue}if(o=this.tokenizer.heading(i)){i=i.substring(o.raw.length),s.push(o);continue}if(o=this.tokenizer.hr(i)){i=i.substring(o.raw.length),s.push(o);continue}if(o=this.tokenizer.blockquote(i)){i=i.substring(o.raw.length),s.push(o);continue}if(o=this.tokenizer.list(i)){i=i.substring(o.raw.length),s.push(o);continue}if(o=this.tokenizer.html(i)){i=i.substring(o.raw.length),s.push(o);continue}if(o=this.tokenizer.def(i)){i=i.substring(o.raw.length),m=s[s.length-1],m&&(m.type==="paragraph"||m.type==="text")?(m.raw+=` -`+o.raw,m.text+=` -`+o.raw,this.inlineQueue[this.inlineQueue.length-1].src=m.text):this.tokens.links[o.tag]||(this.tokens.links[o.tag]={href:o.href,title:o.title});continue}if(o=this.tokenizer.table(i)){i=i.substring(o.raw.length),s.push(o);continue}if(o=this.tokenizer.lheading(i)){i=i.substring(o.raw.length),s.push(o);continue}if(p=i,this.options.extensions&&this.options.extensions.startBlock){let w=1/0;const z=i.slice(1);let L;this.options.extensions.startBlock.forEach(function(G){L=G.call({lexer:this},z),typeof L=="number"&&L>=0&&(w=Math.min(w,L))}),w<1/0&&w>=0&&(p=i.substring(0,w+1))}if(this.state.top&&(o=this.tokenizer.paragraph(p))){m=s[s.length-1],x&&m.type==="paragraph"?(m.raw+=` -`+o.raw,m.text+=` 
-`+o.text,this.inlineQueue.pop(),this.inlineQueue[this.inlineQueue.length-1].src=m.text):s.push(o),x=p.length!==i.length,i=i.substring(o.raw.length);continue}if(o=this.tokenizer.text(i)){i=i.substring(o.raw.length),m=s[s.length-1],m&&m.type==="text"?(m.raw+=` -`+o.raw,m.text+=` -`+o.text,this.inlineQueue.pop(),this.inlineQueue[this.inlineQueue.length-1].src=m.text):s.push(o);continue}if(i){const w="Infinite loop on byte: "+i.charCodeAt(0);if(this.options.silent){console.error(w);break}else throw new Error(w)}}return this.state.top=!0,s}inline(i,s=[]){return this.inlineQueue.push({src:i,tokens:s}),s}inlineTokens(i,s=[]){let o,m,p,x=i,w,z,L;if(this.tokens.links){const G=Object.keys(this.tokens.links);if(G.length>0)for(;(w=this.tokenizer.rules.inline.reflinkSearch.exec(x))!=null;)G.includes(w[0].slice(w[0].lastIndexOf("[")+1,-1))&&(x=x.slice(0,w.index)+"["+fi("a",w[0].length-2)+"]"+x.slice(this.tokenizer.rules.inline.reflinkSearch.lastIndex))}for(;(w=this.tokenizer.rules.inline.blockSkip.exec(x))!=null;)x=x.slice(0,w.index)+"["+fi("a",w[0].length-2)+"]"+x.slice(this.tokenizer.rules.inline.blockSkip.lastIndex);for(;(w=this.tokenizer.rules.inline.escapedEmSt.exec(x))!=null;)x=x.slice(0,w.index+w[0].length-2)+"++"+x.slice(this.tokenizer.rules.inline.escapedEmSt.lastIndex),this.tokenizer.rules.inline.escapedEmSt.lastIndex--;for(;i;)if(z||(L=""),z=!1,!(this.options.extensions&&this.options.extensions.inline&&this.options.extensions.inline.some(G=>(o=G.call({lexer:this},i,s))?(i=i.substring(o.raw.length),s.push(o),!0):!1))){if(o=this.tokenizer.escape(i)){i=i.substring(o.raw.length),s.push(o);continue}if(o=this.tokenizer.tag(i)){i=i.substring(o.raw.length),m=s[s.length-1],m&&o.type==="text"&&m.type==="text"?(m.raw+=o.raw,m.text+=o.text):s.push(o);continue}if(o=this.tokenizer.link(i)){i=i.substring(o.raw.length),s.push(o);continue}if(o=this.tokenizer.reflink(i,this.tokens.links)){i=i.substring(o.raw.length),m=s[s.length-1],m&&o.type==="text"&&m.type==="text"?(m.raw+=o.raw,m.text+=o.text):s.push(o);continue}if(o=this.tokenizer.emStrong(i,x,L)){i=i.substring(o.raw.length),s.push(o);continue}if(o=this.tokenizer.codespan(i)){i=i.substring(o.raw.length),s.push(o);continue}if(o=this.tokenizer.br(i)){i=i.substring(o.raw.length),s.push(o);continue}if(o=this.tokenizer.del(i)){i=i.substring(o.raw.length),s.push(o);continue}if(o=this.tokenizer.autolink(i,gi)){i=i.substring(o.raw.length),s.push(o);continue}if(!this.state.inLink&&(o=this.tokenizer.url(i,gi))){i=i.substring(o.raw.length),s.push(o);continue}if(p=i,this.options.extensions&&this.options.extensions.startInline){let G=1/0;const K=i.slice(1);let ne;this.options.extensions.startInline.forEach(function(V){ne=V.call({lexer:this},K),typeof ne=="number"&&ne>=0&&(G=Math.min(G,ne))}),G<1/0&&G>=0&&(p=i.substring(0,G+1))}if(o=this.tokenizer.inlineText(p,co)){i=i.substring(o.raw.length),o.raw.slice(-1)!=="_"&&(L=o.raw.slice(-1)),z=!0,m=s[s.length-1],m&&m.type==="text"?(m.raw+=o.raw,m.text+=o.text):s.push(o);continue}if(i){const G="Infinite loop on byte: "+i.charCodeAt(0);if(this.options.silent){console.error(G);break}else throw new Error(G)}}return s}}class Gn{constructor(i){this.options=i||w0}code(i,s,o){const m=(s||"").match(/\S*/)[0];if(this.options.highlight){const p=this.options.highlight(i,m);p!=null&&p!==i&&(o=!0,i=p)}return i=i.replace(/\n$/,"")+` -`,m?'
'+(o?i:et(i,!0))+`
-`:"
"+(o?i:et(i,!0))+`
-`}blockquote(i){return`
-${i}
-`}html(i,s){return i}heading(i,s,o,m){if(this.options.headerIds){const p=this.options.headerPrefix+m.slug(o);return`${i} -`}return`${i} -`}hr(){return this.options.xhtml?`
-`:`
-`}list(i,s,o){const m=s?"ol":"ul",p=s&&o!==1?' start="'+o+'"':"";return"<"+m+p+`> -`+i+" -`}listitem(i){return`
  • ${i}
  • -`}checkbox(i){return" "}paragraph(i){return`

    ${i}

    -`}table(i,s){return s&&(s=`${s}`),` - -`+i+` -`+s+`
    -`}tablerow(i){return` -${i} -`}tablecell(i,s){const o=s.header?"th":"td";return(s.align?`<${o} align="${s.align}">`:`<${o}>`)+i+` -`}strong(i){return`${i}`}em(i){return`${i}`}codespan(i){return`${i}`}br(){return this.options.xhtml?"
    ":"
    "}del(i){return`${i}`}link(i,s,o){if(i=mi(this.options.sanitize,this.options.baseUrl,i),i===null)return o;let m='",m}image(i,s,o){if(i=mi(this.options.sanitize,this.options.baseUrl,i),i===null)return o;let m=`${o}":">",m}text(i){return i}}class el{strong(i){return i}em(i){return i}codespan(i){return i}del(i){return i}html(i){return i}text(i){return i}link(i,s,o){return""+o}image(i,s,o){return""+o}br(){return""}}class tl{constructor(){this.seen={}}serialize(i){return i.toLowerCase().trim().replace(/<[!\/a-z].*?>/ig,"").replace(/[\u2000-\u206F\u2E00-\u2E7F\\'!"#$%&()*+,./:;<=>?@[\]^`{|}~]/g,"").replace(/\s/g,"-")}getNextSafeSlug(i,s){let o=i,m=0;if(this.seen.hasOwnProperty(o)){m=this.seen[i];do m++,o=i+"-"+m;while(this.seen.hasOwnProperty(o))}return s||(this.seen[i]=m,this.seen[o]=0),o}slug(i,s={}){const o=this.serialize(i);return this.getNextSafeSlug(o,s.dryrun)}}class s0{constructor(i){this.options=i||w0,this.options.renderer=this.options.renderer||new Gn,this.renderer=this.options.renderer,this.renderer.options=this.options,this.textRenderer=new el,this.slugger=new tl}static parse(i,s){return new s0(s).parse(i)}static parseInline(i,s){return new s0(s).parseInline(i)}parse(i,s=!0){let o="",m,p,x,w,z,L,G,K,ne,V,xe,ge,le,q,C,_,N,D,I;const j=i.length;for(m=0;m0&&C.tokens[0].type==="paragraph"?(C.tokens[0].text=D+" "+C.tokens[0].text,C.tokens[0].tokens&&C.tokens[0].tokens.length>0&&C.tokens[0].tokens[0].type==="text"&&(C.tokens[0].tokens[0].text=D+" "+C.tokens[0].tokens[0].text)):C.tokens.unshift({type:"text",text:D}):q+=D),q+=this.parse(C.tokens,le),ne+=this.renderer.listitem(q,N,_);o+=this.renderer.list(ne,xe,ge);continue}case"html":{o+=this.renderer.html(V.text,V.block);continue}case"paragraph":{o+=this.renderer.paragraph(this.parseInline(V.tokens));continue}case"text":{for(ne=V.tokens?this.parseInline(V.tokens):V.text;m+1{if(o.message+=` -Please report this to https://github.com/markedjs/marked.`,v){const m="

    An error occurred:

    "+et(o.message+"",!0)+"
    ";if(i)return Promise.resolve(m);if(s){s(null,m);return}return m}if(i)return Promise.reject(o);if(s){s(o);return}throw o}}function rl(v,i){return(s,o,m)=>{typeof o=="function"&&(m=o,o=null);const p={...o};o={...he.defaults,...p};const x=ho(o.silent,o.async,m);if(typeof s>"u"||s===null)return x(new Error("marked(): input parameter is undefined or null"));if(typeof s!="string")return x(new Error("marked(): input parameter is of type "+Object.prototype.toString.call(s)+", string expected"));if(oo(o,m),o.hooks&&(o.hooks.options=o),m){const w=o.highlight;let z;try{o.hooks&&(s=o.hooks.preprocess(s)),z=v(s,o)}catch(K){return x(K)}const L=function(K){let ne;if(!K)try{o.walkTokens&&he.walkTokens(z,o.walkTokens),ne=i(z,o),o.hooks&&(ne=o.hooks.postprocess(ne))}catch(V){K=V}return o.highlight=w,K?x(K):m(null,ne)};if(!w||w.length<3||(delete o.highlight,!z.length))return L();let G=0;he.walkTokens(z,function(K){K.type==="code"&&(G++,setTimeout(()=>{w(K.text,K.lang,function(ne,V){if(ne)return L(ne);V!=null&&V!==K.text&&(K.text=V,K.escaped=!0),G--,G===0&&L()})},0))}),G===0&&L();return}if(o.async)return Promise.resolve(o.hooks?o.hooks.preprocess(s):s).then(w=>v(w,o)).then(w=>o.walkTokens?Promise.all(he.walkTokens(w,o.walkTokens)).then(()=>w):w).then(w=>i(w,o)).then(w=>o.hooks?o.hooks.postprocess(w):w).catch(x);try{o.hooks&&(s=o.hooks.preprocess(s));const w=v(s,o);o.walkTokens&&he.walkTokens(w,o.walkTokens);let z=i(w,o);return o.hooks&&(z=o.hooks.postprocess(z)),z}catch(w){return x(w)}}}function he(v,i,s){return rl(l0.lex,s0.parse)(v,i,s)}he.options=he.setOptions=function(v){return he.defaults={...he.defaults,...v},$s(he.defaults),he};he.getDefaults=Zi;he.defaults=w0;he.use=function(...v){const i=he.defaults.extensions||{renderers:{},childTokens:{}};v.forEach(s=>{const o={...s};if(o.async=he.defaults.async||o.async||!1,s.extensions&&(s.extensions.forEach(m=>{if(!m.name)throw new Error("extension name required");if(m.renderer){const p=i.renderers[m.name];p?i.renderers[m.name]=function(...x){let w=m.renderer.apply(this,x);return w===!1&&(w=p.apply(this,x)),w}:i.renderers[m.name]=m.renderer}if(m.tokenizer){if(!m.level||m.level!=="block"&&m.level!=="inline")throw new Error("extension level must be 'block' or 'inline'");i[m.level]?i[m.level].unshift(m.tokenizer):i[m.level]=[m.tokenizer],m.start&&(m.level==="block"?i.startBlock?i.startBlock.push(m.start):i.startBlock=[m.start]:m.level==="inline"&&(i.startInline?i.startInline.push(m.start):i.startInline=[m.start]))}m.childTokens&&(i.childTokens[m.name]=m.childTokens)}),o.extensions=i),s.renderer){const m=he.defaults.renderer||new Gn;for(const p in s.renderer){const x=m[p];m[p]=(...w)=>{let z=s.renderer[p].apply(m,w);return z===!1&&(z=x.apply(m,w)),z}}o.renderer=m}if(s.tokenizer){const m=he.defaults.tokenizer||new Un;for(const p in s.tokenizer){const x=m[p];m[p]=(...w)=>{let z=s.tokenizer[p].apply(m,w);return z===!1&&(z=x.apply(m,w)),z}}o.tokenizer=m}if(s.hooks){const m=he.defaults.hooks||new Ln;for(const p in s.hooks){const x=m[p];Ln.passThroughHooks.has(p)?m[p]=w=>{if(he.defaults.async)return Promise.resolve(s.hooks[p].call(m,w)).then(L=>x.call(m,L));const z=s.hooks[p].call(m,w);return x.call(m,z)}:m[p]=(...w)=>{let z=s.hooks[p].apply(m,w);return z===!1&&(z=x.apply(m,w)),z}}o.hooks=m}if(s.walkTokens){const m=he.defaults.walkTokens;o.walkTokens=function(p){let x=[];return x.push(s.walkTokens.call(this,p)),m&&(x=x.concat(m.call(this,p))),x}}he.setOptions(o)})};he.walkTokens=function(v,i){let s=[];for(const o of 
v)switch(s=s.concat(i.call(he,o)),o.type){case"table":{for(const m of o.header)s=s.concat(he.walkTokens(m.tokens,i));for(const m of o.rows)for(const p of m)s=s.concat(he.walkTokens(p.tokens,i));break}case"list":{s=s.concat(he.walkTokens(o.items,i));break}default:he.defaults.extensions&&he.defaults.extensions.childTokens&&he.defaults.extensions.childTokens[o.type]?he.defaults.extensions.childTokens[o.type].forEach(function(m){s=s.concat(he.walkTokens(o[m],i))}):o.tokens&&(s=s.concat(he.walkTokens(o.tokens,i)))}return s};he.parseInline=rl(l0.lexInline,s0.parseInline);he.Parser=s0;he.parser=s0.parse;he.Renderer=Gn;he.TextRenderer=el;he.Lexer=l0;he.lexer=l0.lex;he.Tokenizer=Un;he.Slugger=tl;he.Hooks=Ln;he.parse=he;he.options;he.setOptions;he.use;he.walkTokens;he.parseInline;s0.parse;l0.lex;function mo(v){if(typeof v=="function"&&(v={highlight:v}),!v||typeof v.highlight!="function")throw new Error("Must provide highlight function");return typeof v.langPrefix!="string"&&(v.langPrefix="language-"),{async:!!v.async,walkTokens(i){if(i.type!=="code")return;const s=fo(i);if(v.async)return Promise.resolve(v.highlight(i.text,s)).then(vi(i));const o=v.highlight(i.text,s);vi(i)(o)},renderer:{code(i,s,o){const m=(s||"").match(/\S*/)[0],p=m?` class="${v.langPrefix}${yi(m)}"`:"";return i=i.replace(/\n$/,""),`
    ${o?i:yi(i,!0)}
    -
    `}}}}function fo(v){return(v.lang||"").match(/\S*/)[0]}function vi(v){return i=>{typeof i=="string"&&i!==v.text&&(v.escaped=!0,v.text=i)}}const nl=/[&<>"']/,po=new RegExp(nl.source,"g"),al=/[<>"']|&(?!(#\d{1,7}|#[Xx][a-fA-F0-9]{1,6}|\w+);)/,go=new RegExp(al.source,"g"),vo={"&":"&","<":"<",">":">",'"':""","'":"'"},bi=v=>vo[v];function yi(v,i){if(i){if(nl.test(v))return v.replace(po,bi)}else if(al.test(v))return v.replace(go,bi);return v}var il={exports:{}};(function(v){var i=typeof window<"u"?window:typeof WorkerGlobalScope<"u"&&self instanceof WorkerGlobalScope?self:{};/** - * Prism: Lightweight, robust, elegant syntax highlighting - * - * @license MIT - * @author Lea Verou - * @namespace - * @public - */var s=function(o){var m=/(?:^|\s)lang(?:uage)?-([\w-]+)(?=\s|$)/i,p=0,x={},w={manual:o.Prism&&o.Prism.manual,disableWorkerMessageHandler:o.Prism&&o.Prism.disableWorkerMessageHandler,util:{encode:function C(_){return _ instanceof z?new z(_.type,C(_.content),_.alias):Array.isArray(_)?_.map(C):_.replace(/&/g,"&").replace(/"u")return null;if("currentScript"in document&&1<2)return document.currentScript;try{throw new Error}catch(D){var C=(/at [^(\r\n]*\((.*):[^:]+:[^:]+\)$/i.exec(D.stack)||[])[1];if(C){var _=document.getElementsByTagName("script");for(var N in _)if(_[N].src==C)return _[N]}return null}},isActive:function(C,_,N){for(var D="no-"+_;C;){var I=C.classList;if(I.contains(_))return!0;if(I.contains(D))return!1;C=C.parentElement}return!!N}},languages:{plain:x,plaintext:x,text:x,txt:x,extend:function(C,_){var N=w.util.clone(w.languages[C]);for(var D in _)N[D]=_[D];return N},insertBefore:function(C,_,N,D){D=D||w.languages;var I=D[C],j={};for(var re in I)if(I.hasOwnProperty(re)){if(re==_)for(var Z in N)N.hasOwnProperty(Z)&&(j[Z]=N[Z]);N.hasOwnProperty(re)||(j[re]=I[re])}var U=D[C];return D[C]=j,w.languages.DFS(w.languages,function(fe,se){se===U&&fe!=C&&(this[fe]=j)}),j},DFS:function C(_,N,D,I){I=I||{};var j=w.util.objId;for(var re in _)if(_.hasOwnProperty(re)){N.call(_,re,_[re],D||re);var Z=_[re],U=w.util.type(Z);U==="Object"&&!I[j(Z)]?(I[j(Z)]=!0,C(Z,N,null,I)):U==="Array"&&!I[j(Z)]&&(I[j(Z)]=!0,C(Z,N,re,I))}}},plugins:{},highlightAll:function(C,_){w.highlightAllUnder(document,C,_)},highlightAllUnder:function(C,_,N){var D={callback:N,container:C,selector:'code[class*="language-"], [class*="language-"] code, code[class*="lang-"], [class*="lang-"] code'};w.hooks.run("before-highlightall",D),D.elements=Array.prototype.slice.apply(D.container.querySelectorAll(D.selector)),w.hooks.run("before-all-elements-highlight",D);for(var I=0,j;j=D.elements[I++];)w.highlightElement(j,_===!0,D.callback)},highlightElement:function(C,_,N){var D=w.util.getLanguage(C),I=w.languages[D];w.util.setLanguage(C,D);var j=C.parentElement;j&&j.nodeName.toLowerCase()==="pre"&&w.util.setLanguage(j,D);var re=C.textContent,Z={element:C,language:D,grammar:I,code:re};function U(se){Z.highlightedCode=se,w.hooks.run("before-insert",Z),Z.element.innerHTML=Z.highlightedCode,w.hooks.run("after-highlight",Z),w.hooks.run("complete",Z),N&&N.call(Z.element)}if(w.hooks.run("before-sanity-check",Z),j=Z.element.parentElement,j&&j.nodeName.toLowerCase()==="pre"&&!j.hasAttribute("tabindex")&&j.setAttribute("tabindex","0"),!Z.code){w.hooks.run("complete",Z),N&&N.call(Z.element);return}if(w.hooks.run("before-highlight",Z),!Z.grammar){U(w.util.encode(Z.code));return}if(_&&o.Worker){var fe=new Worker(w.filename);fe.onmessage=function(se){U(se.data)},fe.postMessage(JSON.stringify({language:Z.language,code:Z.code,immediateClose:!0}))}else 
U(w.highlight(Z.code,Z.grammar,Z.language))},highlight:function(C,_,N){var D={code:C,grammar:_,language:N};if(w.hooks.run("before-tokenize",D),!D.grammar)throw new Error('The language "'+D.language+'" has no grammar.');return D.tokens=w.tokenize(D.code,D.grammar),w.hooks.run("after-tokenize",D),z.stringify(w.util.encode(D.tokens),D.language)},tokenize:function(C,_){var N=_.rest;if(N){for(var D in N)_[D]=N[D];delete _.rest}var I=new K;return ne(I,I.head,C),G(C,I,_,I.head,0),xe(I)},hooks:{all:{},add:function(C,_){var N=w.hooks.all;N[C]=N[C]||[],N[C].push(_)},run:function(C,_){var N=w.hooks.all[C];if(!(!N||!N.length))for(var D=0,I;I=N[D++];)I(_)}},Token:z};o.Prism=w;function z(C,_,N,D){this.type=C,this.content=_,this.alias=N,this.length=(D||"").length|0}z.stringify=function C(_,N){if(typeof _=="string")return _;if(Array.isArray(_)){var D="";return _.forEach(function(U){D+=C(U,N)}),D}var I={type:_.type,content:C(_.content,N),tag:"span",classes:["token",_.type],attributes:{},language:N},j=_.alias;j&&(Array.isArray(j)?Array.prototype.push.apply(I.classes,j):I.classes.push(j)),w.hooks.run("wrap",I);var re="";for(var Z in I.attributes)re+=" "+Z+'="'+(I.attributes[Z]||"").replace(/"/g,""")+'"';return"<"+I.tag+' class="'+I.classes.join(" ")+'"'+re+">"+I.content+""};function L(C,_,N,D){C.lastIndex=_;var I=C.exec(N);if(I&&D&&I[1]){var j=I[1].length;I.index+=j,I[0]=I[0].slice(j)}return I}function G(C,_,N,D,I,j){for(var re in N)if(!(!N.hasOwnProperty(re)||!N[re])){var Z=N[re];Z=Array.isArray(Z)?Z:[Z];for(var U=0;U=j.reach);Te+=He.value.length,He=He.next){var Mt=He.value;if(_.length>C.length)return;if(!(Mt instanceof z)){var J=1,Ze;if(Pe){if(Ze=L(o0,Te,C,ze),!Ze||Ze.index>=C.length)break;var at=Ze.index,Fe=Ze.index+Ze[0].length,Ve=Te;for(Ve+=He.value.length;at>=Ve;)He=He.next,Ve+=He.value.length;if(Ve-=He.value.length,Te=Ve,He.value instanceof z)continue;for(var pt=He;pt!==_.tail&&(Vej.reach&&(j.reach=gt);var it=He.prev;Vt&&(it=ne(_,it,Vt),Te+=Vt.length),V(_,it,J);var c0=new z(re,se?w.tokenize(Ct,se):Ct,V0,Ct);if(He=ne(_,it,c0),u0&&ne(_,He,u0),J>1){var Lt={cause:re+","+U,reach:gt};G(C,_,N,He.prev,Te,Lt),j&&Lt.reach>j.reach&&(j.reach=Lt.reach)}}}}}}function K(){var C={value:null,prev:null,next:null},_={value:null,prev:C,next:null};C.next=_,this.head=C,this.tail=_,this.length=0}function ne(C,_,N){var D=_.next,I={value:N,prev:_,next:D};return _.next=I,D.prev=I,C.length++,I}function V(C,_,N){for(var 
D=_.next,I=0;I/,greedy:!0},prolog:{pattern:/<\?[\s\S]+?\?>/,greedy:!0},doctype:{pattern:/"'[\]]|"[^"]*"|'[^']*')+(?:\[(?:[^<"'\]]|"[^"]*"|'[^']*'|<(?!!--)|)*\]\s*)?>/i,greedy:!0,inside:{"internal-subset":{pattern:/(^[^\[]*\[)[\s\S]+(?=\]>$)/,lookbehind:!0,greedy:!0,inside:null},string:{pattern:/"[^"]*"|'[^']*'/,greedy:!0},punctuation:/^$|[[\]]/,"doctype-tag":/^DOCTYPE/i,name:/[^\s<>'"]+/}},cdata:{pattern://i,greedy:!0},tag:{pattern:/<\/?(?!\d)[^\s>\/=$<%]+(?:\s(?:\s*[^\s>\/=]+(?:\s*=\s*(?:"[^"]*"|'[^']*'|[^\s'">=]+(?=[\s>]))|(?=[\s/>])))+)?\s*\/?>/,greedy:!0,inside:{tag:{pattern:/^<\/?[^\s>\/]+/,inside:{punctuation:/^<\/?/,namespace:/^[^\s>\/:]+:/}},"special-attr":[],"attr-value":{pattern:/=\s*(?:"[^"]*"|'[^']*'|[^\s'">=]+)/,inside:{punctuation:[{pattern:/^=/,alias:"attr-equals"},{pattern:/^(\s*)["']|["']$/,lookbehind:!0}]}},punctuation:/\/?>/,"attr-name":{pattern:/[^\s>\/]+/,inside:{namespace:/^[^\s>\/:]+:/}}}},entity:[{pattern:/&[\da-z]{1,8};/i,alias:"named-entity"},/&#x?[\da-f]{1,8};/i]},s.languages.markup.tag.inside["attr-value"].inside.entity=s.languages.markup.entity,s.languages.markup.doctype.inside["internal-subset"].inside=s.languages.markup,s.hooks.add("wrap",function(o){o.type==="entity"&&(o.attributes.title=o.content.replace(/&/,"&"))}),Object.defineProperty(s.languages.markup.tag,"addInlined",{value:function(m,p){var x={};x["language-"+p]={pattern:/(^$)/i,lookbehind:!0,inside:s.languages[p]},x.cdata=/^$/i;var w={"included-cdata":{pattern://i,inside:x}};w["language-"+p]={pattern:/[\s\S]+/,inside:s.languages[p]};var z={};z[m]={pattern:RegExp(/(<__[^>]*>)(?:))*\]\]>|(?!)/.source.replace(/__/g,function(){return m}),"i"),lookbehind:!0,greedy:!0,inside:w},s.languages.insertBefore("markup","cdata",z)}}),Object.defineProperty(s.languages.markup.tag,"addAttribute",{value:function(o,m){s.languages.markup.tag.inside["special-attr"].push({pattern:RegExp(/(^|["'\s])/.source+"(?:"+o+")"+/\s*=\s*(?:"[^"]*"|'[^']*'|[^\s'">=]+(?=[\s>]))/.source,"i"),lookbehind:!0,inside:{"attr-name":/^[^\s=]+/,"attr-value":{pattern:/=[\s\S]+/,inside:{value:{pattern:/(^=\s*(["']|(?!["'])))\S[\s\S]*(?=\2$)/,lookbehind:!0,alias:[m,"language-"+m],inside:s.languages[m]},punctuation:[{pattern:/^=/,alias:"attr-equals"},/"|'/]}}}})}}),s.languages.html=s.languages.markup,s.languages.mathml=s.languages.markup,s.languages.svg=s.languages.markup,s.languages.xml=s.languages.extend("markup",{}),s.languages.ssml=s.languages.xml,s.languages.atom=s.languages.xml,s.languages.rss=s.languages.xml,function(o){var 
m=/(?:"(?:\\(?:\r\n|[\s\S])|[^"\\\r\n])*"|'(?:\\(?:\r\n|[\s\S])|[^'\\\r\n])*')/;o.languages.css={comment:/\/\*[\s\S]*?\*\//,atrule:{pattern:RegExp("@[\\w-](?:"+/[^;{\s"']|\s+(?!\s)/.source+"|"+m.source+")*?"+/(?:;|(?=\s*\{))/.source),inside:{rule:/^@[\w-]+/,"selector-function-argument":{pattern:/(\bselector\s*\(\s*(?![\s)]))(?:[^()\s]|\s+(?![\s)])|\((?:[^()]|\([^()]*\))*\))+(?=\s*\))/,lookbehind:!0,alias:"selector"},keyword:{pattern:/(^|[^\w-])(?:and|not|only|or)(?![\w-])/,lookbehind:!0}}},url:{pattern:RegExp("\\burl\\((?:"+m.source+"|"+/(?:[^\\\r\n()"']|\\[\s\S])*/.source+")\\)","i"),greedy:!0,inside:{function:/^url/i,punctuation:/^\(|\)$/,string:{pattern:RegExp("^"+m.source+"$"),alias:"url"}}},selector:{pattern:RegExp(`(^|[{}\\s])[^{}\\s](?:[^{};"'\\s]|\\s+(?![\\s{])|`+m.source+")*(?=\\s*\\{)"),lookbehind:!0},string:{pattern:m,greedy:!0},property:{pattern:/(^|[^-\w\xA0-\uFFFF])(?!\s)[-_a-z\xA0-\uFFFF](?:(?!\s)[-\w\xA0-\uFFFF])*(?=\s*:)/i,lookbehind:!0},important:/!important\b/i,function:{pattern:/(^|[^-a-z0-9])[-a-z0-9]+(?=\()/i,lookbehind:!0},punctuation:/[(){};:,]/},o.languages.css.atrule.inside.rest=o.languages.css;var p=o.languages.markup;p&&(p.tag.addInlined("style","css"),p.tag.addAttribute("style","css"))}(s),s.languages.clike={comment:[{pattern:/(^|[^\\])\/\*[\s\S]*?(?:\*\/|$)/,lookbehind:!0,greedy:!0},{pattern:/(^|[^\\:])\/\/.*/,lookbehind:!0,greedy:!0}],string:{pattern:/(["'])(?:\\(?:\r\n|[\s\S])|(?!\1)[^\\\r\n])*\1/,greedy:!0},"class-name":{pattern:/(\b(?:class|extends|implements|instanceof|interface|new|trait)\s+|\bcatch\s+\()[\w.\\]+/i,lookbehind:!0,inside:{punctuation:/[.\\]/}},keyword:/\b(?:break|catch|continue|do|else|finally|for|function|if|in|instanceof|new|null|return|throw|try|while)\b/,boolean:/\b(?:false|true)\b/,function:/\b\w+(?=\()/,number:/\b0x[\da-f]+\b|(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:e[+-]?\d+)?/i,operator:/[<>]=?|[!=]=?=?|--?|\+\+?|&&?|\|\|?|[?*/~^%]/,punctuation:/[{}[\];(),.:]/},s.languages.javascript=s.languages.extend("clike",{"class-name":[s.languages.clike["class-name"],{pattern:/(^|[^$\w\xA0-\uFFFF])(?!\s)[_$A-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*(?=\.(?:constructor|prototype))/,lookbehind:!0}],keyword:[{pattern:/((?:^|\})\s*)catch\b/,lookbehind:!0},{pattern:/(^|[^.]|\.\.\.\s*)\b(?:as|assert(?=\s*\{)|async(?=\s*(?:function\b|\(|[$\w\xA0-\uFFFF]|$))|await|break|case|class|const|continue|debugger|default|delete|do|else|enum|export|extends|finally(?=\s*(?:\{|$))|for|from(?=\s*(?:['"]|$))|function|(?:get|set)(?=\s*(?:[#\[$\w\xA0-\uFFFF]|$))|if|implements|import|in|instanceof|interface|let|new|null|of|package|private|protected|public|return|static|super|switch|this|throw|try|typeof|undefined|var|void|while|with|yield)\b/,lookbehind:!0}],function:/#?(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*(?=\s*(?:\.\s*(?:apply|bind|call)\s*)?\()/,number:{pattern:RegExp(/(^|[^\w$])/.source+"(?:"+(/NaN|Infinity/.source+"|"+/0[bB][01]+(?:_[01]+)*n?/.source+"|"+/0[oO][0-7]+(?:_[0-7]+)*n?/.source+"|"+/0[xX][\dA-Fa-f]+(?:_[\dA-Fa-f]+)*n?/.source+"|"+/\d+(?:_\d+)*n/.source+"|"+/(?:\d+(?:_\d+)*(?:\.(?:\d+(?:_\d+)*)?)?|\.\d+(?:_\d+)*)(?:[Ee][+-]?\d+(?:_\d+)*)?/.source)+")"+/(?![\w$])/.source),lookbehind:!0},operator:/--|\+\+|\*\*=?|=>|&&=?|\|\|=?|[!=]==|<<=?|>>>?=?|[-+*/%&|^!=<>]=?|\.{3}|\?\?=?|\?\.?|[~:]/}),s.languages.javascript["class-name"][0].pattern=/(\b(?:class|extends|implements|instanceof|interface|new)\s+)[\w.\\]+/,s.languages.insertBefore("javascript","keyword",{regex:{pattern:RegExp(/((?:^|[^$\w\xA0-\uFFFF."'\])\s]|\b(?:return|yield))\s*)/.source+/\//.source
+"(?:"+/(?:\[(?:[^\]\\\r\n]|\\.)*\]|\\.|[^/\\\[\r\n])+\/[dgimyus]{0,7}/.source+"|"+/(?:\[(?:[^[\]\\\r\n]|\\.|\[(?:[^[\]\\\r\n]|\\.|\[(?:[^[\]\\\r\n]|\\.)*\])*\])*\]|\\.|[^/\\\[\r\n])+\/[dgimyus]{0,7}v[dgimyus]{0,7}/.source+")"+/(?=(?:\s|\/\*(?:[^*]|\*(?!\/))*\*\/)*(?:$|[\r\n,.;:})\]]|\/\/))/.source),lookbehind:!0,greedy:!0,inside:{"regex-source":{pattern:/^(\/)[\s\S]+(?=\/[a-z]*$)/,lookbehind:!0,alias:"language-regex",inside:s.languages.regex},"regex-delimiter":/^\/|\/$/,"regex-flags":/^[a-z]+$/}},"function-variable":{pattern:/#?(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*(?=\s*[=:]\s*(?:async\s*)?(?:\bfunction\b|(?:\((?:[^()]|\([^()]*\))*\)|(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*)\s*=>))/,alias:"function"},parameter:[{pattern:/(function(?:\s+(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*)?\s*\(\s*)(?!\s)(?:[^()\s]|\s+(?![\s)])|\([^()]*\))+(?=\s*\))/,lookbehind:!0,inside:s.languages.javascript},{pattern:/(^|[^$\w\xA0-\uFFFF])(?!\s)[_$a-z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*(?=\s*=>)/i,lookbehind:!0,inside:s.languages.javascript},{pattern:/(\(\s*)(?!\s)(?:[^()\s]|\s+(?![\s)])|\([^()]*\))+(?=\s*\)\s*=>)/,lookbehind:!0,inside:s.languages.javascript},{pattern:/((?:\b|\s|^)(?!(?:as|async|await|break|case|catch|class|const|continue|debugger|default|delete|do|else|enum|export|extends|finally|for|from|function|get|if|implements|import|in|instanceof|interface|let|new|null|of|package|private|protected|public|return|set|static|super|switch|this|throw|try|typeof|undefined|var|void|while|with|yield)(?![$\w\xA0-\uFFFF]))(?:(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*\s*)\(\s*|\]\s*\(\s*)(?!\s)(?:[^()\s]|\s+(?![\s)])|\([^()]*\))+(?=\s*\)\s*\{)/,lookbehind:!0,inside:s.languages.javascript}],constant:/\b[A-Z](?:[A-Z_]|\dx?)*\b/}),s.languages.insertBefore("javascript","string",{hashbang:{pattern:/^#!.*/,greedy:!0,alias:"comment"},"template-string":{pattern:/`(?:\\[\s\S]|\$\{(?:[^{}]|\{(?:[^{}]|\{[^}]*\})*\})+\}|(?!\$\{)[^\\`])*`/,greedy:!0,inside:{"template-punctuation":{pattern:/^`|`$/,alias:"string"},interpolation:{pattern:/((?:^|[^\\])(?:\\{2})*)\$\{(?:[^{}]|\{(?:[^{}]|\{[^}]*\})*\})+\}/,lookbehind:!0,inside:{"interpolation-punctuation":{pattern:/^\$\{|\}$/,alias:"punctuation"},rest:s.languages.javascript}},string:/[\s\S]+/}},"string-property":{pattern:/((?:^|[,{])[ \t]*)(["'])(?:\\(?:\r\n|[\s\S])|(?!\2)[^\\\r\n])*\2(?=\s*:)/m,lookbehind:!0,greedy:!0,alias:"property"}}),s.languages.insertBefore("javascript","operator",{"literal-property":{pattern:/((?:^|[,{])[ \t]*)(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*(?=\s*:)/m,lookbehind:!0,alias:"property"}}),s.languages.markup&&(s.languages.markup.tag.addInlined("script","javascript"),s.languages.markup.tag.addAttribute(/on(?:abort|blur|change|click|composition(?:end|start|update)|dblclick|error|focus(?:in|out)?|key(?:down|up)|load|mouse(?:down|enter|leave|move|out|over|up)|reset|resize|scroll|select|slotchange|submit|unload|wheel)/.source,"javascript")),s.languages.js=s.languages.javascript,function(){if(typeof s>"u"||typeof document>"u")return;Element.prototype.matches||(Element.prototype.matches=Element.prototype.msMatchesSelector||Element.prototype.webkitMatchesSelector);var o="Loading…",m=function(ge,le){return"✖ Error "+ge+" while fetching file: "+le},p="✖ Error: File does not exist or is 
empty",x={js:"javascript",py:"python",rb:"ruby",ps1:"powershell",psm1:"powershell",sh:"bash",bat:"batch",h:"c",tex:"latex"},w="data-src-status",z="loading",L="loaded",G="failed",K="pre[data-src]:not(["+w+'="'+L+'"]):not(['+w+'="'+z+'"])';function ne(ge,le,q){var C=new XMLHttpRequest;C.open("GET",ge,!0),C.onreadystatechange=function(){C.readyState==4&&(C.status<400&&C.responseText?le(C.responseText):C.status>=400?q(m(C.status,C.statusText)):q(p))},C.send(null)}function V(ge){var le=/^\s*(\d+)\s*(?:(,)\s*(?:(\d+)\s*)?)?$/.exec(ge||"");if(le){var q=Number(le[1]),C=le[2],_=le[3];return C?_?[q,Number(_)]:[q,void 0]:[q,q]}}s.hooks.add("before-highlightall",function(ge){ge.selector+=", "+K}),s.hooks.add("before-sanity-check",function(ge){var le=ge.element;if(le.matches(K)){ge.code="",le.setAttribute(w,z);var q=le.appendChild(document.createElement("CODE"));q.textContent=o;var C=le.getAttribute("data-src"),_=ge.language;if(_==="none"){var N=(/\.(\w+)$/.exec(C)||[,"none"])[1];_=x[N]||N}s.util.setLanguage(q,_),s.util.setLanguage(le,_);var D=s.plugins.autoloader;D&&D.loadLanguages(_),ne(C,function(I){le.setAttribute(w,L);var j=V(le.getAttribute("data-range"));if(j){var re=I.split(/\r\n?|\n/g),Z=j[0],U=j[1]==null?re.length:j[1];Z<0&&(Z+=re.length),Z=Math.max(0,Math.min(Z-1,re.length)),U<0&&(U+=re.length),U=Math.max(0,Math.min(U,re.length)),I=re.slice(Z,U).join(` -`),le.hasAttribute("data-start")||le.setAttribute("data-start",String(Z+1))}q.textContent=I,s.highlightElement(q)},function(I){le.setAttribute(w,G),q.textContent=I})}}),s.plugins.fileHighlight={highlight:function(le){for(var q=(le||document).querySelectorAll(K),C=0,_;_=q[C++];)s.highlightElement(_)}};var xe=!1;s.fileHighlight=function(){xe||(console.warn("Prism.fileHighlight is deprecated. Use `Prism.plugins.fileHighlight.highlight` instead."),xe=!0),s.plugins.fileHighlight.highlight.apply(this,arguments)}}()})(il);var bo=il.exports;const En=Gi(bo);Prism.languages.python={comment:{pattern:/(^|[^\\])#.*/,lookbehind:!0,greedy:!0},"string-interpolation":{pattern:/(?:f|fr|rf)(?:("""|''')[\s\S]*?\1|("|')(?:\\.|(?!\2)[^\\\r\n])*\2)/i,greedy:!0,inside:{interpolation:{pattern:/((?:^|[^{])(?:\{\{)*)\{(?!\{)(?:[^{}]|\{(?!\{)(?:[^{}]|\{(?!\{)(?:[^{}])+\})+\})+\}/,lookbehind:!0,inside:{"format-spec":{pattern:/(:)[^:(){}]+(?=\}$)/,lookbehind:!0},"conversion-option":{pattern:/![sra](?=[:}]$)/,alias:"punctuation"},rest:null}},string:/[\s\S]+/}},"triple-quoted-string":{pattern:/(?:[rub]|br|rb)?("""|''')[\s\S]*?\1/i,greedy:!0,alias:"string"},string:{pattern:/(?:[rub]|br|rb)?("|')(?:\\.|(?!\1)[^\\\r\n])*\1/i,greedy:!0},function:{pattern:/((?:^|\s)def[ \t]+)[a-zA-Z_]\w*(?=\s*\()/g,lookbehind:!0},"class-name":{pattern:/(\bclass\s+)\w+/i,lookbehind:!0},decorator:{pattern:/(^[\t 
]*)@\w+(?:\.\w+)*/m,lookbehind:!0,alias:["annotation","punctuation"],inside:{punctuation:/\./}},keyword:/\b(?:_(?=\s*:)|and|as|assert|async|await|break|case|class|continue|def|del|elif|else|except|exec|finally|for|from|global|if|import|in|is|lambda|match|nonlocal|not|or|pass|print|raise|return|try|while|with|yield)\b/,builtin:/\b(?:__import__|abs|all|any|apply|ascii|basestring|bin|bool|buffer|bytearray|bytes|callable|chr|classmethod|cmp|coerce|compile|complex|delattr|dict|dir|divmod|enumerate|eval|execfile|file|filter|float|format|frozenset|getattr|globals|hasattr|hash|help|hex|id|input|int|intern|isinstance|issubclass|iter|len|list|locals|long|map|max|memoryview|min|next|object|oct|open|ord|pow|property|range|raw_input|reduce|reload|repr|reversed|round|set|setattr|slice|sorted|staticmethod|str|sum|super|tuple|type|unichr|unicode|vars|xrange|zip)\b/,boolean:/\b(?:False|None|True)\b/,number:/\b0(?:b(?:_?[01])+|o(?:_?[0-7])+|x(?:_?[a-f0-9])+)\b|(?:\b\d+(?:_\d+)*(?:\.(?:\d+(?:_\d+)*)?)?|\B\.\d+(?:_\d+)*)(?:e[+-]?\d+(?:_\d+)*)?j?(?!\w)/i,operator:/[-+%=]=?|!=|:=|\*\*?=?|\/\/?=?|<[<=>]?|>[=>]?|[&|^~]/,punctuation:/[{}[\];(),.:]/};Prism.languages.python["string-interpolation"].inside.interpolation.inside.rest=Prism.languages.python;Prism.languages.py=Prism.languages.python;(function(v){var i=/\\(?:[^a-z()[\]]|[a-z*]+)/i,s={"equation-command":{pattern:i,alias:"regex"}};v.languages.latex={comment:/%.*/,cdata:{pattern:/(\\begin\{((?:lstlisting|verbatim)\*?)\})[\s\S]*?(?=\\end\{\2\})/,lookbehind:!0},equation:[{pattern:/\$\$(?:\\[\s\S]|[^\\$])+\$\$|\$(?:\\[\s\S]|[^\\$])+\$|\\\([\s\S]*?\\\)|\\\[[\s\S]*?\\\]/,inside:s,alias:"string"},{pattern:/(\\begin\{((?:align|eqnarray|equation|gather|math|multline)\*?)\})[\s\S]*?(?=\\end\{\2\})/,lookbehind:!0,inside:s,alias:"string"}],keyword:{pattern:/(\\(?:begin|cite|documentclass|end|label|ref|usepackage)(?:\[[^\]]+\])?\{)[^}]+(?=\})/,lookbehind:!0},url:{pattern:/(\\url\{)[^}]+(?=\})/,lookbehind:!0},headline:{pattern:/(\\(?:chapter|frametitle|paragraph|part|section|subparagraph|subsection|subsubparagraph|subsubsection|subsubsubparagraph)\*?(?:\[[^\]]+\])?\{)[^}]+(?=\})/,lookbehind:!0,alias:"class-name"},function:{pattern:i,alias:"selector"},punctuation:/[[\]{}&]/},v.languages.tex=v.languages.latex,v.languages.context=v.languages.latex})(Prism);const yo=``,xo=``,xi=``,ll=/[&<>"']/,wo=new RegExp(ll.source,"g"),sl=/[<>"']|&(?!(#\d{1,7}|#[Xx][a-fA-F0-9]{1,6}|\w+);)/,ko=new RegExp(sl.source,"g"),So={"&":"&","<":"<",">":">",'"':""","'":"'"},wi=v=>So[v]||"";function Bn(v,i){if(i){if(ll.test(v))return v.replace(wo,wi)}else if(sl.test(v))return v.replace(ko,wi);return v}const Ao={code(v,i,s){const o=(i??"").match(/\S*/)?.[0]??"";if(this.options.highlight){const m=this.options.highlight(v,o);m!=null&&m!==v&&(s=!0,v=m)}return v=v.replace(/\n$/,"")+` -`,o?'
    '+xi+(s?v:Bn(v,!0))+`
    -`:"
    "+xi+(s?v:Bn(v,!0))+`
    -`}};he.use({gfm:!0,breaks:!0,pedantic:!1,headerIds:!1,mangle:!1},mo({highlight:(v,i)=>En.languages[i]?En.highlight(v,En.languages[i],i):v}),{renderer:Ao});function To(v){v.addEventListener("click",i);async function i(s){const o=s.composedPath(),[m]=o.filter(p=>p?.tagName==="BUTTON"&&p.classList.contains("copy_code_button"));if(m){let p=function(L){L.style.opacity="1",setTimeout(()=>{L.style.opacity="0"},2e3)};s.stopImmediatePropagation();const x=m.parentElement.innerText.trim(),w=Array.from(m.children)[1];await Mo(x)&&p(w)}}return{destroy(){v.removeEventListener("click",i)}}}async function Mo(v){let i=!1;if("clipboard"in navigator)await navigator.clipboard.writeText(v),i=!0;else{const s=document.createElement("textarea");s.value=v,s.style.position="absolute",s.style.left="-999999px",document.body.prepend(s),s.select();try{document.execCommand("copy"),i=!0}catch(o){console.error(o),i=!1}finally{s.remove()}}return i}const zo=async v=>(await Promise.all(v.map(async s=>await Promise.all(s.map(async(o,m)=>{if(o===null)return"";let p=m===0?"😃":"🤖",x="";if(typeof o=="string")x=o;else{const w=await Gs(o.data,"url");o.mime_type?.includes("audio")?x=``:o.mime_type?.includes("video")?x=w:o.mime_type?.includes("image")&&(x=``)}return`${p}: ${x}`}))))).map(s=>s.join(s[0]!==""&&s[1]!==""?` -`:"")).join(` -`);/*! @license DOMPurify 3.0.3 | (c) Cure53 and other contributors | Released under the Apache license 2.0 and Mozilla Public License 2.0 | github.com/cure53/DOMPurify/blob/3.0.3/LICENSE */const{entries:ol,setPrototypeOf:ki,isFrozen:_o,getPrototypeOf:Eo,getOwnPropertyDescriptor:Bo}=Object;let{freeze:nt,seal:Bt,create:Co}=Object,{apply:On,construct:qn}=typeof Reflect<"u"&&Reflect;On||(On=function(i,s,o){return i.apply(s,o)});nt||(nt=function(i){return i});Bt||(Bt=function(i){return i});qn||(qn=function(i,s){return new i(...s)});const Do=Tt(Array.prototype.forEach),Si=Tt(Array.prototype.pop),rr=Tt(Array.prototype.push),Fr=Tt(String.prototype.toLowerCase),Cn=Tt(String.prototype.toString),No=Tt(String.prototype.match),Et=Tt(String.prototype.replace),Ro=Tt(String.prototype.indexOf),Fo=Tt(String.prototype.trim),ft=Tt(RegExp.prototype.test),nr=Io(TypeError);function Tt(v){return function(i){for(var s=arguments.length,o=new Array(s>1?s-1:0),m=1;m/gm),Ho=Bt(/\${[\w\W]*}/gm),Uo=Bt(/^data-[\-\w.\u00B7-\uFFFF]/),Go=Bt(/^aria-[\-\w]+$/),ul=Bt(/^(?:(?:(?:f|ht)tps?|mailto|tel|callto|sms|cid|xmpp):|[^a-z]|[a-z+.\-]+(?:[^a-z+.\-:]|$))/i),Vo=Bt(/^(?:\w+script|data):/i),Wo=Bt(/[\u0000-\u0020\u00A0\u1680\u180E\u2000-\u2029\u205F\u3000]/g),cl=Bt(/^html$/i);var _i=Object.freeze({__proto__:null,MUSTACHE_EXPR:qo,ERB_EXPR:Po,TMPLIT_EXPR:Ho,DATA_ATTR:Uo,ARIA_ATTR:Go,IS_ALLOWED_URI:ul,IS_SCRIPT_OR_DATA:Vo,ATTR_WHITESPACE:Wo,DOCTYPE_NAME:cl});const Yo=()=>typeof window>"u"?null:window,jo=function(i,s){if(typeof i!="object"||typeof i.createPolicy!="function")return null;let o=null;const m="data-tt-policy-suffix";s&&s.hasAttribute(m)&&(o=s.getAttribute(m));const p="dompurify"+(o?"#"+o:"");try{return i.createPolicy(p,{createHTML(x){return x},createScriptURL(x){return x}})}catch{return console.warn("TrustedTypes policy "+p+" could not be created."),null}};function hl(){let v=arguments.length>0&&arguments[0]!==void 0?arguments[0]:Yo();const i=oe=>hl(oe);if(i.version="3.0.3",i.removed=[],!v||!v.document||v.document.nodeType!==9)return i.isSupported=!1,i;const 
s=v.document,o=s.currentScript;let{document:m}=v;const{DocumentFragment:p,HTMLTemplateElement:x,Node:w,Element:z,NodeFilter:L,NamedNodeMap:G=v.NamedNodeMap||v.MozNamedAttrMap,HTMLFormElement:K,DOMParser:ne,trustedTypes:V}=v,xe=z.prototype,ge=Dr(xe,"cloneNode"),le=Dr(xe,"nextSibling"),q=Dr(xe,"childNodes"),C=Dr(xe,"parentNode");if(typeof x=="function"){const oe=m.createElement("template");oe.content&&oe.content.ownerDocument&&(m=oe.content.ownerDocument)}let _,N="";const{implementation:D,createNodeIterator:I,createDocumentFragment:j,getElementsByTagName:re}=m,{importNode:Z}=s;let U={};i.isSupported=typeof ol=="function"&&typeof C=="function"&&D&&D.createHTMLDocument!==void 0;const{MUSTACHE_EXPR:fe,ERB_EXPR:se,TMPLIT_EXPR:ze,DATA_ATTR:Pe,ARIA_ATTR:V0,IS_SCRIPT_OR_DATA:W0,ATTR_WHITESPACE:o0}=_i;let{IS_ALLOWED_URI:He}=_i,Te=null;const Mt=ye({},[...Ai,...Dn,...Nn,...Rn,...Ti]);let J=null;const Ze=ye({},[...Mi,...Fn,...zi,...Nr]);let Fe=Object.seal(Object.create(null,{tagNameCheck:{writable:!0,configurable:!1,enumerable:!0,value:null},attributeNameCheck:{writable:!0,configurable:!1,enumerable:!0,value:null},allowCustomizedBuiltInElements:{writable:!0,configurable:!1,enumerable:!0,value:!1}})),Ve=null,pt=null,at=!0,Ct=!0,Vt=!1,u0=!0,gt=!1,it=!1,c0=!1,Lt=!1,Wt=!1,k0=!1,h0=!1,ar=!0,Yt=!1;const vt="user-content-";let jt=!0,Xt=!1,$t={},Dt=null;const S0=ye({},["annotation-xml","audio","colgroup","desc","foreignobject","head","iframe","math","mi","mn","mo","ms","mtext","noembed","noframes","noscript","plaintext","script","style","svg","template","thead","title","video","xmp"]);let ir=null;const lr=ye({},["audio","video","img","source","image","track"]);let A0=null;const Y0=ye({},["alt","class","for","id","label","name","pattern","placeholder","role","summary","title","value","style","xmlns"]),m0="http://www.w3.org/1998/Math/MathML",T0="http://www.w3.org/2000/svg",ht="http://www.w3.org/1999/xhtml";let Zt=ht,M0=!1,Ee=null;const X=ye({},[m0,T0,ht],Cn);let Je;const sr=["application/xhtml+xml","text/html"],or="text/html";let Ue,bt=null;const j0=m.createElement("form"),ur=function(A){return A instanceof RegExp||A instanceof Function},X0=function(A){if(!(bt&&bt===A)){if((!A||typeof A!="object")&&(A={}),A=L0(A),Je=sr.indexOf(A.PARSER_MEDIA_TYPE)===-1?Je=or:Je=A.PARSER_MEDIA_TYPE,Ue=Je==="application/xhtml+xml"?Cn:Fr,Te="ALLOWED_TAGS"in A?ye({},A.ALLOWED_TAGS,Ue):Mt,J="ALLOWED_ATTR"in A?ye({},A.ALLOWED_ATTR,Ue):Ze,Ee="ALLOWED_NAMESPACES"in A?ye({},A.ALLOWED_NAMESPACES,Cn):X,A0="ADD_URI_SAFE_ATTR"in A?ye(L0(Y0),A.ADD_URI_SAFE_ATTR,Ue):Y0,ir="ADD_DATA_URI_TAGS"in A?ye(L0(lr),A.ADD_DATA_URI_TAGS,Ue):lr,Dt="FORBID_CONTENTS"in A?ye({},A.FORBID_CONTENTS,Ue):S0,Ve="FORBID_TAGS"in A?ye({},A.FORBID_TAGS,Ue):{},pt="FORBID_ATTR"in A?ye({},A.FORBID_ATTR,Ue):{},$t="USE_PROFILES"in 
A?A.USE_PROFILES:!1,at=A.ALLOW_ARIA_ATTR!==!1,Ct=A.ALLOW_DATA_ATTR!==!1,Vt=A.ALLOW_UNKNOWN_PROTOCOLS||!1,u0=A.ALLOW_SELF_CLOSE_IN_ATTR!==!1,gt=A.SAFE_FOR_TEMPLATES||!1,it=A.WHOLE_DOCUMENT||!1,Wt=A.RETURN_DOM||!1,k0=A.RETURN_DOM_FRAGMENT||!1,h0=A.RETURN_TRUSTED_TYPE||!1,Lt=A.FORCE_BODY||!1,ar=A.SANITIZE_DOM!==!1,Yt=A.SANITIZE_NAMED_PROPS||!1,jt=A.KEEP_CONTENT!==!1,Xt=A.IN_PLACE||!1,He=A.ALLOWED_URI_REGEXP||ul,Zt=A.NAMESPACE||ht,Fe=A.CUSTOM_ELEMENT_HANDLING||{},A.CUSTOM_ELEMENT_HANDLING&&ur(A.CUSTOM_ELEMENT_HANDLING.tagNameCheck)&&(Fe.tagNameCheck=A.CUSTOM_ELEMENT_HANDLING.tagNameCheck),A.CUSTOM_ELEMENT_HANDLING&&ur(A.CUSTOM_ELEMENT_HANDLING.attributeNameCheck)&&(Fe.attributeNameCheck=A.CUSTOM_ELEMENT_HANDLING.attributeNameCheck),A.CUSTOM_ELEMENT_HANDLING&&typeof A.CUSTOM_ELEMENT_HANDLING.allowCustomizedBuiltInElements=="boolean"&&(Fe.allowCustomizedBuiltInElements=A.CUSTOM_ELEMENT_HANDLING.allowCustomizedBuiltInElements),gt&&(Ct=!1),k0&&(Wt=!0),$t&&(Te=ye({},[...Ti]),J=[],$t.html===!0&&(ye(Te,Ai),ye(J,Mi)),$t.svg===!0&&(ye(Te,Dn),ye(J,Fn),ye(J,Nr)),$t.svgFilters===!0&&(ye(Te,Nn),ye(J,Fn),ye(J,Nr)),$t.mathMl===!0&&(ye(Te,Rn),ye(J,zi),ye(J,Nr))),A.ADD_TAGS&&(Te===Mt&&(Te=L0(Te)),ye(Te,A.ADD_TAGS,Ue)),A.ADD_ATTR&&(J===Ze&&(J=L0(J)),ye(J,A.ADD_ATTR,Ue)),A.ADD_URI_SAFE_ATTR&&ye(A0,A.ADD_URI_SAFE_ATTR,Ue),A.FORBID_CONTENTS&&(Dt===S0&&(Dt=L0(Dt)),ye(Dt,A.FORBID_CONTENTS,Ue)),jt&&(Te["#text"]=!0),it&&ye(Te,["html","head","body"]),Te.table&&(ye(Te,["tbody"]),delete Ve.tbody),A.TRUSTED_TYPES_POLICY){if(typeof A.TRUSTED_TYPES_POLICY.createHTML!="function")throw nr('TRUSTED_TYPES_POLICY configuration option must provide a "createHTML" hook.');if(typeof A.TRUSTED_TYPES_POLICY.createScriptURL!="function")throw nr('TRUSTED_TYPES_POLICY configuration option must provide a "createScriptURL" hook.');_=A.TRUSTED_TYPES_POLICY,N=_.createHTML("")}else _===void 0&&(_=jo(V,o)),_!==null&&typeof N=="string"&&(N=_.createHTML(""));nt&&nt(A),bt=A}},tt=ye({},["mi","mo","mn","ms","mtext"]),yt=ye({},["foreignobject","desc","title","annotation-xml"]),Nt=ye({},["title","style","font","a","script"]),Kt=ye({},Dn);ye(Kt,Nn),ye(Kt,Lo);const z0=ye({},Rn);ye(z0,Oo);const Hr=function(A){let P=C(A);(!P||!P.tagName)&&(P={namespaceURI:Zt,tagName:"template"});const $=Fr(A.tagName),we=Fr(P.tagName);return Ee[A.namespaceURI]?A.namespaceURI===T0?P.namespaceURI===ht?$==="svg":P.namespaceURI===m0?$==="svg"&&(we==="annotation-xml"||tt[we]):!!Kt[$]:A.namespaceURI===m0?P.namespaceURI===ht?$==="math":P.namespaceURI===T0?$==="math"&&yt[we]:!!z0[$]:A.namespaceURI===ht?P.namespaceURI===T0&&!yt[we]||P.namespaceURI===m0&&!tt[we]?!1:!z0[$]&&(Nt[$]||!Kt[$]):!!(Je==="application/xhtml+xml"&&Ee[A.namespaceURI]):!1},Ot=function(A){rr(i.removed,{element:A});try{A.parentNode.removeChild(A)}catch{A.remove()}},$0=function(A,P){try{rr(i.removed,{attribute:P.getAttributeNode(A),from:P})}catch{rr(i.removed,{attribute:null,from:P})}if(P.removeAttribute(A),A==="is"&&!J[A])if(Wt||k0)try{Ot(P)}catch{}else try{P.setAttribute(A,"")}catch{}},d0=function(A){let P,$;if(Lt)A=""+A;else{const qe=No(A,/^[\r\n\t ]+/);$=qe&&qe[0]}Je==="application/xhtml+xml"&&Zt===ht&&(A=''+A+"");const we=_?_.createHTML(A):A;if(Zt===ht)try{P=new ne().parseFromString(we,Je)}catch{}if(!P||!P.documentElement){P=D.createDocument(Zt,"template",null);try{P.documentElement.innerHTML=M0?N:we}catch{}}const k=P.body||P.documentElement;return A&&$&&k.insertBefore(m.createTextNode($),k.childNodes[0]||null),Zt===ht?re.call(P,it?"html":"body")[0]:it?P.documentElement:k},Ne=function(A){return 
I.call(A.ownerDocument||A,A,L.SHOW_ELEMENT|L.SHOW_COMMENT|L.SHOW_TEXT,null,!1)},l=function(A){return A instanceof K&&(typeof A.nodeName!="string"||typeof A.textContent!="string"||typeof A.removeChild!="function"||!(A.attributes instanceof G)||typeof A.removeAttribute!="function"||typeof A.setAttribute!="function"||typeof A.namespaceURI!="string"||typeof A.insertBefore!="function"||typeof A.hasChildNodes!="function")},h=function(A){return typeof w=="object"?A instanceof w:A&&typeof A=="object"&&typeof A.nodeType=="number"&&typeof A.nodeName=="string"},H=function(A,P,$){U[A]&&Do(U[A],we=>{we.call(i,P,$,bt)})},f=function(A){let P;if(H("beforeSanitizeElements",A,null),l(A))return Ot(A),!0;const $=Ue(A.nodeName);if(H("uponSanitizeElement",A,{tagName:$,allowedTags:Te}),A.hasChildNodes()&&!h(A.firstElementChild)&&(!h(A.content)||!h(A.content.firstElementChild))&&ft(/<[/\w]/g,A.innerHTML)&&ft(/<[/\w]/g,A.textContent))return Ot(A),!0;if(!Te[$]||Ve[$]){if(!Ve[$]&&Be($)&&(Fe.tagNameCheck instanceof RegExp&&ft(Fe.tagNameCheck,$)||Fe.tagNameCheck instanceof Function&&Fe.tagNameCheck($)))return!1;if(jt&&!Dt[$]){const we=C(A)||A.parentNode,k=q(A)||A.childNodes;if(k&&we){const qe=k.length;for(let M=qe-1;M>=0;--M)we.insertBefore(ge(k[M],!0),le(A))}}return Ot(A),!0}return A instanceof z&&!Hr(A)||($==="noscript"||$==="noembed")&&ft(/<\/no(script|embed)/i,A.innerHTML)?(Ot(A),!0):(gt&&A.nodeType===3&&(P=A.textContent,P=Et(P,fe," "),P=Et(P,se," "),P=Et(P,ze," "),A.textContent!==P&&(rr(i.removed,{element:A.cloneNode()}),A.textContent=P)),H("afterSanitizeElements",A,null),!1)},S=function(A,P,$){if(ar&&(P==="id"||P==="name")&&($ in m||$ in j0))return!1;if(!(Ct&&!pt[P]&&ft(Pe,P))){if(!(at&&ft(V0,P))){if(!J[P]||pt[P]){if(!(Be(A)&&(Fe.tagNameCheck instanceof RegExp&&ft(Fe.tagNameCheck,A)||Fe.tagNameCheck instanceof Function&&Fe.tagNameCheck(A))&&(Fe.attributeNameCheck instanceof RegExp&&ft(Fe.attributeNameCheck,P)||Fe.attributeNameCheck instanceof Function&&Fe.attributeNameCheck(P))||P==="is"&&Fe.allowCustomizedBuiltInElements&&(Fe.tagNameCheck instanceof RegExp&&ft(Fe.tagNameCheck,$)||Fe.tagNameCheck instanceof Function&&Fe.tagNameCheck($))))return!1}else if(!A0[P]){if(!ft(He,Et($,o0,""))){if(!((P==="src"||P==="xlink:href"||P==="href")&&A!=="script"&&Ro($,"data:")===0&&ir[A])){if(!(Vt&&!ft(W0,Et($,o0,"")))){if($)return!1}}}}}}return!0},Be=function(A){return A.indexOf("-")>0},te=function(A){let P,$,we,k;H("beforeSanitizeAttributes",A,null);const{attributes:qe}=A;if(!qe)return;const M={attrName:"",attrValue:"",keepAttr:!0,allowedAttributes:J};for(k=qe.length;k--;){P=qe[k];const{name:mt,namespaceURI:_0}=P;if($=mt==="value"?P.value:Fo(P.value),we=Ue(mt),M.attrName=we,M.attrValue=$,M.keepAttr=!0,M.forceKeepAttr=void 0,H("uponSanitizeAttribute",A,M),$=M.attrValue,M.forceKeepAttr||($0(mt,A),!M.keepAttr))continue;if(!u0&&ft(/\/>/i,$)){$0(mt,A);continue}gt&&($=Et($,fe," "),$=Et($,se," "),$=Et($,ze," "));const E0=Ue(A.nodeName);if(S(E0,we,$)){if(Yt&&(we==="id"||we==="name")&&($0(mt,A),$=vt+$),_&&typeof V=="object"&&typeof V.getAttributeType=="function"&&!_0)switch(V.getAttributeType(E0,we)){case"TrustedHTML":{$=_.createHTML($);break}case"TrustedScriptURL":{$=_.createScriptURL($);break}}try{_0?A.setAttributeNS(_0,mt,$):A.setAttribute(mt,$),Si(i.removed)}catch{}}}H("afterSanitizeAttributes",A,null)},Ke=function oe(A){let P;const $=Ne(A);for(H("beforeSanitizeShadowDOM",A,null);P=$.nextNode();)H("uponSanitizeShadowNode",P,null),!f(P)&&(P.content instanceof p&&oe(P.content),te(P));H("afterSanitizeShadowDOM",A,null)};return 
i.sanitize=function(oe){let A=arguments.length>1&&arguments[1]!==void 0?arguments[1]:{},P,$,we,k;if(M0=!oe,M0&&(oe=""),typeof oe!="string"&&!h(oe))if(typeof oe.toString=="function"){if(oe=oe.toString(),typeof oe!="string")throw nr("dirty is not a string, aborting")}else throw nr("toString is not a function");if(!i.isSupported)return oe;if(c0||X0(A),i.removed=[],typeof oe=="string"&&(Xt=!1),Xt){if(oe.nodeName){const mt=Ue(oe.nodeName);if(!Te[mt]||Ve[mt])throw nr("root node is forbidden and cannot be sanitized in-place")}}else if(oe instanceof w)P=d0(""),$=P.ownerDocument.importNode(oe,!0),$.nodeType===1&&$.nodeName==="BODY"||$.nodeName==="HTML"?P=$:P.appendChild($);else{if(!Wt&&!gt&&!it&&oe.indexOf("<")===-1)return _&&h0?_.createHTML(oe):oe;if(P=d0(oe),!P)return Wt?null:h0?N:""}P&&Lt&&Ot(P.firstChild);const qe=Ne(Xt?oe:P);for(;we=qe.nextNode();)f(we)||(we.content instanceof p&&Ke(we.content),te(we));if(Xt)return oe;if(Wt){if(k0)for(k=j.call(P.ownerDocument);P.firstChild;)k.appendChild(P.firstChild);else k=P;return(J.shadowroot||J.shadowrootmod)&&(k=Z.call(s,k,!0)),k}let M=it?P.outerHTML:P.innerHTML;return it&&Te["!doctype"]&&P.ownerDocument&&P.ownerDocument.doctype&&P.ownerDocument.doctype.name&&ft(cl,P.ownerDocument.doctype.name)&&(M=" -`+M),gt&&(M=Et(M,fe," "),M=Et(M,se," "),M=Et(M,ze," ")),_&&h0?_.createHTML(M):M},i.setConfig=function(oe){X0(oe),c0=!0},i.clearConfig=function(){bt=null,c0=!1},i.isValidAttribute=function(oe,A,P){bt||X0({});const $=Ue(oe),we=Ue(A);return S($,we,P)},i.addHook=function(oe,A){typeof A=="function"&&(U[oe]=U[oe]||[],rr(U[oe],A))},i.removeHook=function(oe){if(U[oe])return Si(U[oe])},i.removeHooks=function(oe){U[oe]&&(U[oe]=[])},i.removeAllHooks=function(){U={}},i}var Ei=hl(),ml={exports:{}},In={exports:{}},Bi;function Xo(){return Bi||(Bi=1,function(v,i){(function(o,m){v.exports=m()})(typeof self<"u"?self:Ir,function(){return function(){var s={};(function(){s.d=function(u,e){for(var t in e)s.o(e,t)&&!s.o(u,t)&&Object.defineProperty(u,t,{enumerable:!0,get:e[t]})}})(),function(){s.o=function(u,e){return Object.prototype.hasOwnProperty.call(u,e)}}();var o={};s.d(o,{default:function(){return Ds}});var m=function u(e,t){this.position=void 0;var r="KaTeX parse error: "+e,n,a=t&&t.loc;if(a&&a.start<=a.end){var c=a.lexer.input;n=a.start;var d=a.end;n===c.length?r+=" at end of input: ":r+=" at position "+(n+1)+": ";var g=c.slice(n,d).replace(/[^]/g,"$&̲"),y;n>15?y="…"+c.slice(n-15,n):y=c.slice(0,n);var T;d+15":">","<":"<",'"':""","'":"'"},K=/[&><"']/g;function ne(u){return String(u).replace(K,function(e){return G[e]})}var V=function u(e){return e.type==="ordgroup"||e.type==="color"?e.body.length===1?u(e.body[0]):e:e.type==="font"?u(e.body):e},xe=function(e){var t=V(e);return t.type==="mathord"||t.type==="textord"||t.type==="atom"},ge=function(e){if(!e)throw new Error("Expected non-null, but got "+String(e));return e},le=function(e){var t=/^\s*([^\\/#]*?)(?::|�*58|�*3a)/i.exec(e);return t!=null?t[1]:"_relative"},q={contains:x,deflt:w,escape:ne,hyphenate:L,getBaseElem:V,isCharacterBox:xe,protocolFromUrl:le},C={displayMode:{type:"boolean",description:"Render math in display mode, which puts the math in display style (so \\int and \\sum are large, for example), and centers the math on the page on its own line.",cli:"-d, --display-mode"},output:{type:{enum:["htmlAndMathml","html","mathml"]},description:"Determines the markup language of the output.",cli:"-F, --format "},leqno:{type:"boolean",description:"Render display math in leqno style (left-justified 
tags)."},fleqn:{type:"boolean",description:"Render display math flush left."},throwOnError:{type:"boolean",default:!0,cli:"-t, --no-throw-on-error",cliDescription:"Render errors (in the color given by --error-color) instead of throwing a ParseError exception when encountering an error."},errorColor:{type:"string",default:"#cc0000",cli:"-c, --error-color ",cliDescription:"A color string given in the format 'rgb' or 'rrggbb' (no #). This option determines the color of errors rendered by the -t option.",cliProcessor:function(e){return"#"+e}},macros:{type:"object",cli:"-m, --macro ",cliDescription:"Define custom macro of the form '\\foo:expansion' (use multiple -m arguments for multiple macros).",cliDefault:[],cliProcessor:function(e,t){return t.push(e),t}},minRuleThickness:{type:"number",description:"Specifies a minimum thickness, in ems, for fraction lines, `\\sqrt` top lines, `{array}` vertical lines, `\\hline`, `\\hdashline`, `\\underline`, `\\overline`, and the borders of `\\fbox`, `\\boxed`, and `\\fcolorbox`.",processor:function(e){return Math.max(0,e)},cli:"--min-rule-thickness ",cliProcessor:parseFloat},colorIsTextColor:{type:"boolean",description:"Makes \\color behave like LaTeX's 2-argument \\textcolor, instead of LaTeX's one-argument \\color mode change.",cli:"-b, --color-is-text-color"},strict:{type:[{enum:["warn","ignore","error"]},"boolean","function"],description:"Turn on strict / LaTeX faithfulness mode, which throws an error if the input uses features that are not supported by LaTeX.",cli:"-S, --strict",cliDefault:!1},trust:{type:["boolean","function"],description:"Trust the input, enabling all HTML features such as \\url.",cli:"-T, --trust"},maxSize:{type:"number",default:1/0,description:"If non-zero, all user-specified sizes, e.g. in \\rule{500em}{500em}, will be capped to maxSize ems. Otherwise, elements and spaces can be arbitrarily large",processor:function(e){return Math.max(0,e)},cli:"-s, --max-size ",cliProcessor:parseInt},maxExpand:{type:"number",default:1e3,description:"Limit the number of macro expansions to the specified number, to prevent e.g. infinite macro loops. 
If set to Infinity, the macro expander will try to fully expand as in LaTeX.",processor:function(e){return Math.max(0,e)},cli:"-e, --max-expand ",cliProcessor:function(e){return e==="Infinity"?1/0:parseInt(e)}},globalGroup:{type:"boolean",cli:!1}};function _(u){if(u.default)return u.default;var e=u.type,t=Array.isArray(e)?e[0]:e;if(typeof t!="string")return t.enum[0];switch(t){case"boolean":return!1;case"string":return"";case"number":return 0;case"object":return{}}}var N=function(){function u(t){this.displayMode=void 0,this.output=void 0,this.leqno=void 0,this.fleqn=void 0,this.throwOnError=void 0,this.errorColor=void 0,this.macros=void 0,this.minRuleThickness=void 0,this.colorIsTextColor=void 0,this.strict=void 0,this.trust=void 0,this.maxSize=void 0,this.maxExpand=void 0,this.globalGroup=void 0,t=t||{};for(var r in C)if(C.hasOwnProperty(r)){var n=C[r];this[r]=t[r]!==void 0?n.processor?n.processor(t[r]):t[r]:_(n)}}var e=u.prototype;return e.reportNonstrict=function(r,n,a){var c=this.strict;if(typeof c=="function"&&(c=c(r,n,a)),!(!c||c==="ignore")){if(c===!0||c==="error")throw new p("LaTeX-incompatible input and strict mode is set to 'error': "+(n+" ["+r+"]"),a);c==="warn"?typeof console<"u"&&console.warn("LaTeX-incompatible input and strict mode is set to 'warn': "+(n+" ["+r+"]")):typeof console<"u"&&console.warn("LaTeX-incompatible input and strict mode is set to "+("unrecognized '"+c+"': "+n+" ["+r+"]"))}},e.useStrictBehavior=function(r,n,a){var c=this.strict;if(typeof c=="function")try{c=c(r,n,a)}catch{c="error"}return!c||c==="ignore"?!1:c===!0||c==="error"?!0:c==="warn"?(typeof console<"u"&&console.warn("LaTeX-incompatible input and strict mode is set to 'warn': "+(n+" ["+r+"]")),!1):(typeof console<"u"&&console.warn("LaTeX-incompatible input and strict mode is set to "+("unrecognized '"+c+"': "+n+" ["+r+"]")),!1)},e.isTrusted=function(r){r.url&&!r.protocol&&(r.protocol=q.protocolFromUrl(r.url));var n=typeof this.trust=="function"?this.trust(r):this.trust;return!!n},u}(),D=function(){function u(t,r,n){this.id=void 0,this.size=void 0,this.cramped=void 0,this.id=t,this.size=r,this.cramped=n}var e=u.prototype;return e.sup=function(){return Pe[V0[this.id]]},e.sub=function(){return Pe[W0[this.id]]},e.fracNum=function(){return Pe[o0[this.id]]},e.fracDen=function(){return Pe[He[this.id]]},e.cramp=function(){return Pe[Te[this.id]]},e.text=function(){return Pe[Mt[this.id]]},e.isTight=function(){return this.size>=2},u}(),I=0,j=1,re=2,Z=3,U=4,fe=5,se=6,ze=7,Pe=[new D(I,0,!1),new D(j,0,!0),new D(re,1,!1),new D(Z,1,!0),new D(U,2,!1),new D(fe,2,!0),new D(se,3,!1),new D(ze,3,!0)],V0=[U,fe,U,fe,se,ze,se,ze],W0=[fe,fe,fe,fe,ze,ze,ze,ze],o0=[re,Z,U,fe,se,ze,se,ze],He=[Z,Z,fe,fe,ze,ze,ze,ze],Te=[j,j,Z,Z,fe,fe,ze,ze],Mt=[I,j,re,Z,re,Z,re,Z],J={DISPLAY:Pe[I],TEXT:Pe[re],SCRIPT:Pe[U],SCRIPTSCRIPT:Pe[se]},Ze=[{name:"latin",blocks:[[256,591],[768,879]]},{name:"cyrillic",blocks:[[1024,1279]]},{name:"armenian",blocks:[[1328,1423]]},{name:"brahmic",blocks:[[2304,4255]]},{name:"georgian",blocks:[[4256,4351]]},{name:"cjk",blocks:[[12288,12543],[19968,40879],[65280,65376]]},{name:"hangul",blocks:[[44032,55215]]}];function Fe(u){for(var e=0;e=n[0]&&u<=n[1])return t.name}return null}var Ve=[];Ze.forEach(function(u){return u.blocks.forEach(function(e){return Ve.push.apply(Ve,e)})});function pt(u){for(var e=0;e=Ve[e]&&u<=Ve[e+1])return!0;return!1}var at=80,Ct=function(e,t){return"M95,"+(622+e+t)+` -c-2.7,0,-7.17,-2.7,-13.5,-8c-5.8,-5.3,-9.5,-10,-9.5,-14 -c0,-2,0.3,-3.3,1,-4c1.3,-2.7,23.83,-20.7,67.5,-54 
-c44.2,-33.3,65.8,-50.3,66.5,-51c1.3,-1.3,3,-2,5,-2c4.7,0,8.7,3.3,12,10 -s173,378,173,378c0.7,0,35.3,-71,104,-213c68.7,-142,137.5,-285,206.5,-429 -c69,-144,104.5,-217.7,106.5,-221 -l`+e/2.075+" -"+e+` -c5.3,-9.3,12,-14,20,-14 -H400000v`+(40+e)+`H845.2724 -s-225.272,467,-225.272,467s-235,486,-235,486c-2.7,4.7,-9,7,-19,7 -c-6,0,-10,-1,-12,-3s-194,-422,-194,-422s-65,47,-65,47z -M`+(834+e)+" "+t+"h400000v"+(40+e)+"h-400000z"},Vt=function(e,t){return"M263,"+(601+e+t)+`c0.7,0,18,39.7,52,119 -c34,79.3,68.167,158.7,102.5,238c34.3,79.3,51.8,119.3,52.5,120 -c340,-704.7,510.7,-1060.3,512,-1067 -l`+e/2.084+" -"+e+` -c4.7,-7.3,11,-11,19,-11 -H40000v`+(40+e)+`H1012.3 -s-271.3,567,-271.3,567c-38.7,80.7,-84,175,-136,283c-52,108,-89.167,185.3,-111.5,232 -c-22.3,46.7,-33.8,70.3,-34.5,71c-4.7,4.7,-12.3,7,-23,7s-12,-1,-12,-1 -s-109,-253,-109,-253c-72.7,-168,-109.3,-252,-110,-252c-10.7,8,-22,16.7,-34,26 -c-22,17.3,-33.3,26,-34,26s-26,-26,-26,-26s76,-59,76,-59s76,-60,76,-60z -M`+(1001+e)+" "+t+"h400000v"+(40+e)+"h-400000z"},u0=function(e,t){return"M983 "+(10+e+t)+` -l`+e/3.13+" -"+e+` -c4,-6.7,10,-10,18,-10 H400000v`+(40+e)+` -H1013.1s-83.4,268,-264.1,840c-180.7,572,-277,876.3,-289,913c-4.7,4.7,-12.7,7,-24,7 -s-12,0,-12,0c-1.3,-3.3,-3.7,-11.7,-7,-25c-35.3,-125.3,-106.7,-373.3,-214,-744 -c-10,12,-21,25,-33,39s-32,39,-32,39c-6,-5.3,-15,-14,-27,-26s25,-30,25,-30 -c26.7,-32.7,52,-63,76,-91s52,-60,52,-60s208,722,208,722 -c56,-175.3,126.3,-397.3,211,-666c84.7,-268.7,153.8,-488.2,207.5,-658.5 -c53.7,-170.3,84.5,-266.8,92.5,-289.5z -M`+(1001+e)+" "+t+"h400000v"+(40+e)+"h-400000z"},gt=function(e,t){return"M424,"+(2398+e+t)+` -c-1.3,-0.7,-38.5,-172,-111.5,-514c-73,-342,-109.8,-513.3,-110.5,-514 -c0,-2,-10.7,14.3,-32,49c-4.7,7.3,-9.8,15.7,-15.5,25c-5.7,9.3,-9.8,16,-12.5,20 -s-5,7,-5,7c-4,-3.3,-8.3,-7.7,-13,-13s-13,-13,-13,-13s76,-122,76,-122s77,-121,77,-121 -s209,968,209,968c0,-2,84.7,-361.7,254,-1079c169.3,-717.3,254.7,-1077.7,256,-1081 -l`+e/4.223+" -"+e+`c4,-6.7,10,-10,18,-10 H400000 -v`+(40+e)+`H1014.6 -s-87.3,378.7,-272.6,1166c-185.3,787.3,-279.3,1182.3,-282,1185 -c-2,6,-10,9,-24,9 -c-8,0,-12,-0.7,-12,-2z M`+(1001+e)+" "+t+` -h400000v`+(40+e)+"h-400000z"},it=function(e,t){return"M473,"+(2713+e+t)+` -c339.3,-1799.3,509.3,-2700,510,-2702 l`+e/5.298+" -"+e+` -c3.3,-7.3,9.3,-11,18,-11 H400000v`+(40+e)+`H1017.7 -s-90.5,478,-276.2,1466c-185.7,988,-279.5,1483,-281.5,1485c-2,6,-10,9,-24,9 -c-8,0,-12,-0.7,-12,-2c0,-1.3,-5.3,-32,-16,-92c-50.7,-293.3,-119.7,-693.3,-207,-1200 -c0,-1.3,-5.3,8.7,-16,30c-10.7,21.3,-21.3,42.7,-32,64s-16,33,-16,33s-26,-26,-26,-26 -s76,-153,76,-153s77,-151,77,-151c0.7,0.7,35.7,202,105,604c67.3,400.7,102,602.7,104, -606zM`+(1001+e)+" "+t+"h400000v"+(40+e)+"H1017.7z"},c0=function(e){var t=e/2;return"M400000 "+e+" H0 L"+t+" 0 l65 45 L145 "+(e-80)+" H400000z"},Lt=function(e,t,r){var n=r-54-t-e;return"M702 "+(e+t)+"H400000"+(40+e)+` -H742v`+n+`l-4 4-4 4c-.667.7 -2 1.5-4 2.5s-4.167 1.833-6.5 2.5-5.5 1-9.5 1 -h-12l-28-84c-16.667-52-96.667 -294.333-240-727l-212 -643 -85 170 -c-4-3.333-8.333-7.667-13 -13l-13-13l77-155 77-156c66 199.333 139 419.667 -219 661 l218 661zM702 `+t+"H400000v"+(40+e)+"H742z"},Wt=function(e,t,r){t=1e3*t;var n="";switch(e){case"sqrtMain":n=Ct(t,at);break;case"sqrtSize1":n=Vt(t,at);break;case"sqrtSize2":n=u0(t,at);break;case"sqrtSize3":n=gt(t,at);break;case"sqrtSize4":n=it(t,at);break;case"sqrtTall":n=Lt(t,at,r)}return n},k0=function(e,t){switch(e){case"⎜":return"M291 0 H417 V"+t+" H291z M291 0 H417 V"+t+" H291z";case"∣":return"M145 0 H188 V"+t+" H145z M145 0 H188 V"+t+" 
H145z";case"∥":return"M145 0 H188 V"+t+" H145z M145 0 H188 V"+t+" H145z"+("M367 0 H410 V"+t+" H367z M367 0 H410 V"+t+" H367z");case"⎟":return"M457 0 H583 V"+t+" H457z M457 0 H583 V"+t+" H457z";case"⎢":return"M319 0 H403 V"+t+" H319z M319 0 H403 V"+t+" H319z";case"⎥":return"M263 0 H347 V"+t+" H263z M263 0 H347 V"+t+" H263z";case"⎪":return"M384 0 H504 V"+t+" H384z M384 0 H504 V"+t+" H384z";case"⏐":return"M312 0 H355 V"+t+" H312z M312 0 H355 V"+t+" H312z";case"‖":return"M257 0 H300 V"+t+" H257z M257 0 H300 V"+t+" H257z"+("M478 0 H521 V"+t+" H478z M478 0 H521 V"+t+" H478z");default:return""}},h0={doubleleftarrow:`M262 157 -l10-10c34-36 62.7-77 86-123 3.3-8 5-13.3 5-16 0-5.3-6.7-8-20-8-7.3 - 0-12.2.5-14.5 1.5-2.3 1-4.8 4.5-7.5 10.5-49.3 97.3-121.7 169.3-217 216-28 - 14-57.3 25-88 33-6.7 2-11 3.8-13 5.5-2 1.7-3 4.2-3 7.5s1 5.8 3 7.5 -c2 1.7 6.3 3.5 13 5.5 68 17.3 128.2 47.8 180.5 91.5 52.3 43.7 93.8 96.2 124.5 - 157.5 9.3 8 15.3 12.3 18 13h6c12-.7 18-4 18-10 0-2-1.7-7-5-15-23.3-46-52-87 --86-123l-10-10h399738v-40H218c328 0 0 0 0 0l-10-8c-26.7-20-65.7-43-117-69 2.7 --2 6-3.7 10-5 36.7-16 72.3-37.3 107-64l10-8h399782v-40z -m8 0v40h399730v-40zm0 194v40h399730v-40z`,doublerightarrow:`M399738 392l --10 10c-34 36-62.7 77-86 123-3.3 8-5 13.3-5 16 0 5.3 6.7 8 20 8 7.3 0 12.2-.5 - 14.5-1.5 2.3-1 4.8-4.5 7.5-10.5 49.3-97.3 121.7-169.3 217-216 28-14 57.3-25 88 --33 6.7-2 11-3.8 13-5.5 2-1.7 3-4.2 3-7.5s-1-5.8-3-7.5c-2-1.7-6.3-3.5-13-5.5-68 --17.3-128.2-47.8-180.5-91.5-52.3-43.7-93.8-96.2-124.5-157.5-9.3-8-15.3-12.3-18 --13h-6c-12 .7-18 4-18 10 0 2 1.7 7 5 15 23.3 46 52 87 86 123l10 10H0v40h399782 -c-328 0 0 0 0 0l10 8c26.7 20 65.7 43 117 69-2.7 2-6 3.7-10 5-36.7 16-72.3 37.3 --107 64l-10 8H0v40zM0 157v40h399730v-40zm0 194v40h399730v-40z`,leftarrow:`M400000 241H110l3-3c68.7-52.7 113.7-120 - 135-202 4-14.7 6-23 6-25 0-7.3-7-11-21-11-8 0-13.2.8-15.5 2.5-2.3 1.7-4.2 5.8 --5.5 12.5-1.3 4.7-2.7 10.3-4 17-12 48.7-34.8 92-68.5 130S65.3 228.3 18 247 -c-10 4-16 7.7-18 11 0 8.7 6 14.3 18 17 47.3 18.7 87.8 47 121.5 85S196 441.3 208 - 490c.7 2 1.3 5 2 9s1.2 6.7 1.5 8c.3 1.3 1 3.3 2 6s2.2 4.5 3.5 5.5c1.3 1 3.3 - 1.8 6 2.5s6 1 10 1c14 0 21-3.7 21-11 0-2-2-10.3-6-25-20-79.3-65-146.7-135-202 - l-3-3h399890zM100 241v40h399900v-40z`,leftbrace:`M6 548l-6-6v-35l6-11c56-104 135.3-181.3 238-232 57.3-28.7 117 --45 179-50h399577v120H403c-43.3 7-81 15-113 26-100.7 33-179.7 91-237 174-2.7 - 5-6 9-10 13-.7 1-7.3 1-20 1H6z`,leftbraceunder:`M0 6l6-6h17c12.688 0 19.313.3 20 1 4 4 7.313 8.3 10 13 - 35.313 51.3 80.813 93.8 136.5 127.5 55.688 33.7 117.188 55.8 184.5 66.5.688 - 0 2 .3 4 1 18.688 2.7 76 4.3 172 5h399450v120H429l-6-1c-124.688-8-235-61.7 --331-161C60.687 138.7 32.312 99.3 7 54L0 41V6z`,leftgroup:`M400000 80 -H435C64 80 168.3 229.4 21 260c-5.9 1.2-18 0-18 0-2 0-3-1-3-3v-38C76 61 257 0 - 435 0h399565z`,leftgroupunder:`M400000 262 -H435C64 262 168.3 112.6 21 82c-5.9-1.2-18 0-18 0-2 0-3 1-3 3v38c76 158 257 219 - 435 219h399565z`,leftharpoon:`M0 267c.7 5.3 3 10 7 14h399993v-40H93c3.3 --3.3 10.2-9.5 20.5-18.5s17.8-15.8 22.5-20.5c50.7-52 88-110.3 112-175 4-11.3 5 --18.3 3-21-1.3-4-7.3-6-18-6-8 0-13 .7-15 2s-4.7 6.7-8 16c-42 98.7-107.3 174.7 --196 228-6.7 4.7-10.7 8-12 10-1.3 2-2 5.7-2 11zm100-26v40h399900v-40z`,leftharpoonplus:`M0 267c.7 5.3 3 10 7 14h399993v-40H93c3.3-3.3 10.2-9.5 - 20.5-18.5s17.8-15.8 22.5-20.5c50.7-52 88-110.3 112-175 4-11.3 5-18.3 3-21-1.3 --4-7.3-6-18-6-8 0-13 .7-15 2s-4.7 6.7-8 16c-42 98.7-107.3 174.7-196 228-6.7 4.7 --10.7 8-12 10-1.3 2-2 5.7-2 11zm100-26v40h399900v-40zM0 435v40h400000v-40z -m0 
0v40h400000v-40z`,leftharpoondown:`M7 241c-4 4-6.333 8.667-7 14 0 5.333.667 9 2 11s5.333 - 5.333 12 10c90.667 54 156 130 196 228 3.333 10.667 6.333 16.333 9 17 2 .667 5 - 1 9 1h5c10.667 0 16.667-2 18-6 2-2.667 1-9.667-3-21-32-87.333-82.667-157.667 --152-211l-3-3h399907v-40zM93 281 H400000 v-40L7 241z`,leftharpoondownplus:`M7 435c-4 4-6.3 8.7-7 14 0 5.3.7 9 2 11s5.3 5.3 12 - 10c90.7 54 156 130 196 228 3.3 10.7 6.3 16.3 9 17 2 .7 5 1 9 1h5c10.7 0 16.7 --2 18-6 2-2.7 1-9.7-3-21-32-87.3-82.7-157.7-152-211l-3-3h399907v-40H7zm93 0 -v40h399900v-40zM0 241v40h399900v-40zm0 0v40h399900v-40z`,lefthook:`M400000 281 H103s-33-11.2-61-33.5S0 197.3 0 164s14.2-61.2 42.5 --83.5C70.8 58.2 104 47 142 47 c16.7 0 25 6.7 25 20 0 12-8.7 18.7-26 20-40 3.3 --68.7 15.7-86 37-10 12-15 25.3-15 40 0 22.7 9.8 40.7 29.5 54 19.7 13.3 43.5 21 - 71.5 23h399859zM103 281v-40h399897v40z`,leftlinesegment:`M40 281 V428 H0 V94 H40 V241 H400000 v40z -M40 281 V428 H0 V94 H40 V241 H400000 v40z`,leftmapsto:`M40 281 V448H0V74H40V241H400000v40z -M40 281 V448H0V74H40V241H400000v40z`,leftToFrom:`M0 147h400000v40H0zm0 214c68 40 115.7 95.7 143 167h22c15.3 0 23 --.3 23-1 0-1.3-5.3-13.7-16-37-18-35.3-41.3-69-70-101l-7-8h399905v-40H95l7-8 -c28.7-32 52-65.7 70-101 10.7-23.3 16-35.7 16-37 0-.7-7.7-1-23-1h-22C115.7 265.3 - 68 321 0 361zm0-174v-40h399900v40zm100 154v40h399900v-40z`,longequal:`M0 50 h400000 v40H0z m0 194h40000v40H0z -M0 50 h400000 v40H0z m0 194h40000v40H0z`,midbrace:`M200428 334 -c-100.7-8.3-195.3-44-280-108-55.3-42-101.7-93-139-153l-9-14c-2.7 4-5.7 8.7-9 14 --53.3 86.7-123.7 153-211 199-66.7 36-137.3 56.3-212 62H0V214h199568c178.3-11.7 - 311.7-78.3 403-201 6-8 9.7-12 11-12 .7-.7 6.7-1 18-1s17.3.3 18 1c1.3 0 5 4 11 - 12 44.7 59.3 101.3 106.3 170 141s145.3 54.3 229 60h199572v120z`,midbraceunder:`M199572 214 -c100.7 8.3 195.3 44 280 108 55.3 42 101.7 93 139 153l9 14c2.7-4 5.7-8.7 9-14 - 53.3-86.7 123.7-153 211-199 66.7-36 137.3-56.3 212-62h199568v120H200432c-178.3 - 11.7-311.7 78.3-403 201-6 8-9.7 12-11 12-.7.7-6.7 1-18 1s-17.3-.3-18-1c-1.3 0 --5-4-11-12-44.7-59.3-101.3-106.3-170-141s-145.3-54.3-229-60H0V214z`,oiintSize1:`M512.6 71.6c272.6 0 320.3 106.8 320.3 178.2 0 70.8-47.7 177.6 --320.3 177.6S193.1 320.6 193.1 249.8c0-71.4 46.9-178.2 319.5-178.2z -m368.1 178.2c0-86.4-60.9-215.4-368.1-215.4-306.4 0-367.3 129-367.3 215.4 0 85.8 -60.9 214.8 367.3 214.8 307.2 0 368.1-129 368.1-214.8z`,oiintSize2:`M757.8 100.1c384.7 0 451.1 137.6 451.1 230 0 91.3-66.4 228.8 --451.1 228.8-386.3 0-452.7-137.5-452.7-228.8 0-92.4 66.4-230 452.7-230z -m502.4 230c0-111.2-82.4-277.2-502.4-277.2s-504 166-504 277.2 -c0 110 84 276 504 276s502.4-166 502.4-276z`,oiiintSize1:`M681.4 71.6c408.9 0 480.5 106.8 480.5 178.2 0 70.8-71.6 177.6 --480.5 177.6S202.1 320.6 202.1 249.8c0-71.4 70.5-178.2 479.3-178.2z -m525.8 178.2c0-86.4-86.8-215.4-525.7-215.4-437.9 0-524.7 129-524.7 215.4 0 -85.8 86.8 214.8 524.7 214.8 438.9 0 525.7-129 525.7-214.8z`,oiiintSize2:`M1021.2 53c603.6 0 707.8 165.8 707.8 277.2 0 110-104.2 275.8 --707.8 275.8-606 0-710.2-165.8-710.2-275.8C311 218.8 415.2 53 1021.2 53z -m770.4 277.1c0-131.2-126.4-327.6-770.5-327.6S248.4 198.9 248.4 330.1 -c0 130 128.8 326.4 772.7 326.4s770.5-196.4 770.5-326.4z`,rightarrow:`M0 241v40h399891c-47.3 35.3-84 78-110 128 --16.7 32-27.7 63.7-33 95 0 1.3-.2 2.7-.5 4-.3 1.3-.5 2.3-.5 3 0 7.3 6.7 11 20 - 11 8 0 13.2-.8 15.5-2.5 2.3-1.7 4.2-5.5 5.5-11.5 2-13.3 5.7-27 11-41 14.7-44.7 - 39-84.5 73-119.5s73.7-60.2 119-75.5c6-2 9-5.7 9-11s-3-9-9-11c-45.3-15.3-85 
--40.5-119-75.5s-58.3-74.8-73-119.5c-4.7-14-8.3-27.3-11-40-1.3-6.7-3.2-10.8-5.5 --12.5-2.3-1.7-7.5-2.5-15.5-2.5-14 0-21 3.7-21 11 0 2 2 10.3 6 25 20.7 83.3 67 - 151.7 139 205zm0 0v40h399900v-40z`,rightbrace:`M400000 542l --6 6h-17c-12.7 0-19.3-.3-20-1-4-4-7.3-8.3-10-13-35.3-51.3-80.8-93.8-136.5-127.5 -s-117.2-55.8-184.5-66.5c-.7 0-2-.3-4-1-18.7-2.7-76-4.3-172-5H0V214h399571l6 1 -c124.7 8 235 61.7 331 161 31.3 33.3 59.7 72.7 85 118l7 13v35z`,rightbraceunder:`M399994 0l6 6v35l-6 11c-56 104-135.3 181.3-238 232-57.3 - 28.7-117 45-179 50H-300V214h399897c43.3-7 81-15 113-26 100.7-33 179.7-91 237 --174 2.7-5 6-9 10-13 .7-1 7.3-1 20-1h17z`,rightgroup:`M0 80h399565c371 0 266.7 149.4 414 180 5.9 1.2 18 0 18 0 2 0 - 3-1 3-3v-38c-76-158-257-219-435-219H0z`,rightgroupunder:`M0 262h399565c371 0 266.7-149.4 414-180 5.9-1.2 18 0 18 - 0 2 0 3 1 3 3v38c-76 158-257 219-435 219H0z`,rightharpoon:`M0 241v40h399993c4.7-4.7 7-9.3 7-14 0-9.3 --3.7-15.3-11-18-92.7-56.7-159-133.7-199-231-3.3-9.3-6-14.7-8-16-2-1.3-7-2-15-2 --10.7 0-16.7 2-18 6-2 2.7-1 9.7 3 21 15.3 42 36.7 81.8 64 119.5 27.3 37.7 58 - 69.2 92 94.5zm0 0v40h399900v-40z`,rightharpoonplus:`M0 241v40h399993c4.7-4.7 7-9.3 7-14 0-9.3-3.7-15.3-11 --18-92.7-56.7-159-133.7-199-231-3.3-9.3-6-14.7-8-16-2-1.3-7-2-15-2-10.7 0-16.7 - 2-18 6-2 2.7-1 9.7 3 21 15.3 42 36.7 81.8 64 119.5 27.3 37.7 58 69.2 92 94.5z -m0 0v40h399900v-40z m100 194v40h399900v-40zm0 0v40h399900v-40z`,rightharpoondown:`M399747 511c0 7.3 6.7 11 20 11 8 0 13-.8 15-2.5s4.7-6.8 - 8-15.5c40-94 99.3-166.3 178-217 13.3-8 20.3-12.3 21-13 5.3-3.3 8.5-5.8 9.5 --7.5 1-1.7 1.5-5.2 1.5-10.5s-2.3-10.3-7-15H0v40h399908c-34 25.3-64.7 57-92 95 --27.3 38-48.7 77.7-64 119-3.3 8.7-5 14-5 16zM0 241v40h399900v-40z`,rightharpoondownplus:`M399747 705c0 7.3 6.7 11 20 11 8 0 13-.8 - 15-2.5s4.7-6.8 8-15.5c40-94 99.3-166.3 178-217 13.3-8 20.3-12.3 21-13 5.3-3.3 - 8.5-5.8 9.5-7.5 1-1.7 1.5-5.2 1.5-10.5s-2.3-10.3-7-15H0v40h399908c-34 25.3 --64.7 57-92 95-27.3 38-48.7 77.7-64 119-3.3 8.7-5 14-5 16zM0 435v40h399900v-40z -m0-194v40h400000v-40zm0 0v40h400000v-40z`,righthook:`M399859 241c-764 0 0 0 0 0 40-3.3 68.7-15.7 86-37 10-12 15-25.3 - 15-40 0-22.7-9.8-40.7-29.5-54-19.7-13.3-43.5-21-71.5-23-17.3-1.3-26-8-26-20 0 --13.3 8.7-20 26-20 38 0 71 11.2 99 33.5 0 0 7 5.6 21 16.7 14 11.2 21 33.5 21 - 66.8s-14 61.2-42 83.5c-28 22.3-61 33.5-99 33.5L0 241z M0 281v-40h399859v40z`,rightlinesegment:`M399960 241 V94 h40 V428 h-40 V281 H0 v-40z -M399960 241 V94 h40 V428 h-40 V281 H0 v-40z`,rightToFrom:`M400000 167c-70.7-42-118-97.7-142-167h-23c-15.3 0-23 .3-23 - 1 0 1.3 5.3 13.7 16 37 18 35.3 41.3 69 70 101l7 8H0v40h399905l-7 8c-28.7 32 --52 65.7-70 101-10.7 23.3-16 35.7-16 37 0 .7 7.7 1 23 1h23c24-69.3 71.3-125 142 --167z M100 147v40h399900v-40zM0 341v40h399900v-40z`,twoheadleftarrow:`M0 167c68 40 - 115.7 95.7 143 167h22c15.3 0 23-.3 23-1 0-1.3-5.3-13.7-16-37-18-35.3-41.3-69 --70-101l-7-8h125l9 7c50.7 39.3 85 86 103 140h46c0-4.7-6.3-18.7-19-42-18-35.3 --40-67.3-66-96l-9-9h399716v-40H284l9-9c26-28.7 48-60.7 66-96 12.7-23.333 19 --37.333 19-42h-46c-18 54-52.3 100.7-103 140l-9 7H95l7-8c28.7-32 52-65.7 70-101 - 10.7-23.333 16-35.7 16-37 0-.7-7.7-1-23-1h-22C115.7 71.3 68 127 0 167z`,twoheadrightarrow:`M400000 167 -c-68-40-115.7-95.7-143-167h-22c-15.3 0-23 .3-23 1 0 1.3 5.3 13.7 16 37 18 35.3 - 41.3 69 70 101l7 8h-125l-9-7c-50.7-39.3-85-86-103-140h-46c0 4.7 6.3 18.7 19 42 - 18 35.3 40 67.3 66 96l9 9H0v40h399716l-9 9c-26 28.7-48 60.7-66 96-12.7 23.333 --19 37.333-19 42h46c18-54 52.3-100.7 103-140l9-7h125l-7 8c-28.7 32-52 65.7-70 - 101-10.7 
23.333-16 35.7-16 37 0 .7 7.7 1 23 1h22c27.3-71.3 75-127 143-167z`,tilde1:`M200 55.538c-77 0-168 73.953-177 73.953-3 0-7 --2.175-9-5.437L2 97c-1-2-2-4-2-6 0-4 2-7 5-9l20-12C116 12 171 0 207 0c86 0 - 114 68 191 68 78 0 168-68 177-68 4 0 7 2 9 5l12 19c1 2.175 2 4.35 2 6.525 0 - 4.35-2 7.613-5 9.788l-19 13.05c-92 63.077-116.937 75.308-183 76.128 --68.267.847-113-73.952-191-73.952z`,tilde2:`M344 55.266c-142 0-300.638 81.316-311.5 86.418 --8.01 3.762-22.5 10.91-23.5 5.562L1 120c-1-2-1-3-1-4 0-5 3-9 8-10l18.4-9C160.9 - 31.9 283 0 358 0c148 0 188 122 331 122s314-97 326-97c4 0 8 2 10 7l7 21.114 -c1 2.14 1 3.21 1 4.28 0 5.347-3 9.626-7 10.696l-22.3 12.622C852.6 158.372 751 - 181.476 676 181.476c-149 0-189-126.21-332-126.21z`,tilde3:`M786 59C457 59 32 175.242 13 175.242c-6 0-10-3.457 --11-10.37L.15 138c-1-7 3-12 10-13l19.2-6.4C378.4 40.7 634.3 0 804.3 0c337 0 - 411.8 157 746.8 157 328 0 754-112 773-112 5 0 10 3 11 9l1 14.075c1 8.066-.697 - 16.595-6.697 17.492l-21.052 7.31c-367.9 98.146-609.15 122.696-778.15 122.696 - -338 0-409-156.573-744-156.573z`,tilde4:`M786 58C457 58 32 177.487 13 177.487c-6 0-10-3.345 --11-10.035L.15 143c-1-7 3-12 10-13l22-6.7C381.2 35 637.15 0 807.15 0c337 0 409 - 177 744 177 328 0 754-127 773-127 5 0 10 3 11 9l1 14.794c1 7.805-3 13.38-9 - 14.495l-20.7 5.574c-366.85 99.79-607.3 139.372-776.3 139.372-338 0-409 - -175.236-744-175.236z`,vec:`M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5 -3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11 -10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63 --1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1 --7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59 -H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359 -c-16-25.333-24-45-24-59z`,widehat1:`M529 0h5l519 115c5 1 9 5 9 10 0 1-1 2-1 3l-4 22 -c-1 5-5 9-11 9h-2L532 67 19 159h-2c-5 0-9-4-11-9l-5-22c-1-6 2-12 8-13z`,widehat2:`M1181 0h2l1171 176c6 0 10 5 10 11l-2 23c-1 6-5 10 --11 10h-1L1182 67 15 220h-1c-6 0-10-4-11-10l-2-23c-1-6 4-11 10-11z`,widehat3:`M1181 0h2l1171 236c6 0 10 5 10 11l-2 23c-1 6-5 10 --11 10h-1L1182 67 15 280h-1c-6 0-10-4-11-10l-2-23c-1-6 4-11 10-11z`,widehat4:`M1181 0h2l1171 296c6 0 10 5 10 11l-2 23c-1 6-5 10 --11 10h-1L1182 67 15 340h-1c-6 0-10-4-11-10l-2-23c-1-6 4-11 10-11z`,widecheck1:`M529,159h5l519,-115c5,-1,9,-5,9,-10c0,-1,-1,-2,-1,-3l-4,-22c-1, --5,-5,-9,-11,-9h-2l-512,92l-513,-92h-2c-5,0,-9,4,-11,9l-5,22c-1,6,2,12,8,13z`,widecheck2:`M1181,220h2l1171,-176c6,0,10,-5,10,-11l-2,-23c-1,-6,-5,-10, --11,-10h-1l-1168,153l-1167,-153h-1c-6,0,-10,4,-11,10l-2,23c-1,6,4,11,10,11z`,widecheck3:`M1181,280h2l1171,-236c6,0,10,-5,10,-11l-2,-23c-1,-6,-5,-10, --11,-10h-1l-1168,213l-1167,-213h-1c-6,0,-10,4,-11,10l-2,23c-1,6,4,11,10,11z`,widecheck4:`M1181,340h2l1171,-296c6,0,10,-5,10,-11l-2,-23c-1,-6,-5,-10, --11,-10h-1l-1168,273l-1167,-273h-1c-6,0,-10,4,-11,10l-2,23c-1,6,4,11,10,11z`,baraboveleftarrow:`M400000 620h-399890l3 -3c68.7 -52.7 113.7 -120 135 -202 -c4 -14.7 6 -23 6 -25c0 -7.3 -7 -11 -21 -11c-8 0 -13.2 0.8 -15.5 2.5 -c-2.3 1.7 -4.2 5.8 -5.5 12.5c-1.3 4.7 -2.7 10.3 -4 17c-12 48.7 -34.8 92 -68.5 130 -s-74.2 66.3 -121.5 85c-10 4 -16 7.7 -18 11c0 8.7 6 14.3 18 17c47.3 18.7 87.8 47 -121.5 85s56.5 81.3 68.5 130c0.7 2 1.3 5 2 9s1.2 6.7 1.5 8c0.3 1.3 1 3.3 2 6 -s2.2 4.5 3.5 5.5c1.3 1 3.3 1.8 6 2.5s6 1 10 1c14 0 21 -3.7 21 -11 -c0 -2 -2 -10.3 -6 -25c-20 -79.3 -65 -146.7 -135 -202l-3 -3h399890z -M100 620v40h399900v-40z M0 241v40h399900v-40zM0 
241v40h399900v-40z`,rightarrowabovebar:`M0 241v40h399891c-47.3 35.3-84 78-110 128-16.7 32 --27.7 63.7-33 95 0 1.3-.2 2.7-.5 4-.3 1.3-.5 2.3-.5 3 0 7.3 6.7 11 20 11 8 0 -13.2-.8 15.5-2.5 2.3-1.7 4.2-5.5 5.5-11.5 2-13.3 5.7-27 11-41 14.7-44.7 39 --84.5 73-119.5s73.7-60.2 119-75.5c6-2 9-5.7 9-11s-3-9-9-11c-45.3-15.3-85-40.5 --119-75.5s-58.3-74.8-73-119.5c-4.7-14-8.3-27.3-11-40-1.3-6.7-3.2-10.8-5.5 --12.5-2.3-1.7-7.5-2.5-15.5-2.5-14 0-21 3.7-21 11 0 2 2 10.3 6 25 20.7 83.3 67 -151.7 139 205zm96 379h399894v40H0zm0 0h399904v40H0z`,baraboveshortleftharpoon:`M507,435c-4,4,-6.3,8.7,-7,14c0,5.3,0.7,9,2,11 -c1.3,2,5.3,5.3,12,10c90.7,54,156,130,196,228c3.3,10.7,6.3,16.3,9,17 -c2,0.7,5,1,9,1c0,0,5,0,5,0c10.7,0,16.7,-2,18,-6c2,-2.7,1,-9.7,-3,-21 -c-32,-87.3,-82.7,-157.7,-152,-211c0,0,-3,-3,-3,-3l399351,0l0,-40 -c-398570,0,-399437,0,-399437,0z M593 435 v40 H399500 v-40z -M0 281 v-40 H399908 v40z M0 281 v-40 H399908 v40z`,rightharpoonaboveshortbar:`M0,241 l0,40c399126,0,399993,0,399993,0 -c4.7,-4.7,7,-9.3,7,-14c0,-9.3,-3.7,-15.3,-11,-18c-92.7,-56.7,-159,-133.7,-199, --231c-3.3,-9.3,-6,-14.7,-8,-16c-2,-1.3,-7,-2,-15,-2c-10.7,0,-16.7,2,-18,6 -c-2,2.7,-1,9.7,3,21c15.3,42,36.7,81.8,64,119.5c27.3,37.7,58,69.2,92,94.5z -M0 241 v40 H399908 v-40z M0 475 v-40 H399500 v40z M0 475 v-40 H399500 v40z`,shortbaraboveleftharpoon:`M7,435c-4,4,-6.3,8.7,-7,14c0,5.3,0.7,9,2,11 -c1.3,2,5.3,5.3,12,10c90.7,54,156,130,196,228c3.3,10.7,6.3,16.3,9,17c2,0.7,5,1,9, -1c0,0,5,0,5,0c10.7,0,16.7,-2,18,-6c2,-2.7,1,-9.7,-3,-21c-32,-87.3,-82.7,-157.7, --152,-211c0,0,-3,-3,-3,-3l399907,0l0,-40c-399126,0,-399993,0,-399993,0z -M93 435 v40 H400000 v-40z M500 241 v40 H400000 v-40z M500 241 v40 H400000 v-40z`,shortrightharpoonabovebar:`M53,241l0,40c398570,0,399437,0,399437,0 -c4.7,-4.7,7,-9.3,7,-14c0,-9.3,-3.7,-15.3,-11,-18c-92.7,-56.7,-159,-133.7,-199, --231c-3.3,-9.3,-6,-14.7,-8,-16c-2,-1.3,-7,-2,-15,-2c-10.7,0,-16.7,2,-18,6 -c-2,2.7,-1,9.7,3,21c15.3,42,36.7,81.8,64,119.5c27.3,37.7,58,69.2,92,94.5z -M500 241 v40 H399408 v-40z M500 435 v40 H400000 v-40z`},ar=function(e,t){switch(e){case"lbrack":return"M403 1759 V84 H666 V0 H319 V1759 v"+t+` v1759 h347 v-84 -H403z M403 1759 V0 H319 V1759 v`+t+" v1759 h84z";case"rbrack":return"M347 1759 V0 H0 V84 H263 V1759 v"+t+` v1759 H0 v84 H347z -M347 1759 V0 H263 V1759 v`+t+" v1759 h84z";case"vert":return"M145 15 v585 v"+t+` v585 c2.667,10,9.667,15,21,15 -c10,0,16.667,-5,20,-15 v-585 v`+-t+` v-585 c-2.667,-10,-9.667,-15,-21,-15 -c-10,0,-16.667,5,-20,15z M188 15 H145 v585 v`+t+" v585 h43z";case"doublevert":return"M145 15 v585 v"+t+` v585 c2.667,10,9.667,15,21,15 -c10,0,16.667,-5,20,-15 v-585 v`+-t+` v-585 c-2.667,-10,-9.667,-15,-21,-15 -c-10,0,-16.667,5,-20,15z M188 15 H145 v585 v`+t+` v585 h43z -M367 15 v585 v`+t+` v585 c2.667,10,9.667,15,21,15 -c10,0,16.667,-5,20,-15 v-585 v`+-t+` v-585 c-2.667,-10,-9.667,-15,-21,-15 -c-10,0,-16.667,5,-20,15z M410 15 H367 v585 v`+t+" v585 h43z";case"lfloor":return"M319 602 V0 H403 V602 v"+t+` v1715 h263 v84 H319z -MM319 602 V0 H403 V602 v`+t+" v1715 H319z";case"rfloor":return"M319 602 V0 H403 V602 v"+t+` v1799 H0 v-84 H319z -MM319 602 V0 H403 V602 v`+t+" v1715 H319z";case"lceil":return"M403 1759 V84 H666 V0 H319 V1759 v"+t+` v602 h84z -M403 1759 V0 H319 V1759 v`+t+" v602 h84z";case"rceil":return"M347 1759 V0 H0 V84 H263 V1759 v"+t+` v602 h84z -M347 1759 V0 h-84 V1759 v`+t+" v602 h84z";case"lparen":return`M863,9c0,-2,-2,-5,-6,-9c0,0,-17,0,-17,0c-12.7,0,-19.3,0.3,-20,1 -c-5.3,5.3,-10.3,11,-15,17c-242.7,294.7,-395.3,682,-458,1162c-21.3,163.3,-33.3,349, --36,557 
l0,`+(t+84)+`c0.2,6,0,26,0,60c2,159.3,10,310.7,24,454c53.3,528,210, -949.7,470,1265c4.7,6,9.7,11.7,15,17c0.7,0.7,7,1,19,1c0,0,18,0,18,0c4,-4,6,-7,6,-9 -c0,-2.7,-3.3,-8.7,-10,-18c-135.3,-192.7,-235.5,-414.3,-300.5,-665c-65,-250.7,-102.5, --544.7,-112.5,-882c-2,-104,-3,-167,-3,-189 -l0,-`+(t+92)+`c0,-162.7,5.7,-314,17,-454c20.7,-272,63.7,-513,129,-723c65.3, --210,155.3,-396.3,270,-559c6.7,-9.3,10,-15.3,10,-18z`;case"rparen":return`M76,0c-16.7,0,-25,3,-25,9c0,2,2,6.3,6,13c21.3,28.7,42.3,60.3, -63,95c96.7,156.7,172.8,332.5,228.5,527.5c55.7,195,92.8,416.5,111.5,664.5 -c11.3,139.3,17,290.7,17,454c0,28,1.7,43,3.3,45l0,`+(t+9)+` -c-3,4,-3.3,16.7,-3.3,38c0,162,-5.7,313.7,-17,455c-18.7,248,-55.8,469.3,-111.5,664 -c-55.7,194.7,-131.8,370.3,-228.5,527c-20.7,34.7,-41.7,66.3,-63,95c-2,3.3,-4,7,-6,11 -c0,7.3,5.7,11,17,11c0,0,11,0,11,0c9.3,0,14.3,-0.3,15,-1c5.3,-5.3,10.3,-11,15,-17 -c242.7,-294.7,395.3,-681.7,458,-1161c21.3,-164.7,33.3,-350.7,36,-558 -l0,-`+(t+144)+`c-2,-159.3,-10,-310.7,-24,-454c-53.3,-528,-210,-949.7, --470,-1265c-4.7,-6,-9.7,-11.7,-15,-17c-0.7,-0.7,-6.7,-1,-18,-1z`;default:throw new Error("Unknown stretchy delimiter.")}},Yt=function(){function u(t){this.children=void 0,this.classes=void 0,this.height=void 0,this.depth=void 0,this.maxFontSize=void 0,this.style=void 0,this.children=t,this.classes=[],this.height=0,this.depth=0,this.maxFontSize=0,this.style={}}var e=u.prototype;return e.hasClass=function(r){return q.contains(this.classes,r)},e.toNode=function(){for(var r=document.createDocumentFragment(),n=0;n=5?e=0:u>=3?e=1:e=2,!S0[e]){var t=S0[e]={cssEmPerMu:jt.quad[e]/18};for(var r in jt)jt.hasOwnProperty(r)&&(t[r]=jt[r][e])}return S0[e]}var lr=[[1,1,1],[2,1,1],[3,1,1],[4,2,1],[5,2,1],[6,3,1],[7,4,2],[8,6,3],[9,7,6],[10,8,7],[11,10,9]],A0=[.5,.6,.7,.8,.9,1,1.2,1.44,1.728,2.074,2.488],Y0=function(e,t){return t.size<2?e:lr[e-1][t.size-1]},m0=function(){function u(t){this.style=void 0,this.color=void 0,this.size=void 0,this.textSize=void 0,this.phantom=void 0,this.font=void 0,this.fontFamily=void 0,this.fontWeight=void 0,this.fontShape=void 0,this.sizeMultiplier=void 0,this.maxSize=void 0,this.minRuleThickness=void 0,this._fontMetrics=void 0,this.style=t.style,this.color=t.color,this.size=t.size||u.BASESIZE,this.textSize=t.textSize||this.size,this.phantom=!!t.phantom,this.font=t.font||"",this.fontFamily=t.fontFamily||"",this.fontWeight=t.fontWeight||"",this.fontShape=t.fontShape||"",this.sizeMultiplier=A0[this.size-1],this.maxSize=t.maxSize,this.minRuleThickness=t.minRuleThickness,this._fontMetrics=void 0}var e=u.prototype;return e.extend=function(r){var n={style:this.style,size:this.size,textSize:this.textSize,color:this.color,phantom:this.phantom,font:this.font,fontFamily:this.fontFamily,fontWeight:this.fontWeight,fontShape:this.fontShape,maxSize:this.maxSize,minRuleThickness:this.minRuleThickness};for(var a in r)r.hasOwnProperty(a)&&(n[a]=r[a]);return new u(n)},e.havingStyle=function(r){return this.style===r?this:this.extend({style:r,size:Y0(this.textSize,r)})},e.havingCrampedStyle=function(){return this.havingStyle(this.style.cramp())},e.havingSize=function(r){return this.size===r&&this.textSize===r?this:this.extend({style:this.style.text(),size:r,textSize:r,sizeMultiplier:A0[r-1]})},e.havingBaseStyle=function(r){r=r||this.style.text();var n=Y0(u.BASESIZE,r);return this.size===n&&this.textSize===u.BASESIZE&&this.style===r?this:this.extend({style:r,size:n})},e.havingBaseSizing=function(){var r;switch(this.style.id){case 4:case 5:r=3;break;case 6:case 7:r=1;break;default:r=6}return 
this.extend({style:this.style.text(),size:r})},e.withColor=function(r){return this.extend({color:r})},e.withPhantom=function(){return this.extend({phantom:!0})},e.withFont=function(r){return this.extend({font:r})},e.withTextFontFamily=function(r){return this.extend({fontFamily:r,font:""})},e.withTextFontWeight=function(r){return this.extend({fontWeight:r,font:""})},e.withTextFontShape=function(r){return this.extend({fontShape:r,font:""})},e.sizingClasses=function(r){return r.size!==this.size?["sizing","reset-size"+r.size,"size"+this.size]:[]},e.baseSizingClasses=function(){return this.size!==u.BASESIZE?["sizing","reset-size"+this.size,"size"+u.BASESIZE]:[]},e.fontMetrics=function(){return this._fontMetrics||(this._fontMetrics=ir(this.size)),this._fontMetrics},e.getColor=function(){return this.phantom?"transparent":this.color},u}();m0.BASESIZE=6;var T0=m0,ht={pt:1,mm:7227/2540,cm:7227/254,in:72.27,bp:803/800,pc:12,dd:1238/1157,cc:14856/1157,nd:685/642,nc:1370/107,sp:1/65536,px:803/800},Zt={ex:!0,em:!0,mu:!0},M0=function(e){return typeof e!="string"&&(e=e.unit),e in ht||e in Zt||e==="ex"},Ee=function(e,t){var r;if(e.unit in ht)r=ht[e.unit]/t.fontMetrics().ptPerEm/t.sizeMultiplier;else if(e.unit==="mu")r=t.fontMetrics().cssEmPerMu;else{var n;if(t.style.isTight()?n=t.havingStyle(t.style.text()):n=t,e.unit==="ex")r=n.fontMetrics().xHeight;else if(e.unit==="em")r=n.fontMetrics().quad;else throw new p("Invalid unit: '"+e.unit+"'");n!==t&&(r*=n.sizeMultiplier/t.sizeMultiplier)}return Math.min(e.number*r,t.maxSize)},X=function(e){return+e.toFixed(4)+"em"},Je=function(e){return e.filter(function(t){return t}).join(" ")},sr=function(e,t,r){if(this.classes=e||[],this.attributes={},this.height=0,this.depth=0,this.maxFontSize=0,this.style=r||{},t){t.style.isTight()&&this.classes.push("mtight");var n=t.getColor();n&&(this.style.color=n)}},or=function(e){var t=document.createElement(e);t.className=Je(this.classes);for(var r in this.style)this.style.hasOwnProperty(r)&&(t.style[r]=this.style[r]);for(var n in this.attributes)this.attributes.hasOwnProperty(n)&&t.setAttribute(n,this.attributes[n]);for(var a=0;a",t},bt=function(){function u(t,r,n,a){this.children=void 0,this.attributes=void 0,this.classes=void 0,this.height=void 0,this.depth=void 0,this.width=void 0,this.maxFontSize=void 0,this.style=void 0,sr.call(this,t,n,a),this.children=r||[]}var e=u.prototype;return e.setAttribute=function(r,n){this.attributes[r]=n},e.hasClass=function(r){return q.contains(this.classes,r)},e.toNode=function(){return or.call(this,"span")},e.toMarkup=function(){return Ue.call(this,"span")},u}(),j0=function(){function u(t,r,n,a){this.children=void 0,this.attributes=void 0,this.classes=void 0,this.height=void 0,this.depth=void 0,this.maxFontSize=void 0,this.style=void 0,sr.call(this,r,a),this.children=n||[],this.setAttribute("href",t)}var e=u.prototype;return e.setAttribute=function(r,n){this.attributes[r]=n},e.hasClass=function(r){return q.contains(this.classes,r)},e.toNode=function(){return or.call(this,"a")},e.toMarkup=function(){return Ue.call(this,"a")},u}(),ur=function(){function u(t,r,n){this.src=void 0,this.alt=void 0,this.classes=void 0,this.height=void 0,this.depth=void 0,this.maxFontSize=void 0,this.style=void 0,this.alt=r,this.src=t,this.classes=["mord"],this.style=n}var e=u.prototype;return e.hasClass=function(r){return q.contains(this.classes,r)},e.toNode=function(){var r=document.createElement("img");r.src=this.src,r.alt=this.alt,r.className="mord";for(var n in 
this.style)this.style.hasOwnProperty(n)&&(r.style[n]=this.style[n]);return r},e.toMarkup=function(){var r=""+this.alt+"0&&(n=document.createElement("span"),n.style.marginRight=X(this.italic)),this.classes.length>0&&(n=n||document.createElement("span"),n.className=Je(this.classes));for(var a in this.style)this.style.hasOwnProperty(a)&&(n=n||document.createElement("span"),n.style[a]=this.style[a]);return n?(n.appendChild(r),n):r},e.toMarkup=function(){var r=!1,n="0&&(a+="margin-right:"+this.italic+"em;");for(var c in this.style)this.style.hasOwnProperty(c)&&(a+=q.hyphenate(c)+":"+this.style[c]+";");a&&(r=!0,n+=' style="'+q.escape(a)+'"');var d=q.escape(this.text);return r?(n+=">",n+=d,n+="",n):d},u}(),yt=function(){function u(t,r){this.children=void 0,this.attributes=void 0,this.children=t||[],this.attributes=r||{}}var e=u.prototype;return e.toNode=function(){var r="http://www.w3.org/2000/svg",n=document.createElementNS(r,"svg");for(var a in this.attributes)Object.prototype.hasOwnProperty.call(this.attributes,a)&&n.setAttribute(a,this.attributes[a]);for(var c=0;c":""},u}(),Kt=function(){function u(t){this.attributes=void 0,this.attributes=t||{}}var e=u.prototype;return e.toNode=function(){var r="http://www.w3.org/2000/svg",n=document.createElementNS(r,"line");for(var a in this.attributes)Object.prototype.hasOwnProperty.call(this.attributes,a)&&n.setAttribute(a,this.attributes[a]);return n},e.toMarkup=function(){var r=" but got "+String(u)+".")}var Ot={bin:1,close:1,inner:1,open:1,punct:1,rel:1},$0={"accent-token":1,mathord:1,"op-token":1,spacing:1,textord:1},d0={math:{},text:{}},Ne=d0;function l(u,e,t,r,n,a){d0[u][n]={font:e,group:t,replace:r},a&&r&&(d0[u][r]=d0[u][n])}var h="math",H="text",f="main",S="ams",Be="accent-token",te="bin",Ke="close",oe="inner",A="mathord",P="op-token",$="open",we="punct",k="rel",qe="spacing",M="textord";l(h,f,k,"≡","\\equiv",!0),l(h,f,k,"≺","\\prec",!0),l(h,f,k,"≻","\\succ",!0),l(h,f,k,"∼","\\sim",!0),l(h,f,k,"⊥","\\perp"),l(h,f,k,"⪯","\\preceq",!0),l(h,f,k,"⪰","\\succeq",!0),l(h,f,k,"≃","\\simeq",!0),l(h,f,k,"∣","\\mid",!0),l(h,f,k,"≪","\\ll",!0),l(h,f,k,"≫","\\gg",!0),l(h,f,k,"≍","\\asymp",!0),l(h,f,k,"∥","\\parallel"),l(h,f,k,"⋈","\\bowtie",!0),l(h,f,k,"⌣","\\smile",!0),l(h,f,k,"⊑","\\sqsubseteq",!0),l(h,f,k,"⊒","\\sqsupseteq",!0),l(h,f,k,"≐","\\doteq",!0),l(h,f,k,"⌢","\\frown",!0),l(h,f,k,"∋","\\ni",!0),l(h,f,k,"∝","\\propto",!0),l(h,f,k,"⊢","\\vdash",!0),l(h,f,k,"⊣","\\dashv",!0),l(h,f,k,"∋","\\owns"),l(h,f,we,".","\\ldotp"),l(h,f,we,"⋅","\\cdotp"),l(h,f,M,"#","\\#"),l(H,f,M,"#","\\#"),l(h,f,M,"&","\\&"),l(H,f,M,"&","\\&"),l(h,f,M,"ℵ","\\aleph",!0),l(h,f,M,"∀","\\forall",!0),l(h,f,M,"ℏ","\\hbar",!0),l(h,f,M,"∃","\\exists",!0),l(h,f,M,"∇","\\nabla",!0),l(h,f,M,"♭","\\flat",!0),l(h,f,M,"ℓ","\\ell",!0),l(h,f,M,"♮","\\natural",!0),l(h,f,M,"♣","\\clubsuit",!0),l(h,f,M,"℘","\\wp",!0),l(h,f,M,"♯","\\sharp",!0),l(h,f,M,"♢","\\diamondsuit",!0),l(h,f,M,"ℜ","\\Re",!0),l(h,f,M,"♡","\\heartsuit",!0),l(h,f,M,"ℑ","\\Im",!0),l(h,f,M,"♠","\\spadesuit",!0),l(h,f,M,"§","\\S",!0),l(H,f,M,"§","\\S"),l(h,f,M,"¶","\\P",!0),l(H,f,M,"¶","\\P"),l(h,f,M,"†","\\dag"),l(H,f,M,"†","\\dag"),l(H,f,M,"†","\\textdagger"),l(h,f,M,"‡","\\ddag"),l(H,f,M,"‡","\\ddag"),l(H,f,M,"‡","\\textdaggerdbl"),l(h,f,Ke,"⎱","\\rmoustache",!0),l(h,f,$,"⎰","\\lmoustache",!0),l(h,f,Ke,"⟯","\\rgroup",!0),l(h,f,$,"⟮","\\lgroup",!0),l(h,f,te,"∓","\\mp",!0),l(h,f,te,"⊖","\\ominus",!0),l(h,f,te,"⊎","\\uplus",!0),l(h,f,te,"⊓","\\sqcap",!0),l(h,f,te,"∗","\\ast"),l(h,f,te,"⊔","\\sqcup",!0),l(h,f,te,"◯","\\bigcirc",!0),l
(h,f,te,"∙","\\bullet",!0),l(h,f,te,"‡","\\ddagger"),l(h,f,te,"≀","\\wr",!0),l(h,f,te,"⨿","\\amalg"),l(h,f,te,"&","\\And"),l(h,f,k,"⟵","\\longleftarrow",!0),l(h,f,k,"⇐","\\Leftarrow",!0),l(h,f,k,"⟸","\\Longleftarrow",!0),l(h,f,k,"⟶","\\longrightarrow",!0),l(h,f,k,"⇒","\\Rightarrow",!0),l(h,f,k,"⟹","\\Longrightarrow",!0),l(h,f,k,"↔","\\leftrightarrow",!0),l(h,f,k,"⟷","\\longleftrightarrow",!0),l(h,f,k,"⇔","\\Leftrightarrow",!0),l(h,f,k,"⟺","\\Longleftrightarrow",!0),l(h,f,k,"↦","\\mapsto",!0),l(h,f,k,"⟼","\\longmapsto",!0),l(h,f,k,"↗","\\nearrow",!0),l(h,f,k,"↩","\\hookleftarrow",!0),l(h,f,k,"↪","\\hookrightarrow",!0),l(h,f,k,"↘","\\searrow",!0),l(h,f,k,"↼","\\leftharpoonup",!0),l(h,f,k,"⇀","\\rightharpoonup",!0),l(h,f,k,"↙","\\swarrow",!0),l(h,f,k,"↽","\\leftharpoondown",!0),l(h,f,k,"⇁","\\rightharpoondown",!0),l(h,f,k,"↖","\\nwarrow",!0),l(h,f,k,"⇌","\\rightleftharpoons",!0),l(h,S,k,"≮","\\nless",!0),l(h,S,k,"","\\@nleqslant"),l(h,S,k,"","\\@nleqq"),l(h,S,k,"⪇","\\lneq",!0),l(h,S,k,"≨","\\lneqq",!0),l(h,S,k,"","\\@lvertneqq"),l(h,S,k,"⋦","\\lnsim",!0),l(h,S,k,"⪉","\\lnapprox",!0),l(h,S,k,"⊀","\\nprec",!0),l(h,S,k,"⋠","\\npreceq",!0),l(h,S,k,"⋨","\\precnsim",!0),l(h,S,k,"⪹","\\precnapprox",!0),l(h,S,k,"≁","\\nsim",!0),l(h,S,k,"","\\@nshortmid"),l(h,S,k,"∤","\\nmid",!0),l(h,S,k,"⊬","\\nvdash",!0),l(h,S,k,"⊭","\\nvDash",!0),l(h,S,k,"⋪","\\ntriangleleft"),l(h,S,k,"⋬","\\ntrianglelefteq",!0),l(h,S,k,"⊊","\\subsetneq",!0),l(h,S,k,"","\\@varsubsetneq"),l(h,S,k,"⫋","\\subsetneqq",!0),l(h,S,k,"","\\@varsubsetneqq"),l(h,S,k,"≯","\\ngtr",!0),l(h,S,k,"","\\@ngeqslant"),l(h,S,k,"","\\@ngeqq"),l(h,S,k,"⪈","\\gneq",!0),l(h,S,k,"≩","\\gneqq",!0),l(h,S,k,"","\\@gvertneqq"),l(h,S,k,"⋧","\\gnsim",!0),l(h,S,k,"⪊","\\gnapprox",!0),l(h,S,k,"⊁","\\nsucc",!0),l(h,S,k,"⋡","\\nsucceq",!0),l(h,S,k,"⋩","\\succnsim",!0),l(h,S,k,"⪺","\\succnapprox",!0),l(h,S,k,"≆","\\ncong",!0),l(h,S,k,"","\\@nshortparallel"),l(h,S,k,"∦","\\nparallel",!0),l(h,S,k,"⊯","\\nVDash",!0),l(h,S,k,"⋫","\\ntriangleright"),l(h,S,k,"⋭","\\ntrianglerighteq",!0),l(h,S,k,"","\\@nsupseteqq"),l(h,S,k,"⊋","\\supsetneq",!0),l(h,S,k,"","\\@varsupsetneq"),l(h,S,k,"⫌","\\supsetneqq",!0),l(h,S,k,"","\\@varsupsetneqq"),l(h,S,k,"⊮","\\nVdash",!0),l(h,S,k,"⪵","\\precneqq",!0),l(h,S,k,"⪶","\\succneqq",!0),l(h,S,k,"","\\@nsubseteqq"),l(h,S,te,"⊴","\\unlhd"),l(h,S,te,"⊵","\\unrhd"),l(h,S,k,"↚","\\nleftarrow",!0),l(h,S,k,"↛","\\nrightarrow",!0),l(h,S,k,"⇍","\\nLeftarrow",!0),l(h,S,k,"⇏","\\nRightarrow",!0),l(h,S,k,"↮","\\nleftrightarrow",!0),l(h,S,k,"⇎","\\nLeftrightarrow",!0),l(h,S,k,"△","\\vartriangle"),l(h,S,M,"ℏ","\\hslash"),l(h,S,M,"▽","\\triangledown"),l(h,S,M,"◊","\\lozenge"),l(h,S,M,"Ⓢ","\\circledS"),l(h,S,M,"®","\\circledR"),l(H,S,M,"®","\\circledR"),l(h,S,M,"∡","\\measuredangle",!0),l(h,S,M,"∄","\\nexists"),l(h,S,M,"℧","\\mho"),l(h,S,M,"Ⅎ","\\Finv",!0),l(h,S,M,"⅁","\\Game",!0),l(h,S,M,"‵","\\backprime"),l(h,S,M,"▲","\\blacktriangle"),l(h,S,M,"▼","\\blacktriangledown"),l(h,S,M,"■","\\blacksquare"),l(h,S,M,"⧫","\\blacklozenge"),l(h,S,M,"★","\\bigstar"),l(h,S,M,"∢","\\sphericalangle",!0),l(h,S,M,"∁","\\complement",!0),l(h,S,M,"ð","\\eth",!0),l(H,f,M,"ð","ð"),l(h,S,M,"╱","\\diagup"),l(h,S,M,"╲","\\diagdown"),l(h,S,M,"□","\\square"),l(h,S,M,"□","\\Box"),l(h,S,M,"◊","\\Diamond"),l(h,S,M,"¥","\\yen",!0),l(H,S,M,"¥","\\yen",!0),l(h,S,M,"✓","\\checkmark",!0),l(H,S,M,"✓","\\checkmark"),l(h,S,M,"ℶ","\\beth",!0),l(h,S,M,"ℸ","\\daleth",!0),l(h,S,M,"ℷ","\\gimel",!0),l(h,S,M,"ϝ","\\digamma",!0),l(h,S,M,"ϰ","\\varkappa"),l(h,S,$,"┌","\\@ulcorner",!0),l(h,S
,Ke,"┐","\\@urcorner",!0),l(h,S,$,"└","\\@llcorner",!0),l(h,S,Ke,"┘","\\@lrcorner",!0),l(h,S,k,"≦","\\leqq",!0),l(h,S,k,"⩽","\\leqslant",!0),l(h,S,k,"⪕","\\eqslantless",!0),l(h,S,k,"≲","\\lesssim",!0),l(h,S,k,"⪅","\\lessapprox",!0),l(h,S,k,"≊","\\approxeq",!0),l(h,S,te,"⋖","\\lessdot"),l(h,S,k,"⋘","\\lll",!0),l(h,S,k,"≶","\\lessgtr",!0),l(h,S,k,"⋚","\\lesseqgtr",!0),l(h,S,k,"⪋","\\lesseqqgtr",!0),l(h,S,k,"≑","\\doteqdot"),l(h,S,k,"≓","\\risingdotseq",!0),l(h,S,k,"≒","\\fallingdotseq",!0),l(h,S,k,"∽","\\backsim",!0),l(h,S,k,"⋍","\\backsimeq",!0),l(h,S,k,"⫅","\\subseteqq",!0),l(h,S,k,"⋐","\\Subset",!0),l(h,S,k,"⊏","\\sqsubset",!0),l(h,S,k,"≼","\\preccurlyeq",!0),l(h,S,k,"⋞","\\curlyeqprec",!0),l(h,S,k,"≾","\\precsim",!0),l(h,S,k,"⪷","\\precapprox",!0),l(h,S,k,"⊲","\\vartriangleleft"),l(h,S,k,"⊴","\\trianglelefteq"),l(h,S,k,"⊨","\\vDash",!0),l(h,S,k,"⊪","\\Vvdash",!0),l(h,S,k,"⌣","\\smallsmile"),l(h,S,k,"⌢","\\smallfrown"),l(h,S,k,"≏","\\bumpeq",!0),l(h,S,k,"≎","\\Bumpeq",!0),l(h,S,k,"≧","\\geqq",!0),l(h,S,k,"⩾","\\geqslant",!0),l(h,S,k,"⪖","\\eqslantgtr",!0),l(h,S,k,"≳","\\gtrsim",!0),l(h,S,k,"⪆","\\gtrapprox",!0),l(h,S,te,"⋗","\\gtrdot"),l(h,S,k,"⋙","\\ggg",!0),l(h,S,k,"≷","\\gtrless",!0),l(h,S,k,"⋛","\\gtreqless",!0),l(h,S,k,"⪌","\\gtreqqless",!0),l(h,S,k,"≖","\\eqcirc",!0),l(h,S,k,"≗","\\circeq",!0),l(h,S,k,"≜","\\triangleq",!0),l(h,S,k,"∼","\\thicksim"),l(h,S,k,"≈","\\thickapprox"),l(h,S,k,"⫆","\\supseteqq",!0),l(h,S,k,"⋑","\\Supset",!0),l(h,S,k,"⊐","\\sqsupset",!0),l(h,S,k,"≽","\\succcurlyeq",!0),l(h,S,k,"⋟","\\curlyeqsucc",!0),l(h,S,k,"≿","\\succsim",!0),l(h,S,k,"⪸","\\succapprox",!0),l(h,S,k,"⊳","\\vartriangleright"),l(h,S,k,"⊵","\\trianglerighteq"),l(h,S,k,"⊩","\\Vdash",!0),l(h,S,k,"∣","\\shortmid"),l(h,S,k,"∥","\\shortparallel"),l(h,S,k,"≬","\\between",!0),l(h,S,k,"⋔","\\pitchfork",!0),l(h,S,k,"∝","\\varpropto"),l(h,S,k,"◀","\\blacktriangleleft"),l(h,S,k,"∴","\\therefore",!0),l(h,S,k,"∍","\\backepsilon"),l(h,S,k,"▶","\\blacktriangleright"),l(h,S,k,"∵","\\because",!0),l(h,S,k,"⋘","\\llless"),l(h,S,k,"⋙","\\gggtr"),l(h,S,te,"⊲","\\lhd"),l(h,S,te,"⊳","\\rhd"),l(h,S,k,"≂","\\eqsim",!0),l(h,f,k,"⋈","\\Join"),l(h,S,k,"≑","\\Doteq",!0),l(h,S,te,"∔","\\dotplus",!0),l(h,S,te,"∖","\\smallsetminus"),l(h,S,te,"⋒","\\Cap",!0),l(h,S,te,"⋓","\\Cup",!0),l(h,S,te,"⩞","\\doublebarwedge",!0),l(h,S,te,"⊟","\\boxminus",!0),l(h,S,te,"⊞","\\boxplus",!0),l(h,S,te,"⋇","\\divideontimes",!0),l(h,S,te,"⋉","\\ltimes",!0),l(h,S,te,"⋊","\\rtimes",!0),l(h,S,te,"⋋","\\leftthreetimes",!0),l(h,S,te,"⋌","\\rightthreetimes",!0),l(h,S,te,"⋏","\\curlywedge",!0),l(h,S,te,"⋎","\\curlyvee",!0),l(h,S,te,"⊝","\\circleddash",!0),l(h,S,te,"⊛","\\circledast",!0),l(h,S,te,"⋅","\\centerdot"),l(h,S,te,"⊺","\\intercal",!0),l(h,S,te,"⋒","\\doublecap"),l(h,S,te,"⋓","\\doublecup"),l(h,S,te,"⊠","\\boxtimes",!0),l(h,S,k,"⇢","\\dashrightarrow",!0),l(h,S,k,"⇠","\\dashleftarrow",!0),l(h,S,k,"⇇","\\leftleftarrows",!0),l(h,S,k,"⇆","\\leftrightarrows",!0),l(h,S,k,"⇚","\\Lleftarrow",!0),l(h,S,k,"↞","\\twoheadleftarrow",!0),l(h,S,k,"↢","\\leftarrowtail",!0),l(h,S,k,"↫","\\looparrowleft",!0),l(h,S,k,"⇋","\\leftrightharpoons",!0),l(h,S,k,"↶","\\curvearrowleft",!0),l(h,S,k,"↺","\\circlearrowleft",!0),l(h,S,k,"↰","\\Lsh",!0),l(h,S,k,"⇈","\\upuparrows",!0),l(h,S,k,"↿","\\upharpoonleft",!0),l(h,S,k,"⇃","\\downharpoonleft",!0),l(h,f,k,"⊶","\\origof",!0),l(h,f,k,"⊷","\\imageof",!0),l(h,S,k,"⊸","\\multimap",!0),l(h,S,k,"↭","\\leftrightsquigarrow",!0),l(h,S,k,"⇉","\\rightrightarrows",!0),l(h,S,k,"⇄","\\rightleftarrows",!0),l(h,S,k,"↠","\\twoheadrightarrow"
,!0),l(h,S,k,"↣","\\rightarrowtail",!0),l(h,S,k,"↬","\\looparrowright",!0),l(h,S,k,"↷","\\curvearrowright",!0),l(h,S,k,"↻","\\circlearrowright",!0),l(h,S,k,"↱","\\Rsh",!0),l(h,S,k,"⇊","\\downdownarrows",!0),l(h,S,k,"↾","\\upharpoonright",!0),l(h,S,k,"⇂","\\downharpoonright",!0),l(h,S,k,"⇝","\\rightsquigarrow",!0),l(h,S,k,"⇝","\\leadsto"),l(h,S,k,"⇛","\\Rrightarrow",!0),l(h,S,k,"↾","\\restriction"),l(h,f,M,"‘","`"),l(h,f,M,"$","\\$"),l(H,f,M,"$","\\$"),l(H,f,M,"$","\\textdollar"),l(h,f,M,"%","\\%"),l(H,f,M,"%","\\%"),l(h,f,M,"_","\\_"),l(H,f,M,"_","\\_"),l(H,f,M,"_","\\textunderscore"),l(h,f,M,"∠","\\angle",!0),l(h,f,M,"∞","\\infty",!0),l(h,f,M,"′","\\prime"),l(h,f,M,"△","\\triangle"),l(h,f,M,"Γ","\\Gamma",!0),l(h,f,M,"Δ","\\Delta",!0),l(h,f,M,"Θ","\\Theta",!0),l(h,f,M,"Λ","\\Lambda",!0),l(h,f,M,"Ξ","\\Xi",!0),l(h,f,M,"Π","\\Pi",!0),l(h,f,M,"Σ","\\Sigma",!0),l(h,f,M,"Υ","\\Upsilon",!0),l(h,f,M,"Φ","\\Phi",!0),l(h,f,M,"Ψ","\\Psi",!0),l(h,f,M,"Ω","\\Omega",!0),l(h,f,M,"A","Α"),l(h,f,M,"B","Β"),l(h,f,M,"E","Ε"),l(h,f,M,"Z","Ζ"),l(h,f,M,"H","Η"),l(h,f,M,"I","Ι"),l(h,f,M,"K","Κ"),l(h,f,M,"M","Μ"),l(h,f,M,"N","Ν"),l(h,f,M,"O","Ο"),l(h,f,M,"P","Ρ"),l(h,f,M,"T","Τ"),l(h,f,M,"X","Χ"),l(h,f,M,"¬","\\neg",!0),l(h,f,M,"¬","\\lnot"),l(h,f,M,"⊤","\\top"),l(h,f,M,"⊥","\\bot"),l(h,f,M,"∅","\\emptyset"),l(h,S,M,"∅","\\varnothing"),l(h,f,A,"α","\\alpha",!0),l(h,f,A,"β","\\beta",!0),l(h,f,A,"γ","\\gamma",!0),l(h,f,A,"δ","\\delta",!0),l(h,f,A,"ϵ","\\epsilon",!0),l(h,f,A,"ζ","\\zeta",!0),l(h,f,A,"η","\\eta",!0),l(h,f,A,"θ","\\theta",!0),l(h,f,A,"ι","\\iota",!0),l(h,f,A,"κ","\\kappa",!0),l(h,f,A,"λ","\\lambda",!0),l(h,f,A,"μ","\\mu",!0),l(h,f,A,"ν","\\nu",!0),l(h,f,A,"ξ","\\xi",!0),l(h,f,A,"ο","\\omicron",!0),l(h,f,A,"π","\\pi",!0),l(h,f,A,"ρ","\\rho",!0),l(h,f,A,"σ","\\sigma",!0),l(h,f,A,"τ","\\tau",!0),l(h,f,A,"υ","\\upsilon",!0),l(h,f,A,"ϕ","\\phi",!0),l(h,f,A,"χ","\\chi",!0),l(h,f,A,"ψ","\\psi",!0),l(h,f,A,"ω","\\omega",!0),l(h,f,A,"ε","\\varepsilon",!0),l(h,f,A,"ϑ","\\vartheta",!0),l(h,f,A,"ϖ","\\varpi",!0),l(h,f,A,"ϱ","\\varrho",!0),l(h,f,A,"ς","\\varsigma",!0),l(h,f,A,"φ","\\varphi",!0),l(h,f,te,"∗","*",!0),l(h,f,te,"+","+"),l(h,f,te,"−","-",!0),l(h,f,te,"⋅","\\cdot",!0),l(h,f,te,"∘","\\circ",!0),l(h,f,te,"÷","\\div",!0),l(h,f,te,"±","\\pm",!0),l(h,f,te,"×","\\times",!0),l(h,f,te,"∩","\\cap",!0),l(h,f,te,"∪","\\cup",!0),l(h,f,te,"∖","\\setminus",!0),l(h,f,te,"∧","\\land"),l(h,f,te,"∨","\\lor"),l(h,f,te,"∧","\\wedge",!0),l(h,f,te,"∨","\\vee",!0),l(h,f,M,"√","\\surd"),l(h,f,$,"⟨","\\langle",!0),l(h,f,$,"∣","\\lvert"),l(h,f,$,"∥","\\lVert"),l(h,f,Ke,"?","?"),l(h,f,Ke,"!","!"),l(h,f,Ke,"⟩","\\rangle",!0),l(h,f,Ke,"∣","\\rvert"),l(h,f,Ke,"∥","\\rVert"),l(h,f,k,"=","="),l(h,f,k,":",":"),l(h,f,k,"≈","\\approx",!0),l(h,f,k,"≅","\\cong",!0),l(h,f,k,"≥","\\ge"),l(h,f,k,"≥","\\geq",!0),l(h,f,k,"←","\\gets"),l(h,f,k,">","\\gt",!0),l(h,f,k,"∈","\\in",!0),l(h,f,k,"","\\@not"),l(h,f,k,"⊂","\\subset",!0),l(h,f,k,"⊃","\\supset",!0),l(h,f,k,"⊆","\\subseteq",!0),l(h,f,k,"⊇","\\supseteq",!0),l(h,S,k,"⊈","\\nsubseteq",!0),l(h,S,k,"⊉","\\nsupseteq",!0),l(h,f,k,"⊨","\\models"),l(h,f,k,"←","\\leftarrow",!0),l(h,f,k,"≤","\\le"),l(h,f,k,"≤","\\leq",!0),l(h,f,k,"<","\\lt",!0),l(h,f,k,"→","\\rightarrow",!0),l(h,f,k,"→","\\to"),l(h,S,k,"≱","\\ngeq",!0),l(h,S,k,"≰","\\nleq",!0),l(h,f,qe," ","\\ "),l(h,f,qe," ","\\space"),l(h,f,qe," ","\\nobreakspace"),l(H,f,qe," ","\\ "),l(H,f,qe," "," "),l(H,f,qe," ","\\space"),l(H,f,qe," 
","\\nobreakspace"),l(h,f,qe,null,"\\nobreak"),l(h,f,qe,null,"\\allowbreak"),l(h,f,we,",",","),l(h,f,we,";",";"),l(h,S,te,"⊼","\\barwedge",!0),l(h,S,te,"⊻","\\veebar",!0),l(h,f,te,"⊙","\\odot",!0),l(h,f,te,"⊕","\\oplus",!0),l(h,f,te,"⊗","\\otimes",!0),l(h,f,M,"∂","\\partial",!0),l(h,f,te,"⊘","\\oslash",!0),l(h,S,te,"⊚","\\circledcirc",!0),l(h,S,te,"⊡","\\boxdot",!0),l(h,f,te,"△","\\bigtriangleup"),l(h,f,te,"▽","\\bigtriangledown"),l(h,f,te,"†","\\dagger"),l(h,f,te,"⋄","\\diamond"),l(h,f,te,"⋆","\\star"),l(h,f,te,"◃","\\triangleleft"),l(h,f,te,"▹","\\triangleright"),l(h,f,$,"{","\\{"),l(H,f,M,"{","\\{"),l(H,f,M,"{","\\textbraceleft"),l(h,f,Ke,"}","\\}"),l(H,f,M,"}","\\}"),l(H,f,M,"}","\\textbraceright"),l(h,f,$,"{","\\lbrace"),l(h,f,Ke,"}","\\rbrace"),l(h,f,$,"[","\\lbrack",!0),l(H,f,M,"[","\\lbrack",!0),l(h,f,Ke,"]","\\rbrack",!0),l(H,f,M,"]","\\rbrack",!0),l(h,f,$,"(","\\lparen",!0),l(h,f,Ke,")","\\rparen",!0),l(H,f,M,"<","\\textless",!0),l(H,f,M,">","\\textgreater",!0),l(h,f,$,"⌊","\\lfloor",!0),l(h,f,Ke,"⌋","\\rfloor",!0),l(h,f,$,"⌈","\\lceil",!0),l(h,f,Ke,"⌉","\\rceil",!0),l(h,f,M,"\\","\\backslash"),l(h,f,M,"∣","|"),l(h,f,M,"∣","\\vert"),l(H,f,M,"|","\\textbar",!0),l(h,f,M,"∥","\\|"),l(h,f,M,"∥","\\Vert"),l(H,f,M,"∥","\\textbardbl"),l(H,f,M,"~","\\textasciitilde"),l(H,f,M,"\\","\\textbackslash"),l(H,f,M,"^","\\textasciicircum"),l(h,f,k,"↑","\\uparrow",!0),l(h,f,k,"⇑","\\Uparrow",!0),l(h,f,k,"↓","\\downarrow",!0),l(h,f,k,"⇓","\\Downarrow",!0),l(h,f,k,"↕","\\updownarrow",!0),l(h,f,k,"⇕","\\Updownarrow",!0),l(h,f,P,"∐","\\coprod"),l(h,f,P,"⋁","\\bigvee"),l(h,f,P,"⋀","\\bigwedge"),l(h,f,P,"⨄","\\biguplus"),l(h,f,P,"⋂","\\bigcap"),l(h,f,P,"⋃","\\bigcup"),l(h,f,P,"∫","\\int"),l(h,f,P,"∫","\\intop"),l(h,f,P,"∬","\\iint"),l(h,f,P,"∭","\\iiint"),l(h,f,P,"∏","\\prod"),l(h,f,P,"∑","\\sum"),l(h,f,P,"⨂","\\bigotimes"),l(h,f,P,"⨁","\\bigoplus"),l(h,f,P,"⨀","\\bigodot"),l(h,f,P,"∮","\\oint"),l(h,f,P,"∯","\\oiint"),l(h,f,P,"∰","\\oiiint"),l(h,f,P,"⨆","\\bigsqcup"),l(h,f,P,"∫","\\smallint"),l(H,f,oe,"…","\\textellipsis"),l(h,f,oe,"…","\\mathellipsis"),l(H,f,oe,"…","\\ldots",!0),l(h,f,oe,"…","\\ldots",!0),l(h,f,oe,"⋯","\\@cdots",!0),l(h,f,oe,"⋱","\\ddots",!0),l(h,f,M,"⋮","\\varvdots"),l(h,f,Be,"ˊ","\\acute"),l(h,f,Be,"ˋ","\\grave"),l(h,f,Be,"¨","\\ddot"),l(h,f,Be,"~","\\tilde"),l(h,f,Be,"ˉ","\\bar"),l(h,f,Be,"˘","\\breve"),l(h,f,Be,"ˇ","\\check"),l(h,f,Be,"^","\\hat"),l(h,f,Be,"⃗","\\vec"),l(h,f,Be,"˙","\\dot"),l(h,f,Be,"˚","\\mathring"),l(h,f,A,"","\\@imath"),l(h,f,A,"","\\@jmath"),l(h,f,M,"ı","ı"),l(h,f,M,"ȷ","ȷ"),l(H,f,M,"ı","\\i",!0),l(H,f,M,"ȷ","\\j",!0),l(H,f,M,"ß","\\ss",!0),l(H,f,M,"æ","\\ae",!0),l(H,f,M,"œ","\\oe",!0),l(H,f,M,"ø","\\o",!0),l(H,f,M,"Æ","\\AE",!0),l(H,f,M,"Œ","\\OE",!0),l(H,f,M,"Ø","\\O",!0),l(H,f,Be,"ˊ","\\'"),l(H,f,Be,"ˋ","\\`"),l(H,f,Be,"ˆ","\\^"),l(H,f,Be,"˜","\\~"),l(H,f,Be,"ˉ","\\="),l(H,f,Be,"˘","\\u"),l(H,f,Be,"˙","\\."),l(H,f,Be,"¸","\\c"),l(H,f,Be,"˚","\\r"),l(H,f,Be,"ˇ","\\v"),l(H,f,Be,"¨",'\\"'),l(H,f,Be,"˝","\\H"),l(H,f,Be,"◯","\\textcircled");var 
mt={"--":!0,"---":!0,"``":!0,"''":!0};l(H,f,M,"–","--",!0),l(H,f,M,"–","\\textendash"),l(H,f,M,"—","---",!0),l(H,f,M,"—","\\textemdash"),l(H,f,M,"‘","`",!0),l(H,f,M,"‘","\\textquoteleft"),l(H,f,M,"’","'",!0),l(H,f,M,"’","\\textquoteright"),l(H,f,M,"“","``",!0),l(H,f,M,"“","\\textquotedblleft"),l(H,f,M,"”","''",!0),l(H,f,M,"”","\\textquotedblright"),l(h,f,M,"°","\\degree",!0),l(H,f,M,"°","\\degree"),l(H,f,M,"°","\\textdegree",!0),l(h,f,M,"£","\\pounds"),l(h,f,M,"£","\\mathsterling",!0),l(H,f,M,"£","\\pounds"),l(H,f,M,"£","\\textsterling",!0),l(h,S,M,"✠","\\maltese"),l(H,S,M,"✠","\\maltese");for(var _0='0123456789/@."',E0=0;E0<_0.length;E0++){var Ur=_0.charAt(E0);l(h,f,M,Ur,Ur)}for(var Vn='0123456789!@*()-=+";:?/.,',Gr=0;Grt&&(t=c.height),c.depth>r&&(r=c.depth),c.maxFontSize>n&&(n=c.maxFontSize)}e.height=t,e.depth=r,e.maxFontSize=n},st=function(e,t,r,n){var a=new bt(e,t,r,n);return jr(a),a},jn=function(e,t,r,n){return new bt(e,t,r,n)},yl=function(e,t,r){var n=st([e],[],t);return n.height=Math.max(r||t.fontMetrics().defaultRuleThickness,t.minRuleThickness),n.style.borderBottomWidth=X(n.height),n.maxFontSize=1,n},xl=function(e,t,r,n){var a=new j0(e,t,r,n);return jr(a),a},Xn=function(e){var t=new Yt(e);return jr(t),t},wl=function(e,t){return e instanceof Yt?st([],[e],t):e},kl=function(e){if(e.positionType==="individualShift"){for(var t=e.children,r=[t[0]],n=-t[0].shift-t[0].elem.depth,a=n,c=1;c0&&(a.push(yr(c,e)),c=[]),a.push(r[d]));c.length>0&&a.push(yr(c,e));var y;t?(y=yr(je(t,e,!0)),y.classes=["tag"],a.push(y)):n&&a.push(n);var T=Pt(["katex-html"],a);if(T.setAttribute("aria-hidden","true"),y){var B=y.children[0];B.style.height=X(T.height+T.depth),T.depth&&(B.style.verticalAlign=X(-T.depth))}return T}function ea(u){return new Yt(u)}var xt=function(){function u(t,r,n){this.type=void 0,this.attributes=void 0,this.children=void 0,this.classes=void 0,this.type=t,this.attributes={},this.children=r||[],this.classes=n||[]}var e=u.prototype;return e.setAttribute=function(r,n){this.attributes[r]=n},e.getAttribute=function(r){return this.attributes[r]},e.toNode=function(){var r=document.createElementNS("http://www.w3.org/1998/Math/MathML",this.type);for(var n in this.attributes)Object.prototype.hasOwnProperty.call(this.attributes,n)&&r.setAttribute(n,this.attributes[n]);this.classes.length>0&&(r.className=Je(this.classes));for(var a=0;a0&&(r+=' class ="'+q.escape(Je(this.classes))+'"'),r+=">";for(var a=0;a",r},e.toText=function(){return this.children.map(function(r){return r.toText()}).join("")},u}(),K0=function(){function u(t){this.text=void 0,this.text=t}var e=u.prototype;return e.toNode=function(){return document.createTextNode(this.text)},e.toMarkup=function(){return q.escape(this.toText())},e.toText=function(){return this.text},u}(),Nl=function(){function u(t){this.width=void 0,this.character=void 0,this.width=t,t>=.05555&&t<=.05556?this.character=" ":t>=.1666&&t<=.1667?this.character=" ":t>=.2222&&t<=.2223?this.character=" ":t>=.2777&&t<=.2778?this.character="  ":t>=-.05556&&t<=-.05555?this.character=" ⁣":t>=-.1667&&t<=-.1666?this.character=" ⁣":t>=-.2223&&t<=-.2222?this.character=" ⁣":t>=-.2778&&t<=-.2777?this.character=" ⁣":this.character=null}var e=u.prototype;return e.toNode=function(){if(this.character)return document.createTextNode(this.character);var r=document.createElementNS("http://www.w3.org/1998/Math/MathML","mspace");return r.setAttribute("width",X(this.width)),r},e.toMarkup=function(){return this.character?""+this.character+"":''},e.toText=function(){return 
this.character?this.character:" "},u}(),W={MathNode:xt,TextNode:K0,SpaceNode:Nl,newDocumentFragment:ea},wt=function(e,t,r){return Ne[t][e]&&Ne[t][e].replace&&e.charCodeAt(0)!==55349&&!(mt.hasOwnProperty(e)&&r&&(r.fontFamily&&r.fontFamily.slice(4,6)==="tt"||r.font&&r.font.slice(4,6)==="tt"))&&(e=Ne[t][e].replace),new W.TextNode(e)},Zr=function(e){return e.length===1?e[0]:new W.MathNode("mrow",e)},Kr=function(e,t){if(t.fontFamily==="texttt")return"monospace";if(t.fontFamily==="textsf")return t.fontShape==="textit"&&t.fontWeight==="textbf"?"sans-serif-bold-italic":t.fontShape==="textit"?"sans-serif-italic":t.fontWeight==="textbf"?"bold-sans-serif":"sans-serif";if(t.fontShape==="textit"&&t.fontWeight==="textbf")return"bold-italic";if(t.fontShape==="textit")return"italic";if(t.fontWeight==="textbf")return"bold";var r=t.font;if(!r||r==="mathnormal")return null;var n=e.mode;if(r==="mathit")return"italic";if(r==="boldsymbol")return e.type==="textord"?"bold":"bold-italic";if(r==="mathbf")return"bold";if(r==="mathbb")return"double-struck";if(r==="mathfrak")return"fraktur";if(r==="mathscr"||r==="mathcal")return"script";if(r==="mathsf")return"sans-serif";if(r==="mathtt")return"monospace";var a=e.text;if(q.contains(["\\imath","\\jmath"],a))return null;Ne[n][a]&&Ne[n][a].replace&&(a=Ne[n][a].replace);var c=E.fontMap[r].fontName;return Dt(a,c,n)?E.fontMap[r].variant:null},ot=function(e,t,r){if(e.length===1){var n=Ce(e[0],t);return r&&n instanceof xt&&n.type==="mo"&&(n.setAttribute("lspace","0em"),n.setAttribute("rspace","0em")),[n]}for(var a=[],c,d=0;d0&&(O.text=O.text.slice(0,1)+"̸"+O.text.slice(1),a.pop())}}}a.push(g),c=g}return a},Jt=function(e,t,r){return Zr(ot(e,t,r))},Ce=function(e,t){if(!e)return new W.MathNode("mrow");if(vr[e.type]){var r=vr[e.type](e,t);return r}else throw new p("Got group of unknown type: '"+e.type+"'")};function ta(u,e,t,r,n){var a=ot(u,t),c;a.length===1&&a[0]instanceof xt&&q.contains(["mrow","mtable"],a[0].type)?c=a[0]:c=new W.MathNode("mrow",a);var d=new W.MathNode("annotation",[new W.TextNode(e)]);d.setAttribute("encoding","application/x-tex");var g=new W.MathNode("semantics",[c,d]),y=new W.MathNode("math",[g]);y.setAttribute("xmlns","http://www.w3.org/1998/Math/MathML"),r&&y.setAttribute("display","block");var T=n?"katex":"katex-mathml";return E.makeSpan([T],[y])}var ra=function(e){return new T0({style:e.displayMode?J.DISPLAY:J.TEXT,maxSize:e.maxSize,minRuleThickness:e.minRuleThickness})},na=function(e,t){if(t.displayMode){var r=["katex-display"];t.leqno&&r.push("leqno"),t.fleqn&&r.push("fleqn"),e=E.makeSpan(r,[e])}return e},Rl=function(e,t,r){var n=ra(r),a;if(r.output==="mathml")return ta(e,t,n,r.displayMode,!0);if(r.output==="html"){var c=$r(e,n);a=E.makeSpan(["katex"],[c])}else{var d=ta(e,t,n,r.displayMode,!1),g=$r(e,n);a=E.makeSpan(["katex"],[d,g])}return na(a,r)},Fl=function(e,t,r){var n=ra(r),a=$r(e,n),c=E.makeSpan(["katex"],[a]);return 
na(c,r)},Il={widehat:"^",widecheck:"ˇ",widetilde:"~",utilde:"~",overleftarrow:"←",underleftarrow:"←",xleftarrow:"←",overrightarrow:"→",underrightarrow:"→",xrightarrow:"→",underbrace:"⏟",overbrace:"⏞",overgroup:"⏠",undergroup:"⏡",overleftrightarrow:"↔",underleftrightarrow:"↔",xleftrightarrow:"↔",Overrightarrow:"⇒",xRightarrow:"⇒",overleftharpoon:"↼",xleftharpoonup:"↼",overrightharpoon:"⇀",xrightharpoonup:"⇀",xLeftarrow:"⇐",xLeftrightarrow:"⇔",xhookleftarrow:"↩",xhookrightarrow:"↪",xmapsto:"↦",xrightharpoondown:"⇁",xleftharpoondown:"↽",xrightleftharpoons:"⇌",xleftrightharpoons:"⇋",xtwoheadleftarrow:"↞",xtwoheadrightarrow:"↠",xlongequal:"=",xtofrom:"⇄",xrightleftarrows:"⇄",xrightequilibrium:"⇌",xleftequilibrium:"⇋","\\cdrightarrow":"→","\\cdleftarrow":"←","\\cdlongequal":"="},Ll=function(e){var t=new W.MathNode("mo",[new W.TextNode(Il[e.replace(/^\\/,"")])]);return t.setAttribute("stretchy","true"),t},Ol={overrightarrow:[["rightarrow"],.888,522,"xMaxYMin"],overleftarrow:[["leftarrow"],.888,522,"xMinYMin"],underrightarrow:[["rightarrow"],.888,522,"xMaxYMin"],underleftarrow:[["leftarrow"],.888,522,"xMinYMin"],xrightarrow:[["rightarrow"],1.469,522,"xMaxYMin"],"\\cdrightarrow":[["rightarrow"],3,522,"xMaxYMin"],xleftarrow:[["leftarrow"],1.469,522,"xMinYMin"],"\\cdleftarrow":[["leftarrow"],3,522,"xMinYMin"],Overrightarrow:[["doublerightarrow"],.888,560,"xMaxYMin"],xRightarrow:[["doublerightarrow"],1.526,560,"xMaxYMin"],xLeftarrow:[["doubleleftarrow"],1.526,560,"xMinYMin"],overleftharpoon:[["leftharpoon"],.888,522,"xMinYMin"],xleftharpoonup:[["leftharpoon"],.888,522,"xMinYMin"],xleftharpoondown:[["leftharpoondown"],.888,522,"xMinYMin"],overrightharpoon:[["rightharpoon"],.888,522,"xMaxYMin"],xrightharpoonup:[["rightharpoon"],.888,522,"xMaxYMin"],xrightharpoondown:[["rightharpoondown"],.888,522,"xMaxYMin"],xlongequal:[["longequal"],.888,334,"xMinYMin"],"\\cdlongequal":[["longequal"],3,334,"xMinYMin"],xtwoheadleftarrow:[["twoheadleftarrow"],.888,334,"xMinYMin"],xtwoheadrightarrow:[["twoheadrightarrow"],.888,334,"xMaxYMin"],overleftrightarrow:[["leftarrow","rightarrow"],.888,522],overbrace:[["leftbrace","midbrace","rightbrace"],1.6,548],underbrace:[["leftbraceunder","midbraceunder","rightbraceunder"],1.6,548],underleftrightarrow:[["leftarrow","rightarrow"],.888,522],xleftrightarrow:[["leftarrow","rightarrow"],1.75,522],xLeftrightarrow:[["doubleleftarrow","doublerightarrow"],1.75,560],xrightleftharpoons:[["leftharpoondownplus","rightharpoonplus"],1.75,716],xleftrightharpoons:[["leftharpoonplus","rightharpoondownplus"],1.75,716],xhookleftarrow:[["leftarrow","righthook"],1.08,522],xhookrightarrow:[["lefthook","rightarrow"],1.08,522],overlinesegment:[["leftlinesegment","rightlinesegment"],.888,522],underlinesegment:[["leftlinesegment","rightlinesegment"],.888,522],overgroup:[["leftgroup","rightgroup"],.888,342],undergroup:[["leftgroupunder","rightgroupunder"],.888,342],xmapsto:[["leftmapsto","rightarrow"],1.5,522],xtofrom:[["leftToFrom","rightToFrom"],1.75,528],xrightleftarrows:[["baraboveleftarrow","rightarrowabovebar"],1.75,901],xrightequilibrium:[["baraboveshortleftharpoon","rightharpoonaboveshortbar"],1.75,716],xleftequilibrium:[["shortbaraboveleftharpoon","shortrightharpoonabovebar"],1.75,716]},ql=function(e){return e.type==="ordgroup"?e.body.length:1},Pl=function(e,t){function r(){var g=4e5,y=e.label.slice(1);if(q.contains(["widehat","widecheck","widetilde","utilde"],y)){var 
T=e,B=ql(T.base),F,R,O;if(B>5)y==="widehat"||y==="widecheck"?(F=420,g=2364,O=.42,R=y+"4"):(F=312,g=2340,O=.34,R="tilde4");else{var Y=[1,1,2,2,3,3][B];y==="widehat"||y==="widecheck"?(g=[0,1062,2364,2364,2364][Y],F=[0,239,300,360,420][Y],O=[0,.24,.3,.3,.36,.42][Y],R=y+Y):(g=[0,600,1033,2339,2340][Y],F=[0,260,286,306,312][Y],O=[0,.26,.286,.3,.306,.34][Y],R="tilde"+Y)}var Q=new Nt(R),ae=new yt([Q],{width:"100%",height:X(O),viewBox:"0 0 "+g+" "+F,preserveAspectRatio:"none"});return{span:E.makeSvgSpan([],[ae],t),minWidth:0,height:O}}else{var ue=[],ce=Ol[y],Ae=ce[0],be=ce[1],Me=ce[2],Se=Me/1e3,_e=Ae.length,Ie,Qe;if(_e===1){var dt=ce[3];Ie=["hide-tail"],Qe=[dt]}else if(_e===2)Ie=["halfarrow-left","halfarrow-right"],Qe=["xMinYMin","xMaxYMin"];else if(_e===3)Ie=["brace-left","brace-center","brace-right"],Qe=["xMinYMin","xMidYMin","xMaxYMin"];else throw new Error(`Correct katexImagesData or update code here to support - `+_e+" children.");for(var Oe=0;Oe<_e;Oe++){var v0=new Nt(Ae[Oe]),kt=new yt([v0],{width:"400em",height:X(Se),viewBox:"0 0 "+g+" "+Me,preserveAspectRatio:Qe[Oe]+" slice"}),rt=E.makeSvgSpan([Ie[Oe]],[kt],t);if(_e===1)return{span:rt,minWidth:be,height:Se};rt.style.height=X(Se),ue.push(rt)}return{span:E.makeSpan(["stretchy"],ue,t),minWidth:be,height:Se}}}var n=r(),a=n.span,c=n.minWidth,d=n.height;return a.height=d,a.style.height=X(d),c>0&&(a.style.minWidth=X(c)),a},Hl=function(e,t,r,n,a){var c,d=e.height+e.depth+r+n;if(/fbox|color|angl/.test(t)){if(c=E.makeSpan(["stretchy",t],[],a),t==="fbox"){var g=a.color&&a.getColor();g&&(c.style.borderColor=g)}}else{var y=[];/^[bx]cancel$/.test(t)&&y.push(new Kt({x1:"0",y1:"0",x2:"100%",y2:"100%","stroke-width":"0.046em"})),/^x?cancel$/.test(t)&&y.push(new Kt({x1:"0",y1:"100%",x2:"100%",y2:"0","stroke-width":"0.046em"}));var T=new yt(y,{width:"100%",height:X(d)});c=E.makeSvgSpan([],[T],a)}return c.height=d,c.style.height=X(d),c},Ht={encloseSpan:Hl,mathMLnode:Ll,svgSpan:Pl};function ve(u,e){if(!u||u.type!==e)throw new Error("Expected node of type "+e+", but got "+(u?"node of type "+u.type:String(u)));return u}function Qr(u){var e=xr(u);if(!e)throw new Error("Expected node of symbol group type, but got "+(u?"node of type "+u.type:String(u)));return e}function xr(u){return u&&(u.type==="atom"||$0.hasOwnProperty(u.type))?u:null}var Jr=function(e,t){var r,n,a;e&&e.type==="supsub"?(n=ve(e.base,"accent"),r=n.base,e.base=r,a=Hr(ke(e,t)),e.base=n):(n=ve(e,"accent"),r=n.base);var c=ke(r,t.havingCrampedStyle()),d=n.isShifty&&q.isCharacterBox(r),g=0;if(d){var y=q.getBaseElem(r),T=ke(y,t.havingCrampedStyle());g=z0(T).skew}var B=n.label==="\\c",F=B?c.height+c.depth:Math.min(c.height,t.fontMetrics().xHeight),R;if(n.isStretchy)R=Ht.svgSpan(n,t),R=E.makeVList({positionType:"firstBaseline",children:[{type:"elem",elem:c},{type:"elem",elem:R,wrapperClasses:["svg-align"],wrapperStyle:g>0?{width:"calc(100% - "+X(2*g)+")",marginLeft:X(2*g)}:void 0}]},t);else{var O,Y;n.label==="\\vec"?(O=E.staticSvg("vec",t),Y=E.svgData.vec[1]):(O=E.makeOrd({mode:n.mode,text:n.label},t,"textord"),O=z0(O),O.italic=0,Y=O.width,B&&(F+=O.depth)),R=E.makeSpan(["accent-body"],[O]);var Q=n.label==="\\textcircled";Q&&(R.classes.push("accent-full"),F=c.height);var ae=g;Q||(ae-=Y/2),R.style.left=X(ae),n.label==="\\textcircled"&&(R.style.top=".2em"),R=E.makeVList({positionType:"firstBaseline",children:[{type:"elem",elem:c},{type:"kern",size:-F},{type:"elem",elem:R}]},t)}var ue=E.makeSpan(["mord","accent"],[R],t);return 
a?(a.children[0]=ue,a.height=Math.max(ue.height,a.height),a.classes[0]="mord",a):ue},aa=function(e,t){var r=e.isStretchy?Ht.mathMLnode(e.label):new W.MathNode("mo",[wt(e.label,e.mode)]),n=new W.MathNode("mover",[Ce(e.base,t),r]);return n.setAttribute("accent","true"),n},Ul=new RegExp(["\\acute","\\grave","\\ddot","\\tilde","\\bar","\\breve","\\check","\\hat","\\vec","\\dot","\\mathring"].map(function(u){return"\\"+u}).join("|"));ee({type:"accent",names:["\\acute","\\grave","\\ddot","\\tilde","\\bar","\\breve","\\check","\\hat","\\vec","\\dot","\\mathring","\\widecheck","\\widehat","\\widetilde","\\overrightarrow","\\overleftarrow","\\Overrightarrow","\\overleftrightarrow","\\overgroup","\\overlinesegment","\\overleftharpoon","\\overrightharpoon"],props:{numArgs:1},handler:function(e,t){var r=br(t[0]),n=!Ul.test(e.funcName),a=!n||e.funcName==="\\widehat"||e.funcName==="\\widetilde"||e.funcName==="\\widecheck";return{type:"accent",mode:e.parser.mode,label:e.funcName,isStretchy:n,isShifty:a,base:r}},htmlBuilder:Jr,mathmlBuilder:aa}),ee({type:"accent",names:["\\'","\\`","\\^","\\~","\\=","\\u","\\.",'\\"',"\\c","\\r","\\H","\\v","\\textcircled"],props:{numArgs:1,allowedInText:!0,allowedInMath:!0,argTypes:["primitive"]},handler:function(e,t){var r=t[0],n=e.parser.mode;return n==="math"&&(e.parser.settings.reportNonstrict("mathVsTextAccents","LaTeX's accent "+e.funcName+" works only in text mode"),n="text"),{type:"accent",mode:n,label:e.funcName,isStretchy:!1,isShifty:!0,base:r}},htmlBuilder:Jr,mathmlBuilder:aa}),ee({type:"accentUnder",names:["\\underleftarrow","\\underrightarrow","\\underleftrightarrow","\\undergroup","\\underlinesegment","\\utilde"],props:{numArgs:1},handler:function(e,t){var r=e.parser,n=e.funcName,a=t[0];return{type:"accentUnder",mode:r.mode,label:n,base:a}},htmlBuilder:function(e,t){var r=ke(e.base,t),n=Ht.svgSpan(e,t),a=e.label==="\\utilde"?.12:0,c=E.makeVList({positionType:"top",positionData:r.height,children:[{type:"elem",elem:n,wrapperClasses:["svg-align"]},{type:"kern",size:a},{type:"elem",elem:r}]},t);return E.makeSpan(["mord","accentunder"],[c],t)},mathmlBuilder:function(e,t){var r=Ht.mathMLnode(e.label),n=new W.MathNode("munder",[Ce(e.base,t),r]);return n.setAttribute("accentunder","true"),n}});var wr=function(e){var t=new W.MathNode("mpadded",e?[e]:[]);return t.setAttribute("width","+0.6em"),t.setAttribute("lspace","0.3em"),t};ee({type:"xArrow",names:["\\xleftarrow","\\xrightarrow","\\xLeftarrow","\\xRightarrow","\\xleftrightarrow","\\xLeftrightarrow","\\xhookleftarrow","\\xhookrightarrow","\\xmapsto","\\xrightharpoondown","\\xrightharpoonup","\\xleftharpoondown","\\xleftharpoonup","\\xrightleftharpoons","\\xleftrightharpoons","\\xlongequal","\\xtwoheadrightarrow","\\xtwoheadleftarrow","\\xtofrom","\\xrightleftarrows","\\xrightequilibrium","\\xleftequilibrium","\\\\cdrightarrow","\\\\cdleftarrow","\\\\cdlongequal"],props:{numArgs:1,numOptionalArgs:1},handler:function(e,t,r){var n=e.parser,a=e.funcName;return{type:"xArrow",mode:n.mode,label:a,body:t[0],below:r[0]}},htmlBuilder:function(e,t){var r=t.style,n=t.havingStyle(r.sup()),a=E.wrapFragment(ke(e.body,n,t),t),c=e.label.slice(0,2)==="\\x"?"x":"cd";a.classes.push(c+"-arrow-pad");var d;e.below&&(n=t.havingStyle(r.sub()),d=E.wrapFragment(ke(e.below,n,t),t),d.classes.push(c+"-arrow-pad"));var g=Ht.svgSpan(e,t),y=-t.fontMetrics().axisHeight+.5*g.height,T=-t.fontMetrics().axisHeight-.5*g.height-.111;(a.depth>.25||e.label==="\\xleftequilibrium")&&(T-=a.depth);var B;if(d){var 
F=-t.fontMetrics().axisHeight+d.height+.5*g.height+.111;B=E.makeVList({positionType:"individualShift",children:[{type:"elem",elem:a,shift:T},{type:"elem",elem:g,shift:y},{type:"elem",elem:d,shift:F}]},t)}else B=E.makeVList({positionType:"individualShift",children:[{type:"elem",elem:a,shift:T},{type:"elem",elem:g,shift:y}]},t);return B.children[0].children[0].children[1].classes.push("svg-align"),E.makeSpan(["mrel","x-arrow"],[B],t)},mathmlBuilder:function(e,t){var r=Ht.mathMLnode(e.label);r.setAttribute("minsize",e.label.charAt(0)==="x"?"1.75em":"3.0em");var n;if(e.body){var a=wr(Ce(e.body,t));if(e.below){var c=wr(Ce(e.below,t));n=new W.MathNode("munderover",[r,c,a])}else n=new W.MathNode("mover",[r,a])}else if(e.below){var d=wr(Ce(e.below,t));n=new W.MathNode("munder",[r,d])}else n=wr(),n=new W.MathNode("mover",[r,n]);return n}});var Gl=E.makeSpan;function ia(u,e){var t=je(u.body,e,!0);return Gl([u.mclass],t,e)}function la(u,e){var t,r=ot(u.body,e);return u.mclass==="minner"?t=new W.MathNode("mpadded",r):u.mclass==="mord"?u.isCharacterBox?(t=r[0],t.type="mi"):t=new W.MathNode("mi",r):(u.isCharacterBox?(t=r[0],t.type="mo"):t=new W.MathNode("mo",r),u.mclass==="mbin"?(t.attributes.lspace="0.22em",t.attributes.rspace="0.22em"):u.mclass==="mpunct"?(t.attributes.lspace="0em",t.attributes.rspace="0.17em"):u.mclass==="mopen"||u.mclass==="mclose"?(t.attributes.lspace="0em",t.attributes.rspace="0em"):u.mclass==="minner"&&(t.attributes.lspace="0.0556em",t.attributes.width="+0.1111em")),t}ee({type:"mclass",names:["\\mathord","\\mathbin","\\mathrel","\\mathopen","\\mathclose","\\mathpunct","\\mathinner"],props:{numArgs:1,primitive:!0},handler:function(e,t){var r=e.parser,n=e.funcName,a=t[0];return{type:"mclass",mode:r.mode,mclass:"m"+n.slice(5),body:Ge(a),isCharacterBox:q.isCharacterBox(a)}},htmlBuilder:ia,mathmlBuilder:la});var kr=function(e){var t=e.type==="ordgroup"&&e.body.length?e.body[0]:e;return t.type==="atom"&&(t.family==="bin"||t.family==="rel")?"m"+t.family:"mord"};ee({type:"mclass",names:["\\@binrel"],props:{numArgs:2},handler:function(e,t){var r=e.parser;return{type:"mclass",mode:r.mode,mclass:kr(t[0]),body:Ge(t[1]),isCharacterBox:q.isCharacterBox(t[1])}}}),ee({type:"mclass",names:["\\stackrel","\\overset","\\underset"],props:{numArgs:2},handler:function(e,t){var r=e.parser,n=e.funcName,a=t[1],c=t[0],d;n!=="\\stackrel"?d=kr(a):d="mrel";var g={type:"op",mode:a.mode,limits:!0,alwaysHandleSupSub:!0,parentIsSupSub:!1,symbol:!1,suppressBaseShift:n!=="\\stackrel",body:Ge(a)},y={type:"supsub",mode:c.mode,base:g,sup:n==="\\underset"?null:c,sub:n==="\\underset"?c:null};return{type:"mclass",mode:r.mode,mclass:d,body:[y],isCharacterBox:q.isCharacterBox(y)}},htmlBuilder:ia,mathmlBuilder:la}),ee({type:"pmb",names:["\\pmb"],props:{numArgs:1,allowedInText:!0},handler:function(e,t){var r=e.parser;return{type:"pmb",mode:r.mode,mclass:kr(t[0]),body:Ge(t[0])}},htmlBuilder:function(e,t){var r=je(e.body,t,!0),n=E.makeSpan([e.mclass],r,t);return n.style.textShadow="0.02em 0.01em 0.04px",n},mathmlBuilder:function(e,t){var r=ot(e.body,t),n=new W.MathNode("mstyle",r);return n.setAttribute("style","text-shadow: 0.02em 0.01em 0.04px"),n}});var Vl={">":"\\\\cdrightarrow","<":"\\\\cdleftarrow","=":"\\\\cdlongequal",A:"\\uparrow",V:"\\downarrow","|":"\\Vert",".":"no arrow"},sa=function(){return{type:"styling",body:[],mode:"math",style:"display"}},oa=function(e){return e.type==="textord"&&e.text==="@"},Wl=function(e,t){return(e.type==="mathord"||e.type==="atom")&&e.text===t};function Yl(u,e,t){var 
r=Vl[u];switch(r){case"\\\\cdrightarrow":case"\\\\cdleftarrow":return t.callFunction(r,[e[0]],[e[1]]);case"\\uparrow":case"\\downarrow":{var n=t.callFunction("\\\\cdleft",[e[0]],[]),a={type:"atom",text:r,mode:"math",family:"rel"},c=t.callFunction("\\Big",[a],[]),d=t.callFunction("\\\\cdright",[e[1]],[]),g={type:"ordgroup",mode:"math",body:[n,c,d]};return t.callFunction("\\\\cdparent",[g],[])}case"\\\\cdlongequal":return t.callFunction("\\\\cdlongequal",[],[]);case"\\Vert":{var y={type:"textord",text:"\\Vert",mode:"math"};return t.callFunction("\\Big",[y],[])}default:return{type:"textord",text:" ",mode:"math"}}}function jl(u){var e=[];for(u.gullet.beginGroup(),u.gullet.macros.set("\\cr","\\\\\\relax"),u.gullet.beginGroup();;){e.push(u.parseExpression(!1,"\\\\")),u.gullet.endGroup(),u.gullet.beginGroup();var t=u.fetch().text;if(t==="&"||t==="\\\\")u.consume();else if(t==="\\end"){e[e.length-1].length===0&&e.pop();break}else throw new p("Expected \\\\ or \\cr or \\end",u.nextToken)}for(var r=[],n=[r],a=0;a-1))if("<>AV".indexOf(y)>-1)for(var B=0;B<2;B++){for(var F=!0,R=g+1;RAV=|." after @',c[g]);var O=Yl(y,T,u),Y={type:"styling",body:[O],mode:"math",style:"display"};r.push(Y),d=sa()}a%2===0?r.push(d):r.shift(),r=[],n.push(r)}u.gullet.endGroup(),u.gullet.endGroup();var Q=new Array(n[0].length).fill({type:"align",align:"c",pregap:.25,postgap:.25});return{type:"array",mode:"math",body:n,arraystretch:1,addJot:!0,rowGaps:[null],cols:Q,colSeparationType:"CD",hLinesBeforeRow:new Array(n.length+1).fill([])}}ee({type:"cdlabel",names:["\\\\cdleft","\\\\cdright"],props:{numArgs:1},handler:function(e,t){var r=e.parser,n=e.funcName;return{type:"cdlabel",mode:r.mode,side:n.slice(4),label:t[0]}},htmlBuilder:function(e,t){var r=t.havingStyle(t.style.sup()),n=E.wrapFragment(ke(e.label,r,t),t);return n.classes.push("cd-label-"+e.side),n.style.bottom=X(.8-n.depth),n.height=0,n.depth=0,n},mathmlBuilder:function(e,t){var r=new W.MathNode("mrow",[Ce(e.label,t)]);return r=new W.MathNode("mpadded",[r]),r.setAttribute("width","0"),e.side==="left"&&r.setAttribute("lspace","-1width"),r.setAttribute("voffset","0.7em"),r=new W.MathNode("mstyle",[r]),r.setAttribute("displaystyle","false"),r.setAttribute("scriptlevel","1"),r}}),ee({type:"cdlabelparent",names:["\\\\cdparent"],props:{numArgs:1},handler:function(e,t){var r=e.parser;return{type:"cdlabelparent",mode:r.mode,fragment:t[0]}},htmlBuilder:function(e,t){var r=E.wrapFragment(ke(e.fragment,t),t);return r.classes.push("cd-vert-arrow"),r},mathmlBuilder:function(e,t){return new W.MathNode("mrow",[Ce(e.fragment,t)])}}),ee({type:"textord",names:["\\@char"],props:{numArgs:1,allowedInText:!0},handler:function(e,t){for(var r=e.parser,n=ve(t[0],"ordgroup"),a=n.body,c="",d=0;d=1114111)throw new p("\\@char with invalid code point "+c);return y<=65535?T=String.fromCharCode(y):(y-=65536,T=String.fromCharCode((y>>10)+55296,(y&1023)+56320)),{type:"textord",mode:r.mode,text:T}}});var ua=function(e,t){var r=je(e.body,t.withColor(e.color),!1);return E.makeFragment(r)},ca=function(e,t){var r=ot(e.body,t.withColor(e.color)),n=new W.MathNode("mstyle",r);return n.setAttribute("mathcolor",e.color),n};ee({type:"color",names:["\\textcolor"],props:{numArgs:2,allowedInText:!0,argTypes:["color","original"]},handler:function(e,t){var r=e.parser,n=ve(t[0],"color-token").color,a=t[1];return{type:"color",mode:r.mode,color:n,body:Ge(a)}},htmlBuilder:ua,mathmlBuilder:ca}),ee({type:"color",names:["\\color"],props:{numArgs:1,allowedInText:!0,argTypes:["color"]},handler:function(e,t){var 
r=e.parser,n=e.breakOnTokenText,a=ve(t[0],"color-token").color;r.gullet.macros.set("\\current@color",a);var c=r.parseExpression(!0,n);return{type:"color",mode:r.mode,color:a,body:c}},htmlBuilder:ua,mathmlBuilder:ca}),ee({type:"cr",names:["\\\\"],props:{numArgs:0,numOptionalArgs:0,allowedInText:!0},handler:function(e,t,r){var n=e.parser,a=n.gullet.future().text==="["?n.parseSizeGroup(!0):null,c=!n.settings.displayMode||!n.settings.useStrictBehavior("newLineInDisplayMode","In LaTeX, \\\\ or \\newline does nothing in display mode");return{type:"cr",mode:n.mode,newLine:c,size:a&&ve(a,"size").value}},htmlBuilder:function(e,t){var r=E.makeSpan(["mspace"],[],t);return e.newLine&&(r.classes.push("newline"),e.size&&(r.style.marginTop=X(Ee(e.size,t)))),r},mathmlBuilder:function(e,t){var r=new W.MathNode("mspace");return e.newLine&&(r.setAttribute("linebreak","newline"),e.size&&r.setAttribute("height",X(Ee(e.size,t)))),r}});var en={"\\global":"\\global","\\long":"\\\\globallong","\\\\globallong":"\\\\globallong","\\def":"\\gdef","\\gdef":"\\gdef","\\edef":"\\xdef","\\xdef":"\\xdef","\\let":"\\\\globallet","\\futurelet":"\\\\globalfuture"},ha=function(e){var t=e.text;if(/^(?:[\\{}$&#^_]|EOF)$/.test(t))throw new p("Expected a control sequence",e);return t},Xl=function(e){var t=e.gullet.popToken();return t.text==="="&&(t=e.gullet.popToken(),t.text===" "&&(t=e.gullet.popToken())),t},ma=function(e,t,r,n){var a=e.gullet.macros.get(r.text);a==null&&(r.noexpand=!0,a={tokens:[r],numArgs:0,unexpandable:!e.gullet.isExpandable(r.text)}),e.gullet.macros.set(t,a,n)};ee({type:"internal",names:["\\global","\\long","\\\\globallong"],props:{numArgs:0,allowedInText:!0},handler:function(e){var t=e.parser,r=e.funcName;t.consumeSpaces();var n=t.fetch();if(en[n.text])return(r==="\\global"||r==="\\\\globallong")&&(n.text=en[n.text]),ve(t.parseFunction(),"internal");throw new p("Invalid token after macro prefix",n)}}),ee({type:"internal",names:["\\def","\\gdef","\\edef","\\xdef"],props:{numArgs:0,allowedInText:!0,primitive:!0},handler:function(e){var t=e.parser,r=e.funcName,n=t.gullet.popToken(),a=n.text;if(/^(?:[\\{}$&#^_]|EOF)$/.test(a))throw new p("Expected a control sequence",n);for(var c=0,d,g=[[]];t.gullet.future().text!=="{";)if(n=t.gullet.popToken(),n.text==="#"){if(t.gullet.future().text==="{"){d=t.gullet.future(),g[c].push("{");break}if(n=t.gullet.popToken(),!/^[1-9]$/.test(n.text))throw new p('Invalid argument number "'+n.text+'"');if(parseInt(n.text)!==c+1)throw new p('Argument number "'+n.text+'" out of order');c++,g.push([])}else{if(n.text==="EOF")throw new p("Expected a macro definition");g[c].push(n.text)}var y=t.gullet.consumeArg(),T=y.tokens;return d&&T.unshift(d),(r==="\\edef"||r==="\\xdef")&&(T=t.gullet.expandTokens(T),T.reverse()),t.gullet.macros.set(a,{tokens:T,numArgs:c,delimiters:g},r===en[r]),{type:"internal",mode:t.mode}}}),ee({type:"internal",names:["\\let","\\\\globallet"],props:{numArgs:0,allowedInText:!0,primitive:!0},handler:function(e){var t=e.parser,r=e.funcName,n=ha(t.gullet.popToken());t.gullet.consumeSpaces();var a=Xl(t);return ma(t,n,a,r==="\\\\globallet"),{type:"internal",mode:t.mode}}}),ee({type:"internal",names:["\\futurelet","\\\\globalfuture"],props:{numArgs:0,allowedInText:!0,primitive:!0},handler:function(e){var t=e.parser,r=e.funcName,n=ha(t.gullet.popToken()),a=t.gullet.popToken(),c=t.gullet.popToken();return ma(t,n,c,r==="\\\\globalfuture"),t.gullet.pushToken(c),t.gullet.pushToken(a),{type:"internal",mode:t.mode}}});var Q0=function(e,t,r){var 
n=Ne.math[e]&&Ne.math[e].replace,a=Dt(n||e,t,r);if(!a)throw new Error("Unsupported symbol "+e+" and font size "+t+".");return a},tn=function(e,t,r,n){var a=r.havingBaseStyle(t),c=E.makeSpan(n.concat(a.sizingClasses(r)),[e],r),d=a.sizeMultiplier/r.sizeMultiplier;return c.height*=d,c.depth*=d,c.maxFontSize=a.sizeMultiplier,c},da=function(e,t,r){var n=t.havingBaseStyle(r),a=(1-t.sizeMultiplier/n.sizeMultiplier)*t.fontMetrics().axisHeight;e.classes.push("delimcenter"),e.style.top=X(a),e.height-=a,e.depth+=a},$l=function(e,t,r,n,a,c){var d=E.makeSymbol(e,"Main-Regular",a,n),g=tn(d,t,n,c);return r&&da(g,n,t),g},Zl=function(e,t,r,n){return E.makeSymbol(e,"Size"+t+"-Regular",r,n)},fa=function(e,t,r,n,a,c){var d=Zl(e,t,a,n),g=tn(E.makeSpan(["delimsizing","size"+t],[d],n),J.TEXT,n,c);return r&&da(g,n,J.TEXT),g},rn=function(e,t,r){var n;t==="Size1-Regular"?n="delim-size1":n="delim-size4";var a=E.makeSpan(["delimsizinginner",n],[E.makeSpan([],[E.makeSymbol(e,t,r)])]);return{type:"elem",elem:a}},nn=function(e,t,r){var n=vt["Size4-Regular"][e.charCodeAt(0)]?vt["Size4-Regular"][e.charCodeAt(0)][4]:vt["Size1-Regular"][e.charCodeAt(0)][4],a=new Nt("inner",k0(e,Math.round(1e3*t))),c=new yt([a],{width:X(n),height:X(t),style:"width:"+X(n),viewBox:"0 0 "+1e3*n+" "+Math.round(1e3*t),preserveAspectRatio:"xMinYMin"}),d=E.makeSvgSpan([],[c],r);return d.height=t,d.style.height=X(t),d.style.width=X(n),{type:"elem",elem:d}},an=.008,Sr={type:"kern",size:-1*an},Kl=["|","\\lvert","\\rvert","\\vert"],Ql=["\\|","\\lVert","\\rVert","\\Vert"],pa=function(e,t,r,n,a,c){var d,g,y,T,B="",F=0;d=y=T=e,g=null;var R="Size1-Regular";e==="\\uparrow"?y=T="⏐":e==="\\Uparrow"?y=T="‖":e==="\\downarrow"?d=y="⏐":e==="\\Downarrow"?d=y="‖":e==="\\updownarrow"?(d="\\uparrow",y="⏐",T="\\downarrow"):e==="\\Updownarrow"?(d="\\Uparrow",y="‖",T="\\Downarrow"):q.contains(Kl,e)?(y="∣",B="vert",F=333):q.contains(Ql,e)?(y="∥",B="doublevert",F=556):e==="["||e==="\\lbrack"?(d="⎡",y="⎢",T="⎣",R="Size4-Regular",B="lbrack",F=667):e==="]"||e==="\\rbrack"?(d="⎤",y="⎥",T="⎦",R="Size4-Regular",B="rbrack",F=667):e==="\\lfloor"||e==="⌊"?(y=d="⎢",T="⎣",R="Size4-Regular",B="lfloor",F=667):e==="\\lceil"||e==="⌈"?(d="⎡",y=T="⎢",R="Size4-Regular",B="lceil",F=667):e==="\\rfloor"||e==="⌋"?(y=d="⎥",T="⎦",R="Size4-Regular",B="rfloor",F=667):e==="\\rceil"||e==="⌉"?(d="⎤",y=T="⎥",R="Size4-Regular",B="rceil",F=667):e==="("||e==="\\lparen"?(d="⎛",y="⎜",T="⎝",R="Size4-Regular",B="lparen",F=875):e===")"||e==="\\rparen"?(d="⎞",y="⎟",T="⎠",R="Size4-Regular",B="rparen",F=875):e==="\\{"||e==="\\lbrace"?(d="⎧",g="⎨",T="⎩",y="⎪",R="Size4-Regular"):e==="\\}"||e==="\\rbrace"?(d="⎫",g="⎬",T="⎭",y="⎪",R="Size4-Regular"):e==="\\lgroup"||e==="⟮"?(d="⎧",T="⎩",y="⎪",R="Size4-Regular"):e==="\\rgroup"||e==="⟯"?(d="⎫",T="⎭",y="⎪",R="Size4-Regular"):e==="\\lmoustache"||e==="⎰"?(d="⎧",T="⎭",y="⎪",R="Size4-Regular"):(e==="\\rmoustache"||e==="⎱")&&(d="⎫",T="⎩",y="⎪",R="Size4-Regular");var O=Q0(d,R,a),Y=O.height+O.depth,Q=Q0(y,R,a),ae=Q.height+Q.depth,ue=Q0(T,R,a),ce=ue.height+ue.depth,Ae=0,be=1;if(g!==null){var Me=Q0(g,R,a);Ae=Me.height+Me.depth,be=2}var Se=Y+ce+Ae,_e=Math.max(0,Math.ceil((t-Se)/(be*ae))),Ie=Se+_e*be*ae,Qe=n.fontMetrics().axisHeight;r&&(Qe*=n.sizeMultiplier);var dt=Ie/2-Qe,Oe=[];if(B.length>0){var v0=Ie-Y-ce,kt=Math.round(Ie*1e3),rt=ar(B,Math.round(v0*1e3)),n0=new Nt(B,rt),C0=(F/1e3).toFixed(3)+"em",D0=(kt/1e3).toFixed(3)+"em",An=new yt([n0],{width:C0,height:D0,viewBox:"0 0 "+F+" 
"+kt}),a0=E.makeSvgSpan([],[An],n);a0.height=kt/1e3,a0.style.width=C0,a0.style.height=D0,Oe.push({type:"elem",elem:a0})}else{if(Oe.push(rn(T,R,a)),Oe.push(Sr),g===null){var i0=Ie-Y-ce+2*an;Oe.push(nn(y,i0,n))}else{var St=(Ie-Y-ce-Ae)/2+2*an;Oe.push(nn(y,St,n)),Oe.push(Sr),Oe.push(rn(g,R,a)),Oe.push(Sr),Oe.push(nn(y,St,n))}Oe.push(Sr),Oe.push(rn(d,R,a))}var tr=n.havingBaseStyle(J.TEXT),Tn=E.makeVList({positionType:"bottom",positionData:dt,children:Oe},tr);return tn(E.makeSpan(["delimsizing","mult"],[Tn],tr),J.TEXT,n,c)},ln=80,sn=.08,on=function(e,t,r,n,a){var c=Wt(e,n,r),d=new Nt(e,c),g=new yt([d],{width:"400em",height:X(t),viewBox:"0 0 400000 "+r,preserveAspectRatio:"xMinYMin slice"});return E.makeSvgSpan(["hide-tail"],[g],a)},Jl=function(e,t){var r=t.havingBaseSizing(),n=ya("\\surd",e*r.sizeMultiplier,ba,r),a=r.sizeMultiplier,c=Math.max(0,t.minRuleThickness-t.fontMetrics().sqrtRuleThickness),d,g=0,y=0,T=0,B;return n.type==="small"?(T=1e3+1e3*c+ln,e<1?a=1:e<1.4&&(a=.7),g=(1+c+sn)/a,y=(1+c)/a,d=on("sqrtMain",g,T,c,t),d.style.minWidth="0.853em",B=.833/a):n.type==="large"?(T=(1e3+ln)*J0[n.size],y=(J0[n.size]+c)/a,g=(J0[n.size]+c+sn)/a,d=on("sqrtSize"+n.size,g,T,c,t),d.style.minWidth="1.02em",B=1/a):(g=e+c+sn,y=e+c,T=Math.floor(1e3*e+c)+ln,d=on("sqrtTall",g,T,c,t),d.style.minWidth="0.742em",B=1.056),d.height=y,d.style.height=X(g),{span:d,advanceWidth:B,ruleWidth:(t.fontMetrics().sqrtRuleThickness+c)*a}},ga=["(","\\lparen",")","\\rparen","[","\\lbrack","]","\\rbrack","\\{","\\lbrace","\\}","\\rbrace","\\lfloor","\\rfloor","⌊","⌋","\\lceil","\\rceil","⌈","⌉","\\surd"],es=["\\uparrow","\\downarrow","\\updownarrow","\\Uparrow","\\Downarrow","\\Updownarrow","|","\\|","\\vert","\\Vert","\\lvert","\\rvert","\\lVert","\\rVert","\\lgroup","\\rgroup","⟮","⟯","\\lmoustache","\\rmoustache","⎰","⎱"],va=["<",">","\\langle","\\rangle","/","\\backslash","\\lt","\\gt"],J0=[0,1.2,1.8,2.4,3],ts=function(e,t,r,n,a){if(e==="<"||e==="\\lt"||e==="⟨"?e="\\langle":(e===">"||e==="\\gt"||e==="⟩")&&(e="\\rangle"),q.contains(ga,e)||q.contains(va,e))return fa(e,t,!1,r,n,a);if(q.contains(es,e))return pa(e,J0[t],!1,r,n,a);throw new p("Illegal delimiter: '"+e+"'")},rs=[{type:"small",style:J.SCRIPTSCRIPT},{type:"small",style:J.SCRIPT},{type:"small",style:J.TEXT},{type:"large",size:1},{type:"large",size:2},{type:"large",size:3},{type:"large",size:4}],ns=[{type:"small",style:J.SCRIPTSCRIPT},{type:"small",style:J.SCRIPT},{type:"small",style:J.TEXT},{type:"stack"}],ba=[{type:"small",style:J.SCRIPTSCRIPT},{type:"small",style:J.SCRIPT},{type:"small",style:J.TEXT},{type:"large",size:1},{type:"large",size:2},{type:"large",size:3},{type:"large",size:4},{type:"stack"}],as=function(e){if(e.type==="small")return"Main-Regular";if(e.type==="large")return"Size"+e.size+"-Regular";if(e.type==="stack")return"Size4-Regular";throw new Error("Add support for delim type '"+e.type+"' here.")},ya=function(e,t,r,n){for(var a=Math.min(2,3-n.style.size),c=a;ct)return r[c]}return r[r.length-1]},xa=function(e,t,r,n,a,c){e==="<"||e==="\\lt"||e==="⟨"?e="\\langle":(e===">"||e==="\\gt"||e==="⟩")&&(e="\\rangle");var d;q.contains(va,e)?d=rs:q.contains(ga,e)?d=ba:d=ns;var g=ya(e,t,d,n);return g.type==="small"?$l(e,g.style,r,n,a,c):g.type==="large"?fa(e,g.size,r,n,a,c):pa(e,t,r,n,a,c)},is=function(e,t,r,n,a,c){var d=n.fontMetrics().axisHeight*n.sizeMultiplier,g=901,y=5/n.fontMetrics().ptPerEm,T=Math.max(t-d,r+d),B=Math.max(T/500*g,2*T-y);return 
xa(e,B,!0,n,a,c)},Ut={sqrtImage:Jl,sizedDelim:ts,sizeToMaxHeight:J0,customSizedDelim:xa,leftRightDelim:is},wa={"\\bigl":{mclass:"mopen",size:1},"\\Bigl":{mclass:"mopen",size:2},"\\biggl":{mclass:"mopen",size:3},"\\Biggl":{mclass:"mopen",size:4},"\\bigr":{mclass:"mclose",size:1},"\\Bigr":{mclass:"mclose",size:2},"\\biggr":{mclass:"mclose",size:3},"\\Biggr":{mclass:"mclose",size:4},"\\bigm":{mclass:"mrel",size:1},"\\Bigm":{mclass:"mrel",size:2},"\\biggm":{mclass:"mrel",size:3},"\\Biggm":{mclass:"mrel",size:4},"\\big":{mclass:"mord",size:1},"\\Big":{mclass:"mord",size:2},"\\bigg":{mclass:"mord",size:3},"\\Bigg":{mclass:"mord",size:4}},ls=["(","\\lparen",")","\\rparen","[","\\lbrack","]","\\rbrack","\\{","\\lbrace","\\}","\\rbrace","\\lfloor","\\rfloor","⌊","⌋","\\lceil","\\rceil","⌈","⌉","<",">","\\langle","⟨","\\rangle","⟩","\\lt","\\gt","\\lvert","\\rvert","\\lVert","\\rVert","\\lgroup","\\rgroup","⟮","⟯","\\lmoustache","\\rmoustache","⎰","⎱","/","\\backslash","|","\\vert","\\|","\\Vert","\\uparrow","\\Uparrow","\\downarrow","\\Downarrow","\\updownarrow","\\Updownarrow","."];function Ar(u,e){var t=xr(u);if(t&&q.contains(ls,t.text))return t;throw t?new p("Invalid delimiter '"+t.text+"' after '"+e.funcName+"'",u):new p("Invalid delimiter type '"+u.type+"'",u)}ee({type:"delimsizing",names:["\\bigl","\\Bigl","\\biggl","\\Biggl","\\bigr","\\Bigr","\\biggr","\\Biggr","\\bigm","\\Bigm","\\biggm","\\Biggm","\\big","\\Big","\\bigg","\\Bigg"],props:{numArgs:1,argTypes:["primitive"]},handler:function(e,t){var r=Ar(t[0],e);return{type:"delimsizing",mode:e.parser.mode,size:wa[e.funcName].size,mclass:wa[e.funcName].mclass,delim:r.text}},htmlBuilder:function(e,t){return e.delim==="."?E.makeSpan([e.mclass]):Ut.sizedDelim(e.delim,e.size,t,e.mode,[e.mclass])},mathmlBuilder:function(e){var t=[];e.delim!=="."&&t.push(wt(e.delim,e.mode));var r=new W.MathNode("mo",t);e.mclass==="mopen"||e.mclass==="mclose"?r.setAttribute("fence","true"):r.setAttribute("fence","false"),r.setAttribute("stretchy","true");var n=X(Ut.sizeToMaxHeight[e.size]);return r.setAttribute("minsize",n),r.setAttribute("maxsize",n),r}});function ka(u){if(!u.body)throw new Error("Bug: The leftright ParseNode wasn't fully parsed.")}ee({type:"leftright-right",names:["\\right"],props:{numArgs:1,primitive:!0},handler:function(e,t){var r=e.parser.gullet.macros.get("\\current@color");if(r&&typeof r!="string")throw new p("\\current@color set to non-string in \\right");return{type:"leftright-right",mode:e.parser.mode,delim:Ar(t[0],e).text,color:r}}}),ee({type:"leftright",names:["\\left"],props:{numArgs:1,primitive:!0},handler:function(e,t){var r=Ar(t[0],e),n=e.parser;++n.leftrightDepth;var a=n.parseExpression(!1);--n.leftrightDepth,n.expect("\\right",!1);var c=ve(n.parseFunction(),"leftright-right");return{type:"leftright",mode:n.mode,body:a,left:r.text,right:c.delim,rightColor:c.color}},htmlBuilder:function(e,t){ka(e);for(var 
r=je(e.body,t,!0,["mopen","mclose"]),n=0,a=0,c=!1,d=0;d-1?"mpadded":"menclose",[Ce(e.body,t)]);switch(e.label){case"\\cancel":n.setAttribute("notation","updiagonalstrike");break;case"\\bcancel":n.setAttribute("notation","downdiagonalstrike");break;case"\\phase":n.setAttribute("notation","phasorangle");break;case"\\sout":n.setAttribute("notation","horizontalstrike");break;case"\\fbox":n.setAttribute("notation","box");break;case"\\angl":n.setAttribute("notation","actuarial");break;case"\\fcolorbox":case"\\colorbox":if(r=t.fontMetrics().fboxsep*t.fontMetrics().ptPerEm,n.setAttribute("width","+"+2*r+"pt"),n.setAttribute("height","+"+2*r+"pt"),n.setAttribute("lspace",r+"pt"),n.setAttribute("voffset",r+"pt"),e.label==="\\fcolorbox"){var a=Math.max(t.fontMetrics().fboxrule,t.minRuleThickness);n.setAttribute("style","border: "+a+"em solid "+String(e.borderColor))}break;case"\\xcancel":n.setAttribute("notation","updiagonalstrike downdiagonalstrike");break}return e.backgroundColor&&n.setAttribute("mathbackground",e.backgroundColor),n};ee({type:"enclose",names:["\\colorbox"],props:{numArgs:2,allowedInText:!0,argTypes:["color","text"]},handler:function(e,t,r){var n=e.parser,a=e.funcName,c=ve(t[0],"color-token").color,d=t[1];return{type:"enclose",mode:n.mode,label:a,backgroundColor:c,body:d}},htmlBuilder:un,mathmlBuilder:cn}),ee({type:"enclose",names:["\\fcolorbox"],props:{numArgs:3,allowedInText:!0,argTypes:["color","color","text"]},handler:function(e,t,r){var n=e.parser,a=e.funcName,c=ve(t[0],"color-token").color,d=ve(t[1],"color-token").color,g=t[2];return{type:"enclose",mode:n.mode,label:a,backgroundColor:d,borderColor:c,body:g}},htmlBuilder:un,mathmlBuilder:cn}),ee({type:"enclose",names:["\\fbox"],props:{numArgs:1,argTypes:["hbox"],allowedInText:!0},handler:function(e,t){var r=e.parser;return{type:"enclose",mode:r.mode,label:"\\fbox",body:t[0]}}}),ee({type:"enclose",names:["\\cancel","\\bcancel","\\xcancel","\\sout","\\phase"],props:{numArgs:1},handler:function(e,t){var r=e.parser,n=e.funcName,a=t[0];return{type:"enclose",mode:r.mode,label:n,body:a}},htmlBuilder:un,mathmlBuilder:cn}),ee({type:"enclose",names:["\\angl"],props:{numArgs:1,argTypes:["hbox"],allowedInText:!1},handler:function(e,t){var r=e.parser;return{type:"enclose",mode:r.mode,label:"\\angl",body:t[0]}}});var Sa={};function Rt(u){for(var e=u.type,t=u.names,r=u.props,n=u.handler,a=u.htmlBuilder,c=u.mathmlBuilder,d={type:e,numArgs:r.numArgs||0,allowedInText:!1,numOptionalArgs:0,handler:n},g=0;g1||!T)&&Y.pop(),ae.length0&&(ce+=.25),y.push({pos:ce,isDashed:Er[Br]})}for(Ae(c[0]),r=0;r0&&(dt+=ue,Se=d)){var R0=void 0;(n>0||e.hskipBeforeAndAfter)&&(R0=q.deflt(St.pregap,F),R0!==0&&(rt=E.makeSpan(["arraycolsep"],[]),rt.style.width=X(R0),kt.push(rt)));var F0=[];for(r=0;r0){for(var Rs=E.makeLineSpan("hline",t,T),Fs=E.makeLineSpan("hdashline",t,T),Mn=[{type:"elem",elem:g,shift:0}];y.length>0;){var si=y.pop(),oi=si.pos-Oe;si.isDashed?Mn.push({type:"elem",elem:Fs,shift:oi}):Mn.push({type:"elem",elem:Rs,shift:oi})}g=E.makeVList({positionType:"individualShift",children:Mn},t)}if(C0.length===0)return E.makeSpan(["mord"],[g],t);var zn=E.makeVList({positionType:"individualShift",children:C0},t);return zn=E.makeSpan(["tag"],[zn],t),E.makeFragment([g,zn])},ss={c:"center ",l:"left ",r:"right "},It=function(e,t){for(var r=[],n=new W.MathNode("mtd",[],["mtr-glue"]),a=new W.MathNode("mtd",[],["mml-eqn-num"]),c=0;c0){var O=e.cols,Y="",Q=!1,ae=0,ue=O.length;O[0].type==="separator"&&(F+="top ",ae=1),O[O.length-1].type==="separator"&&(F+="bottom ",ue-=1);for(var 
ce=ae;ce0?"left ":"",F+=_e[_e.length-1].length>0?"right ":"";for(var Ie=1;Ie<_e.length-1;Ie++)Se+=_e[Ie].length===0?"none ":_e[Ie][0]?"dashed ":"solid ";return/[sd]/.test(Se)&&T.setAttribute("rowlines",Se.trim()),F!==""&&(T=new W.MathNode("menclose",[T]),T.setAttribute("notation",F.trim())),e.arraystretch&&e.arraystretch<1&&(T=new W.MathNode("mstyle",[T]),T.setAttribute("scriptlevel","1")),T},Ma=function(e,t){e.envName.indexOf("ed")===-1&&Tr(e);var r=[],n=e.envName.indexOf("at")>-1?"alignat":"align",a=e.envName==="split",c=t0(e.parser,{cols:r,addJot:!0,autoTag:a?void 0:hn(e.envName),emptySingleRow:!0,colSeparationType:n,maxNumCols:a?2:void 0,leqno:e.parser.settings.leqno},"display"),d,g=0,y={type:"ordgroup",mode:e.mode,body:[]};if(t[0]&&t[0].type==="ordgroup"){for(var T="",B=0;B0&&R&&(Q=1),r[O]={type:"align",align:Y,pregap:Q,postgap:0}}return c.colSeparationType=R?"align":"alignat",c};Rt({type:"array",names:["array","darray"],props:{numArgs:1},handler:function(e,t){var r=xr(t[0]),n=r?[t[0]]:ve(t[0],"ordgroup").body,a=n.map(function(d){var g=Qr(d),y=g.text;if("lcr".indexOf(y)!==-1)return{type:"align",align:y};if(y==="|")return{type:"separator",separator:"|"};if(y===":")return{type:"separator",separator:":"};throw new p("Unknown column alignment: "+y,d)}),c={cols:a,hskipBeforeAndAfter:!0,maxNumCols:a.length};return t0(e.parser,c,mn(e.envName))},htmlBuilder:Ft,mathmlBuilder:It}),Rt({type:"array",names:["matrix","pmatrix","bmatrix","Bmatrix","vmatrix","Vmatrix","matrix*","pmatrix*","bmatrix*","Bmatrix*","vmatrix*","Vmatrix*"],props:{numArgs:0},handler:function(e){var t={matrix:null,pmatrix:["(",")"],bmatrix:["[","]"],Bmatrix:["\\{","\\}"],vmatrix:["|","|"],Vmatrix:["\\Vert","\\Vert"]}[e.envName.replace("*","")],r="c",n={hskipBeforeAndAfter:!1,cols:[{type:"align",align:r}]};if(e.envName.charAt(e.envName.length-1)==="*"){var a=e.parser;if(a.consumeSpaces(),a.fetch().text==="["){if(a.consume(),a.consumeSpaces(),r=a.fetch().text,"lcr".indexOf(r)===-1)throw new p("Expected l or c or r",a.nextToken);a.consume(),a.consumeSpaces(),a.expect("]"),a.consume(),n.cols=[{type:"align",align:r}]}}var c=t0(e.parser,n,mn(e.envName)),d=Math.max.apply(Math,[0].concat(c.body.map(function(g){return g.length})));return c.cols=new Array(d).fill({type:"align",align:r}),t?{type:"leftright",mode:e.mode,body:[c],left:t[0],right:t[1],rightColor:void 0}:c},htmlBuilder:Ft,mathmlBuilder:It}),Rt({type:"array",names:["smallmatrix"],props:{numArgs:0},handler:function(e){var t={arraystretch:.5},r=t0(e.parser,t,"script");return r.colSeparationType="small",r},htmlBuilder:Ft,mathmlBuilder:It}),Rt({type:"array",names:["subarray"],props:{numArgs:1},handler:function(e,t){var r=xr(t[0]),n=r?[t[0]]:ve(t[0],"ordgroup").body,a=n.map(function(d){var g=Qr(d),y=g.text;if("lc".indexOf(y)!==-1)return{type:"align",align:y};throw new p("Unknown column alignment: "+y,d)});if(a.length>1)throw new p("{subarray} can contain only one column");var c={cols:a,hskipBeforeAndAfter:!1,arraystretch:.5};if(c=t0(e.parser,c,"script"),c.body.length>0&&c.body[0].length>1)throw new p("{subarray} can contain only one column");return c},htmlBuilder:Ft,mathmlBuilder:It}),Rt({type:"array",names:["cases","dcases","rcases","drcases"],props:{numArgs:0},handler:function(e){var t={arraystretch:1.2,cols:[{type:"align",align:"l",pregap:0,postgap:1},{type:"align",align:"l",pregap:0,postgap:0}]},r=t0(e.parser,t,mn(e.envName));return{type:"leftright",mode:e.mode,body:[r],left:e.envName.indexOf("r")>-1?".":"\\{",right:e.envName.indexOf("r")>-1?"\\}":".",rightColor:void 
0}},htmlBuilder:Ft,mathmlBuilder:It}),Rt({type:"array",names:["align","align*","aligned","split"],props:{numArgs:0},handler:Ma,htmlBuilder:Ft,mathmlBuilder:It}),Rt({type:"array",names:["gathered","gather","gather*"],props:{numArgs:0},handler:function(e){q.contains(["gather","gather*"],e.envName)&&Tr(e);var t={cols:[{type:"align",align:"c"}],addJot:!0,colSeparationType:"gather",autoTag:hn(e.envName),emptySingleRow:!0,leqno:e.parser.settings.leqno};return t0(e.parser,t,"display")},htmlBuilder:Ft,mathmlBuilder:It}),Rt({type:"array",names:["alignat","alignat*","alignedat"],props:{numArgs:1},handler:Ma,htmlBuilder:Ft,mathmlBuilder:It}),Rt({type:"array",names:["equation","equation*"],props:{numArgs:0},handler:function(e){Tr(e);var t={autoTag:hn(e.envName),emptySingleRow:!0,singleRow:!0,maxNumCols:1,leqno:e.parser.settings.leqno};return t0(e.parser,t,"display")},htmlBuilder:Ft,mathmlBuilder:It}),Rt({type:"array",names:["CD"],props:{numArgs:0},handler:function(e){return Tr(e),jl(e.parser)},htmlBuilder:Ft,mathmlBuilder:It}),b("\\nonumber","\\gdef\\@eqnsw{0}"),b("\\notag","\\nonumber"),ee({type:"text",names:["\\hline","\\hdashline"],props:{numArgs:0,allowedInText:!0,allowedInMath:!0},handler:function(e,t){throw new p(e.funcName+" valid only within array environment")}});var os=Sa,za=os;ee({type:"environment",names:["\\begin","\\end"],props:{numArgs:1,argTypes:["text"]},handler:function(e,t){var r=e.parser,n=e.funcName,a=t[0];if(a.type!=="ordgroup")throw new p("Invalid environment name",a);for(var c="",d=0;d=J.SCRIPT.id?r.text():J.DISPLAY:e==="text"&&r.size===J.DISPLAY.size?r=J.TEXT:e==="script"?r=J.SCRIPT:e==="scriptscript"&&(r=J.SCRIPTSCRIPT),r},dn=function(e,t){var r=Ca(e.size,t.style),n=r.fracNum(),a=r.fracDen(),c;c=t.havingStyle(n);var d=ke(e.numer,c,t);if(e.continued){var g=8.5/t.fontMetrics().ptPerEm,y=3.5/t.fontMetrics().ptPerEm;d.height=d.height0?Y=3*R:Y=7*R,Q=t.fontMetrics().denom1):(F>0?(O=t.fontMetrics().num2,Y=R):(O=t.fontMetrics().num3,Y=3*R),Q=t.fontMetrics().denom2);var ae;if(B){var ce=t.fontMetrics().axisHeight;O-d.depth-(ce+.5*F)0&&(t=e,t=t==="."?null:t),t};ee({type:"genfrac",names:["\\genfrac"],props:{numArgs:6,allowedInArgument:!0,argTypes:["math","math","size","text","math","math"]},handler:function(e,t){var r=e.parser,n=t[4],a=t[5],c=br(t[0]),d=c.type==="atom"&&c.family==="open"?Na(c.text):null,g=br(t[1]),y=g.type==="atom"&&g.family==="close"?Na(g.text):null,T=ve(t[2],"size"),B,F=null;T.isBlank?B=!0:(F=T.value,B=F.number>0);var R="auto",O=t[3];if(O.type==="ordgroup"){if(O.body.length>0){var Y=ve(O.body[0],"textord");R=Da[Number(Y.text)]}}else O=ve(O,"textord"),R=Da[Number(O.text)];return{type:"genfrac",mode:r.mode,numer:n,denom:a,continued:!1,hasBarLine:B,barSize:F,leftDelim:d,rightDelim:y,size:R}},htmlBuilder:dn,mathmlBuilder:fn}),ee({type:"infix",names:["\\above"],props:{numArgs:1,argTypes:["size"],infix:!0},handler:function(e,t){var r=e.parser;e.funcName;var n=e.token;return{type:"infix",mode:r.mode,replaceWith:"\\\\abovefrac",size:ve(t[0],"size").value,token:n}}}),ee({type:"genfrac",names:["\\\\abovefrac"],props:{numArgs:3,argTypes:["math","size","math"]},handler:function(e,t){var r=e.parser;e.funcName;var n=t[0],a=ge(ve(t[1],"infix").size),c=t[2],d=a.number>0;return{type:"genfrac",mode:r.mode,numer:n,denom:c,continued:!1,hasBarLine:d,barSize:a,leftDelim:null,rightDelim:null,size:"auto"}},htmlBuilder:dn,mathmlBuilder:fn});var Ra=function(e,t){var 
r=t.style,n,a;e.type==="supsub"?(n=e.sup?ke(e.sup,t.havingStyle(r.sup()),t):ke(e.sub,t.havingStyle(r.sub()),t),a=ve(e.base,"horizBrace")):a=ve(e,"horizBrace");var c=ke(a.base,t.havingBaseStyle(J.DISPLAY)),d=Ht.svgSpan(a,t),g;if(a.isOver?(g=E.makeVList({positionType:"firstBaseline",children:[{type:"elem",elem:c},{type:"kern",size:.1},{type:"elem",elem:d}]},t),g.children[0].children[0].children[1].classes.push("svg-align")):(g=E.makeVList({positionType:"bottom",positionData:c.depth+.1+d.height,children:[{type:"elem",elem:d},{type:"kern",size:.1},{type:"elem",elem:c}]},t),g.children[0].children[0].children[0].classes.push("svg-align")),n){var y=E.makeSpan(["mord",a.isOver?"mover":"munder"],[g],t);a.isOver?g=E.makeVList({positionType:"firstBaseline",children:[{type:"elem",elem:y},{type:"kern",size:.2},{type:"elem",elem:n}]},t):g=E.makeVList({positionType:"bottom",positionData:y.depth+.2+n.height+n.depth,children:[{type:"elem",elem:n},{type:"kern",size:.2},{type:"elem",elem:y}]},t)}return E.makeSpan(["mord",a.isOver?"mover":"munder"],[g],t)},us=function(e,t){var r=Ht.mathMLnode(e.label);return new W.MathNode(e.isOver?"mover":"munder",[Ce(e.base,t),r])};ee({type:"horizBrace",names:["\\overbrace","\\underbrace"],props:{numArgs:1},handler:function(e,t){var r=e.parser,n=e.funcName;return{type:"horizBrace",mode:r.mode,label:n,isOver:/^\\over/.test(n),base:t[0]}},htmlBuilder:Ra,mathmlBuilder:us}),ee({type:"href",names:["\\href"],props:{numArgs:2,argTypes:["url","original"],allowedInText:!0},handler:function(e,t){var r=e.parser,n=t[1],a=ve(t[0],"url").url;return r.settings.isTrusted({command:"\\href",url:a})?{type:"href",mode:r.mode,href:a,body:Ge(n)}:r.formatUnsupportedCmd("\\href")},htmlBuilder:function(e,t){var r=je(e.body,t,!1);return E.makeAnchor(e.href,[],r,t)},mathmlBuilder:function(e,t){var r=Jt(e.body,t);return r instanceof xt||(r=new xt("mrow",[r])),r.setAttribute("href",e.href),r}}),ee({type:"href",names:["\\url"],props:{numArgs:1,argTypes:["url"],allowedInText:!0},handler:function(e,t){var r=e.parser,n=ve(t[0],"url").url;if(!r.settings.isTrusted({command:"\\url",url:n}))return r.formatUnsupportedCmd("\\url");for(var a=[],c=0;c0&&(n=Ee(e.totalheight,t)-r);var a=0;e.width.number>0&&(a=Ee(e.width,t));var c={height:X(r+n)};a>0&&(c.width=X(a)),n>0&&(c.verticalAlign=X(-n));var d=new ur(e.src,e.alt,c);return d.height=r,d.depth=n,d},mathmlBuilder:function(e,t){var r=new W.MathNode("mglyph",[]);r.setAttribute("alt",e.alt);var n=Ee(e.height,t),a=0;if(e.totalheight.number>0&&(a=Ee(e.totalheight,t)-n,r.setAttribute("valign",X(-a))),r.setAttribute("height",X(n+a)),e.width.number>0){var c=Ee(e.width,t);r.setAttribute("width",X(c))}return r.setAttribute("src",e.src),r}}),ee({type:"kern",names:["\\kern","\\mkern","\\hskip","\\mskip"],props:{numArgs:1,argTypes:["size"],primitive:!0,allowedInText:!0},handler:function(e,t){var r=e.parser,n=e.funcName,a=ve(t[0],"size");if(r.settings.strict){var c=n[1]==="m",d=a.value.unit==="mu";c?(d||r.settings.reportNonstrict("mathVsTextUnits","LaTeX's "+n+" supports only mu units, "+("not "+a.value.unit+" units")),r.mode!=="math"&&r.settings.reportNonstrict("mathVsTextUnits","LaTeX's "+n+" works only in math mode")):d&&r.settings.reportNonstrict("mathVsTextUnits","LaTeX's "+n+" doesn't support mu units")}return{type:"kern",mode:r.mode,dimension:a.value}},htmlBuilder:function(e,t){return E.makeGlue(e.dimension,t)},mathmlBuilder:function(e,t){var r=Ee(e.dimension,t);return new 
W.SpaceNode(r)}}),ee({type:"lap",names:["\\mathllap","\\mathrlap","\\mathclap"],props:{numArgs:1,allowedInText:!0},handler:function(e,t){var r=e.parser,n=e.funcName,a=t[0];return{type:"lap",mode:r.mode,alignment:n.slice(5),body:a}},htmlBuilder:function(e,t){var r;e.alignment==="clap"?(r=E.makeSpan([],[ke(e.body,t)]),r=E.makeSpan(["inner"],[r],t)):r=E.makeSpan(["inner"],[ke(e.body,t)]);var n=E.makeSpan(["fix"],[]),a=E.makeSpan([e.alignment],[r,n],t),c=E.makeSpan(["strut"]);return c.style.height=X(a.height+a.depth),a.depth&&(c.style.verticalAlign=X(-a.depth)),a.children.unshift(c),a=E.makeSpan(["thinbox"],[a],t),E.makeSpan(["mord","vbox"],[a],t)},mathmlBuilder:function(e,t){var r=new W.MathNode("mpadded",[Ce(e.body,t)]);if(e.alignment!=="rlap"){var n=e.alignment==="llap"?"-1":"-0.5";r.setAttribute("lspace",n+"width")}return r.setAttribute("width","0px"),r}}),ee({type:"styling",names:["\\(","$"],props:{numArgs:0,allowedInText:!0,allowedInMath:!1},handler:function(e,t){var r=e.funcName,n=e.parser,a=n.mode;n.switchMode("math");var c=r==="\\("?"\\)":"$",d=n.parseExpression(!1,c);return n.expect(c),n.switchMode(a),{type:"styling",mode:n.mode,style:"text",body:d}}}),ee({type:"text",names:["\\)","\\]"],props:{numArgs:0,allowedInText:!0,allowedInMath:!1},handler:function(e,t){throw new p("Mismatched "+e.funcName)}});var Fa=function(e,t){switch(t.style.size){case J.DISPLAY.size:return e.display;case J.TEXT.size:return e.text;case J.SCRIPT.size:return e.script;case J.SCRIPTSCRIPT.size:return e.scriptscript;default:return e.text}};ee({type:"mathchoice",names:["\\mathchoice"],props:{numArgs:4,primitive:!0},handler:function(e,t){var r=e.parser;return{type:"mathchoice",mode:r.mode,display:Ge(t[0]),text:Ge(t[1]),script:Ge(t[2]),scriptscript:Ge(t[3])}},htmlBuilder:function(e,t){var r=Fa(e,t),n=je(r,t,!1);return E.makeFragment(n)},mathmlBuilder:function(e,t){var r=Fa(e,t);return Jt(r,t)}});var Ia=function(e,t,r,n,a,c,d){e=E.makeSpan([],[e]);var g=r&&q.isCharacterBox(r),y,T;if(t){var B=ke(t,n.havingStyle(a.sup()),n);T={elem:B,kern:Math.max(n.fontMetrics().bigOpSpacing1,n.fontMetrics().bigOpSpacing3-B.depth)}}if(r){var F=ke(r,n.havingStyle(a.sub()),n);y={elem:F,kern:Math.max(n.fontMetrics().bigOpSpacing2,n.fontMetrics().bigOpSpacing4-F.height)}}var R;if(T&&y){var O=n.fontMetrics().bigOpSpacing5+y.elem.height+y.elem.depth+y.kern+e.depth+d;R=E.makeVList({positionType:"bottom",positionData:O,children:[{type:"kern",size:n.fontMetrics().bigOpSpacing5},{type:"elem",elem:y.elem,marginLeft:X(-c)},{type:"kern",size:y.kern},{type:"elem",elem:e},{type:"kern",size:T.kern},{type:"elem",elem:T.elem,marginLeft:X(c)},{type:"kern",size:n.fontMetrics().bigOpSpacing5}]},n)}else if(y){var Y=e.height-d;R=E.makeVList({positionType:"top",positionData:Y,children:[{type:"kern",size:n.fontMetrics().bigOpSpacing5},{type:"elem",elem:y.elem,marginLeft:X(-c)},{type:"kern",size:y.kern},{type:"elem",elem:e}]},n)}else if(T){var Q=e.depth+d;R=E.makeVList({positionType:"bottom",positionData:Q,children:[{type:"elem",elem:e},{type:"kern",size:T.kern},{type:"elem",elem:T.elem,marginLeft:X(c)},{type:"kern",size:n.fontMetrics().bigOpSpacing5}]},n)}else return e;var ae=[R];if(y&&c!==0&&!g){var ue=E.makeSpan(["mspace"],[],n);ue.style.marginRight=X(c),ae.unshift(ue)}return E.makeSpan(["mop","op-limits"],ae,n)},La=["\\smallint"],B0=function(e,t){var r,n,a=!1,c;e.type==="supsub"?(r=e.sup,n=e.sub,c=ve(e.base,"op"),a=!0):c=ve(e,"op");var d=t.style,g=!1;d.size===J.DISPLAY.size&&c.symbol&&!q.contains(La,c.name)&&(g=!0);var y;if(c.symbol){var 
T=g?"Size2-Regular":"Size1-Regular",B="";if((c.name==="\\oiint"||c.name==="\\oiiint")&&(B=c.name.slice(1),c.name=B==="oiint"?"\\iint":"\\iiint"),y=E.makeSymbol(c.name,T,"math",t,["mop","op-symbol",g?"large-op":"small-op"]),B.length>0){var F=y.italic,R=E.staticSvg(B+"Size"+(g?"2":"1"),t);y=E.makeVList({positionType:"individualShift",children:[{type:"elem",elem:y,shift:0},{type:"elem",elem:R,shift:g?.08:0}]},t),c.name="\\"+B,y.classes.unshift("mop"),y.italic=F}}else if(c.body){var O=je(c.body,t,!0);O.length===1&&O[0]instanceof tt?(y=O[0],y.classes[0]="mop"):y=E.makeSpan(["mop"],O,t)}else{for(var Y=[],Q=1;Q0){for(var g=c.body.map(function(F){var R=F.text;return typeof R=="string"?{type:"textord",mode:F.mode,text:R}:F}),y=je(g,t.withFont("mathrm"),!0),T=0;T=0?g.setAttribute("height",X(a)):(g.setAttribute("height",X(a)),g.setAttribute("depth",X(-a))),g.setAttribute("voffset",X(a)),g}});function qa(u,e,t){for(var r=je(u,e,!1),n=e.sizeMultiplier/t.sizeMultiplier,a=0;ar.height+r.depth+d&&(d=(d+R-r.height-r.depth)/2);var O=T.height-r.height-d-B;r.style.paddingLeft=X(F);var Y=E.makeVList({positionType:"firstBaseline",children:[{type:"elem",elem:r,wrapperClasses:["svg-align"]},{type:"kern",size:-(r.height+O)},{type:"elem",elem:T},{type:"kern",size:B}]},t);if(e.index){var Q=t.havingStyle(J.SCRIPTSCRIPT),ae=ke(e.index,Q,t),ue=.6*(Y.height-Y.depth),ce=E.makeVList({positionType:"shift",positionData:-ue,children:[{type:"elem",elem:ae}]},t),Ae=E.makeSpan(["root"],[ce]);return E.makeSpan(["mord","sqrt"],[Ae,Y],t)}else return E.makeSpan(["mord","sqrt"],[Y],t)},mathmlBuilder:function(e,t){var r=e.body,n=e.index;return n?new W.MathNode("mroot",[Ce(r,t),Ce(n,t)]):new W.MathNode("msqrt",[Ce(r,t)])}});var Ha={display:J.DISPLAY,text:J.TEXT,script:J.SCRIPT,scriptscript:J.SCRIPTSCRIPT};ee({type:"styling",names:["\\displaystyle","\\textstyle","\\scriptstyle","\\scriptscriptstyle"],props:{numArgs:0,allowedInText:!0,primitive:!0},handler:function(e,t){var r=e.breakOnTokenText,n=e.funcName,a=e.parser,c=a.parseExpression(!0,r),d=n.slice(1,n.length-5);return{type:"styling",mode:a.mode,style:d,body:c}},htmlBuilder:function(e,t){var r=Ha[e.style],n=t.havingStyle(r).withFont("");return qa(e.body,n,t)},mathmlBuilder:function(e,t){var r=Ha[e.style],n=t.havingStyle(r),a=ot(e.body,n),c=new W.MathNode("mstyle",a),d={display:["0","true"],text:["0","false"],script:["1","false"],scriptscript:["2","false"]},g=d[e.style];return c.setAttribute("scriptlevel",g[0]),c.setAttribute("displaystyle",g[1]),c}});var fs=function(e,t){var r=e.base;if(r)if(r.type==="op"){var n=r.limits&&(t.style.size===J.DISPLAY.size||r.alwaysHandleSupSub);return n?B0:null}else if(r.type==="operatorname"){var a=r.alwaysHandleSupSub&&(t.style.size===J.DISPLAY.size||r.limits);return a?Oa:null}else{if(r.type==="accent")return q.isCharacterBox(r.base)?Jr:null;if(r.type==="horizBrace"){var c=!e.sub;return c===r.isOver?Ra:null}else return null}else return null};g0({type:"supsub",htmlBuilder:function(e,t){var r=fs(e,t);if(r)return r(e,t);var n=e.base,a=e.sup,c=e.sub,d=ke(n,t),g,y,T=t.fontMetrics(),B=0,F=0,R=n&&q.isCharacterBox(n);if(a){var O=t.havingStyle(t.style.sup());g=ke(a,O,t),R||(B=d.height-O.fontMetrics().supDrop*O.sizeMultiplier/t.sizeMultiplier)}if(c){var Y=t.havingStyle(t.style.sub());y=ke(c,Y,t),R||(F=d.depth+Y.fontMetrics().subDrop*Y.sizeMultiplier/t.sizeMultiplier)}var Q;t.style===J.DISPLAY?Q=T.sup1:t.style.cramped?Q=T.sup3:Q=T.sup2;var ae=t.sizeMultiplier,ue=X(.5/T.ptPerEm/ae),ce=null;if(y){var 
Ae=e.base&&e.base.type==="op"&&e.base.name&&(e.base.name==="\\oiint"||e.base.name==="\\oiiint");(d instanceof tt||Ae)&&(ce=X(-d.italic))}var be;if(g&&y){B=Math.max(B,Q,g.depth+.25*T.xHeight),F=Math.max(F,T.sub2);var Me=T.defaultRuleThickness,Se=4*Me;if(B-g.depth-(y.height-F)0&&(B+=_e,F-=_e)}var Ie=[{type:"elem",elem:y,shift:F,marginRight:ue,marginLeft:ce},{type:"elem",elem:g,shift:-B,marginRight:ue}];be=E.makeVList({positionType:"individualShift",children:Ie},t)}else if(y){F=Math.max(F,T.sub1,y.height-.8*T.xHeight);var Qe=[{type:"elem",elem:y,marginLeft:ce,marginRight:ue}];be=E.makeVList({positionType:"shift",positionData:F,children:Qe},t)}else if(g)B=Math.max(B,Q,g.depth+.25*T.xHeight),be=E.makeVList({positionType:"shift",positionData:-B,children:[{type:"elem",elem:g,marginRight:ue}]},t);else throw new Error("supsub must have either sup or sub.");var dt=Xr(d,"right")||"mord";return E.makeSpan([dt],[d,E.makeSpan(["msupsub"],[be])],t)},mathmlBuilder:function(e,t){var r=!1,n,a;e.base&&e.base.type==="horizBrace"&&(a=!!e.sup,a===e.base.isOver&&(r=!0,n=e.base.isOver)),e.base&&(e.base.type==="op"||e.base.type==="operatorname")&&(e.base.parentIsSupSub=!0);var c=[Ce(e.base,t)];e.sub&&c.push(Ce(e.sub,t)),e.sup&&c.push(Ce(e.sup,t));var d;if(r)d=n?"mover":"munder";else if(e.sub)if(e.sup){var T=e.base;T&&T.type==="op"&&T.limits&&t.style===J.DISPLAY||T&&T.type==="operatorname"&&T.alwaysHandleSupSub&&(t.style===J.DISPLAY||T.limits)?d="munderover":d="msubsup"}else{var y=e.base;y&&y.type==="op"&&y.limits&&(t.style===J.DISPLAY||y.alwaysHandleSupSub)||y&&y.type==="operatorname"&&y.alwaysHandleSupSub&&(y.limits||t.style===J.DISPLAY)?d="munder":d="msub"}else{var g=e.base;g&&g.type==="op"&&g.limits&&(t.style===J.DISPLAY||g.alwaysHandleSupSub)||g&&g.type==="operatorname"&&g.alwaysHandleSupSub&&(g.limits||t.style===J.DISPLAY)?d="mover":d="msup"}return new W.MathNode(d,c)}}),g0({type:"atom",htmlBuilder:function(e,t){return E.mathsym(e.text,e.mode,t,["m"+e.family])},mathmlBuilder:function(e,t){var r=new W.MathNode("mo",[wt(e.text,e.mode)]);if(e.family==="bin"){var n=Kr(e,t);n==="bold-italic"&&r.setAttribute("mathvariant",n)}else e.family==="punct"?r.setAttribute("separator","true"):(e.family==="open"||e.family==="close")&&r.setAttribute("stretchy","false");return r}});var Ua={mi:"italic",mn:"normal",mtext:"normal"};g0({type:"mathord",htmlBuilder:function(e,t){return E.makeOrd(e,t,"mathord")},mathmlBuilder:function(e,t){var r=new W.MathNode("mi",[wt(e.text,e.mode,t)]),n=Kr(e,t)||"italic";return n!==Ua[r.type]&&r.setAttribute("mathvariant",n),r}}),g0({type:"textord",htmlBuilder:function(e,t){return E.makeOrd(e,t,"textord")},mathmlBuilder:function(e,t){var r=wt(e.text,e.mode,t),n=Kr(e,t)||"normal",a;return e.mode==="text"?a=new W.MathNode("mtext",[r]):/[0-9]/.test(e.text)?a=new W.MathNode("mn",[r]):e.text==="\\prime"?a=new W.MathNode("mo",[r]):a=new W.MathNode("mi",[r]),n!==Ua[a.type]&&a.setAttribute("mathvariant",n),a}});var gn={"\\nobreak":"nobreak","\\allowbreak":"allowbreak"},vn={" ":{},"\\ ":{},"~":{className:"nobreak"},"\\space":{},"\\nobreakspace":{className:"nobreak"}};g0({type:"spacing",htmlBuilder:function(e,t){if(vn.hasOwnProperty(e.text)){var r=vn[e.text].className||"";if(e.mode==="text"){var n=E.makeOrd(e,t,"textord");return n.classes.push(r),n}else return E.makeSpan(["mspace",r],[E.mathsym(e.text,e.mode,t)],t)}else{if(gn.hasOwnProperty(e.text))return E.makeSpan(["mspace",gn[e.text]],[],t);throw new p('Unknown type of space "'+e.text+'"')}},mathmlBuilder:function(e,t){var 
r;if(vn.hasOwnProperty(e.text))r=new W.MathNode("mtext",[new W.TextNode(" ")]);else{if(gn.hasOwnProperty(e.text))return new W.MathNode("mspace");throw new p('Unknown type of space "'+e.text+'"')}return r}});var Ga=function(){var e=new W.MathNode("mtd",[]);return e.setAttribute("width","50%"),e};g0({type:"tag",mathmlBuilder:function(e,t){var r=new W.MathNode("mtable",[new W.MathNode("mtr",[Ga(),new W.MathNode("mtd",[Jt(e.body,t)]),Ga(),new W.MathNode("mtd",[Jt(e.tag,t)])])]);return r.setAttribute("width","100%"),r}});var Va={"\\text":void 0,"\\textrm":"textrm","\\textsf":"textsf","\\texttt":"texttt","\\textnormal":"textrm"},Wa={"\\textbf":"textbf","\\textmd":"textmd"},ps={"\\textit":"textit","\\textup":"textup"},Ya=function(e,t){var r=e.font;return r?Va[r]?t.withTextFontFamily(Va[r]):Wa[r]?t.withTextFontWeight(Wa[r]):t.withTextFontShape(ps[r]):t};ee({type:"text",names:["\\text","\\textrm","\\textsf","\\texttt","\\textnormal","\\textbf","\\textmd","\\textit","\\textup"],props:{numArgs:1,argTypes:["text"],allowedInArgument:!0,allowedInText:!0},handler:function(e,t){var r=e.parser,n=e.funcName,a=t[0];return{type:"text",mode:r.mode,body:Ge(a),font:n}},htmlBuilder:function(e,t){var r=Ya(e,t),n=je(e.body,r,!0);return E.makeSpan(["mord","text"],n,r)},mathmlBuilder:function(e,t){var r=Ya(e,t);return Jt(e.body,r)}}),ee({type:"underline",names:["\\underline"],props:{numArgs:1,allowedInText:!0},handler:function(e,t){var r=e.parser;return{type:"underline",mode:r.mode,body:t[0]}},htmlBuilder:function(e,t){var r=ke(e.body,t),n=E.makeLineSpan("underline-line",t),a=t.fontMetrics().defaultRuleThickness,c=E.makeVList({positionType:"top",positionData:r.height,children:[{type:"kern",size:a},{type:"elem",elem:n},{type:"kern",size:3*a},{type:"elem",elem:r}]},t);return E.makeSpan(["mord","underline"],[c],t)},mathmlBuilder:function(e,t){var r=new W.MathNode("mo",[new W.TextNode("‾")]);r.setAttribute("stretchy","true");var n=new W.MathNode("munder",[Ce(e.body,t),r]);return n.setAttribute("accentunder","true"),n}}),ee({type:"vcenter",names:["\\vcenter"],props:{numArgs:1,argTypes:["original"],allowedInText:!1},handler:function(e,t){var r=e.parser;return{type:"vcenter",mode:r.mode,body:t[0]}},htmlBuilder:function(e,t){var r=ke(e.body,t),n=t.fontMetrics().axisHeight,a=.5*(r.height-n-(r.depth+n));return E.makeVList({positionType:"shift",positionData:a,children:[{type:"elem",elem:r}]},t)},mathmlBuilder:function(e,t){return new W.MathNode("mpadded",[Ce(e.body,t)],["vcenter"])}}),ee({type:"verb",names:["\\verb"],props:{numArgs:0,allowedInText:!0},handler:function(e,t,r){throw new p("\\verb ended by end of line instead of matching delimiter")},htmlBuilder:function(e,t){for(var r=ja(e),n=[],a=t.havingStyle(t.style.text()),c=0;c0;)this.endGroup()},e.has=function(r){return this.current.hasOwnProperty(r)||this.builtins.hasOwnProperty(r)},e.get=function(r){return this.current.hasOwnProperty(r)?this.current[r]:this.builtins[r]},e.set=function(r,n,a){if(a===void 0&&(a=!1),a){for(var c=0;c0&&(this.undefStack[this.undefStack.length-1][r]=n)}else{var d=this.undefStack[this.undefStack.length-1];d&&!d.hasOwnProperty(r)&&(d[r]=this.current[r])}n==null?delete this.current[r]:this.current[r]=n},u}(),As=Aa,Ts=As;b("\\noexpand",function(u){var e=u.popToken();return u.isExpandable(e.text)&&(e.noexpand=!0,e.treatAsRelax=!0),{tokens:[e],numArgs:0}}),b("\\expandafter",function(u){var e=u.popToken();return u.expandOnce(!0),{tokens:[e],numArgs:0}}),b("\\@firstoftwo",function(u){var 
e=u.consumeArgs(2);return{tokens:e[0],numArgs:0}}),b("\\@secondoftwo",function(u){var e=u.consumeArgs(2);return{tokens:e[1],numArgs:0}}),b("\\@ifnextchar",function(u){var e=u.consumeArgs(3);u.consumeSpaces();var t=u.future();return e[0].length===1&&e[0][0].text===t.text?{tokens:e[1],numArgs:0}:{tokens:e[2],numArgs:0}}),b("\\@ifstar","\\@ifnextchar *{\\@firstoftwo{#1}}"),b("\\TextOrMath",function(u){var e=u.consumeArgs(2);return u.mode==="text"?{tokens:e[0],numArgs:0}:{tokens:e[1],numArgs:0}});var Za={0:0,1:1,2:2,3:3,4:4,5:5,6:6,7:7,8:8,9:9,a:10,A:10,b:11,B:11,c:12,C:12,d:13,D:13,e:14,E:14,f:15,F:15};b("\\char",function(u){var e=u.popToken(),t,r="";if(e.text==="'")t=8,e=u.popToken();else if(e.text==='"')t=16,e=u.popToken();else if(e.text==="`")if(e=u.popToken(),e.text[0]==="\\")r=e.text.charCodeAt(1);else{if(e.text==="EOF")throw new p("\\char` missing argument");r=e.text.charCodeAt(0)}else t=10;if(t){if(r=Za[e.text],r==null||r>=t)throw new p("Invalid base-"+t+" digit "+e.text);for(var n;(n=Za[u.future().text])!=null&&n":"\\dotsb","-":"\\dotsb","*":"\\dotsb",":":"\\dotsb","\\DOTSB":"\\dotsb","\\coprod":"\\dotsb","\\bigvee":"\\dotsb","\\bigwedge":"\\dotsb","\\biguplus":"\\dotsb","\\bigcap":"\\dotsb","\\bigcup":"\\dotsb","\\prod":"\\dotsb","\\sum":"\\dotsb","\\bigotimes":"\\dotsb","\\bigoplus":"\\dotsb","\\bigodot":"\\dotsb","\\bigsqcup":"\\dotsb","\\And":"\\dotsb","\\longrightarrow":"\\dotsb","\\Longrightarrow":"\\dotsb","\\longleftarrow":"\\dotsb","\\Longleftarrow":"\\dotsb","\\longleftrightarrow":"\\dotsb","\\Longleftrightarrow":"\\dotsb","\\mapsto":"\\dotsb","\\longmapsto":"\\dotsb","\\hookrightarrow":"\\dotsb","\\doteq":"\\dotsb","\\mathbin":"\\dotsb","\\mathrel":"\\dotsb","\\relbar":"\\dotsb","\\Relbar":"\\dotsb","\\xrightarrow":"\\dotsb","\\xleftarrow":"\\dotsb","\\DOTSI":"\\dotsi","\\int":"\\dotsi","\\oint":"\\dotsi","\\iint":"\\dotsi","\\iiint":"\\dotsi","\\iiiint":"\\dotsi","\\idotsint":"\\dotsi","\\DOTSX":"\\dotsx"};b("\\dots",function(u){var e="\\dotso",t=u.expandAfterFuture().text;return t in Ka?e=Ka[t]:(t.slice(0,4)==="\\not"||t in Ne.math&&q.contains(["bin","rel"],Ne.math[t].group))&&(e="\\dotsb"),e});var xn={")":!0,"]":!0,"\\rbrack":!0,"\\}":!0,"\\rbrace":!0,"\\rangle":!0,"\\rceil":!0,"\\rfloor":!0,"\\rgroup":!0,"\\rmoustache":!0,"\\right":!0,"\\bigr":!0,"\\biggr":!0,"\\Bigr":!0,"\\Biggr":!0,$:!0,";":!0,".":!0,",":!0};b("\\dotso",function(u){var e=u.future().text;return e in xn?"\\ldots\\,":"\\ldots"}),b("\\dotsc",function(u){var e=u.future().text;return e in xn&&e!==","?"\\ldots\\,":"\\ldots"}),b("\\cdots",function(u){var e=u.future().text;return e in xn?"\\@cdots\\,":"\\@cdots"}),b("\\dotsb","\\cdots"),b("\\dotsm","\\cdots"),b("\\dotsi","\\!\\cdots"),b("\\dotsx","\\ldots\\,"),b("\\DOTSI","\\relax"),b("\\DOTSB","\\relax"),b("\\DOTSX","\\relax"),b("\\tmspace","\\TextOrMath{\\kern#1#3}{\\mskip#1#2}\\relax"),b("\\,","\\tmspace+{3mu}{.1667em}"),b("\\thinspace","\\,"),b("\\>","\\mskip{4mu}"),b("\\:","\\tmspace+{4mu}{.2222em}"),b("\\medspace","\\:"),b("\\;","\\tmspace+{5mu}{.2777em}"),b("\\thickspace","\\;"),b("\\!","\\tmspace-{3mu}{.1667em}"),b("\\negthinspace","\\!"),b("\\negmedspace","\\tmspace-{4mu}{.2222em}"),b("\\negthickspace","\\tmspace-{5mu}{.277em}"),b("\\enspace","\\kern.5em "),b("\\enskip","\\hskip.5em\\relax"),b("\\quad","\\hskip1em\\relax"),b("\\qquad","\\hskip2em\\relax"),b("\\tag","\\@ifstar\\tag@literal\\tag@paren"),b("\\tag@paren","\\tag@literal{({#1})}"),b("\\tag@literal",function(u){if(u.macros.get("\\df@tag"))throw new p("Multiple 
\\tag");return"\\gdef\\df@tag{\\text{#1}}"}),b("\\bmod","\\mathchoice{\\mskip1mu}{\\mskip1mu}{\\mskip5mu}{\\mskip5mu}\\mathbin{\\rm mod}\\mathchoice{\\mskip1mu}{\\mskip1mu}{\\mskip5mu}{\\mskip5mu}"),b("\\pod","\\allowbreak\\mathchoice{\\mkern18mu}{\\mkern8mu}{\\mkern8mu}{\\mkern8mu}(#1)"),b("\\pmod","\\pod{{\\rm mod}\\mkern6mu#1}"),b("\\mod","\\allowbreak\\mathchoice{\\mkern18mu}{\\mkern12mu}{\\mkern12mu}{\\mkern12mu}{\\rm mod}\\,\\,#1"),b("\\newline","\\\\\\relax"),b("\\TeX","\\textrm{\\html@mathml{T\\kern-.1667em\\raisebox{-.5ex}{E}\\kern-.125emX}{TeX}}");var Qa=X(vt["Main-Regular"]["T".charCodeAt(0)][1]-.7*vt["Main-Regular"]["A".charCodeAt(0)][1]);b("\\LaTeX","\\textrm{\\html@mathml{"+("L\\kern-.36em\\raisebox{"+Qa+"}{\\scriptstyle A}")+"\\kern-.15em\\TeX}{LaTeX}}"),b("\\KaTeX","\\textrm{\\html@mathml{"+("K\\kern-.17em\\raisebox{"+Qa+"}{\\scriptstyle A}")+"\\kern-.15em\\TeX}{KaTeX}}"),b("\\hspace","\\@ifstar\\@hspacer\\@hspace"),b("\\@hspace","\\hskip #1\\relax"),b("\\@hspacer","\\rule{0pt}{0pt}\\hskip #1\\relax"),b("\\ordinarycolon",":"),b("\\vcentcolon","\\mathrel{\\mathop\\ordinarycolon}"),b("\\dblcolon",'\\html@mathml{\\mathrel{\\vcentcolon\\mathrel{\\mkern-.9mu}\\vcentcolon}}{\\mathop{\\char"2237}}'),b("\\coloneqq",'\\html@mathml{\\mathrel{\\vcentcolon\\mathrel{\\mkern-1.2mu}=}}{\\mathop{\\char"2254}}'),b("\\Coloneqq",'\\html@mathml{\\mathrel{\\dblcolon\\mathrel{\\mkern-1.2mu}=}}{\\mathop{\\char"2237\\char"3d}}'),b("\\coloneq",'\\html@mathml{\\mathrel{\\vcentcolon\\mathrel{\\mkern-1.2mu}\\mathrel{-}}}{\\mathop{\\char"3a\\char"2212}}'),b("\\Coloneq",'\\html@mathml{\\mathrel{\\dblcolon\\mathrel{\\mkern-1.2mu}\\mathrel{-}}}{\\mathop{\\char"2237\\char"2212}}'),b("\\eqqcolon",'\\html@mathml{\\mathrel{=\\mathrel{\\mkern-1.2mu}\\vcentcolon}}{\\mathop{\\char"2255}}'),b("\\Eqqcolon",'\\html@mathml{\\mathrel{=\\mathrel{\\mkern-1.2mu}\\dblcolon}}{\\mathop{\\char"3d\\char"2237}}'),b("\\eqcolon",'\\html@mathml{\\mathrel{\\mathrel{-}\\mathrel{\\mkern-1.2mu}\\vcentcolon}}{\\mathop{\\char"2239}}'),b("\\Eqcolon",'\\html@mathml{\\mathrel{\\mathrel{-}\\mathrel{\\mkern-1.2mu}\\dblcolon}}{\\mathop{\\char"2212\\char"2237}}'),b("\\colonapprox",'\\html@mathml{\\mathrel{\\vcentcolon\\mathrel{\\mkern-1.2mu}\\approx}}{\\mathop{\\char"3a\\char"2248}}'),b("\\Colonapprox",'\\html@mathml{\\mathrel{\\dblcolon\\mathrel{\\mkern-1.2mu}\\approx}}{\\mathop{\\char"2237\\char"2248}}'),b("\\colonsim",'\\html@mathml{\\mathrel{\\vcentcolon\\mathrel{\\mkern-1.2mu}\\sim}}{\\mathop{\\char"3a\\char"223c}}'),b("\\Colonsim",'\\html@mathml{\\mathrel{\\dblcolon\\mathrel{\\mkern-1.2mu}\\sim}}{\\mathop{\\char"2237\\char"223c}}'),b("∷","\\dblcolon"),b("∹","\\eqcolon"),b("≔","\\coloneqq"),b("≕","\\eqqcolon"),b("⩴","\\Coloneqq"),b("\\ratio","\\vcentcolon"),b("\\coloncolon","\\dblcolon"),b("\\colonequals","\\coloneqq"),b("\\coloncolonequals","\\Coloneqq"),b("\\equalscolon","\\eqqcolon"),b("\\equalscoloncolon","\\Eqqcolon"),b("\\colonminus","\\coloneq"),b("\\coloncolonminus","\\Coloneq"),b("\\minuscolon","\\eqcolon"),b("\\minuscoloncolon","\\Eqcolon"),b("\\coloncolonapprox","\\Colonapprox"),b("\\coloncolonsim","\\Colonsim"),b("\\simcolon","\\mathrel{\\sim\\mathrel{\\mkern-1.2mu}\\vcentcolon}"),b("\\simcoloncolon","\\mathrel{\\sim\\mathrel{\\mkern-1.2mu}\\dblcolon}"),b("\\approxcolon","\\mathrel{\\approx\\mathrel{\\mkern-1.2mu}\\vcentcolon}"),b("\\approxcoloncolon","\\mathrel{\\approx\\mathrel{\\mkern-1.2mu}\\dblcolon}"),b("\\notni","\\html@mathml{\\not\\ni}{\\mathrel{\\char`∌}}"),b("\\limsup","\\DOTSB\\operatorname*{lim\\,sup}"),b("\\limi
nf","\\DOTSB\\operatorname*{lim\\,inf}"),b("\\injlim","\\DOTSB\\operatorname*{inj\\,lim}"),b("\\projlim","\\DOTSB\\operatorname*{proj\\,lim}"),b("\\varlimsup","\\DOTSB\\operatorname*{\\overline{lim}}"),b("\\varliminf","\\DOTSB\\operatorname*{\\underline{lim}}"),b("\\varinjlim","\\DOTSB\\operatorname*{\\underrightarrow{lim}}"),b("\\varprojlim","\\DOTSB\\operatorname*{\\underleftarrow{lim}}"),b("\\gvertneqq","\\html@mathml{\\@gvertneqq}{≩}"),b("\\lvertneqq","\\html@mathml{\\@lvertneqq}{≨}"),b("\\ngeqq","\\html@mathml{\\@ngeqq}{≱}"),b("\\ngeqslant","\\html@mathml{\\@ngeqslant}{≱}"),b("\\nleqq","\\html@mathml{\\@nleqq}{≰}"),b("\\nleqslant","\\html@mathml{\\@nleqslant}{≰}"),b("\\nshortmid","\\html@mathml{\\@nshortmid}{∤}"),b("\\nshortparallel","\\html@mathml{\\@nshortparallel}{∦}"),b("\\nsubseteqq","\\html@mathml{\\@nsubseteqq}{⊈}"),b("\\nsupseteqq","\\html@mathml{\\@nsupseteqq}{⊉}"),b("\\varsubsetneq","\\html@mathml{\\@varsubsetneq}{⊊}"),b("\\varsubsetneqq","\\html@mathml{\\@varsubsetneqq}{⫋}"),b("\\varsupsetneq","\\html@mathml{\\@varsupsetneq}{⊋}"),b("\\varsupsetneqq","\\html@mathml{\\@varsupsetneqq}{⫌}"),b("\\imath","\\html@mathml{\\@imath}{ı}"),b("\\jmath","\\html@mathml{\\@jmath}{ȷ}"),b("\\llbracket","\\html@mathml{\\mathopen{[\\mkern-3.2mu[}}{\\mathopen{\\char`⟦}}"),b("\\rrbracket","\\html@mathml{\\mathclose{]\\mkern-3.2mu]}}{\\mathclose{\\char`⟧}}"),b("⟦","\\llbracket"),b("⟧","\\rrbracket"),b("\\lBrace","\\html@mathml{\\mathopen{\\{\\mkern-3.2mu[}}{\\mathopen{\\char`⦃}}"),b("\\rBrace","\\html@mathml{\\mathclose{]\\mkern-3.2mu\\}}}{\\mathclose{\\char`⦄}}"),b("⦃","\\lBrace"),b("⦄","\\rBrace"),b("\\minuso","\\mathbin{\\html@mathml{{\\mathrlap{\\mathchoice{\\kern{0.145em}}{\\kern{0.145em}}{\\kern{0.1015em}}{\\kern{0.0725em}}\\circ}{-}}}{\\char`⦵}}"),b("⦵","\\minuso"),b("\\darr","\\downarrow"),b("\\dArr","\\Downarrow"),b("\\Darr","\\Downarrow"),b("\\lang","\\langle"),b("\\rang","\\rangle"),b("\\uarr","\\uparrow"),b("\\uArr","\\Uparrow"),b("\\Uarr","\\Uparrow"),b("\\N","\\mathbb{N}"),b("\\R","\\mathbb{R}"),b("\\Z","\\mathbb{Z}"),b("\\alef","\\aleph"),b("\\alefsym","\\aleph"),b("\\Alpha","\\mathrm{A}"),b("\\Beta","\\mathrm{B}"),b("\\bull","\\bullet"),b("\\Chi","\\mathrm{X}"),b("\\clubs","\\clubsuit"),b("\\cnums","\\mathbb{C}"),b("\\Complex","\\mathbb{C}"),b("\\Dagger","\\ddagger"),b("\\diamonds","\\diamondsuit"),b("\\empty","\\emptyset"),b("\\Epsilon","\\mathrm{E}"),b("\\Eta","\\mathrm{H}"),b("\\exist","\\exists"),b("\\harr","\\leftrightarrow"),b("\\hArr","\\Leftrightarrow"),b("\\Harr","\\Leftrightarrow"),b("\\hearts","\\heartsuit"),b("\\image","\\Im"),b("\\infin","\\infty"),b("\\Iota","\\mathrm{I}"),b("\\isin","\\in"),b("\\Kappa","\\mathrm{K}"),b("\\larr","\\leftarrow"),b("\\lArr","\\Leftarrow"),b("\\Larr","\\Leftarrow"),b("\\lrarr","\\leftrightarrow"),b("\\lrArr","\\Leftrightarrow"),b("\\Lrarr","\\Leftrightarrow"),b("\\Mu","\\mathrm{M}"),b("\\natnums","\\mathbb{N}"),b("\\Nu","\\mathrm{N}"),b("\\Omicron","\\mathrm{O}"),b("\\plusmn","\\pm"),b("\\rarr","\\rightarrow"),b("\\rArr","\\Rightarrow"),b("\\Rarr","\\Rightarrow"),b("\\real","\\Re"),b("\\reals","\\mathbb{R}"),b("\\Reals","\\mathbb{R}"),b("\\Rho","\\mathrm{P}"),b("\\sdot","\\cdot"),b("\\sect","\\S"),b("\\spades","\\spadesuit"),b("\\sub","\\subset"),b("\\sube","\\subseteq"),b("\\supe","\\supseteq"),b("\\Tau","\\mathrm{T}"),b("\\thetasym","\\vartheta"),b("\\weierp","\\wp"),b("\\Zeta","\\mathrm{Z}"),b("\\argmin","\\DOTSB\\operatorname*{arg\\,min}"),b("\\argmax","\\DOTSB\\operatorname*{arg\\,max}"),b("\\plim","\\DOTSB\\mathop{\\operatorname{pli
m}}\\limits"),b("\\bra","\\mathinner{\\langle{#1}|}"),b("\\ket","\\mathinner{|{#1}\\rangle}"),b("\\braket","\\mathinner{\\langle{#1}\\rangle}"),b("\\Bra","\\left\\langle#1\\right|"),b("\\Ket","\\left|#1\\right\\rangle");var Ja=function(e){return function(t){var r=t.consumeArg().tokens,n=t.consumeArg().tokens,a=t.consumeArg().tokens,c=t.consumeArg().tokens,d=t.macros.get("|"),g=t.macros.get("\\|");t.macros.beginGroup();var y=function(R){return function(O){e&&(O.macros.set("|",d),a.length&&O.macros.set("\\|",g));var Y=R;if(!R&&a.length){var Q=O.future();Q.text==="|"&&(O.popToken(),Y=!0)}return{tokens:Y?a:n,numArgs:0}}};t.macros.set("|",y(!1)),a.length&&t.macros.set("\\|",y(!0));var T=t.consumeArg().tokens,B=t.expandTokens([].concat(c,T,r));return t.macros.endGroup(),{tokens:B.reverse(),numArgs:0}}};b("\\bra@ket",Ja(!1)),b("\\bra@set",Ja(!0)),b("\\Braket","\\bra@ket{\\left\\langle}{\\,\\middle\\vert\\,}{\\,\\middle\\vert\\,}{\\right\\rangle}"),b("\\Set","\\bra@set{\\left\\{\\:}{\\;\\middle\\vert\\;}{\\;\\middle\\Vert\\;}{\\:\\right\\}}"),b("\\set","\\bra@set{\\{\\,}{\\mid}{}{\\,\\}}"),b("\\angln","{\\angl n}"),b("\\blue","\\textcolor{##6495ed}{#1}"),b("\\orange","\\textcolor{##ffa500}{#1}"),b("\\pink","\\textcolor{##ff00af}{#1}"),b("\\red","\\textcolor{##df0030}{#1}"),b("\\green","\\textcolor{##28ae7b}{#1}"),b("\\gray","\\textcolor{gray}{#1}"),b("\\purple","\\textcolor{##9d38bd}{#1}"),b("\\blueA","\\textcolor{##ccfaff}{#1}"),b("\\blueB","\\textcolor{##80f6ff}{#1}"),b("\\blueC","\\textcolor{##63d9ea}{#1}"),b("\\blueD","\\textcolor{##11accd}{#1}"),b("\\blueE","\\textcolor{##0c7f99}{#1}"),b("\\tealA","\\textcolor{##94fff5}{#1}"),b("\\tealB","\\textcolor{##26edd5}{#1}"),b("\\tealC","\\textcolor{##01d1c1}{#1}"),b("\\tealD","\\textcolor{##01a995}{#1}"),b("\\tealE","\\textcolor{##208170}{#1}"),b("\\greenA","\\textcolor{##b6ffb0}{#1}"),b("\\greenB","\\textcolor{##8af281}{#1}"),b("\\greenC","\\textcolor{##74cf70}{#1}"),b("\\greenD","\\textcolor{##1fab54}{#1}"),b("\\greenE","\\textcolor{##0d923f}{#1}"),b("\\goldA","\\textcolor{##ffd0a9}{#1}"),b("\\goldB","\\textcolor{##ffbb71}{#1}"),b("\\goldC","\\textcolor{##ff9c39}{#1}"),b("\\goldD","\\textcolor{##e07d10}{#1}"),b("\\goldE","\\textcolor{##a75a05}{#1}"),b("\\redA","\\textcolor{##fca9a9}{#1}"),b("\\redB","\\textcolor{##ff8482}{#1}"),b("\\redC","\\textcolor{##f9685d}{#1}"),b("\\redD","\\textcolor{##e84d39}{#1}"),b("\\redE","\\textcolor{##bc2612}{#1}"),b("\\maroonA","\\textcolor{##ffbde0}{#1}"),b("\\maroonB","\\textcolor{##ff92c6}{#1}"),b("\\maroonC","\\textcolor{##ed5fa6}{#1}"),b("\\maroonD","\\textcolor{##ca337c}{#1}"),b("\\maroonE","\\textcolor{##9e034e}{#1}"),b("\\purpleA","\\textcolor{##ddd7ff}{#1}"),b("\\purpleB","\\textcolor{##c6b9fc}{#1}"),b("\\purpleC","\\textcolor{##aa87ff}{#1}"),b("\\purpleD","\\textcolor{##7854ab}{#1}"),b("\\purpleE","\\textcolor{##543b78}{#1}"),b("\\mintA","\\textcolor{##f5f9e8}{#1}"),b("\\mintB","\\textcolor{##edf2df}{#1}"),b("\\mintC","\\textcolor{##e0e5cc}{#1}"),b("\\grayA","\\textcolor{##f6f7f7}{#1}"),b("\\grayB","\\textcolor{##f0f1f2}{#1}"),b("\\grayC","\\textcolor{##e3e5e6}{#1}"),b("\\grayD","\\textcolor{##d6d8da}{#1}"),b("\\grayE","\\textcolor{##babec2}{#1}"),b("\\grayF","\\textcolor{##888d93}{#1}"),b("\\grayG","\\textcolor{##626569}{#1}"),b("\\grayH","\\textcolor{##3b3e40}{#1}"),b("\\grayI","\\textcolor{##21242c}{#1}"),b("\\kaBlue","\\textcolor{##314453}{#1}"),b("\\kaGreen","\\textcolor{##71B307}{#1}");var ei={"^":!0,_:!0,"\\limits":!0,"\\nolimits":!0},Ms=function(){function u(t,r,n){this.settings=void 
0,this.expansionCount=void 0,this.lexer=void 0,this.macros=void 0,this.stack=void 0,this.mode=void 0,this.settings=r,this.expansionCount=0,this.feed(t),this.macros=new Ss(Ts,r.macros),this.mode=n,this.stack=[]}var e=u.prototype;return e.feed=function(r){this.lexer=new $a(r,this.settings)},e.switchMode=function(r){this.mode=r},e.beginGroup=function(){this.macros.beginGroup()},e.endGroup=function(){this.macros.endGroup()},e.endGroups=function(){this.macros.endGroups()},e.future=function(){return this.stack.length===0&&this.pushToken(this.lexer.lex()),this.stack[this.stack.length-1]},e.popToken=function(){return this.future(),this.stack.pop()},e.pushToken=function(r){this.stack.push(r)},e.pushTokens=function(r){var n;(n=this.stack).push.apply(n,r)},e.scanArgument=function(r){var n,a,c;if(r){if(this.consumeSpaces(),this.future().text!=="[")return null;n=this.popToken();var d=this.consumeArg(["]"]);c=d.tokens,a=d.end}else{var g=this.consumeArg();c=g.tokens,n=g.start,a=g.end}return this.pushToken(new e0("EOF",a.loc)),this.pushTokens(c),n.range(a,"")},e.consumeSpaces=function(){for(;;){var r=this.future();if(r.text===" ")this.stack.pop();else break}},e.consumeArg=function(r){var n=[],a=r&&r.length>0;a||this.consumeSpaces();var c=this.future(),d,g=0,y=0;do{if(d=this.popToken(),n.push(d),d.text==="{")++g;else if(d.text==="}"){if(--g,g===-1)throw new p("Extra }",d)}else if(d.text==="EOF")throw new p("Unexpected end of input in a macro argument, expected '"+(r&&a?r[y]:"}")+"'",d);if(r&&a)if((g===0||g===1&&r[y]==="{")&&d.text===r[y]){if(++y,y===r.length){n.splice(-y,y);break}}else y=0}while(g!==0||a);return c.text==="{"&&n[n.length-1].text==="}"&&(n.pop(),n.shift()),n.reverse(),{tokens:n,start:c,end:d}},e.consumeArgs=function(r,n){if(n){if(n.length!==r+1)throw new p("The length of delimiters doesn't match the number of args!");for(var a=n[0],c=0;cthis.settings.maxExpand)throw new p("Too many expansions: infinite loop or need to increase maxExpand setting");var d=c.tokens,g=this.consumeArgs(c.numArgs,c.delimiters);if(c.numArgs){d=d.slice();for(var y=d.length-1;y>=0;--y){var T=d[y];if(T.text==="#"){if(y===0)throw new p("Incomplete placeholder at end of macro body",T);if(T=d[--y],T.text==="#")d.splice(y+1,1);else if(/^[1-9]$/.test(T.text)){var B;(B=d).splice.apply(B,[y,2].concat(g[+T.text-1]))}else throw new p("Not a valid argument number",T)}}}return this.pushTokens(d),d.length},e.expandAfterFuture=function(){return this.expandOnce(),this.future()},e.expandNextToken=function(){for(;;)if(this.expandOnce()===!1){var r=this.stack.pop();return r.treatAsRelax&&(r.text="\\relax"),r}throw new Error},e.expandMacro=function(r){return this.macros.has(r)?this.expandTokens([new e0(r)]):void 0},e.expandTokens=function(r){var n=[],a=this.stack.length;for(this.pushTokens(r);this.stack.length>a;)if(this.expandOnce(!0)===!1){var c=this.stack.pop();c.treatAsRelax&&(c.noexpand=!1,c.treatAsRelax=!1),n.push(c)}return n},e.expandMacroAsText=function(r){var n=this.expandMacro(r);return n&&n.map(function(a){return a.text}).join("")},e._getExpansion=function(r){var n=this.macros.get(r);if(n==null)return n;if(r.length===1){var a=this.lexer.catcodes[r];if(a!=null&&a!==13)return}var c=typeof n=="function"?n(this):n;if(typeof c=="string"){var d=0;if(c.indexOf("#")!==-1)for(var g=c.replace(/##/g,"");g.indexOf("#"+(d+1))!==-1;)++d;for(var y=new $a(c,this.settings),T=[],B=y.lex();B.text!=="EOF";)T.push(B),B=y.lex();T.reverse();var F={tokens:T,numArgs:d};return F}return c},e.isDefined=function(r){return 
this.macros.has(r)||r0.hasOwnProperty(r)||Ne.math.hasOwnProperty(r)||Ne.text.hasOwnProperty(r)||ei.hasOwnProperty(r)},e.isExpandable=function(r){var n=this.macros.get(r);return n!=null?typeof n=="string"||typeof n=="function"||!n.unexpandable:r0.hasOwnProperty(r)&&!r0[r].primitive},u}(),ti=/^[₊₋₌₍₎₀₁₂₃₄₅₆₇₈₉ₐₑₕᵢⱼₖₗₘₙₒₚᵣₛₜᵤᵥₓᵦᵧᵨᵩᵪ]/,Mr=Object.freeze({"₊":"+","₋":"-","₌":"=","₍":"(","₎":")","₀":"0","₁":"1","₂":"2","₃":"3","₄":"4","₅":"5","₆":"6","₇":"7","₈":"8","₉":"9","ₐ":"a","ₑ":"e","ₕ":"h","ᵢ":"i","ⱼ":"j","ₖ":"k","ₗ":"l","ₘ":"m","ₙ":"n","ₒ":"o","ₚ":"p","ᵣ":"r","ₛ":"s","ₜ":"t","ᵤ":"u","ᵥ":"v","ₓ":"x","ᵦ":"β","ᵧ":"γ","ᵨ":"ρ","ᵩ":"ϕ","ᵪ":"χ","⁺":"+","⁻":"-","⁼":"=","⁽":"(","⁾":")","⁰":"0","¹":"1","²":"2","³":"3","⁴":"4","⁵":"5","⁶":"6","⁷":"7","⁸":"8","⁹":"9","ᴬ":"A","ᴮ":"B","ᴰ":"D","ᴱ":"E","ᴳ":"G","ᴴ":"H","ᴵ":"I","ᴶ":"J","ᴷ":"K","ᴸ":"L","ᴹ":"M","ᴺ":"N","ᴼ":"O","ᴾ":"P","ᴿ":"R","ᵀ":"T","ᵁ":"U","ⱽ":"V","ᵂ":"W","ᵃ":"a","ᵇ":"b","ᶜ":"c","ᵈ":"d","ᵉ":"e","ᶠ":"f","ᵍ":"g",ʰ:"h","ⁱ":"i",ʲ:"j","ᵏ":"k",ˡ:"l","ᵐ":"m",ⁿ:"n","ᵒ":"o","ᵖ":"p",ʳ:"r",ˢ:"s","ᵗ":"t","ᵘ":"u","ᵛ":"v",ʷ:"w",ˣ:"x",ʸ:"y","ᶻ":"z","ᵝ":"β","ᵞ":"γ","ᵟ":"δ","ᵠ":"ϕ","ᵡ":"χ","ᶿ":"θ"}),wn={"́":{text:"\\'",math:"\\acute"},"̀":{text:"\\`",math:"\\grave"},"̈":{text:'\\"',math:"\\ddot"},"̃":{text:"\\~",math:"\\tilde"},"̄":{text:"\\=",math:"\\bar"},"̆":{text:"\\u",math:"\\breve"},"̌":{text:"\\v",math:"\\check"},"̂":{text:"\\^",math:"\\hat"},"̇":{text:"\\.",math:"\\dot"},"̊":{text:"\\r",math:"\\mathring"},"̋":{text:"\\H"},"̧":{text:"\\c"}},ri={á:"á",à:"à",ä:"ä",ǟ:"ǟ",ã:"ã",ā:"ā",ă:"ă",ắ:"ắ",ằ:"ằ",ẵ:"ẵ",ǎ:"ǎ",â:"â",ấ:"ấ",ầ:"ầ",ẫ:"ẫ",ȧ:"ȧ",ǡ:"ǡ",å:"å",ǻ:"ǻ",ḃ:"ḃ",ć:"ć",ḉ:"ḉ",č:"č",ĉ:"ĉ",ċ:"ċ",ç:"ç",ď:"ď",ḋ:"ḋ",ḑ:"ḑ",é:"é",è:"è",ë:"ë",ẽ:"ẽ",ē:"ē",ḗ:"ḗ",ḕ:"ḕ",ĕ:"ĕ",ḝ:"ḝ",ě:"ě",ê:"ê",ế:"ế",ề:"ề",ễ:"ễ",ė:"ė",ȩ:"ȩ",ḟ:"ḟ",ǵ:"ǵ",ḡ:"ḡ",ğ:"ğ",ǧ:"ǧ",ĝ:"ĝ",ġ:"ġ",ģ:"ģ",ḧ:"ḧ",ȟ:"ȟ",ĥ:"ĥ",ḣ:"ḣ",ḩ:"ḩ",í:"í",ì:"ì",ï:"ï",ḯ:"ḯ",ĩ:"ĩ",ī:"ī",ĭ:"ĭ",ǐ:"ǐ",î:"î",ǰ:"ǰ",ĵ:"ĵ",ḱ:"ḱ",ǩ:"ǩ",ķ:"ķ",ĺ:"ĺ",ľ:"ľ",ļ:"ļ",ḿ:"ḿ",ṁ:"ṁ",ń:"ń",ǹ:"ǹ",ñ:"ñ",ň:"ň",ṅ:"ṅ",ņ:"ņ",ó:"ó",ò:"ò",ö:"ö",ȫ:"ȫ",õ:"õ",ṍ:"ṍ",ṏ:"ṏ",ȭ:"ȭ",ō:"ō",ṓ:"ṓ",ṑ:"ṑ",ŏ:"ŏ",ǒ:"ǒ",ô:"ô",ố:"ố",ồ:"ồ",ỗ:"ỗ",ȯ:"ȯ",ȱ:"ȱ",ő:"ő",ṕ:"ṕ",ṗ:"ṗ",ŕ:"ŕ",ř:"ř",ṙ:"ṙ",ŗ:"ŗ",ś:"ś",ṥ:"ṥ",š:"š",ṧ:"ṧ",ŝ:"ŝ",ṡ:"ṡ",ş:"ş",ẗ:"ẗ",ť:"ť",ṫ:"ṫ",ţ:"ţ",ú:"ú",ù:"ù",ü:"ü",ǘ:"ǘ",ǜ:"ǜ",ǖ:"ǖ",ǚ:"ǚ",ũ:"ũ",ṹ:"ṹ",ū:"ū",ṻ:"ṻ",ŭ:"ŭ",ǔ:"ǔ",û:"û",ů:"ů",ű:"ű",ṽ:"ṽ",ẃ:"ẃ",ẁ:"ẁ",ẅ:"ẅ",ŵ:"ŵ",ẇ:"ẇ",ẘ:"ẘ",ẍ:"ẍ",ẋ:"ẋ",ý:"ý",ỳ:"ỳ",ÿ:"ÿ",ỹ:"ỹ",ȳ:"ȳ",ŷ:"ŷ",ẏ:"ẏ",ẙ:"ẙ",ź:"ź",ž:"ž",ẑ:"ẑ",ż:"ż",Á:"Á",À:"À",Ä:"Ä",Ǟ:"Ǟ",Ã:"Ã",Ā:"Ā",Ă:"Ă",Ắ:"Ắ",Ằ:"Ằ",Ẵ:"Ẵ",Ǎ:"Ǎ",Â:"Â",Ấ:"Ấ",Ầ:"Ầ",Ẫ:"Ẫ",Ȧ:"Ȧ",Ǡ:"Ǡ",Å:"Å",Ǻ:"Ǻ",Ḃ:"Ḃ",Ć:"Ć",Ḉ:"Ḉ",Č:"Č",Ĉ:"Ĉ",Ċ:"Ċ",Ç:"Ç",Ď:"Ď",Ḋ:"Ḋ",Ḑ:"Ḑ",É:"É",È:"È",Ë:"Ë",Ẽ:"Ẽ",Ē:"Ē",Ḗ:"Ḗ",Ḕ:"Ḕ",Ĕ:"Ĕ",Ḝ:"Ḝ",Ě:"Ě",Ê:"Ê",Ế:"Ế",Ề:"Ề",Ễ:"Ễ",Ė:"Ė",Ȩ:"Ȩ",Ḟ:"Ḟ",Ǵ:"Ǵ",Ḡ:"Ḡ",Ğ:"Ğ",Ǧ:"Ǧ",Ĝ:"Ĝ",Ġ:"Ġ",Ģ:"Ģ",Ḧ:"Ḧ",Ȟ:"Ȟ",Ĥ:"Ĥ",Ḣ:"Ḣ",Ḩ:"Ḩ",Í:"Í",Ì:"Ì",Ï:"Ï",Ḯ:"Ḯ",Ĩ:"Ĩ",Ī:"Ī",Ĭ:"Ĭ",Ǐ:"Ǐ",Î:"Î",İ:"İ",Ĵ:"Ĵ",Ḱ:"Ḱ",Ǩ:"Ǩ",Ķ:"Ķ",Ĺ:"Ĺ",Ľ:"Ľ",Ļ:"Ļ",Ḿ:"Ḿ",Ṁ:"Ṁ",Ń:"Ń",Ǹ:"Ǹ",Ñ:"Ñ",Ň:"Ň",Ṅ:"Ṅ",Ņ:"Ņ",Ó:"Ó",Ò:"Ò",Ö:"Ö",Ȫ:"Ȫ",Õ:"Õ",Ṍ:"Ṍ",Ṏ:"Ṏ",Ȭ:"Ȭ",Ō:"Ō",Ṓ:"Ṓ",Ṑ:"Ṑ",Ŏ:"Ŏ",Ǒ:"Ǒ",Ô:"Ô",Ố:"Ố",Ồ:"Ồ",Ỗ:"Ỗ",Ȯ:"Ȯ",Ȱ:"Ȱ",Ő:"Ő",Ṕ:"Ṕ",Ṗ:"Ṗ",Ŕ:"Ŕ",Ř:"Ř",Ṙ:"Ṙ",Ŗ:"Ŗ",Ś:"Ś",Ṥ:"Ṥ",Š:"Š",Ṧ:"Ṧ",Ŝ:"Ŝ",Ṡ:"Ṡ",Ş:"Ş",Ť:"Ť",Ṫ:"Ṫ",Ţ:"Ţ",Ú:"Ú",Ù:"Ù",Ü:"Ü",Ǘ:"Ǘ",Ǜ:"Ǜ",Ǖ:"Ǖ",Ǚ:"Ǚ",Ũ:"Ũ",Ṹ:"Ṹ",Ū:"Ū",Ṻ:"Ṻ",Ŭ:"Ŭ",Ǔ:"Ǔ",Û:"Û",Ů:"Ů",Ű:"Ű",Ṽ:"Ṽ",Ẃ:"Ẃ",Ẁ:"Ẁ",Ẅ:"Ẅ",Ŵ:"Ŵ",Ẇ:"Ẇ",Ẍ:"Ẍ",Ẋ:"Ẋ",Ý:"Ý",Ỳ:"Ỳ",Ÿ:"Ÿ",Ỹ:"Ỹ",Ȳ:"Ȳ",Ŷ:"Ŷ",Ẏ:"Ẏ",Ź:"Ź",Ž:"Ž",Ẑ:"Ẑ",Ż:"Ż",ά:"ά",ὰ:"ὰ",ᾱ:"ᾱ",ᾰ:"ᾰ",έ:"έ",ὲ:"ὲ",ή:"ή",ὴ:"ὴ",ί:"ί",ὶ:"ὶ",ϊ:"ϊ",ΐ:"ΐ",ῒ:"ῒ",ῑ:"ῑ",ῐ:"ῐ",ό:"ό",ὸ:"ὸ",ύ:"ύ",ὺ:"ὺ",ϋ:"ϋ",ΰ:"ΰ",ῢ:"ῢ",ῡ:"ῡ",ῠ:"ῠ",ώ:"ώ",ὼ:"ὼ",Ύ:"Ύ",Ὺ:"Ὺ",Ϋ:"Ϋ",Ῡ:"Ῡ
",Ῠ:"Ῠ",Ώ:"Ώ",Ὼ:"Ὼ"},ni=function(){function u(t,r){this.mode=void 0,this.gullet=void 0,this.settings=void 0,this.leftrightDepth=void 0,this.nextToken=void 0,this.mode="math",this.gullet=new Ms(t,r,this.mode),this.settings=r,this.leftrightDepth=0}var e=u.prototype;return e.expect=function(r,n){if(n===void 0&&(n=!0),this.fetch().text!==r)throw new p("Expected '"+r+"', got '"+this.fetch().text+"'",this.fetch());n&&this.consume()},e.consume=function(){this.nextToken=null},e.fetch=function(){return this.nextToken==null&&(this.nextToken=this.gullet.expandNextToken()),this.nextToken},e.switchMode=function(r){this.mode=r,this.gullet.switchMode(r)},e.parse=function(){this.settings.globalGroup||this.gullet.beginGroup(),this.settings.colorIsTextColor&&this.gullet.macros.set("\\color","\\textcolor");try{var r=this.parseExpression(!1);return this.expect("EOF"),this.settings.globalGroup||this.gullet.endGroup(),r}finally{this.gullet.endGroups()}},e.subparse=function(r){var n=this.nextToken;this.consume(),this.gullet.pushToken(new e0("}")),this.gullet.pushTokens(r);var a=this.parseExpression(!1);return this.expect("}"),this.nextToken=n,a},e.parseExpression=function(r,n){for(var a=[];;){this.mode==="math"&&this.consumeSpaces();var c=this.fetch();if(u.endOfExpression.indexOf(c.text)!==-1||n&&c.text===n||r&&r0[c.text]&&r0[c.text].infix)break;var d=this.parseAtom(n);if(d){if(d.type==="internal")continue}else break;a.push(d)}return this.mode==="text"&&this.formLigatures(a),this.handleInfixNodes(a)},e.handleInfixNodes=function(r){for(var n=-1,a,c=0;c=0&&this.settings.reportNonstrict("unicodeTextInMathMode",'Latin-1/Unicode text character "'+n[0]+'" used in math mode',r);var y=Ne[this.mode][n].group,T=_t.range(r),B;if(Ot.hasOwnProperty(y)){var F=y;B={type:"atom",mode:this.mode,family:F,loc:T,text:n}}else B={type:y,mode:this.mode,loc:T,text:n};g=B}else if(n.charCodeAt(0)>=128)this.settings.strict&&(pt(n.charCodeAt(0))?this.mode==="math"&&this.settings.reportNonstrict("unicodeTextInMathMode",'Unicode text character "'+n[0]+'" used in math mode',r):this.settings.reportNonstrict("unknownSymbol",'Unrecognized Unicode character "'+n[0]+'"'+(" ("+n.charCodeAt(0)+")"),r)),g={type:"textord",mode:"text",loc:_t.range(r),text:n};else return null;if(this.consume(),d)for(var R=0;R0&&(I.push({type:"text",data:_.slice(0,D)}),_=_.slice(D));var re=N.findIndex(function(fe){return _.startsWith(fe.left)});if(D=L(N[re].right,_,N[re].left.length),D===-1)break;var Z=_.slice(0,D+N[re].right.length),U=K.test(Z)?Z:_.slice(N[re].left.length,D);I.push({type:"math",data:U,rawData:Z,display:N[re].display}),_=_.slice(D+N[re].right.length)}return _!==""&&I.push({type:"text",data:_}),I},V=ne,xe=function(_,N){var D=V(_,N.delimiters);if(D.length===1&&D[0].type==="text")return null;for(var I=document.createDocumentFragment(),j=0;j{Is().then(()=>{m!==p&&requestAnimationFrame(()=>{s(0,w.innerHTML=Ei.sanitize(he.parse(m)),w),s(3,z=!0),p=m,o("load")})})});function L(G){Yi[G?"unshift":"push"](()=>{w=G,s(0,w)})}return v.$$set=G=>{"message"in G&&s(1,m=G.message),"latex_delimiters"in G&&s(2,x=G.latex_delimiters)},v.$$.update=()=>{v.$$.dirty&13&&z&&x.length>0&&Zo(w,{delimiters:x,throwOnError:!1})},[w,m,x,z,L]}class Jo extends Or{constructor(i){super(),qr(this,i,Qo,Ko,Pr,{message:1,latex_delimiters:2})}}function Ci(v,i,s){const o=v.slice();return o[25]=i[s],o[27]=s,o}function Di(v,i,s){const o=v.slice();return o[28]=i[s],o[30]=s,o}function Ni(v,i,s){const o=v.slice();return o[31]=i[s],o}function Ri(v){let i,s,o;return s=new 
Vs({props:{formatter:zo,value:v[0]}}),s.$on("error",v[18]),s.$on("share",v[19]),{c(){i=ct("div"),H0(s.$$.fragment),de(i,"class","icon-button svelte-1fzvtqo")},m(m,p){Xe(m,i,p),U0(s,i,null),o=!0},p(m,p){const x={};p[0]&1&&(x.value=m[0]),s.$set(x)},i(m){o||(Re(s.$$.fragment,m),o=!0)},o(m){Ye(s.$$.fragment,m),o=!1},d(m){m&&$e(i),G0(s)}}}function Fi(v){let i,s,o=q0(v[0]),m=[];for(let x=0;xYe(m[x],1,1,()=>{m[x]=null});return{c(){for(let x=0;x{V[C]=null}),x0()),~p?(x=V[p],x?x.p(v,q):(x=V[p]=ne[p](v),x.c()),Re(x,1),x.m(i,w)):x=null),(!L||q[0]&64&&z!==(z=v[6]?"rtl":"ltr"))&&de(i,"dir",z),(!L||q[0]&1)&&I0(i,"latest",v[27]===v[0].length-1),(!L||q[0]&1)&&I0(i,"hide",v[28]===null),(!L||q[0]&16)&&I0(i,"selectable",v[4])},i(le){L||(Re(x),L=!0)},o(le){Ye(x),L=!1},d(le){le&&$e(i),~p&&V[p].d(),G=!1,K()}}}function qi(v){let i,s,o=q0(v[25]),m=[];for(let x=0;xYe(m[x],1,1,()=>{m[x]=null});return{c(){for(let x=0;x -   -
    -   -
    `,de(i,"class","message pending svelte-1fzvtqo")},m(s,o){Xe(s,i,o)},d(s){s&&$e(i)}}}function i1(v){let i,s,o,m,p,x,w,z=v[5]&&v[0]!==null&&v[0].length>0&&Ri(v),L=v[0]!==null&&Fi(v),G=v[2]&&Pi();return{c(){z&&z.c(),i=O0(),s=ct("div"),o=ct("div"),L&&L.c(),m=O0(),G&&G.c(),de(o,"class","message-wrap svelte-1fzvtqo"),de(s,"class","wrap svelte-1fzvtqo")},m(K,ne){z&&z.m(K,ne),Xe(K,i,ne),Xe(K,s,ne),Gt(s,o),L&&L.m(o,null),Gt(o,m),G&&G.m(o,null),v[21](s),p=!0,x||(w=Ls(To.call(null,o)),x=!0)},p(K,ne){K[5]&&K[0]!==null&&K[0].length>0?z?(z.p(K,ne),ne[0]&33&&Re(z,1)):(z=Ri(K),z.c(),Re(z,1),z.m(i.parentNode,i)):z&&(y0(),Ye(z,1,1,()=>{z=null}),x0()),K[0]!==null?L?(L.p(K,ne),ne[0]&1&&Re(L,1)):(L=Fi(K),L.c(),Re(L,1),L.m(o,m)):L&&(y0(),Ye(L,1,1,()=>{L=null}),x0()),K[2]?G||(G=Pi(),G.c(),G.m(o,null)):G&&(G.d(1),G=null)},i(K){p||(Re(z),Re(L),p=!0)},o(K){Ye(z),Ye(L),p=!1},d(K){K&&($e(i),$e(s)),z&&z.d(K),L&&L.d(),G&&G.d(),v[21](null),x=!1,w()}}}function l1(v,i,s){const o={light:()=>ui(()=>Promise.resolve({}),["./prism-0efcbb52.css"],import.meta.url),dark:()=>ui(()=>Promise.resolve({}),["./prism-dark-490e4a1c.css"],import.meta.url)};let{value:m}=i,p=null,{latex_delimiters:x}=i,{pending_message:w=!1}=i,{feedback:z=null}=i,{selectable:L=!1}=i,{show_share_button:G=!1}=i,{theme_mode:K}=i,{rtl:ne=!1}=i,V,xe;const ge=Vi();Os(()=>{xe=V&&V.offsetHeight+V.scrollTop>V.scrollHeight-100});const le=()=>{xe&&V.scrollTo(0,V.scrollHeight)};Wi(()=>{xe&&(le(),V.querySelectorAll("img").forEach(se=>{se.addEventListener("load",()=>{le()})}))});function q(se,ze,Pe){ge("select",{index:[se,ze],value:Pe})}function C(se){At.call(this,v,se)}function _(se){At.call(this,v,se)}function N(se){At.call(this,v,se)}function D(se){At.call(this,v,se)}function I(se){At.call(this,v,se)}function j(se){At.call(this,v,se)}function re(se){At.call(this,v,se)}function Z(se){At.call(this,v,se)}const U=(se,ze,Pe)=>q(se,ze,Pe);function fe(se){Yi[se?"unshift":"push"](()=>{V=se,s(7,V)})}return v.$$set=se=>{"value"in se&&s(0,m=se.value),"latex_delimiters"in se&&s(1,x=se.latex_delimiters),"pending_message"in se&&s(2,w=se.pending_message),"feedback"in se&&s(3,z=se.feedback),"selectable"in se&&s(4,L=se.selectable),"show_share_button"in se&&s(5,G=se.show_share_button),"theme_mode"in se&&s(10,K=se.theme_mode),"rtl"in se&&s(6,ne=se.rtl)},v.$$.update=()=>{v.$$.dirty[0]&1024&&(K=="dark"?o.dark():o.light()),v.$$.dirty[0]&2049&&m!==p&&(s(11,p=m),ge("change"))},[m,x,w,z,L,G,ne,V,le,q,K,p,C,_,N,D,I,j,re,Z,U,fe]}class s1 extends Or{constructor(i){super(),qr(this,i,l1,i1,Pr,{value:0,latex_delimiters:1,pending_message:2,feedback:3,selectable:4,show_share_button:5,theme_mode:10,rtl:6},null,[-1,-1])}}function Hi(v){let i,s;const o=[v[12],{show_progress:v[12].show_progress==="hidden"?"hidden":"minimal"}];let m={};for(let p=0;p{x=null}),x0()),z[7]?w?(w.p(z,L),L&128&&Re(w,1)):(w=Ui(z),w.c(),Re(w,1),w.m(s,o)):w&&(y0(),Ye(w,1,1,()=>{w=null}),x0());const G={};L&256&&(G.selectable=z[8]),L&1024&&(G.show_share_button=z[10]),L&512&&(G.theme_mode=z[9]),L&16384&&(G.value=z[14]),L&8&&(G.latex_delimiters=z[3]),L&4096&&(G.pending_message=z[12]?.status==="pending"),L&2048&&(G.rtl=z[11]),m.$set(G)},i(z){p||(Re(x),Re(w),Re(m.$$.fragment,z),p=!0)},o(z){Ye(x),Ye(w),Ye(m.$$.fragment,z),p=!1},d(z){z&&($e(i),$e(s)),x&&x.d(z),w&&w.d(),G0(m)}}}function u1(v){let i,s;return i=new Ws({props:{elem_id:v[0],elem_classes:v[1],visible:v[2],padding:!1,scale:v[4],min_width:v[5],height:v[13],allow_overflow:!1,$$slots:{default:[o1]},$$scope:{ctx:v}}}),{c(){H0(i.$$.fragment)},m(o,m){U0(i,o,m),s=!0},p(o,[m]){const 
p={};m&1&&(p.elem_id=o[0]),m&2&&(p.elem_classes=o[1]),m&4&&(p.visible=o[2]),m&16&&(p.scale=o[4]),m&32&&(p.min_width=o[5]),m&8192&&(p.height=o[13]),m&8413128&&(p.$$scope={dirty:m,ctx:o}),i.$set(p)},i(o){s||(Re(i.$$.fragment,o),s=!0)},o(o){Ye(i.$$.fragment,o),s=!1},d(o){G0(i,o)}}}function c1(v,i,s){let{elem_id:o=""}=i,{elem_classes:m=[]}=i,{visible:p=!0}=i,{value:x=[]}=i,w,{latex_delimiters:z}=i,{scale:L=null}=i,{min_width:G=void 0}=i,{label:K}=i,{show_label:ne=!0}=i,{root:V}=i,{root_url:xe}=i,{selectable:ge=!1}=i,{theme_mode:le}=i,{show_share_button:q=!1}=i,{rtl:C=!1}=i;const _=U=>U.replace('src="/file',`src="${V}file`);let{loading_status:N=void 0}=i,{height:D=400}=i;function I(U){At.call(this,v,U)}function j(U){At.call(this,v,U)}function re(U){At.call(this,v,U)}function Z(U){At.call(this,v,U)}return v.$$set=U=>{"elem_id"in U&&s(0,o=U.elem_id),"elem_classes"in U&&s(1,m=U.elem_classes),"visible"in U&&s(2,p=U.visible),"value"in U&&s(15,x=U.value),"latex_delimiters"in U&&s(3,z=U.latex_delimiters),"scale"in U&&s(4,L=U.scale),"min_width"in U&&s(5,G=U.min_width),"label"in U&&s(6,K=U.label),"show_label"in U&&s(7,ne=U.show_label),"root"in U&&s(16,V=U.root),"root_url"in U&&s(17,xe=U.root_url),"selectable"in U&&s(8,ge=U.selectable),"theme_mode"in U&&s(9,le=U.theme_mode),"show_share_button"in U&&s(10,q=U.show_share_button),"rtl"in U&&s(11,C=U.rtl),"loading_status"in U&&s(12,N=U.loading_status),"height"in U&&s(13,D=U.height)},v.$$.update=()=>{v.$$.dirty&229376&&s(14,w=x?x.map(([U,fe])=>[typeof U=="string"?_(U):ci(U,V,xe),typeof fe=="string"?_(fe):ci(fe,V,xe)]):[])},[o,m,p,z,L,G,K,ne,ge,le,q,C,N,D,w,x,V,xe,I,j,re,Z]}class h1 extends Or{constructor(i){super(),qr(this,i,c1,u1,Pr,{elem_id:0,elem_classes:1,visible:2,value:15,latex_delimiters:3,scale:4,min_width:5,label:6,show_label:7,root:16,root_url:17,selectable:8,theme_mode:9,show_share_button:10,rtl:11,loading_status:12,height:13})}}const b1=h1,y1=["static"];export{b1 as Component,y1 as modes}; -//# sourceMappingURL=index-add9ad59.js.map diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/utils/_typing.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/utils/_typing.py deleted file mode 100644 index c8885eb1eb7ca419eb9a3ef0235685f634d6bd16..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/utils/_typing.py +++ /dev/null @@ -1,28 +0,0 @@ -# coding=utf-8 -# Copyright 2022-present, the HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-"""Handle typing imports based on system compatibility.""" -import sys -from typing import Callable, TypeVar - - -if sys.version_info >= (3, 8): - from typing import Literal, TypedDict -else: - from typing_extensions import Literal, TypedDict # noqa: F401 - -HTTP_METHOD_T = Literal["GET", "OPTIONS", "HEAD", "POST", "PUT", "PATCH", "DELETE"] - -# type hint meaning "function signature not changed by decorator" -CallableT = TypeVar("CallableT", bound=Callable) diff --git a/spaces/Datasculptor/LoRA-DreamBooth-Training-UI/inference.py b/spaces/Datasculptor/LoRA-DreamBooth-Training-UI/inference.py deleted file mode 100644 index ce0f2b08df75e6d62f06c4119f1dc859930de032..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/LoRA-DreamBooth-Training-UI/inference.py +++ /dev/null @@ -1,94 +0,0 @@ -from __future__ import annotations - -import gc -import pathlib - -import gradio as gr -import PIL.Image -import torch -from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler -from huggingface_hub import ModelCard - - -class InferencePipeline: - def __init__(self, hf_token: str | None = None): - self.hf_token = hf_token - self.pipe = None - self.device = torch.device( - 'cuda:0' if torch.cuda.is_available() else 'cpu') - self.lora_model_id = None - self.base_model_id = None - - def clear(self) -> None: - self.lora_model_id = None - self.base_model_id = None - del self.pipe - self.pipe = None - torch.cuda.empty_cache() - gc.collect() - - @staticmethod - def check_if_model_is_local(lora_model_id: str) -> bool: - return pathlib.Path(lora_model_id).exists() - - @staticmethod - def get_model_card(model_id: str, - hf_token: str | None = None) -> ModelCard: - if InferencePipeline.check_if_model_is_local(model_id): - card_path = (pathlib.Path(model_id) / 'README.md').as_posix() - else: - card_path = model_id - return ModelCard.load(card_path, token=hf_token) - - @staticmethod - def get_base_model_info(lora_model_id: str, - hf_token: str | None = None) -> str: - card = InferencePipeline.get_model_card(lora_model_id, hf_token) - return card.data.base_model - - def load_pipe(self, lora_model_id: str) -> None: - if lora_model_id == self.lora_model_id: - return - base_model_id = self.get_base_model_info(lora_model_id, self.hf_token) - if base_model_id != self.base_model_id: - if self.device.type == 'cpu': - pipe = DiffusionPipeline.from_pretrained( - base_model_id, use_auth_token=self.hf_token) - else: - pipe = DiffusionPipeline.from_pretrained( - base_model_id, - torch_dtype=torch.float16, - use_auth_token=self.hf_token) - pipe = pipe.to(self.device) - pipe.scheduler = DPMSolverMultistepScheduler.from_config( - pipe.scheduler.config) - self.pipe = pipe - self.pipe.unet.load_attn_procs( # type: ignore - lora_model_id, use_auth_token=self.hf_token) - - self.lora_model_id = lora_model_id # type: ignore - self.base_model_id = base_model_id # type: ignore - - def run( - self, - lora_model_id: str, - prompt: str, - lora_scale: float, - seed: int, - n_steps: int, - guidance_scale: float, - ) -> PIL.Image.Image: - if not torch.cuda.is_available(): - raise gr.Error('CUDA is not available.') - - self.load_pipe(lora_model_id) - - generator = torch.Generator(device=self.device).manual_seed(seed) - out = self.pipe( - prompt, - num_inference_steps=n_steps, - guidance_scale=guidance_scale, - generator=generator, - cross_attention_kwargs={'scale': lora_scale}, - ) # type: ignore - return out.images[0] diff --git a/spaces/Detomo/ai-comic-generation/src/app/interface/about/index.tsx 
b/spaces/Detomo/ai-comic-generation/src/app/interface/about/index.tsx deleted file mode 100644 index 45ef9f30aff4e54048f3e2b274ec714ddb23f7ec..0000000000000000000000000000000000000000 --- a/spaces/Detomo/ai-comic-generation/src/app/interface/about/index.tsx +++ /dev/null @@ -1,46 +0,0 @@ -import { Button } from "@/components/ui/button" -import { Dialog, DialogContent, DialogDescription, DialogFooter, DialogHeader, DialogTitle, DialogTrigger } from "@/components/ui/dialog" -import { useState } from "react" - -export function About() { - const [isOpen, setOpen] = useState(false) - - return ( - - - - - - - The AI Comic Factory - - What is the AI Comic Factory? - - -
    -

    - The AI Comic Factory is a free and open-source application made to demonstrate the capabilities of AI models. -

    -

    - 👉 The language model used to generate the descriptions of each panel is Llama-2 70b. -

    -

    - 👉 The stable diffusion model used to generate the images is the base SDXL 1.0. -

    -

    - The code is public and can be deployed at home with some changes in the code. See the README for details about the architecture. -

    -

    - Do you want to create high-res image exports? Please check this tutorial. -

    -
    - - - - - - ) -} \ No newline at end of file diff --git a/spaces/DiegoLigtenberg/realtimespeech/parsarg.py b/spaces/DiegoLigtenberg/realtimespeech/parsarg.py deleted file mode 100644 index 1abb9f508fb308074981b20ae71e53990114e23f..0000000000000000000000000000000000000000 --- a/spaces/DiegoLigtenberg/realtimespeech/parsarg.py +++ /dev/null @@ -1,26 +0,0 @@ -import argparse -import yaml - -def model_parser_args(): - with open(r'utils/models.yaml') as f: - settings = yaml.full_load(f) - parser = argparse.ArgumentParser() - parser.add_argument("--model", help="see model_settings.yaml",default=settings) - parser.add_argument("--model_names", help="see model_settings.yaml",default=list(settings)) - setting_list = [] - task_list = [] - for i in range(len(settings)): - setting_list.append(list(settings[list(settings.keys())[i]].keys())) - for model in (list(settings.keys())): - task = (settings[model]["task"]) - if task not in task_list:task_list.append(task) - setting_list = ([setting for sublist in setting_list for setting in sublist]) # generate all sublists - setting_list = [x for i, x in enumerate(setting_list) if x not in setting_list[:i]] # remain order of sublists - parser.add_argument("--model_settings",help="see model_settings.yaml",default=setting_list) - parser.add_argument("--model_tasks",help="see model_settings.yaml",default=task_list) - parser=parser.parse_args() - return parser - -if __name__ == "__main__": - model_parser_args() - diff --git a/spaces/Dimalker/Faceswapper/roop/globals.py b/spaces/Dimalker/Faceswapper/roop/globals.py deleted file mode 100644 index 77fd391db235b878ce1f91765596bd76adb06697..0000000000000000000000000000000000000000 --- a/spaces/Dimalker/Faceswapper/roop/globals.py +++ /dev/null @@ -1,17 +0,0 @@ -from typing import List - -source_path = None -target_path = None -output_path = None -frame_processors: List[str] = [] -keep_fps = None -keep_audio = None -keep_frames = None -many_faces = None -video_encoder = None -video_quality = None -max_memory = None -execution_providers: List[str] = [] -execution_threads = None -headless = None -log_level = 'error' diff --git a/spaces/DrGabrielLopez/GPT2_Chatbot/app.py b/spaces/DrGabrielLopez/GPT2_Chatbot/app.py deleted file mode 100644 index 4499e8ac66bb875b29da035cee3af8f74d59a9df..0000000000000000000000000000000000000000 --- a/spaces/DrGabrielLopez/GPT2_Chatbot/app.py +++ /dev/null @@ -1,139 +0,0 @@ -from transformers import TFAutoModelForCausalLM, AutoTokenizer -import tensorflow as tf -import gradio as gr -import spacy -from spacy import displacy -from transformers import TFAutoModelForSequenceClassification -from transformers import AutoTokenizer -from scipy.special import softmax -import plotly.express as px -import plotly.io as pio - -# configuration params -pio.templates.default = "plotly_dark" - -# setting up the text in the page -TITLE = "

    Talk with an AI

    " -DESCRIPTION = r"""
    This application allows you to talk with a machine/robot with state-of-the-art technology!!
    - The back-end uses the GPT2 model from OpenAI, one of the best models for text generation and comprehension.
    - Language processing is done using RoBERTa for sentiment analysis and spaCy for named-entity recognition and dependency plotting.
    - The AI thinks he is a human, so please treat him as such, or else he might get angry!
    - """ -EXAMPLES = [ - ["What is your favorite videogame?"], - ["What gets you really sad?"], - ["How can I make you really angry? "], - ["What do you do for work?"], - ["What are your hobbies?"], - ["What is your favorite food?"], -] -ARTICLE = r"""
    - Done by Dr. Gabriel Lopez
    - For more please visit: My Page
    - For more info about the chat-bot model, you can also see the ArXiv paper
    -
    """ - -# Loading necessary NLP models -# dialog -checkpoint = "microsoft/DialoGPT-medium" # tf -model_gtp2 = TFAutoModelForCausalLM.from_pretrained(checkpoint) -tokenizer_gtp2 = AutoTokenizer.from_pretrained(checkpoint) -# sentiment -checkpoint = f"cardiffnlp/twitter-roberta-base-emotion" -model_roberta = TFAutoModelForSequenceClassification.from_pretrained(checkpoint) -tokenizer_roberta = AutoTokenizer.from_pretrained(checkpoint) -# NER & Dependency -nlp = spacy.load("en_core_web_sm") - -# test-to-test : chatting function -- GPT2 -def chat_with_bot(user_input, chat_history_and_input=[]): - """Text generation using GPT2""" - emb_user_input = tokenizer_gtp2.encode( - user_input + tokenizer_gtp2.eos_token, return_tensors="tf" - ) - if chat_history_and_input == []: - bot_input_ids = emb_user_input # first iteration - else: - bot_input_ids = tf.concat( - [chat_history_and_input, emb_user_input], axis=-1 - ) # other iterations - chat_history_and_input = model_gtp2.generate( - bot_input_ids, max_length=1000, pad_token_id=tokenizer_gtp2.eos_token_id - ).numpy() - # print - bot_response = tokenizer_gtp2.decode( - chat_history_and_input[:, bot_input_ids.shape[-1] :][0], - skip_special_tokens=True, - ) - return bot_response, chat_history_and_input - - -# text-to-sentiment -def text_to_sentiment(text_input): - """Sentiment analysis using RoBERTa""" - labels = ["anger", "joy", "optimism", "sadness"] - encoded_input = tokenizer_roberta(text_input, return_tensors="tf") - output = model_roberta(encoded_input) - scores = output[0][0].numpy() - scores = softmax(scores) - return px.histogram(x=labels, y=scores, height=200) - - -# text_to_semantics -def text_to_semantics(text_input): - """NER and Dependency plot using Spacy""" - processed_text = nlp(text_input) - # Dependency - html_dep = displacy.render( - processed_text, - style="dep", - options={"compact": True, "color": "white", "bg": "light-black"}, - page=False, - ) - html_dep = "" + html_dep + "" - # NER - pos_tokens = [] - for token in processed_text: - pos_tokens.extend([(token.text, token.pos_), (" ", None)]) - # html_ner = ("" + html_ner + "")s - return pos_tokens, html_dep - - -# gradio interface -blocks = gr.Blocks() -with blocks: - # physical elements - session_state = gr.State([]) - gr.Markdown(TITLE) - gr.Markdown(DESCRIPTION) - with gr.Row(): - with gr.Column(): - in_text = gr.Textbox(value="How was the class?", label="Start chatting!") - submit_button = gr.Button("Submit") - gr.Examples(inputs=in_text, examples=EXAMPLES) - with gr.Column(): - response_text = gr.Textbox(value="", label="GPT2 response:") - sentiment_plot = gr.Plot( - label="How is GPT2 feeling about your conversation?:", visible=True - ) - ner_response = gr.Highlight( - label="Named Entity Recognition (NER) over response" - ) - dependency_plot = gr.HTML(label="Dependency plot of response") - gr.Markdown(ARTICLE) - # event listeners - submit_button.click( - inputs=[in_text, session_state], - outputs=[response_text, session_state], - fn=chat_with_bot, - ) - response_text.change( - inputs=response_text, outputs=sentiment_plot, fn=text_to_sentiment - ) - response_text.change( - inputs=response_text, - outputs=[ner_response, dependency_plot], - fn=text_to_semantics, - ) - -blocks.launch() diff --git a/spaces/Dusan/clickbaitonator/fudge/README.md b/spaces/Dusan/clickbaitonator/fudge/README.md deleted file mode 100644 index 66a77024a34699e9c0bf7e1f9a42f6569c2a1eec..0000000000000000000000000000000000000000 --- a/spaces/Dusan/clickbaitonator/fudge/README.md +++ /dev/null @@ -1,155 
+0,0 @@ -# FUDGE: Controlled Text Generation With Future Discriminators - -This repo contains code corresponding to the paper FUDGE: Controlled Text Generation With Future Discriminators (https://arxiv.org/abs/2104.05218) by Kevin Yang and Dan Klein, published at NAACL 2021. - -You can also find a video presentation at http://somup.com/crhlVPFKN7 and the corresponding slides in `slides.pptx`. - -## Setup/Installation - -We tested on Python 3.8.5 but earlier versions of Python 3 are almost certainly fine. To get the required packages (other versions likely to work too): - -``` -pip install -r requirements.txt -``` - -Additionally, to get our pre-trained predictor checkpoints and training data, run: - -``` -wget https://naacl2021-fudge-files.s3.amazonaws.com/large_files.zip -``` - -and extract the zip to the top-level `lm-prediction/` folder. (There should be three folders, `ckpt/`, `train_data/`, and `topic_human_evals/`. The zip is 7GB.) Note: the zip seems to not work for some people actually, if this is the case you can get the files directly from https://drive.google.com/drive/folders/1GZfOGqpQxDmIfD2RvuhUQla9eX2OHUXU?usp=sharing (13GB). - -`ckpt/` contains predictor checkpoints for each task if you are just interested in running inference. (Note that for the paper results, we used predictors trained with an older version of the code, but the new checkpoints get similar results, so you are OK to use the new predictors provided here if e.g. you just want to use FUDGE as a baseline. You can just run the evaluation commands provided below; it should take maybe 5-60 minutes depending on the task and your compute, assuming you have a GPU.) - -`train_data/` contains our GPT2-generated training data for the poetry and topic tasks' predictors. See https://github.com/raosudha89/GYAFC-corpus for instructions on gaining access to the GYAFC data used for the machine translation formality task; replace our dummy folders with the corresponding folders/files if you want to train our formality predictor. 
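For orientation before the task-specific sections, here is a rough, hypothetical sketch of the decoding step FUDGE performs, based on the description later in this README and Sec. 3.1 of the paper: the base LM's next-token distribution is filtered to the top 200 candidates, the attribute predictor scores each candidate continuation, and the classifier log-probabilities are added to the LM log-probabilities before the final top-k sampling. The `lm`, `predictor`, and `prefix_ids` names below are placeholders, not this repo's actual API:

```
# Hypothetical FUDGE decoding step (not the repo's real interface).
# Assumes `lm(prefix_ids)` returns (1, seq, vocab) logits and `predictor(ids)`
# returns (n, 2) logits for P(attribute | prefix + candidate token).
import torch
import torch.nn.functional as F

def fudge_step(lm, predictor, prefix_ids, precondition_topk=200, topk=10):
    with torch.no_grad():
        lm_logits = lm(prefix_ids)[:, -1, :]                        # (1, vocab)
        top_logits, top_ids = lm_logits.topk(precondition_topk, dim=-1)
        # Evaluate the predictor only on the top candidate continuations.
        candidates = torch.cat(
            [prefix_ids.repeat(precondition_topk, 1), top_ids.T], dim=1
        )                                                            # (200, len + 1)
        attr_logprob = torch.log_softmax(predictor(candidates), dim=-1)[:, 1]
        # Core FUDGE step: add classifier log-probs to the LM log-probs.
        combined = torch.log_softmax(top_logits, dim=-1).squeeze(0) + attr_logprob
        # Final filtering to top-k before sampling, as in the paper's evaluation setup.
        keep_probs, keep_idx = F.softmax(combined, dim=-1).topk(topk)
        choice = keep_idx[torch.multinomial(keep_probs / keep_probs.sum(), 1)]
        return top_ids.squeeze(0)[choice]                            # next token id
```

The actual implementations live in the task-specific `evaluate_*.py` scripts described below.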
- -## Clickbait -To generate outputs, run: - -``` -python -u evaluate_clickbait.py --ckpt ckpt/topic/future_word_predictor/model.pth.tar --dataset_info ckpt/topic/future_word_predictor/dataset_info --in_file topic_data/topic_prefixes.txt --condition_lambda 4.0 --verbose --precondition_topk 200 --length_cutoff 80 --device cpu - -python -u evaluate_clickbait.py --ckpt ckpt/formality/predictor_gyafc_entertainment_music/model.pth.tar --dataset_info ckpt/formality/predictor_gyafc_entertainment_music/dataset_info --in_file formality_data/fisher_test_oracle.es - -python -u evaluate_clickbait.py --ckpt ckpt/topic/future_word_predictor/model.pth.tar --dataset_info ckpt/topic/future_word_predictor/dataset_info --in_file topic_data/topic_prefixes.txt --condition_lambda 4.0 --verbose --precondition_topk 200 --sample_size 3 --max_sample_batch 1 --length_cutoff 80 --log_file clickbait_preds.log -``` - -Then evaluate metrics using: - -``` -python eval_topic_metrics.py --log_file topic_preds.log --tw_dir topic_data/test_wordlists -``` - - -## Poetry Couplet Completion - -### Evaluation - -To generate outputs, run: - -``` -python -u evaluate_poetry.py --iambic_ckpt ckpt/poetry/iambic_predictor/model.pth.tar --rhyme_ckpt ckpt/poetry/rhyme_predictor/model.pth.tar --newline_ckpt ckpt/poetry/newline_predictor/model.pth.tar --dataset_info ckpt/poetry/rhyme_predictor/dataset_info --rhyme_info ckpt/poetry/rhyme_predictor/rhyme_info --prefix_file poetry_data/couplet_prefixes.txt --precondition_topk 200 > poetry_preds.log -``` - -Then evaluate metrics using: - -``` -python eval_poetry_metrics.py --pred_file poetry_preds.log --prefix_file poetry_data/couplet_prefixes.txt -``` - -### Training your own predictors - -Example commands for all three predictors used in the poetry task below. (You actually probably don't need so many epochs for iambic and rhyme; in any case the commands will save intermediate ckpts so you can just stop them early if needed by inspecting the log.) - -Iambic predictor: - -``` -python -u main.py --task iambic --data_dir train_data/gpt2_generations --save_dir ckpt/poetry/iambic_retrain_predictor --num_workers 20 --batch_size 128 --epoch_max_len 100000 --validation_freq 10 --lr 2e-4 --epochs 1500 > iambic_retrain_predictor.log -``` - -Rhyme predictor: - -``` -python -u main.py --task rhyme --data_dir train_data/gpt2_generations --save_dir ckpt/poetry/rhyme_retrain_predictor --num_workers 20 --batch_size 128 --epoch_max_len 100000 --validation_freq 10 --lr 2e-4 --epochs 1500 > rhyme_retrain_predictor.log -``` - -End of sentence predictor (referred to as "newline" in the code; 50 epochs is more than enough for this one): - -``` -python -u main.py --task newline --data_dir train_data/gpt2_generations --save_dir ckpt/poetry/newline_retrain_predictor --num_workers 20 --batch_size 128 --epoch_max_len 100000 --validation_freq 10 --lr 2e-4 --epochs 50 > newline_retrain_predictor.log -``` - -The same evaluation commands as before will work; just modify the paths in the command to point to `model_best.pth.tar`, `dataset_info`, and `rhyme_info` from your newly trained ckpt folders. 
- -## Topic Control - -### Evaluation - -To generate outputs, run: - -``` -python -u evaluate_topic.py --ckpt ckpt/topic/future_word_predictor/model.pth.tar --dataset_info ckpt/topic/future_word_predictor/dataset_info --prefix_file topic_data/topic_prefixes.txt --wordlist_dir topic_data/wordlists --condition_lambda 4.0 --verbose --precondition_topk 200 --topk 10 --sample_size 3 --max_sample_batch 1 --length_cutoff 80 --log_file topic_preds.log -``` - -Then evaluate metrics using: - -``` -python eval_topic_metrics.py --log_file topic_preds.log --tw_dir topic_data/test_wordlists -``` - -You can also find our original generations and baselines in `topic_human_evals/`. - -### Training your own predictors - -Example command below. - -``` -python -u main.py --task topic --data_dir train_data/gpt2_generations --save_dir ckpt/topic/future_word_retrain_predictor --num_workers 20 --batch_size 128 --epoch_max_len 100000 --validation_freq 10 --lr 2e-4 --epochs 500 --glove_file train_data/glove.840B.300d.txt > future_word_retrain_predictor.log -``` - -The same evaluation commands as before will work; just modify the paths in the command to point to `model_best.pth.tar`, `dataset_info`, and `rhyme_info` from your newly trained ckpt folders. - -## Machine Translation Formality - -### Evaluation - -To generate outputs, run: - -``` -python -u evaluate_formality.py --ckpt ckpt/formality/predictor_gyafc_entertainment_music/model.pth.tar --dataset_info ckpt/formality/predictor_gyafc_entertainment_music/dataset_info --in_file formality_data/fisher_test_oracle.es --model_path ckpt/formality/marian_finetune_fisher > formality_preds.log -``` - -The above command generates predictions using the Marian model finetuned on the Fisher dataset; remove the `--model_path` argument to get predictions with the un-finetuned Marian model from HuggingFace (referred to as 0-shot in the paper) - -Then evaluate metrics using: - -``` -python eval_formality_metrics.py --pred formality_preds.log --ref formality_data/test.noid.cleaned_0 formality_data/test.noid.cleaned_1 --ckpt ckpt/formality/test_evaluator_gyafc_family_relationships/model.pth.tar --dataset_info ckpt/formality/test_evaluator_gyafc_family_relationships/dataset_info -``` - -### Training your own predictors - -Example command below. (Reminder: you need to go get the GYAFC dataset following the instructions in https://github.com/raosudha89/GYAFC-corpus.) - -``` -python -u main.py --task formality --data_dir train_data/GYAFC_Corpus/Entertainment_Music --save_dir ckpt/formality/formality_retrain_predictor --num_workers 20 --batch_size 32 --epoch_max_len 1000000 --validation_freq 1 --lr 2e-5 --epochs 20 > formality_retrain_predictor.log -``` - -(The test-time formality evaluator is trained in the same way, just using the Family/Relationships half of the GYAFC dataset.) - -The same evaluation commands as before will work; just modify the paths in the command to point to `model_best.pth.tar`, `dataset_info`, and `rhyme_info` from your newly trained ckpt folders. - -## Running FUDGE on your own data - -The code has been refactored so that the iambic (poetry), rhyme (poetry), newline (poetry), future word (topic), and formality (machine translation) are controlled by the `--task` flag to `main.py`. You should add your task as another option here, then modify the data processing in `data.py` and the model in `model.py` as needed for your task. 
(In `data.py` you probably won't need all the entries of the tuple that is expected of the loader; you can just put dummy entries in the ones you don't need.) You might also need to modify the loss computation in the `train` and `validate` functions in `main.py`. You'll probably want to write new evaluation scripts, though the existing poetry/topic/formality ones are hopefully helpful as references. - -Alternatively, the general FUDGE framework is pretty simple, so you could always try reimplementing things yourself. A few additional details based on questions I've received: - -(1) The formality task setup is likely closest to what you want if you're just trying to run the simplest form of FUDGE (take a language model, and use a classifier to optimize toward a single attribute) although you may need to swap out the Marian translation model/tokenizer we use. - -(2) When you construct your training data, if you have an example in your data e.g. "This movie is great!" for positive sentiment, you want to learn on all the pairs (This, +), (This movie, +), (This movie is, +), etc., as that's one of the main points of our approach. - -(3) For computational efficiency, we first filter the base model's next token probabilities down to the top 200 (Sec. 3.1 in the paper), before adding the classifier logits. This way you only need to evaluate your classifier on 200 continuations. Then afterward, you filter down again to whatever top-k/greedy/nucleus sampling you're using for evaluation (we use top-k with k=10 for poetry and topic, greedy for formality). - -(4) You can use a pretrained LM backbone instead of a simple LSTM backbone for the predictor as well. This should work better when your dataset is smaller. \ No newline at end of file diff --git a/spaces/Dusan/clickbaitonator/fudge/formality_data/README.md b/spaces/Dusan/clickbaitonator/fudge/formality_data/README.md deleted file mode 100644 index a3744ed8ab664d4d1c33e3dd104bdc1b32f1cb04..0000000000000000000000000000000000000000 --- a/spaces/Dusan/clickbaitonator/fudge/formality_data/README.md +++ /dev/null @@ -1,2 +0,0 @@ -`fisher_test_oracle.es` is the source-side Spanish test set. -`test_noid.cleaned_0` and `test_noid.cleaned_1` are Salesky 2019's fluent English test-time references. \ No newline at end of file diff --git a/spaces/ECCV2022/PSG/OpenPSG/configs/psgformer/psgformer_r50_psg_inference.py b/spaces/ECCV2022/PSG/OpenPSG/configs/psgformer/psgformer_r50_psg_inference.py deleted file mode 100644 index 37bebaf42627dc17503986567b18fc6a9770f427..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/PSG/OpenPSG/configs/psgformer/psgformer_r50_psg_inference.py +++ /dev/null @@ -1,31 +0,0 @@ -_base_ = [ - './psgformer_r50_psg.py' -] - -img_norm_cfg = dict(mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True) -pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - # NOTE: Do not change the img to DC. 
- dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - - ], - ), -] - -data = dict( - test=dict( - pipeline=pipeline, - ), -) \ No newline at end of file diff --git a/spaces/EXPOSUREEE/Ai-Image-Enhancer/realesrgan/data/realesrgan_dataset.py b/spaces/EXPOSUREEE/Ai-Image-Enhancer/realesrgan/data/realesrgan_dataset.py deleted file mode 100644 index 4cf2d9e6583a6789b771679734ce55bb8a22e628..0000000000000000000000000000000000000000 --- a/spaces/EXPOSUREEE/Ai-Image-Enhancer/realesrgan/data/realesrgan_dataset.py +++ /dev/null @@ -1,192 +0,0 @@ -import cv2 -import math -import numpy as np -import os -import os.path as osp -import random -import time -import torch -from basicsr.data.degradations import circular_lowpass_kernel, random_mixed_kernels -from basicsr.data.transforms import augment -from basicsr.utils import FileClient, get_root_logger, imfrombytes, img2tensor -from basicsr.utils.registry import DATASET_REGISTRY -from torch.utils import data as data - - -@DATASET_REGISTRY.register() -class RealESRGANDataset(data.Dataset): - """Dataset used for Real-ESRGAN model: - Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data. - - It loads gt (Ground-Truth) images, and augments them. - It also generates blur kernels and sinc kernels for generating low-quality images. - Note that the low-quality images are processed in tensors on GPUS for faster processing. - - Args: - opt (dict): Config for train datasets. It contains the following keys: - dataroot_gt (str): Data root path for gt. - meta_info (str): Path for meta information file. - io_backend (dict): IO backend type and other kwarg. - use_hflip (bool): Use horizontal flips. - use_rot (bool): Use rotation (use vertical flip and transposing h and w for implementation). - Please see more options in the codes. 
- """ - - def __init__(self, opt): - super(RealESRGANDataset, self).__init__() - self.opt = opt - self.file_client = None - self.io_backend_opt = opt['io_backend'] - self.gt_folder = opt['dataroot_gt'] - - # file client (lmdb io backend) - if self.io_backend_opt['type'] == 'lmdb': - self.io_backend_opt['db_paths'] = [self.gt_folder] - self.io_backend_opt['client_keys'] = ['gt'] - if not self.gt_folder.endswith('.lmdb'): - raise ValueError(f"'dataroot_gt' should end with '.lmdb', but received {self.gt_folder}") - with open(osp.join(self.gt_folder, 'meta_info.txt')) as fin: - self.paths = [line.split('.')[0] for line in fin] - else: - # disk backend with meta_info - # Each line in the meta_info describes the relative path to an image - with open(self.opt['meta_info']) as fin: - paths = [line.strip().split(' ')[0] for line in fin] - self.paths = [os.path.join(self.gt_folder, v) for v in paths] - - # blur settings for the first degradation - self.blur_kernel_size = opt['blur_kernel_size'] - self.kernel_list = opt['kernel_list'] - self.kernel_prob = opt['kernel_prob'] # a list for each kernel probability - self.blur_sigma = opt['blur_sigma'] - self.betag_range = opt['betag_range'] # betag used in generalized Gaussian blur kernels - self.betap_range = opt['betap_range'] # betap used in plateau blur kernels - self.sinc_prob = opt['sinc_prob'] # the probability for sinc filters - - # blur settings for the second degradation - self.blur_kernel_size2 = opt['blur_kernel_size2'] - self.kernel_list2 = opt['kernel_list2'] - self.kernel_prob2 = opt['kernel_prob2'] - self.blur_sigma2 = opt['blur_sigma2'] - self.betag_range2 = opt['betag_range2'] - self.betap_range2 = opt['betap_range2'] - self.sinc_prob2 = opt['sinc_prob2'] - - # a final sinc filter - self.final_sinc_prob = opt['final_sinc_prob'] - - self.kernel_range = [2 * v + 1 for v in range(3, 11)] # kernel size ranges from 7 to 21 - # TODO: kernel range is now hard-coded, should be in the configure file - self.pulse_tensor = torch.zeros(21, 21).float() # convolving with pulse tensor brings no blurry effect - self.pulse_tensor[10, 10] = 1 - - def __getitem__(self, index): - if self.file_client is None: - self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt) - - # -------------------------------- Load gt images -------------------------------- # - # Shape: (h, w, c); channel order: BGR; image range: [0, 1], float32. - gt_path = self.paths[index] - # avoid errors caused by high latency in reading files - retry = 3 - while retry > 0: - try: - img_bytes = self.file_client.get(gt_path, 'gt') - except (IOError, OSError) as e: - logger = get_root_logger() - logger.warn(f'File client error: {e}, remaining retry times: {retry - 1}') - # change another file to read - index = random.randint(0, self.__len__()) - gt_path = self.paths[index] - time.sleep(1) # sleep 1s for occasional server congestion - else: - break - finally: - retry -= 1 - img_gt = imfrombytes(img_bytes, float32=True) - - # -------------------- Do augmentation for training: flip, rotation -------------------- # - img_gt = augment(img_gt, self.opt['use_hflip'], self.opt['use_rot']) - - # crop or pad to 400 - # TODO: 400 is hard-coded. 
You may change it accordingly - h, w = img_gt.shape[0:2] - crop_pad_size = 400 - # pad - if h < crop_pad_size or w < crop_pad_size: - pad_h = max(0, crop_pad_size - h) - pad_w = max(0, crop_pad_size - w) - img_gt = cv2.copyMakeBorder(img_gt, 0, pad_h, 0, pad_w, cv2.BORDER_REFLECT_101) - # crop - if img_gt.shape[0] > crop_pad_size or img_gt.shape[1] > crop_pad_size: - h, w = img_gt.shape[0:2] - # randomly choose top and left coordinates - top = random.randint(0, h - crop_pad_size) - left = random.randint(0, w - crop_pad_size) - img_gt = img_gt[top:top + crop_pad_size, left:left + crop_pad_size, ...] - - # ------------------------ Generate kernels (used in the first degradation) ------------------------ # - kernel_size = random.choice(self.kernel_range) - if np.random.uniform() < self.opt['sinc_prob']: - # this sinc filter setting is for kernels ranging from [7, 21] - if kernel_size < 13: - omega_c = np.random.uniform(np.pi / 3, np.pi) - else: - omega_c = np.random.uniform(np.pi / 5, np.pi) - kernel = circular_lowpass_kernel(omega_c, kernel_size, pad_to=False) - else: - kernel = random_mixed_kernels( - self.kernel_list, - self.kernel_prob, - kernel_size, - self.blur_sigma, - self.blur_sigma, [-math.pi, math.pi], - self.betag_range, - self.betap_range, - noise_range=None) - # pad kernel - pad_size = (21 - kernel_size) // 2 - kernel = np.pad(kernel, ((pad_size, pad_size), (pad_size, pad_size))) - - # ------------------------ Generate kernels (used in the second degradation) ------------------------ # - kernel_size = random.choice(self.kernel_range) - if np.random.uniform() < self.opt['sinc_prob2']: - if kernel_size < 13: - omega_c = np.random.uniform(np.pi / 3, np.pi) - else: - omega_c = np.random.uniform(np.pi / 5, np.pi) - kernel2 = circular_lowpass_kernel(omega_c, kernel_size, pad_to=False) - else: - kernel2 = random_mixed_kernels( - self.kernel_list2, - self.kernel_prob2, - kernel_size, - self.blur_sigma2, - self.blur_sigma2, [-math.pi, math.pi], - self.betag_range2, - self.betap_range2, - noise_range=None) - - # pad kernel - pad_size = (21 - kernel_size) // 2 - kernel2 = np.pad(kernel2, ((pad_size, pad_size), (pad_size, pad_size))) - - # ------------------------------------- the final sinc kernel ------------------------------------- # - if np.random.uniform() < self.opt['final_sinc_prob']: - kernel_size = random.choice(self.kernel_range) - omega_c = np.random.uniform(np.pi / 3, np.pi) - sinc_kernel = circular_lowpass_kernel(omega_c, kernel_size, pad_to=21) - sinc_kernel = torch.FloatTensor(sinc_kernel) - else: - sinc_kernel = self.pulse_tensor - - # BGR to RGB, HWC to CHW, numpy to tensor - img_gt = img2tensor([img_gt], bgr2rgb=True, float32=True)[0] - kernel = torch.FloatTensor(kernel) - kernel2 = torch.FloatTensor(kernel2) - - return_d = {'gt': img_gt, 'kernel1': kernel, 'kernel2': kernel2, 'sinc_kernel': sinc_kernel, 'gt_path': gt_path} - return return_d - - def __len__(self): - return len(self.paths) diff --git a/spaces/Eddycrack864/Applio-Inference/infer/lib/rmvpe.py b/spaces/Eddycrack864/Applio-Inference/infer/lib/rmvpe.py deleted file mode 100644 index 2a387ebe73c7e1dd8bb7ccad1ea9e0ea89848ece..0000000000000000000000000000000000000000 --- a/spaces/Eddycrack864/Applio-Inference/infer/lib/rmvpe.py +++ /dev/null @@ -1,717 +0,0 @@ -import pdb, os - -import numpy as np -import torch -try: - #Fix "Torch not compiled with CUDA enabled" - import intel_extension_for_pytorch as ipex # pylint: disable=import-error, unused-import - if torch.xpu.is_available(): - from infer.modules.ipex import 
ipex_init - ipex_init() -except Exception: - pass -import torch.nn as nn -import torch.nn.functional as F -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window - -import logging - -logger = logging.getLogger(__name__) - - -###stft codes from https://github.com/pseeth/torch-stft/blob/master/torch_stft/util.py -def window_sumsquare( - window, - n_frames, - hop_length=200, - win_length=800, - n_fft=800, - dtype=np.float32, - norm=None, -): - """ - # from librosa 0.6 - Compute the sum-square envelope of a window function at a given hop length. - This is used to estimate modulation effects induced by windowing - observations in short-time fourier transforms. - Parameters - ---------- - window : string, tuple, number, callable, or list-like - Window specification, as in `get_window` - n_frames : int > 0 - The number of analysis frames - hop_length : int > 0 - The number of samples to advance between frames - win_length : [optional] - The length of the window function. By default, this matches `n_fft`. - n_fft : int > 0 - The length of each analysis frame. - dtype : np.dtype - The data type of the output - Returns - ------- - wss : np.ndarray, shape=`(n_fft + hop_length * (n_frames - 1))` - The sum-squared envelope of the window function - """ - if win_length is None: - win_length = n_fft - - n = n_fft + hop_length * (n_frames - 1) - x = np.zeros(n, dtype=dtype) - - # Compute the squared window at the desired length - win_sq = get_window(window, win_length, fftbins=True) - win_sq = normalize(win_sq, norm=norm) ** 2 - win_sq = pad_center(win_sq, n_fft) - - # Fill the envelope - for i in range(n_frames): - sample = i * hop_length - x[sample : min(n, sample + n_fft)] += win_sq[: max(0, min(n_fft, n - sample))] - return x - - -class STFT(torch.nn.Module): - def __init__( - self, filter_length=1024, hop_length=512, win_length=None, window="hann" - ): - """ - This module implements an STFT using 1D convolution and 1D transpose convolutions. - This is a bit tricky so there are some cases that probably won't work as working - out the same sizes before and after in all overlap add setups is tough. Right now, - this code should work with hop lengths that are half the filter length (50% overlap - between frames). - - Keyword Arguments: - filter_length {int} -- Length of filters used (default: {1024}) - hop_length {int} -- Hop length of STFT (restrict to 50% overlap between frames) (default: {512}) - win_length {[type]} -- Length of the window function applied to each frame (if not specified, it - equals the filter length). 
(default: {None}) - window {str} -- Type of window to use (options are bartlett, hann, hamming, blackman, blackmanharris) - (default: {'hann'}) - """ - super(STFT, self).__init__() - self.filter_length = filter_length - self.hop_length = hop_length - self.win_length = win_length if win_length else filter_length - self.window = window - self.forward_transform = None - self.pad_amount = int(self.filter_length / 2) - scale = self.filter_length / self.hop_length - fourier_basis = np.fft.fft(np.eye(self.filter_length)) - - cutoff = int((self.filter_length / 2 + 1)) - fourier_basis = np.vstack( - [np.real(fourier_basis[:cutoff, :]), np.imag(fourier_basis[:cutoff, :])] - ) - forward_basis = torch.FloatTensor(fourier_basis[:, None, :]) - inverse_basis = torch.FloatTensor( - np.linalg.pinv(scale * fourier_basis).T[:, None, :] - ) - - assert filter_length >= self.win_length - # get window and zero center pad it to filter_length - fft_window = get_window(window, self.win_length, fftbins=True) - fft_window = pad_center(fft_window, size=filter_length) - fft_window = torch.from_numpy(fft_window).float() - - # window the bases - forward_basis *= fft_window - inverse_basis *= fft_window - - self.register_buffer("forward_basis", forward_basis.float()) - self.register_buffer("inverse_basis", inverse_basis.float()) - - def transform(self, input_data): - """Take input data (audio) to STFT domain. - - Arguments: - input_data {tensor} -- Tensor of floats, with shape (num_batch, num_samples) - - Returns: - magnitude {tensor} -- Magnitude of STFT with shape (num_batch, - num_frequencies, num_frames) - phase {tensor} -- Phase of STFT with shape (num_batch, - num_frequencies, num_frames) - """ - num_batches = input_data.shape[0] - num_samples = input_data.shape[-1] - - self.num_samples = num_samples - - # similar to librosa, reflect-pad the input - input_data = input_data.view(num_batches, 1, num_samples) - # print(1234,input_data.shape) - input_data = F.pad( - input_data.unsqueeze(1), - (self.pad_amount, self.pad_amount, 0, 0, 0, 0), - mode="reflect", - ).squeeze(1) - # print(2333,input_data.shape,self.forward_basis.shape,self.hop_length) - # pdb.set_trace() - forward_transform = F.conv1d( - input_data, self.forward_basis, stride=self.hop_length, padding=0 - ) - - cutoff = int((self.filter_length / 2) + 1) - real_part = forward_transform[:, :cutoff, :] - imag_part = forward_transform[:, cutoff:, :] - - magnitude = torch.sqrt(real_part**2 + imag_part**2) - # phase = torch.atan2(imag_part.data, real_part.data) - - return magnitude # , phase - - def inverse(self, magnitude, phase): - """Call the inverse STFT (iSTFT), given magnitude and phase tensors produced - by the ```transform``` function. - - Arguments: - magnitude {tensor} -- Magnitude of STFT with shape (num_batch, - num_frequencies, num_frames) - phase {tensor} -- Phase of STFT with shape (num_batch, - num_frequencies, num_frames) - - Returns: - inverse_transform {tensor} -- Reconstructed audio given magnitude and phase. 
Of - shape (num_batch, num_samples) - """ - recombine_magnitude_phase = torch.cat( - [magnitude * torch.cos(phase), magnitude * torch.sin(phase)], dim=1 - ) - - inverse_transform = F.conv_transpose1d( - recombine_magnitude_phase, - self.inverse_basis, - stride=self.hop_length, - padding=0, - ) - - if self.window is not None: - window_sum = window_sumsquare( - self.window, - magnitude.size(-1), - hop_length=self.hop_length, - win_length=self.win_length, - n_fft=self.filter_length, - dtype=np.float32, - ) - # remove modulation effects - approx_nonzero_indices = torch.from_numpy( - np.where(window_sum > tiny(window_sum))[0] - ) - window_sum = torch.from_numpy(window_sum).to(inverse_transform.device) - inverse_transform[:, :, approx_nonzero_indices] /= window_sum[ - approx_nonzero_indices - ] - - # scale by hop ratio - inverse_transform *= float(self.filter_length) / self.hop_length - - inverse_transform = inverse_transform[..., self.pad_amount :] - inverse_transform = inverse_transform[..., : self.num_samples] - inverse_transform = inverse_transform.squeeze(1) - - return inverse_transform - - def forward(self, input_data): - """Take input data (audio) to STFT domain and then back to audio. - - Arguments: - input_data {tensor} -- Tensor of floats, with shape (num_batch, num_samples) - - Returns: - reconstruction {tensor} -- Reconstructed audio given magnitude and phase. Of - shape (num_batch, num_samples) - """ - self.magnitude, self.phase = self.transform(input_data) - reconstruction = self.inverse(self.magnitude, self.phase) - return reconstruction - - -from time import time as ttime - - -class BiGRU(nn.Module): - def __init__(self, input_features, hidden_features, num_layers): - super(BiGRU, self).__init__() - self.gru = nn.GRU( - input_features, - hidden_features, - num_layers=num_layers, - batch_first=True, - bidirectional=True, - ) - - def forward(self, x): - return self.gru(x)[0] - - -class ConvBlockRes(nn.Module): - def __init__(self, in_channels, out_channels, momentum=0.01): - super(ConvBlockRes, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - nn.Conv2d( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - if in_channels != out_channels: - self.shortcut = nn.Conv2d(in_channels, out_channels, (1, 1)) - self.is_shortcut = True - else: - self.is_shortcut = False - - def forward(self, x): - if self.is_shortcut: - return self.conv(x) + self.shortcut(x) - else: - return self.conv(x) + x - - -class Encoder(nn.Module): - def __init__( - self, - in_channels, - in_size, - n_encoders, - kernel_size, - n_blocks, - out_channels=16, - momentum=0.01, - ): - super(Encoder, self).__init__() - self.n_encoders = n_encoders - self.bn = nn.BatchNorm2d(in_channels, momentum=momentum) - self.layers = nn.ModuleList() - self.latent_channels = [] - for i in range(self.n_encoders): - self.layers.append( - ResEncoderBlock( - in_channels, out_channels, kernel_size, n_blocks, momentum=momentum - ) - ) - self.latent_channels.append([out_channels, in_size]) - in_channels = out_channels - out_channels *= 2 - in_size //= 2 - self.out_size = in_size - self.out_channel = out_channels - - def forward(self, x): - concat_tensors = [] - x = self.bn(x) - for i in 
range(self.n_encoders): - _, x = self.layers[i](x) - concat_tensors.append(_) - return x, concat_tensors - - -class ResEncoderBlock(nn.Module): - def __init__( - self, in_channels, out_channels, kernel_size, n_blocks=1, momentum=0.01 - ): - super(ResEncoderBlock, self).__init__() - self.n_blocks = n_blocks - self.conv = nn.ModuleList() - self.conv.append(ConvBlockRes(in_channels, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv.append(ConvBlockRes(out_channels, out_channels, momentum)) - self.kernel_size = kernel_size - if self.kernel_size is not None: - self.pool = nn.AvgPool2d(kernel_size=kernel_size) - - def forward(self, x): - for i in range(self.n_blocks): - x = self.conv[i](x) - if self.kernel_size is not None: - return x, self.pool(x) - else: - return x - - -class Intermediate(nn.Module): # - def __init__(self, in_channels, out_channels, n_inters, n_blocks, momentum=0.01): - super(Intermediate, self).__init__() - self.n_inters = n_inters - self.layers = nn.ModuleList() - self.layers.append( - ResEncoderBlock(in_channels, out_channels, None, n_blocks, momentum) - ) - for i in range(self.n_inters - 1): - self.layers.append( - ResEncoderBlock(out_channels, out_channels, None, n_blocks, momentum) - ) - - def forward(self, x): - for i in range(self.n_inters): - x = self.layers[i](x) - return x - - -class ResDecoderBlock(nn.Module): - def __init__(self, in_channels, out_channels, stride, n_blocks=1, momentum=0.01): - super(ResDecoderBlock, self).__init__() - out_padding = (0, 1) if stride == (1, 2) else (1, 1) - self.n_blocks = n_blocks - self.conv1 = nn.Sequential( - nn.ConvTranspose2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=stride, - padding=(1, 1), - output_padding=out_padding, - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - self.conv2 = nn.ModuleList() - self.conv2.append(ConvBlockRes(out_channels * 2, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv2.append(ConvBlockRes(out_channels, out_channels, momentum)) - - def forward(self, x, concat_tensor): - x = self.conv1(x) - x = torch.cat((x, concat_tensor), dim=1) - for i in range(self.n_blocks): - x = self.conv2[i](x) - return x - - -class Decoder(nn.Module): - def __init__(self, in_channels, n_decoders, stride, n_blocks, momentum=0.01): - super(Decoder, self).__init__() - self.layers = nn.ModuleList() - self.n_decoders = n_decoders - for i in range(self.n_decoders): - out_channels = in_channels // 2 - self.layers.append( - ResDecoderBlock(in_channels, out_channels, stride, n_blocks, momentum) - ) - in_channels = out_channels - - def forward(self, x, concat_tensors): - for i in range(self.n_decoders): - x = self.layers[i](x, concat_tensors[-1 - i]) - return x - - -class DeepUnet(nn.Module): - def __init__( - self, - kernel_size, - n_blocks, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(DeepUnet, self).__init__() - self.encoder = Encoder( - in_channels, 128, en_de_layers, kernel_size, n_blocks, en_out_channels - ) - self.intermediate = Intermediate( - self.encoder.out_channel // 2, - self.encoder.out_channel, - inter_layers, - n_blocks, - ) - self.decoder = Decoder( - self.encoder.out_channel, en_de_layers, kernel_size, n_blocks - ) - - def forward(self, x): - x, concat_tensors = self.encoder(x) - x = self.intermediate(x) - x = self.decoder(x, concat_tensors) - return x - - -class E2E(nn.Module): - def __init__( - self, - n_blocks, - n_gru, - kernel_size, - 
en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(E2E, self).__init__() - self.unet = DeepUnet( - kernel_size, - n_blocks, - en_de_layers, - inter_layers, - in_channels, - en_out_channels, - ) - self.cnn = nn.Conv2d(en_out_channels, 3, (3, 3), padding=(1, 1)) - if n_gru: - self.fc = nn.Sequential( - BiGRU(3 * 128, 256, n_gru), - nn.Linear(512, 360), - nn.Dropout(0.25), - nn.Sigmoid(), - ) - else: - self.fc = nn.Sequential( - nn.Linear(3 * nn.N_MELS, nn.N_CLASS), nn.Dropout(0.25), nn.Sigmoid() - ) - - def forward(self, mel): - # print(mel.shape) - mel = mel.transpose(-1, -2).unsqueeze(1) - x = self.cnn(self.unet(mel)).transpose(1, 2).flatten(-2) - x = self.fc(x) - # print(x.shape) - return x - - -from librosa.filters import mel - - -class MelSpectrogram(torch.nn.Module): - def __init__( - self, - is_half, - n_mel_channels, - sampling_rate, - win_length, - hop_length, - n_fft=None, - mel_fmin=0, - mel_fmax=None, - clamp=1e-5, - ): - super().__init__() - n_fft = win_length if n_fft is None else n_fft - self.hann_window = {} - mel_basis = mel( - sr=sampling_rate, - n_fft=n_fft, - n_mels=n_mel_channels, - fmin=mel_fmin, - fmax=mel_fmax, - htk=True, - ) - mel_basis = torch.from_numpy(mel_basis).float() - self.register_buffer("mel_basis", mel_basis) - self.n_fft = win_length if n_fft is None else n_fft - self.hop_length = hop_length - self.win_length = win_length - self.sampling_rate = sampling_rate - self.n_mel_channels = n_mel_channels - self.clamp = clamp - self.is_half = is_half - - def forward(self, audio, keyshift=0, speed=1, center=True): - factor = 2 ** (keyshift / 12) - n_fft_new = int(np.round(self.n_fft * factor)) - win_length_new = int(np.round(self.win_length * factor)) - hop_length_new = int(np.round(self.hop_length * speed)) - keyshift_key = str(keyshift) + "_" + str(audio.device) - if keyshift_key not in self.hann_window: - self.hann_window[keyshift_key] = torch.hann_window(win_length_new).to( - # "cpu"if(audio.device.type=="privateuseone") else audio.device - audio.device - ) - # fft = torch.stft(#doesn't support pytorch_dml - # # audio.cpu() if(audio.device.type=="privateuseone")else audio, - # audio, - # n_fft=n_fft_new, - # hop_length=hop_length_new, - # win_length=win_length_new, - # window=self.hann_window[keyshift_key], - # center=center, - # return_complex=True, - # ) - # magnitude = torch.sqrt(fft.real.pow(2) + fft.imag.pow(2)) - # print(1111111111) - # print(222222222222222,audio.device,self.is_half) - if hasattr(self, "stft") == False: - # print(n_fft_new,hop_length_new,win_length_new,audio.shape) - self.stft = STFT( - filter_length=n_fft_new, - hop_length=hop_length_new, - win_length=win_length_new, - window="hann", - ).to(audio.device) - magnitude = self.stft.transform(audio) # phase - # if (audio.device.type == "privateuseone"): - # magnitude=magnitude.to(audio.device) - if keyshift != 0: - size = self.n_fft // 2 + 1 - resize = magnitude.size(1) - if resize < size: - magnitude = F.pad(magnitude, (0, 0, 0, size - resize)) - magnitude = magnitude[:, :size, :] * self.win_length / win_length_new - mel_output = torch.matmul(self.mel_basis, magnitude) - if self.is_half == True: - mel_output = mel_output.half() - log_mel_spec = torch.log(torch.clamp(mel_output, min=self.clamp)) - # print(log_mel_spec.device.type) - return log_mel_spec - - -class RMVPE: - def __init__(self, model_path, is_half, device=None): - self.resample_kernel = {} - self.resample_kernel = {} - self.is_half = is_half - if device is None: - device = "cuda" if 
torch.cuda.is_available() else "cpu" - self.device = device - self.mel_extractor = MelSpectrogram( - is_half, 128, 16000, 1024, 160, None, 30, 8000 - ).to(device) - if "privateuseone" in str(device): - import onnxruntime as ort - - ort_session = ort.InferenceSession( - "%s/rmvpe.onnx" % os.environ["rmvpe_root"], - providers=["DmlExecutionProvider"], - ) - self.model = ort_session - else: - model = E2E(4, 1, (2, 2)) - ckpt = torch.load(model_path, map_location="cpu") - model.load_state_dict(ckpt) - model.eval() - if is_half == True: - model = model.half() - self.model = model - self.model = self.model.to(device) - cents_mapping = 20 * np.arange(360) + 1997.3794084376191 - self.cents_mapping = np.pad(cents_mapping, (4, 4)) # 368 - - def mel2hidden(self, mel): - with torch.no_grad(): - n_frames = mel.shape[-1] - mel = F.pad( - mel, (0, 32 * ((n_frames - 1) // 32 + 1) - n_frames), mode="constant" - ) - if "privateuseone" in str(self.device): - onnx_input_name = self.model.get_inputs()[0].name - onnx_outputs_names = self.model.get_outputs()[0].name - hidden = self.model.run( - [onnx_outputs_names], - input_feed={onnx_input_name: mel.cpu().numpy()}, - )[0] - else: - hidden = self.model(mel) - return hidden[:, :n_frames] - - def decode(self, hidden, thred=0.03): - cents_pred = self.to_local_average_cents(hidden, thred=thred) - f0 = 10 * (2 ** (cents_pred / 1200)) - f0[f0 == 10] = 0 - # f0 = np.array([10 * (2 ** (cent_pred / 1200)) if cent_pred else 0 for cent_pred in cents_pred]) - return f0 - - def infer_from_audio(self, audio, thred=0.03): - # torch.cuda.synchronize() - t0 = ttime() - mel = self.mel_extractor( - torch.from_numpy(audio).float().to(self.device).unsqueeze(0), center=True - ) - # print(123123123,mel.device.type) - # torch.cuda.synchronize() - t1 = ttime() - hidden = self.mel2hidden(mel) - # torch.cuda.synchronize() - t2 = ttime() - # print(234234,hidden.device.type) - if "privateuseone" not in str(self.device): - hidden = hidden.squeeze(0).cpu().numpy() - else: - hidden = hidden[0] - if self.is_half == True: - hidden = hidden.astype("float32") - - f0 = self.decode(hidden, thred=thred) - # torch.cuda.synchronize() - t3 = ttime() - # print("hmvpe:%s\t%s\t%s\t%s"%(t1-t0,t2-t1,t3-t2,t3-t0)) - return f0 - - def infer_from_audio_with_pitch(self, audio, thred=0.03, f0_min=50, f0_max=1100): - audio = torch.from_numpy(audio).float().to(self.device).unsqueeze(0) - mel = self.mel_extractor(audio, center=True) - hidden = self.mel2hidden(mel) - hidden = hidden.squeeze(0).cpu().numpy() - if self.is_half == True: - hidden = hidden.astype("float32") - f0 = self.decode(hidden, thred=thred) - f0[(f0 < f0_min) | (f0 > f0_max)] = 0 - return f0 - - def to_local_average_cents(self, salience, thred=0.05): - # t0 = ttime() - center = np.argmax(salience, axis=1) # 帧长#index - salience = np.pad(salience, ((0, 0), (4, 4))) # 帧长,368 - # t1 = ttime() - center += 4 - todo_salience = [] - todo_cents_mapping = [] - starts = center - 4 - ends = center + 5 - for idx in range(salience.shape[0]): - todo_salience.append(salience[:, starts[idx] : ends[idx]][idx]) - todo_cents_mapping.append(self.cents_mapping[starts[idx] : ends[idx]]) - # t2 = ttime() - todo_salience = np.array(todo_salience) # 帧长,9 - todo_cents_mapping = np.array(todo_cents_mapping) # 帧长,9 - product_sum = np.sum(todo_salience * todo_cents_mapping, 1) - weight_sum = np.sum(todo_salience, 1) # 帧长 - devided = product_sum / weight_sum # 帧长 - # t3 = ttime() - maxx = np.max(salience, axis=1) # 帧长 - devided[maxx <= thred] = 0 - # t4 = ttime() - # 
print("decode:%s\t%s\t%s\t%s" % (t1 - t0, t2 - t1, t3 - t2, t4 - t3)) - return devided - - -if __name__ == "__main__": - import librosa - import soundfile as sf - - audio, sampling_rate = sf.read(r"C:\Users\liujing04\Desktop\Z\冬之花clip1.wav") - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - audio_bak = audio.copy() - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - model_path = r"D:\BaiduNetdiskDownload\RVC-beta-v2-0727AMD_realtime\rmvpe.pt" - thred = 0.03 # 0.01 - device = "cuda" if torch.cuda.is_available() else "cpu" - rmvpe = RMVPE(model_path, is_half=False, device=device) - t0 = ttime() - f0 = rmvpe.infer_from_audio(audio, thred=thred) - # f0 = rmvpe.infer_from_audio(audio, thred=thred) - # f0 = rmvpe.infer_from_audio(audio, thred=thred) - # f0 = rmvpe.infer_from_audio(audio, thred=thred) - # f0 = rmvpe.infer_from_audio(audio, thred=thred) - t1 = ttime() - logger.info("%s %.2f", f0.shape, t1 - t0) diff --git a/spaces/Eddycrack864/Applio-Inference/tools/infer/infer-pm-index256.py b/spaces/Eddycrack864/Applio-Inference/tools/infer/infer-pm-index256.py deleted file mode 100644 index da5430421f1de17a57379aefbe7919dd555b2f50..0000000000000000000000000000000000000000 --- a/spaces/Eddycrack864/Applio-Inference/tools/infer/infer-pm-index256.py +++ /dev/null @@ -1,202 +0,0 @@ -""" - -对源特征进行检索 -""" -import os -import logging - -logger = logging.getLogger(__name__) - -import parselmouth -import torch - -os.environ["CUDA_VISIBLE_DEVICES"] = "0" -# import torchcrepe -from time import time as ttime - -# import pyworld -import librosa -import numpy as np -import soundfile as sf -import torch.nn.functional as F -from fairseq import checkpoint_utils - -# from models import SynthesizerTrn256#hifigan_nonsf -# from lib.infer_pack.models import SynthesizerTrn256NSF as SynthesizerTrn256#hifigan_nsf -from infer.lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid as SynthesizerTrn256, -) # hifigan_nsf -from scipy.io import wavfile - -# from lib.infer_pack.models import SynthesizerTrnMs256NSFsid_sim as SynthesizerTrn256#hifigan_nsf -# from models import SynthesizerTrn256NSFsim as SynthesizerTrn256#hifigan_nsf -# from models import SynthesizerTrn256NSFsimFlow as SynthesizerTrn256#hifigan_nsf - - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -model_path = r"E:\codes\py39\vits_vc_gpu_train\assets\hubert\hubert_base.pt" # -logger.info("Load model(s) from {}".format(model_path)) -models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [model_path], - suffix="", -) -model = models[0] -model = model.to(device) -model = model.half() -model.eval() - -# net_g = SynthesizerTrn256(1025,32,192,192,768,2,6,3,0.1,"1", [3,7,11],[[1,3,5], [1,3,5], [1,3,5]],[10,10,2,2],512,[16,16,4,4],183,256,is_half=True)#hifigan#512#256 -# net_g = SynthesizerTrn256(1025,32,192,192,768,2,6,3,0.1,"1", [3,7,11],[[1,3,5], [1,3,5], [1,3,5]],[10,10,2,2],512,[16,16,4,4],109,256,is_half=True)#hifigan#512#256 -net_g = SynthesizerTrn256( - 1025, - 32, - 192, - 192, - 768, - 2, - 6, - 3, - 0, - "1", - [3, 7, 11], - [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - [10, 10, 2, 2], - 512, - [16, 16, 4, 4], - 183, - 256, - is_half=True, -) # hifigan#512#256#no_dropout -# net_g = SynthesizerTrn256(1025,32,192,192,768,2,3,3,0.1,"1", [3,7,11],[[1,3,5], [1,3,5], [1,3,5]],[10,10,2,2],512,[16,16,4,4],0)#ts3 -# net_g = SynthesizerTrn256(1025,32,192,192,768,2,6,3,0.1,"1", [3,7,11],[[1,3,5], [1,3,5], 
[1,3,5]],[10,10,2],512,[16,16,4],0)#hifigan-ps-sr -# -# net_g = SynthesizerTrn(1025, 32, 192, 192, 768, 2, 6, 3, 0.1, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [5,5], 512, [15,15], 0)#ms -# net_g = SynthesizerTrn(1025, 32, 192, 192, 768, 2, 6, 3, 0.1, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10,10], 512, [16,16], 0)#idwt2 - -# weights=torch.load("infer/ft-mi_1k-noD.pt") -# weights=torch.load("infer/ft-mi-freeze-vocoder-flow-enc_q_1k.pt") -# weights=torch.load("infer/ft-mi-freeze-vocoder_true_1k.pt") -# weights=torch.load("infer/ft-mi-sim1k.pt") -weights = torch.load("infer/ft-mi-no_opt-no_dropout.pt") -logger.debug(net_g.load_state_dict(weights, strict=True)) - -net_g.eval().to(device) -net_g.half() - - -def get_f0(x, p_len, f0_up_key=0): - time_step = 160 / 16000 * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - f0 = ( - parselmouth.Sound(x, 16000) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant") - f0 *= pow(2, f0_up_key / 12) - f0bak = f0.copy() - - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - # f0_mel[f0_mel > 188] = 188 - f0_coarse = np.rint(f0_mel).astype(np.int32) - return f0_coarse, f0bak - - -import faiss - -index = faiss.read_index("infer/added_IVF512_Flat_mi_baseline_src_feat.index") -big_npy = np.load("infer/big_src_feature_mi.npy") -ta0 = ta1 = ta2 = 0 -for idx, name in enumerate( - [ - "冬之花clip1.wav", - ] -): ## - wav_path = "todo-songs/%s" % name # - f0_up_key = -2 # - audio, sampling_rate = sf.read(wav_path) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - - feats = torch.from_numpy(audio).float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).fill_(False) - inputs = { - "source": feats.half().to(device), - "padding_mask": padding_mask.to(device), - "output_layer": 9, # layer 9 - } - if torch.cuda.is_available(): - torch.cuda.synchronize() - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) - - ####索引优化 - npy = feats[0].cpu().numpy().astype("float32") - D, I = index.search(npy, 1) - feats = ( - torch.from_numpy(big_npy[I.squeeze()].astype("float16")).unsqueeze(0).to(device) - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - if torch.cuda.is_available(): - torch.cuda.synchronize() - t1 = ttime() - # p_len = min(feats.shape[1],10000,pitch.shape[0])#太大了爆显存 - p_len = min(feats.shape[1], 10000) # - pitch, pitchf = get_f0(audio, p_len, f0_up_key) - p_len = min(feats.shape[1], 10000, pitch.shape[0]) # 太大了爆显存 - if torch.cuda.is_available(): - torch.cuda.synchronize() - t2 = ttime() - feats = feats[:, :p_len, :] - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - p_len = torch.LongTensor([p_len]).to(device) - pitch = torch.LongTensor(pitch).unsqueeze(0).to(device) - sid = 
torch.LongTensor([0]).to(device) - pitchf = torch.FloatTensor(pitchf).unsqueeze(0).to(device) - with torch.no_grad(): - audio = ( - net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0] - .data.cpu() - .float() - .numpy() - ) # nsf - if torch.cuda.is_available(): - torch.cuda.synchronize() - t3 = ttime() - ta0 += t1 - t0 - ta1 += t2 - t1 - ta2 += t3 - t2 - # wavfile.write("ft-mi_1k-index256-noD-%s.wav"%name, 40000, audio)## - # wavfile.write("ft-mi-freeze-vocoder-flow-enc_q_1k-%s.wav"%name, 40000, audio)## - # wavfile.write("ft-mi-sim1k-%s.wav"%name, 40000, audio)## - wavfile.write("ft-mi-no_opt-no_dropout-%s.wav" % name, 40000, audio) ## - - -logger.debug("%.2fs %.2fs %.2fs", ta0, ta1, ta2) # diff --git a/spaces/EronSamez/RVC_HFmeu/audioEffects.py b/spaces/EronSamez/RVC_HFmeu/audioEffects.py deleted file mode 100644 index 1830b19e1a5e3ec1f431388d8444ef3a2c9ed91f..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/audioEffects.py +++ /dev/null @@ -1,37 +0,0 @@ -from pedalboard import Pedalboard, Compressor, Reverb, NoiseGate -from pedalboard.io import AudioFile -import sys -import os -now_dir = os.getcwd() -sys.path.append(now_dir) -from i18n import I18nAuto -i18n = I18nAuto() -from pydub import AudioSegment -import numpy as np -import soundfile as sf -from pydub.playback import play - -def process_audio(input_path, output_path, reverb_enabled, compressor_enabled, noise_gate_enabled, ): - print(reverb_enabled) - print(compressor_enabled) - print(noise_gate_enabled) - effects = [] - if reverb_enabled: - effects.append(Reverb(room_size=0.01)) - if compressor_enabled: - effects.append(Compressor(threshold_db=-10, ratio=25)) - if noise_gate_enabled: - effects.append(NoiseGate(threshold_db=-16, ratio=1.5, release_ms=250)) - - board = Pedalboard(effects) - - with AudioFile(input_path) as f: - with AudioFile(output_path, 'w', f.samplerate, f.num_channels) as o: - while f.tell() < f.frames: - chunk = f.read(f.samplerate) - effected = board(chunk, f.samplerate, reset=False) - o.write(effected) - - result = i18n("Processed audio saved at: ") + output_path - print(result) - return output_path \ No newline at end of file diff --git a/spaces/EronSamez/RVC_HFmeu/demucs/raw.py b/spaces/EronSamez/RVC_HFmeu/demucs/raw.py deleted file mode 100644 index d4941ad2d7ed858f490db441f5b46b12bd61ad78..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/demucs/raw.py +++ /dev/null @@ -1,173 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import os -from collections import defaultdict, namedtuple -from pathlib import Path - -import musdb -import numpy as np -import torch as th -import tqdm -from torch.utils.data import DataLoader - -from .audio import AudioFile - -ChunkInfo = namedtuple("ChunkInfo", ["file_index", "offset", "local_index"]) - - -class Rawset: - """ - Dataset of raw, normalized, float32 audio files - """ - def __init__(self, path, samples=None, stride=None, channels=2, streams=None): - self.path = Path(path) - self.channels = channels - self.samples = samples - if stride is None: - stride = samples if samples is not None else 0 - self.stride = stride - entries = defaultdict(list) - for root, folders, files in os.walk(self.path, followlinks=True): - folders.sort() - files.sort() - for file in files: - if file.endswith(".raw"): - path = Path(root) / file - name, stream = path.stem.rsplit('.', 1) - entries[(path.parent.relative_to(self.path), name)].append(int(stream)) - - self._entries = list(entries.keys()) - - sizes = [] - self._lengths = [] - ref_streams = sorted(entries[self._entries[0]]) - assert ref_streams == list(range(len(ref_streams))) - if streams is None: - self.streams = ref_streams - else: - self.streams = streams - for entry in sorted(entries.keys()): - streams = entries[entry] - assert sorted(streams) == ref_streams - file = self._path(*entry) - length = file.stat().st_size // (4 * channels) - if samples is None: - sizes.append(1) - else: - if length < samples: - self._entries.remove(entry) - continue - sizes.append((length - samples) // stride + 1) - self._lengths.append(length) - if not sizes: - raise ValueError(f"Empty dataset {self.path}") - self._cumulative_sizes = np.cumsum(sizes) - self._sizes = sizes - - def __len__(self): - return self._cumulative_sizes[-1] - - @property - def total_length(self): - return sum(self._lengths) - - def chunk_info(self, index): - file_index = np.searchsorted(self._cumulative_sizes, index, side='right') - if file_index == 0: - local_index = index - else: - local_index = index - self._cumulative_sizes[file_index - 1] - return ChunkInfo(offset=local_index * self.stride, - file_index=file_index, - local_index=local_index) - - def _path(self, folder, name, stream=0): - return self.path / folder / (name + f'.{stream}.raw') - - def __getitem__(self, index): - chunk = self.chunk_info(index) - entry = self._entries[chunk.file_index] - - length = self.samples or self._lengths[chunk.file_index] - streams = [] - to_read = length * self.channels * 4 - for stream_index, stream in enumerate(self.streams): - offset = chunk.offset * 4 * self.channels - file = open(self._path(*entry, stream=stream), 'rb') - file.seek(offset) - content = file.read(to_read) - assert len(content) == to_read - content = np.frombuffer(content, dtype=np.float32) - content = content.copy() # make writable - streams.append(th.from_numpy(content).view(length, self.channels).t()) - return th.stack(streams, dim=0) - - def name(self, index): - chunk = self.chunk_info(index) - folder, name = self._entries[chunk.file_index] - return folder / name - - -class MusDBSet: - def __init__(self, mus, streams=slice(None), samplerate=44100, channels=2): - self.mus = mus - self.streams = streams - self.samplerate = samplerate - self.channels = channels - - def __len__(self): - return len(self.mus.tracks) - - def __getitem__(self, index): - track = self.mus.tracks[index] - return (track.name, AudioFile(track.path).read(channels=self.channels, - seek_time=0, - streams=self.streams, - 
samplerate=self.samplerate)) - - -def build_raw(mus, destination, normalize, workers, samplerate, channels): - destination.mkdir(parents=True, exist_ok=True) - loader = DataLoader(MusDBSet(mus, channels=channels, samplerate=samplerate), - batch_size=1, - num_workers=workers, - collate_fn=lambda x: x[0]) - for name, streams in tqdm.tqdm(loader): - if normalize: - ref = streams[0].mean(dim=0) # use mono mixture as reference - streams = (streams - ref.mean()) / ref.std() - for index, stream in enumerate(streams): - open(destination / (name + f'.{index}.raw'), "wb").write(stream.t().numpy().tobytes()) - - -def main(): - parser = argparse.ArgumentParser('rawset') - parser.add_argument('--workers', type=int, default=10) - parser.add_argument('--samplerate', type=int, default=44100) - parser.add_argument('--channels', type=int, default=2) - parser.add_argument('musdb', type=Path) - parser.add_argument('destination', type=Path) - - args = parser.parse_args() - - build_raw(musdb.DB(root=args.musdb, subsets=["train"], split="train"), - args.destination / "train", - normalize=True, - channels=args.channels, - samplerate=args.samplerate, - workers=args.workers) - build_raw(musdb.DB(root=args.musdb, subsets=["train"], split="valid"), - args.destination / "valid", - normalize=True, - samplerate=args.samplerate, - channels=args.channels, - workers=args.workers) - - -if __name__ == "__main__": - main() diff --git a/spaces/EsoCode/text-generation-webui/docs/LoRA.md b/spaces/EsoCode/text-generation-webui/docs/LoRA.md deleted file mode 100644 index f1504d1096c44227e8c510fce4bcaa6254849cb0..0000000000000000000000000000000000000000 --- a/spaces/EsoCode/text-generation-webui/docs/LoRA.md +++ /dev/null @@ -1,71 +0,0 @@ -# LoRA - -LoRA (Low-Rank Adaptation) is an extremely powerful method for customizing a base model by training only a small number of parameters. They can be attached to models at runtime. - -For instance, a 50mb LoRA can teach LLaMA an entire new language, a given writing style, or give it instruction-following or chat abilities. - -This is the current state of LoRA integration in the web UI: - -|Loader | Status | -|--------|------| -| Transformers | Full support in 16-bit, `--load-in-8bit`, `--load-in-4bit`, and CPU modes. | -| ExLlama | Single LoRA support. Fast to remove the LoRA afterwards. | -| AutoGPTQ | Single LoRA support. Removing the LoRA requires reloading the entire model.| -| GPTQ-for-LLaMa | Full support with the [monkey patch](https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md#using-loras-with-gptq-for-llama). | - -## Downloading a LoRA - -The download script can be used. For instance: - -``` -python download-model.py tloen/alpaca-lora-7b -``` - -The files will be saved to `loras/tloen_alpaca-lora-7b`. - -## Using the LoRA - -The `--lora` command-line flag can be used. Examples: - -``` -python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b -python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --load-in-8bit -python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --load-in-4bit -python server.py --model llama-7b-hf --lora tloen_alpaca-lora-7b --cpu -``` - -Instead of using the `--lora` command-line flag, you can also select the LoRA in the "Parameters" tab of the interface. - -## Prompt -For the Alpaca LoRA in particular, the prompt must be formatted like this: - -``` -Below is an instruction that describes a task. Write a response that appropriately completes the request. 
-### Instruction: -Write a Python script that generates text using the transformers library. -### Response: -``` - -Sample output: - -``` -Below is an instruction that describes a task. Write a response that appropriately completes the request. -### Instruction: -Write a Python script that generates text using the transformers library. -### Response: - -import transformers -from transformers import AutoTokenizer, AutoModelForCausalLM -tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") -model = AutoModelForCausalLM.from_pretrained("bert-base-uncased") -texts = ["Hello world", "How are you"] -for sentence in texts: -sentence = tokenizer(sentence) -print(f"Generated {len(sentence)} tokens from '{sentence}'") -output = model(sentences=sentence).predict() -print(f"Predicted {len(output)} tokens for '{sentence}':\n{output}") -``` - -## Training a LoRA - -You can train your own LoRAs from the `Training` tab. See [Training LoRAs](Training-LoRAs.md) for details. diff --git a/spaces/EuroPython2022/BayesCap/README.md b/spaces/EuroPython2022/BayesCap/README.md deleted file mode 100644 index 6a00bdbc6f5c8d9a16d35c2ad27c9a0e6810a445..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/BayesCap/README.md +++ /dev/null @@ -1,27 +0,0 @@ ---- -title: BayesCap -emoji: 🔥 -colorFrom: indigo -colorTo: purple -sdk: gradio -sdk_version: 3.0.24 -app_file: app.py -pinned: false ---- -# Configuration -`title`: _string_ -Display title for the Space -`emoji`: _string_ -Space emoji (emoji-only character allowed) -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) -`sdk`: _string_ -Can be either `gradio` or `streamlit` -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/EuroPython2022/Warehouse_Apparel_Detection/app.py b/spaces/EuroPython2022/Warehouse_Apparel_Detection/app.py deleted file mode 100644 index 77e9e7e0b0a9d78c6df5014658d050e6de7a0a3d..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/Warehouse_Apparel_Detection/app.py +++ /dev/null @@ -1,47 +0,0 @@ -from charset_normalizer import detect -import numpy as np -import gradio as gr -import torch -import torch.nn as nn -import cv2 -import os -from numpy import random -from metadata.utils.utils import decodeImage -from metadata.predictor_yolo_detector.detector_test import Detector -from PIL import Image - -class ClientApp: - def __init__(self): - self.filename = "inputImage.jpg" - #modelPath = 'research/ssd_mobilenet_v1_coco_2017_11_17' - self.objectDetection = Detector(self.filename) - - - - -clApp = ClientApp() - -def predict_image(input_img): - - img = Image.fromarray(input_img) - img.save("./metadata/predictor_yolo_detector/inference/images/"+ clApp.filename) - resultant_img = clApp.objectDetection.detect_action() - - - return resultant_img - -demo = gr.Blocks() - -with demo: - gr.Markdown( - """ -

    Warehouse Apparel Detection

    - """) - - detect = gr.Interface(predict_image, 'image', 'image', examples=[ - os.path.join(os.path.dirname(__file__), "images/image_1.jpg"), - os.path.join(os.path.dirname(__file__), "images/image_2.jpg"), - os.path.join(os.path.dirname(__file__), "images/image_3.jpg") - ]) - -demo.launch() \ No newline at end of file diff --git a/spaces/FantasticGNU/AnomalyGPT/model/ImageBind/README.md b/spaces/FantasticGNU/AnomalyGPT/model/ImageBind/README.md deleted file mode 100644 index 028fa988bb6cd9843aec9454636e1541b53680e7..0000000000000000000000000000000000000000 --- a/spaces/FantasticGNU/AnomalyGPT/model/ImageBind/README.md +++ /dev/null @@ -1,155 +0,0 @@ -# ImageBind: One Embedding Space To Bind Them All - -**[FAIR, Meta AI](https://ai.facebook.com/research/)** - -Rohit Girdhar*, -Alaaeldin El-Nouby*, -Zhuang Liu, -Mannat Singh, -Kalyan Vasudev Alwala, -Armand Joulin, -Ishan Misra* - -To appear at CVPR 2023 (*Highlighted paper*) - -[[`Paper`](https://facebookresearch.github.io/ImageBind/paper)] [[`Blog`](https://ai.facebook.com/blog/imagebind-six-modalities-binding-ai/)] [[`Demo`](https://imagebind.metademolab.com/)] [[`Supplementary Video`](https://dl.fbaipublicfiles.com/imagebind/imagebind_video.mp4)] [[`BibTex`](#citing-imagebind)] - -PyTorch implementation and pretrained models for ImageBind. For details, see the paper: **[ImageBind: One Embedding Space To Bind Them All](https://facebookresearch.github.io/ImageBind/paper)**. - -ImageBind learns a joint embedding across six different modalities - images, text, audio, depth, thermal, and IMU data. It enables novel emergent applications ‘out-of-the-box’ including cross-modal retrieval, composing modalities with arithmetic, cross-modal detection and generation. - - - -![ImageBind](https://user-images.githubusercontent.com/8495451/236859695-ffa13364-3e39-4d99-a8da-fbfab17f9a6b.gif) - -## ImageBind model - -Emergent zero-shot classification performance. - - - - - - - - - - - - - - - - - - - - - - - -
| Model | IN1k | K400 | NYU-D | ESC | LLVIP | Ego4D | download |
|-------|------|------|-------|-----|-------|-------|----------|
| imagebind_huge | 77.7 | 50.0 | 54.0 | 66.9 | 63.4 | 25.0 | checkpoint |
    - -## Usage - -Install pytorch 1.13+ and other 3rd party dependencies. - -```shell -conda create --name imagebind python=3.8 -y -conda activate imagebind - -pip install -r requirements.txt -``` - -For windows users, you might need to install `soundfile` for reading/writing audio files. (Thanks @congyue1977) - -``` -pip install soundfile -``` - - -Extract and compare features across modalities (e.g. Image, Text and Audio). - -```python -import data -import torch -from models import imagebind_model -from models.imagebind_model import ModalityType - -text_list=["A dog.", "A car", "A bird"] -image_paths=[".assets/dog_image.jpg", ".assets/car_image.jpg", ".assets/bird_image.jpg"] -audio_paths=[".assets/dog_audio.wav", ".assets/car_audio.wav", ".assets/bird_audio.wav"] - -device = "cuda:0" if torch.cuda.is_available() else "cpu" - -# Instantiate model -model = imagebind_model.imagebind_huge(pretrained=True) -model.eval() -model.to(device) - -# Load data -inputs = { - ModalityType.TEXT: data.load_and_transform_text(text_list, device), - ModalityType.VISION: data.load_and_transform_vision_data(image_paths, device), - ModalityType.AUDIO: data.load_and_transform_audio_data(audio_paths, device), -} - -with torch.no_grad(): - embeddings = model(inputs) - -print( - "Vision x Text: ", - torch.softmax(embeddings[ModalityType.VISION] @ embeddings[ModalityType.TEXT].T, dim=-1), -) -print( - "Audio x Text: ", - torch.softmax(embeddings[ModalityType.AUDIO] @ embeddings[ModalityType.TEXT].T, dim=-1), -) -print( - "Vision x Audio: ", - torch.softmax(embeddings[ModalityType.VISION] @ embeddings[ModalityType.AUDIO].T, dim=-1), -) - -# Expected output: -# -# Vision x Text: -# tensor([[9.9761e-01, 2.3694e-03, 1.8612e-05], -# [3.3836e-05, 9.9994e-01, 2.4118e-05], -# [4.7997e-05, 1.3496e-02, 9.8646e-01]]) -# -# Audio x Text: -# tensor([[1., 0., 0.], -# [0., 1., 0.], -# [0., 0., 1.]]) -# -# Vision x Audio: -# tensor([[0.8070, 0.1088, 0.0842], -# [0.1036, 0.7884, 0.1079], -# [0.0018, 0.0022, 0.9960]]) - -``` - -## Model card -Please see the [model card](model_card.md) for details. - -## License - -ImageBind code and model weights are released under the CC-BY-NC 4.0 license. See [LICENSE](LICENSE) for additional details. - -## Contributing - -See [contributing](CONTRIBUTING.md) and the [code of conduct](CODE_OF_CONDUCT.md). 
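Referring back to the Python usage example above, the following is a minimal, hypothetical follow-up sketch (not part of the original README) showing how the cross-modal similarity matrices computed there could be turned into simple retrieval picks. It assumes the variables `embeddings`, `ModalityType`, `torch`, `text_list` and `image_paths` from that example are already in scope.

```python
# Hypothetical follow-up to the usage example above (assumes `embeddings`,
# `ModalityType`, `torch`, `text_list` and `image_paths` are already defined).
vision_to_text = torch.softmax(
    embeddings[ModalityType.VISION] @ embeddings[ModalityType.TEXT].T, dim=-1
)
# Pick the best-matching caption for each image.
best_caption = vision_to_text.argmax(dim=-1)
for image_path, idx in zip(image_paths, best_caption.tolist()):
    print(f"{image_path} -> {text_list[idx]}")
```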
- -## Citing ImageBind - -If you find this repository useful, please consider giving a star :star: and citation - -``` -@inproceedings{girdhar2023imagebind, - title={ImageBind: One Embedding Space To Bind Them All}, - author={Girdhar, Rohit and El-Nouby, Alaaeldin and Liu, Zhuang -and Singh, Mannat and Alwala, Kalyan Vasudev and Joulin, Armand and Misra, Ishan}, - booktitle={CVPR}, - year={2023} -} -``` diff --git a/spaces/Feraxin/chatGPT/encoder.py b/spaces/Feraxin/chatGPT/encoder.py deleted file mode 100644 index f461b87d889b75d2239c0a9cc731fe4ad41c7c7b..0000000000000000000000000000000000000000 --- a/spaces/Feraxin/chatGPT/encoder.py +++ /dev/null @@ -1,120 +0,0 @@ -# This file includes code which was modified from https://github.com/openai/gpt-2 - -import tensorflow as tf -import os -import json -import regex as re -from functools import lru_cache -import requests -import boto3 -import pdb - - -@lru_cache() -def bytes_to_unicode(): - - bs = ( - list(range(ord("!"), ord("~") + 1)) - + list(range(ord("¡"), ord("¬") + 1)) - + list(range(ord("®"), ord("ÿ") + 1)) - ) - cs = bs[:] - n = 0 - for b in range(2 ** 8): - if b not in bs: - bs.append(b) - cs.append(2 ** 8 + n) - n += 1 - cs = [chr(n) for n in cs] - return dict(zip(bs, cs)) - - -def get_pairs(word): - pairs = set() - prev_char = word[0] - for char in word[1:]: - pairs.add((prev_char, char)) - prev_char = char - return pairs - - -class Encoder: - def __init__(self, encoder, bpe_merges, errors="replace"): - self.encoder = encoder - self.decoder = {v: k for k, v in self.encoder.items()} - self.errors = errors - self.byte_encoder = bytes_to_unicode() - self.byte_decoder = {v: k for k, v in self.byte_encoder.items()} - self.bpe_ranks = dict(zip(bpe_merges, range(len(bpe_merges)))) - self.cache = {} - self.pat = re.compile( - r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+""" - ) - - def bpe(self, token): - if token in self.cache: - return self.cache[token] - word = tuple(token) - - pairs = get_pairs(word) - - if not pairs: - return token - - while True: - bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf"))) - if bigram not in self.bpe_ranks: - break - first, second = bigram - new_word = [] - i = 0 - while i < len(word): - try: - j = word.index(first, i) - new_word.extend(word[i:j]) - i = j - except: - new_word.extend(word[i:]) - break - - if word[i] == first and i < len(word) - 1 and word[i + 1] == second: - new_word.append(first + second) - i += 2 - else: - new_word.append(word[i]) - i += 1 - new_word = tuple(new_word) - word = new_word - if len(word) == 1: - break - else: - pairs = get_pairs(word) - - word = " ".join(word) - self.cache[token] = word - return word - - def encode(self, text): - bpe_tokens = [] - for token in re.findall(self.pat, text): - token = "".join(self.byte_encoder[b] for b in token.encode("utf-8")) - - bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(" ")) - return bpe_tokens - - def decode(self, tokens): - text = "".join([self.decoder[token] for token in tokens]) - text = bytearray([self.byte_decoder[c] for c in text]).decode("utf-8", errors=self.errors) - return text - - -def get_encoder(): - with open("encoder.json", "r") as f: - encoder = json.load(f) - with open("vocab.bpe", "r", encoding="utf-8") as f: - bpe_data = f.read() - bpe_merges = [tuple(merge_str.split()) for merge_str in bpe_data.split("\n")[1:-1]] - return Encoder(encoder=encoder, bpe_merges=bpe_merges) - -# encoder = get_encoder() -# print('encoded is ', 
encoder.encode('hello 👋 world 🌍 This is a long string to test whether or not the emoji issue was fixed!')) \ No newline at end of file diff --git a/spaces/Fouzia/Harvard-USPTO_Patentability-Score/README.md b/spaces/Fouzia/Harvard-USPTO_Patentability-Score/README.md deleted file mode 100644 index 3f4706ef069486d60b030a54da79cbe3975f37d5..0000000000000000000000000000000000000000 --- a/spaces/Fouzia/Harvard-USPTO_Patentability-Score/README.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -Title: Harvard-USPTO Patentability-Score -Emoji: 🧐 -colorFrom: blue -colorTo: red -sdk: streamlit -sdk_version: 1.21.0 -App_file: app.py -Pinned: false ---- - -# Milestone - 3 -## Harvard USPTO Patentability Score in Streamlit - -This application classifies a user selected Patent Application Number and displays its patentibility score. - - -## Demo -To see the app in action, please click on below link: - -* [Harvard USPTO Patentability Score App](https://huggingface.co/spaces/Fouzia/Harvard-USPTO_Patentability-Score) diff --git a/spaces/FrankZxShen/so-vits-svc-models-pcr/cluster/__init__.py b/spaces/FrankZxShen/so-vits-svc-models-pcr/cluster/__init__.py deleted file mode 100644 index f1b9bde04e73e9218a5d534227caa4c25332f424..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/so-vits-svc-models-pcr/cluster/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -import numpy as np -import torch -from sklearn.cluster import KMeans - -def get_cluster_model(ckpt_path): - checkpoint = torch.load(ckpt_path) - kmeans_dict = {} - for spk, ckpt in checkpoint.items(): - km = KMeans(ckpt["n_features_in_"]) - km.__dict__["n_features_in_"] = ckpt["n_features_in_"] - km.__dict__["_n_threads"] = ckpt["_n_threads"] - km.__dict__["cluster_centers_"] = ckpt["cluster_centers_"] - kmeans_dict[spk] = km - return kmeans_dict - -def get_cluster_result(model, x, speaker): - """ - x: np.array [t, 256] - return cluster class result - """ - return model[speaker].predict(x) - -def get_cluster_center_result(model, x,speaker): - """x: np.array [t, 256]""" - predict = model[speaker].predict(x) - return model[speaker].cluster_centers_[predict] - -def get_center(model, x,speaker): - return model[speaker].cluster_centers_[x] diff --git a/spaces/FritsLyneborg/kunstnerfrits/app/gradio/app_gradio.py b/spaces/FritsLyneborg/kunstnerfrits/app/gradio/app_gradio.py deleted file mode 100644 index 40013735519c4f0bab10dce4a6466af236454151..0000000000000000000000000000000000000000 --- a/spaces/FritsLyneborg/kunstnerfrits/app/gradio/app_gradio.py +++ /dev/null @@ -1,179 +0,0 @@ -#!/usr/bin/env python -# coding: utf-8 - -# Uncomment to run on cpu -# import os -# os.environ["JAX_PLATFORM_NAME"] = "cpu" - -import random - -import gradio as gr -import jax -import numpy as np -from flax.jax_utils import replicate -from flax.training.common_utils import shard -from PIL import Image, ImageDraw, ImageFont - -# ## CLIP Scoring -from transformers import BartTokenizer, CLIPProcessor, FlaxCLIPModel -from vqgan_jax.modeling_flax_vqgan import VQModel - -from dalle_mini.model import CustomFlaxBartForConditionalGeneration - -DALLE_REPO = "flax-community/dalle-mini" -DALLE_COMMIT_ID = "4d34126d0df8bc4a692ae933e3b902a1fa8b6114" - -VQGAN_REPO = "flax-community/vqgan_f16_16384" -VQGAN_COMMIT_ID = "90cc46addd2dd8f5be21586a9a23e1b95aa506a9" - -tokenizer = BartTokenizer.from_pretrained(DALLE_REPO, revision=DALLE_COMMIT_ID) -model = CustomFlaxBartForConditionalGeneration.from_pretrained( - DALLE_REPO, revision=DALLE_COMMIT_ID -) -vqgan = VQModel.from_pretrained(VQGAN_REPO, 
revision=VQGAN_COMMIT_ID) - - -def captioned_strip(images, caption=None, rows=1): - increased_h = 0 if caption is None else 48 - w, h = images[0].size[0], images[0].size[1] - img = Image.new("RGB", (len(images) * w // rows, h * rows + increased_h)) - for i, img_ in enumerate(images): - img.paste(img_, (i // rows * w, increased_h + (i % rows) * h)) - - if caption is not None: - draw = ImageDraw.Draw(img) - font = ImageFont.truetype( - "/usr/share/fonts/truetype/liberation2/LiberationMono-Bold.ttf", 40 - ) - draw.text((20, 3), caption, (255, 255, 255), font=font) - return img - - -def custom_to_pil(x): - x = np.clip(x, 0.0, 1.0) - x = (255 * x).astype(np.uint8) - x = Image.fromarray(x) - if not x.mode == "RGB": - x = x.convert("RGB") - return x - - -def generate(input, rng, params): - return model.generate( - **input, - max_length=257, - num_beams=1, - do_sample=True, - prng_key=rng, - eos_token_id=50000, - pad_token_id=50000, - params=params, - ) - - -def get_images(indices, params): - return vqgan.decode_code(indices, params=params) - - -p_generate = jax.pmap(generate, "batch") -p_get_images = jax.pmap(get_images, "batch") - -bart_params = replicate(model.params) -vqgan_params = replicate(vqgan.params) - -clip = FlaxCLIPModel.from_pretrained("openai/clip-vit-base-patch32") -print("Initialize FlaxCLIPModel") -processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32") -print("Initialize CLIPProcessor") - - -def hallucinate(prompt, num_images=64): - prompt = [prompt] * jax.device_count() - inputs = tokenizer( - prompt, - return_tensors="jax", - padding="max_length", - truncation=True, - max_length=128, - ).data - inputs = shard(inputs) - - all_images = [] - for i in range(num_images // jax.device_count()): - key = random.randint(0, 1e7) - rng = jax.random.PRNGKey(key) - rngs = jax.random.split(rng, jax.local_device_count()) - indices = p_generate(inputs, rngs, bart_params).sequences - indices = indices[:, :, 1:] - - images = p_get_images(indices, vqgan_params) - images = np.squeeze(np.asarray(images), 1) - for image in images: - all_images.append(custom_to_pil(image)) - return all_images - - -def clip_top_k(prompt, images, k=8): - inputs = processor(text=prompt, images=images, return_tensors="np", padding=True) - outputs = clip(**inputs) - logits = outputs.logits_per_text - scores = np.array(logits[0]).argsort()[-k:][::-1] - return [images[score] for score in scores] - - -def compose_predictions(images, caption=None): - increased_h = 0 if caption is None else 48 - w, h = images[0].size[0], images[0].size[1] - img = Image.new("RGB", (len(images) * w, h + increased_h)) - for i, img_ in enumerate(images): - img.paste(img_, (i * w, increased_h)) - - if caption is not None: - draw = ImageDraw.Draw(img) - font = ImageFont.truetype( - "/usr/share/fonts/truetype/liberation2/LiberationMono-Bold.ttf", 40 - ) - draw.text((20, 3), caption, (255, 255, 255), font=font) - return img - - -def top_k_predictions(prompt, num_candidates=32, k=8): - images = hallucinate(prompt, num_images=num_candidates) - images = clip_top_k(prompt, images, k=k) - return images - - -def run_inference(prompt, num_images=32, num_preds=8): - images = top_k_predictions(prompt, num_candidates=num_images, k=num_preds) - predictions = captioned_strip(images) - output_title = f""" - {prompt} - """ - return (output_title, predictions) - - -outputs = [ - gr.outputs.HTML(label=""), # To be used as title - gr.outputs.Image(label=""), -] - -description = """ -DALL·E-mini is an AI model that generates images from any prompt you 
give! Generate images from text: -""" -gr.Interface( - run_inference, - inputs=[gr.inputs.Textbox(label="What do you want to see?")], - outputs=outputs, - title="DALL·E mini", - description=description, - article="

    Created by Boris Dayma et al. 2021 | GitHub | Report

    ", - layout="vertical", - theme="huggingface", - examples=[ - ["an armchair in the shape of an avocado"], - ["snowy mountains by the sea"], - ], - allow_flagging=False, - live=False, - # server_port=8999 -).launch(share=True) diff --git a/spaces/GAIR/Factool/test/app.py b/spaces/GAIR/Factool/test/app.py deleted file mode 100644 index 2123b7783d8ff33dfbc79589f1b5bcd9e7b83a49..0000000000000000000000000000000000000000 --- a/spaces/GAIR/Factool/test/app.py +++ /dev/null @@ -1,53 +0,0 @@ -import gradio as gr -import openai -import json -from factool import Factool -import os - -def chat_with_gpt(api_key, model, message): - openai.api_key = api_key - response = openai.ChatCompletion.create( - model=model, - messages=[ - {"role": "system", "content": "You are a helpful assistant."}, - {"role": "user", "content": message}, - ] - ) - return response.choices[0].message['content'] - -def fact_check(openai_api_key, serper_api_key, scraper_api_key, model, message, response, category): - os.environ['SCRAPER_API_KEY'] = '' - os.environ['SERPER_API_KEY'] = '' - os.environ['OPENAI_API_KEY'] = '' - os.environ['SCRAPER_API_KEY'] = scraper_api_key - os.environ['SERPER_API_KEY'] = serper_api_key - os.environ['OPENAI_API_KEY'] = openai_api_key - factool_instance = Factool(model) - inputs = [ - { - "prompt": message, - "response": response, - "category": category, - "search_type": "online", - }, - ] - response_list = factool_instance.run(inputs) - return response_list - -with gr.Blocks() as demo: - openai_api_key = gr.Textbox(label="OpenAI API Key") - serper_api_key = gr.Textbox(label="Serper API Key") - scraper_api_key = gr.Textbox(label="Scraper API Key") - chat_model = gr.inputs.Radio(choices=["gpt-3.5-turbo", "gpt-4"], label="Chat Model") - prompt = gr.Textbox(label="Prompt") - response = gr.Textbox(label="Response") - category = gr.inputs.Radio(choices=["kbqa", "code", "math", "scientific"], label="Category") - fact_check_model = gr.inputs.Radio(choices=["gpt-3.5-turbo", "gpt-4"], label="Fact Check Model") - fact_check_result = gr.Textbox(label="Fact Check Result") - chat_btn = gr.Button("Chat") - fact_check_btn = gr.Button("Fact Check") - chat_btn.click(chat_with_gpt, inputs=[openai_api_key,chat_model,prompt], outputs=response) - fact_check_btn.click(fact_check, inputs=[openai_api_key,serper_api_key,scraper_api_key,fact_check_model,prompt,response,category], outputs=fact_check_result) - -demo.launch(share=True) - diff --git a/spaces/Gen-Sim/Gen-Sim/README.md b/spaces/Gen-Sim/Gen-Sim/README.md deleted file mode 100644 index 72b7f562da27d7b3bd45a8737d5444fe0ab4e13c..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/README.md +++ /dev/null @@ -1,48 +0,0 @@ ---- -title: GenSim -emoji: 📈 -colorFrom: purple -colorTo: indigo -sdk: gradio -python_version: 3.9.13 -sdk_version: 3.41.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -# Generative Simulation Interactive Demo - -This demo is from the project: - -**GenSim: Generating Robotic Simulation Tasks via Large Language Models** - - -## Preparations -1. Obtain an [OpenAI API Key](https://openai.com/blog/openai-api/) - -## Usage -0. Click Run-Example will simulate one example of pre-saved tasks in the task library and render videos. -1. Top-Down Model: - 0. Type in the desired task name in the box. Then GenSim will try to run through the pipeline to generate the task. - 1. The task name has the form word separated by a dash. **Example: 'place-blue-in-yellow' and 'align-rainbow-along-line'.** -2. 
Bottom-Up Model: No need to type in desired task. GenSim will try to generate novel tasks that are different from the task library. -3. Usage: Always click on "Setup/Reset Simulation" and then click "Run". - -## Guideline -0. The first output is the current stage of the task generation pipeline. -1. The second output shows the generated code from Gen-Sim -2. If there are errors in the generation stage above, you will see an error log on the top right. -3. If the orange borders are still on, then the task is being simulated and rendered. -4. The rendered video will come out in a stream, i.e. it will render and re-render in a sequence. Each new update takes 15 seconds. - - -## Known Limitations -1. Code generation can fail or generate infeasible tasks. The success rate is around *0.5*. -2. The low-level pick place primitive does not do collision checking and cannot pick up certain objects. -3. Top-down generation is typically more challenging if the task name is too vague or too distant from the primitives. - - -## Note -For GPT-4 model, each inference costs about *$\\$$0.03*. For GPT-3.5 model, each inference costs about *$\\$$0.005*. You can select which LLM model you would like to use. - diff --git a/spaces/Gen-Sim/Gen-Sim/scripts/train_test_single_task_sim2real.sh b/spaces/Gen-Sim/Gen-Sim/scripts/train_test_single_task_sim2real.sh deleted file mode 100644 index 17e05a97f06e4f80bbfaed464f1446ef9a6f6183..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/scripts/train_test_single_task_sim2real.sh +++ /dev/null @@ -1,49 +0,0 @@ -#!/bin/bash - -DATA_DIR=$1 -TASK=$2 -DISP=False - -echo "Training dataset... Folder: $DATA_DIR Task $TASK" - -# You can parallelize these depending on how much resources you have - -############################# -## Language-Conditioned Tasks -trap "kill 0" SIGINT -LANG_TASKS=$2 - - -for task in $LANG_TASKS - do - # Generate data - bash scripts/generate_gpt_datasets.sh data $task - - # TRAIN - python cliport/train.py train.task=$task \ - train.agent=cliport \ - train.attn_stream_fusion_type=add \ - train.trans_stream_fusion_type=conv \ - train.lang_fusion_type=mult \ - train.n_demos=200 \ - train.n_steps=5000 \ - train.exp_folder=exps/exps-singletask-sim2real \ - dataset.cache=True \ - train.data_augmentation=True - - - - # TEST - python cliport/eval.py eval_task=$task \ - agent=cliport \ - mode=test \ - n_demos=100 \ - train_demos=200 \ - checkpoint_type=test_best \ - exp_folder=exps/exps-singletask-sim2real \ - update_results=True - done - -python notebooks/print_results.py -r=exps/exps-singletask - -echo "Finished Training." 
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/coder/__init__.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/coder/__init__.py deleted file mode 100644 index ae455ba8fc0e0727e2d581cdc8f20fceededf99a..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/coder/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -from .base_bbox_coder import BaseBBoxCoder -from .bucketing_bbox_coder import BucketingBBoxCoder -from .delta_xywh_bbox_coder import DeltaXYWHBBoxCoder -from .legacy_delta_xywh_bbox_coder import LegacyDeltaXYWHBBoxCoder -from .pseudo_bbox_coder import PseudoBBoxCoder -from .tblr_bbox_coder import TBLRBBoxCoder -from .yolo_bbox_coder import YOLOBBoxCoder - -__all__ = [ - 'BaseBBoxCoder', 'PseudoBBoxCoder', 'DeltaXYWHBBoxCoder', - 'LegacyDeltaXYWHBBoxCoder', 'TBLRBBoxCoder', 'YOLOBBoxCoder', - 'BucketingBBoxCoder' -] diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ocrnet/ocrnet_r101-d8_512x1024_40k_b8_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ocrnet/ocrnet_r101-d8_512x1024_40k_b8_cityscapes.py deleted file mode 100644 index e34f3432e581ff506c9d2951c98b5aad7b1be6a5..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ocrnet/ocrnet_r101-d8_512x1024_40k_b8_cityscapes.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = [ - '../_base_/models/ocrnet_r50-d8.py', '../_base_/datasets/cityscapes.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py' -] -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/upernet/upernet_r50_512x512_80k_ade20k.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/upernet/upernet_r50_512x512_80k_ade20k.py deleted file mode 100644 index f561e309e3bddb439c90af930c4de5a0c7e209a7..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/upernet/upernet_r50_512x512_80k_ade20k.py +++ /dev/null @@ -1,6 +0,0 @@ -_base_ = [ - '../_base_/models/upernet_r50.py', '../_base_/datasets/ade20k.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py' -] -model = dict( - decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150)) diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/models/musicgen.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/models/musicgen.py deleted file mode 100644 index 1d4b2292eaec5016e208bbdf61ec5c99b40b67da..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/models/musicgen.py +++ /dev/null @@ -1,409 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Main model for using MusicGen. This will combine all the required components -and provide easy access to the generation API. 
-""" - -import typing as tp -import warnings - -import torch - -from .encodec import CompressionModel -from .lm import LMModel -from .builders import get_debug_compression_model, get_debug_lm_model -from .loaders import load_compression_model, load_lm_model -from ..data.audio_utils import convert_audio -from ..modules.conditioners import ConditioningAttributes, WavCondition -from ..utils.autocast import TorchAutocast - - -MelodyList = tp.List[tp.Optional[torch.Tensor]] -MelodyType = tp.Union[torch.Tensor, MelodyList] - - -# backward compatible names mapping -_HF_MODEL_CHECKPOINTS_MAP = { - "small": "GrandaddyShmax/musicgen-small", - "medium": "GrandaddyShmax/musicgen-medium", - "large": "GrandaddyShmax/musicgen-large", - "melody": "GrandaddyShmax/musicgen-melody", -} - - -class MusicGen: - """MusicGen main model with convenient generation API. - - Args: - name (str): name of the model. - compression_model (CompressionModel): Compression model - used to map audio to invertible discrete representations. - lm (LMModel): Language model over discrete representations. - max_duration (float, optional): maximum duration the model can produce, - otherwise, inferred from the training params. - """ - def __init__(self, name: str, compression_model: CompressionModel, lm: LMModel, - max_duration: tp.Optional[float] = None): - self.name = name - self.compression_model = compression_model - self.lm = lm - if max_duration is None: - if hasattr(lm, 'cfg'): - max_duration = lm.cfg.dataset.segment_duration # type: ignore - else: - raise ValueError("You must provide max_duration when building directly MusicGen") - assert max_duration is not None - self.max_duration: float = max_duration - self.device = next(iter(lm.parameters())).device - self.generation_params: dict = {} - self.set_generation_params(duration=15) # 15 seconds by default - self._progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None - if self.device.type == 'cpu': - self.autocast = TorchAutocast(enabled=False) - else: - self.autocast = TorchAutocast( - enabled=True, device_type=self.device.type, dtype=torch.float16) - - @property - def frame_rate(self) -> float: - """Roughly the number of AR steps per seconds.""" - return self.compression_model.frame_rate - - @property - def sample_rate(self) -> int: - """Sample rate of the generated audio.""" - return self.compression_model.sample_rate - - @property - def audio_channels(self) -> int: - """Audio channels of the generated audio.""" - return self.compression_model.channels - - @staticmethod - def get_pretrained(name: str = 'GrandaddyShmax/musicgen-melody', device=None): - """Return pretrained model, we provide four models: - - facebook/musicgen-small (300M), text to music, - # see: https://huggingface.co/facebook/musicgen-small - - facebook/musicgen-medium (1.5B), text to music, - # see: https://huggingface.co/facebook/musicgen-medium - - facebook/musicgen-melody (1.5B) text to music and text+melody to music, - # see: https://huggingface.co/facebook/musicgen-melody - - facebook/musicgen-large (3.3B), text to music, - # see: https://huggingface.co/facebook/musicgen-large - """ - if device is None: - if torch.cuda.device_count(): - device = 'cuda' - else: - device = 'cpu' - - if name == 'debug': - # used only for unit tests - compression_model = get_debug_compression_model(device) - lm = get_debug_lm_model(device) - return MusicGen(name, compression_model, lm, max_duration=30) - - lm = load_lm_model(name, device=device) - compression_model = load_compression_model(name, 
device=device) - if 'self_wav' in lm.condition_provider.conditioners: - lm.condition_provider.conditioners['self_wav'].match_len_on_eval = True - - return MusicGen(name, compression_model, lm) - - def set_generation_params(self, use_sampling: bool = True, top_k: int = 250, - top_p: float = 0.0, temperature: float = 1.0, - duration: float = 30.0, cfg_coef: float = 3.0, - two_step_cfg: bool = False, extend_stride: float = 18): - """Set the generation parameters for MusicGen. - - Args: - use_sampling (bool, optional): Use sampling if True, else do argmax decoding. Defaults to True. - top_k (int, optional): top_k used for sampling. Defaults to 250. - top_p (float, optional): top_p used for sampling, when set to 0 top_k is used. Defaults to 0.0. - temperature (float, optional): Softmax temperature parameter. Defaults to 1.0. - duration (float, optional): Duration of the generated waveform. Defaults to 30.0. - cfg_coef (float, optional): Coefficient used for classifier free guidance. Defaults to 3.0. - two_step_cfg (bool, optional): If True, performs 2 forward for Classifier Free Guidance, - instead of batching together the two. This has some impact on how things - are padded but seems to have little impact in practice. - extend_stride: when doing extended generation (i.e. more than 30 seconds), by how much - should we extend the audio each time. Larger values will mean less context is - preserved, and shorter value will require extra computations. - """ - assert extend_stride < self.max_duration, "Cannot stride by more than max generation duration." - self.extend_stride = extend_stride - self.duration = duration - self.generation_params = { - 'use_sampling': use_sampling, - 'temp': temperature, - 'top_k': top_k, - 'top_p': top_p, - 'cfg_coef': cfg_coef, - 'two_step_cfg': two_step_cfg, - } - - def set_custom_progress_callback(self, progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None): - """Override the default progress callback.""" - self._progress_callback = progress_callback - - def generate_unconditional(self, num_samples: int, progress: bool = False, return_tokens: bool = False) -> tp.Union[torch.Tensor, tp.Tuple[torch.Tensor, torch.Tensor]]: - """Generate samples in an unconditional manner. - - Args: - num_samples (int): Number of samples to be generated. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - descriptions: tp.List[tp.Optional[str]] = [None] * num_samples - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None) - tokens = self._generate_tokens(attributes, prompt_tokens, progress) - if return_tokens: - return self.generate_audio(tokens), tokens - return self.generate_audio(tokens) - - def generate(self, descriptions: tp.List[str], progress: bool = False, return_tokens: bool = False) \ - -> tp.Union[torch.Tensor, tp.Tuple[torch.Tensor, torch.Tensor]]: - """Generate samples conditioned on text. - - Args: - descriptions (list of str): A list of strings used as text conditioning. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. 
- """ - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None) - assert prompt_tokens is None - tokens = self._generate_tokens(attributes, prompt_tokens, progress) - if return_tokens: - return self.generate_audio(tokens), tokens - return self.generate_audio(tokens) - - def generate_with_chroma(self, descriptions: tp.List[str], melody_wavs: MelodyType, melody_sample_rate: int, progress: bool = False, return_tokens: bool = False) -> tp.Union[torch.Tensor, tp.Tuple[torch.Tensor, torch.Tensor]]: - """Generate samples conditioned on text and melody. - - Args: - descriptions (list of str): A list of strings used as text conditioning. - melody_wavs: (torch.Tensor or list of Tensor): A batch of waveforms used as - melody conditioning. Should have shape [B, C, T] with B matching the description length, - C=1 or 2. It can be [C, T] if there is a single description. It can also be - a list of [C, T] tensors. - melody_sample_rate: (int): Sample rate of the melody waveforms. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - if isinstance(melody_wavs, torch.Tensor): - if melody_wavs.dim() == 2: - melody_wavs = melody_wavs[None] - if melody_wavs.dim() != 3: - raise ValueError("Melody wavs should have a shape [B, C, T].") - melody_wavs = list(melody_wavs) - else: - for melody in melody_wavs: - if melody is not None: - assert melody.dim() == 2, "One melody in the list has the wrong number of dims." - - melody_wavs = [ - convert_audio(wav, melody_sample_rate, self.sample_rate, self.audio_channels) - if wav is not None else None - for wav in melody_wavs] - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions=descriptions, prompt=None, - melody_wavs=melody_wavs) - assert prompt_tokens is None - tokens = self._generate_tokens(attributes, prompt_tokens, progress) - if return_tokens: - return self.generate_audio(tokens), tokens - return self.generate_audio(tokens) - - def generate_continuation(self, prompt: torch.Tensor, prompt_sample_rate: int, - descriptions: tp.Optional[tp.List[tp.Optional[str]]] = None, - progress: bool = False, return_tokens: bool = False) \ - -> tp.Union[torch.Tensor, tp.Tuple[torch.Tensor, torch.Tensor]]: - """Generate samples conditioned on audio prompts. - - Args: - prompt (torch.Tensor): A batch of waveforms used for continuation. - Prompt should be [B, C, T], or [C, T] if only one sample is generated. - prompt_sample_rate (int): Sampling rate of the given audio waveforms. - descriptions (list of str, optional): A list of strings used as text conditioning. Defaults to None. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. 
- """ - if prompt.dim() == 2: - prompt = prompt[None] - if prompt.dim() != 3: - raise ValueError("prompt should have 3 dimensions: [B, C, T] (C = 1).") - prompt = convert_audio(prompt, prompt_sample_rate, self.sample_rate, self.audio_channels) - if descriptions is None: - descriptions = [None] * len(prompt) - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, prompt) - assert prompt_tokens is not None - tokens = self._generate_tokens(attributes, prompt_tokens, progress) - if return_tokens: - return self.generate_audio(tokens), tokens - return self.generate_audio(tokens) - - @torch.no_grad() - def _prepare_tokens_and_attributes( - self, - descriptions: tp.Sequence[tp.Optional[str]], - prompt: tp.Optional[torch.Tensor], - melody_wavs: tp.Optional[MelodyList] = None, - ) -> tp.Tuple[tp.List[ConditioningAttributes], tp.Optional[torch.Tensor]]: - """Prepare model inputs. - - Args: - descriptions (list of str): A list of strings used as text conditioning. - prompt (torch.Tensor): A batch of waveforms used for continuation. - melody_wavs (torch.Tensor, optional): A batch of waveforms - used as melody conditioning. Defaults to None. - """ - attributes = [ - ConditioningAttributes(text={'description': description}) - for description in descriptions] - - if melody_wavs is None: - for attr in attributes: - attr.wav['self_wav'] = WavCondition( - torch.zeros((1, 1, 1), device=self.device), - torch.tensor([0], device=self.device), - sample_rate=[self.sample_rate], - path=[None]) - else: - if 'self_wav' not in self.lm.condition_provider.conditioners: - raise RuntimeError("This model doesn't support melody conditioning. " - "Use the `melody` model.") - assert len(melody_wavs) == len(descriptions), \ - f"number of melody wavs must match number of descriptions! " \ - f"got melody len={len(melody_wavs)}, and descriptions len={len(descriptions)}" - for attr, melody in zip(attributes, melody_wavs): - if melody is None: - attr.wav['self_wav'] = WavCondition( - torch.zeros((1, 1, 1), device=self.device), - torch.tensor([0], device=self.device), - sample_rate=[self.sample_rate], - path=[None]) - else: - attr.wav['self_wav'] = WavCondition( - melody[None].to(device=self.device), - torch.tensor([melody.shape[-1]], device=self.device), - sample_rate=[self.sample_rate], - path=[None], - ) - - if prompt is not None: - if descriptions is not None: - assert len(descriptions) == len(prompt), "Prompt and nb. descriptions doesn't match" - prompt = prompt.to(self.device) - prompt_tokens, scale = self.compression_model.encode(prompt) - assert scale is None - else: - prompt_tokens = None - return attributes, prompt_tokens - - def _generate_tokens(self, attributes: tp.List[ConditioningAttributes], - prompt_tokens: tp.Optional[torch.Tensor], progress: bool = False) -> torch.Tensor: - """Generate discrete audio tokens given audio prompt and/or conditions. - - Args: - attributes (list of ConditioningAttributes): Conditions used for generation (text/melody). - prompt_tokens (torch.Tensor, optional): Audio prompt used for continuation. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - Returns: - torch.Tensor: Generated audio, of shape [B, C, T], T is defined by the generation params. 
- """ - i = 0 - prompt_list = attributes[0].text['description'] - total_gen_len = int(self.duration * self.frame_rate) - max_prompt_len = int(min(self.duration, self.max_duration) * self.frame_rate) - current_gen_offset: int = 0 - - def _progress_callback(generated_tokens: int, tokens_to_generate: int): - generated_tokens += current_gen_offset - if current_gen_offset > 0: - generated_tokens += (self.max_duration - self.extend_stride) * self.frame_rate - if self._progress_callback is not None: - # Note that total_gen_len might be quite wrong depending on the - # codebook pattern used, but with delay it is almost accurate. - self._progress_callback(generated_tokens, total_gen_len) - else: - print(f'{generated_tokens: 6d} / {total_gen_len: 6d}', end='\r') - - if prompt_tokens is not None: - assert max_prompt_len >= prompt_tokens.shape[-1], \ - "Prompt is longer than audio to generate" - - callback = None - if progress: - callback = _progress_callback - - if self.duration <= self.max_duration: - # generate by sampling from LM, simple case. - with self.autocast: - attributes[0].text['description'] = prompt_list[0] - gen_tokens = self.lm.generate( - prompt_tokens, attributes, - callback=callback, max_gen_len=total_gen_len, **self.generation_params) - - else: - # now this gets a bit messier, we need to handle prompts, - # melody conditioning etc. - ref_wavs = [attr.wav['self_wav'] for attr in attributes] - all_tokens = [] - if prompt_tokens is None: - prompt_length = 0 - else: - all_tokens.append(prompt_tokens) - prompt_length = prompt_tokens.shape[-1] - - stride_tokens = int(self.frame_rate * self.extend_stride) - - while current_gen_offset + prompt_length < total_gen_len: - time_offset = current_gen_offset / self.frame_rate - chunk_duration = min(self.duration - time_offset, self.max_duration) - max_gen_len = int(chunk_duration * self.frame_rate) - for attr, ref_wav in zip(attributes, ref_wavs): - wav_length = ref_wav.length.item() - if wav_length == 0: - continue - # We will extend the wav periodically if it not long enough. - # we have to do it here rather than in conditioners.py as otherwise - # we wouldn't have the full wav. 
- initial_position = int(time_offset * self.sample_rate) - wav_target_length = int(self.max_duration * self.sample_rate) - positions = torch.arange(initial_position, - initial_position + wav_target_length, device=self.device) - attr.wav['self_wav'] = WavCondition( - ref_wav[0][..., positions % wav_length], - torch.full_like(ref_wav[1], wav_target_length), - [self.sample_rate] * ref_wav[0].size(0), - [None], [0.]) - with self.autocast: - if i >= len(prompt_list): - i = len(prompt_list) - 1 - attributes[0].text['description'] = prompt_list[i] - gen_tokens = self.lm.generate( - prompt_tokens, attributes, - callback=callback, max_gen_len=max_gen_len, **self.generation_params) - i = i + 1 - if prompt_tokens is None: - all_tokens.append(gen_tokens) - else: - all_tokens.append(gen_tokens[:, :, prompt_tokens.shape[-1]:]) - prompt_tokens = gen_tokens[:, :, stride_tokens:] - prompt_length = prompt_tokens.shape[-1] - current_gen_offset += stride_tokens - - gen_tokens = torch.cat(all_tokens, dim=-1) - return gen_tokens - - def generate_audio(self, gen_tokens: torch.Tensor): - """Generate Audio from tokens""" - assert gen_tokens.dim() == 3 - with torch.no_grad(): - gen_audio = self.compression_model.decode(gen_tokens, None) - return gen_audio - - def to(self, device: str): - self.compression_model.to(device) - self.lm.to(device) - return self \ No newline at end of file diff --git a/spaces/HUBioDataLab/DrugGEN/README.md b/spaces/HUBioDataLab/DrugGEN/README.md deleted file mode 100644 index ac557ac11c5b6c6456d0749e1017333f662abd23..0000000000000000000000000000000000000000 --- a/spaces/HUBioDataLab/DrugGEN/README.md +++ /dev/null @@ -1,315 +0,0 @@ ---- -title: Druggen -sdk: gradio -app_file: gradio_app.py -emoji: 💊 -colorFrom: red -colorTo: green ---- -# DrugGEN: Target Centric De Novo Design of Drug Candidate Molecules with Graph Generative Deep Adversarial Networks - - - -


    - - - - - - -## Updated Pre-print! - -**Please see our most up-to-date document (pre-print) from 15.02.2023 here:** [2302.07868.pdf](https://github.com/HUBioDataLab/DrugGEN/files/10828402/2302.07868.pdf), [arXiv link](https://arxiv.org/abs/2302.07868) - -  -  - -## Abstract - -Discovering novel drug candidate molecules is one of the most fundamental and critical steps in drug development. Generative deep learning models, which create synthetic data given a probability distribution, have been developed with the purpose of picking completely new samples from a partially known space. Generative models offer high potential for designing de novo molecules; however, in order for them to be useful in real-life drug development pipelines, these models should be able to design target-specific molecules, which is the next step in this field. In this study, we propose DrugGEN, for the de novo design of drug candidate molecules that interact with selected target proteins. The proposed system represents compounds and protein structures as graphs and processes them via serially connected two generative adversarial networks comprising graph transformers. DrugGEN is trained using a large dataset of compounds from ChEMBL and target-specific bioactive molecules, to design effective and specific inhibitory molecules against the AKT1 protein, which has critical importance for developing treatments against various types of cancer. On fundamental benchmarks, DrugGEN models have either competitive or better performance against other methods. To assess the target-specific generation performance, we conducted further in silico analysis with molecular docking and deep learning-based bioactivity prediction. Results indicate that de novo molecules have high potential for interacting with the AKT1 protein structure in the level of its native ligand. DrugGEN can be used to design completely novel and effective target-specific drug candidate molecules for any druggable protein, given target features and a dataset of experimental bioactivities. Code base, datasets, results and trained models of DrugGEN are available in this repository. - -Our up-to-date pre-print is shared [here](https://github.com/HUBioDataLab/DrugGEN/files/10828402/2302.07868.pdf) - - - -  -  - - -


    - -**Fig. 1.** **(A)** Generator (*G1*) of the GAN1 consists of an MLP and graph transformer encoder module. The generator encodes the given input into a new representation; **(B)** the MLP-based discriminator (*D1*) of GAN1 compares the generated de novo molecules to the real ones in the training dataset, scoring them for their assignment to the classes of “real” and “fake” molecules; **(C)** Generator (*G2*) of GAN2 makes use of the transformer decoder architecture to process target protein features and GAN1 generated de novo molecules together. The output of the generator two (*G2*) is the modified molecules, based on the given protein features; **(D)** the second discriminator (*D2*) takes the modified de novo molecules and known inhibitors of the given target protein and scores them for their assignment to the classes of “real” and “fake” inhibitors. - -  -  - -## Transformer Modules - -Given a random noise *z*, **the first generator** *G1* (below, on the left side) creates annotation and adjacency matrices of a supposed molecule. *G1* processes the input by passing it through a multi-layer perceptron (MLP). The input is then fed to the transformer encoder module [Vaswani et al., (2017)](https://arxiv.org/abs/1706.03762), which has a depth of 8 encoder layers with 8 multi-head attention heads for each. In the graph transformer setting, *Q*, *K* and *V* are the variables representing the annotation matrix of the molecule. After the final products are created in the attention mechanism, both the annotation and adjacency matrices are forwarded to layer normalization and then summed with the initial matrices to create a residual connection. These matrices are fed to separate feedforward layers, and finally, given to the discriminator network *D1* together with real molecules. - -**The second generator** *G2* (below, on the right side) modifies molecules that were previously generated by *G1*, with the aim of generating binders for the given target protein. *G2* module utilizes the transformer decoder architecture. This module has a depth of 8 decoder layers and uses 8 multi-head attention heads for each. *G2* takes both *G1(z)*, which is data generated by *G1*, and the protein features as input. Interactions between molecules and proteins are processed inside the multi-head attention module via taking their scaled dot product, and thus, new molecular graphs are created. Apart from the attention mechanism, further processing of the molecular matrices follows the same workflow as the transformer encoder. The output of this module are the final product of the DrugGEN model and are forwarded to *D2*. - - - - - - - - - - -| First Generator | Second Generator | -|------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------| - | ![FirstGAN](assets/DrugGEN_G1_final2.gif) | ![SecondGAN](assets/DrugGEN_G2_final2.gif) | - -  -  - -## Model Variations - -- **DrugGEN-Prot** (the default model) is composed of two GANs. It incorporates protein features to the transformer decoder module of GAN2 (together with the de novo molecules generated by GAN1) to direct the target centric molecule design. The information provided above belongs to this model. -- **DrugGEN-CrossLoss** is composed of only one GAN. 
The input of the GAN1 generator is the real molecules (ChEMBL) dataset (to ease the learning process) and the GAN1 discriminator compares the generated molecules with the real inhibitors of the given target protein. -- **DrugGEN-Ligand** is composed of two GANs. It incorporates AKT1 inhibitor molecule features as the input of the GAN2-generator’s transformer decoder instead of the protein features in the default model. -- **DrugGEN-RL** utilizes the same architecture as the DrugGEN-Ligand model. It uses reinforcement learning (RL) to avoid using molecular scaffolds that are already presented in the training set. -- **DrugGEN-NoTarget** is composed of only one GAN. This model only focuses on learning the chemical properties from the ChEMBL training dataset, as a result, there is no target-specific generation. - -  -  - -## Files & Folders - -We provide the implementation of the DrugGEN, along with scripts from PyTorch Geometric framework to generate and run. The repository is organised as follows: - -```data``` contains: -- **Raw dataset files**, which should be text files containing SMILES strings only. Raw datasets preferably should not contain stereoisomeric SMILES to prevent Hydrogen atoms to be included in the final graph data. -- Constructed **graph datasets** (.pt) will be saved in this folder along with atom and bond encoder/decoder files (.pk). - -```experiments``` contains: -- ```logs``` folder. Model loss and performance metrics will be saved in this directory in seperate files for each model. -- ```tboard_output``` folder. Tensorboard files will be saved here if TensorBoard is used. -- ```models``` folder. Models will be saved in this directory at last or preferred steps. -- ```samples``` folder. Molecule samples will be saved in this folder. -- ```inference``` folder. Molecules generated in inference mode will be saved in this folder. - -**Python scripts:** - -- ```layers.py``` contains **transformer encoder** and **transformer decoder** implementations. -- ```main.py``` contains arguments and this file is used to run the model. -- ```models.py``` has the implementation of the **Generators** and **Discriminators** which are used in GAN1 and GAN2. -- ```new_dataloader.py``` constructs the graph dataset from given raw data. Uses PyG based data classes. -- ```trainer.py``` is the training and testing file for the model. Workflow is constructed in this file. -- ```utils.py``` contains performance metrics from several other papers and some unique implementations. (De Cao et al, 2018; Polykovskiy et al., 2020) - -  -  - -## Datasets - -Three different data types (i.e., compound, protein, and bioactivity) were retrieved from various data sources to train our deep generative models. GAN1 module requires only compound data while GAN2 requires all of three data types including compound, protein, and bioactivity. -- **Compound data** includes atomic, physicochemical, and structural properties of real drug and drug candidate molecules. [ChEMBL v29 compound dataset](data/dataset_download.sh) was used for the GAN1 module. It consists of 1,588,865 stable organic molecules with a maximum of 45 atoms and containing C, O, N, F, Ca, K, Br, B, S, P, Cl, and As heavy atoms. -- **Protein data** was retrieved from Protein Data Bank (PDB) in biological assembly format, and the coordinates of protein-ligand complexes were used to construct the binding sites of proteins from the bioassembly data. 
The atoms of protein residues within a maximum distance of 9 Å from all ligand atoms were recorded as binding sites. GAN2 was trained to generate compounds specific to the target protein AKT1, a member of the serine/threonine protein kinase family that is involved in many cancer-associated cellular processes, including metabolism, proliferation, cell survival, growth, and angiogenesis. The binding site of the human AKT1 protein was generated from its kinase domain (PDB: 4GV1). -- **Bioactivity data** of the AKT target protein was retrieved from the large-scale ChEMBL bioactivity database. It contains ligand interactions of the human AKT1 (CHEMBL4282) protein with a pChEMBL value equal to or greater than 6 (IC50 <= 1 µM), together with the SMILES strings of these ligands. The dataset was extended with drug molecules from the DrugBank database that are known to interact with human AKT proteins. In total, [1,600 bioactivity data points](data/filtered_akt_inhibitors.smi) were obtained for training the AKT-specific generative model. - - -More details on the construction of the datasets can be found in our paper referenced above. A toy sketch of how a SMILES string can be turned into the graph representation used here is given below.
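As a rough illustration of what the constructed graph datasets contain, the sketch below converts a SMILES string into one-hot annotation and adjacency tensors with RDKit. The atom vocabulary is taken from the list above, but the padding scheme and channel layout are assumptions for illustration; the exact encoding produced by `new_dataloader.py` may differ.

```python
import numpy as np
from rdkit import Chem

# Heavy-atom vocabulary mentioned above; channel 0 is reserved for padding (assumption)
ATOMS = ["C", "O", "N", "F", "Ca", "K", "Br", "B", "S", "P", "Cl", "As"]
BONDS = [Chem.BondType.SINGLE, Chem.BondType.DOUBLE, Chem.BondType.TRIPLE, Chem.BondType.AROMATIC]

def smiles_to_graph(smiles: str, max_atom: int = 45):
    """Return (annotation, adjacency) one-hot arrays, or None if the molecule does not fit."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None or mol.GetNumAtoms() > max_atom:
        return None
    x = np.zeros((max_atom, len(ATOMS) + 1), dtype=np.float32)            # node annotations
    a = np.zeros((max_atom, max_atom, len(BONDS) + 1), dtype=np.float32)  # edge (bond-type) features
    a[:, :, 0] = 1.0                                                      # channel 0 = "no bond"
    for atom in mol.GetAtoms():
        sym = atom.GetSymbol()
        if sym not in ATOMS:
            return None
        x[atom.GetIdx(), ATOMS.index(sym) + 1] = 1.0
    for bond in mol.GetBonds():
        i, j = bond.GetBeginAtomIdx(), bond.GetEndAtomIdx()
        t = BONDS.index(bond.GetBondType()) + 1
        a[i, j, :] = a[j, i, :] = 0.0
        a[i, j, t] = a[j, i, t] = 1.0
    return x, a

print(smiles_to_graph("CC(=O)Oc1ccccc1C(=O)O")[0].shape)  # aspirin -> (45, 13)
```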
 -  -  - -## Getting Started -DrugGEN has been implemented and tested on Ubuntu 18.04 with Python >= 3.9. It supports both GPU and CPU inference. - -Clone the repository: -```bash -git clone https://github.com/HUBioDataLab/DrugGEN.git -``` - -  -  - -## Training - -### Setting up the environment - -You can set up the environment using either conda or pip. - -With conda: - -```bash -# set up the environment (installs the requirements): - -conda env create -f DrugGEN/dependencies.yml - -# activate the environment: - -conda activate druggen -``` - -With pip and a virtual environment: - -```bash -python -m venv DrugGEN/.venv -source DrugGEN/.venv/bin/activate -pip install -r DrugGEN/requirements.txt -``` - -### Starting the training - -``` -# Download input files: - -cd DrugGEN/data - -bash dataset_download.sh - -cd - -# DrugGEN can be trained with the one-liner: - -python DrugGEN/main.py --submodel="CrossLoss" --mode="train" --raw_file="DrugGEN/data/chembl_train.smi" --dataset_file="chembl45_train.pt" --drug_raw_file="DrugGEN/data/akt_train.smi" --drug_dataset_file="drugs_train.pt" --max_atom=45 -``` - -Explanations of the arguments can be found below: - -```bash -Model arguments: - --submodel SUBMODEL   Choose the submodel for training - --act ACT   Activation function for the model - --z_dim Z_DIM   Prior noise for the first GAN - --max_atom MAX_ATOM   Maximum number of atoms per molecule (must be specified) - --lambda_gp LAMBDA_GP   Gradient penalty lambda multiplier for the first GAN - --dim DIM   Dimension of the Transformer models for both GANs - --depth DEPTH   Depth of the Transformer model of the first GAN - --heads HEADS   Number of heads for the MultiHeadAttention module of the first GAN - --dec_depth DEC_DEPTH   Depth of the Transformer model of the second GAN - --dec_heads DEC_HEADS   Number of heads for the MultiHeadAttention module of the second GAN - --mlp_ratio MLP_RATIO   MLP ratio for the Transformers - --dis_select DIS_SELECT   Select the discriminator for the first and second GAN - --init_type INIT_TYPE   Initialization type for the model - --dropout DROPOUT   Dropout rate for the encoder - --dec_dropout DEC_DROPOUT   Dropout rate for the decoder -Training arguments: - --batch_size BATCH_SIZE   Batch size for training - --epoch EPOCH   Number of epochs for training - --warm_up_steps   Warm-up steps for the first GAN - --g_lr G_LR   Learning rate for G - --g2_lr G2_LR   Learning rate for G2 - --d_lr D_LR   Learning rate for D - --d2_lr D2_LR   Learning rate for D2 - --n_critic N_CRITIC   Number of D updates per G update - --beta1 BETA1   Beta1 for the Adam optimizer - --beta2 BETA2   Beta2 for the Adam optimizer - --clipping_value   Clipping value for the gradient clipping process - --resume_iters   Resume training from this step for fine-tuning if desired -Dataset arguments: - --features FEATURES   Additional node features (Boolean) (please check new_dataloader.py, line 102) -``` - -  -  - -## Molecule Generation Using Trained DrugGEN Models in Inference Mode - -- First, please download the weights of a trained model, e.g., [DrugGEN-Prot](https://drive.google.com/drive/folders/19knQAtpieSamaxB4L5ft8bFiCVikBFDS?usp=share_link), and place it in the folder "DrugGEN/experiments/models/". -- After that, please run the command below: - -```bash -python DrugGEN/main.py --submodel="Prot" --mode="inference" --inference_model="DrugGEN/experiments/models/{Chosen model name}" -``` - -- SMILES representations of the generated molecules will be saved to the file "DrugGEN/experiments/inference/{Chosen submodel name}/denovo_molecules.txt". - -  -  - -## Results (De Novo Generated Molecules of DrugGEN Models) - -- SMILES notations of 50,000 de novo generated molecules from DrugGEN models (10,000 from each) can be downloaded from [here](results/generated_molecules). -- We first filtered the 50,000 de novo generated molecules by applying the Lipinski, Veber, and PAINS filters; 43,000 of them remained in our dataset after this operation ([SMILES notations of filtered de novo molecules](results/generated_molecules/filtered_all_generated_molecules.smi)). An approximate sketch of this filtering step is given after this section. -- We ran our deep learning-based drug/compound-target protein interaction prediction system [DEEPScreen](https://pubs.rsc.org/en/content/articlehtml/2020/sc/c9sc03414e) on the 43,000 filtered molecules. DEEPScreen predicted 18,000 of them to be active against AKT1, 301 of which received high confidence scores (> 80%) ([SMILES notations of DEEPScreen-predicted actives](results/deepscreen)). -- At the same time, we performed a molecular docking analysis on these 43,000 filtered de novo molecules against the crystal structure of [AKT1](https://www.rcsb.org/structure/4gv1), and found that 118 of them had sufficiently low binding free energies (< -9 kcal/mol) ([SMILES notations of de novo molecules with low binding free energies](results/docking/Molecules_th9_docking.smi)). -- Finally, de novo molecules expected to effectively target the AKT1 protein were selected via expert curation from the dataset of molecules with binding free energies lower than -9 kcal/mol. The structural representations of the selected molecules are shown in the figure below ([SMILES notations of the expert-selected de novo AKT1 inhibitor molecules](results/docking/Selected_denovo_AKT1_inhibitors.smi)). - -![structures](assets/Selected_denovo_AKT1_inhibitors.png) -Fig. 2. Promising de novo molecules for effectively targeting the AKT1 protein (generated by DrugGEN models), selected via expert curation from the dataset of molecules with sufficiently low binding free energies (< -9 kcal/mol) in the molecular docking experiment. - -  -  - 
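The Lipinski/Veber/PAINS filtering step described in the Results section above can be reproduced approximately with RDKit as in the sketch below. The thresholds and the input file path are illustrative assumptions; they are not necessarily the exact criteria or scripts used to obtain the reported numbers.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski, FilterCatalog

# PAINS substructure catalog shipped with RDKit
params = FilterCatalog.FilterCatalogParams()
params.AddCatalog(FilterCatalog.FilterCatalogParams.FilterCatalogs.PAINS)
pains = FilterCatalog.FilterCatalog(params)

def passes_filters(smiles: str) -> bool:
    """Approximate Lipinski + Veber + PAINS check for a single SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    lipinski_ok = (Descriptors.MolWt(mol) <= 500 and Descriptors.MolLogP(mol) <= 5
                   and Lipinski.NumHDonors(mol) <= 5 and Lipinski.NumHAcceptors(mol) <= 10)
    veber_ok = Descriptors.NumRotatableBonds(mol) <= 10 and Descriptors.TPSA(mol) <= 140
    return lipinski_ok and veber_ok and not pains.HasMatch(mol)

# Example: filter the molecules produced in inference mode (path is an assumption)
with open("DrugGEN/experiments/inference/Prot/denovo_molecules.txt") as f:
    kept = [s.strip() for s in f if s.strip() and passes_filters(s.strip())]
print(f"{len(kept)} molecules pass the filters")
```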
 -  -  - -## Updates - -- 15/02/2023: Our pre-print was shared [here](https://github.com/HUBioDataLab/DrugGEN/files/10828402/2302.07868.pdf). -- 01/01/2023: Five different DrugGEN models were released. - -  -  - -## Citation -```bibtex -@misc{nl2023target, - doi = {10.48550/ARXIV.2302.07868}, - title={Target Specific De Novo Design of Drug Candidate Molecules with Graph Transformer-based Generative Adversarial Networks}, - author={Atabey Ünlü and Elif Çevrim and Ahmet Sarıgün and Hayriye Çelikbilek and Heval Ataş Güvenilir and Altay Koyaş and Deniz Cansen Kahraman and Abdurrahman Olğaç and Ahmet Rifaioğlu and Tunca Doğan}, - year={2023}, - eprint={2302.07868}, - archivePrefix={arXiv}, - primaryClass={cs.LG} -} -``` - -Ünlü, A., Çevrim, E., Sarıgün, A., Çelikbilek, H., Güvenilir, H.A., Koyaş, A., Kahraman, D.C., Olğaç, A., Rifaioğlu, A., Doğan, T. (2023). Target Specific De Novo Design of Drug Candidate Molecules with Graph Transformer-based Generative Adversarial Networks. *arXiv preprint* arXiv:2302.07868. - -  -  - -## References/Resources - -In each file, we indicate whether a function or script is imported from another source. Here are some excellent sources from which we benefited: - -- The molecule generation GAN schematic was inspired by [MolGAN](https://github.com/yongqyu/MolGAN-pytorch). -- [MOSES](https://github.com/molecularsets/moses) was used for performance calculation (the MOSES scripts are directly embedded in our code due to current installation issues with the MOSES repo). -- [PyG](https://github.com/pyg-team/pytorch_geometric) was used to construct the custom dataset. -- The Transformer architecture was taken from [Vaswani et al. (2017)](https://arxiv.org/abs/1706.03762). -- The Graph Transformer Encoder architecture was taken from [Dwivedi & Bresson (2021)](https://arxiv.org/abs/2012.09699) and [Vignac et al. (2022)](https://github.com/cvignac/DiGress) and modified. - -Our initial project repository was [this one](https://github.com/asarigun/DrugGEN). - -  -  - -## License -Copyright (C) 2023 HUBioDataLab - -This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. - -This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. - -You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/. 
\ No newline at end of file diff --git a/spaces/HaHaBill/LandShapes-Antarctica/models/stylegan2/stylegan2-pytorch/dataset.py b/spaces/HaHaBill/LandShapes-Antarctica/models/stylegan2/stylegan2-pytorch/dataset.py deleted file mode 100644 index 7713ea2f8bc94d202d2dfbe830af3cb96b1e803d..0000000000000000000000000000000000000000 --- a/spaces/HaHaBill/LandShapes-Antarctica/models/stylegan2/stylegan2-pytorch/dataset.py +++ /dev/null @@ -1,40 +0,0 @@ -from io import BytesIO - -import lmdb -from PIL import Image -from torch.utils.data import Dataset - - -class MultiResolutionDataset(Dataset): - def __init__(self, path, transform, resolution=256): - self.env = lmdb.open( - path, - max_readers=32, - readonly=True, - lock=False, - readahead=False, - meminit=False, - ) - - if not self.env: - raise IOError('Cannot open lmdb dataset', path) - - with self.env.begin(write=False) as txn: - self.length = int(txn.get('length'.encode('utf-8')).decode('utf-8')) - - self.resolution = resolution - self.transform = transform - - def __len__(self): - return self.length - - def __getitem__(self, index): - with self.env.begin(write=False) as txn: - key = f'{self.resolution}-{str(index).zfill(5)}'.encode('utf-8') - img_bytes = txn.get(key) - - buffer = BytesIO(img_bytes) - img = Image.open(buffer) - img = self.transform(img) - - return img diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/pointer_generator/preprocess.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/pointer_generator/preprocess.py deleted file mode 100644 index f72ca7d3d97e12ab7b405dcff314bdb6c0a78755..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/pointer_generator/preprocess.py +++ /dev/null @@ -1,102 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -from itertools import zip_longest - - -def replace_oovs(source_in, target_in, vocabulary, source_out, target_out): - """Replaces out-of-vocabulary words in source and target text with , - where N in is the position of the word in the source sequence. - """ - - def format_unk(pos): - return "".format(pos) - - if target_in is None: - target_in = [] - - for seq_num, (source_seq, target_seq) in enumerate( - zip_longest(source_in, target_in) - ): - source_seq_out = [] - target_seq_out = [] - - word_to_pos = dict() - for position, token in enumerate(source_seq.strip().split()): - if token in vocabulary: - token_out = token - else: - if token in word_to_pos: - oov_pos = word_to_pos[token] - else: - word_to_pos[token] = position - oov_pos = position - token_out = format_unk(oov_pos) - source_seq_out.append(token_out) - source_out.write(" ".join(source_seq_out) + "\n") - - if target_seq is not None: - for token in target_seq.strip().split(): - if token in word_to_pos: - token_out = format_unk(word_to_pos[token]) - else: - token_out = token - target_seq_out.append(token_out) - if target_out is not None: - target_out.write(" ".join(target_seq_out) + "\n") - - -def main(): - parser = argparse.ArgumentParser( - description="Replaces out-of-vocabulary words in both source and target " - "sequences with tokens that indicate the position of the word " - "in the source sequence." 
- ) - parser.add_argument( - "--source", type=str, help="text file with source sequences", required=True - ) - parser.add_argument( - "--target", type=str, help="text file with target sequences", default=None - ) - parser.add_argument("--vocab", type=str, help="vocabulary file", required=True) - parser.add_argument( - "--source-out", - type=str, - help="where to write source sequences with entries", - required=True, - ) - parser.add_argument( - "--target-out", - type=str, - help="where to write target sequences with entries", - default=None, - ) - args = parser.parse_args() - - with open(args.vocab, encoding="utf-8") as vocab: - vocabulary = vocab.read().splitlines() - - target_in = ( - open(args.target, "r", encoding="utf-8") if args.target is not None else None - ) - target_out = ( - open(args.target_out, "w", encoding="utf-8") - if args.target_out is not None - else None - ) - with open(args.source, "r", encoding="utf-8") as source_in, open( - args.source_out, "w", encoding="utf-8" - ) as source_out: - replace_oovs(source_in, target_in, vocabulary, source_out, target_out) - if target_in is not None: - target_in.close() - if target_out is not None: - target_out.close() - - -if __name__ == "__main__": - main() diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/script/english_script.py b/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/script/english_script.py deleted file mode 100644 index 62250de944af2298cb6675b920fbd7963b9fb0ae..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/en_to_indic_translation/indic_nlp_library/indicnlp/script/english_script.py +++ /dev/null @@ -1,154 +0,0 @@ -# -# Copyright (c) 2013-present, Anoop Kunchukuttan -# All rights reserved. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-# - -import pandas as pd -import numpy as np - -from indicnlp import common -from indicnlp.common import IndicNlpException - - -#### Maps from ARPABET to Internal Id -ARPABET_ID_MAP={} -ID_ARPABET_MAP={} - - -### -# Phonetic Information about script characters -### - -""" Phonetic data for English """ -ENGLISH_PHONETIC_DATA=None - -""" Phonetic vector for English""" -ENGLISH_PHONETIC_VECTORS=None - -""" Length of phonetic vector """ -PHONETIC_VECTOR_LENGTH=38 - -""" Start offset for the phonetic feature vector in the phonetic data vector """ -PHONETIC_VECTOR_START_OFFSET=6 - -## PHONETIC PROPERTIES in order in which they occur in the vector -## This list must be in sync with the keys in the PV_PROP_RANGES dictionary -PV_PROP=['basic_type', - 'vowel_length', - 'vowel_strength', - 'vowel_status', - 'consonant_type', - 'articulation_place', - 'aspiration', - 'voicing', - 'nasalization', - 'vowel_horizontal', - 'vowel_vertical', - 'vowel_roundness', - ] - -### -# Bit vector ranges for various properties -### - -PV_PROP_RANGES={ - 'basic_type': [0,6], - 'vowel_length': [6,8], - 'vowel_strength': [8,11], - 'vowel_status': [11,13], - 'consonant_type': [13,18], - 'articulation_place': [18,23], - 'aspiration': [23,25], - 'voicing': [25,27], - 'nasalization': [27,29], - 'vowel_horizontal': [29,32], - 'vowel_vertical': [32,36], - 'vowel_roundness': [36,38], - } - - -#### -# Indexes into the Phonetic Vector -#### -PVIDX_BT_VOWEL=0 -PVIDX_BT_CONSONANT=1 -PVIDX_BT_NUKTA=2 -PVIDX_BT_HALANT=3 -PVIDX_BT_ANUSVAAR=4 -PVIDX_BT_MISC=5 -PVIDX_BT_S=PVIDX_BT_VOWEL -PVIDX_BT_E=PVIDX_BT_MISC+1 - -PVIDX_VSTAT_DEP=12 - -#### -SCRIPT_RANGE_START=0x0D00 -## TBD -SCRIPT_RANGE_END=0x0D2E - - -def init(): - """ - To be called by library loader, do not call it in your program - """ - - global ENGLISH_PHONETIC_DATA, ENGLISH_PHONETIC_VECTORS, PHONETIC_VECTOR_LENGTH, PHONETIC_VECTOR_START_OFFSET - - ENGLISH_PHONETIC_DATA=pd.read_csv(common.get_resources_path()+'/script/english_script_phonetic_data.csv',encoding='utf-8') - - ENGLISH_PHONETIC_VECTORS=ENGLISH_PHONETIC_DATA.iloc[:,PHONETIC_VECTOR_START_OFFSET:].values - - PHONETIC_VECTOR_LENGTH=ENGLISH_PHONETIC_VECTORS.shape[1] - - ### Load mapping from ARPABET representation of phoneme to internal ID - global ARPABET_ID_MAP, ID_ARPABET_MAP - - with open(common.get_resources_path()+'/script/english_arpabet_list.csv','r',encoding='utf-8') as infile: - for ph_id, name in enumerate(iter(infile)): - name=name.strip() - ARPABET_ID_MAP[name]=ph_id - ID_ARPABET_MAP[ph_id]=name - - -def phoneme_to_offset(ph): - return ARPABET_ID_MAP[ph] - -def offset_to_phoneme(ph_id): - return ID_ARPABET_MAP[ph_id] - -def phoneme_to_enc(ph): - return chr(SCRIPT_RANGE_START+phoneme_to_offset(ph)) - -def enc_to_phoneme(ph): - return offset_to_phoneme(enc_to_offset(ph)) - -def enc_to_offset(c): - return ord(c)-SCRIPT_RANGE_START - -def in_range(offset): - return offset>=SCRIPT_RANGE_START and offset int: - style_name = self.xparagraph.style.name - level = INFINITE - if '.Titre' in style_name: - suffix = style_name[-1] - try: - level = int(suffix) - except: - pass - return level - - diff --git a/spaces/Hexamind/swarms/bluetraj.py b/spaces/Hexamind/swarms/bluetraj.py deleted file mode 100644 index 74cba42893f2e01914eda3bd71420eb8b09aed51..0000000000000000000000000000000000000000 --- a/spaces/Hexamind/swarms/bluetraj.py +++ /dev/null @@ -1,81 +0,0 @@ -import numpy as np -import param_ -from drone import Drone - - -def calculate_target(blue_drone: Drone, red_drone: Drone) -> np.ndarray(3, ): - ''' - - :param 
blue_drone: - :param red_drone: - :return: - ''' - - def transform(pos, delta, theta): - pos[0] -= delta - pos[1] -= theta - return pos[0] * np.exp(1j * pos[1]) - - def untransform_to_array(pos, delta, theta): - pos[0] += delta - pos[1] += theta - return pos - - theta = red_drone.position[1] - delta = param_.GROUNDZONE - - z_blue = transform(blue_drone.position, delta, theta) - z_red = np.real(transform(red_drone.position, delta, theta)) - - v_blue = blue_drone.drone_model.max_speed - v_red = red_drone.drone_model.max_speed - - blue_shooting_distance = blue_drone.drone_model.distance_to_neutralisation - - blue_time_to_zero = (np.abs(z_blue) - blue_shooting_distance) / v_blue - red_time_to_zero = z_red / v_red - - if red_time_to_zero <= param_.STEP or red_time_to_zero < blue_time_to_zero + param_.STEP: - return np.zeros(3), red_time_to_zero - else: - max_target = z_red - min_target = 0 - while True: - target = (max_target + min_target) / 2 - blue_time_to_target = max(0, (np.abs(z_blue - target) - blue_shooting_distance) / v_blue) - red_time_to_target = np.abs(z_red - target) / v_red - if red_time_to_target - param_.STEP < blue_time_to_target <= red_time_to_target: - target = untransform_to_array((target / z_red) * red_drone.position, delta, theta) - return target, blue_time_to_target - if red_time_to_target < blue_time_to_target: - max_target = target - min_target = min_target - else: # blue_ time_to_target <= red_time_to_target -1: - max_target = max_target - min_target = target - - - - -def unitary_test(rho_blue: float, theta_blue: float, rho_red: float, theta_red: float): - blue_drone = Drone() - blue_drone.position = np.array([rho_blue, theta_blue, 100]) - red_drone = Drone(is_blue=False) - red_drone.position = np.array([rho_red, theta_red, 100]) - tg, time = calculate_target(blue_drone, red_drone) - print('rho_blue : ', rho_blue, ' theta_blue : ', theta_blue, ' rho_red : ', rho_red, ' theta_red : ', theta_red, - ' tg : ', tg, ' time : ', time) - return tg, time - - - - - -def test(): - for rho_blue in [1000]: - for theta_blue in np.pi * np.array([-1, 0.75, 0.5, 0.25, 0]): - for rho_red in [1000]: - for theta_red in np.pi * np.array([0, 1/4]): - unitary_test(rho_blue=rho_blue, theta_blue=theta_blue, rho_red=rho_red, theta_red=theta_red) - print('done') - diff --git a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/cache_dir/hate_speech18_default_train_text/zipf/zipf_fig.html b/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/cache_dir/hate_speech18_default_train_text/zipf/zipf_fig.html deleted file mode 100644 index 129a05aee7ea85a44b4532d3c339101902a8d226..0000000000000000000000000000000000000000 --- a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/cache_dir/hate_speech18_default_train_text/zipf/zipf_fig.html +++ /dev/null @@ -1,64 +0,0 @@ - - - -
    -
    - - \ No newline at end of file diff --git a/spaces/IDEA-Research/Grounded-SAM/segment_anything/segment_anything/modeling/sam.py b/spaces/IDEA-Research/Grounded-SAM/segment_anything/segment_anything/modeling/sam.py deleted file mode 100644 index 303bc2f40c3dbc84f5d4286bb73336e075a86589..0000000000000000000000000000000000000000 --- a/spaces/IDEA-Research/Grounded-SAM/segment_anything/segment_anything/modeling/sam.py +++ /dev/null @@ -1,174 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch -from torch import nn -from torch.nn import functional as F - -from typing import Any, Dict, List, Tuple - -from .image_encoder import ImageEncoderViT -from .mask_decoder import MaskDecoder -from .prompt_encoder import PromptEncoder - - -class Sam(nn.Module): - mask_threshold: float = 0.0 - image_format: str = "RGB" - - def __init__( - self, - image_encoder: ImageEncoderViT, - prompt_encoder: PromptEncoder, - mask_decoder: MaskDecoder, - pixel_mean: List[float] = [123.675, 116.28, 103.53], - pixel_std: List[float] = [58.395, 57.12, 57.375], - ) -> None: - """ - SAM predicts object masks from an image and input prompts. - - Arguments: - image_encoder (ImageEncoderViT): The backbone used to encode the - image into image embeddings that allow for efficient mask prediction. - prompt_encoder (PromptEncoder): Encodes various types of input prompts. - mask_decoder (MaskDecoder): Predicts masks from the image embeddings - and encoded prompts. - pixel_mean (list(float)): Mean values for normalizing pixels in the input image. - pixel_std (list(float)): Std values for normalizing pixels in the input image. - """ - super().__init__() - self.image_encoder = image_encoder - self.prompt_encoder = prompt_encoder - self.mask_decoder = mask_decoder - self.register_buffer("pixel_mean", torch.Tensor(pixel_mean).view(-1, 1, 1), False) - self.register_buffer("pixel_std", torch.Tensor(pixel_std).view(-1, 1, 1), False) - - @property - def device(self) -> Any: - return self.pixel_mean.device - - @torch.no_grad() - def forward( - self, - batched_input: List[Dict[str, Any]], - multimask_output: bool, - ) -> List[Dict[str, torch.Tensor]]: - """ - Predicts masks end-to-end from provided images and prompts. - If prompts are not known in advance, using SamPredictor is - recommended over calling the model directly. - - Arguments: - batched_input (list(dict)): A list over input images, each a - dictionary with the following keys. A prompt key can be - excluded if it is not present. - 'image': The image as a torch tensor in 3xHxW format, - already transformed for input to the model. - 'original_size': (tuple(int, int)) The original size of - the image before transformation, as (H, W). - 'point_coords': (torch.Tensor) Batched point prompts for - this image, with shape BxNx2. Already transformed to the - input frame of the model. - 'point_labels': (torch.Tensor) Batched labels for point prompts, - with shape BxN. - 'boxes': (torch.Tensor) Batched box inputs, with shape Bx4. - Already transformed to the input frame of the model. - 'mask_inputs': (torch.Tensor) Batched mask inputs to the model, - in the form Bx1xHxW. - multimask_output (bool): Whether the model should predict multiple - disambiguating masks, or return a single mask. - - Returns: - (list(dict)): A list over input images, where each element is - as dictionary with the following keys. 
- 'masks': (torch.Tensor) Batched binary mask predictions, - with shape BxCxHxW, where B is the number of input promts, - C is determiend by multimask_output, and (H, W) is the - original size of the image. - 'iou_predictions': (torch.Tensor) The model's predictions - of mask quality, in shape BxC. - 'low_res_logits': (torch.Tensor) Low resolution logits with - shape BxCxHxW, where H=W=256. Can be passed as mask input - to subsequent iterations of prediction. - """ - input_images = torch.stack([self.preprocess(x["image"]) for x in batched_input], dim=0) - image_embeddings = self.image_encoder(input_images) - - outputs = [] - for image_record, curr_embedding in zip(batched_input, image_embeddings): - if "point_coords" in image_record: - points = (image_record["point_coords"], image_record["point_labels"]) - else: - points = None - sparse_embeddings, dense_embeddings = self.prompt_encoder( - points=points, - boxes=image_record.get("boxes", None), - masks=image_record.get("mask_inputs", None), - ) - low_res_masks, iou_predictions = self.mask_decoder( - image_embeddings=curr_embedding.unsqueeze(0), - image_pe=self.prompt_encoder.get_dense_pe(), - sparse_prompt_embeddings=sparse_embeddings, - dense_prompt_embeddings=dense_embeddings, - multimask_output=multimask_output, - ) - masks = self.postprocess_masks( - low_res_masks, - input_size=image_record["image"].shape[-2:], - original_size=image_record["original_size"], - ) - masks = masks > self.mask_threshold - outputs.append( - { - "masks": masks, - "iou_predictions": iou_predictions, - "low_res_logits": low_res_masks, - } - ) - return outputs - - def postprocess_masks( - self, - masks: torch.Tensor, - input_size: Tuple[int, ...], - original_size: Tuple[int, ...], - ) -> torch.Tensor: - """ - Remove padding and upscale masks to the original image size. - - Arguments: - masks (torch.Tensor): Batched masks from the mask_decoder, - in BxCxHxW format. - input_size (tuple(int, int)): The size of the image input to the - model, in (H, W) format. Used to remove padding. - original_size (tuple(int, int)): The original size of the image - before resizing for input to the model, in (H, W) format. - - Returns: - (torch.Tensor): Batched masks in BxCxHxW format, where (H, W) - is given by original_size. 
- """ - masks = F.interpolate( - masks, - (self.image_encoder.img_size, self.image_encoder.img_size), - mode="bilinear", - align_corners=False, - ) - masks = masks[..., : input_size[0], : input_size[1]] - masks = F.interpolate(masks, original_size, mode="bilinear", align_corners=False) - return masks - - def preprocess(self, x: torch.Tensor) -> torch.Tensor: - """Normalize pixel values and pad to a square input.""" - # Normalize colors - x = (x - self.pixel_mean) / self.pixel_std - - # Pad - h, w = x.shape[-2:] - padh = self.image_encoder.img_size - h - padw = self.image_encoder.img_size - w - x = F.pad(x, (0, padw, 0, padh)) - return x diff --git a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/loggers/wandb/wandb_utils.py b/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/loggers/wandb/wandb_utils.py deleted file mode 100644 index 238f4edbf2a0ddf34c024fbb6775c71dd19e18aa..0000000000000000000000000000000000000000 --- a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/loggers/wandb/wandb_utils.py +++ /dev/null @@ -1,589 +0,0 @@ -"""Utilities and tools for tracking runs with Weights & Biases.""" - -import logging -import os -import sys -from contextlib import contextmanager -from pathlib import Path -from typing import Dict - -import yaml -from tqdm import tqdm - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[3] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH - -from utils.dataloaders import LoadImagesAndLabels, img2label_paths -from utils.general import LOGGER, check_dataset, check_file - -try: - import wandb - - assert hasattr(wandb, '__version__') # verify package import not local dir -except (ImportError, AssertionError): - wandb = None - -RANK = int(os.getenv('RANK', -1)) -WANDB_ARTIFACT_PREFIX = 'wandb-artifact://' - - -def remove_prefix(from_string, prefix=WANDB_ARTIFACT_PREFIX): - return from_string[len(prefix):] - - -def check_wandb_config_file(data_config_file): - wandb_config = '_wandb.'.join(data_config_file.rsplit('.', 1)) # updated data.yaml path - if Path(wandb_config).is_file(): - return wandb_config - return data_config_file - - -def check_wandb_dataset(data_file): - is_trainset_wandb_artifact = False - is_valset_wandb_artifact = False - if isinstance(data_file, dict): - # In that case another dataset manager has already processed it and we don't have to - return data_file - if check_file(data_file) and data_file.endswith('.yaml'): - with open(data_file, errors='ignore') as f: - data_dict = yaml.safe_load(f) - is_trainset_wandb_artifact = isinstance(data_dict['train'], - str) and data_dict['train'].startswith(WANDB_ARTIFACT_PREFIX) - is_valset_wandb_artifact = isinstance(data_dict['val'], - str) and data_dict['val'].startswith(WANDB_ARTIFACT_PREFIX) - if is_trainset_wandb_artifact or is_valset_wandb_artifact: - return data_dict - else: - return check_dataset(data_file) - - -def get_run_info(run_path): - run_path = Path(remove_prefix(run_path, WANDB_ARTIFACT_PREFIX)) - run_id = run_path.stem - project = run_path.parent.stem - entity = run_path.parent.parent.stem - model_artifact_name = 'run_' + run_id + '_model' - return entity, project, run_id, model_artifact_name - - -def check_wandb_resume(opt): - process_wandb_config_ddp_mode(opt) if RANK not in [-1, 0] else None - if isinstance(opt.resume, str): - if opt.resume.startswith(WANDB_ARTIFACT_PREFIX): - if RANK not in [-1, 0]: # For resuming DDP runs - entity, project, run_id, model_artifact_name = get_run_info(opt.resume) - api = wandb.Api() - 
artifact = api.artifact(entity + '/' + project + '/' + model_artifact_name + ':latest') - modeldir = artifact.download() - opt.weights = str(Path(modeldir) / "last.pt") - return True - return None - - -def process_wandb_config_ddp_mode(opt): - with open(check_file(opt.data), errors='ignore') as f: - data_dict = yaml.safe_load(f) # data dict - train_dir, val_dir = None, None - if isinstance(data_dict['train'], str) and data_dict['train'].startswith(WANDB_ARTIFACT_PREFIX): - api = wandb.Api() - train_artifact = api.artifact(remove_prefix(data_dict['train']) + ':' + opt.artifact_alias) - train_dir = train_artifact.download() - train_path = Path(train_dir) / 'data/images/' - data_dict['train'] = str(train_path) - - if isinstance(data_dict['val'], str) and data_dict['val'].startswith(WANDB_ARTIFACT_PREFIX): - api = wandb.Api() - val_artifact = api.artifact(remove_prefix(data_dict['val']) + ':' + opt.artifact_alias) - val_dir = val_artifact.download() - val_path = Path(val_dir) / 'data/images/' - data_dict['val'] = str(val_path) - if train_dir or val_dir: - ddp_data_path = str(Path(val_dir) / 'wandb_local_data.yaml') - with open(ddp_data_path, 'w') as f: - yaml.safe_dump(data_dict, f) - opt.data = ddp_data_path - - -class WandbLogger(): - """Log training runs, datasets, models, and predictions to Weights & Biases. - - This logger sends information to W&B at wandb.ai. By default, this information - includes hyperparameters, system configuration and metrics, model metrics, - and basic data metrics and analyses. - - By providing additional command line arguments to train.py, datasets, - models and predictions can also be logged. - - For more on how this logger is used, see the Weights & Biases documentation: - https://docs.wandb.com/guides/integrations/yolov5 - """ - - def __init__(self, opt, run_id=None, job_type='Training'): - """ - - Initialize WandbLogger instance - - Upload dataset if opt.upload_dataset is True - - Setup training processes if job_type is 'Training' - - arguments: - opt (namespace) -- Commandline arguments for this run - run_id (str) -- Run ID of W&B run to be resumed - job_type (str) -- To set the job_type for this run - - """ - # Temporary-fix - if opt.upload_dataset: - opt.upload_dataset = False - # LOGGER.info("Uploading Dataset functionality is not being supported temporarily due to a bug.") - - # Pre-training routine -- - self.job_type = job_type - self.wandb, self.wandb_run = wandb, None if not wandb else wandb.run - self.val_artifact, self.train_artifact = None, None - self.train_artifact_path, self.val_artifact_path = None, None - self.result_artifact = None - self.val_table, self.result_table = None, None - self.bbox_media_panel_images = [] - self.val_table_path_map = None - self.max_imgs_to_log = 16 - self.wandb_artifact_data_dict = None - self.data_dict = None - # It's more elegant to stick to 1 wandb.init call, - # but useful config data is overwritten in the WandbLogger's wandb.init call - if isinstance(opt.resume, str): # checks resume from artifact - if opt.resume.startswith(WANDB_ARTIFACT_PREFIX): - entity, project, run_id, model_artifact_name = get_run_info(opt.resume) - model_artifact_name = WANDB_ARTIFACT_PREFIX + model_artifact_name - assert wandb, 'install wandb to resume wandb runs' - # Resume wandb-artifact:// runs here| workaround for not overwriting wandb.config - self.wandb_run = wandb.init(id=run_id, - project=project, - entity=entity, - resume='allow', - allow_val_change=True) - opt.resume = model_artifact_name - elif self.wandb: - self.wandb_run = 
wandb.init(config=opt, - resume="allow", - project='YOLOv5' if opt.project == 'runs/train' else Path(opt.project).stem, - entity=opt.entity, - name=opt.name if opt.name != 'exp' else None, - job_type=job_type, - id=run_id, - allow_val_change=True) if not wandb.run else wandb.run - if self.wandb_run: - if self.job_type == 'Training': - if opt.upload_dataset: - if not opt.resume: - self.wandb_artifact_data_dict = self.check_and_upload_dataset(opt) - - if isinstance(opt.data, dict): - # This means another dataset manager has already processed the dataset info (e.g. ClearML) - # and they will have stored the already processed dict in opt.data - self.data_dict = opt.data - elif opt.resume: - # resume from artifact - if isinstance(opt.resume, str) and opt.resume.startswith(WANDB_ARTIFACT_PREFIX): - self.data_dict = dict(self.wandb_run.config.data_dict) - else: # local resume - self.data_dict = check_wandb_dataset(opt.data) - else: - self.data_dict = check_wandb_dataset(opt.data) - self.wandb_artifact_data_dict = self.wandb_artifact_data_dict or self.data_dict - - # write data_dict to config. useful for resuming from artifacts. Do this only when not resuming. - self.wandb_run.config.update({'data_dict': self.wandb_artifact_data_dict}, allow_val_change=True) - self.setup_training(opt) - - if self.job_type == 'Dataset Creation': - self.wandb_run.config.update({"upload_dataset": True}) - self.data_dict = self.check_and_upload_dataset(opt) - - def check_and_upload_dataset(self, opt): - """ - Check if the dataset format is compatible and upload it as W&B artifact - - arguments: - opt (namespace)-- Commandline arguments for current run - - returns: - Updated dataset info dictionary where local dataset paths are replaced by WAND_ARFACT_PREFIX links. - """ - assert wandb, 'Install wandb to upload dataset' - config_path = self.log_dataset_artifact(opt.data, opt.single_cls, - 'YOLOv5' if opt.project == 'runs/train' else Path(opt.project).stem) - with open(config_path, errors='ignore') as f: - wandb_data_dict = yaml.safe_load(f) - return wandb_data_dict - - def setup_training(self, opt): - """ - Setup the necessary processes for training YOLO models: - - Attempt to download model checkpoint and dataset artifacts if opt.resume stats with WANDB_ARTIFACT_PREFIX - - Update data_dict, to contain info of previous run if resumed and the paths of dataset artifact if downloaded - - Setup log_dict, initialize bbox_interval - - arguments: - opt (namespace) -- commandline arguments for this run - - """ - self.log_dict, self.current_epoch = {}, 0 - self.bbox_interval = opt.bbox_interval - if isinstance(opt.resume, str): - modeldir, _ = self.download_model_artifact(opt) - if modeldir: - self.weights = Path(modeldir) / "last.pt" - config = self.wandb_run.config - opt.weights, opt.save_period, opt.batch_size, opt.bbox_interval, opt.epochs, opt.hyp, opt.imgsz = str( - self.weights), config.save_period, config.batch_size, config.bbox_interval, config.epochs,\ - config.hyp, config.imgsz - data_dict = self.data_dict - if self.val_artifact is None: # If --upload_dataset is set, use the existing artifact, don't download - self.train_artifact_path, self.train_artifact = self.download_dataset_artifact( - data_dict.get('train'), opt.artifact_alias) - self.val_artifact_path, self.val_artifact = self.download_dataset_artifact( - data_dict.get('val'), opt.artifact_alias) - - if self.train_artifact_path is not None: - train_path = Path(self.train_artifact_path) / 'data/images/' - data_dict['train'] = str(train_path) - if 
self.val_artifact_path is not None: - val_path = Path(self.val_artifact_path) / 'data/images/' - data_dict['val'] = str(val_path) - - if self.val_artifact is not None: - self.result_artifact = wandb.Artifact("run_" + wandb.run.id + "_progress", "evaluation") - columns = ["epoch", "id", "ground truth", "prediction"] - columns.extend(self.data_dict['names']) - self.result_table = wandb.Table(columns) - self.val_table = self.val_artifact.get("val") - if self.val_table_path_map is None: - self.map_val_table_path() - if opt.bbox_interval == -1: - self.bbox_interval = opt.bbox_interval = (opt.epochs // 10) if opt.epochs > 10 else 1 - if opt.evolve or opt.noplots: - self.bbox_interval = opt.bbox_interval = opt.epochs + 1 # disable bbox_interval - train_from_artifact = self.train_artifact_path is not None and self.val_artifact_path is not None - # Update the the data_dict to point to local artifacts dir - if train_from_artifact: - self.data_dict = data_dict - - def download_dataset_artifact(self, path, alias): - """ - download the model checkpoint artifact if the path starts with WANDB_ARTIFACT_PREFIX - - arguments: - path -- path of the dataset to be used for training - alias (str)-- alias of the artifact to be download/used for training - - returns: - (str, wandb.Artifact) -- path of the downladed dataset and it's corresponding artifact object if dataset - is found otherwise returns (None, None) - """ - if isinstance(path, str) and path.startswith(WANDB_ARTIFACT_PREFIX): - artifact_path = Path(remove_prefix(path, WANDB_ARTIFACT_PREFIX) + ":" + alias) - dataset_artifact = wandb.use_artifact(artifact_path.as_posix().replace("\\", "/")) - assert dataset_artifact is not None, "'Error: W&B dataset artifact doesn\'t exist'" - datadir = dataset_artifact.download() - return datadir, dataset_artifact - return None, None - - def download_model_artifact(self, opt): - """ - download the model checkpoint artifact if the resume path starts with WANDB_ARTIFACT_PREFIX - - arguments: - opt (namespace) -- Commandline arguments for this run - """ - if opt.resume.startswith(WANDB_ARTIFACT_PREFIX): - model_artifact = wandb.use_artifact(remove_prefix(opt.resume, WANDB_ARTIFACT_PREFIX) + ":latest") - assert model_artifact is not None, 'Error: W&B model artifact doesn\'t exist' - modeldir = model_artifact.download() - # epochs_trained = model_artifact.metadata.get('epochs_trained') - total_epochs = model_artifact.metadata.get('total_epochs') - is_finished = total_epochs is None - assert not is_finished, 'training is finished, can only resume incomplete runs.' - return modeldir, model_artifact - return None, None - - def log_model(self, path, opt, epoch, fitness_score, best_model=False): - """ - Log the model checkpoint as W&B artifact - - arguments: - path (Path) -- Path of directory containing the checkpoints - opt (namespace) -- Command line arguments for this run - epoch (int) -- Current epoch number - fitness_score (float) -- fitness score for current epoch - best_model (boolean) -- Boolean representing if the current checkpoint is the best yet. 
- """ - model_artifact = wandb.Artifact('run_' + wandb.run.id + '_model', - type='model', - metadata={ - 'original_url': str(path), - 'epochs_trained': epoch + 1, - 'save period': opt.save_period, - 'project': opt.project, - 'total_epochs': opt.epochs, - 'fitness_score': fitness_score}) - model_artifact.add_file(str(path / 'last.pt'), name='last.pt') - wandb.log_artifact(model_artifact, - aliases=['latest', 'last', 'epoch ' + str(self.current_epoch), 'best' if best_model else '']) - LOGGER.info(f"Saving model artifact on epoch {epoch + 1}") - - def log_dataset_artifact(self, data_file, single_cls, project, overwrite_config=False): - """ - Log the dataset as W&B artifact and return the new data file with W&B links - - arguments: - data_file (str) -- the .yaml file with information about the dataset like - path, classes etc. - single_class (boolean) -- train multi-class data as single-class - project (str) -- project name. Used to construct the artifact path - overwrite_config (boolean) -- overwrites the data.yaml file if set to true otherwise creates a new - file with _wandb postfix. Eg -> data_wandb.yaml - - returns: - the new .yaml file with artifact links. it can be used to start training directly from artifacts - """ - upload_dataset = self.wandb_run.config.upload_dataset - log_val_only = isinstance(upload_dataset, str) and upload_dataset == 'val' - self.data_dict = check_dataset(data_file) # parse and check - data = dict(self.data_dict) - nc, names = (1, ['item']) if single_cls else (int(data['nc']), data['names']) - names = {k: v for k, v in enumerate(names)} # to index dictionary - - # log train set - if not log_val_only: - self.train_artifact = self.create_dataset_table(LoadImagesAndLabels(data['train'], rect=True, batch_size=1), - names, - name='train') if data.get('train') else None - if data.get('train'): - data['train'] = WANDB_ARTIFACT_PREFIX + str(Path(project) / 'train') - - self.val_artifact = self.create_dataset_table( - LoadImagesAndLabels(data['val'], rect=True, batch_size=1), names, name='val') if data.get('val') else None - if data.get('val'): - data['val'] = WANDB_ARTIFACT_PREFIX + str(Path(project) / 'val') - - path = Path(data_file) - # create a _wandb.yaml file with artifacts links if both train and test set are logged - if not log_val_only: - path = (path.stem if overwrite_config else path.stem + '_wandb') + '.yaml' # updated data.yaml path - path = ROOT / 'data' / path - data.pop('download', None) - data.pop('path', None) - with open(path, 'w') as f: - yaml.safe_dump(data, f) - LOGGER.info(f"Created dataset config file {path}") - - if self.job_type == 'Training': # builds correct artifact pipeline graph - if not log_val_only: - self.wandb_run.log_artifact( - self.train_artifact) # calling use_artifact downloads the dataset. NOT NEEDED! - self.wandb_run.use_artifact(self.val_artifact) - self.val_artifact.wait() - self.val_table = self.val_artifact.get('val') - self.map_val_table_path() - else: - self.wandb_run.log_artifact(self.train_artifact) - self.wandb_run.log_artifact(self.val_artifact) - return path - - def map_val_table_path(self): - """ - Map the validation dataset Table like name of file -> it's id in the W&B Table. - Useful for - referencing artifacts for evaluation. 
- """ - self.val_table_path_map = {} - LOGGER.info("Mapping dataset") - for i, data in enumerate(tqdm(self.val_table.data)): - self.val_table_path_map[data[3]] = data[0] - - def create_dataset_table(self, dataset: LoadImagesAndLabels, class_to_id: Dict[int, str], name: str = 'dataset'): - """ - Create and return W&B artifact containing W&B Table of the dataset. - - arguments: - dataset -- instance of LoadImagesAndLabels class used to iterate over the data to build Table - class_to_id -- hash map that maps class ids to labels - name -- name of the artifact - - returns: - dataset artifact to be logged or used - """ - # TODO: Explore multiprocessing to slpit this loop parallely| This is essential for speeding up the the logging - artifact = wandb.Artifact(name=name, type="dataset") - img_files = tqdm([dataset.path]) if isinstance(dataset.path, str) and Path(dataset.path).is_dir() else None - img_files = tqdm(dataset.im_files) if not img_files else img_files - for img_file in img_files: - if Path(img_file).is_dir(): - artifact.add_dir(img_file, name='data/images') - labels_path = 'labels'.join(dataset.path.rsplit('images', 1)) - artifact.add_dir(labels_path, name='data/labels') - else: - artifact.add_file(img_file, name='data/images/' + Path(img_file).name) - label_file = Path(img2label_paths([img_file])[0]) - artifact.add_file(str(label_file), name='data/labels/' + - label_file.name) if label_file.exists() else None - table = wandb.Table(columns=["id", "train_image", "Classes", "name"]) - class_set = wandb.Classes([{'id': id, 'name': name} for id, name in class_to_id.items()]) - for si, (img, labels, paths, shapes) in enumerate(tqdm(dataset)): - box_data, img_classes = [], {} - for cls, *xywh in labels[:, 1:].tolist(): - cls = int(cls) - box_data.append({ - "position": { - "middle": [xywh[0], xywh[1]], - "width": xywh[2], - "height": xywh[3]}, - "class_id": cls, - "box_caption": "%s" % (class_to_id[cls])}) - img_classes[cls] = class_to_id[cls] - boxes = {"ground_truth": {"box_data": box_data, "class_labels": class_to_id}} # inference-space - table.add_data(si, wandb.Image(paths, classes=class_set, boxes=boxes), list(img_classes.values()), - Path(paths).name) - artifact.add(table, name) - return artifact - - def log_training_progress(self, predn, path, names): - """ - Build evaluation Table. Uses reference from validation dataset table. 
- - arguments: - predn (list): list of predictions in the native space in the format - [xmin, ymin, xmax, ymax, confidence, class] - path (str): local path of the current evaluation image - names (dict(int, str)): hash map that maps class ids to labels - """ - class_set = wandb.Classes([{'id': id, 'name': name} for id, name in names.items()]) - box_data = [] - avg_conf_per_class = [0] * len(self.data_dict['names']) - pred_class_count = {} - for *xyxy, conf, cls in predn.tolist(): - if conf >= 0.25: - cls = int(cls) - box_data.append({ - "position": { - "minX": xyxy[0], - "minY": xyxy[1], - "maxX": xyxy[2], - "maxY": xyxy[3]}, - "class_id": cls, - "box_caption": f"{names[cls]} {conf:.3f}", - "scores": { - "class_score": conf}, - "domain": "pixel"}) - avg_conf_per_class[cls] += conf - - if cls in pred_class_count: - pred_class_count[cls] += 1 - else: - pred_class_count[cls] = 1 - - for pred_class in pred_class_count.keys(): - avg_conf_per_class[pred_class] = avg_conf_per_class[pred_class] / pred_class_count[pred_class] - - boxes = {"predictions": {"box_data": box_data, "class_labels": names}} # inference-space - id = self.val_table_path_map[Path(path).name] - self.result_table.add_data(self.current_epoch, id, self.val_table.data[id][1], - wandb.Image(self.val_table.data[id][1], boxes=boxes, classes=class_set), - *avg_conf_per_class) - - def val_one_image(self, pred, predn, path, names, im): - """ - Log validation data for one image. updates the result Table if validation dataset is uploaded and log bbox media panel - - arguments: - pred (list): list of scaled predictions in the format - [xmin, ymin, xmax, ymax, confidence, class] - predn (list): list of predictions in the native space - [xmin, ymin, xmax, ymax, confidence, class] - path (str): local path of the current evaluation image - """ - if self.val_table and self.result_table: # Log Table if Val dataset is uploaded as artifact - self.log_training_progress(predn, path, names) - - if len(self.bbox_media_panel_images) < self.max_imgs_to_log and self.current_epoch > 0: - if self.current_epoch % self.bbox_interval == 0: - box_data = [{ - "position": { - "minX": xyxy[0], - "minY": xyxy[1], - "maxX": xyxy[2], - "maxY": xyxy[3]}, - "class_id": int(cls), - "box_caption": f"{names[int(cls)]} {conf:.3f}", - "scores": { - "class_score": conf}, - "domain": "pixel"} for *xyxy, conf, cls in pred.tolist()] - boxes = {"predictions": {"box_data": box_data, "class_labels": names}} # inference-space - self.bbox_media_panel_images.append(wandb.Image(im, boxes=boxes, caption=path.name)) - - def log(self, log_dict): - """ - save the metrics to the logging dictionary - - arguments: - log_dict (Dict) -- metrics/media to be logged in current step - """ - if self.wandb_run: - for key, value in log_dict.items(): - self.log_dict[key] = value - - def end_epoch(self, best_result=False): - """ - commit the log_dict, model artifacts and Tables to W&B and flush the log_dict. - - arguments: - best_result (boolean): Boolean representing if the result of this evaluation is best or not - """ - if self.wandb_run: - with all_logging_disabled(): - if self.bbox_media_panel_images: - self.log_dict["BoundingBoxDebugger"] = self.bbox_media_panel_images - try: - wandb.log(self.log_dict) - except BaseException as e: - LOGGER.info( - f"An error occurred in wandb logger. The training will proceed without interruption. 
More info\n{e}" - ) - self.wandb_run.finish() - self.wandb_run = None - - self.log_dict = {} - self.bbox_media_panel_images = [] - if self.result_artifact: - self.result_artifact.add(self.result_table, 'result') - wandb.log_artifact(self.result_artifact, - aliases=[ - 'latest', 'last', 'epoch ' + str(self.current_epoch), - ('best' if best_result else '')]) - - wandb.log({"evaluation": self.result_table}) - columns = ["epoch", "id", "ground truth", "prediction"] - columns.extend(self.data_dict['names']) - self.result_table = wandb.Table(columns) - self.result_artifact = wandb.Artifact("run_" + wandb.run.id + "_progress", "evaluation") - - def finish_run(self): - """ - Log metrics if any and finish the current W&B run - """ - if self.wandb_run: - if self.log_dict: - with all_logging_disabled(): - wandb.log(self.log_dict) - wandb.run.finish() - - -@contextmanager -def all_logging_disabled(highest_level=logging.CRITICAL): - """ source - https://gist.github.com/simon-weber/7853144 - A context manager that will prevent any logging messages triggered during the body from being processed. - :param highest_level: the maximum logging level in use. - This would only need to be changed if a custom level greater than CRITICAL is defined. - """ - previous_level = logging.root.manager.disable - logging.disable(highest_level) - try: - yield - finally: - logging.disable(previous_level) diff --git a/spaces/Illumotion/Koboldcpp/build-info.h b/spaces/Illumotion/Koboldcpp/build-info.h deleted file mode 100644 index 6428ea9753921c461b9c31aa5693a79579a37cfe..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/build-info.h +++ /dev/null @@ -1,9 +0,0 @@ -#ifndef BUILD_INFO_H -#define BUILD_INFO_H - -#define BUILD_NUMBER 999 -#define BUILD_COMMIT "KOBOLDCPP" -#define BUILD_COMPILER "KCPP" -#define BUILD_TARGET "KCPP" - -#endif // BUILD_INFO_H diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/evaluation/masks/__init__.py b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/evaluation/masks/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Intae/deepfake/training/zoo/unet.py b/spaces/Intae/deepfake/training/zoo/unet.py deleted file mode 100644 index e190ea7f341a176d44aa5da472a2753502163b77..0000000000000000000000000000000000000000 --- a/spaces/Intae/deepfake/training/zoo/unet.py +++ /dev/null @@ -1,151 +0,0 @@ -from functools import partial - -import torch -from timm.models.efficientnet import tf_efficientnet_b3_ns, tf_efficientnet_b5_ns -from torch import nn -from torch.nn import Dropout2d, Conv2d -from torch.nn.modules.dropout import Dropout -from torch.nn.modules.linear import Linear -from torch.nn.modules.pooling import AdaptiveAvgPool2d -from torch.nn.modules.upsampling import UpsamplingBilinear2d - -encoder_params = { - "tf_efficientnet_b3_ns": { - "features": 1536, - "filters": [40, 32, 48, 136, 1536], - "decoder_filters": [64, 128, 256, 256], - "init_op": partial(tf_efficientnet_b3_ns, pretrained=True, drop_path_rate=0.2) - }, - "tf_efficientnet_b5_ns": { - "features": 2048, - "filters": [48, 40, 64, 176, 2048], - "decoder_filters": [64, 128, 256, 256], - "init_op": partial(tf_efficientnet_b5_ns, pretrained=True, drop_path_rate=0.2) - }, -} - - -class DecoderBlock(nn.Module): - def __init__(self, in_channels, out_channels): - super().__init__() - self.layer = nn.Sequential( - nn.Upsample(scale_factor=2), - nn.Conv2d(in_channels, out_channels, 3, 
padding=1), - nn.ReLU(inplace=True) - ) - - def forward(self, x): - return self.layer(x) - - -class ConcatBottleneck(nn.Module): - def __init__(self, in_channels, out_channels): - super().__init__() - self.seq = nn.Sequential( - nn.Conv2d(in_channels, out_channels, 3, padding=1), - nn.ReLU(inplace=True) - ) - - def forward(self, dec, enc): - x = torch.cat([dec, enc], dim=1) - return self.seq(x) - - -class Decoder(nn.Module): - def __init__(self, decoder_filters, filters, upsample_filters=None, - decoder_block=DecoderBlock, bottleneck=ConcatBottleneck, dropout=0): - super().__init__() - self.decoder_filters = decoder_filters - self.filters = filters - self.decoder_block = decoder_block - self.decoder_stages = nn.ModuleList([self._get_decoder(idx) for idx in range(0, len(decoder_filters))]) - self.bottlenecks = nn.ModuleList([bottleneck(self.filters[-i - 2] + f, f) - for i, f in enumerate(reversed(decoder_filters))]) - self.dropout = Dropout2d(dropout) if dropout > 0 else None - self.last_block = None - if upsample_filters: - self.last_block = decoder_block(decoder_filters[0], out_channels=upsample_filters) - else: - self.last_block = UpsamplingBilinear2d(scale_factor=2) - - def forward(self, encoder_results: list): - x = encoder_results[0] - bottlenecks = self.bottlenecks - for idx, bottleneck in enumerate(bottlenecks): - rev_idx = - (idx + 1) - x = self.decoder_stages[rev_idx](x) - x = bottleneck(x, encoder_results[-rev_idx]) - if self.last_block: - x = self.last_block(x) - if self.dropout: - x = self.dropout(x) - return x - - def _get_decoder(self, layer): - idx = layer + 1 - if idx == len(self.decoder_filters): - in_channels = self.filters[idx] - else: - in_channels = self.decoder_filters[idx] - return self.decoder_block(in_channels, self.decoder_filters[max(layer, 0)]) - - -def _initialize_weights(module): - for m in module.modules(): - if isinstance(m, nn.Conv2d) or isinstance(m, nn.ConvTranspose2d) or isinstance(m, nn.Linear): - m.weight.data = nn.init.kaiming_normal_(m.weight.data) - if m.bias is not None: - m.bias.data.zero_() - elif isinstance(m, nn.BatchNorm2d): - m.weight.data.fill_(1) - m.bias.data.zero_() - - -class EfficientUnetClassifier(nn.Module): - def __init__(self, encoder, dropout_rate=0.5) -> None: - super().__init__() - self.decoder = Decoder(decoder_filters=encoder_params[encoder]["decoder_filters"], - filters=encoder_params[encoder]["filters"]) - self.avg_pool = AdaptiveAvgPool2d((1, 1)) - self.dropout = Dropout(dropout_rate) - self.fc = Linear(encoder_params[encoder]["features"], 1) - self.final = Conv2d(encoder_params[encoder]["decoder_filters"][0], out_channels=1, kernel_size=1, bias=False) - _initialize_weights(self) - self.encoder = encoder_params[encoder]["init_op"]() - - def get_encoder_features(self, x): - encoder_results = [] - x = self.encoder.conv_stem(x) - x = self.encoder.bn1(x) - x = self.encoder.act1(x) - encoder_results.append(x) - x = self.encoder.blocks[:2](x) - encoder_results.append(x) - x = self.encoder.blocks[2:3](x) - encoder_results.append(x) - x = self.encoder.blocks[3:5](x) - encoder_results.append(x) - x = self.encoder.blocks[5:](x) - x = self.encoder.conv_head(x) - x = self.encoder.bn2(x) - x = self.encoder.act2(x) - encoder_results.append(x) - encoder_results = list(reversed(encoder_results)) - return encoder_results - - def forward(self, x): - encoder_results = self.get_encoder_features(x) - seg = self.final(self.decoder(encoder_results)) - x = encoder_results[0] - x = self.avg_pool(x).flatten(1) - x = self.dropout(x) - x = self.fc(x) - 
return x, seg - - -if __name__ == '__main__': - model = EfficientUnetClassifier("tf_efficientnet_b5_ns") - model.eval() - with torch.no_grad(): - input = torch.rand(4, 3, 224, 224) - print(model(input)) diff --git a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/model/convert_fp16.py b/spaces/Intel/NeuralChat-ICX-INT4/fastchat/model/convert_fp16.py deleted file mode 100644 index efc40aa83bf3a85129a668387df86a41d925f13d..0000000000000000000000000000000000000000 --- a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/model/convert_fp16.py +++ /dev/null @@ -1,26 +0,0 @@ -""" -Usage: -python3 -m fastchat.model.convert_fp16 --in in-folder --out out-folder -""" -import argparse - -from transformers import AutoTokenizer, AutoModelForCausalLM -import torch - - -def convert_fp16(in_checkpoint, out_checkpoint): - tokenizer = AutoTokenizer.from_pretrained(in_checkpoint, use_fast=False) - model = AutoModelForCausalLM.from_pretrained( - in_checkpoint, torch_dtype=torch.float16, low_cpu_mem_usage=True - ) - model.save_pretrained(out_checkpoint) - tokenizer.save_pretrained(out_checkpoint) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--in-checkpoint", type=str, help="Path to the model") - parser.add_argument("--out-checkpoint", type=str, help="Path to the output model") - args = parser.parse_args() - - convert_fp16(args.in_checkpoint, args.out_checkpoint) diff --git a/spaces/JKLUCY99/voice-cloning/app.py b/spaces/JKLUCY99/voice-cloning/app.py deleted file mode 100644 index 169883a7a4093c827878bea9819bf2875406b8a5..0000000000000000000000000000000000000000 --- a/spaces/JKLUCY99/voice-cloning/app.py +++ /dev/null @@ -1,229 +0,0 @@ -import json -import os -import subprocess -from pathlib import Path - -import gradio as gr -import librosa -import numpy as np -import torch -from demucs.apply import apply_model -from demucs.pretrained import DEFAULT_MODEL, get_model -from huggingface_hub import hf_hub_download, list_repo_files - -from so_vits_svc_fork.hparams import HParams -from so_vits_svc_fork.inference.core import Svc - - -################################################################### -# REPLACE THESE VALUES TO CHANGE THE MODEL REPO/CKPT NAME/SETTINGS -################################################################### -# The Hugging Face Hub repo ID -repo_id = "dog/kanye" - -# If None, Uses latest ckpt in the repo -ckpt_name = None - -# If None, Uses "kmeans.pt" if it exists in the repo -cluster_model_name = None - -# Set the default f0 type to use - use the one it was trained on. -# The default for so-vits-svc-fork is "dio". -# Options: "crepe", "crepe-tiny", "parselmouth", "dio", "harvest" -default_f0_method = "crepe" - -# The default ratio of cluster inference to SVC inference. -# If cluster_model_name is not found in the repo, this is set to 0. -default_cluster_infer_ratio = 0.5 - -# Limit on duration of audio at inference time. increase if you can -# In this parent app, we set the limit with an env var to 30 seconds -# If you didnt set env var + you go OOM try changing 9e9 to <=300ish -duration_limit = int(os.environ.get("MAX_DURATION_SECONDS", 9e9)) -################################################################### - -# Figure out the latest generator by taking highest value one. -# Ex. 
if the repo has: G_0.pth, G_100.pth, G_200.pth, we'd use G_200.pth -if ckpt_name is None: - latest_id = sorted( - [ - int(Path(x).stem.split("_")[1]) - for x in list_repo_files(repo_id) - if x.startswith("G_") and x.endswith(".pth") - ] - )[-1] - ckpt_name = f"G_{latest_id}.pth" - -cluster_model_name = cluster_model_name or "kmeans.pt" -if cluster_model_name in list_repo_files(repo_id): - print(f"Found Cluster model - Downloading {cluster_model_name} from {repo_id}") - cluster_model_path = hf_hub_download(repo_id, cluster_model_name) -else: - print(f"Could not find {cluster_model_name} in {repo_id}. Using None") - cluster_model_path = None -default_cluster_infer_ratio = default_cluster_infer_ratio if cluster_model_path else 0 - -generator_path = hf_hub_download(repo_id, ckpt_name) -config_path = hf_hub_download(repo_id, "config.json") -hparams = HParams(**json.loads(Path(config_path).read_text())) -speakers = list(hparams.spk.keys()) -device = "cuda" if torch.cuda.is_available() else "cpu" -model = Svc(net_g_path=generator_path, config_path=config_path, device=device, cluster_model_path=cluster_model_path) -demucs_model = get_model(DEFAULT_MODEL) - - -def extract_vocal_demucs(model, filename, sr=44100, device=None, shifts=1, split=True, overlap=0.25, jobs=0): - wav, sr = librosa.load(filename, mono=False, sr=sr) - wav = torch.tensor(wav) - ref = wav.mean(0) - wav = (wav - ref.mean()) / ref.std() - sources = apply_model( - model, wav[None], device=device, shifts=shifts, split=split, overlap=overlap, progress=True, num_workers=jobs - )[0] - sources = sources * ref.std() + ref.mean() - # We take just the vocals stem. I know the vocals for this model are at index -1 - # If using different model, check model.sources.index('vocals') - vocal_wav = sources[-1] - # I did this because its the same normalization the so-vits model required - vocal_wav = vocal_wav / max(1.01 * vocal_wav.abs().max(), 1) - vocal_wav = vocal_wav.numpy() - vocal_wav = librosa.to_mono(vocal_wav) - vocal_wav = vocal_wav.T - instrumental_wav = sources[:-1].sum(0).numpy().T - return vocal_wav, instrumental_wav - - -def download_youtube_clip( - video_identifier, - start_time, - end_time, - output_filename, - num_attempts=5, - url_base="https://www.youtube.com/watch?v=", - quiet=False, - force=False, -): - output_path = Path(output_filename) - if output_path.exists(): - if not force: - return output_path - else: - output_path.unlink() - - quiet = "--quiet --no-warnings" if quiet else "" - command = f""" - yt-dlp {quiet} -x --audio-format wav -f bestaudio -o "{output_filename}" --download-sections "*{start_time}-{end_time}" "{url_base}{video_identifier}" # noqa: E501 - """.strip() - - attempts = 0 - while True: - try: - _ = subprocess.check_output(command, shell=True, stderr=subprocess.STDOUT) - except subprocess.CalledProcessError: - attempts += 1 - if attempts == num_attempts: - return None - else: - break - - if output_path.exists(): - return output_path - else: - return None - - -def predict( - speaker, - audio, - transpose: int = 0, - auto_predict_f0: bool = False, - cluster_infer_ratio: float = 0, - noise_scale: float = 0.4, - f0_method: str = "crepe", - db_thresh: int = -40, - pad_seconds: float = 0.5, - chunk_seconds: float = 0.5, - absolute_thresh: bool = False, -): - audio, _ = librosa.load(audio, sr=model.target_sample, duration=duration_limit) - audio = model.infer_silence( - audio.astype(np.float32), - speaker=speaker, - transpose=transpose, - auto_predict_f0=auto_predict_f0, - 
cluster_infer_ratio=cluster_infer_ratio, - noise_scale=noise_scale, - f0_method=f0_method, - db_thresh=db_thresh, - pad_seconds=pad_seconds, - chunk_seconds=chunk_seconds, - absolute_thresh=absolute_thresh, - ) - return model.target_sample, audio - -SPACE_ID = "nateraw/voice-cloning" -description = f""" -# Attention - This Space may be slow in the shared UI if there is a long queue. To speed it up, you can duplicate and use it with a paid private T4 GPU. - -
    Duplicate Space
    - -#### This app uses models trained with [so-vits-svc-fork](https://github.com/voicepaw/so-vits-svc-fork) to clone a voice. Model currently being used is https://hf.co/{repo_id}. To change the model being served, duplicate the space and update the `repo_id`/other settings in `app.py`. - -#### Train Your Own: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/nateraw/voice-cloning/blob/main/training_so_vits_svc_fork.ipynb) -""".strip() - -article = """ -

    - Github Repo -

    -""".strip() - - -interface_mic = gr.Interface( - predict, - inputs=[ - gr.Dropdown(speakers, value=speakers[0], label="Target Speaker"), - gr.Audio(type="filepath", source="microphone", label="Source Audio"), - gr.Slider(-12, 12, value=0, step=1, label="Transpose (Semitones)"), - gr.Checkbox(False, label="Auto Predict F0"), - gr.Slider(0.0, 1.0, value=default_cluster_infer_ratio, step=0.1, label="cluster infer ratio"), - gr.Slider(0.0, 1.0, value=0.4, step=0.1, label="noise scale"), - gr.Dropdown( - choices=["crepe", "crepe-tiny", "parselmouth", "dio", "harvest"], - value=default_f0_method, - label="f0 method", - ), - ], - outputs="audio", - title="Voice Cloning", - description=description, - article=article, -) -interface_file = gr.Interface( - predict, - inputs=[ - gr.Dropdown(speakers, value=speakers[0], label="Target Speaker"), - gr.Audio(type="filepath", source="upload", label="Source Audio"), - gr.Slider(-12, 12, value=0, step=1, label="Transpose (Semitones)"), - gr.Checkbox(False, label="Auto Predict F0"), - gr.Slider(0.0, 1.0, value=default_cluster_infer_ratio, step=0.1, label="cluster infer ratio"), - gr.Slider(0.0, 1.0, value=0.4, step=0.1, label="noise scale"), - gr.Dropdown( - choices=["crepe", "crepe-tiny", "parselmouth", "dio", "harvest"], - value=default_f0_method, - label="f0 method", - ), - ], - outputs="audio", - title="Voice Cloning", - description=description, - article=article, -) -interface = gr.TabbedInterface( - [interface_mic, interface_file], - ["Clone From Mic", "Clone From File"], -) - - -if __name__ == "__main__": - interface.launch() diff --git a/spaces/JUNGU/VToonify/vtoonify/LICENSE.md b/spaces/JUNGU/VToonify/vtoonify/LICENSE.md deleted file mode 100644 index a7e5837d44361b7aa1d633b9d36783ac838a45bc..0000000000000000000000000000000000000000 --- a/spaces/JUNGU/VToonify/vtoonify/LICENSE.md +++ /dev/null @@ -1,12 +0,0 @@ -# S-Lab License 1.0 - -Copyright 2022 S-Lab - -Redistribution and use for non-commercial purpose in source and binary forms, with or without modification, are permitted provided that the following conditions are met: -1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. -2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. -3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.\ -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -4. 
In the event that redistribution and/or use for commercial purpose in source or binary forms, with or without modification is required, please contact the contributor(s) of the work. - - diff --git a/spaces/Jacks2003/3D_Photo_Inpainting/mesh_tools.py b/spaces/Jacks2003/3D_Photo_Inpainting/mesh_tools.py deleted file mode 100644 index ca2a065e08f41b35358b82e89b5045ad8172c54b..0000000000000000000000000000000000000000 --- a/spaces/Jacks2003/3D_Photo_Inpainting/mesh_tools.py +++ /dev/null @@ -1,1083 +0,0 @@ -import os -import numpy as np -try: - import cynetworkx as netx -except ImportError: - import networkx as netx - -import json -import scipy.misc as misc -#import OpenEXR -import scipy.signal as signal -import matplotlib.pyplot as plt -import cv2 -import scipy.misc as misc -from skimage import io -from functools import partial -from vispy import scene, io -from vispy.scene import visuals -from functools import reduce -# from moviepy.editor import ImageSequenceClip -import scipy.misc as misc -from vispy.visuals.filters import Alpha -import cv2 -from skimage.transform import resize -import copy -import torch -import os -from utils import refine_depth_around_edge, smooth_cntsyn_gap -from utils import require_depth_edge, filter_irrelevant_edge_new, open_small_mask -from skimage.feature import canny -from scipy import ndimage -import time -import transforms3d - -def relabel_node(mesh, nodes, cur_node, new_node): - if cur_node == new_node: - return mesh - mesh.add_node(new_node) - for key, value in nodes[cur_node].items(): - nodes[new_node][key] = value - for ne in mesh.neighbors(cur_node): - mesh.add_edge(new_node, ne) - mesh.remove_node(cur_node) - - return mesh - -def filter_edge(mesh, edge_ccs, config, invalid=False): - context_ccs = [set() for _ in edge_ccs] - mesh_nodes = mesh.nodes - for edge_id, edge_cc in enumerate(edge_ccs): - if config['context_thickness'] == 0: - continue - edge_group = {} - for edge_node in edge_cc: - far_nodes = mesh_nodes[edge_node].get('far') - if far_nodes is None: - continue - for far_node in far_nodes: - context_ccs[edge_id].add(far_node) - if mesh_nodes[far_node].get('edge_id') is not None: - if edge_group.get(mesh_nodes[far_node]['edge_id']) is None: - edge_group[mesh_nodes[far_node]['edge_id']] = set() - edge_group[mesh_nodes[far_node]['edge_id']].add(far_node) - if len(edge_cc) > 2: - for edge_key in [*edge_group.keys()]: - if len(edge_group[edge_key]) == 1: - context_ccs[edge_id].remove([*edge_group[edge_key]][0]) - valid_edge_ccs = [] - for xidx, yy in enumerate(edge_ccs): - if invalid is not True and len(context_ccs[xidx]) > 0: - # if len(context_ccs[xidx]) > 0: - valid_edge_ccs.append(yy) - elif invalid is True and len(context_ccs[xidx]) == 0: - valid_edge_ccs.append(yy) - else: - valid_edge_ccs.append(set()) - # valid_edge_ccs = [yy for xidx, yy in enumerate(edge_ccs) if len(context_ccs[xidx]) > 0] - - return valid_edge_ccs - -def extrapolate(global_mesh, - info_on_pix, - image, - depth, - other_edge_with_id, - edge_map, - edge_ccs, - depth_edge_model, - depth_feat_model, - rgb_feat_model, - config, - direc='right-up'): - h_off, w_off = global_mesh.graph['hoffset'], global_mesh.graph['woffset'] - noext_H, noext_W = global_mesh.graph['noext_H'], global_mesh.graph['noext_W'] - - if "up" in direc.lower() and "-" not in direc.lower(): - all_anchor = [0, h_off + config['context_thickness'], w_off, w_off + noext_W] - global_shift = [all_anchor[0], all_anchor[2]] - mask_anchor = [0, h_off, w_off, w_off + noext_W] - context_anchor = [h_off, h_off + 
config['context_thickness'], w_off, w_off + noext_W] - valid_line_anchor = [h_off, h_off + 1, w_off, w_off + noext_W] - valid_anchor = [min(mask_anchor[0], context_anchor[0]), max(mask_anchor[1], context_anchor[1]), - min(mask_anchor[2], context_anchor[2]), max(mask_anchor[3], context_anchor[3])] - elif "down" in direc.lower() and "-" not in direc.lower(): - all_anchor = [h_off + noext_H - config['context_thickness'], 2 * h_off + noext_H, w_off, w_off + noext_W] - global_shift = [all_anchor[0], all_anchor[2]] - mask_anchor = [h_off + noext_H, 2 * h_off + noext_H, w_off, w_off + noext_W] - context_anchor = [h_off + noext_H - config['context_thickness'], h_off + noext_H, w_off, w_off + noext_W] - valid_line_anchor = [h_off + noext_H - 1, h_off + noext_H, w_off, w_off + noext_W] - valid_anchor = [min(mask_anchor[0], context_anchor[0]), max(mask_anchor[1], context_anchor[1]), - min(mask_anchor[2], context_anchor[2]), max(mask_anchor[3], context_anchor[3])] - elif "left" in direc.lower() and "-" not in direc.lower(): - all_anchor = [h_off, h_off + noext_H, 0, w_off + config['context_thickness']] - global_shift = [all_anchor[0], all_anchor[2]] - mask_anchor = [h_off, h_off + noext_H, 0, w_off] - context_anchor = [h_off, h_off + noext_H, w_off, w_off + config['context_thickness']] - valid_line_anchor = [h_off, h_off + noext_H, w_off, w_off + 1] - valid_anchor = [min(mask_anchor[0], context_anchor[0]), max(mask_anchor[1], context_anchor[1]), - min(mask_anchor[2], context_anchor[2]), max(mask_anchor[3], context_anchor[3])] - elif "right" in direc.lower() and "-" not in direc.lower(): - all_anchor = [h_off, h_off + noext_H, w_off + noext_W - config['context_thickness'], 2 * w_off + noext_W] - global_shift = [all_anchor[0], all_anchor[2]] - mask_anchor = [h_off, h_off + noext_H, w_off + noext_W, 2 * w_off + noext_W] - context_anchor = [h_off, h_off + noext_H, w_off + noext_W - config['context_thickness'], w_off + noext_W] - valid_line_anchor = [h_off, h_off + noext_H, w_off + noext_W - 1, w_off + noext_W] - valid_anchor = [min(mask_anchor[0], context_anchor[0]), max(mask_anchor[1], context_anchor[1]), - min(mask_anchor[2], context_anchor[2]), max(mask_anchor[3], context_anchor[3])] - elif "left" in direc.lower() and "up" in direc.lower() and "-" in direc.lower(): - all_anchor = [0, h_off + config['context_thickness'], 0, w_off + config['context_thickness']] - global_shift = [all_anchor[0], all_anchor[2]] - mask_anchor = [0, h_off, 0, w_off] - context_anchor = "inv-mask" - valid_line_anchor = None - valid_anchor = all_anchor - elif "left" in direc.lower() and "down" in direc.lower() and "-" in direc.lower(): - all_anchor = [h_off + noext_H - config['context_thickness'], 2 * h_off + noext_H, 0, w_off + config['context_thickness']] - global_shift = [all_anchor[0], all_anchor[2]] - mask_anchor = [h_off + noext_H, 2 * h_off + noext_H, 0, w_off] - context_anchor = "inv-mask" - valid_line_anchor = None - valid_anchor = all_anchor - elif "right" in direc.lower() and "up" in direc.lower() and "-" in direc.lower(): - all_anchor = [0, h_off + config['context_thickness'], w_off + noext_W - config['context_thickness'], 2 * w_off + noext_W] - global_shift = [all_anchor[0], all_anchor[2]] - mask_anchor = [0, h_off, w_off + noext_W, 2 * w_off + noext_W] - context_anchor = "inv-mask" - valid_line_anchor = None - valid_anchor = all_anchor - elif "right" in direc.lower() and "down" in direc.lower() and "-" in direc.lower(): - all_anchor = [h_off + noext_H - config['context_thickness'], 2 * h_off + noext_H, w_off + 
noext_W - config['context_thickness'], 2 * w_off + noext_W] - global_shift = [all_anchor[0], all_anchor[2]] - mask_anchor = [h_off + noext_H, 2 * h_off + noext_H, w_off + noext_W, 2 * w_off + noext_W] - context_anchor = "inv-mask" - valid_line_anchor = None - valid_anchor = all_anchor - - global_mask = np.zeros_like(depth) - global_mask[mask_anchor[0]:mask_anchor[1],mask_anchor[2]:mask_anchor[3]] = 1 - mask = global_mask[valid_anchor[0]:valid_anchor[1], valid_anchor[2]:valid_anchor[3]] * 1 - context = 1 - mask - global_context = np.zeros_like(depth) - global_context[all_anchor[0]:all_anchor[1],all_anchor[2]:all_anchor[3]] = context - # context = global_context[valid_anchor[0]:valid_anchor[1], valid_anchor[2]:valid_anchor[3]] * 1 - - - - valid_area = mask + context - input_rgb = image[valid_anchor[0]:valid_anchor[1], valid_anchor[2]:valid_anchor[3]] / 255. * context[..., None] - input_depth = depth[valid_anchor[0]:valid_anchor[1], valid_anchor[2]:valid_anchor[3]] * context - log_depth = np.log(input_depth + 1e-8) - log_depth[mask > 0] = 0 - input_mean_depth = np.mean(log_depth[context > 0]) - input_zero_mean_depth = (log_depth - input_mean_depth) * context - input_disp = 1./np.abs(input_depth) - input_disp[mask > 0] = 0 - input_disp = input_disp / input_disp.max() - valid_line = np.zeros_like(depth) - if valid_line_anchor is not None: - valid_line[valid_line_anchor[0]:valid_line_anchor[1], valid_line_anchor[2]:valid_line_anchor[3]] = 1 - valid_line = valid_line[all_anchor[0]:all_anchor[1], all_anchor[2]:all_anchor[3]] - # f, ((ax1, ax2)) = plt.subplots(1, 2, sharex=True, sharey=True); ax1.imshow(global_context * 1 + global_mask * 2); ax2.imshow(image); plt.show() - # f, ((ax1, ax2, ax3)) = plt.subplots(1, 3, sharex=True, sharey=True); ax1.imshow(context * 1 + mask * 2); ax2.imshow(input_rgb); ax3.imshow(valid_line); plt.show() - # import pdb; pdb.set_trace() - # return - input_edge_map = edge_map[all_anchor[0]:all_anchor[1], all_anchor[2]:all_anchor[3]] * context - input_other_edge_with_id = other_edge_with_id[all_anchor[0]:all_anchor[1], all_anchor[2]:all_anchor[3]] - end_depth_maps = ((valid_line * input_edge_map) > 0) * input_depth - - - if isinstance(config["gpu_ids"], int) and (config["gpu_ids"] >= 0): - device = config["gpu_ids"] - else: - device = "cpu" - - valid_edge_ids = sorted(list(input_other_edge_with_id[(valid_line * input_edge_map) > 0])) - valid_edge_ids = valid_edge_ids[1:] if (len(valid_edge_ids) > 0 and valid_edge_ids[0] == -1) else valid_edge_ids - edge = reduce(lambda x, y: (x + (input_other_edge_with_id == y).astype(np.uint8)).clip(0, 1), [np.zeros_like(mask)] + list(valid_edge_ids)) - t_edge = torch.FloatTensor(edge).to(device)[None, None, ...] - t_rgb = torch.FloatTensor(input_rgb).to(device).permute(2,0,1).unsqueeze(0) - t_mask = torch.FloatTensor(mask).to(device)[None, None, ...] - t_context = torch.FloatTensor(context).to(device)[None, None, ...] - t_disp = torch.FloatTensor(input_disp).to(device)[None, None, ...] - t_depth_zero_mean_depth = torch.FloatTensor(input_zero_mean_depth).to(device)[None, None, ...] 
- - depth_edge_output = depth_edge_model.forward_3P(t_mask, t_context, t_rgb, t_disp, t_edge, unit_length=128, - cuda=device) - t_output_edge = (depth_edge_output> config['ext_edge_threshold']).float() * t_mask + t_edge - output_raw_edge = t_output_edge.data.cpu().numpy().squeeze() - # import pdb; pdb.set_trace() - mesh = netx.Graph() - hxs, hys = np.where(output_raw_edge * mask > 0) - valid_map = mask + context - for hx, hy in zip(hxs, hys): - node = (hx, hy) - mesh.add_node((hx, hy)) - eight_nes = [ne for ne in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1), \ - (hx + 1, hy + 1), (hx - 1, hy - 1), (hx - 1, hy + 1), (hx + 1, hy - 1)]\ - if 0 <= ne[0] < output_raw_edge.shape[0] and 0 <= ne[1] < output_raw_edge.shape[1] and 0 < output_raw_edge[ne[0], ne[1]]] - for ne in eight_nes: - mesh.add_edge(node, ne, length=np.hypot(ne[0] - hx, ne[1] - hy)) - if end_depth_maps[ne[0], ne[1]] != 0: - mesh.nodes[ne[0], ne[1]]['cnt'] = True - mesh.nodes[ne[0], ne[1]]['depth'] = end_depth_maps[ne[0], ne[1]] - ccs = [*netx.connected_components(mesh)] - end_pts = [] - for cc in ccs: - end_pts.append(set()) - for node in cc: - if mesh.nodes[node].get('cnt') is not None: - end_pts[-1].add((node[0], node[1], mesh.nodes[node]['depth'])) - fpath_map = np.zeros_like(output_raw_edge) - 1 - npath_map = np.zeros_like(output_raw_edge) - 1 - for end_pt, cc in zip(end_pts, ccs): - sorted_end_pt = [] - if len(end_pt) >= 2: - continue - if len(end_pt) == 0: - continue - if len(end_pt) == 1: - sub_mesh = mesh.subgraph(list(cc)).copy() - pnodes = netx.periphery(sub_mesh) - ends = [*end_pt] - edge_id = global_mesh.nodes[(ends[0][0] + all_anchor[0], ends[0][1] + all_anchor[2], -ends[0][2])]['edge_id'] - pnodes = sorted(pnodes, - key=lambda x: np.hypot((x[0] - ends[0][0]), (x[1] - ends[0][1])), - reverse=True)[0] - npath = [*netx.shortest_path(sub_mesh, (ends[0][0], ends[0][1]), pnodes, weight='length')] - for np_node in npath: - npath_map[np_node[0], np_node[1]] = edge_id - fpath = [] - if global_mesh.nodes[(ends[0][0] + all_anchor[0], ends[0][1] + all_anchor[2], -ends[0][2])].get('far') is None: - print("None far") - import pdb; pdb.set_trace() - else: - fnodes = global_mesh.nodes[(ends[0][0] + all_anchor[0], ends[0][1] + all_anchor[2], -ends[0][2])].get('far') - fnodes = [(xx[0] - all_anchor[0], xx[1] - all_anchor[2], xx[2]) for xx in fnodes] - dmask = mask + 0 - did = 0 - while True: - did += 1 - dmask = cv2.dilate(dmask, np.ones((3, 3)), iterations=1) - if did > 3: - break - # ffnode = [fnode for fnode in fnodes if (dmask[fnode[0], fnode[1]] > 0)] - ffnode = [fnode for fnode in fnodes if (dmask[fnode[0], fnode[1]] > 0 and mask[fnode[0], fnode[1]] == 0)] - if len(ffnode) > 0: - fnode = ffnode[0] - break - if len(ffnode) == 0: - continue - fpath.append((fnode[0], fnode[1])) - for step in range(0, len(npath) - 1): - parr = (npath[step + 1][0] - npath[step][0], npath[step + 1][1] - npath[step][1]) - new_loc = (fpath[-1][0] + parr[0], fpath[-1][1] + parr[1]) - new_loc_nes = [xx for xx in [(new_loc[0] + 1, new_loc[1]), (new_loc[0] - 1, new_loc[1]), - (new_loc[0], new_loc[1] + 1), (new_loc[0], new_loc[1] - 1)]\ - if xx[0] >= 0 and xx[0] < fpath_map.shape[0] and xx[1] >= 0 and xx[1] < fpath_map.shape[1]] - if np.sum([fpath_map[nlne[0], nlne[1]] for nlne in new_loc_nes]) != -4: - break - if npath_map[new_loc[0], new_loc[1]] != -1: - if npath_map[new_loc[0], new_loc[1]] != edge_id: - break - else: - continue - if valid_area[new_loc[0], new_loc[1]] == 0: - break - new_loc_nes_eight = [xx for xx in [(new_loc[0] + 1, 
new_loc[1]), (new_loc[0] - 1, new_loc[1]), - (new_loc[0], new_loc[1] + 1), (new_loc[0], new_loc[1] - 1), - (new_loc[0] + 1, new_loc[1] + 1), (new_loc[0] + 1, new_loc[1] - 1), - (new_loc[0] - 1, new_loc[1] - 1), (new_loc[0] - 1, new_loc[1] + 1)]\ - if xx[0] >= 0 and xx[0] < fpath_map.shape[0] and xx[1] >= 0 and xx[1] < fpath_map.shape[1]] - if np.sum([int(npath_map[nlne[0], nlne[1]] == edge_id) for nlne in new_loc_nes_eight]) == 0: - break - fpath.append((fpath[-1][0] + parr[0], fpath[-1][1] + parr[1])) - if step != len(npath) - 2: - for xx in npath[step+1:]: - if npath_map[xx[0], xx[1]] == edge_id: - npath_map[xx[0], xx[1]] = -1 - if len(fpath) > 0: - for fp_node in fpath: - fpath_map[fp_node[0], fp_node[1]] = edge_id - # import pdb; pdb.set_trace() - far_edge = (fpath_map > -1).astype(np.uint8) - update_edge = (npath_map > -1) * mask + edge - t_update_edge = torch.FloatTensor(update_edge).to(device)[None, None, ...] - depth_output = depth_feat_model.forward_3P(t_mask, t_context, t_depth_zero_mean_depth, t_update_edge, unit_length=128, - cuda=device) - depth_output = depth_output.cpu().data.numpy().squeeze() - depth_output = np.exp(depth_output + input_mean_depth) * mask # + input_depth * context - # if "right" in direc.lower() and "-" not in direc.lower(): - # plt.imshow(depth_output); plt.show() - # import pdb; pdb.set_trace() - # f, ((ax1, ax2)) = plt.subplots(1, 2, sharex=True, sharey=True); ax1.imshow(depth_output); ax2.imshow(npath_map + fpath_map); plt.show() - for near_id in np.unique(npath_map[npath_map > -1]): - depth_output = refine_depth_around_edge(depth_output.copy(), - (fpath_map == near_id).astype(np.uint8) * mask, # far_edge_map_in_mask, - (fpath_map == near_id).astype(np.uint8), # far_edge_map, - (npath_map == near_id).astype(np.uint8) * mask, - mask.copy(), - np.zeros_like(mask), - config) - # if "right" in direc.lower() and "-" not in direc.lower(): - # plt.imshow(depth_output); plt.show() - # import pdb; pdb.set_trace() - # f, ((ax1, ax2)) = plt.subplots(1, 2, sharex=True, sharey=True); ax1.imshow(depth_output); ax2.imshow(npath_map + fpath_map); plt.show() - rgb_output = rgb_feat_model.forward_3P(t_mask, t_context, t_rgb, t_update_edge, unit_length=128, - cuda=device) - - # rgb_output = rgb_feat_model.forward_3P(t_mask, t_context, t_rgb, t_update_edge, unit_length=128, cuda=config['gpu_ids']) - if config.get('gray_image') is True: - rgb_output = rgb_output.mean(1, keepdim=True).repeat((1,3,1,1)) - rgb_output = ((rgb_output.squeeze().data.cpu().permute(1,2,0).numpy() * mask[..., None] + input_rgb) * 255).astype(np.uint8) - image[all_anchor[0]:all_anchor[1], all_anchor[2]:all_anchor[3]][mask > 0] = rgb_output[mask > 0] # np.array([255,0,0]) # rgb_output[mask > 0] - depth[all_anchor[0]:all_anchor[1], all_anchor[2]:all_anchor[3]][mask > 0] = depth_output[mask > 0] - # nxs, nys = np.where(mask > -1) - # for nx, ny in zip(nxs, nys): - # info_on_pix[(nx, ny)][0]['color'] = rgb_output[] - - - nxs, nys = np.where((npath_map > -1)) - for nx, ny in zip(nxs, nys): - n_id = npath_map[nx, ny] - four_nes = [xx for xx in [(nx + 1, ny), (nx - 1, ny), (nx, ny + 1), (nx, ny - 1)]\ - if 0 <= xx[0] < fpath_map.shape[0] and 0 <= xx[1] < fpath_map.shape[1]] - for nex, ney in four_nes: - if fpath_map[nex, ney] == n_id: - na, nb = (nx + all_anchor[0], ny + all_anchor[2], info_on_pix[(nx + all_anchor[0], ny + all_anchor[2])][0]['depth']), \ - (nex + all_anchor[0], ney + all_anchor[2], info_on_pix[(nex + all_anchor[0], ney + all_anchor[2])][0]['depth']) - if global_mesh.has_edge(na, nb): - 
global_mesh.remove_edge(na, nb) - nxs, nys = np.where((fpath_map > -1)) - for nx, ny in zip(nxs, nys): - n_id = fpath_map[nx, ny] - four_nes = [xx for xx in [(nx + 1, ny), (nx - 1, ny), (nx, ny + 1), (nx, ny - 1)]\ - if 0 <= xx[0] < npath_map.shape[0] and 0 <= xx[1] < npath_map.shape[1]] - for nex, ney in four_nes: - if npath_map[nex, ney] == n_id: - na, nb = (nx + all_anchor[0], ny + all_anchor[2], info_on_pix[(nx + all_anchor[0], ny + all_anchor[2])][0]['depth']), \ - (nex + all_anchor[0], ney + all_anchor[2], info_on_pix[(nex + all_anchor[0], ney + all_anchor[2])][0]['depth']) - if global_mesh.has_edge(na, nb): - global_mesh.remove_edge(na, nb) - nxs, nys = np.where(mask > 0) - for x, y in zip(nxs, nys): - x = x + all_anchor[0] - y = y + all_anchor[2] - cur_node = (x, y, 0) - new_node = (x, y, -abs(depth[x, y])) - disp = 1. / -abs(depth[x, y]) - mapping_dict = {cur_node: new_node} - info_on_pix, global_mesh = update_info(mapping_dict, info_on_pix, global_mesh) - global_mesh.nodes[new_node]['color'] = image[x, y] - global_mesh.nodes[new_node]['old_color'] = image[x, y] - global_mesh.nodes[new_node]['disp'] = disp - info_on_pix[(x, y)][0]['depth'] = -abs(depth[x, y]) - info_on_pix[(x, y)][0]['disp'] = disp - info_on_pix[(x, y)][0]['color'] = image[x, y] - - - nxs, nys = np.where((npath_map > -1)) - for nx, ny in zip(nxs, nys): - self_node = (nx + all_anchor[0], ny + all_anchor[2], info_on_pix[(nx + all_anchor[0], ny + all_anchor[2])][0]['depth']) - if global_mesh.has_node(self_node) is False: - break - n_id = int(round(npath_map[nx, ny])) - four_nes = [xx for xx in [(nx + 1, ny), (nx - 1, ny), (nx, ny + 1), (nx, ny - 1)]\ - if 0 <= xx[0] < fpath_map.shape[0] and 0 <= xx[1] < fpath_map.shape[1]] - for nex, ney in four_nes: - ne_node = (nex + all_anchor[0], ney + all_anchor[2], info_on_pix[(nex + all_anchor[0], ney + all_anchor[2])][0]['depth']) - if global_mesh.has_node(ne_node) is False: - continue - if fpath_map[nex, ney] == n_id: - if global_mesh.nodes[self_node].get('edge_id') is None: - global_mesh.nodes[self_node]['edge_id'] = n_id - edge_ccs[n_id].add(self_node) - info_on_pix[(self_node[0], self_node[1])][0]['edge_id'] = n_id - if global_mesh.has_edge(self_node, ne_node) is True: - global_mesh.remove_edge(self_node, ne_node) - if global_mesh.nodes[self_node].get('far') is None: - global_mesh.nodes[self_node]['far'] = [] - global_mesh.nodes[self_node]['far'].append(ne_node) - - global_fpath_map = np.zeros_like(other_edge_with_id) - 1 - global_fpath_map[all_anchor[0]:all_anchor[1], all_anchor[2]:all_anchor[3]] = fpath_map - fpath_ids = np.unique(global_fpath_map) - fpath_ids = fpath_ids[1:] if fpath_ids.shape[0] > 0 and fpath_ids[0] == -1 else [] - fpath_real_id_map = np.zeros_like(global_fpath_map) - 1 - for fpath_id in fpath_ids: - fpath_real_id = np.unique(((global_fpath_map == fpath_id).astype(np.int) * (other_edge_with_id + 1)) - 1) - fpath_real_id = fpath_real_id[1:] if fpath_real_id.shape[0] > 0 and fpath_real_id[0] == -1 else [] - fpath_real_id = fpath_real_id.astype(np.int) - fpath_real_id = np.bincount(fpath_real_id).argmax() - fpath_real_id_map[global_fpath_map == fpath_id] = fpath_real_id - nxs, nys = np.where((fpath_map > -1)) - for nx, ny in zip(nxs, nys): - self_node = (nx + all_anchor[0], ny + all_anchor[2], info_on_pix[(nx + all_anchor[0], ny + all_anchor[2])][0]['depth']) - n_id = fpath_map[nx, ny] - four_nes = [xx for xx in [(nx + 1, ny), (nx - 1, ny), (nx, ny + 1), (nx, ny - 1)]\ - if 0 <= xx[0] < npath_map.shape[0] and 0 <= xx[1] < npath_map.shape[1]] - for nex, 
ney in four_nes: - ne_node = (nex + all_anchor[0], ney + all_anchor[2], info_on_pix[(nex + all_anchor[0], ney + all_anchor[2])][0]['depth']) - if global_mesh.has_node(ne_node) is False: - continue - if npath_map[nex, ney] == n_id or global_mesh.nodes[ne_node].get('edge_id') == n_id: - if global_mesh.has_edge(self_node, ne_node) is True: - global_mesh.remove_edge(self_node, ne_node) - if global_mesh.nodes[self_node].get('near') is None: - global_mesh.nodes[self_node]['near'] = [] - if global_mesh.nodes[self_node].get('edge_id') is None: - f_id = int(round(fpath_real_id_map[self_node[0], self_node[1]])) - global_mesh.nodes[self_node]['edge_id'] = f_id - info_on_pix[(self_node[0], self_node[1])][0]['edge_id'] = f_id - edge_ccs[f_id].add(self_node) - global_mesh.nodes[self_node]['near'].append(ne_node) - - return info_on_pix, global_mesh, image, depth, edge_ccs - # for edge_cc in edge_ccs: - # for edge_node in edge_cc: - # edge_ccs - # context_ccs, mask_ccs, broken_mask_ccs, edge_ccs, erode_context_ccs, init_mask_connect, edge_maps, extend_context_ccs, extend_edge_ccs - -def get_valid_size(imap): - x_max = np.where(imap.sum(1).squeeze() > 0)[0].max() + 1 - x_min = np.where(imap.sum(1).squeeze() > 0)[0].min() - y_max = np.where(imap.sum(0).squeeze() > 0)[0].max() + 1 - y_min = np.where(imap.sum(0).squeeze() > 0)[0].min() - size_dict = {'x_max':x_max, 'y_max':y_max, 'x_min':x_min, 'y_min':y_min} - - return size_dict - -def dilate_valid_size(isize_dict, imap, dilate=[0, 0]): - osize_dict = copy.deepcopy(isize_dict) - osize_dict['x_min'] = max(0, osize_dict['x_min'] - dilate[0]) - osize_dict['x_max'] = min(imap.shape[0], osize_dict['x_max'] + dilate[0]) - osize_dict['y_min'] = max(0, osize_dict['y_min'] - dilate[0]) - osize_dict['y_max'] = min(imap.shape[1], osize_dict['y_max'] + dilate[1]) - - return osize_dict - -def size_operation(size_a, size_b, operation): - assert operation == '+' or operation == '-', "Operation must be '+' (union) or '-' (exclude)" - osize = {} - if operation == '+': - osize['x_min'] = min(size_a['x_min'], size_b['x_min']) - osize['y_min'] = min(size_a['y_min'], size_b['y_min']) - osize['x_max'] = max(size_a['x_max'], size_b['x_max']) - osize['y_max'] = max(size_a['y_max'], size_b['y_max']) - assert operation != '-', "Operation '-' is undefined !" 
- - return osize - -def fill_dummy_bord(mesh, info_on_pix, image, depth, config): - context = np.zeros_like(depth).astype(np.uint8) - context[mesh.graph['hoffset']:mesh.graph['hoffset'] + mesh.graph['noext_H'], - mesh.graph['woffset']:mesh.graph['woffset'] + mesh.graph['noext_W']] = 1 - mask = 1 - context - xs, ys = np.where(mask > 0) - depth = depth * context - image = image * context[..., None] - cur_depth = 0 - cur_disp = 0 - color = [0, 0, 0] - for x, y in zip(xs, ys): - cur_node = (x, y, cur_depth) - mesh.add_node(cur_node, color=color, - synthesis=False, - disp=cur_disp, - cc_id=set(), - ext_pixel=True) - info_on_pix[(x, y)] = [{'depth':cur_depth, - 'color':mesh.nodes[(x, y, cur_depth)]['color'], - 'synthesis':False, - 'disp':mesh.nodes[cur_node]['disp'], - 'ext_pixel':True}] - # for x, y in zip(xs, ys): - four_nes = [(xx, yy) for xx, yy in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)] if\ - 0 <= x < mesh.graph['H'] and 0 <= y < mesh.graph['W'] and info_on_pix.get((xx, yy)) is not None] - for ne in four_nes: - # if (ne[0] - x) + (ne[1] - y) == 1 and info_on_pix.get((ne[0], ne[1])) is not None: - mesh.add_edge(cur_node, (ne[0], ne[1], info_on_pix[(ne[0], ne[1])][0]['depth'])) - - return mesh, info_on_pix - - -def enlarge_border(mesh, info_on_pix, depth, image, config): - mesh.graph['hoffset'], mesh.graph['woffset'] = config['extrapolation_thickness'], config['extrapolation_thickness'] - mesh.graph['bord_up'], mesh.graph['bord_left'], mesh.graph['bord_down'], mesh.graph['bord_right'] = \ - 0, 0, mesh.graph['H'], mesh.graph['W'] - # new_image = np.pad(image, - # pad_width=((config['extrapolation_thickness'], config['extrapolation_thickness']), - # (config['extrapolation_thickness'], config['extrapolation_thickness']), (0, 0)), - # mode='constant') - # new_depth = np.pad(depth, - # pad_width=((config['extrapolation_thickness'], config['extrapolation_thickness']), - # (config['extrapolation_thickness'], config['extrapolation_thickness'])), - # mode='constant') - - return mesh, info_on_pix, depth, image - -def fill_missing_node(mesh, info_on_pix, image, depth): - for x in range(mesh.graph['bord_up'], mesh.graph['bord_down']): - for y in range(mesh.graph['bord_left'], mesh.graph['bord_right']): - if info_on_pix.get((x, y)) is None: - print("fill missing node = ", x, y) - import pdb; pdb.set_trace() - re_depth, re_count = 0, 0 - for ne in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]: - if info_on_pix.get(ne) is not None: - re_depth += info_on_pix[ne][0]['depth'] - re_count += 1 - if re_count == 0: - re_depth = -abs(depth[x, y]) - else: - re_depth = re_depth / re_count - depth[x, y] = abs(re_depth) - info_on_pix[(x, y)] = [{'depth':re_depth, - 'color':image[x, y], - 'synthesis':False, - 'disp':1./re_depth}] - mesh.add_node((x, y, re_depth), color=image[x, y], - synthesis=False, - disp=1./re_depth, - cc_id=set()) - return mesh, info_on_pix, depth - - - -def refresh_bord_depth(mesh, info_on_pix, image, depth): - H, W = mesh.graph['H'], mesh.graph['W'] - corner_nodes = [(mesh.graph['bord_up'], mesh.graph['bord_left']), - (mesh.graph['bord_up'], mesh.graph['bord_right'] - 1), - (mesh.graph['bord_down'] - 1, mesh.graph['bord_left']), - (mesh.graph['bord_down'] - 1, mesh.graph['bord_right'] - 1)] - # (0, W - 1), (H - 1, 0), (H - 1, W - 1)] - bord_nodes = [] - bord_nodes += [(mesh.graph['bord_up'], xx) for xx in range(mesh.graph['bord_left'] + 1, mesh.graph['bord_right'] - 1)] - bord_nodes += [(mesh.graph['bord_down'] - 1, xx) for xx in range(mesh.graph['bord_left'] + 1, 
mesh.graph['bord_right'] - 1)] - bord_nodes += [(xx, mesh.graph['bord_left']) for xx in range(mesh.graph['bord_up'] + 1, mesh.graph['bord_down'] - 1)] - bord_nodes += [(xx, mesh.graph['bord_right'] - 1) for xx in range(mesh.graph['bord_up'] + 1, mesh.graph['bord_down'] - 1)] - for xy in bord_nodes: - tgt_loc = None - if xy[0] == mesh.graph['bord_up']: - tgt_loc = (xy[0] + 1, xy[1])# (1, xy[1]) - elif xy[0] == mesh.graph['bord_down'] - 1: - tgt_loc = (xy[0] - 1, xy[1]) # (H - 2, xy[1]) - elif xy[1] == mesh.graph['bord_left']: - tgt_loc = (xy[0], xy[1] + 1) - elif xy[1] == mesh.graph['bord_right'] - 1: - tgt_loc = (xy[0], xy[1] - 1) - if tgt_loc is not None: - ne_infos = info_on_pix.get(tgt_loc) - if ne_infos is None: - import pdb; pdb.set_trace() - # if ne_infos is not None and len(ne_infos) == 1: - tgt_depth = ne_infos[0]['depth'] - tgt_disp = ne_infos[0]['disp'] - new_node = (xy[0], xy[1], tgt_depth) - src_node = (tgt_loc[0], tgt_loc[1], tgt_depth) - tgt_nes_loc = [(xx[0], xx[1]) \ - for xx in mesh.neighbors(src_node)] - tgt_nes_loc = [(xx[0] - tgt_loc[0] + xy[0], xx[1] - tgt_loc[1] + xy[1]) for xx in tgt_nes_loc \ - if abs(xx[0] - xy[0]) == 1 and abs(xx[1] - xy[1]) == 1] - tgt_nes_loc = [xx for xx in tgt_nes_loc if info_on_pix.get(xx) is not None] - tgt_nes_loc.append(tgt_loc) - # if (xy[0], xy[1]) == (559, 60): - # import pdb; pdb.set_trace() - if info_on_pix.get(xy) is not None and len(info_on_pix.get(xy)) > 0: - old_depth = info_on_pix[xy][0].get('depth') - old_node = (xy[0], xy[1], old_depth) - mesh.remove_edges_from([(old_ne, old_node) for old_ne in mesh.neighbors(old_node)]) - mesh.add_edges_from([((zz[0], zz[1], info_on_pix[zz][0]['depth']), old_node) for zz in tgt_nes_loc]) - mapping_dict = {old_node: new_node} - # if old_node[2] == new_node[2]: - # print("mapping_dict = ", mapping_dict) - info_on_pix, mesh = update_info(mapping_dict, info_on_pix, mesh) - else: - info_on_pix[xy] = [] - info_on_pix[xy][0] = info_on_pix[tgt_loc][0] - info_on_pix['color'] = image[xy[0], xy[1]] - info_on_pix['old_color'] = image[xy[0], xy[1]] - mesh.add_node(new_node) - mesh.add_edges_from([((zz[0], zz[1], info_on_pix[zz][0]['depth']), new_node) for zz in tgt_nes_loc]) - mesh.nodes[new_node]['far'] = None - mesh.nodes[new_node]['near'] = None - if mesh.nodes[src_node].get('far') is not None: - redundant_nodes = [ne for ne in mesh.nodes[src_node]['far'] if (ne[0], ne[1]) == xy] - [mesh.nodes[src_node]['far'].remove(aa) for aa in redundant_nodes] - if mesh.nodes[src_node].get('near') is not None: - redundant_nodes = [ne for ne in mesh.nodes[src_node]['near'] if (ne[0], ne[1]) == xy] - [mesh.nodes[src_node]['near'].remove(aa) for aa in redundant_nodes] - for xy in corner_nodes: - hx, hy = xy - four_nes = [xx for xx in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1)] if \ - mesh.graph['bord_up'] <= xx[0] < mesh.graph['bord_down'] and \ - mesh.graph['bord_left'] <= xx[1] < mesh.graph['bord_right']] - ne_nodes = [] - ne_depths = [] - for ne_loc in four_nes: - if info_on_pix.get(ne_loc) is not None: - ne_depths.append(info_on_pix[ne_loc][0]['depth']) - ne_nodes.append((ne_loc[0], ne_loc[1], info_on_pix[ne_loc][0]['depth'])) - new_node = (xy[0], xy[1], float(np.mean(ne_depths))) - if info_on_pix.get(xy) is not None and len(info_on_pix.get(xy)) > 0: - old_depth = info_on_pix[xy][0].get('depth') - old_node = (xy[0], xy[1], old_depth) - mesh.remove_edges_from([(old_ne, old_node) for old_ne in mesh.neighbors(old_node)]) - mesh.add_edges_from([(zz, old_node) for zz in ne_nodes]) - mapping_dict = 
{old_node: new_node} - info_on_pix, mesh = update_info(mapping_dict, info_on_pix, mesh) - else: - info_on_pix[xy] = [] - info_on_pix[xy][0] = info_on_pix[ne_loc[-1]][0] - info_on_pix['color'] = image[xy[0], xy[1]] - info_on_pix['old_color'] = image[xy[0], xy[1]] - mesh.add_node(new_node) - mesh.add_edges_from([(zz, new_node) for zz in ne_nodes]) - mesh.nodes[new_node]['far'] = None - mesh.nodes[new_node]['near'] = None - for xy in bord_nodes + corner_nodes: - # if (xy[0], xy[1]) == (559, 60): - # import pdb; pdb.set_trace() - depth[xy[0], xy[1]] = abs(info_on_pix[xy][0]['depth']) - for xy in bord_nodes: - cur_node = (xy[0], xy[1], info_on_pix[xy][0]['depth']) - nes = mesh.neighbors(cur_node) - four_nes = set([(xy[0] + 1, xy[1]), (xy[0] - 1, xy[1]), (xy[0], xy[1] + 1), (xy[0], xy[1] - 1)]) - \ - set([(ne[0], ne[1]) for ne in nes]) - four_nes = [ne for ne in four_nes if mesh.graph['bord_up'] <= ne[0] < mesh.graph['bord_down'] and \ - mesh.graph['bord_left'] <= ne[1] < mesh.graph['bord_right']] - four_nes = [(ne[0], ne[1], info_on_pix[(ne[0], ne[1])][0]['depth']) for ne in four_nes] - mesh.nodes[cur_node]['far'] = [] - mesh.nodes[cur_node]['near'] = [] - for ne in four_nes: - if abs(ne[2]) >= abs(cur_node[2]): - mesh.nodes[cur_node]['far'].append(ne) - else: - mesh.nodes[cur_node]['near'].append(ne) - - return mesh, info_on_pix, depth - -def get_union_size(mesh, dilate, *alls_cc): - all_cc = reduce(lambda x, y: x | y, [set()] + [*alls_cc]) - min_x, min_y, max_x, max_y = mesh.graph['H'], mesh.graph['W'], 0, 0 - H, W = mesh.graph['H'], mesh.graph['W'] - for node in all_cc: - if node[0] < min_x: - min_x = node[0] - if node[0] > max_x: - max_x = node[0] - if node[1] < min_y: - min_y = node[1] - if node[1] > max_y: - max_y = node[1] - max_x = max_x + 1 - max_y = max_y + 1 - # mask_size = dilate_valid_size(mask_size, edge_dict['mask'], dilate=[20, 20]) - osize_dict = dict() - osize_dict['x_min'] = max(0, min_x - dilate[0]) - osize_dict['x_max'] = min(H, max_x + dilate[0]) - osize_dict['y_min'] = max(0, min_y - dilate[1]) - osize_dict['y_max'] = min(W, max_y + dilate[1]) - - return osize_dict - -def incomplete_node(mesh, edge_maps, info_on_pix): - vis_map = np.zeros((mesh.graph['H'], mesh.graph['W'])) - - for node in mesh.nodes: - if mesh.nodes[node].get('synthesis') is not True: - connect_all_flag = False - nes = [xx for xx in mesh.neighbors(node) if mesh.nodes[xx].get('synthesis') is not True] - if len(nes) < 3 and 0 < node[0] < mesh.graph['H'] - 1 and 0 < node[1] < mesh.graph['W'] - 1: - if len(nes) <= 1: - connect_all_flag = True - else: - dan_ne_node_a = nes[0] - dan_ne_node_b = nes[1] - if abs(dan_ne_node_a[0] - dan_ne_node_b[0]) > 1 or \ - abs(dan_ne_node_a[1] - dan_ne_node_b[1]) > 1: - connect_all_flag = True - if connect_all_flag == True: - vis_map[node[0], node[1]] = len(nes) - four_nes = [(node[0] - 1, node[1]), (node[0] + 1, node[1]), (node[0], node[1] - 1), (node[0], node[1] + 1)] - for ne in four_nes: - for info in info_on_pix[(ne[0], ne[1])]: - ne_node = (ne[0], ne[1], info['depth']) - if info.get('synthesis') is not True and mesh.has_node(ne_node): - mesh.add_edge(node, ne_node) - break - - return mesh - -def edge_inpainting(edge_id, context_cc, erode_context_cc, mask_cc, edge_cc, extend_edge_cc, - mesh, edge_map, edge_maps_with_id, config, union_size, depth_edge_model, inpaint_iter): - edge_dict = get_edge_from_nodes(context_cc, erode_context_cc, mask_cc, edge_cc, extend_edge_cc, - mesh.graph['H'], mesh.graph['W'], mesh) - edge_dict['edge'], end_depth_maps, _ = \ - 
filter_irrelevant_edge_new(edge_dict['self_edge'] + edge_dict['comp_edge'], - edge_map, - edge_maps_with_id, - edge_id, - edge_dict['context'], - edge_dict['depth'], mesh, context_cc | erode_context_cc, spdb=True) - patch_edge_dict = dict() - patch_edge_dict['mask'], patch_edge_dict['context'], patch_edge_dict['rgb'], \ - patch_edge_dict['disp'], patch_edge_dict['edge'] = \ - crop_maps_by_size(union_size, edge_dict['mask'], edge_dict['context'], - edge_dict['rgb'], edge_dict['disp'], edge_dict['edge']) - tensor_edge_dict = convert2tensor(patch_edge_dict) - if require_depth_edge(patch_edge_dict['edge'], patch_edge_dict['mask']) and inpaint_iter == 0: - with torch.no_grad(): - device = config["gpu_ids"] if isinstance(config["gpu_ids"], int) and config["gpu_ids"] >= 0 else "cpu" - depth_edge_output = depth_edge_model.forward_3P(tensor_edge_dict['mask'], - tensor_edge_dict['context'], - tensor_edge_dict['rgb'], - tensor_edge_dict['disp'], - tensor_edge_dict['edge'], - unit_length=128, - cuda=device) - depth_edge_output = depth_edge_output.cpu() - tensor_edge_dict['output'] = (depth_edge_output > config['ext_edge_threshold']).float() * tensor_edge_dict['mask'] + tensor_edge_dict['edge'] - else: - tensor_edge_dict['output'] = tensor_edge_dict['edge'] - depth_edge_output = tensor_edge_dict['edge'] + 0 - patch_edge_dict['output'] = tensor_edge_dict['output'].squeeze().data.cpu().numpy() - edge_dict['output'] = np.zeros((mesh.graph['H'], mesh.graph['W'])) - edge_dict['output'][union_size['x_min']:union_size['x_max'], union_size['y_min']:union_size['y_max']] = \ - patch_edge_dict['output'] - - return edge_dict, end_depth_maps - -def depth_inpainting(context_cc, extend_context_cc, erode_context_cc, mask_cc, mesh, config, union_size, depth_feat_model, edge_output, given_depth_dict=False, spdb=False): - if given_depth_dict is False: - depth_dict = get_depth_from_nodes(context_cc | extend_context_cc, erode_context_cc, mask_cc, mesh.graph['H'], mesh.graph['W'], mesh, config['log_depth']) - if edge_output is not None: - depth_dict['edge'] = edge_output - else: - depth_dict = given_depth_dict - patch_depth_dict = dict() - patch_depth_dict['mask'], patch_depth_dict['context'], patch_depth_dict['depth'], \ - patch_depth_dict['zero_mean_depth'], patch_depth_dict['edge'] = \ - crop_maps_by_size(union_size, depth_dict['mask'], depth_dict['context'], - depth_dict['real_depth'], depth_dict['zero_mean_depth'], depth_dict['edge']) - tensor_depth_dict = convert2tensor(patch_depth_dict) - resize_mask = open_small_mask(tensor_depth_dict['mask'], tensor_depth_dict['context'], 3, 41) - with torch.no_grad(): - device = config["gpu_ids"] if isinstance(config["gpu_ids"], int) and config["gpu_ids"] >= 0 else "cpu" - depth_output = depth_feat_model.forward_3P(resize_mask, - tensor_depth_dict['context'], - tensor_depth_dict['zero_mean_depth'], - tensor_depth_dict['edge'], - unit_length=128, - cuda=device) - depth_output = depth_output.cpu() - tensor_depth_dict['output'] = torch.exp(depth_output + depth_dict['mean_depth']) * \ - tensor_depth_dict['mask'] + tensor_depth_dict['depth'] - patch_depth_dict['output'] = tensor_depth_dict['output'].data.cpu().numpy().squeeze() - depth_dict['output'] = np.zeros((mesh.graph['H'], mesh.graph['W'])) - depth_dict['output'][union_size['x_min']:union_size['x_max'], union_size['y_min']:union_size['y_max']] = \ - patch_depth_dict['output'] - depth_output = depth_dict['output'] * depth_dict['mask'] + depth_dict['depth'] * depth_dict['context'] - depth_output = 
smooth_cntsyn_gap(depth_dict['output'].copy() * depth_dict['mask'] + depth_dict['depth'] * depth_dict['context'], - depth_dict['mask'], depth_dict['context'], - init_mask_region=depth_dict['mask']) - if spdb is True: - f, ((ax1, ax2)) = plt.subplots(1, 2, sharex=True, sharey=True); - ax1.imshow(depth_output * depth_dict['mask'] + depth_dict['depth']); ax2.imshow(depth_dict['output'] * depth_dict['mask'] + depth_dict['depth']); plt.show() - import pdb; pdb.set_trace() - depth_dict['output'] = depth_output * depth_dict['mask'] + depth_dict['depth'] * depth_dict['context'] - - return depth_dict - -def update_info(mapping_dict, info_on_pix, *meshes): - rt_meshes = [] - for mesh in meshes: - rt_meshes.append(relabel_node(mesh, mesh.nodes, [*mapping_dict.keys()][0], [*mapping_dict.values()][0])) - x, y, _ = [*mapping_dict.keys()][0] - info_on_pix[(x, y)][0]['depth'] = [*mapping_dict.values()][0][2] - - return [info_on_pix] + rt_meshes - -def build_connection(mesh, cur_node, dst_node): - if (abs(cur_node[0] - dst_node[0]) + abs(cur_node[1] - dst_node[1])) < 2: - mesh.add_edge(cur_node, dst_node) - if abs(cur_node[0] - dst_node[0]) > 1 or abs(cur_node[1] - dst_node[1]) > 1: - return mesh - ne_nodes = [*mesh.neighbors(cur_node)].copy() - for ne_node in ne_nodes: - if mesh.has_edge(ne_node, dst_node) or ne_node == dst_node: - continue - else: - mesh = build_connection(mesh, ne_node, dst_node) - - return mesh - -def recursive_add_edge(edge_mesh, mesh, info_on_pix, cur_node, mark): - ne_nodes = [(x[0], x[1]) for x in edge_mesh.neighbors(cur_node)] - for node_xy in ne_nodes: - node = (node_xy[0], node_xy[1], info_on_pix[node_xy][0]['depth']) - if mark[node[0], node[1]] != 3: - continue - else: - mark[node[0], node[1]] = 0 - mesh.remove_edges_from([(xx, node) for xx in mesh.neighbors(node)]) - mesh = build_connection(mesh, cur_node, node) - re_info = dict(depth=0, count=0) - for re_ne in mesh.neighbors(node): - re_info['depth'] += re_ne[2] - re_info['count'] += 1. 
- try: - re_depth = re_info['depth'] / re_info['count'] - except: - re_depth = node[2] - re_node = (node_xy[0], node_xy[1], re_depth) - mapping_dict = {node: re_node} - info_on_pix, edge_mesh, mesh = update_info(mapping_dict, info_on_pix, edge_mesh, mesh) - - edge_mesh, mesh, mark, info_on_pix = recursive_add_edge(edge_mesh, mesh, info_on_pix, re_node, mark) - - return edge_mesh, mesh, mark, info_on_pix - -def resize_for_edge(tensor_dict, largest_size): - resize_dict = {k: v.clone() for k, v in tensor_dict.items()} - frac = largest_size / np.array([*resize_dict['edge'].shape[-2:]]).max() - if frac < 1: - resize_mark = torch.nn.functional.interpolate(torch.cat((resize_dict['mask'], - resize_dict['context']), - dim=1), - scale_factor=frac, - mode='bilinear') - resize_dict['mask'] = (resize_mark[:, 0:1] > 0).float() - resize_dict['context'] = (resize_mark[:, 1:2] == 1).float() - resize_dict['context'][resize_dict['mask'] > 0] = 0 - resize_dict['edge'] = torch.nn.functional.interpolate(resize_dict['edge'], - scale_factor=frac, - mode='bilinear') - resize_dict['edge'] = (resize_dict['edge'] > 0).float() - resize_dict['edge'] = resize_dict['edge'] * resize_dict['context'] - resize_dict['disp'] = torch.nn.functional.interpolate(resize_dict['disp'], - scale_factor=frac, - mode='nearest') - resize_dict['disp'] = resize_dict['disp'] * resize_dict['context'] - resize_dict['rgb'] = torch.nn.functional.interpolate(resize_dict['rgb'], - scale_factor=frac, - mode='bilinear') - resize_dict['rgb'] = resize_dict['rgb'] * resize_dict['context'] - return resize_dict - -def get_map_from_nodes(nodes, height, width): - omap = np.zeros((height, width)) - for n in nodes: - omap[n[0], n[1]] = 1 - - return omap - -def get_map_from_ccs(ccs, height, width, condition_input=None, condition=None, real_id=False, id_shift=0): - if condition is None: - condition = lambda x, condition_input: True - - if real_id is True: - omap = np.zeros((height, width)) + (-1) + id_shift - else: - omap = np.zeros((height, width)) - for cc_id, cc in enumerate(ccs): - for n in cc: - if condition(n, condition_input): - if real_id is True: - omap[n[0], n[1]] = cc_id + id_shift - else: - omap[n[0], n[1]] = 1 - return omap - -def revise_map_by_nodes(nodes, imap, operation, limit_constr=None): - assert operation == '+' or operation == '-', "Operation must be '+' (union) or '-' (exclude)" - omap = copy.deepcopy(imap) - revise_flag = True - if operation == '+': - for n in nodes: - omap[n[0], n[1]] = 1 - if limit_constr is not None and omap.sum() > limit_constr: - omap = imap - revise_flag = False - elif operation == '-': - for n in nodes: - omap[n[0], n[1]] = 0 - if limit_constr is not None and omap.sum() < limit_constr: - omap = imap - revise_flag = False - - return omap, revise_flag - -def repaint_info(mesh, cc, x_anchor, y_anchor, source_type): - if source_type == 'rgb': - feat = np.zeros((3, x_anchor[1] - x_anchor[0], y_anchor[1] - y_anchor[0])) - else: - feat = np.zeros((1, x_anchor[1] - x_anchor[0], y_anchor[1] - y_anchor[0])) - for node in cc: - if source_type == 'rgb': - feat[:, node[0] - x_anchor[0], node[1] - y_anchor[0]] = np.array(mesh.nodes[node]['color']) / 255. 
- elif source_type == 'd': - feat[:, node[0] - x_anchor[0], node[1] - y_anchor[0]] = abs(node[2]) - - return feat - -def get_context_from_nodes(mesh, cc, H, W, source_type=''): - if 'rgb' in source_type or 'color' in source_type: - feat = np.zeros((H, W, 3)) - else: - feat = np.zeros((H, W)) - context = np.zeros((H, W)) - for node in cc: - if 'rgb' in source_type or 'color' in source_type: - feat[node[0], node[1]] = np.array(mesh.nodes[node]['color']) / 255. - context[node[0], node[1]] = 1 - else: - feat[node[0], node[1]] = abs(node[2]) - - return feat, context - -def get_mask_from_nodes(mesh, cc, H, W): - mask = np.zeros((H, W)) - for node in cc: - mask[node[0], node[1]] = abs(node[2]) - - return mask - - -def get_edge_from_nodes(context_cc, erode_context_cc, mask_cc, edge_cc, extend_edge_cc, H, W, mesh): - context = np.zeros((H, W)) - mask = np.zeros((H, W)) - rgb = np.zeros((H, W, 3)) - disp = np.zeros((H, W)) - depth = np.zeros((H, W)) - real_depth = np.zeros((H, W)) - edge = np.zeros((H, W)) - comp_edge = np.zeros((H, W)) - fpath_map = np.zeros((H, W)) - 1 - npath_map = np.zeros((H, W)) - 1 - near_depth = np.zeros((H, W)) - for node in context_cc: - rgb[node[0], node[1]] = np.array(mesh.nodes[node]['color']) - disp[node[0], node[1]] = mesh.nodes[node]['disp'] - depth[node[0], node[1]] = node[2] - context[node[0], node[1]] = 1 - for node in erode_context_cc: - rgb[node[0], node[1]] = np.array(mesh.nodes[node]['color']) - disp[node[0], node[1]] = mesh.nodes[node]['disp'] - depth[node[0], node[1]] = node[2] - context[node[0], node[1]] = 1 - rgb = rgb / 255. - disp = np.abs(disp) - disp = disp / disp.max() - real_depth = depth.copy() - for node in context_cc: - if mesh.nodes[node].get('real_depth') is not None: - real_depth[node[0], node[1]] = mesh.nodes[node]['real_depth'] - for node in erode_context_cc: - if mesh.nodes[node].get('real_depth') is not None: - real_depth[node[0], node[1]] = mesh.nodes[node]['real_depth'] - for node in mask_cc: - mask[node[0], node[1]] = 1 - near_depth[node[0], node[1]] = node[2] - for node in edge_cc: - edge[node[0], node[1]] = 1 - for node in extend_edge_cc: - comp_edge[node[0], node[1]] = 1 - rt_dict = {'rgb': rgb, 'disp': disp, 'depth': depth, 'real_depth': real_depth, 'self_edge': edge, 'context': context, - 'mask': mask, 'fpath_map': fpath_map, 'npath_map': npath_map, 'comp_edge': comp_edge, 'valid_area': context + mask, - 'near_depth': near_depth} - - return rt_dict - -def get_depth_from_maps(context_map, mask_map, depth_map, H, W, log_depth=False): - context = context_map.astype(np.uint8) - mask = mask_map.astype(np.uint8).copy() - depth = np.abs(depth_map) - real_depth = depth.copy() - zero_mean_depth = np.zeros((H, W)) - - if log_depth is True: - log_depth = np.log(real_depth + 1e-8) * context - mean_depth = np.mean(log_depth[context > 0]) - zero_mean_depth = (log_depth - mean_depth) * context - else: - zero_mean_depth = real_depth - mean_depth = 0 - edge = np.zeros_like(depth) - - rt_dict = {'depth': depth, 'real_depth': real_depth, 'context': context, 'mask': mask, - 'mean_depth': mean_depth, 'zero_mean_depth': zero_mean_depth, 'edge': edge} - - return rt_dict - -def get_depth_from_nodes(context_cc, erode_context_cc, mask_cc, H, W, mesh, log_depth=False): - context = np.zeros((H, W)) - mask = np.zeros((H, W)) - depth = np.zeros((H, W)) - real_depth = np.zeros((H, W)) - zero_mean_depth = np.zeros((H, W)) - for node in context_cc: - depth[node[0], node[1]] = node[2] - context[node[0], node[1]] = 1 - for node in erode_context_cc: - depth[node[0], 
node[1]] = node[2] - context[node[0], node[1]] = 1 - depth = np.abs(depth) - real_depth = depth.copy() - for node in context_cc: - if mesh.nodes[node].get('real_depth') is not None: - real_depth[node[0], node[1]] = mesh.nodes[node]['real_depth'] - for node in erode_context_cc: - if mesh.nodes[node].get('real_depth') is not None: - real_depth[node[0], node[1]] = mesh.nodes[node]['real_depth'] - real_depth = np.abs(real_depth) - for node in mask_cc: - mask[node[0], node[1]] = 1 - if log_depth is True: - log_depth = np.log(real_depth + 1e-8) * context - mean_depth = np.mean(log_depth[context > 0]) - zero_mean_depth = (log_depth - mean_depth) * context - else: - zero_mean_depth = real_depth - mean_depth = 0 - - rt_dict = {'depth': depth, 'real_depth': real_depth, 'context': context, 'mask': mask, - 'mean_depth': mean_depth, 'zero_mean_depth': zero_mean_depth} - - return rt_dict - -def get_rgb_from_nodes(context_cc, erode_context_cc, mask_cc, H, W, mesh): - context = np.zeros((H, W)) - mask = np.zeros((H, W)) - rgb = np.zeros((H, W, 3)) - erode_context = np.zeros((H, W)) - for node in context_cc: - rgb[node[0], node[1]] = np.array(mesh.nodes[node]['color']) - context[node[0], node[1]] = 1 - rgb = rgb / 255. - for node in mask_cc: - mask[node[0], node[1]] = 1 - for node in erode_context_cc: - erode_context[node[0], node[1]] = 1 - mask[node[0], node[1]] = 1 - rt_dict = {'rgb': rgb, 'context': context, 'mask': mask, - 'erode': erode_context} - - return rt_dict - -def crop_maps_by_size(size, *imaps): - omaps = [] - for imap in imaps: - omaps.append(imap[size['x_min']:size['x_max'], size['y_min']:size['y_max']].copy()) - - return omaps - -def convert2tensor(input_dict): - rt_dict = {} - for key, value in input_dict.items(): - if 'rgb' in key or 'color' in key: - rt_dict[key] = torch.FloatTensor(value).permute(2, 0, 1)[None, ...] - else: - rt_dict[key] = torch.FloatTensor(value)[None, None, ...] - - return rt_dict diff --git a/spaces/Jeff2323/ai-comic-factory/src/lib/getImageDimension.ts b/spaces/Jeff2323/ai-comic-factory/src/lib/getImageDimension.ts deleted file mode 100644 index 50a94ae1eee733b23b1d4916780e597c759c608e..0000000000000000000000000000000000000000 --- a/spaces/Jeff2323/ai-comic-factory/src/lib/getImageDimension.ts +++ /dev/null @@ -1,16 +0,0 @@ -export interface ImageDimension { - width: number - height: number -} - -export async function getImageDimension(src: string): Promise { - if (!src) { - return { width: 0, height: 0 } - } - const img = new Image() - img.src = src - await img.decode() - const width = img.width - const height = img.height - return { width, height } -} \ No newline at end of file diff --git a/spaces/JosephusCheung/ACertainsStrategyTalk/6.html b/spaces/JosephusCheung/ACertainsStrategyTalk/6.html deleted file mode 100644 index 6c22ae84135b63a4ec84377ebbdd3c33913a5790..0000000000000000000000000000000000000000 --- a/spaces/JosephusCheung/ACertainsStrategyTalk/6.html +++ /dev/null @@ -1,116 +0,0 @@ - - - - - - - - - -
-
    Comparable Analysis -CertainThing CertainThing CertainThing -((((masterpiece, best quality, ultra-detailed)))), (official_art, -thick_eyebrows, laugh), kawaii, cleavage, (((two side up, white and -orangeish and medium streaked hair))), ((tsurime)), Thigh-high socks, -Clear vinyl jacket, skindentation, multicolored black bikini, -dappled_sunlight, Santorini, geometrical pattern, sport fashion, chaos -Courtesy of -@hcanadli12345 -masterpiece emphasized -Note: the "masterpiece" tag is already -fine-tuned and it is not recommended -to emphasize it again.
    - - diff --git a/spaces/JoshMe1/YTYT/README.md b/spaces/JoshMe1/YTYT/README.md deleted file mode 100644 index 60f774b736bb1b549e06513437f77003c6aec706..0000000000000000000000000000000000000000 --- a/spaces/JoshMe1/YTYT/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: YTYT -emoji: 🏃 -colorFrom: purple -colorTo: red -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/data/dataset.py b/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/data/dataset.py deleted file mode 100644 index 0076d53f2b9eafcfb5a85813e88613b239c70168..0000000000000000000000000000000000000000 --- a/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/data/dataset.py +++ /dev/null @@ -1,149 +0,0 @@ -import h5py -import numpy as np -import pandas as pd -import torch -from dotmap import DotMap - -from salad.utils.paths import DATA_DIR -from salad.utils import thutil - - -class SALADDataset(torch.utils.data.Dataset): - def __init__(self, data_path, repeat=None, **kwargs): - super().__init__() - self.data_path = str(DATA_DIR / data_path) - self.repeat = repeat - self.__dict__.update(kwargs) - self.hparams = DotMap(self.__dict__) - - """ - Global Data statistics. - """ - if self.hparams.get("global_normalization"): - with h5py.File(self.data_path.replace(".hdf5", "_mean_std.hdf5")) as f: - self.global_mean = f["mean"][:].astype(np.float32) - self.global_std = f["std"][:].astype(np.float32) - - self.data = dict() - with h5py.File(self.data_path) as f: - for k in self.hparams.data_keys: - self.data[k] = f[k][:].astype(np.float32) - - """ - global_normalization arg is for gaussians only. - """ - if k == "g_js_affine": - if self.hparams.get("global_normalization") == "partial": - assert k == "g_js_affine" - if self.hparams.get("verbose"): - print("[*] Normalize data only for pi and eigenvalues.") - # 3: mu, 9: eigvec, 1: pi, 3: eigval - self.data[k] = self.normalize_global_static( - self.data[k], slice(12, None) - ) - elif self.hparams.get("global_normalization") == "all": - assert k == "g_js_affine" - if self.hparams.get("verbose"): - print("[*] Normalize data for all elements.") - self.data[k] = self.normalize_global_static( - self.data[k], slice(None) - ) - - def __getitem__(self, idx): - if self.repeat is not None and self.repeat > 1: - idx = int(idx / self.repeat) - - items = [] - for k in self.hparams.data_keys: - data = torch.from_numpy(self.data[k][idx]) - items.append(data) - - if self.hparams.get("concat_data"): - return torch.cat(items, -1) # [16,528] - if len(items) == 1: - return items[0] - return items - - def __len__(self): - k = self.hparams.data_keys[0] - if self.repeat is not None and self.repeat > 1: - return len(self.data[k]) * self.repeat - return len(self.data[k]) - - def get_other_latents(self, key): - with h5py.File(self.data_path) as f: - return f[key][:].astype(np.float32) - - def normalize_global_static(self, data: np.ndarray, normalize_indices=slice(None)): - """ - Input: - np.ndarray or torch.Tensor. 
[16,16] or [B,16,16] - slice(None) -> full - slice(12, None) -> partial - Output: - [16,16] or [B,16,16] - """ - assert normalize_indices == slice(None) or normalize_indices == slice( - 12, None - ), print(f"{normalize_indices} is wrong.") - data = thutil.th2np(data).copy() - data[..., normalize_indices] = ( - data[..., normalize_indices] - self.global_mean[normalize_indices] - ) / self.global_std[normalize_indices] - return data - - def unnormalize_global_static( - self, data: np.ndarray, unnormalize_indices=slice(None) - ): - """ - Input: - np.ndarray or torch.Tensor. [16,16] or [B,16,16] - slice(None) -> full - slice(12, None) -> partial - Output: - [16,16] or [B,16,16] - """ - assert unnormalize_indices == slice(None) or unnormalize_indices == slice( - 12, None - ), print(f"{unnormalize_indices} is wrong.") - data = thutil.th2np(data).copy() - data[..., unnormalize_indices] = ( - data[..., unnormalize_indices] - ) * self.global_std[unnormalize_indices] + self.global_mean[unnormalize_indices] - return data - - -class LangSALADDataset(SALADDataset): - def __init__(self, data_path, repeat=None, **kwargs): - super().__init__(data_path, repeat, **kwargs) - - # self.game_data = pd.read_csv(self.hparams.lang_data_path) - self.game_data = pd.read_csv(DATA_DIR / "autosdf_spaghetti_intersec_game_data.csv") - self.shapenet_ids = np.array(self.game_data["sn"]) - self.spaghetti_indices = np.array(self.game_data["spaghetti_idx"]) # for 5401 - self.texts = np.array(self.game_data["text"]) - - assert len(self.shapenet_ids) == len(self.spaghetti_indices) == len(self.texts) - - def __getitem__(self, idx): - if self.repeat is not None and self.repeat > 1: - idx = int(idx / self.repeat) - - spa_idx = self.spaghetti_indices[idx] - text = self.texts[idx] - latents = [] - for k in self.hparams.data_keys: - data = torch.from_numpy(self.data[k][spa_idx]) - latents.append(data) - - item = latents + [text] - if self.hparams.get("concat_data"): - latents = torch.cat(latents, -1) - return latents, text - - return item - - def __len__(self): - if self.repeat is not None and self.repeat > 1: - return len(self.texts) * self.repeat - return len(self.texts) diff --git a/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/modules/diffusionmodules/__init__.py b/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/modules/diffusionmodules/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Kevin676/AutoGPT/autogpt/configurator.py b/spaces/Kevin676/AutoGPT/autogpt/configurator.py deleted file mode 100644 index 1dc3be124f638b8859eb459bcb2d46696f62e2b7..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/AutoGPT/autogpt/configurator.py +++ /dev/null @@ -1,134 +0,0 @@ -"""Configurator module.""" -import click -from colorama import Back, Fore, Style - -from autogpt import utils -from autogpt.config import Config -from autogpt.logs import logger -from autogpt.memory import get_supported_memory_backends - -CFG = Config() - - -def create_config( - continuous: bool, - continuous_limit: int, - ai_settings_file: str, - skip_reprompt: bool, - speak: bool, - debug: bool, - gpt3only: bool, - gpt4only: bool, - memory_type: str, - browser_name: str, - allow_downloads: bool, - skip_news: bool, -) -> None: - """Updates the config object with the given arguments. 
- - Args: - continuous (bool): Whether to run in continuous mode - continuous_limit (int): The number of times to run in continuous mode - ai_settings_file (str): The path to the ai_settings.yaml file - skip_reprompt (bool): Whether to skip the re-prompting messages at the beginning of the script - speak (bool): Whether to enable speak mode - debug (bool): Whether to enable debug mode - gpt3only (bool): Whether to enable GPT3.5 only mode - gpt4only (bool): Whether to enable GPT4 only mode - memory_type (str): The type of memory backend to use - browser_name (str): The name of the browser to use when using selenium to scrape the web - allow_downloads (bool): Whether to allow Auto-GPT to download files natively - skips_news (bool): Whether to suppress the output of latest news on startup - """ - CFG.set_debug_mode(False) - CFG.set_continuous_mode(False) - CFG.set_speak_mode(False) - - if debug: - logger.typewriter_log("Debug Mode: ", Fore.GREEN, "ENABLED") - CFG.set_debug_mode(True) - - if continuous: - logger.typewriter_log("Continuous Mode: ", Fore.RED, "ENABLED") - logger.typewriter_log( - "WARNING: ", - Fore.RED, - "Continuous mode is not recommended. It is potentially dangerous and may" - " cause your AI to run forever or carry out actions you would not usually" - " authorise. Use at your own risk.", - ) - CFG.set_continuous_mode(True) - - if continuous_limit: - logger.typewriter_log( - "Continuous Limit: ", Fore.GREEN, f"{continuous_limit}" - ) - CFG.set_continuous_limit(continuous_limit) - - # Check if continuous limit is used without continuous mode - if continuous_limit and not continuous: - raise click.UsageError("--continuous-limit can only be used with --continuous") - - if speak: - logger.typewriter_log("Speak Mode: ", Fore.GREEN, "ENABLED") - CFG.set_speak_mode(True) - - if gpt3only: - logger.typewriter_log("GPT3.5 Only Mode: ", Fore.GREEN, "ENABLED") - CFG.set_smart_llm_model(CFG.fast_llm_model) - - if gpt4only: - logger.typewriter_log("GPT4 Only Mode: ", Fore.GREEN, "ENABLED") - CFG.set_fast_llm_model(CFG.smart_llm_model) - - if memory_type: - supported_memory = get_supported_memory_backends() - chosen = memory_type - if chosen not in supported_memory: - logger.typewriter_log( - "ONLY THE FOLLOWING MEMORY BACKENDS ARE SUPPORTED: ", - Fore.RED, - f"{supported_memory}", - ) - logger.typewriter_log("Defaulting to: ", Fore.YELLOW, CFG.memory_backend) - else: - CFG.memory_backend = chosen - - if skip_reprompt: - logger.typewriter_log("Skip Re-prompt: ", Fore.GREEN, "ENABLED") - CFG.skip_reprompt = True - - if ai_settings_file: - file = ai_settings_file - - # Validate file - (validated, message) = utils.validate_yaml_file(file) - if not validated: - logger.typewriter_log("FAILED FILE VALIDATION", Fore.RED, message) - logger.double_check() - exit(1) - - logger.typewriter_log("Using AI Settings File:", Fore.GREEN, file) - CFG.ai_settings_file = file - CFG.skip_reprompt = True - - if allow_downloads: - logger.typewriter_log("Native Downloading:", Fore.GREEN, "ENABLED") - logger.typewriter_log( - "WARNING: ", - Fore.YELLOW, - f"{Back.LIGHTYELLOW_EX}Auto-GPT will now be able to download and save files to your machine.{Back.RESET} " - + "It is recommended that you monitor any files it downloads carefully.", - ) - logger.typewriter_log( - "WARNING: ", - Fore.YELLOW, - f"{Back.RED + Style.BRIGHT}ALWAYS REMEMBER TO NEVER OPEN FILES YOU AREN'T SURE OF!{Style.RESET_ALL}", - ) - CFG.allow_downloads = True - - if skip_news: - CFG.skip_news = True - - if browser_name: - CFG.selenium_web_browser 
= browser_name diff --git a/spaces/KevinQHLin/UniVTG/main/train_hl.py b/spaces/KevinQHLin/UniVTG/main/train_hl.py deleted file mode 100644 index ceec407bc1f4ff92077bda46e90cfd7b566ca56b..0000000000000000000000000000000000000000 --- a/spaces/KevinQHLin/UniVTG/main/train_hl.py +++ /dev/null @@ -1,229 +0,0 @@ -import os -import pdb -import time -import json -import pprint -import random -import importlib -import numpy as np -from tqdm import tqdm, trange -from collections import defaultdict - -import torch -import torch.nn as nn -import torch.backends.cudnn as cudnn -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter - -import sys -sys.path.append('/data/home/qinghonglin/univtg') -from main.config import BaseOptions, setup_model -from main.dataset import DatasetHL, prepare_batch_inputs_hl, start_end_collate_hl -from utils.basic_utils import set_seed, AverageMeter, dict_to_markdown, save_json, save_jsonl -from utils.model_utils import count_parameters - -import logging -logger = logging.getLogger(__name__) -logging.basicConfig(format="%(asctime)s.%(msecs)03d:%(levelname)s:%(name)s - %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=logging.INFO) - -def eval_epoch(model, train_val_dataset, opt): #, nms_thresh, device): - model.eval() - - scores = [] - train_val_dataset.set_state('val') - val_loader = DataLoader( - train_val_dataset, - collate_fn=start_end_collate_hl, - batch_size=opt.eval_bsz, - num_workers=opt.num_workers, - shuffle=False, - pin_memory=opt.pin_memory - ) - - with torch.no_grad(): - for data in val_loader: - model_inputs, targets = prepare_batch_inputs_hl(data) - outputs = model(**model_inputs) - # pred_cls = outputs['pred_logits'].squeeze(-1) - # pred_cls = outputs['saliency_scores'] - # pred_cls = outputs['saliency_scores'] + outputs['pred_logits'].squeeze(-1) - - # pdb.set_trace() - if opt.f_loss_coef == 0: - pred_cls = outputs['saliency_scores'] - elif opt.s_loss_intra_coef == 0: - pred_cls = outputs['pred_logits'].squeeze(-1) - else: - if opt.eval_mode == 'add': - pred_cls = outputs['saliency_scores'] + outputs['pred_logits'].squeeze(-1) - else: - pred_cls = outputs['pred_logits'].squeeze(-1) - - pred_cls = pred_cls.detach().cpu() - scores.append(pred_cls) - map = round(train_val_dataset.evaluate(scores)['mAP'] * 100, 4) - return map - -def train_epoch(model, criterion, train_val_dataset, optimizer, opt, epoch_i, tb_writer): - logger.info(f"[Epoch {epoch_i+1}]") - model.train() - criterion.train() - - train_val_dataset.set_state('train') - train_loader = DataLoader( - train_val_dataset, - collate_fn=start_end_collate_hl, - batch_size=opt.bsz, - num_workers=opt.num_workers, - shuffle=True, - pin_memory=opt.pin_memory - ) - - # init meters - time_meters = defaultdict(AverageMeter) - loss_meters = defaultdict(AverageMeter) - - num_training_examples = len(train_loader) - timer_dataloading = time.time() - for batch_idx, batch in enumerate(train_loader): - time_meters["dataloading_time"].update(time.time() - timer_dataloading) - timer_start = time.time() - model_inputs, targets = prepare_batch_inputs_hl(batch) - time_meters["prepare_inputs_time"].update(time.time() - timer_start) - - timer_start = time.time() - outputs = model(**model_inputs) - loss_dict = criterion(outputs, targets) - weight_dict = criterion.weight_dict - losses = sum(loss_dict[k] * weight_dict[k] for k in loss_dict.keys() if k in weight_dict) - time_meters["model_forward_time"].update(time.time() - timer_start) - - timer_start = time.time() - optimizer.zero_grad() - 
losses.backward() - if opt.grad_clip > 0: - nn.utils.clip_grad_norm_(model.parameters(), opt.grad_clip) - optimizer.step() - time_meters["model_backward_time"].update(time.time() - timer_start) - - loss_dict["loss_overall"] = float(losses) - for k, v in loss_dict.items(): - loss_meters[k].update(float(v) * weight_dict[k] if k in weight_dict else float(v)) - - timer_dataloading = time.time() - if opt.debug and batch_idx == 3: - break - - # print/add logs - tb_writer.add_scalar("Train/lr", float(optimizer.param_groups[0]["lr"]), epoch_i+1) - for k, v in loss_meters.items(): - tb_writer.add_scalar("Train/{}".format(k), v.avg, epoch_i+1) - - to_write = opt.train_log_txt_formatter.format( - time_str=time.strftime("%Y_%m_%d_%H_%M_%S"), - epoch=epoch_i+1, - loss_str=" ".join(["{} {:.4f}".format(k, v.avg) for k, v in loss_meters.items()])) - with open(opt.train_log_filepath, "a") as f: - f.write(to_write) - - logger.info("Epoch time stats:") - for name, meter in time_meters.items(): - d = {k: f"{getattr(meter, k):.4f}" for k in ["max", "min", "avg"]} - logger.info(f"{name} ==> {d}") - -# train in single domain. -def train(model, criterion, optimizer, lr_scheduler, train_val_dataset, opt): - # if opt.device.type == "cuda": - # logger.info("CUDA enabled.") - # model.to(opt.device) - - tb_writer = SummaryWriter(opt.tensorboard_log_dir) - tb_writer.add_text("hyperparameters", dict_to_markdown(vars(opt), max_str_len=None)) - opt.train_log_txt_formatter = "{time_str} [Epoch] {epoch:03d} [Loss] {loss_str}\n" - opt.eval_log_txt_formatter = "{time_str} [Epoch] {epoch:03d} [Loss] {loss_str} [Metrics] {eval_metrics_str}\n" - - prev_best_score = 0. - if opt.start_epoch is None: - start_epoch = -1 if opt.eval_init else 0 - else: - start_epoch = opt.start_epoch - - for epoch_i in trange(start_epoch, opt.n_epoch, desc="Epoch"): - if epoch_i > -1: - train_epoch(model, criterion, train_val_dataset, optimizer, opt, epoch_i, tb_writer) - lr_scheduler.step() - eval_epoch_interval = opt.eval_epoch - if opt.eval_path is not None and (epoch_i + 1) % eval_epoch_interval == 0: - with torch.no_grad(): - scores = eval_epoch(model, train_val_dataset, opt) - tb_writer.add_scalar(f"Eval/HL-{opt.dset_name}-{train_val_dataset.domain}-mAP", float(scores), epoch_i+1) - if prev_best_score < scores: - prev_best_score = scores - checkpoint = { - "model": model.state_dict(), - "optimizer": optimizer.state_dict(), - "epoch": epoch_i, - "opt": opt - } - torch.save(checkpoint, opt.ckpt_filepath.replace(".ckpt", f"_{train_val_dataset.domain}_best.ckpt")) - tb_writer.close() - return prev_best_score - -def start_training(): - logger.info("Setup config, data and model...") - opt = BaseOptions().parse() - set_seed(opt.seed) - - from main.config_hl import TVSUM_SPLITS, YOUTUBE_SPLITS - if opt.dset_name == "tvsum": - domain_splits = TVSUM_SPLITS.keys() - if opt.dset_name == "youtube": - domain_splits = YOUTUBE_SPLITS.keys() - - scores = {} - if opt.lr_warmup > 0: - # total_steps = opt.n_epoch * len(train_dataset) // opt.bsz - total_steps = opt.n_epoch - warmup_steps = opt.lr_warmup if opt.lr_warmup > 1 else int(opt.lr_warmup * total_steps) - opt.lr_warmup = [warmup_steps, total_steps] - - domain_splits = domain_splits if not opt.domain_name else [opt.domain_name] - - for domain in domain_splits: - dataset_config = dict( - dset_name=opt.dset_name, - domain=domain, - data_path=opt.train_path, - v_feat_types=opt.v_feat_types, - v_feat_dirs=opt.v_feat_dirs, - t_feat_dir=opt.t_feat_dir, - use_tef=True - ) - dataloader = DatasetHL(**dataset_config) 
- - model, criterion, optimizer, lr_scheduler = setup_model(opt) - count_parameters(model) - logger.info(f"Start Training {domain}") - best_score = train(model, criterion, optimizer, lr_scheduler, dataloader, opt) - scores[domain] = best_score - scores['AVG'] = sum(scores.values()) / len(scores) - - # save the final results. - save_metrics_path = os.path.join(opt.results_dir, f"best_{opt.dset_name}_{opt.eval_split_name}_preds_metrics.json") - save_json(scores, save_metrics_path, save_pretty=True, sort_keys=False) - - tb_writer = SummaryWriter(opt.tensorboard_log_dir) - tb_writer.add_text(f"HL-{opt.dset_name}", dict_to_markdown(scores, max_str_len=None)) - tb_writer.add_scalar(f"Eval/HL-{opt.dset_name}-avg-mAP-key", float(scores['AVG']), 1) - tb_writer.close() - # return opt.ckpt_filepath.replace(".ckpt", "_best.ckpt"), opt.eval_split_name, opt.eval_path, opt.debug - - print(opt.dset_name) - print(scores) - return - -if __name__ == '__main__': - start_training() - results = logger.info("\n\n\nFINISHED TRAINING!!!") \ No newline at end of file diff --git a/spaces/Konglinu/myai/Dockerfile b/spaces/Konglinu/myai/Dockerfile deleted file mode 100644 index 17d5fc69eb4e25ea7dd048268ede3a8c59ef26c9..0000000000000000000000000000000000000000 --- a/spaces/Konglinu/myai/Dockerfile +++ /dev/null @@ -1,34 +0,0 @@ -# Build Stage -# 使用 golang:alpine 作为构建阶段的基础镜像 -FROM golang:alpine AS builder - -# 添加 git,以便之后能从GitHub克隆项目 -RUN apk --no-cache add git - -# 从 GitHub 克隆 go-proxy-bingai 项目到 /workspace/app 目录下 -RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app - -# 设置工作目录为之前克隆的项目目录 -WORKDIR /workspace/app - -# 编译 go 项目。-ldflags="-s -w" 是为了减少编译后的二进制大小 -RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go - -# Runtime Stage -# 使用轻量级的 alpine 镜像作为运行时的基础镜像 -FROM alpine - -# 设置工作目录 -WORKDIR /workspace/app - -# 从构建阶段复制编译后的二进制文件到运行时镜像中 -COPY --from=builder /workspace/app/go-proxy-bingai . 
- -# 设置环境变量,此处为随机字符 -ENV Go_Proxy_BingAI_USER_TOKEN_1="1089t2vLQ9hA3wezNKMmWZ6EwT4hNDXBoudEeEHWjYIYT1lq8_j39eCPQY6EvHxUQ7o31UmfZ518XFUH6_hLCZqPtTOJW9UeBTnsDXpKxj4YJef5dSa4hKyNrL_zJWIkOGXApadL0YaT6Ub4iEMJW2ljH9ewnAJTPP6Y2TzfyODDUo_9snxEvlQaevmO51F1rwuHEyGtQP0sIO0a09zDQfg" - -# 暴露8080端口 -EXPOSE 8080 - -# 容器启动时运行的命令 -CMD ["/workspace/app/go-proxy-bingai"] \ No newline at end of file diff --git a/spaces/Kreaols/ChuanhuChatGPT/README.md b/spaces/Kreaols/ChuanhuChatGPT/README.md deleted file mode 100644 index 79790f767ded0eb77b8129f8e960c65b8d166c14..0000000000000000000000000000000000000000 --- a/spaces/Kreaols/ChuanhuChatGPT/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: ChuanhuChatGPT -emoji: 🐯 -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.33.1 -app_file: ChuanhuChatbot.py -pinned: false -license: gpl-3.0 -duplicated_from: JohnSmith9982/ChuanhuChatGPT ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/KyanChen/RSPrompter/mmpl/datasets/pl_datamodule.py b/spaces/KyanChen/RSPrompter/mmpl/datasets/pl_datamodule.py deleted file mode 100644 index 36492b5eada36c3b936aa16b9cda2b9e2ae4741f..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmpl/datasets/pl_datamodule.py +++ /dev/null @@ -1,73 +0,0 @@ -from mmpl.registry import DATASETS -import lightning.pytorch as pl -from torch.utils.data import DataLoader -from .builder import build_dataset -from mmengine.registry import FUNCTIONS -from functools import partial - - -def get_collate_fn(dataloader_cfg): - collate_fn_cfg = dataloader_cfg.pop('collate_fn', dict(type='pseudo_collate')) - collate_fn_type = collate_fn_cfg.pop('type') - collate_fn = FUNCTIONS.get(collate_fn_type) - collate_fn = partial(collate_fn, **collate_fn_cfg) # type: ignore - return collate_fn - - -@DATASETS.register_module() -class PLDataModule(pl.LightningDataModule): - def __init__(self, - train_loader=None, - val_loader=None, - test_loader=None, - predict_loader=None, - **kwargs - ): - super().__init__() - self.train_loader = train_loader - self.val_loader = val_loader - self.test_loader = test_loader - self.predict_loader = predict_loader - self.train_dataset = None - self.val_dataset = None - self.test_dataset = None - self.predict_dataset = None - - def prepare_data(self): - pass - - def setup(self, stage: str): - if stage == "fit": - dataset_cfg = self.train_loader.pop('dataset') - self.train_dataset = build_dataset(dataset_cfg) - if self.val_loader is not None: - dataset_cfg = self.val_loader.pop('dataset') - self.val_dataset = build_dataset(dataset_cfg) - if stage == "val": - if self.val_loader is not None: - dataset_cfg = self.val_loader.pop('dataset') - self.val_dataset = build_dataset(dataset_cfg) - if stage == "test": - if self.test_loader is not None: - dataset_cfg = self.test_loader.pop('dataset') - self.test_dataset = build_dataset(dataset_cfg) - if stage == "predict": - if self.predict_loader is not None: - dataset_cfg = self.predict_loader.pop('dataset') - self.predict_dataset = build_dataset(dataset_cfg) - - def train_dataloader(self): - collate_fn = get_collate_fn(self.train_loader) - return DataLoader(self.train_dataset, collate_fn=collate_fn, **self.train_loader) - - def val_dataloader(self): - collate_fn = get_collate_fn(self.val_loader) - return DataLoader(self.val_dataset, collate_fn=collate_fn, **self.val_loader) - - def test_dataloader(self): - collate_fn = get_collate_fn(self.test_loader) - return 
DataLoader(self.test_dataset, collate_fn=collate_fn, **self.test_loader) - - def predict_dataloader(self): - collate_fn = get_collate_fn(self.predict_loader) - return DataLoader(self.predict_dataset, collate_fn=collate_fn, **self.predict_loader) diff --git a/spaces/LinkSoul/LLaSM/static/js/index_demo.js b/spaces/LinkSoul/LLaSM/static/js/index_demo.js deleted file mode 100644 index 20d3e1ec46fd73bcb3693b03d1760e8482fbd0ce..0000000000000000000000000000000000000000 --- a/spaces/LinkSoul/LLaSM/static/js/index_demo.js +++ /dev/null @@ -1,374 +0,0 @@ -window.HELP_IMPROVE_VIDEOJS = false; - -// var INTERP_BASE = "./static/interpolation/stacked"; -var NUM_INTERP_FRAMES = 240; - -var interp_images = []; -// function preloadInterpolationImages() { -// for (var i = 0; i < NUM_INTERP_FRAMES; i++) { -// var path = INTERP_BASE + '/' + String(i).padStart(6, '0') + '.jpg'; -// interp_images[i] = new Image(); -// interp_images[i].src = path; -// } -// } - -// function setInterpolationImage(i) { -// var image = interp_images[i]; -// image.ondragstart = function() { return false; }; -// image.oncontextmenu = function() { return false; }; -// $('#interpolation-image-wrapper').empty().append(image); -// } - - -$(document).ready(function() { - // Check for click events on the navbar burger icon - $(".navbar-burger").click(function() { - // Toggle the "is-active" class on both the "navbar-burger" and the "navbar-menu" - $(".navbar-burger").toggleClass("is-active"); - $(".navbar-menu").toggleClass("is-active"); - - }); - - var options = { - slidesToScroll: 1, - slidesToShow: 3, - loop: true, - infinite: true, - autoplay: false, - autoplaySpeed: 3000, - } - - // Initialize all div with carousel class - var carousels = bulmaCarousel.attach('.carousel', options); - - // Loop on each carousel initialized - for(var i = 0; i < carousels.length; i++) { - // Add listener to event - carousels[i].on('before:show', state => { - console.log(state); - }); - } - - // Access to bulmaCarousel instance of an element - var element = document.querySelector('#my-element'); - if (element && element.bulmaCarousel) { - // bulmaCarousel instance is available as element.bulmaCarousel - element.bulmaCarousel.on('before-show', function(state) { - console.log(state); - }); - } - - /*var player = document.getElementById('interpolation-video'); - player.addEventListener('loadedmetadata', function() { - $('#interpolation-slider').on('input', function(event) { - console.log(this.value, player.duration); - player.currentTime = player.duration / 100 * this.value; - }) - }, false);*/ - // preloadInterpolationImages(); - - // $('#interpolation-slider').on('input', function(event) { - // setInterpolationImage(this.value); - // }); - // setInterpolationImage(0); - // $('#interpolation-slider').prop('max', NUM_INTERP_FRAMES - 1); - - bulmaSlider.attach(); - -}) - - - - - -// 全局初始化 -// connect ws -var ws = null; -var recorder = null; -var isRecording = false; -var vc_enabled = location.search.split('vc=')[1] == '1' ? 
true : false; -var text = '' -var audio_base64 = null; -Recorder.CLog = function(){} //update -var wave = Recorder.WaveView({elem:"#waveform"}); //创建wave对象,写这里面浏览器妥妥的; -const audioPlayer = document.getElementById('audioPlayer'); -const waveformDiv = document.getElementById('waveform'); -const resultsDiv = document.getElementById('results'); -const llasaLoading = document.getElementById('llasaLoading'); -const container = document.getElementById('llasa'); - -// sent text element -function createSentMessageElement(message) { - const sentDiv = document.createElement('div'); - sentDiv.id = 'sent'; - sentDiv.setAttribute('class', 'd-flex flex-row justify-content-end mb-2 pt-1 text-start'); - - const sentMessageP = document.createElement('p'); - sentMessageP.setAttribute('class', 'sent-message small p-2 me-2 mb-1 text-white rounded-3 bg-primary'); - sentMessageP.textContent = message['value']; - sentMessageP.id = message['cid'] - - const imageDiv = document.createElement('div'); - const senderImage = document.createElement('img'); - senderImage.setAttribute('src', './images/user.png'); - senderImage.setAttribute('class', 'rounded-4'); - senderImage.setAttribute('alt', 'avatar 1'); - senderImage.setAttribute('height', '30'); - senderImage.setAttribute('width', '30'); - imageDiv.appendChild(senderImage); - - sentDiv.appendChild(sentMessageP); - sentDiv.appendChild(imageDiv); - - return sentDiv; -} - -// Function to add a new sent message to the DOM -function addSentMessageToDOM(message) { - const sentDiv = createSentMessageElement(message); - resultsDiv.appendChild(sentDiv); -} - -function createRecieveMessageElement(message) { - const responseDiv = document.createElement("div"); - responseDiv.id = "response"; - responseDiv.classList.add("d-flex", "flex-row", "justify-content-start", "pt-2", "mb-2"); - - const imageDiv = document.createElement('div') - const avatarImg = document.createElement("img"); - avatarImg.src = "../../images/gpt.png"; - avatarImg.classList.add("rounded-4"); - avatarImg.alt = "avatar 1"; - avatarImg.height = 30; - avatarImg.width = 30; - imageDiv.appendChild(avatarImg) - responseDiv.appendChild(imageDiv); - - const responseMessageP = document.createElement("p"); - responseMessageP.id = message['cid']; - responseMessageP.classList.add("small", "p-2", "ms-2", "mb-1", "rounded-3"); - responseMessageP.style.backgroundColor = "#f5f6f7"; - responseMessageP.innerText = message['value']; - responseDiv.appendChild(responseMessageP); - - return responseDiv; -} - -// Function to add a new recieve message to the DOM -function addRecieveMessageToDOM(message) { - const reciDiv = createRecieveMessageElement(message); - resultsDiv.appendChild(reciDiv); - resultsDiv.scrollTo(0, resultsDiv.scrollHeight); -} - -function createSentAudioMessageElement(message) { - const sentDiv = document.createElement('div'); - sentDiv.id = 'sent'; - sentDiv.className = 'd-flex flex-row justify-content-end mb-2 pt-1 text-start'; - - const audio = document.createElement('audio'); - audio.controls = true; - audio.id = message['cid']; - audio.className = 'sent-message p-2 me-2 bg-primagry'; - // audio.style.width = '200px'; - - const sourceElement = document.createElement('source'); - sourceElement.src = message['value']; - // sourceElement.type = 'audio/ogg'; - - const unsupportedText = document.createTextNode('Your browser does not support the audio element.'); - - audio.appendChild(sourceElement); - audio.appendChild(unsupportedText); - - const imageDiv = document.createElement('div') - const image = 
document.createElement('img'); - image.src = '../../images/user.png'; - image.id = 'sender-image'; - image.className = 'rounded-4'; - image.alt = 'avatar 1'; - image.height = 30; - image.width = 30; - imageDiv.appendChild(image); - - sentDiv.appendChild(audio); - sentDiv.appendChild(imageDiv); - - return sentDiv; -} - -// Function to add a new audio sent message to the DOM -function addAudioSentMessageToDOM(message) { - const sentDiv = createSentAudioMessageElement(message); - resultsDiv.appendChild(sentDiv); - resultsDiv.scrollTo(0, resultsDiv.scrollHeight); -} - -// stream update response -function updateResponse(cID, answer) { - const responseP = document.getElementById("a_text_" + cID); - responseP.innerText = answer; -} - -// connect ws -window.onload = async () => { - await connect(); -} - -async function connect() { - // url = ((window.location.protocol === "https:") ? "wss://" : "ws://") + window.location.host + "/api"; - url = "wss://demo.linksoul.ai/alm/api"; - ws = new WebSocket(url); - ws.onopen = function (e) { - console.log('握手成功'); - if (ws.readyState == 1) { //ws进入连接状态,则每隔500毫秒发送一包数据 - console.log('连接状态成功'); - // resultsDiv.style.display = ''; - // llasaLoading.style.display = 'none'; - container.style.opacity = 1; - llasaLoading.style.display = 'none'; - } - }; - - ws.onmessage = function (e) { - console.log(e['data']) - var response = JSON.parse(e['data']); - if(response["action"] == "qa"){ - // nothing to do - if(response['msg'] == 'ok') { - console.log(response["data"]) - updateResponse(response['data']['cid'], response['data']['answer']) - }else{ - console.log(response['msg']); - } - } - } - ws.onerror = function (err) { - console.info('ws error: '+err) - } - - ws.onclose=function(e){ - console.info('ws close: '+e); - }; -} -const sleep = (delay) => new Promise((resolve) => setTimeout(resolve, delay)) -const submitTextButton = document.getElementById('send_button'); -submitTextButton.onclick = async () => { - await sendMessage(); -} -async function sendMessage() { - if(ws == null || ws.readyState != 1) { - // alert('服务未连接,请刷新页面'); - // return; - await connect(); - await sleep(800); - } - var userTextDiv = document.getElementById('user-text'); - text = userTextDiv.value - userTextDiv.value = '' - console.log('user input text', text); - console.log('user input audio', audio_base64); - if (text.length == 0 && audio_base64 == null) return; - var cid = crypto.randomUUID(); - if (text.length > 0) { - addSentMessageToDOM({ - 'cid': "q_text_" + cid, - 'from': 'human', - 'value': text, - 'type': 'text' - }); - } - if (audio_base64 != null) { - addAudioSentMessageToDOM({ - 'cid': "q_audio_" + cid, - 'from': 'human', - 'value': audio_base64, - 'type': 'audio' - }); - } - - ws.send(JSON.stringify({"action": "qa", "data":{"cid": cid, "text": text, "audio": audio_base64, "vc_enabled": vc_enabled}})); - - addRecieveMessageToDOM({ - 'cid': "a_text_" + cid, - 'from': 'gpt', - 'value': '', - 'type': 'text' - }) - if (text != '') { - text = ''; - } - if (audio_base64 != null) { - audio_base64 = null; - // 清空缓存区 - audioPlayer.src = ''; - audioPlayer.style.display = 'none'; - } -} - -function blobToDataURI(blob, callback) { - var reader = new FileReader(); - reader.onload = function (e) { - callback(e.target.result); - } - reader.readAsDataURL(blob); -} - -const resetButton = document.getElementById('delete_button'); -resetButton.onclick = () => { - clear(); - resultsDiv.innerHTML = ''; - audioPlayer.src = ''; - audioPlayer.style.display = 'none'; - waveformDiv.style.display = 'none'; -} 
-function clear() {//update - ws.send(JSON.stringify({"action": "clear"})); -} - -const recordButton = document.getElementById('start_button'); -recordButton.onclick = () => { - record_audio(); - -} - -function record_audio() {//update - if (!isRecording) { - recorder = Recorder({type:"mp3", sampleRate:44100, bitRate:128, onProcess:function(buffers,powerLevel,bufferDuration,bufferSampleRate,newBufferIdx,asyncEnd){ - wave&&wave.input(buffers[buffers.length-1],powerLevel,bufferSampleRate); - }}); - recorder.open(function(){ - isRecording = true; - recorder.start(); - audioPlayer.style.display = 'none'; - waveformDiv.style.display = ''; - recordButton.style.filter = "invert(18%) sepia(66%) saturate(5808%) hue-rotate(338deg) brightness(91%) contrast(125%)"; - },function(msg,isUserNotAllow){ - alert("请允许浏览器获取麦克风录音权限"); - console.log((isUserNotAllow?"UserNotAllow, ":"")+"无法录音:"+msg); - }); - }else { - isRecording = false; - recorder.stop(function(blob, duration){ - audioPlayer.style.display = ''; - waveformDiv.style.display = 'none'; - blobToDataURI(blob, function(audio_base64_data){ - audio_base64 = audio_base64_data; - // document.getElementById('audioPlayer').src = URL.createObjectURL(blob); - recorder.close(); - recorder=null; - // 移动 audio 到暂存区 - audioPlayer.src = audio_base64; - // sendMessage(); - }); - },function(msg){ - alert("录音失败"); - console.log("录音失败:"+msg); - recorder.close(); - recorder=null; - }); - recordButton.style.filter = null; - } -} - - diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/recog_models/robust_scanner.py b/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/recog_models/robust_scanner.py deleted file mode 100644 index 4cc2fa108855a102e1f4e48b6f94bac3b7f7d644..0000000000000000000000000000000000000000 --- a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/recog_models/robust_scanner.py +++ /dev/null @@ -1,24 +0,0 @@ -label_convertor = dict( - type='AttnConvertor', dict_type='DICT90', with_unknown=True) - -hybrid_decoder = dict(type='SequenceAttentionDecoder') - -position_decoder = dict(type='PositionAttentionDecoder') - -model = dict( - type='RobustScanner', - backbone=dict(type='ResNet31OCR'), - encoder=dict( - type='ChannelReductionEncoder', - in_channels=512, - out_channels=128, - ), - decoder=dict( - type='RobustScannerDecoder', - dim_input=512, - dim_model=128, - hybrid_decoder=hybrid_decoder, - position_decoder=position_decoder), - loss=dict(type='SARLoss'), - label_convertor=label_convertor, - max_seq_len=30) diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/textrecog/sar/sar_r31_parallel_decoder_academic.py b/spaces/Loren/Streamlit_OCR_comparator/configs/textrecog/sar/sar_r31_parallel_decoder_academic.py deleted file mode 100644 index 983378118b4d589f531a7f401a06d238966a45d4..0000000000000000000000000000000000000000 --- a/spaces/Loren/Streamlit_OCR_comparator/configs/textrecog/sar/sar_r31_parallel_decoder_academic.py +++ /dev/null @@ -1,33 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', '../../_base_/recog_models/sar.py', - '../../_base_/schedules/schedule_adam_step_5e.py', - '../../_base_/recog_pipelines/sar_pipeline.py', - '../../_base_/recog_datasets/ST_SA_MJ_real_train.py', - '../../_base_/recog_datasets/academic_test.py' -] - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline = {{_base_.train_pipeline}} -test_pipeline = {{_base_.test_pipeline}} - -data = dict( - samples_per_gpu=64, - workers_per_gpu=2, - val_dataloader=dict(samples_per_gpu=1), - 
test_dataloader=dict(samples_per_gpu=1), - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline)) - -evaluation = dict(interval=1, metric='acc') diff --git a/spaces/MCkernick/Image_Restoration_Colorization/Global/models/pix2pixHD_model_DA.py b/spaces/MCkernick/Image_Restoration_Colorization/Global/models/pix2pixHD_model_DA.py deleted file mode 100644 index 617589df30ef1d808115332f76a77acaaeba099c..0000000000000000000000000000000000000000 --- a/spaces/MCkernick/Image_Restoration_Colorization/Global/models/pix2pixHD_model_DA.py +++ /dev/null @@ -1,372 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. - -import numpy as np -import torch -import os -from torch.autograd import Variable -from util.image_pool import ImagePool -from .base_model import BaseModel -from . import networks - - -class Pix2PixHDModel(BaseModel): - def name(self): - return 'Pix2PixHDModel' - - def init_loss_filter(self, use_gan_feat_loss, use_vgg_loss): - flags = (True, use_gan_feat_loss, use_vgg_loss, True, True, True, True, True, True) - - def loss_filter(g_gan, g_gan_feat, g_vgg, g_kl, d_real, d_fake, g_featd, featd_real, featd_fake): - return [l for (l, f) in zip((g_gan, g_gan_feat, g_vgg, g_kl, d_real, d_fake, g_featd, featd_real, featd_fake), flags) if f] - - return loss_filter - - def initialize(self, opt): - BaseModel.initialize(self, opt) - if opt.resize_or_crop != 'none' or not opt.isTrain: # when training at full res this causes OOM - torch.backends.cudnn.benchmark = True - self.isTrain = opt.isTrain - self.use_features = opt.instance_feat or opt.label_feat ## Clearly it is false - self.gen_features = self.use_features and not self.opt.load_features ## it is also false - input_nc = opt.label_nc if opt.label_nc != 0 else opt.input_nc ## Just is the origin input channel # - - ##### define networks - # Generator network - netG_input_nc = input_nc - if not opt.no_instance: - netG_input_nc += 1 - if self.use_features: - netG_input_nc += opt.feat_num - self.netG = networks.define_G(netG_input_nc, opt.output_nc, opt.ngf, opt.netG, opt.k_size, - opt.n_downsample_global, opt.n_blocks_global, opt.n_local_enhancers, - opt.n_blocks_local, opt.norm, gpu_ids=self.gpu_ids, opt=opt) - - # Discriminator network - if self.isTrain: - use_sigmoid = opt.no_lsgan - netD_input_nc = opt.output_nc if opt.no_cgan else input_nc + opt.output_nc - if not opt.no_instance: - netD_input_nc += 1 - self.netD = networks.define_D(netD_input_nc, opt.ndf, opt.n_layers_D, opt,opt.norm, use_sigmoid, - opt.num_D, not opt.no_ganFeat_loss, gpu_ids=self.gpu_ids) - - self.feat_D=networks.define_D(64, opt.ndf, opt.n_layers_D, opt, opt.norm, use_sigmoid, - 1, not opt.no_ganFeat_loss, gpu_ids=self.gpu_ids) - - if self.opt.verbose: - print('---------- Networks initialized -------------') - - # load networks - if not self.isTrain or opt.continue_train or opt.load_pretrain: - pretrained_path = '' if not self.isTrain else opt.load_pretrain - self.load_network(self.netG, 'G', opt.which_epoch, pretrained_path) - - print("---------- G Networks reloaded -------------") - if self.isTrain: - self.load_network(self.netD, 'D', opt.which_epoch, pretrained_path) - self.load_network(self.feat_D, 'feat_D', opt.which_epoch, pretrained_path) - print("---------- D Networks reloaded -------------") - - - # set loss functions 
and optimizers - if self.isTrain: - if opt.pool_size > 0 and (len(self.gpu_ids)) > 1: ## The pool_size is 0! - raise NotImplementedError("Fake Pool Not Implemented for MultiGPU") - self.fake_pool = ImagePool(opt.pool_size) - self.old_lr = opt.lr - - # define loss functions - self.loss_filter = self.init_loss_filter(not opt.no_ganFeat_loss, not opt.no_vgg_loss) - - self.criterionGAN = networks.GANLoss(use_lsgan=not opt.no_lsgan, tensor=self.Tensor) - self.criterionFeat = torch.nn.L1Loss() - if not opt.no_vgg_loss: - self.criterionVGG = networks.VGGLoss_torch(self.gpu_ids) - - # Names so we can breakout loss - self.loss_names = self.loss_filter('G_GAN', 'G_GAN_Feat', 'G_VGG', 'G_KL', 'D_real', 'D_fake', 'G_featD', 'featD_real','featD_fake') - - # initialize optimizers - # optimizer G - params = list(self.netG.parameters()) - if self.gen_features: - params += list(self.netE.parameters()) - self.optimizer_G = torch.optim.Adam(params, lr=opt.lr, betas=(opt.beta1, 0.999)) - - # optimizer D - params = list(self.netD.parameters()) - self.optimizer_D = torch.optim.Adam(params, lr=opt.lr, betas=(opt.beta1, 0.999)) - - params = list(self.feat_D.parameters()) - self.optimizer_featD = torch.optim.Adam(params, lr=opt.lr, betas=(opt.beta1, 0.999)) - - print("---------- Optimizers initialized -------------") - - if opt.continue_train: - self.load_optimizer(self.optimizer_D, 'D', opt.which_epoch) - self.load_optimizer(self.optimizer_G, "G", opt.which_epoch) - self.load_optimizer(self.optimizer_featD,'featD',opt.which_epoch) - for param_groups in self.optimizer_D.param_groups: - self.old_lr = param_groups['lr'] - - print("---------- Optimizers reloaded -------------") - print("---------- Current LR is %.8f -------------" % (self.old_lr)) - - ## We also want to re-load the parameters of optimizer. 
- - def encode_input(self, label_map, inst_map=None, real_image=None, feat_map=None, infer=False): - if self.opt.label_nc == 0: - input_label = label_map.data.cuda() - else: - # create one-hot vector for label map - size = label_map.size() - oneHot_size = (size[0], self.opt.label_nc, size[2], size[3]) - input_label = torch.cuda.FloatTensor(torch.Size(oneHot_size)).zero_() - input_label = input_label.scatter_(1, label_map.data.long().cuda(), 1.0) - if self.opt.data_type == 16: - input_label = input_label.half() - - # get edges from instance map - if not self.opt.no_instance: - inst_map = inst_map.data.cuda() - edge_map = self.get_edges(inst_map) - input_label = torch.cat((input_label, edge_map), dim=1) - input_label = Variable(input_label, volatile=infer) - - # real images for training - if real_image is not None: - real_image = Variable(real_image.data.cuda()) - - # instance map for feature encoding - if self.use_features: - # get precomputed feature maps - if self.opt.load_features: - feat_map = Variable(feat_map.data.cuda()) - if self.opt.label_feat: - inst_map = label_map.cuda() - - return input_label, inst_map, real_image, feat_map - - def discriminate(self, input_label, test_image, use_pool=False): - if input_label is None: - input_concat = test_image.detach() - else: - input_concat = torch.cat((input_label, test_image.detach()), dim=1) - if use_pool: - fake_query = self.fake_pool.query(input_concat) - return self.netD.forward(fake_query) - else: - return self.netD.forward(input_concat) - - def feat_discriminate(self,input): - - return self.feat_D.forward(input.detach()) - - - def forward(self, label, inst, image, feat, infer=False): - # Encode Inputs - input_label, inst_map, real_image, feat_map = self.encode_input(label, inst, image, feat) - - # Fake Generation - if self.use_features: - if not self.opt.load_features: - feat_map = self.netE.forward(real_image, inst_map) - input_concat = torch.cat((input_label, feat_map), dim=1) - else: - input_concat = input_label - hiddens = self.netG.forward(input_concat, 'enc') - noise = Variable(torch.randn(hiddens.size()).cuda(hiddens.data.get_device())) - # This is a reduced VAE implementation where we assume the outputs are multivariate Gaussian distribution with mean = hiddens and std_dev = all ones. 
- # We follow the the VAE of MUNIT (https://github.com/NVlabs/MUNIT/blob/master/networks.py) - fake_image = self.netG.forward(hiddens + noise, 'dec') - - #################### - ##### GAN for the intermediate feature - real_old_feat =[] - syn_feat = [] - for index,x in enumerate(inst): - if x==1: - real_old_feat.append(hiddens[index].unsqueeze(0)) - else: - syn_feat.append(hiddens[index].unsqueeze(0)) - L=min(len(real_old_feat),len(syn_feat)) - real_old_feat=real_old_feat[:L] - syn_feat=syn_feat[:L] - real_old_feat=torch.cat(real_old_feat,0) - syn_feat=torch.cat(syn_feat,0) - - pred_fake_feat=self.feat_discriminate(real_old_feat) - loss_featD_fake = self.criterionGAN(pred_fake_feat, False) - pred_real_feat=self.feat_discriminate(syn_feat) - loss_featD_real = self.criterionGAN(pred_real_feat, True) - - pred_fake_feat_G=self.feat_D.forward(real_old_feat) - loss_G_featD=self.criterionGAN(pred_fake_feat_G,True) - - - ##################################### - if self.opt.no_cgan: - # Fake Detection and Loss - pred_fake_pool = self.discriminate(None, fake_image, use_pool=True) - loss_D_fake = self.criterionGAN(pred_fake_pool, False) - - # Real Detection and Loss - pred_real = self.discriminate(None, real_image) - loss_D_real = self.criterionGAN(pred_real, True) - - # GAN loss (Fake Passability Loss) - pred_fake = self.netD.forward(fake_image) - loss_G_GAN = self.criterionGAN(pred_fake, True) - else: - # Fake Detection and Loss - pred_fake_pool = self.discriminate(input_label, fake_image, use_pool=True) - loss_D_fake = self.criterionGAN(pred_fake_pool, False) - - # Real Detection and Loss - pred_real = self.discriminate(input_label, real_image) - loss_D_real = self.criterionGAN(pred_real, True) - - # GAN loss (Fake Passability Loss) - pred_fake = self.netD.forward(torch.cat((input_label, fake_image), dim=1)) - loss_G_GAN = self.criterionGAN(pred_fake, True) - - loss_G_kl = torch.mean(torch.pow(hiddens, 2)) * self.opt.kl - - # GAN feature matching loss - loss_G_GAN_Feat = 0 - if not self.opt.no_ganFeat_loss: - feat_weights = 4.0 / (self.opt.n_layers_D + 1) - D_weights = 1.0 / self.opt.num_D - for i in range(self.opt.num_D): - for j in range(len(pred_fake[i]) - 1): - loss_G_GAN_Feat += D_weights * feat_weights * \ - self.criterionFeat(pred_fake[i][j], - pred_real[i][j].detach()) * self.opt.lambda_feat - - # VGG feature matching loss - loss_G_VGG = 0 - if not self.opt.no_vgg_loss: - loss_G_VGG = self.criterionVGG(fake_image, real_image) * self.opt.lambda_feat - - # Only return the fake_B image if necessary to save BW - return [self.loss_filter(loss_G_GAN, loss_G_GAN_Feat, loss_G_VGG, loss_G_kl, loss_D_real, loss_D_fake,loss_G_featD, loss_featD_real, loss_featD_fake), - None if not infer else fake_image] - - def inference(self, label, inst, image=None, feat=None): - # Encode Inputs - image = Variable(image) if image is not None else None - input_label, inst_map, real_image, _ = self.encode_input(Variable(label), Variable(inst), image, infer=True) - - # Fake Generation - if self.use_features: - if self.opt.use_encoded_image: - # encode the real image to get feature map - feat_map = self.netE.forward(real_image, inst_map) - else: - # sample clusters from precomputed features - feat_map = self.sample_features(inst_map) - input_concat = torch.cat((input_label, feat_map), dim=1) - else: - input_concat = input_label - - if torch.__version__.startswith('0.4'): - with torch.no_grad(): - fake_image = self.netG.forward(input_concat) - else: - fake_image = self.netG.forward(input_concat) - return fake_image - - 
def sample_features(self, inst): - # read precomputed feature clusters - cluster_path = os.path.join(self.opt.checkpoints_dir, self.opt.name, self.opt.cluster_path) - features_clustered = np.load(cluster_path, encoding='latin1').item() - - # randomly sample from the feature clusters - inst_np = inst.cpu().numpy().astype(int) - feat_map = self.Tensor(inst.size()[0], self.opt.feat_num, inst.size()[2], inst.size()[3]) - for i in np.unique(inst_np): - label = i if i < 1000 else i // 1000 - if label in features_clustered: - feat = features_clustered[label] - cluster_idx = np.random.randint(0, feat.shape[0]) - - idx = (inst == int(i)).nonzero() - for k in range(self.opt.feat_num): - feat_map[idx[:, 0], idx[:, 1] + k, idx[:, 2], idx[:, 3]] = feat[cluster_idx, k] - if self.opt.data_type == 16: - feat_map = feat_map.half() - return feat_map - - def encode_features(self, image, inst): - image = Variable(image.cuda(), volatile=True) - feat_num = self.opt.feat_num - h, w = inst.size()[2], inst.size()[3] - block_num = 32 - feat_map = self.netE.forward(image, inst.cuda()) - inst_np = inst.cpu().numpy().astype(int) - feature = {} - for i in range(self.opt.label_nc): - feature[i] = np.zeros((0, feat_num + 1)) - for i in np.unique(inst_np): - label = i if i < 1000 else i // 1000 - idx = (inst == int(i)).nonzero() - num = idx.size()[0] - idx = idx[num // 2, :] - val = np.zeros((1, feat_num + 1)) - for k in range(feat_num): - val[0, k] = feat_map[idx[0], idx[1] + k, idx[2], idx[3]].data[0] - val[0, feat_num] = float(num) / (h * w // block_num) - feature[label] = np.append(feature[label], val, axis=0) - return feature - - def get_edges(self, t): - edge = torch.cuda.ByteTensor(t.size()).zero_() - edge[:, :, :, 1:] = edge[:, :, :, 1:] | (t[:, :, :, 1:] != t[:, :, :, :-1]) - edge[:, :, :, :-1] = edge[:, :, :, :-1] | (t[:, :, :, 1:] != t[:, :, :, :-1]) - edge[:, :, 1:, :] = edge[:, :, 1:, :] | (t[:, :, 1:, :] != t[:, :, :-1, :]) - edge[:, :, :-1, :] = edge[:, :, :-1, :] | (t[:, :, 1:, :] != t[:, :, :-1, :]) - if self.opt.data_type == 16: - return edge.half() - else: - return edge.float() - - def save(self, which_epoch): - self.save_network(self.netG, 'G', which_epoch, self.gpu_ids) - self.save_network(self.netD, 'D', which_epoch, self.gpu_ids) - self.save_network(self.feat_D,'featD',which_epoch,self.gpu_ids) - - self.save_optimizer(self.optimizer_G, "G", which_epoch) - self.save_optimizer(self.optimizer_D, "D", which_epoch) - self.save_optimizer(self.optimizer_featD,'featD',which_epoch) - - if self.gen_features: - self.save_network(self.netE, 'E', which_epoch, self.gpu_ids) - - def update_fixed_params(self): - - params = list(self.netG.parameters()) - if self.gen_features: - params += list(self.netE.parameters()) - self.optimizer_G = torch.optim.Adam(params, lr=self.opt.lr, betas=(self.opt.beta1, 0.999)) - if self.opt.verbose: - print('------------ Now also finetuning global generator -----------') - - def update_learning_rate(self): - lrd = self.opt.lr / self.opt.niter_decay - lr = self.old_lr - lrd - for param_group in self.optimizer_D.param_groups: - param_group['lr'] = lr - for param_group in self.optimizer_G.param_groups: - param_group['lr'] = lr - for param_group in self.optimizer_featD.param_groups: - param_group['lr'] = lr - if self.opt.verbose: - print('update learning rate: %f -> %f' % (self.old_lr, lr)) - self.old_lr = lr - - -class InferenceModel(Pix2PixHDModel): - def forward(self, inp): - label, inst = inp - return self.inference(label, inst) diff --git 
a/spaces/Marshalls/testmtd/analysis/aistplusplus_api/smpl/smpl_webuser/hello_world/render_smpl.py b/spaces/Marshalls/testmtd/analysis/aistplusplus_api/smpl/smpl_webuser/hello_world/render_smpl.py deleted file mode 100644 index 7370e8ab9f750b49a283b2cf4d71d4ca24ef7066..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/analysis/aistplusplus_api/smpl/smpl_webuser/hello_world/render_smpl.py +++ /dev/null @@ -1,97 +0,0 @@ -''' -Copyright 2015 Matthew Loper, Naureen Mahmood and the Max Planck Gesellschaft. All rights reserved. -This software is provided for research purposes only. -By using this software you agree to the terms of the SMPL Model license here http://smpl.is.tue.mpg.de/license - -More information about SMPL is available here http://smpl.is.tue.mpg. -For comments or questions, please email us at: smpl@tuebingen.mpg.de - - -Please Note: -============ -This is a demo version of the script for driving the SMPL model with python. -We would be happy to receive comments, help and suggestions on improving this code -and in making it available on more platforms. - - -System Requirements: -==================== -Operating system: OSX, Linux - -Python Dependencies: -- Numpy & Scipy [http://www.scipy.org/scipylib/download.html] -- Chumpy [https://github.com/mattloper/chumpy] -- OpenCV [http://opencv.org/downloads.html] - --> (alternatively: matplotlib [http://matplotlib.org/downloads.html]) - - -About the Script: -================= -This script demonstrates loading the smpl model and rendering it using OpenDR -to render and OpenCV to display (or alternatively matplotlib can also be used -for display, as shown in commented code below). - -This code shows how to: - - Load the SMPL model - - Edit pose & shape parameters of the model to create a new body in a new pose - - Create an OpenDR scene (with a basic renderer, camera & light) - - Render the scene using OpenCV / matplotlib - - -Running the Hello World code: -============================= -Inside Terminal, navigate to the smpl/webuser/hello_world directory. 
You can run -the hello world script now by typing the following: -> python render_smpl.py - - -''' - -import numpy as np -from opendr.renderer import ColoredRenderer -from opendr.lighting import LambertianPointLight -from opendr.camera import ProjectPoints -from smpl_webuser.serialization import load_model - -## Load SMPL model (here we load the female model) -m = load_model('../../models/basicModel_f_lbs_10_207_0_v1.0.0.pkl') - -## Assign random pose and shape parameters -m.pose[:] = np.random.rand(m.pose.size) * .2 -m.betas[:] = np.random.rand(m.betas.size) * .03 -m.pose[0] = np.pi - -## Create OpenDR renderer -rn = ColoredRenderer() - -## Assign attributes to renderer -w, h = (640, 480) - -rn.camera = ProjectPoints(v=m, rt=np.zeros(3), t=np.array([0, 0, 2.]), f=np.array([w,w])/2., c=np.array([w,h])/2., k=np.zeros(5)) -rn.frustum = {'near': 1., 'far': 10., 'width': w, 'height': h} -rn.set(v=m, f=m.f, bgcolor=np.zeros(3)) - -## Construct point light source -rn.vc = LambertianPointLight( - f=m.f, - v=rn.v, - num_verts=len(m), - light_pos=np.array([-1000,-1000,-2000]), - vc=np.ones_like(m)*.9, - light_color=np.array([1., 1., 1.])) - - -## Show it using OpenCV -import cv2 -cv2.imshow('render_SMPL', rn.r) -print ('..Print any key while on the display window') -cv2.waitKey(0) -cv2.destroyAllWindows() - - -## Could also use matplotlib to display -# import matplotlib.pyplot as plt -# plt.ion() -# plt.imshow(rn.r) -# plt.show() -# import pdb; pdb.set_trace() \ No newline at end of file diff --git a/spaces/Marshalls/testmtd/feature_extraction/madmom/evaluation/tempo.py b/spaces/Marshalls/testmtd/feature_extraction/madmom/evaluation/tempo.py deleted file mode 100644 index 62f0d761314cc8e7b7a20eda0adaa81af7df08e2..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/feature_extraction/madmom/evaluation/tempo.py +++ /dev/null @@ -1,365 +0,0 @@ -# encoding: utf-8 -# pylint: disable=no-member -# pylint: disable=invalid-name -# pylint: disable=too-many-arguments -""" -This module contains tempo evaluation functionality. - -""" - -from __future__ import absolute_import, division, print_function - -import warnings -import numpy as np - -from . import EvaluationMixin, MeanEvaluation, evaluation_io -from ..io import load_tempo - -# default tempo evaluation values -TOLERANCE = 0.04 -DOUBLE = True -TRIPLE = True - - -# function to sort tempi -def sort_tempo(tempo): - """ - Sort tempi according to their strengths. - - Parameters - ---------- - tempo : numpy array, shape (num_tempi, 2) - Tempi (first column) and their relative strength (second column). - - Returns - ------- - tempi : numpy array, shape (num_tempi, 2) - Tempi sorted according to their strength. - - """ - tempo = np.array(tempo, copy=False, ndmin=1) - if tempo.ndim != 2: - raise ValueError('`tempo` has no strength information, cannot sort ' - 'them.') - tempi = tempo[:, 0] - strengths = tempo[:, 1] - # Note: use 'mergesort', because we want a stable sorting algorithm - # which keeps the order of the keys in case of duplicate keys - # but we need to apply this (-strengths) trick because we want - # tempi with uniformly distributed strengths to keep their order - sort_idx = (-strengths).argsort(kind='mergesort') - tempi = tempi[sort_idx] - strengths = strengths[sort_idx] - return np.vstack((tempi, strengths)).T - - -# this evaluation function can evaluate multiple tempi simultaneously -def tempo_evaluation(detections, annotations, tolerance=TOLERANCE): - """ - Calculate the tempo P-Score, at least one and all tempi correct. 
- - Parameters - ---------- - detections : list of tuples or numpy array - Detected tempi (rows, first column) and their relative strengths - (second column). - annotations : list or numpy array - Annotated tempi (rows, first column) and their relative strengths - (second column). - tolerance : float, optional - Evaluation tolerance (max. allowed deviation). - - Returns - ------- - pscore : float - P-Score. - at_least_one : bool - At least one tempo correctly identified. - all : bool - All tempi correctly identified. - - Notes - ----- - All given detections are evaluated against all annotations according to the - relative strengths given. If no strengths are given, evenly distributed - strengths are assumed. If the strengths do not sum to 1, they will be - normalized. - - References - ---------- - .. [1] M. McKinney, D. Moelants, M. Davies and A. Klapuri, - "Evaluation of audio beat tracking and music tempo extraction - algorithms", - Journal of New Music Research, vol. 36, no. 1, 2007. - - """ - # neither detections nor annotations are given - if len(detections) == 0 and len(annotations) == 0: - # perfect result - return 1., True, True - # either detections or annotations are empty - if len(detections) == 0 or len(annotations) == 0: - # worst result - return 0., False, False - # tolerance must be greater than 0 - if float(tolerance) <= 0: - raise ValueError('tolerance must be greater than 0') - # make sure the annotations and detections have a float dtype - detections = np.array(detections, dtype=np.float, ndmin=1) - annotations = np.array(annotations, dtype=np.float, ndmin=1) - # extract the detected tempi, ignore the strengths - if detections.ndim == 2: - detections = detections[:, 0] - # extract the annotated tempi and strengths - strengths = [] - if annotations.ndim == 2: - # Note: extract the strength before using only the tempo annotations - strengths = annotations[:, 1] - annotations = annotations[:, 0] - # strengths must sum up to 1 - strengths_sum = np.sum(strengths) - if strengths_sum == 0: - # uniformly distribute strengths - warnings.warn('no annotated tempo strengths given, assuming a uniform ' - 'distribution') - strengths = np.ones_like(annotations) / float(len(annotations)) - elif strengths_sum != 1: - # normalize strengths - warnings.warn('annotated tempo strengths do not sum to 1, normalizing') - strengths /= float(strengths_sum) - # test all detected tempi against all annotated tempi - errors = np.abs(1 - (detections[:, np.newaxis] / annotations)) - # correctly identified annotation tempi - correct = np.asarray(np.sum(errors <= tolerance, axis=0), np.bool) - # the P-Score is the sum of the strengths of the correctly identified tempi - pscore = np.sum(strengths[correct]) - # return the scores - # TODO: also return the errors? - return pscore, correct.any(), correct.all() - - -# basic tempo evaluation -class TempoEvaluation(EvaluationMixin): - """ - Tempo evaluation class. - - Parameters - ---------- - detections : str, list of tuples or numpy array - Detected tempi (rows) and their strengths (columns). - If a file name is given, load them from this file. - annotations : str, list or numpy array - Annotated ground truth tempi (rows) and their strengths (columns). - If a file name is given, load them from this file. - tolerance : float, optional - Evaluation tolerance (max. allowed deviation). - double : bool, optional - Include double and half tempo variations. - triple : bool, optional - Include triple and third tempo variations. 
- sort : bool, optional - Sort the tempi by their strengths (descending order). - max_len : bool, optional - Evaluate at most `max_len` tempi. - name : str, optional - Name of the evaluation to be displayed. - - Notes - ----- - For P-Score, the number of detected tempi will be limited to the number - of annotations (if not further limited by `max_len`). - For Accuracy 1 & 2 only one detected tempo is used. Depending on `sort`, - this can be either the first or the strongest one. - - """ - METRIC_NAMES = [ - ('pscore', 'P-score'), - ('any', 'one tempo correct'), - ('all', 'both tempi correct'), - ('acc1', 'Accuracy 1'), - ('acc2', 'Accuracy 2') - ] - - def __init__(self, detections, annotations, tolerance=TOLERANCE, - double=DOUBLE, triple=TRIPLE, sort=True, max_len=None, - name=None, **kwargs): - # pylint: disable=unused-argument - # convert to numpy array - detections = np.array(detections, dtype=np.float, ndmin=1) - annotations = np.array(annotations, dtype=np.float, ndmin=1) - if sort and detections.ndim == 2: - detections = sort_tempo(detections) - if sort and annotations.ndim == 2: - annotations = sort_tempo(annotations) - # truncate detections and detections to the same length - if max_len: - detections = detections[:max_len] - annotations = annotations[:max_len] - # evaluate P-score with all tempo annotations - self.pscore, self.any, self.all = tempo_evaluation( - detections, annotations, tolerance) - # evaluate accuracies only with the strongest/first tempo - # Note: the strengths are irrelevant or acc1 & acc2 calculation - # the accuracies correspond to either any or all tempi - # evaluate acc1 (i.e. any of the annotated tempi) - self.acc1 = tempo_evaluation( - detections[:1], annotations[:1], tolerance)[1] - # evaluate acc2 like acc1 but include double/half & triple/third tempi - try: - tempi = annotations[:1, 0].copy() - except IndexError: - tempi = annotations[:1].copy() - tempi_ = tempi.copy() - if double: - tempi_ = np.hstack((tempi_, tempi * 2., tempi / 2.)) - if triple: - tempi_ = np.hstack((tempi_, tempi * 3., tempi / 3.)) - self.acc2 = tempo_evaluation(detections[:1], tempi_, tolerance)[1] - # save the name - self.name = name - - def __len__(self): - return 1 - - def tostring(self, **kwargs): - """ - Format the evaluation metrics as a human readable string. - - Returns - ------- - str - Evaluation metrics formatted as a human readable string. - - """ - # pylint: disable=unused-argument - - ret = '' - if self.name is not None: - ret += '%s\n ' % self.name - ret += 'pscore=%.3f (one tempo: %.3f, all tempi: %.3f) ' \ - 'acc1=%.3f acc2=%.3f' % \ - (self.pscore, self.any, self.all, self.acc1, self.acc2) - return ret - - def __str__(self): - return self.tostring() - - -class TempoMeanEvaluation(MeanEvaluation): - """ - Class for averaging tempo evaluation scores. - - """ - METRIC_NAMES = TempoEvaluation.METRIC_NAMES - - @property - def pscore(self): - """P-Score.""" - return np.nanmean([e.pscore for e in self.eval_objects]) - - @property - def any(self): - """At least one tempo correct.""" - return np.nanmean([e.any for e in self.eval_objects]) - - @property - def all(self): - """All tempi correct.""" - return np.nanmean([e.all for e in self.eval_objects]) - - @property - def acc1(self): - """Accuracy 1.""" - return np.nanmean([e.acc1 for e in self.eval_objects]) - - @property - def acc2(self): - """Accuracy 2.""" - return np.nanmean([e.acc2 for e in self.eval_objects]) - - def tostring(self, **kwargs): - """ - Format the evaluation metrics as a human readable string. 
- - Returns - ------- - str - Evaluation metrics formatted as a human readable string. - - """ - ret = '' - if self.name is not None: - ret += '%s\n ' % self.name - ret += 'pscore=%.3f (one tempo: %.3f, all tempi: %.3f) ' \ - 'acc1=%.3f acc2=%.3f' % \ - (self.pscore, self.any, self.all, self.acc1, self.acc2) - return ret - - def __str__(self): - return self.tostring() - - -def add_parser(parser): - """ - Add a tempo evaluation sub-parser to an existing parser. - - Parameters - ---------- - parser : argparse parser instance - Existing argparse parser object. - - Returns - ------- - sub_parser : argparse sub-parser instance - Tempo evaluation sub-parser. - parser_group : argparse argument group - Tempo evaluation argument group. - - """ - import argparse - # add tempo evaluation sub-parser to the existing parser - p = parser.add_parser( - 'tempo', help='tempo evaluation', - formatter_class=argparse.RawDescriptionHelpFormatter, - description=''' - This program evaluates pairs of files containing the tempo annotations and - detections. Suffixes can be given to filter them from the list of files. - - A single line represents the tempi and their relative strength and must - have the following format with values being separated by whitespace: - `tempo_one tempo_two relative_strength` - - Lines starting with # are treated as comments and are ignored. - - For P-Score evaluation as many tempi detections are used as tempo - annotations are given. - - For Accuracy 1 & 2 evaluation, only the strongest (if strengths are given) - or the first tempo is used. - - ''') - # set defaults - p.set_defaults(eval=TempoEvaluation, mean_eval=TempoMeanEvaluation, - sum_eval=None, load_fn=load_tempo) - # file I/O - evaluation_io(p, ann_suffix='.bpm', det_suffix='.bpm.txt') - # evaluation parameters - g = p.add_argument_group('tempo manipulation arguments') - g.add_argument('--tolerance', type=float, action='store', - default=TOLERANCE, - help='tolerance for tempo detection ' - '[default=%(default).3f]') - g.add_argument('--no_double', dest='double', action='store_false', - help='do not include double/half tempo evaluation') - g.add_argument('--no_triple', dest='triple', action='store_false', - help='do not include triple/third tempo evaluation') - # how many and which of the tempi should be evaluated? - g.add_argument('--no_sort', dest='sort', action='store_false', - help='do not sort the tempi by strength [default: sort ' - 'them by strength]') - # TODO: add option to evaluate any other than the default number of tempi? - # g.add_argument('--num', dest='max_len', action='store', type=int, - # help='evaluate NUM tempi [default: evaluate only the ' - # 'first (after sorting them)]') - # return the sub-parser and evaluation argument group - return p, g diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/datasets/preparers/config_generators/base.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/datasets/preparers/config_generators/base.py deleted file mode 100644 index ba3811a425f203d2e5a810dc6e57a0934fb13a93..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/datasets/preparers/config_generators/base.py +++ /dev/null @@ -1,131 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -from abc import abstractmethod -from typing import Dict, List, Optional - -from mmengine import mkdir_or_exist - - -class BaseDatasetConfigGenerator: - """Base class for dataset config generator. - - Args: - data_root (str): The root path of the dataset. 
- task (str): The task of the dataset. - dataset_name (str): The name of the dataset. - overwrite_cfg (bool): Whether to overwrite the dataset config file if - it already exists. If False, config generator will not generate new - config for datasets whose configs are already in base. - train_anns (List[Dict], optional): A list of train annotation files - to appear in the base configs. Defaults to None. - Each element is typically a dict with the following fields: - - ann_file (str): The path to the annotation file relative to - data_root. - - dataset_postfix (str, optional): Affects the postfix of the - resulting variable in the generated config. If specified, the - dataset variable will be named in the form of - ``{dataset_name}_{dataset_postfix}_{task}_{split}``. Defaults to - None. - val_anns (List[Dict], optional): A list of val annotation files - to appear in the base configs, similar to ``train_anns``. Defaults - to None. - test_anns (List[Dict], optional): A list of test annotation files - to appear in the base configs, similar to ``train_anns``. Defaults - to None. - config_path (str): Path to the configs. Defaults to 'configs/'. - """ - - def __init__( - self, - data_root: str, - task: str, - dataset_name: str, - overwrite_cfg: bool = False, - train_anns: Optional[List[Dict]] = None, - val_anns: Optional[List[Dict]] = None, - test_anns: Optional[List[Dict]] = None, - config_path: str = 'configs/', - ) -> None: - self.config_path = config_path - self.data_root = data_root - self.task = task - self.dataset_name = dataset_name - self.overwrite_cfg = overwrite_cfg - self._prepare_anns(train_anns, val_anns, test_anns) - - def _prepare_anns(self, train_anns: Optional[List[Dict]], - val_anns: Optional[List[Dict]], - test_anns: Optional[List[Dict]]) -> None: - """Preprocess input arguments and stores these information into - ``self.anns``. - - ``self.anns`` is a dict that maps the name of a dataset config variable - to a dict, which contains the following fields: - - ann_file (str): The path to the annotation file relative to - data_root. - - split (str): The split the annotation belongs to. Usually - it can be 'train', 'val' and 'test'. - - dataset_postfix (str, optional): Affects the postfix of the - resulting variable in the generated config. If specified, the - dataset variable will be named in the form of - ``{dataset_name}_{dataset_postfix}_{task}_{split}``. Defaults to - None. - """ - self.anns = {} - for split, ann_list in zip(('train', 'val', 'test'), - (train_anns, val_anns, test_anns)): - if ann_list is None: - continue - if not isinstance(ann_list, list): - raise ValueError(f'{split}_anns must be either a list or' - ' None!') - for ann_dict in ann_list: - assert 'ann_file' in ann_dict - suffix = ann_dict['ann_file'].split('.')[-1] - if suffix == 'json': - dataset_type = 'OCRDataset' - elif suffix == 'lmdb': - assert self.task == 'textrecog', \ - 'LMDB format only works for textrecog now.' - dataset_type = 'RecogLMDBDataset' - else: - raise NotImplementedError( - 'ann file only supports JSON file or LMDB file') - ann_dict['dataset_type'] = dataset_type - if ann_dict.get('dataset_postfix', ''): - key = f'{self.dataset_name}_{ann_dict["dataset_postfix"]}_{self.task}_{split}' # noqa - else: - key = f'{self.dataset_name}_{self.task}_{split}' - ann_dict['split'] = split - if key in self.anns: - raise ValueError( - f'Duplicate dataset variable {key} found! 
' - 'Please use different dataset_postfix to avoid ' - 'conflict.') - self.anns[key] = ann_dict - - def __call__(self) -> None: - """Generates the base dataset config.""" - - dataset_config = self._gen_dataset_config() - - cfg_path = osp.join(self.config_path, self.task, '_base_', 'datasets', - f'{self.dataset_name}.py') - if osp.exists(cfg_path) and not self.overwrite_cfg: - print(f'{cfg_path} found, skipping.') - return - mkdir_or_exist(osp.dirname(cfg_path)) - with open(cfg_path, 'w') as f: - f.write( - f'{self.dataset_name}_{self.task}_data_root = \'{self.data_root}\'\n' # noqa: E501 - ) - f.write(dataset_config) - - @abstractmethod - def _gen_dataset_config(self) -> str: - """Generate a full dataset config based on the annotation file - dictionary. - - Returns: - str: The generated dataset config. - """ diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/decoders/position_attention_decoder.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/decoders/position_attention_decoder.py deleted file mode 100644 index 7543c2b199814143fab916d811cc419c1163274a..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/decoders/position_attention_decoder.py +++ /dev/null @@ -1,220 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math -from typing import Dict, Optional, Sequence, Union - -import torch -import torch.nn as nn - -from mmocr.models.common.dictionary import Dictionary -from mmocr.models.textrecog.layers import (DotProductAttentionLayer, - PositionAwareLayer) -from mmocr.registry import MODELS -from mmocr.structures import TextRecogDataSample -from .base import BaseDecoder - - -@MODELS.register_module() -class PositionAttentionDecoder(BaseDecoder): - """Position attention decoder for RobustScanner. - - RobustScanner: `RobustScanner: Dynamically Enhancing Positional Clues for - Robust Text Recognition `_ - - Args: - dictionary (dict or :obj:`Dictionary`): The config for `Dictionary` or - the instance of `Dictionary`. - module_loss (dict, optional): Config to build module_loss. Defaults - to None. - postprocessor (dict, optional): Config to build postprocessor. - Defaults to None. - rnn_layers (int): Number of RNN layers. Defaults to 2. - dim_input (int): Dimension :math:`D_i` of input vector ``feat``. - Defaults to 512. - dim_model (int): Dimension :math:`D_m` of the model. Should also be the - same as encoder output vector ``out_enc``. Defaults to 128. - max_seq_len (int): Maximum output sequence length :math:`T`. Defaults - to 40. - mask (bool): Whether to mask input features according to - ``img_meta['valid_ratio']``. Defaults to True. - return_feature (bool): Return feature or logits as the result. Defaults - to True. - encode_value (bool): Whether to use the output of encoder ``out_enc`` - as `value` of attention layer. If False, the original feature - ``feat`` will be used. Defaults to False. - init_cfg (dict or list[dict], optional): Initialization configs. - Defaults to None. 
- """ - - def __init__(self, - dictionary: Union[Dictionary, Dict], - module_loss: Optional[Dict] = None, - postprocessor: Optional[Dict] = None, - rnn_layers: int = 2, - dim_input: int = 512, - dim_model: int = 128, - max_seq_len: int = 40, - mask: bool = True, - return_feature: bool = True, - encode_value: bool = False, - init_cfg: Optional[Union[Dict, - Sequence[Dict]]] = None) -> None: - super().__init__( - dictionary=dictionary, - module_loss=module_loss, - postprocessor=postprocessor, - max_seq_len=max_seq_len, - init_cfg=init_cfg) - - self.dim_input = dim_input - self.dim_model = dim_model - self.return_feature = return_feature - self.encode_value = encode_value - self.mask = mask - - self.embedding = nn.Embedding(self.max_seq_len + 1, self.dim_model) - - self.position_aware_module = PositionAwareLayer( - self.dim_model, rnn_layers) - - self.attention_layer = DotProductAttentionLayer() - - self.prediction = None - if not self.return_feature: - self.prediction = nn.Linear( - dim_model if encode_value else dim_input, - self.dictionary.num_classes) - self.softmax = nn.Softmax(dim=-1) - - def _get_position_index(self, - length: int, - batch_size: int, - device: Optional[torch.device] = None - ) -> torch.Tensor: - """Get position index for position attention. - - Args: - length (int): Length of the sequence. - batch_size (int): Batch size. - device (torch.device, optional): Device. Defaults to None. - - Returns: - torch.Tensor: Position index. - """ - position_index = torch.arange(0, length, device=device) - position_index = position_index.repeat([batch_size, 1]) - position_index = position_index.long() - return position_index - - def forward_train(self, feat: torch.Tensor, out_enc: torch.Tensor, - data_samples: Sequence[TextRecogDataSample] - ) -> torch.Tensor: - """ - Args: - feat (Tensor): Tensor of shape :math:`(N, D_i, H, W)`. - out_enc (Tensor): Encoder output of shape - :math:`(N, D_m, H, W)`. - data_samples (list[TextRecogDataSample], optional): Batch of - TextRecogDataSample, containing gt_text information. Defaults - to None. - - Returns: - Tensor: A raw logit tensor of shape :math:`(N, T, C)` if - ``return_feature=False``. Otherwise it will be the hidden feature - before the prediction projection layer, whose shape is - :math:`(N, T, D_m)`. 
- """ - valid_ratios = [ - data_sample.get('valid_ratio', 1.0) for data_sample in data_samples - ] if self.mask else None - - # - n, c_enc, h, w = out_enc.size() - assert c_enc == self.dim_model - _, c_feat, _, _ = feat.size() - assert c_feat == self.dim_input - position_index = self._get_position_index(self.max_seq_len, n, - feat.device) - - position_out_enc = self.position_aware_module(out_enc) - - query = self.embedding(position_index) - query = query.permute(0, 2, 1).contiguous() - key = position_out_enc.view(n, c_enc, h * w) - if self.encode_value: - value = out_enc.view(n, c_enc, h * w) - else: - value = feat.view(n, c_feat, h * w) - - mask = None - if valid_ratios is not None: - mask = query.new_zeros((n, h, w)) - for i, valid_ratio in enumerate(valid_ratios): - valid_width = min(w, math.ceil(w * valid_ratio)) - mask[i, :, valid_width:] = 1 - mask = mask.bool() - mask = mask.view(n, h * w) - - attn_out = self.attention_layer(query, key, value, mask) - attn_out = attn_out.permute(0, 2, - 1).contiguous() # [n, max_seq_len, dim_v] - - if self.return_feature: - return attn_out - - return self.prediction(attn_out) - - def forward_test(self, feat: torch.Tensor, out_enc: torch.Tensor, - img_metas: Sequence[TextRecogDataSample]) -> torch.Tensor: - """ - Args: - feat (Tensor): Tensor of shape :math:`(N, D_i, H, W)`. - out_enc (Tensor): Encoder output of shape - :math:`(N, D_m, H, W)`. - data_samples (list[TextRecogDataSample], optional): Batch of - TextRecogDataSample, containing gt_text information. Defaults - to None. - - Returns: - Tensor: Character probabilities of shape :math:`(N, T, C)` if - ``return_feature=False``. Otherwise it would be the hidden feature - before the prediction projection layer, whose shape is - :math:`(N, T, D_m)`. - """ - valid_ratios = [ - img_meta.get('valid_ratio', 1.0) for img_meta in img_metas - ] if self.mask else None - - seq_len = self.max_seq_len - n, c_enc, h, w = out_enc.size() - assert c_enc == self.dim_model - _, c_feat, _, _ = feat.size() - assert c_feat == self.dim_input - - position_index = self._get_position_index(seq_len, n, feat.device) - - position_out_enc = self.position_aware_module(out_enc) - - query = self.embedding(position_index) - query = query.permute(0, 2, 1).contiguous() - key = position_out_enc.view(n, c_enc, h * w) - if self.encode_value: - value = out_enc.view(n, c_enc, h * w) - else: - value = feat.view(n, c_feat, h * w) - - mask = None - if valid_ratios is not None: - mask = query.new_zeros((n, h, w)) - for i, valid_ratio in enumerate(valid_ratios): - valid_width = min(w, math.ceil(w * valid_ratio)) - mask[i, :, valid_width:] = 1 - mask = mask.bool() - mask = mask.view(n, h * w) - - attn_out = self.attention_layer(query, key, value, mask) - attn_out = attn_out.permute(0, 2, 1).contiguous() - - if self.return_feature: - return attn_out - - return self.softmax(self.prediction(attn_out)) diff --git a/spaces/MrBodean/VoiceClone/vocoder/gen_wavernn.py b/spaces/MrBodean/VoiceClone/vocoder/gen_wavernn.py deleted file mode 100644 index 2036737f805f6055893812e48f99d524624aab07..0000000000000000000000000000000000000000 --- a/spaces/MrBodean/VoiceClone/vocoder/gen_wavernn.py +++ /dev/null @@ -1,31 +0,0 @@ -from vocoder.models.fatchord_version import WaveRNN -from vocoder.audio import * - - -def gen_testset(model: WaveRNN, test_set, samples, batched, target, overlap, save_path): - k = model.get_step() // 1000 - - for i, (m, x) in enumerate(test_set, 1): - if i > samples: - break - - print('\n| Generating: %i/%i' % (i, samples)) - - x = 
x[0].numpy() - - bits = 16 if hp.voc_mode == 'MOL' else hp.bits - - if hp.mu_law and hp.voc_mode != 'MOL' : - x = decode_mu_law(x, 2**bits, from_labels=True) - else : - x = label_2_float(x, bits) - - save_wav(x, save_path.joinpath("%dk_steps_%d_target.wav" % (k, i))) - - batch_str = "gen_batched_target%d_overlap%d" % (target, overlap) if batched else \ - "gen_not_batched" - save_str = save_path.joinpath("%dk_steps_%d_%s.wav" % (k, i, batch_str)) - - wav = model.generate(m, batched, target, overlap, hp.mu_law) - save_wav(wav, save_str) - diff --git a/spaces/NAACL2022/CLIP-Caption-Reward/scripts/prepro_labels.py b/spaces/NAACL2022/CLIP-Caption-Reward/scripts/prepro_labels.py deleted file mode 100644 index 57fd82fb5144e51fd7dfe3e159080dbf29a63567..0000000000000000000000000000000000000000 --- a/spaces/NAACL2022/CLIP-Caption-Reward/scripts/prepro_labels.py +++ /dev/null @@ -1,206 +0,0 @@ -""" -Preprocess a raw json dataset into hdf5/json files for use in data_loader.py - -Input: json file that has the form -[{ file_path: 'path/img.jpg', captions: ['a caption', ...] }, ...] -example element in this list would look like -{'captions': [u'A man with a red helmet on a small moped on a dirt road. ', u'Man riding a motor bike on a dirt road on the countryside.', u'A man riding on the back of a motorcycle.', u'A dirt path with a young person on a motor bike rests to the foreground of a verdant area with a bridge and a background of cloud-wreathed mountains. ', u'A man in a red shirt and a red hat is on a motorcycle on a hill side.'], 'file_path': u'val2014/COCO_val2014_000000391895.jpg', 'id': 391895} - -This script reads this json, does some basic preprocessing on the captions -(e.g. lowercase, etc.), creates a special UNK token, and encodes everything to arrays - -Output: a json file and an hdf5 file -The hdf5 file contains several fields: -/labels is (M,max_length) uint32 array of encoded labels, zero padded -/label_start_ix and /label_end_ix are (N,) uint32 arrays of pointers to the - first and last indices (in range 1..M) of labels for each image -/label_length stores the length of the sequence for each of the M sequences - -The json file has a dict that contains: -- an 'ix_to_word' field storing the vocab in form {ix:'word'}, where ix is 1-indexed -- an 'images' field that is a list holding auxiliary information for each image, - such as in particular the 'split' it was assigned to. 
-""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import os -import json -import argparse -from random import shuffle, seed -import string -# non-standard dependencies: -import h5py -import numpy as np -import torch -import torchvision.models as models -import skimage.io -from PIL import Image - - -def build_vocab(imgs, params): - count_thr = params['word_count_threshold'] - - # count up the number of words - counts = {} - for img in imgs: - for sent in img['sentences']: - for w in sent['tokens']: - counts[w] = counts.get(w, 0) + 1 - cw = sorted([(count,w) for w,count in counts.items()], reverse=True) - print('top words and their counts:') - print('\n'.join(map(str,cw[:20]))) - - # print some stats - total_words = sum(counts.values()) - print('total words:', total_words) - bad_words = [w for w,n in counts.items() if n <= count_thr] - vocab = [w for w,n in counts.items() if n > count_thr] - bad_count = sum(counts[w] for w in bad_words) - print('number of bad words: %d/%d = %.2f%%' % (len(bad_words), len(counts), len(bad_words)*100.0/len(counts))) - print('number of words in vocab would be %d' % (len(vocab), )) - print('number of UNKs: %d/%d = %.2f%%' % (bad_count, total_words, bad_count*100.0/total_words)) - - # lets look at the distribution of lengths as well - sent_lengths = {} - for img in imgs: - for sent in img['sentences']: - txt = sent['tokens'] - nw = len(txt) - sent_lengths[nw] = sent_lengths.get(nw, 0) + 1 - max_len = max(sent_lengths.keys()) - print('max length sentence in raw data: ', max_len) - print('sentence length distribution (count, number of words):') - sum_len = sum(sent_lengths.values()) - for i in range(max_len+1): - print('%2d: %10d %f%%' % (i, sent_lengths.get(i,0), sent_lengths.get(i,0)*100.0/sum_len)) - - # lets now produce the final annotations - if bad_count > 0: - # additional special UNK token we will use below to map infrequent words to - print('inserting the special UNK token') - vocab.append('UNK') - - for img in imgs: - img['final_captions'] = [] - for sent in img['sentences']: - txt = sent['tokens'] - caption = [w if counts.get(w,0) > count_thr else 'UNK' for w in txt] - img['final_captions'].append(caption) - - return vocab - - -def encode_captions(imgs, params, wtoi): - """ - encode all captions into one large array, which will be 1-indexed. - also produces label_start_ix and label_end_ix which store 1-indexed - and inclusive (Lua-style) pointers to the first and last caption for - each image in the dataset. 
- """ - - max_length = params['max_length'] - N = len(imgs) - M = sum(len(img['final_captions']) for img in imgs) # total number of captions - - label_arrays = [] - label_start_ix = np.zeros(N, dtype='uint32') # note: these will be one-indexed - label_end_ix = np.zeros(N, dtype='uint32') - label_length = np.zeros(M, dtype='uint32') - caption_counter = 0 - counter = 1 - for i,img in enumerate(imgs): - n = len(img['final_captions']) - assert n > 0, 'error: some image has no captions' - - Li = np.zeros((n, max_length), dtype='uint32') - for j,s in enumerate(img['final_captions']): - label_length[caption_counter] = min(max_length, len(s)) # record the length of this sequence - caption_counter += 1 - for k,w in enumerate(s): - if k < max_length: - Li[j,k] = wtoi[w] - - # note: word indices are 1-indexed, and captions are padded with zeros - label_arrays.append(Li) - label_start_ix[i] = counter - label_end_ix[i] = counter + n - 1 - - counter += n - - L = np.concatenate(label_arrays, axis=0) # put all the labels together - assert L.shape[0] == M, 'lengths don\'t match? that\'s weird' - assert np.all(label_length > 0), 'error: some caption had no words?' - - print('encoded captions to array of size ', L.shape) - return L, label_start_ix, label_end_ix, label_length - - -def main(params): - - imgs = json.load(open(params['input_json'], 'r')) - imgs = imgs['images'] - - seed(123) # make reproducible - - # create the vocab - vocab = build_vocab(imgs, params) - itow = {i+1:w for i,w in enumerate(vocab)} # a 1-indexed vocab translation table - wtoi = {w:i+1 for i,w in enumerate(vocab)} # inverse table - - # encode captions in large arrays, ready to ship to hdf5 file - L, label_start_ix, label_end_ix, label_length = encode_captions(imgs, params, wtoi) - - # create output h5 file - N = len(imgs) - f_lb = h5py.File(params['output_h5']+'_label.h5', "w") - f_lb.create_dataset("labels", dtype='uint32', data=L) - f_lb.create_dataset("label_start_ix", dtype='uint32', data=label_start_ix) - f_lb.create_dataset("label_end_ix", dtype='uint32', data=label_end_ix) - f_lb.create_dataset("label_length", dtype='uint32', data=label_length) - f_lb.close() - - # create output json file - out = {} - out['ix_to_word'] = itow # encode the (1-indexed) vocab - out['images'] = [] - for i,img in enumerate(imgs): - - jimg = {} - jimg['split'] = img['split'] - if 'filename' in img: jimg['file_path'] = os.path.join(img.get('filepath', ''), img['filename']) # copy it over, might need - if 'cocoid' in img: - jimg['id'] = img['cocoid'] # copy over & mantain an id, if present (e.g. 
coco ids, useful) - elif 'imgid' in img: - jimg['id'] = img['imgid'] - - if params['images_root'] != '': - with Image.open(os.path.join(params['images_root'], img['filepath'], img['filename'])) as _img: - jimg['width'], jimg['height'] = _img.size - - out['images'].append(jimg) - - json.dump(out, open(params['output_json'], 'w')) - print('wrote ', params['output_json']) - -if __name__ == "__main__": - - parser = argparse.ArgumentParser() - - # input json - parser.add_argument('--input_json', required=True, help='input json file to process into hdf5') - parser.add_argument('--output_json', default='data.json', help='output json file') - parser.add_argument('--output_h5', default='data', help='output h5 file') - parser.add_argument('--images_root', default='', help='root location in which images are stored, to be prepended to file_path in input json') - - # options - parser.add_argument('--max_length', default=16, type=int, help='max length of a caption, in number of words. captions longer than this get clipped.') - parser.add_argument('--word_count_threshold', default=5, type=int, help='only words that occur more than this number of times will be put in vocab') - - args = parser.parse_args() - params = vars(args) # convert to ordinary dict - print('parsed input parameters:') - print(json.dumps(params, indent = 2)) - main(params) diff --git a/spaces/NCTCMumbai/NCTC/models/official/modeling/optimization/configs/optimization_config.py b/spaces/NCTCMumbai/NCTC/models/official/modeling/optimization/configs/optimization_config.py deleted file mode 100644 index 1cf3616c75bec20c2560747561530f332cd2466c..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/modeling/optimization/configs/optimization_config.py +++ /dev/null @@ -1,95 +0,0 @@ -# Lint as: python3 -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Dataclasses for optimization configs. - -This file define the dataclass for optimization configs (OptimizationConfig). -It also has two helper functions get_optimizer_config, and get_lr_config from -an OptimizationConfig class. -""" -from typing import Optional - -import dataclasses - -from official.modeling.hyperparams import base_config -from official.modeling.hyperparams import oneof -from official.modeling.optimization.configs import learning_rate_config as lr_cfg -from official.modeling.optimization.configs import optimizer_config as opt_cfg - - -@dataclasses.dataclass -class OptimizerConfig(oneof.OneOfConfig): - """Configuration for optimizer. - - Attributes: - type: 'str', type of optimizer to be used, on the of fields below. - sgd: sgd optimizer config. - adam: adam optimizer config. - adamw: adam with weight decay. - lamb: lamb optimizer. - rmsprop: rmsprop optimizer. 
- """ - type: Optional[str] = None - sgd: opt_cfg.SGDConfig = opt_cfg.SGDConfig() - adam: opt_cfg.AdamConfig = opt_cfg.AdamConfig() - adamw: opt_cfg.AdamWeightDecayConfig = opt_cfg.AdamWeightDecayConfig() - lamb: opt_cfg.LAMBConfig = opt_cfg.LAMBConfig() - rmsprop: opt_cfg.RMSPropConfig = opt_cfg.RMSPropConfig() - - -@dataclasses.dataclass -class LrConfig(oneof.OneOfConfig): - """Configuration for lr schedule. - - Attributes: - type: 'str', type of lr schedule to be used, on the of fields below. - stepwise: stepwise learning rate config. - exponential: exponential learning rate config. - polynomial: polynomial learning rate config. - cosine: cosine learning rate config. - """ - type: Optional[str] = None - stepwise: lr_cfg.StepwiseLrConfig = lr_cfg.StepwiseLrConfig() - exponential: lr_cfg.ExponentialLrConfig = lr_cfg.ExponentialLrConfig() - polynomial: lr_cfg.PolynomialLrConfig = lr_cfg.PolynomialLrConfig() - cosine: lr_cfg.CosineLrConfig = lr_cfg.CosineLrConfig() - - -@dataclasses.dataclass -class WarmupConfig(oneof.OneOfConfig): - """Configuration for lr schedule. - - Attributes: - type: 'str', type of warmup schedule to be used, on the of fields below. - linear: linear warmup config. - polynomial: polynomial warmup config. - """ - type: Optional[str] = None - linear: lr_cfg.LinearWarmupConfig = lr_cfg.LinearWarmupConfig() - polynomial: lr_cfg.PolynomialWarmupConfig = lr_cfg.PolynomialWarmupConfig() - - -@dataclasses.dataclass -class OptimizationConfig(base_config.Config): - """Configuration for optimizer and learning rate schedule. - - Attributes: - optimizer: optimizer oneof config. - learning_rate: learning rate oneof config. - warmup: warmup oneof config. - """ - optimizer: OptimizerConfig = OptimizerConfig() - learning_rate: LrConfig = LrConfig() - warmup: WarmupConfig = WarmupConfig() diff --git a/spaces/NCTCMumbai/NCTC/models/research/brain_coder/README.md b/spaces/NCTCMumbai/NCTC/models/research/brain_coder/README.md deleted file mode 100644 index 3e2a1656d8f145569266c19c64b41779ccbf308c..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/brain_coder/README.md +++ /dev/null @@ -1,34 +0,0 @@ -![No Maintenance Intended](https://img.shields.io/badge/No%20Maintenance%20Intended-%E2%9C%95-red.svg) -![TensorFlow Requirement: 1.x](https://img.shields.io/badge/TensorFlow%20Requirement-1.x-brightgreen) -![TensorFlow 2 Not Supported](https://img.shields.io/badge/TensorFlow%202%20Not%20Supported-%E2%9C%95-red.svg) - -# Brain Coder - -*Authors: Daniel Abolafia, Mohammad Norouzi, Quoc Le* - -Brain coder is a code synthesis experimental environment. We provide code that reproduces the results from our recent paper [Neural Program Synthesis with Priority Queue Training](https://arxiv.org/abs/1801.03526). See single_task/README.md for details on how to build and reproduce those experiments. - -## Installation - -First install dependencies seperately: - -* [bazel](https://docs.bazel.build/versions/master/install.html) -* [TensorFlow](https://www.tensorflow.org/install/) -* [scipy](https://www.scipy.org/install.html) -* [absl-py](https://github.com/abseil/abseil-py) - -Note: even if you already have these dependencies installed, make sure they are -up-to-date to avoid unnecessary debugging. - - -## Building - -Use bazel from the top-level repo directory. - -For example: - -```bash -bazel build single_task:run -``` - -View README.md files in subdirectories for more details. 
diff --git a/spaces/Nixic/rvc-models/app.py b/spaces/Nixic/rvc-models/app.py deleted file mode 100644 index da1eaddfefe51a110d508e01520a5fb685409a0d..0000000000000000000000000000000000000000 --- a/spaces/Nixic/rvc-models/app.py +++ /dev/null @@ -1,188 +0,0 @@ -import os -import json -import argparse -import traceback -import logging -import gradio as gr -import numpy as np -import librosa -import torch -import asyncio -import edge_tts -from datetime import datetime -from fairseq import checkpoint_utils -from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono -from vc_infer_pipeline import VC -from config import ( - is_half, - device -) -logging.getLogger("numba").setLevel(logging.WARNING) -limitation = False # os.getenv("SYSTEM") == "spaces" # limit audio length in huggingface spaces - -def create_vc_fn(tgt_sr, net_g, vc, if_f0, file_index, file_big_npy): - def vc_fn( - input_audio, - f0_up_key, - f0_method, - index_rate, - tts_mode, - tts_text, - tts_voice - ): - try: - if tts_mode: - if len(tts_text) > 100 and limitation: - return "Text is too long", None - if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3", sr=16000, mono=True) - else: - if args.files: - audio, sr = librosa.load(input_audio, sr=16000, mono=True) - else: - if input_audio is None: - return "You need to upload an audio", None - sampling_rate, audio = input_audio - duration = audio.shape[0] / sampling_rate - if duration > 20 and limitation: - return "Please upload an audio file that is less than 20 seconds. If you need to generate a longer audio file, please use Colab.", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - times = [0, 0, 0] - f0_up_key = int(f0_up_key) - audio_opt = vc.pipeline( - hubert_model, - net_g, - 0, - audio, - times, - f0_up_key, - f0_method, - file_index, - file_big_npy, - index_rate, - if_f0, - ) - print( - f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s" - ) - return "Success", (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, (None, None) - return vc_fn - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(device) - if is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -def change_to_tts_mode(tts_mode): - if tts_mode: - return gr.Audio.update(visible=False), gr.Textbox.update(visible=True), gr.Dropdown.update(visible=True) - else: - return gr.Audio.update(visible=True), gr.Textbox.update(visible=False), gr.Dropdown.update(visible=False) - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--api', action="store_true", default=False) - parser.add_argument("--share", action="store_true", default=False, help="share gradio app") - parser.add_argument("--files", action="store_true", default=False, help="load audio from path") - args, unknown = parser.parse_known_args() - load_hubert() - models = [] - tts_voice_list = 
asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) - voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list] - with open("weights/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for name, info in models_info.items(): - if not info['enable']: - continue - title = info['title'] - author = info.get("author", None) - #cover = f"weights/{name}/{info['cover']}" - index = f"weights/{name}/{info['feature_retrieval_library']}" - npy = f"weights/{name}/{info['feature_file']}" - cpt = torch.load(f"weights/{name}/{name}.pth", map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) # 不加这一行清不干净, 真奇葩 - net_g.eval().to(device) - if is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, device, is_half) - models.append((name, title, author, create_vc_fn(tgt_sr, net_g, vc, if_f0, index, npy))) - with gr.Blocks() as app: - gr.Markdown( - "#
    RVC Models\n" - "##
    The input audio should be clean and pure voice without background music.\n" - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=ardha27.Rvc-Models)\n\n" - "[![image](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/12rbZk9CoXD1m84dqBW5IKMBjiVY6tcoj?usp=share_link)\n\n" - "[![Duplicate this Space](https://huggingface.co/datasets/huggingface/badges/raw/main/duplicate-this-space-sm-dark.svg)](https://huggingface.co/spaces/ardha27pi/rvc-models?duplicate=true)\n\n" - "[![Train Own Voice](https://badgen.net/badge/icon/github?icon=github&label=Train%20Voice)](https://github.com/ardha27/AI-Song-Cover-RVC)\n\n" - "[![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/R6R7AH1FA)\n\n" - ) - with gr.Tabs(): - for (name, title, author, vc_fn) in models: - with gr.TabItem(name): - with gr.Row(): - gr.Markdown( - '
'
-                            f'{title}\n'+
-                            (f'Model author: {author}' if author else "")+
-                            #(f'' if cover else "")+
-                            '
    ' - ) - with gr.Row(): - with gr.Column(): - if args.files: - vc_input = gr.Textbox(label="Input audio path") - else: - vc_input = gr.Audio(label="Input audio"+' (less than 20 seconds)' if limitation else '') - vc_transpose = gr.Number(label="Transpose", value=0) - vc_f0method = gr.Radio( - label="Pitch extraction algorithm, PM is fast but Harvest is better for low frequencies", - choices=["pm", "harvest"], - value="pm", - interactive=True, - ) - vc_index_ratio = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - value=0.6, - interactive=True, - ) - tts_mode = gr.Checkbox(label="tts (use edge-tts as input)", value=False) - tts_text = gr.Textbox(visible=False,label="TTS text (100 words limitation)" if limitation else "TTS text") - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - vc_submit = gr.Button("Generate", variant="primary") - with gr.Column(): - vc_output1 = gr.Textbox(label="Output Message") - vc_output2 = gr.Audio(label="Output Audio") - vc_submit.click(vc_fn, [vc_input, vc_transpose, vc_f0method, vc_index_ratio, tts_mode, tts_text, tts_voice], [vc_output1, vc_output2]) - tts_mode.change(change_to_tts_mode, [tts_mode], [vc_input, tts_text, tts_voice]) - app.queue(concurrency_count=8, api_open=args.api).launch(share=args.share) \ No newline at end of file diff --git a/spaces/NoCrypt/pixelization/models/p2cGen.py b/spaces/NoCrypt/pixelization/models/p2cGen.py deleted file mode 100644 index 864e8e3c476d7fff4e903089f75f3956d6c81556..0000000000000000000000000000000000000000 --- a/spaces/NoCrypt/pixelization/models/p2cGen.py +++ /dev/null @@ -1,76 +0,0 @@ -from .basic_layer import * - - -class P2CGen(nn.Module): - def __init__(self, input_dim, output_dim, dim, n_downsample, n_res, activ='relu', pad_type='reflect'): - super(P2CGen, self).__init__() - self.RGBEnc = RGBEncoder(input_dim, dim, n_downsample, n_res, "in", activ, pad_type=pad_type) - self.RGBDec = RGBDecoder(self.RGBEnc.output_dim, output_dim, n_downsample, n_res, res_norm='in', - activ=activ, pad_type=pad_type) - - def forward(self, x): - x = self.RGBEnc(x) - # print("encoder->>", x.shape) - x = self.RGBDec(x) - # print(x_small.shape) - # print(x_middle.shape) - # print(x_big.shape) - #return y_small, y_middle, y_big - return x - - -class RGBEncoder(nn.Module): - def __init__(self, input_dim, dim, n_downsample, n_res, norm, activ, pad_type): - super(RGBEncoder, self).__init__() - self.model = [] - self.model += [ConvBlock(input_dim, dim, 7, 1, 3, norm=norm, activation=activ, pad_type=pad_type)] - # downsampling blocks - for i in range(n_downsample): - self.model += [ConvBlock(dim, 2 * dim, 4, 2, 1, norm=norm, activation=activ, pad_type=pad_type)] - dim *= 2 - # residual blocks - self.model += [ResBlocks(n_res, dim, norm=norm, activation=activ, pad_type=pad_type)] - self.model = nn.Sequential(*self.model) - self.output_dim = dim - - def forward(self, x): - return self.model(x) - - -class RGBDecoder(nn.Module): - def __init__(self, dim, output_dim, n_upsample, n_res, res_norm, activ='relu', pad_type='zero'): - super(RGBDecoder, self).__init__() - # self.model = [] - # # AdaIN residual blocks - # self.model += [ResBlocks(n_res, dim, res_norm, activ, pad_type=pad_type)] - # # upsampling blocks - # for i in range(n_upsample): - # self.model += [nn.Upsample(scale_factor=2, mode='nearest'), - # ConvBlock(dim, dim // 2, 5, 1, 2, norm='ln', activation=activ, pad_type=pad_type)] - # dim //= 2 - # # use reflection 
padding in the last conv layer - # self.model += [ConvBlock(dim, output_dim, 7, 1, 3, norm='none', activation='tanh', pad_type=pad_type)] - # self.model = nn.Sequential(*self.model) - self.Res_Blocks = ResBlocks(n_res, dim, res_norm, activ, pad_type=pad_type) - self.upsample_block1 = nn.Upsample(scale_factor=2, mode='nearest') - self.conv_1 = ConvBlock(dim, dim // 2, 5, 1, 2, norm='ln', activation=activ, pad_type=pad_type) - dim //= 2 - self.upsample_block2 = nn.Upsample(scale_factor=2, mode='nearest') - self.conv_2 = ConvBlock(dim, dim // 2, 5, 1, 2, norm='ln', activation=activ, pad_type=pad_type) - dim //= 2 - self.conv_3 = ConvBlock(dim, output_dim, 7, 1, 3, norm='none', activation='tanh', pad_type=pad_type) - - def forward(self, x): - x = self.Res_Blocks(x) - # print(x.shape) - x = self.upsample_block1(x) - # print(x.shape) - x = self.conv_1(x) - # print(x_small.shape) - x = self.upsample_block2(x) - # print(x.shape) - x = self.conv_2(x) - # print(x_middle.shape) - x = self.conv_3(x) - # print(x_big.shape) - return x diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/linformer/linformer_src/modules/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/linformer/linformer_src/modules/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/waveglow_denoiser.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/waveglow_denoiser.py deleted file mode 100644 index 6a6585e8b6901a059445ff54ca20ea87751bbb11..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/waveglow_denoiser.py +++ /dev/null @@ -1,40 +0,0 @@ -# import sys -# sys.path.append('tacotron2') -import torch -from .layers import STFT - - -class Denoiser(torch.nn.Module): - """ Removes model bias from audio produced with waveglow """ - - def __init__(self, waveglow, filter_length=1024, n_overlap=4, - win_length=1024, mode='zeros'): - super(Denoiser, self).__init__() - self.stft = STFT(filter_length=filter_length, - hop_length=int(filter_length/n_overlap), - win_length=win_length).cuda() - if mode == 'zeros': - mel_input = torch.zeros( - (1, 80, 88), - dtype=waveglow.upsample.weight.dtype, - device=waveglow.upsample.weight.device) - elif mode == 'normal': - mel_input = torch.randn( - (1, 80, 88), - dtype=waveglow.upsample.weight.dtype, - device=waveglow.upsample.weight.device) - else: - raise Exception("Mode {} if not supported".format(mode)) - - with torch.no_grad(): - bias_audio = waveglow.infer(mel_input, sigma=0.0).float() - bias_spec, _ = self.stft.transform(bias_audio) - - self.register_buffer('bias_spec', bias_spec[:, :, 0][:, :, None]) - - def forward(self, audio, strength=0.1): - audio_spec, audio_angles = self.stft.transform(audio.cuda().float()) - audio_spec_denoised = audio_spec - self.bias_spec * strength - audio_spec_denoised = torch.clamp(audio_spec_denoised, 0.0) - audio_denoised = self.stft.inverse(audio_spec_denoised, audio_angles) - return audio_denoised diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/roll_dataset.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/roll_dataset.py deleted file mode 100644 index a2915eeb3e8fb4dfb4b2bb33e0464ad0783d854c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/roll_dataset.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from . import BaseWrapperDataset - - -class RollDataset(BaseWrapperDataset): - def __init__(self, dataset, shifts): - super().__init__(dataset) - self.shifts = shifts - - def __getitem__(self, index): - item = self.dataset[index] - return torch.roll(item, self.shifts) diff --git a/spaces/OIUGLK/bingo/src/lib/hooks/use-copy-to-clipboard.tsx b/spaces/OIUGLK/bingo/src/lib/hooks/use-copy-to-clipboard.tsx deleted file mode 100644 index 62f7156dca246c46b213151af003a3a177977ccf..0000000000000000000000000000000000000000 --- a/spaces/OIUGLK/bingo/src/lib/hooks/use-copy-to-clipboard.tsx +++ /dev/null @@ -1,33 +0,0 @@ -'use client' - -import * as React from 'react' - -export interface useCopyToClipboardProps { - timeout?: number -} - -export function useCopyToClipboard({ - timeout = 2000 -}: useCopyToClipboardProps) { - const [isCopied, setIsCopied] = React.useState(false) - - const copyToClipboard = (value: string) => { - if (typeof window === 'undefined' || !navigator.clipboard?.writeText) { - return - } - - if (!value) { - return - } - - navigator.clipboard.writeText(value).then(() => { - setIsCopied(true) - - setTimeout(() => { - setIsCopied(false) - }, timeout) - }) - } - - return { isCopied, copyToClipboard } -} diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_R_50_FPN_200ep_LSJ.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_R_50_FPN_200ep_LSJ.py deleted file mode 100644 index 2a7c376da5f9269197c44079f3e0f3b09cdc63fa..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_R_50_FPN_200ep_LSJ.py +++ /dev/null @@ -1,14 +0,0 @@ -from .mask_rcnn_R_50_FPN_100ep_LSJ import ( - dataloader, - lr_multiplier, - model, - optimizer, - train, -) - -train.max_iter *= 2 # 100ep -> 200ep - -lr_multiplier.scheduler.milestones = [ - milestone * 2 for milestone in lr_multiplier.scheduler.milestones -] -lr_multiplier.scheduler.num_updates = train.max_iter diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/data/transforms/augmentation_impl.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/data/transforms/augmentation_impl.py deleted file mode 100644 index 652a34a9aef2d4004f46ad7814befe6d1c230bc4..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/data/transforms/augmentation_impl.py +++ /dev/null @@ -1,614 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. -""" -Implement many useful :class:`Augmentation`. 
-""" -import numpy as np -import sys -from typing import Tuple -import torch -from fvcore.transforms.transform import ( - BlendTransform, - CropTransform, - HFlipTransform, - NoOpTransform, - PadTransform, - Transform, - TransformList, - VFlipTransform, -) -from PIL import Image - -from .augmentation import Augmentation, _transform_to_aug -from .transform import ExtentTransform, ResizeTransform, RotationTransform - -__all__ = [ - "FixedSizeCrop", - "RandomApply", - "RandomBrightness", - "RandomContrast", - "RandomCrop", - "RandomExtent", - "RandomFlip", - "RandomSaturation", - "RandomLighting", - "RandomRotation", - "Resize", - "ResizeScale", - "ResizeShortestEdge", - "RandomCrop_CategoryAreaConstraint", -] - - -class RandomApply(Augmentation): - """ - Randomly apply an augmentation with a given probability. - """ - - def __init__(self, tfm_or_aug, prob=0.5): - """ - Args: - tfm_or_aug (Transform, Augmentation): the transform or augmentation - to be applied. It can either be a `Transform` or `Augmentation` - instance. - prob (float): probability between 0.0 and 1.0 that - the wrapper transformation is applied - """ - super().__init__() - self.aug = _transform_to_aug(tfm_or_aug) - assert 0.0 <= prob <= 1.0, f"Probablity must be between 0.0 and 1.0 (given: {prob})" - self.prob = prob - - def get_transform(self, *args): - do = self._rand_range() < self.prob - if do: - return self.aug.get_transform(*args) - else: - return NoOpTransform() - - def __call__(self, aug_input): - do = self._rand_range() < self.prob - if do: - return self.aug(aug_input) - else: - return NoOpTransform() - - -class RandomFlip(Augmentation): - """ - Flip the image horizontally or vertically with the given probability. - """ - - def __init__(self, prob=0.5, *, horizontal=True, vertical=False): - """ - Args: - prob (float): probability of flip. - horizontal (boolean): whether to apply horizontal flipping - vertical (boolean): whether to apply vertical flipping - """ - super().__init__() - - if horizontal and vertical: - raise ValueError("Cannot do both horiz and vert. Please use two Flip instead.") - if not horizontal and not vertical: - raise ValueError("At least one of horiz or vert has to be True!") - self._init(locals()) - - def get_transform(self, image): - h, w = image.shape[:2] - do = self._rand_range() < self.prob - if do: - if self.horizontal: - return HFlipTransform(w) - elif self.vertical: - return VFlipTransform(h) - else: - return NoOpTransform() - - -class Resize(Augmentation): - """Resize image to a fixed target size""" - - def __init__(self, shape, interp=Image.BILINEAR): - """ - Args: - shape: (h, w) tuple or a int - interp: PIL interpolation method - """ - if isinstance(shape, int): - shape = (shape, shape) - shape = tuple(shape) - self._init(locals()) - - def get_transform(self, image): - return ResizeTransform( - image.shape[0], image.shape[1], self.shape[0], self.shape[1], self.interp - ) - - -class ResizeShortestEdge(Augmentation): - """ - Resize the image while keeping the aspect ratio unchanged. - It attempts to scale the shorter edge to the given `short_edge_length`, - as long as the longer edge does not exceed `max_size`. - If `max_size` is reached, then downscale so that the longer edge does not exceed max_size. 
- """ - - @torch.jit.unused - def __init__( - self, short_edge_length, max_size=sys.maxsize, sample_style="range", interp=Image.BILINEAR - ): - """ - Args: - short_edge_length (list[int]): If ``sample_style=="range"``, - a [min, max] interval from which to sample the shortest edge length. - If ``sample_style=="choice"``, a list of shortest edge lengths to sample from. - max_size (int): maximum allowed longest edge length. - sample_style (str): either "range" or "choice". - """ - super().__init__() - assert sample_style in ["range", "choice"], sample_style - - self.is_range = sample_style == "range" - if isinstance(short_edge_length, int): - short_edge_length = (short_edge_length, short_edge_length) - if self.is_range: - assert len(short_edge_length) == 2, ( - "short_edge_length must be two values using 'range' sample style." - f" Got {short_edge_length}!" - ) - self._init(locals()) - - @torch.jit.unused - def get_transform(self, image): - h, w = image.shape[:2] - if self.is_range: - size = np.random.randint(self.short_edge_length[0], self.short_edge_length[1] + 1) - else: - size = np.random.choice(self.short_edge_length) - if size == 0: - return NoOpTransform() - - newh, neww = ResizeShortestEdge.get_output_shape(h, w, size, self.max_size) - return ResizeTransform(h, w, newh, neww, self.interp) - - @staticmethod - def get_output_shape( - oldh: int, oldw: int, short_edge_length: int, max_size: int - ) -> Tuple[int, int]: - """ - Compute the output size given input size and target short edge length. - """ - h, w = oldh, oldw - size = short_edge_length * 1.0 - scale = size / min(h, w) - if h < w: - newh, neww = size, scale * w - else: - newh, neww = scale * h, size - if max(newh, neww) > max_size: - scale = max_size * 1.0 / max(newh, neww) - newh = newh * scale - neww = neww * scale - neww = int(neww + 0.5) - newh = int(newh + 0.5) - return (newh, neww) - - -class ResizeScale(Augmentation): - """ - Takes target size as input and randomly scales the given target size between `min_scale` - and `max_scale`. It then scales the input image such that it fits inside the scaled target - box, keeping the aspect ratio constant. - This implements the resize part of the Google's 'resize_and_crop' data augmentation: - https://github.com/tensorflow/tpu/blob/master/models/official/detection/utils/input_utils.py#L127 - """ - - def __init__( - self, - min_scale: float, - max_scale: float, - target_height: int, - target_width: int, - interp: int = Image.BILINEAR, - ): - """ - Args: - min_scale: minimum image scale range. - max_scale: maximum image scale range. - target_height: target image height. - target_width: target image width. - interp: image interpolation method. - """ - super().__init__() - self._init(locals()) - - def _get_resize(self, image: np.ndarray, scale: float) -> Transform: - input_size = image.shape[:2] - - # Compute new target size given a scale. - target_size = (self.target_height, self.target_width) - target_scale_size = np.multiply(target_size, scale) - - # Compute actual rescaling applied to input image and output size. 
- output_scale = np.minimum( - target_scale_size[0] / input_size[0], target_scale_size[1] / input_size[1] - ) - output_size = np.round(np.multiply(input_size, output_scale)).astype(int) - - return ResizeTransform( - input_size[0], input_size[1], output_size[0], output_size[1], self.interp - ) - - def get_transform(self, image: np.ndarray) -> Transform: - random_scale = np.random.uniform(self.min_scale, self.max_scale) - return self._get_resize(image, random_scale) - - -class RandomRotation(Augmentation): - """ - This method returns a copy of this image, rotated the given - number of degrees counter clockwise around the given center. - """ - - def __init__(self, angle, expand=True, center=None, sample_style="range", interp=None): - """ - Args: - angle (list[float]): If ``sample_style=="range"``, - a [min, max] interval from which to sample the angle (in degrees). - If ``sample_style=="choice"``, a list of angles to sample from - expand (bool): choose if the image should be resized to fit the whole - rotated image (default), or simply cropped - center (list[[float, float]]): If ``sample_style=="range"``, - a [[minx, miny], [maxx, maxy]] relative interval from which to sample the center, - [0, 0] being the top left of the image and [1, 1] the bottom right. - If ``sample_style=="choice"``, a list of centers to sample from - Default: None, which means that the center of rotation is the center of the image - center has no effect if expand=True because it only affects shifting - """ - super().__init__() - assert sample_style in ["range", "choice"], sample_style - self.is_range = sample_style == "range" - if isinstance(angle, (float, int)): - angle = (angle, angle) - if center is not None and isinstance(center[0], (float, int)): - center = (center, center) - self._init(locals()) - - def get_transform(self, image): - h, w = image.shape[:2] - center = None - if self.is_range: - angle = np.random.uniform(self.angle[0], self.angle[1]) - if self.center is not None: - center = ( - np.random.uniform(self.center[0][0], self.center[1][0]), - np.random.uniform(self.center[0][1], self.center[1][1]), - ) - else: - angle = np.random.choice(self.angle) - if self.center is not None: - center = np.random.choice(self.center) - - if center is not None: - center = (w * center[0], h * center[1]) # Convert to absolute coordinates - - if angle % 360 == 0: - return NoOpTransform() - - return RotationTransform(h, w, angle, expand=self.expand, center=center, interp=self.interp) - - -class FixedSizeCrop(Augmentation): - """ - If `crop_size` is smaller than the input image size, then it uses a random crop of - the crop size. If `crop_size` is larger than the input image size, then it pads - the right and the bottom of the image to the crop size if `pad` is True, otherwise - it returns the smaller image. - """ - - def __init__(self, crop_size: Tuple[int], pad: bool = True, pad_value: float = 128.0): - """ - Args: - crop_size: target image (height, width). - pad: if True, will pad images smaller than `crop_size` up to `crop_size` - pad_value: the padding value. - """ - super().__init__() - self._init(locals()) - - def _get_crop(self, image: np.ndarray) -> Transform: - # Compute the image scale and scaled size. - input_size = image.shape[:2] - output_size = self.crop_size - - # Add random crop if the image is scaled up. 
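        # The offset is drawn uniformly from [0, input_size - crop_size] on each
        # axis; when the input is already smaller than the crop size the
        # difference clamps to 0 and the padding step below covers the rest.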
- max_offset = np.subtract(input_size, output_size) - max_offset = np.maximum(max_offset, 0) - offset = np.multiply(max_offset, np.random.uniform(0.0, 1.0)) - offset = np.round(offset).astype(int) - return CropTransform( - offset[1], offset[0], output_size[1], output_size[0], input_size[1], input_size[0] - ) - - def _get_pad(self, image: np.ndarray) -> Transform: - # Compute the image scale and scaled size. - input_size = image.shape[:2] - output_size = self.crop_size - - # Add padding if the image is scaled down. - pad_size = np.subtract(output_size, input_size) - pad_size = np.maximum(pad_size, 0) - original_size = np.minimum(input_size, output_size) - return PadTransform( - 0, 0, pad_size[1], pad_size[0], original_size[1], original_size[0], self.pad_value - ) - - def get_transform(self, image: np.ndarray) -> TransformList: - transforms = [self._get_crop(image)] - if self.pad: - transforms.append(self._get_pad(image)) - return TransformList(transforms) - - -class RandomCrop(Augmentation): - """ - Randomly crop a rectangle region out of an image. - """ - - def __init__(self, crop_type: str, crop_size): - """ - Args: - crop_type (str): one of "relative_range", "relative", "absolute", "absolute_range". - crop_size (tuple[float, float]): two floats, explained below. - - - "relative": crop a (H * crop_size[0], W * crop_size[1]) region from an input image of - size (H, W). crop size should be in (0, 1] - - "relative_range": uniformly sample two values from [crop_size[0], 1] - and [crop_size[1]], 1], and use them as in "relative" crop type. - - "absolute" crop a (crop_size[0], crop_size[1]) region from input image. - crop_size must be smaller than the input image size. - - "absolute_range", for an input of size (H, W), uniformly sample H_crop in - [crop_size[0], min(H, crop_size[1])] and W_crop in [crop_size[0], min(W, crop_size[1])]. - Then crop a region (H_crop, W_crop). 
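    For an illustrative input of size (H, W) = (480, 640), the four modes behave as:

        RandomCrop("relative", (0.5, 0.5))        # always a 240 x 320 crop
        RandomCrop("relative_range", (0.5, 0.5))  # height sampled from 240 up to 480, width from 320 up to 640
        RandomCrop("absolute", (300, 300))        # always a 300 x 300 crop
        RandomCrop("absolute_range", (256, 512))  # height in [256, 480] (capped at H), width in [256, 512]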
- """ - # TODO style of relative_range and absolute_range are not consistent: - # one takes (h, w) but another takes (min, max) - super().__init__() - assert crop_type in ["relative_range", "relative", "absolute", "absolute_range"] - self._init(locals()) - - def get_transform(self, image): - h, w = image.shape[:2] - croph, cropw = self.get_crop_size((h, w)) - assert h >= croph and w >= cropw, "Shape computation in {} has bugs.".format(self) - h0 = np.random.randint(h - croph + 1) - w0 = np.random.randint(w - cropw + 1) - return CropTransform(w0, h0, cropw, croph) - - def get_crop_size(self, image_size): - """ - Args: - image_size (tuple): height, width - - Returns: - crop_size (tuple): height, width in absolute pixels - """ - h, w = image_size - if self.crop_type == "relative": - ch, cw = self.crop_size - return int(h * ch + 0.5), int(w * cw + 0.5) - elif self.crop_type == "relative_range": - crop_size = np.asarray(self.crop_size, dtype=np.float32) - ch, cw = crop_size + np.random.rand(2) * (1 - crop_size) - return int(h * ch + 0.5), int(w * cw + 0.5) - elif self.crop_type == "absolute": - return (min(self.crop_size[0], h), min(self.crop_size[1], w)) - elif self.crop_type == "absolute_range": - assert self.crop_size[0] <= self.crop_size[1] - ch = np.random.randint(min(h, self.crop_size[0]), min(h, self.crop_size[1]) + 1) - cw = np.random.randint(min(w, self.crop_size[0]), min(w, self.crop_size[1]) + 1) - return ch, cw - else: - raise NotImplementedError("Unknown crop type {}".format(self.crop_type)) - - -class RandomCrop_CategoryAreaConstraint(Augmentation): - """ - Similar to :class:`RandomCrop`, but find a cropping window such that no single category - occupies a ratio of more than `single_category_max_area` in semantic segmentation ground - truth, which can cause unstability in training. The function attempts to find such a valid - cropping window for at most 10 times. - """ - - def __init__( - self, - crop_type: str, - crop_size, - single_category_max_area: float = 1.0, - ignored_category: int = None, - ): - """ - Args: - crop_type, crop_size: same as in :class:`RandomCrop` - single_category_max_area: the maximum allowed area ratio of a - category. Set to 1.0 to disable - ignored_category: allow this category in the semantic segmentation - ground truth to exceed the area ratio. Usually set to the category - that's ignored in training. - """ - self.crop_aug = RandomCrop(crop_type, crop_size) - self._init(locals()) - - def get_transform(self, image, sem_seg): - if self.single_category_max_area >= 1.0: - return self.crop_aug.get_transform(image) - else: - h, w = sem_seg.shape - for _ in range(10): - crop_size = self.crop_aug.get_crop_size((h, w)) - y0 = np.random.randint(h - crop_size[0] + 1) - x0 = np.random.randint(w - crop_size[1] + 1) - sem_seg_temp = sem_seg[y0 : y0 + crop_size[0], x0 : x0 + crop_size[1]] - labels, cnt = np.unique(sem_seg_temp, return_counts=True) - if self.ignored_category is not None: - cnt = cnt[labels != self.ignored_category] - if len(cnt) > 1 and np.max(cnt) < np.sum(cnt) * self.single_category_max_area: - break - crop_tfm = CropTransform(x0, y0, crop_size[1], crop_size[0]) - return crop_tfm - - -class RandomExtent(Augmentation): - """ - Outputs an image by cropping a random "subrect" of the source image. - - The subrect can be parameterized to include pixels outside the source image, - in which case they will be set to zeros (i.e. black). The size of the output - image will vary with the size of the random subrect. 
- """ - - def __init__(self, scale_range, shift_range): - """ - Args: - output_size (h, w): Dimensions of output image - scale_range (l, h): Range of input-to-output size scaling factor - shift_range (x, y): Range of shifts of the cropped subrect. The rect - is shifted by [w / 2 * Uniform(-x, x), h / 2 * Uniform(-y, y)], - where (w, h) is the (width, height) of the input image. Set each - component to zero to crop at the image's center. - """ - super().__init__() - self._init(locals()) - - def get_transform(self, image): - img_h, img_w = image.shape[:2] - - # Initialize src_rect to fit the input image. - src_rect = np.array([-0.5 * img_w, -0.5 * img_h, 0.5 * img_w, 0.5 * img_h]) - - # Apply a random scaling to the src_rect. - src_rect *= np.random.uniform(self.scale_range[0], self.scale_range[1]) - - # Apply a random shift to the coordinates origin. - src_rect[0::2] += self.shift_range[0] * img_w * (np.random.rand() - 0.5) - src_rect[1::2] += self.shift_range[1] * img_h * (np.random.rand() - 0.5) - - # Map src_rect coordinates into image coordinates (center at corner). - src_rect[0::2] += 0.5 * img_w - src_rect[1::2] += 0.5 * img_h - - return ExtentTransform( - src_rect=(src_rect[0], src_rect[1], src_rect[2], src_rect[3]), - output_size=(int(src_rect[3] - src_rect[1]), int(src_rect[2] - src_rect[0])), - ) - - -class RandomContrast(Augmentation): - """ - Randomly transforms image contrast. - - Contrast intensity is uniformly sampled in (intensity_min, intensity_max). - - intensity < 1 will reduce contrast - - intensity = 1 will preserve the input image - - intensity > 1 will increase contrast - - See: https://pillow.readthedocs.io/en/3.0.x/reference/ImageEnhance.html - """ - - def __init__(self, intensity_min, intensity_max): - """ - Args: - intensity_min (float): Minimum augmentation - intensity_max (float): Maximum augmentation - """ - super().__init__() - self._init(locals()) - - def get_transform(self, image): - w = np.random.uniform(self.intensity_min, self.intensity_max) - return BlendTransform(src_image=image.mean(), src_weight=1 - w, dst_weight=w) - - -class RandomBrightness(Augmentation): - """ - Randomly transforms image brightness. - - Brightness intensity is uniformly sampled in (intensity_min, intensity_max). - - intensity < 1 will reduce brightness - - intensity = 1 will preserve the input image - - intensity > 1 will increase brightness - - See: https://pillow.readthedocs.io/en/3.0.x/reference/ImageEnhance.html - """ - - def __init__(self, intensity_min, intensity_max): - """ - Args: - intensity_min (float): Minimum augmentation - intensity_max (float): Maximum augmentation - """ - super().__init__() - self._init(locals()) - - def get_transform(self, image): - w = np.random.uniform(self.intensity_min, self.intensity_max) - return BlendTransform(src_image=0, src_weight=1 - w, dst_weight=w) - - -class RandomSaturation(Augmentation): - """ - Randomly transforms saturation of an RGB image. - Input images are assumed to have 'RGB' channel order. - - Saturation intensity is uniformly sampled in (intensity_min, intensity_max). - - intensity < 1 will reduce saturation (make the image more grayscale) - - intensity = 1 will preserve the input image - - intensity > 1 will increase saturation - - See: https://pillow.readthedocs.io/en/3.0.x/reference/ImageEnhance.html - """ - - def __init__(self, intensity_min, intensity_max): - """ - Args: - intensity_min (float): Minimum augmentation (1 preserves input). - intensity_max (float): Maximum augmentation (1 preserves input). 
- """ - super().__init__() - self._init(locals()) - - def get_transform(self, image): - assert image.shape[-1] == 3, "RandomSaturation only works on RGB images" - w = np.random.uniform(self.intensity_min, self.intensity_max) - grayscale = image.dot([0.299, 0.587, 0.114])[:, :, np.newaxis] - return BlendTransform(src_image=grayscale, src_weight=1 - w, dst_weight=w) - - -class RandomLighting(Augmentation): - """ - The "lighting" augmentation described in AlexNet, using fixed PCA over ImageNet. - Input images are assumed to have 'RGB' channel order. - - The degree of color jittering is randomly sampled via a normal distribution, - with standard deviation given by the scale parameter. - """ - - def __init__(self, scale): - """ - Args: - scale (float): Standard deviation of principal component weighting. - """ - super().__init__() - self._init(locals()) - self.eigen_vecs = np.array( - [[-0.5675, 0.7192, 0.4009], [-0.5808, -0.0045, -0.8140], [-0.5836, -0.6948, 0.4203]] - ) - self.eigen_vals = np.array([0.2175, 0.0188, 0.0045]) - - def get_transform(self, image): - assert image.shape[-1] == 3, "RandomLighting only works on RGB images" - weights = np.random.normal(scale=self.scale, size=3) - return BlendTransform( - src_image=self.eigen_vecs.dot(weights * self.eigen_vals), src_weight=1.0, dst_weight=1.0 - ) diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/points_in_boxes.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/points_in_boxes.py deleted file mode 100644 index 4003173a53052161dbcd687a2fa1d755642fdab8..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/points_in_boxes.py +++ /dev/null @@ -1,133 +0,0 @@ -import torch - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'points_in_boxes_part_forward', 'points_in_boxes_cpu_forward', - 'points_in_boxes_all_forward' -]) - - -def points_in_boxes_part(points, boxes): - """Find the box in which each point is (CUDA). - - Args: - points (torch.Tensor): [B, M, 3], [x, y, z] in LiDAR/DEPTH coordinate - boxes (torch.Tensor): [B, T, 7], - num_valid_boxes <= T, [x, y, z, x_size, y_size, z_size, rz] in - LiDAR/DEPTH coordinate, (x, y, z) is the bottom center - - Returns: - box_idxs_of_pts (torch.Tensor): (B, M), default background = -1 - """ - assert points.shape[0] == boxes.shape[0], \ - 'Points and boxes should have the same batch size, ' \ - f'but got {points.shape[0]} and {boxes.shape[0]}' - assert boxes.shape[2] == 7, \ - 'boxes dimension should be 7, ' \ - f'but got unexpected shape {boxes.shape[2]}' - assert points.shape[2] == 3, \ - 'points dimension should be 3, ' \ - f'but got unexpected shape {points.shape[2]}' - batch_size, num_points, _ = points.shape - - box_idxs_of_pts = points.new_zeros((batch_size, num_points), - dtype=torch.int).fill_(-1) - - # If manually put the tensor 'points' or 'boxes' on a device - # which is not the current device, some temporary variables - # will be created on the current device in the cuda op, - # and the output will be incorrect. - # Therefore, we force the current device to be the same - # as the device of the tensors if it was not. - # Please refer to https://github.com/open-mmlab/mmdetection3d/issues/305 - # for the incorrect output before the fix. 
- points_device = points.get_device() - assert points_device == boxes.get_device(), \ - 'Points and boxes should be put on the same device' - if torch.cuda.current_device() != points_device: - torch.cuda.set_device(points_device) - - ext_module.points_in_boxes_part_forward(boxes.contiguous(), - points.contiguous(), - box_idxs_of_pts) - - return box_idxs_of_pts - - -def points_in_boxes_cpu(points, boxes): - """Find all boxes in which each point is (CPU). The CPU version of - :meth:`points_in_boxes_all`. - - Args: - points (torch.Tensor): [B, M, 3], [x, y, z] in - LiDAR/DEPTH coordinate - boxes (torch.Tensor): [B, T, 7], - num_valid_boxes <= T, [x, y, z, x_size, y_size, z_size, rz], - (x, y, z) is the bottom center. - - Returns: - box_idxs_of_pts (torch.Tensor): (B, M, T), default background = 0. - """ - assert points.shape[0] == boxes.shape[0], \ - 'Points and boxes should have the same batch size, ' \ - f'but got {points.shape[0]} and {boxes.shape[0]}' - assert boxes.shape[2] == 7, \ - 'boxes dimension should be 7, ' \ - f'but got unexpected shape {boxes.shape[2]}' - assert points.shape[2] == 3, \ - 'points dimension should be 3, ' \ - f'but got unexpected shape {points.shape[2]}' - batch_size, num_points, _ = points.shape - num_boxes = boxes.shape[1] - - point_indices = points.new_zeros((batch_size, num_boxes, num_points), - dtype=torch.int) - for b in range(batch_size): - ext_module.points_in_boxes_cpu_forward(boxes[b].float().contiguous(), - points[b].float().contiguous(), - point_indices[b]) - point_indices = point_indices.transpose(1, 2) - - return point_indices - - -def points_in_boxes_all(points, boxes): - """Find all boxes in which each point is (CUDA). - - Args: - points (torch.Tensor): [B, M, 3], [x, y, z] in LiDAR/DEPTH coordinate - boxes (torch.Tensor): [B, T, 7], - num_valid_boxes <= T, [x, y, z, x_size, y_size, z_size, rz], - (x, y, z) is the bottom center. - - Returns: - box_idxs_of_pts (torch.Tensor): (B, M, T), default background = 0. 
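    A minimal sketch of the expected call shapes; the tensor values are made up,
    all variants need mmcv's compiled _ext, and the CUDA ones additionally need
    both tensors on the same GPU:

        import torch

        points = torch.tensor([[[0.0, 0.0, 0.5], [5.0, 5.0, 5.0]]])    # [B=1, M=2, 3]
        boxes = torch.tensor([[[0.0, 0.0, 0.0, 2.0, 2.0, 2.0, 0.0]]])  # [B=1, T=1, 7]
        idx = points_in_boxes_cpu(points, boxes)
        # idx has shape (1, 2, 1): 1 for the first point (inside the box), 0 for the second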
- """ - assert boxes.shape[0] == points.shape[0], \ - 'Points and boxes should have the same batch size, ' \ - f'but got {boxes.shape[0]} and {boxes.shape[0]}' - assert boxes.shape[2] == 7, \ - 'boxes dimension should be 7, ' \ - f'but got unexpected shape {boxes.shape[2]}' - assert points.shape[2] == 3, \ - 'points dimension should be 3, ' \ - f'but got unexpected shape {points.shape[2]}' - batch_size, num_points, _ = points.shape - num_boxes = boxes.shape[1] - - box_idxs_of_pts = points.new_zeros((batch_size, num_points, num_boxes), - dtype=torch.int).fill_(0) - - # Same reason as line 25-32 - points_device = points.get_device() - assert points_device == boxes.get_device(), \ - 'Points and boxes should be put on the same device' - if torch.cuda.current_device() != points_device: - torch.cuda.set_device(points_device) - - ext_module.points_in_boxes_all_forward(boxes.contiguous(), - points.contiguous(), - box_idxs_of_pts) - - return box_idxs_of_pts diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-16.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-16.go deleted file mode 100644 index 9abb91b15a72f5c312be7ebfbcb35aebc4bf805f..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-16.go and /dev/null differ diff --git a/spaces/Pavankunchala/Depth-Estimation-App/depth_estim_app.py b/spaces/Pavankunchala/Depth-Estimation-App/depth_estim_app.py deleted file mode 100644 index 9c6bb9c5cbf5db4a834236386a941cb353f08d56..0000000000000000000000000000000000000000 --- a/spaces/Pavankunchala/Depth-Estimation-App/depth_estim_app.py +++ /dev/null @@ -1,219 +0,0 @@ -import sys -import time -from pathlib import Path - -import cv2 -from openvino.inference_engine import IECore -import matplotlib.cm -import matplotlib.pyplot as plt -import numpy as np -import streamlit as st -from PIL import Image -import tempfile - - -DEMO_IMAGE = 'dog-new.jpg' - -DEMO_VIDEO = 'demo.mp4' - - -@st.cache -def normalize_minmax(data): - - return (data - data.min()) / (data.max() - data.min()) - -@st.cache -def convert_result_to_image(result, colormap="inferno"): - - cmap = matplotlib.cm.get_cmap(colormap) - result = result.squeeze(0) - result = normalize_minmax(result) - result = cmap(result)[:, :, :3] * 255 - result = result.astype(np.uint8) - return result - -@st.cache -def to_rgb(image_data) -> np.ndarray: - - return cv2.cvtColor(image_data, cv2.COLOR_BGR2RGB) - - -st.title("Depth Estimation App") -st.sidebar.title('Depth Estimation') -st.sidebar.subheader('Parameters') - -DEVICE = "CPU" -MODEL_FILE = "models/MiDaS_small.xml" - -model_xml_path = Path(MODEL_FILE) - - -ie = IECore() -net = ie.read_network(model=model_xml_path, weights=model_xml_path.with_suffix(".bin")) -exec_net = ie.load_network(network=net, device_name=DEVICE) - -input_key = list(exec_net.input_info)[0] -output_key = list(exec_net.outputs.keys())[0] - -network_input_shape = exec_net.input_info[input_key].tensor_desc.dims -network_image_height, network_image_width = network_input_shape[2:] - - -app_mode = st.sidebar.selectbox('Choose the App mode', -['Run on Image','Run on Video'],index = 0) - - -if app_mode == "Run on Image": - - - st.markdown('Running on Image') - - st.sidebar.text('Params for Image') - st.markdown( - """ - - """, - unsafe_allow_html=True, - ) - - img_file_buffer = st.sidebar.file_uploader("Upload an image", type=[ "jpg", "jpeg",'png']) - - if img_file_buffer is not None: - image = 
np.array(Image.open(img_file_buffer)) - - else: - demo_image = DEMO_IMAGE - image = np.array(Image.open(demo_image)) - - st.sidebar.text('Original Image') - st.sidebar.image(image) - resized_image = cv2.resize(src=image, dsize=(network_image_height, network_image_width)) - # reshape image to network input shape NCHW - input_image = np.expand_dims(np.transpose(resized_image, (2, 0, 1)), 0) - - - result = exec_net.infer(inputs={input_key: input_image})[output_key] - # convert network result of disparity map to an image that shows - # distance as colors - result_image = convert_result_to_image(result=result) - # resize back to original image shape. cv2.resize expects shape - # in (width, height), [::-1] reverses the (height, width) shape to match this. - result_image = cv2.resize(result_image, image.shape[:2][::-1]) - - - st.subheader('Output Image') - - st.image(result_image,use_column_width= True) - -if app_mode =='Run on Video': - - st.markdown('Running on Video') - - use_webcam = st.sidebar.button('Use Webcam') - - video_file_buffer = st.sidebar.file_uploader("Upload a video", type=[ "mp4", "mov",'avi','asf', 'm4v' ]) - - tfflie = tempfile.NamedTemporaryFile(delete=False) - - stop_button = st.sidebar.button('Stop Processing') - - if stop_button: - st.stop() - - - - if not video_file_buffer: - if use_webcam: - vid = cv2.VideoCapture(0) - - else: - vid = cv2.VideoCapture(DEMO_VIDEO) - tfflie.name = DEMO_VIDEO - - - - else: - tfflie.write(video_file_buffer.read()) - vid = cv2.VideoCapture(tfflie.name) - - - - - - - - width = int(vid.get(cv2.CAP_PROP_FRAME_WIDTH)) - height = int(vid.get(cv2.CAP_PROP_FRAME_HEIGHT)) - fps = int(vid.get(cv2.CAP_PROP_FPS))#codec = cv2.VideoWriter_fourcc(*FLAGS.output_format) - codec = cv2.VideoWriter_fourcc('X','V','I','D') - out = cv2.VideoWriter('output_depth.mp4', codec, fps, (width, height)) - - start_time = time.perf_counter() - total_inference_duration = 0 - stframe = st.empty() - SCALE_OUTPUT = 1 - st.markdown("**Frame Rate**") - kpi1_text = st.markdown("0") - save_video = st.checkbox('Save video') - - while vid.isOpened(): - ret, image = vid.read() - new_time = time.time() - input_video_frame_height, input_video_frame_width = image.shape[:2] - target_frame_height = int(input_video_frame_height * SCALE_OUTPUT) - target_frame_width = int(input_video_frame_width * SCALE_OUTPUT) - - - if not ret: - vid.release() - break - resized_image = cv2.resize(src=image, dsize=(network_image_height, network_image_width)) - # reshape image to network input shape NCHW - input_image = np.expand_dims(np.transpose(resized_image, (2, 0, 1)), 0) - - inference_start_time = time.perf_counter() - result = exec_net.infer(inputs={input_key: input_image})[output_key] - inference_stop_time = time.perf_counter() - inference_duration = inference_stop_time - inference_start_time - total_inference_duration += inference_duration - - - result_frame = to_rgb(convert_result_to_image(result)) - # Resize image and result to target frame shape - result_frame = cv2.resize(result_frame, (target_frame_width, target_frame_height)) - image = cv2.resize(image, (target_frame_width, target_frame_height)) - # Put image and result side by side - stacked_frame = np.vstack((image, result_frame)) - if save_video: - out.write(stacked_frame) - - stframe.image(stacked_frame,channels = 'BGR',use_column_width=True) - fps = 1.0/(time.time() - new_time) - kpi1_text.write(f"

    {'{:.1f}'.format(fps)}

    ", unsafe_allow_html=True) - - - - vid.release() - out.release() - cv2.destroyAllWindows() - st.success('Video is Processed') - st.stop() - - - - - - - - - - - diff --git a/spaces/Pengyey/bingo-chuchu/src/components/chat-suggestions.tsx b/spaces/Pengyey/bingo-chuchu/src/components/chat-suggestions.tsx deleted file mode 100644 index 48aec7c84e4407c482acdfcc7857fb0f660d12d3..0000000000000000000000000000000000000000 --- a/spaces/Pengyey/bingo-chuchu/src/components/chat-suggestions.tsx +++ /dev/null @@ -1,45 +0,0 @@ -import React, { useMemo } from 'react' -import Image from 'next/image' -import HelpIcon from '@/assets/images/help.svg' -import { SuggestedResponse } from '@/lib/bots/bing/types' -import { useBing } from '@/lib/hooks/use-bing' -import { atom, useAtom } from 'jotai' - -type Suggestions = SuggestedResponse[] -const helpSuggestions = ['为什么不回应某些主题', '告诉我更多关于必应的资迅', '必应如何使用 AI?'].map((text) => ({ text })) -const suggestionsAtom = atom([]) - -type ChatSuggestionsProps = React.ComponentProps<'div'> & Pick, 'setInput'> & { suggestions?: Suggestions } - -export function ChatSuggestions({ setInput, suggestions = [] }: ChatSuggestionsProps) { - const [currentSuggestions, setSuggestions] = useAtom(suggestionsAtom) - const toggleSuggestions = (() => { - if (currentSuggestions === helpSuggestions) { - setSuggestions(suggestions) - } else { - setSuggestions(helpSuggestions) - } - }) - - useMemo(() => { - setSuggestions(suggestions) - window.scrollBy(0, 2000) - }, [suggestions.length, setSuggestions]) - - return currentSuggestions?.length ? ( -
    -
    - - { - currentSuggestions.map(suggestion => ( - - )) - } -
    -
    - ) : null -} diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/detector/generalized_rcnn.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/detector/generalized_rcnn.py deleted file mode 100644 index c8ef1cbc21268516c8c6a94a0bf6c8f997b27ed0..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/detector/generalized_rcnn.py +++ /dev/null @@ -1,124 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -""" -Implements the Generalized R-CNN framework -""" - -import torch -from torch import nn - -from maskrcnn_benchmark.structures.image_list import to_image_list - -from ..backbone import build_backbone -from ..rpn import build_rpn -from ..roi_heads import build_roi_heads - -import timeit - -class GeneralizedRCNN(nn.Module): - """ - Main class for Generalized R-CNN. Currently supports boxes and masks. - It consists of three main parts: - - backbone - - rpn - - heads: takes the features + the proposals from the RPN and computes - detections / masks from it. - """ - - def __init__(self, cfg): - super(GeneralizedRCNN, self).__init__() - - self.backbone = build_backbone(cfg) - self.rpn = build_rpn(cfg) - self.roi_heads = build_roi_heads(cfg) - self.DEBUG = cfg.MODEL.DEBUG - self.ONNX = cfg.MODEL.ONNX - self.freeze_backbone = cfg.MODEL.BACKBONE.FREEZE - self.freeze_fpn = cfg.MODEL.FPN.FREEZE - self.freeze_rpn = cfg.MODEL.RPN.FREEZE - - if cfg.MODEL.LINEAR_PROB: - assert cfg.MODEL.BACKBONE.FREEZE, "For linear probing, backbone should be frozen!" - if hasattr(self.backbone, 'fpn'): - assert cfg.MODEL.FPN.FREEZE, "For linear probing, FPN should be frozen!" - self.linear_prob = cfg.MODEL.LINEAR_PROB - - def train(self, mode=True): - """Convert the model into training mode while keep layers freezed.""" - super(GeneralizedRCNN, self).train(mode) - if self.freeze_backbone: - self.backbone.body.eval() - for p in self.backbone.body.parameters(): - p.requires_grad = False - if self.freeze_fpn: - self.backbone.fpn.eval() - for p in self.backbone.fpn.parameters(): - p.requires_grad = False - if self.freeze_rpn: - self.rpn.eval() - for p in self.rpn.parameters(): - p.requires_grad = False - if self.linear_prob: - if self.rpn is not None: - for key, value in self.rpn.named_parameters(): - if not ('bbox_pred' in key or 'cls_logits' in key or 'centerness' in key or 'cosine_scale' in key): - value.requires_grad = False - if self.roi_heads is not None: - for key, value in self.roi_heads.named_parameters(): - if not ('bbox_pred' in key or 'cls_logits' in key or 'centerness' in key or 'cosine_scale' in key): - value.requires_grad = False - - def forward(self, images, targets=None): - """ - Arguments: - images (list[Tensor] or ImageList): images to be processed - targets (list[BoxList]): ground-truth boxes present in the image (optional) - - Returns: - result (list[BoxList] or dict[Tensor]): the output from the model. - During training, it returns a dict[Tensor] which contains the losses. - During testing, it returns list[BoxList] contains additional fields - like `scores`, `labels` and `mask` (for Mask R-CNN models). 
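        A usage sketch of that contract; building `cfg`, `images` and `targets` is
        assumed to happen elsewhere (e.g. via the maskrcnn_benchmark config and
        data loader):

            model = GeneralizedRCNN(cfg)

            model.train()
            loss_dict = model(images, targets)   # dict of scalar loss tensors

            model.eval()
            with torch.no_grad():
                detections = model(images)       # list[BoxList] with `scores` / `labels` fields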
- - """ - if self.training and targets is None: - raise ValueError("In training mode, targets should be passed") - - if self.DEBUG: debug_info = {} - if self.DEBUG: debug_info['input_size'] = images[0].size() - if self.DEBUG: tic = timeit.time.perf_counter() - - if self.ONNX: - features = self.backbone(images) - else: - images = to_image_list(images) - features = self.backbone(images.tensors) - - if self.DEBUG: debug_info['feat_time'] = timeit.time.perf_counter() - tic - if self.DEBUG: debug_info['feat_size'] = [feat.size() for feat in features] - if self.DEBUG: tic = timeit.time.perf_counter() - - proposals, proposal_losses = self.rpn(images, features, targets) - - if self.DEBUG: debug_info['rpn_time'] = timeit.time.perf_counter() - tic - if self.DEBUG: debug_info['#rpn'] = [prop for prop in proposals] - if self.DEBUG: tic = timeit.time.perf_counter() - - if self.roi_heads: - x, result, detector_losses = self.roi_heads(features, proposals, targets) - else: - # RPN-only models don't have roi_heads - x = features - result = proposals - detector_losses = {} - - if self.DEBUG: debug_info['rcnn_time'] = timeit.time.perf_counter() - tic - if self.DEBUG: debug_info['#rcnn'] = result - if self.DEBUG: return result, debug_info - - if self.training: - losses = {} - losses.update(detector_losses) - losses.update(proposal_losses) - return losses - - return result \ No newline at end of file diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/grids/diffusion/4_bands_base_32khz.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/grids/diffusion/4_bands_base_32khz.py deleted file mode 100644 index f7e67bcc89dd0c8e50d770e600b55f179fe19588..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/grids/diffusion/4_bands_base_32khz.py +++ /dev/null @@ -1,27 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Training of the 4 diffusion models described in -"From Discrete Tokens to High-Fidelity Audio Using Multi-Band Diffusion" -(paper link). 
-""" - -from ._explorers import DiffusionExplorer - - -@DiffusionExplorer -def explorer(launcher): - launcher.slurm_(gpus=4, partition='learnfair') - - launcher.bind_({'solver': 'diffusion/default', - 'dset': 'internal/music_10k_32khz'}) - - with launcher.job_array(): - launcher({'filter.use': True, 'filter.idx_band': 0, "processor.use": False, 'processor.power_std': 0.4}) - launcher({'filter.use': True, 'filter.idx_band': 1, "processor.use": False, 'processor.power_std': 0.4}) - launcher({'filter.use': True, 'filter.idx_band': 2, "processor.use": True, 'processor.power_std': 0.4}) - launcher({'filter.use': True, 'filter.idx_band': 3, "processor.use": True, 'processor.power_std': 0.75}) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/commands/list.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/commands/list.py deleted file mode 100644 index 8e1426dbb6c6762a673db2691ecd7ac124d46ec8..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/commands/list.py +++ /dev/null @@ -1,365 +0,0 @@ -import json -import logging -from optparse import Values -from typing import TYPE_CHECKING, Generator, List, Optional, Sequence, Tuple, cast - -from pip._vendor.packaging.utils import canonicalize_name - -from pip._internal.cli import cmdoptions -from pip._internal.cli.req_command import IndexGroupCommand -from pip._internal.cli.status_codes import SUCCESS -from pip._internal.exceptions import CommandError -from pip._internal.index.collector import LinkCollector -from pip._internal.index.package_finder import PackageFinder -from pip._internal.metadata import BaseDistribution, get_environment -from pip._internal.models.selection_prefs import SelectionPreferences -from pip._internal.network.session import PipSession -from pip._internal.utils.compat import stdlib_pkgs -from pip._internal.utils.misc import tabulate, write_output - -if TYPE_CHECKING: - from pip._internal.metadata.base import DistributionVersion - - class _DistWithLatestInfo(BaseDistribution): - """Give the distribution object a couple of extra fields. - - These will be populated during ``get_outdated()``. This is dirty but - makes the rest of the code much cleaner. - """ - - latest_version: DistributionVersion - latest_filetype: str - - _ProcessedDists = Sequence[_DistWithLatestInfo] - - -logger = logging.getLogger(__name__) - - -class ListCommand(IndexGroupCommand): - """ - List installed packages, including editables. - - Packages are listed in a case-insensitive sorted order. - """ - - ignore_require_venv = True - usage = """ - %prog [options]""" - - def add_options(self) -> None: - self.cmd_opts.add_option( - "-o", - "--outdated", - action="store_true", - default=False, - help="List outdated packages", - ) - self.cmd_opts.add_option( - "-u", - "--uptodate", - action="store_true", - default=False, - help="List uptodate packages", - ) - self.cmd_opts.add_option( - "-e", - "--editable", - action="store_true", - default=False, - help="List editable projects.", - ) - self.cmd_opts.add_option( - "-l", - "--local", - action="store_true", - default=False, - help=( - "If in a virtualenv that has global access, do not list " - "globally-installed packages." 
- ), - ) - self.cmd_opts.add_option( - "--user", - dest="user", - action="store_true", - default=False, - help="Only output packages installed in user-site.", - ) - self.cmd_opts.add_option(cmdoptions.list_path()) - self.cmd_opts.add_option( - "--pre", - action="store_true", - default=False, - help=( - "Include pre-release and development versions. By default, " - "pip only finds stable versions." - ), - ) - - self.cmd_opts.add_option( - "--format", - action="store", - dest="list_format", - default="columns", - choices=("columns", "freeze", "json"), - help="Select the output format among: columns (default), freeze, or json", - ) - - self.cmd_opts.add_option( - "--not-required", - action="store_true", - dest="not_required", - help="List packages that are not dependencies of installed packages.", - ) - - self.cmd_opts.add_option( - "--exclude-editable", - action="store_false", - dest="include_editable", - help="Exclude editable package from output.", - ) - self.cmd_opts.add_option( - "--include-editable", - action="store_true", - dest="include_editable", - help="Include editable package from output.", - default=True, - ) - self.cmd_opts.add_option(cmdoptions.list_exclude()) - index_opts = cmdoptions.make_option_group(cmdoptions.index_group, self.parser) - - self.parser.insert_option_group(0, index_opts) - self.parser.insert_option_group(0, self.cmd_opts) - - def _build_package_finder( - self, options: Values, session: PipSession - ) -> PackageFinder: - """ - Create a package finder appropriate to this list command. - """ - link_collector = LinkCollector.create(session, options=options) - - # Pass allow_yanked=False to ignore yanked versions. - selection_prefs = SelectionPreferences( - allow_yanked=False, - allow_all_prereleases=options.pre, - ) - - return PackageFinder.create( - link_collector=link_collector, - selection_prefs=selection_prefs, - ) - - def run(self, options: Values, args: List[str]) -> int: - if options.outdated and options.uptodate: - raise CommandError("Options --outdated and --uptodate cannot be combined.") - - if options.outdated and options.list_format == "freeze": - raise CommandError( - "List format 'freeze' can not be used with the --outdated option." - ) - - cmdoptions.check_list_path_option(options) - - skip = set(stdlib_pkgs) - if options.excludes: - skip.update(canonicalize_name(n) for n in options.excludes) - - packages: "_ProcessedDists" = [ - cast("_DistWithLatestInfo", d) - for d in get_environment(options.path).iter_installed_distributions( - local_only=options.local, - user_only=options.user, - editables_only=options.editable, - include_editables=options.include_editable, - skip=skip, - ) - ] - - # get_not_required must be called firstly in order to find and - # filter out all dependencies correctly. Otherwise a package - # can't be identified as requirement because some parent packages - # could be filtered out before. 
- if options.not_required: - packages = self.get_not_required(packages, options) - - if options.outdated: - packages = self.get_outdated(packages, options) - elif options.uptodate: - packages = self.get_uptodate(packages, options) - - self.output_package_listing(packages, options) - return SUCCESS - - def get_outdated( - self, packages: "_ProcessedDists", options: Values - ) -> "_ProcessedDists": - return [ - dist - for dist in self.iter_packages_latest_infos(packages, options) - if dist.latest_version > dist.version - ] - - def get_uptodate( - self, packages: "_ProcessedDists", options: Values - ) -> "_ProcessedDists": - return [ - dist - for dist in self.iter_packages_latest_infos(packages, options) - if dist.latest_version == dist.version - ] - - def get_not_required( - self, packages: "_ProcessedDists", options: Values - ) -> "_ProcessedDists": - dep_keys = { - canonicalize_name(dep.name) - for dist in packages - for dep in (dist.iter_dependencies() or ()) - } - - # Create a set to remove duplicate packages, and cast it to a list - # to keep the return type consistent with get_outdated and - # get_uptodate - return list({pkg for pkg in packages if pkg.canonical_name not in dep_keys}) - - def iter_packages_latest_infos( - self, packages: "_ProcessedDists", options: Values - ) -> Generator["_DistWithLatestInfo", None, None]: - with self._build_session(options) as session: - finder = self._build_package_finder(options, session) - - def latest_info( - dist: "_DistWithLatestInfo", - ) -> Optional["_DistWithLatestInfo"]: - all_candidates = finder.find_all_candidates(dist.canonical_name) - if not options.pre: - # Remove prereleases - all_candidates = [ - candidate - for candidate in all_candidates - if not candidate.version.is_prerelease - ] - - evaluator = finder.make_candidate_evaluator( - project_name=dist.canonical_name, - ) - best_candidate = evaluator.sort_best_candidate(all_candidates) - if best_candidate is None: - return None - - remote_version = best_candidate.version - if best_candidate.link.is_wheel: - typ = "wheel" - else: - typ = "sdist" - dist.latest_version = remote_version - dist.latest_filetype = typ - return dist - - for dist in map(latest_info, packages): - if dist is not None: - yield dist - - def output_package_listing( - self, packages: "_ProcessedDists", options: Values - ) -> None: - packages = sorted( - packages, - key=lambda dist: dist.canonical_name, - ) - if options.list_format == "columns" and packages: - data, header = format_for_columns(packages, options) - self.output_package_listing_columns(data, header) - elif options.list_format == "freeze": - for dist in packages: - if options.verbose >= 1: - write_output( - "%s==%s (%s)", dist.raw_name, dist.version, dist.location - ) - else: - write_output("%s==%s", dist.raw_name, dist.version) - elif options.list_format == "json": - write_output(format_for_json(packages, options)) - - def output_package_listing_columns( - self, data: List[List[str]], header: List[str] - ) -> None: - # insert the header first: we need to know the size of column names - if len(data) > 0: - data.insert(0, header) - - pkg_strings, sizes = tabulate(data) - - # Create and add a separator. - if len(data) > 0: - pkg_strings.insert(1, " ".join(map(lambda x: "-" * x, sizes))) - - for val in pkg_strings: - write_output(val) - - -def format_for_columns( - pkgs: "_ProcessedDists", options: Values -) -> Tuple[List[List[str]], List[str]]: - """ - Convert the package data into something usable - by output_package_listing_columns. 
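    For instance (package names and versions are illustrative), two ordinary
    installed distributions with default options come back roughly as:

        header = ["Package", "Version"]
        data   = [["requests", "2.28.1"], ["urllib3", "1.26.12"]]

    With --outdated, "Latest" and "Type" columns are appended to each row; editable
    installs and higher verbosity add further columns.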
- """ - header = ["Package", "Version"] - - running_outdated = options.outdated - if running_outdated: - header.extend(["Latest", "Type"]) - - has_editables = any(x.editable for x in pkgs) - if has_editables: - header.append("Editable project location") - - if options.verbose >= 1: - header.append("Location") - if options.verbose >= 1: - header.append("Installer") - - data = [] - for proj in pkgs: - # if we're working on the 'outdated' list, separate out the - # latest_version and type - row = [proj.raw_name, str(proj.version)] - - if running_outdated: - row.append(str(proj.latest_version)) - row.append(proj.latest_filetype) - - if has_editables: - row.append(proj.editable_project_location or "") - - if options.verbose >= 1: - row.append(proj.location or "") - if options.verbose >= 1: - row.append(proj.installer) - - data.append(row) - - return data, header - - -def format_for_json(packages: "_ProcessedDists", options: Values) -> str: - data = [] - for dist in packages: - info = { - "name": dist.raw_name, - "version": str(dist.version), - } - if options.verbose >= 1: - info["location"] = dist.location or "" - info["installer"] = dist.installer - if options.outdated: - info["latest_version"] = str(dist.latest_version) - info["latest_filetype"] = dist.latest_filetype - editable_project_location = dist.editable_project_location - if editable_project_location: - info["editable_project_location"] = editable_project_location - data.append(info) - return json.dumps(data) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/chardet/eucjpprober.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/chardet/eucjpprober.py deleted file mode 100644 index abf2e66e283eb45c404e2d566c3933ae369324e8..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/chardet/eucjpprober.py +++ /dev/null @@ -1,95 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is mozilla.org code. -# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 1998 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. 
-# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -from .chardistribution import EUCJPDistributionAnalysis -from .codingstatemachine import CodingStateMachine -from .enums import MachineState, ProbingState -from .jpcntx import EUCJPContextAnalysis -from .mbcharsetprober import MultiByteCharSetProber -from .mbcssm import EUCJP_SM_MODEL - - -class EUCJPProber(MultiByteCharSetProber): - def __init__(self): - super().__init__() - self.coding_sm = CodingStateMachine(EUCJP_SM_MODEL) - self.distribution_analyzer = EUCJPDistributionAnalysis() - self.context_analyzer = EUCJPContextAnalysis() - self.reset() - - def reset(self): - super().reset() - self.context_analyzer.reset() - - @property - def charset_name(self): - return "EUC-JP" - - @property - def language(self): - return "Japanese" - - def feed(self, byte_str): - for i, byte in enumerate(byte_str): - # PY3K: byte_str is a byte array, so byte is an int, not a byte - coding_state = self.coding_sm.next_state(byte) - if coding_state == MachineState.ERROR: - self.logger.debug( - "%s %s prober hit error at byte %s", - self.charset_name, - self.language, - i, - ) - self._state = ProbingState.NOT_ME - break - if coding_state == MachineState.ITS_ME: - self._state = ProbingState.FOUND_IT - break - if coding_state == MachineState.START: - char_len = self.coding_sm.get_current_charlen() - if i == 0: - self._last_char[1] = byte - self.context_analyzer.feed(self._last_char, char_len) - self.distribution_analyzer.feed(self._last_char, char_len) - else: - self.context_analyzer.feed(byte_str[i - 1 : i + 1], char_len) - self.distribution_analyzer.feed(byte_str[i - 1 : i + 1], char_len) - - self._last_char[0] = byte_str[-1] - - if self.state == ProbingState.DETECTING: - if self.context_analyzer.got_enough_data() and ( - self.get_confidence() > self.SHORTCUT_THRESHOLD - ): - self._state = ProbingState.FOUND_IT - - return self.state - - def get_confidence(self): - context_conf = self.context_analyzer.get_confidence() - distrib_conf = self.distribution_analyzer.get_confidence() - return max(context_conf, distrib_conf) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/idna/compat.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/idna/compat.py deleted file mode 100644 index 786e6bda63699b72d588ba91dd73df017570aee5..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/idna/compat.py +++ /dev/null @@ -1,13 +0,0 @@ -from .core import * -from .codec import * -from typing import Any, Union - -def ToASCII(label: str) -> bytes: - return encode(label) - -def ToUnicode(label: Union[bytes, bytearray]) -> str: - return decode(label) - -def nameprep(s: Any) -> None: - raise NotImplementedError('IDNA 2008 does not utilise nameprep protocol') - diff --git a/spaces/Ridzuan/random_name_selector/README.md b/spaces/Ridzuan/random_name_selector/README.md deleted file mode 100644 index f25a7c1e62d91684fb9721c1572a12024269cbb4..0000000000000000000000000000000000000000 --- a/spaces/Ridzuan/random_name_selector/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Random Name Selector -emoji: 🐢 -colorFrom: gray -colorTo: gray -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: unlicense ---- - 
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Rmpmartinspro2/Waifu-Diffusers/app.py b/spaces/Rmpmartinspro2/Waifu-Diffusers/app.py deleted file mode 100644 index 85c2b9c77173fff8e4c2a4c800c863df3c6821a3..0000000000000000000000000000000000000000 --- a/spaces/Rmpmartinspro2/Waifu-Diffusers/app.py +++ /dev/null @@ -1,15 +0,0 @@ -import os -import gradio as gr - -API_KEY=os.environ.get('HUGGING_FACE_HUB_TOKEN', None) - -article = """--- -This space was created using [SD Space Creator](https://huggingface.co/spaces/anzorq/sd-space-creator).""" - -gr.Interface.load( - name="models/Nilaier/Waifu-Diffusers", - title="""Waifu Diffusers""", - description="""Demo for Waifu Diffusers Stable Diffusion model.""", - article=article, - api_key=API_KEY, - ).queue(concurrency_count=20).launch() diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/arraymisc/quantization.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/arraymisc/quantization.py deleted file mode 100644 index 8e47a3545780cf071a1ef8195efb0b7b662c8186..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/arraymisc/quantization.py +++ /dev/null @@ -1,55 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np - - -def quantize(arr, min_val, max_val, levels, dtype=np.int64): - """Quantize an array of (-inf, inf) to [0, levels-1]. - - Args: - arr (ndarray): Input array. - min_val (scalar): Minimum value to be clipped. - max_val (scalar): Maximum value to be clipped. - levels (int): Quantization levels. - dtype (np.type): The type of the quantized array. - - Returns: - tuple: Quantized array. - """ - if not (isinstance(levels, int) and levels > 1): - raise ValueError( - f'levels must be a positive integer, but got {levels}') - if min_val >= max_val: - raise ValueError( - f'min_val ({min_val}) must be smaller than max_val ({max_val})') - - arr = np.clip(arr, min_val, max_val) - min_val - quantized_arr = np.minimum( - np.floor(levels * arr / (max_val - min_val)).astype(dtype), levels - 1) - - return quantized_arr - - -def dequantize(arr, min_val, max_val, levels, dtype=np.float64): - """Dequantize an array. - - Args: - arr (ndarray): Input array. - min_val (scalar): Minimum value to be clipped. - max_val (scalar): Maximum value to be clipped. - levels (int): Quantization levels. - dtype (np.type): The type of the dequantized array. - - Returns: - tuple: Dequantized array. - """ - if not (isinstance(levels, int) and levels > 1): - raise ValueError( - f'levels must be a positive integer, but got {levels}') - if min_val >= max_val: - raise ValueError( - f'min_val ({min_val}) must be smaller than max_val ({max_val})') - - dequantized_arr = (arr + 0.5).astype(dtype) * (max_val - - min_val) / levels + min_val - - return dequantized_arr diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/video/optflow.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/video/optflow.py deleted file mode 100644 index 84160f8d6ef9fceb5a2f89e7481593109fc1905d..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/video/optflow.py +++ /dev/null @@ -1,254 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import warnings - -import cv2 -import numpy as np - -from annotator.uniformer.mmcv.arraymisc import dequantize, quantize -from annotator.uniformer.mmcv.image import imread, imwrite -from annotator.uniformer.mmcv.utils import is_str - - -def flowread(flow_or_path, quantize=False, concat_axis=0, *args, **kwargs): - """Read an optical flow map. - - Args: - flow_or_path (ndarray or str): A flow map or filepath. - quantize (bool): whether to read quantized pair, if set to True, - remaining args will be passed to :func:`dequantize_flow`. - concat_axis (int): The axis that dx and dy are concatenated, - can be either 0 or 1. Ignored if quantize is False. - - Returns: - ndarray: Optical flow represented as a (h, w, 2) numpy array - """ - if isinstance(flow_or_path, np.ndarray): - if (flow_or_path.ndim != 3) or (flow_or_path.shape[-1] != 2): - raise ValueError(f'Invalid flow with shape {flow_or_path.shape}') - return flow_or_path - elif not is_str(flow_or_path): - raise TypeError(f'"flow_or_path" must be a filename or numpy array, ' - f'not {type(flow_or_path)}') - - if not quantize: - with open(flow_or_path, 'rb') as f: - try: - header = f.read(4).decode('utf-8') - except Exception: - raise IOError(f'Invalid flow file: {flow_or_path}') - else: - if header != 'PIEH': - raise IOError(f'Invalid flow file: {flow_or_path}, ' - 'header does not contain PIEH') - - w = np.fromfile(f, np.int32, 1).squeeze() - h = np.fromfile(f, np.int32, 1).squeeze() - flow = np.fromfile(f, np.float32, w * h * 2).reshape((h, w, 2)) - else: - assert concat_axis in [0, 1] - cat_flow = imread(flow_or_path, flag='unchanged') - if cat_flow.ndim != 2: - raise IOError( - f'{flow_or_path} is not a valid quantized flow file, ' - f'its dimension is {cat_flow.ndim}.') - assert cat_flow.shape[concat_axis] % 2 == 0 - dx, dy = np.split(cat_flow, 2, axis=concat_axis) - flow = dequantize_flow(dx, dy, *args, **kwargs) - - return flow.astype(np.float32) - - -def flowwrite(flow, filename, quantize=False, concat_axis=0, *args, **kwargs): - """Write optical flow to file. - - If the flow is not quantized, it will be saved as a .flo file losslessly, - otherwise a jpeg image which is lossy but of much smaller size. (dx and dy - will be concatenated horizontally into a single image if quantize is True.) - - Args: - flow (ndarray): (h, w, 2) array of optical flow. - filename (str): Output filepath. - quantize (bool): Whether to quantize the flow and save it to 2 jpeg - images. If set to True, remaining args will be passed to - :func:`quantize_flow`. - concat_axis (int): The axis that dx and dy are concatenated, - can be either 0 or 1. Ignored if quantize is False. - """ - if not quantize: - with open(filename, 'wb') as f: - f.write('PIEH'.encode('utf-8')) - np.array([flow.shape[1], flow.shape[0]], dtype=np.int32).tofile(f) - flow = flow.astype(np.float32) - flow.tofile(f) - f.flush() - else: - assert concat_axis in [0, 1] - dx, dy = quantize_flow(flow, *args, **kwargs) - dxdy = np.concatenate((dx, dy), axis=concat_axis) - imwrite(dxdy, filename) - - -def quantize_flow(flow, max_val=0.02, norm=True): - """Quantize flow to [0, 255]. - - After this step, the size of flow will be much smaller, and can be - dumped as jpeg images. - - Args: - flow (ndarray): (h, w, 2) array of optical flow. - max_val (float): Maximum value of flow, values beyond - [-max_val, max_val] will be truncated. - norm (bool): Whether to divide flow values by image width/height. - - Returns: - tuple[ndarray]: Quantized dx and dy. 
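    A round-trip sketch using the dequantize_flow defined further down; the flow
    array is random and only for illustration:

        import numpy as np

        flow = np.random.uniform(-5, 5, (240, 320, 2)).astype(np.float32)
        dx, dy = quantize_flow(flow, max_val=0.02, norm=True)          # two uint8 maps with values in [0, 254]
        restored = dequantize_flow(dx, dy, max_val=0.02, denorm=True)
        # `restored` tracks `flow` only where |u| / w and |v| / h stay within max_val;
        # larger motions are clipped by the quantization.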
- """ - h, w, _ = flow.shape - dx = flow[..., 0] - dy = flow[..., 1] - if norm: - dx = dx / w # avoid inplace operations - dy = dy / h - # use 255 levels instead of 256 to make sure 0 is 0 after dequantization. - flow_comps = [ - quantize(d, -max_val, max_val, 255, np.uint8) for d in [dx, dy] - ] - return tuple(flow_comps) - - -def dequantize_flow(dx, dy, max_val=0.02, denorm=True): - """Recover from quantized flow. - - Args: - dx (ndarray): Quantized dx. - dy (ndarray): Quantized dy. - max_val (float): Maximum value used when quantizing. - denorm (bool): Whether to multiply flow values with width/height. - - Returns: - ndarray: Dequantized flow. - """ - assert dx.shape == dy.shape - assert dx.ndim == 2 or (dx.ndim == 3 and dx.shape[-1] == 1) - - dx, dy = [dequantize(d, -max_val, max_val, 255) for d in [dx, dy]] - - if denorm: - dx *= dx.shape[1] - dy *= dx.shape[0] - flow = np.dstack((dx, dy)) - return flow - - -def flow_warp(img, flow, filling_value=0, interpolate_mode='nearest'): - """Use flow to warp img. - - Args: - img (ndarray, float or uint8): Image to be warped. - flow (ndarray, float): Optical Flow. - filling_value (int): The missing pixels will be set with filling_value. - interpolate_mode (str): bilinear -> Bilinear Interpolation; - nearest -> Nearest Neighbor. - - Returns: - ndarray: Warped image with the same shape of img - """ - warnings.warn('This function is just for prototyping and cannot ' - 'guarantee the computational efficiency.') - assert flow.ndim == 3, 'Flow must be in 3D arrays.' - height = flow.shape[0] - width = flow.shape[1] - channels = img.shape[2] - - output = np.ones( - (height, width, channels), dtype=img.dtype) * filling_value - - grid = np.indices((height, width)).swapaxes(0, 1).swapaxes(1, 2) - dx = grid[:, :, 0] + flow[:, :, 1] - dy = grid[:, :, 1] + flow[:, :, 0] - sx = np.floor(dx).astype(int) - sy = np.floor(dy).astype(int) - valid = (sx >= 0) & (sx < height - 1) & (sy >= 0) & (sy < width - 1) - - if interpolate_mode == 'nearest': - output[valid, :] = img[dx[valid].round().astype(int), - dy[valid].round().astype(int), :] - elif interpolate_mode == 'bilinear': - # dirty walkround for integer positions - eps_ = 1e-6 - dx, dy = dx + eps_, dy + eps_ - left_top_ = img[np.floor(dx[valid]).astype(int), - np.floor(dy[valid]).astype(int), :] * ( - np.ceil(dx[valid]) - dx[valid])[:, None] * ( - np.ceil(dy[valid]) - dy[valid])[:, None] - left_down_ = img[np.ceil(dx[valid]).astype(int), - np.floor(dy[valid]).astype(int), :] * ( - dx[valid] - np.floor(dx[valid]))[:, None] * ( - np.ceil(dy[valid]) - dy[valid])[:, None] - right_top_ = img[np.floor(dx[valid]).astype(int), - np.ceil(dy[valid]).astype(int), :] * ( - np.ceil(dx[valid]) - dx[valid])[:, None] * ( - dy[valid] - np.floor(dy[valid]))[:, None] - right_down_ = img[np.ceil(dx[valid]).astype(int), - np.ceil(dy[valid]).astype(int), :] * ( - dx[valid] - np.floor(dx[valid]))[:, None] * ( - dy[valid] - np.floor(dy[valid]))[:, None] - output[valid, :] = left_top_ + left_down_ + right_top_ + right_down_ - else: - raise NotImplementedError( - 'We only support interpolation modes of nearest and bilinear, ' - f'but got {interpolate_mode}.') - return output.astype(img.dtype) - - -def flow_from_bytes(content): - """Read dense optical flow from bytes. - - .. note:: - This load optical flow function works for FlyingChairs, FlyingThings3D, - Sintel, FlyingChairsOcc datasets, but cannot load the data from - ChairsSDHom. - - Args: - content (bytes): Optical flow bytes got from files or other streams. 
- - Returns: - ndarray: Loaded optical flow with the shape (H, W, 2). - """ - - # header in first 4 bytes - header = content[:4] - if header.decode('utf-8') != 'PIEH': - raise Exception('Flow file header does not contain PIEH') - # width in second 4 bytes - width = np.frombuffer(content[4:], np.int32, 1).squeeze() - # height in third 4 bytes - height = np.frombuffer(content[8:], np.int32, 1).squeeze() - # after first 12 bytes, all bytes are flow - flow = np.frombuffer(content[12:], np.float32, width * height * 2).reshape( - (height, width, 2)) - - return flow - - -def sparse_flow_from_bytes(content): - """Read the optical flow in KITTI datasets from bytes. - - This function is modified from RAFT load the `KITTI datasets - `_. - - Args: - content (bytes): Optical flow bytes got from files or other streams. - - Returns: - Tuple(ndarray, ndarray): Loaded optical flow with the shape (H, W, 2) - and flow valid mask with the shape (H, W). - """ # nopa - - content = np.frombuffer(content, np.uint8) - flow = cv2.imdecode(content, cv2.IMREAD_ANYDEPTH | cv2.IMREAD_COLOR) - flow = flow[:, :, ::-1].astype(np.float32) - # flow shape (H, W, 2) valid shape (H, W) - flow, valid = flow[:, :, :2], flow[:, :, 2] - flow = (flow - 2**15) / 64.0 - return flow, valid diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/losses/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/losses/__init__.py deleted file mode 100644 index beca72045694273d63465bac2f27dbc6672271db..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/losses/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -from .accuracy import Accuracy, accuracy -from .cross_entropy_loss import (CrossEntropyLoss, binary_cross_entropy, - cross_entropy, mask_cross_entropy) -from .dice_loss import DiceLoss -from .lovasz_loss import LovaszLoss -from .utils import reduce_loss, weight_reduce_loss, weighted_loss - -__all__ = [ - 'accuracy', 'Accuracy', 'cross_entropy', 'binary_cross_entropy', - 'mask_cross_entropy', 'CrossEntropyLoss', 'reduce_loss', - 'weight_reduce_loss', 'weighted_loss', 'LovaszLoss', 'DiceLoss' -] diff --git a/spaces/Rodrigo21/space1/app.py b/spaces/Rodrigo21/space1/app.py deleted file mode 100644 index 916fb0b38eaff6a9148a0f2e3a9dffcdcb76c552..0000000000000000000000000000000000000000 --- a/spaces/Rodrigo21/space1/app.py +++ /dev/null @@ -1,5 +0,0 @@ -import gradio as gr - -examples = [["The Moon's orbit around Earth has"], ["There once was a pineapple"]] - -gr.Interface.load("huggingface/EleutherAI/gpt-j-6B", examples=examples).launch(); \ No newline at end of file diff --git a/spaces/RuijiaTan/MultiPrincipalElementAlloyPropertyPredictor/app.py b/spaces/RuijiaTan/MultiPrincipalElementAlloyPropertyPredictor/app.py deleted file mode 100644 index a118f4aa01216d006b2a2dd73d872e79c3db7587..0000000000000000000000000000000000000000 --- a/spaces/RuijiaTan/MultiPrincipalElementAlloyPropertyPredictor/app.py +++ /dev/null @@ -1,292 +0,0 @@ -from pip._internal import main -main(['install', 'matminer']) - -import os -import gradio as gr -from pandas import Series, DataFrame -import pandas as pd -import sys -import numpy as np - -import os - -from Interface.empirical_parameter_calculator import EmpiricalParams -from pymatgen.core.periodic_table import Element - -# main(['install', 'joblib']) -main(['install', 'scikit-learn==1.2.2']) -import joblib - -''' -Get the path of all of the saved models. 
-''' -gmm_ssl_compressive_strength = "./Model/saved_model/gmm_ssl/compressive_strength.pkl" -gmm_ssl_elongation = "./Model/saved_model/gmm_ssl/elongation.pkl" -gmm_ssl_hardness ="./Model/saved_model/gmm_ssl/hardness.pkl" -gmm_ssl_plasticity = "./Model/saved_model/gmm_ssl/plasticity.pkl" -gmm_ssl_tensile_strength = "./Model/saved_model/gmm_ssl/tensile_strength.pkl" -gmm_ssl_yield_strength = "./Model/saved_model/gmm_ssl/yield_strength.pkl" - -''' -Get the composition of input alloy. -''' - -def normalize_molar_ratios(ratios): - normalized_ratios = list() - ele_sum = sum(ratios) - for ele in ratios: - ele = float(ele / ele_sum) - normalized_ratios.append(ele) - return normalized_ratios - - -''' -Load the saved ML models. -''' - -def predict_input(path): - with open(path, 'rb') as p: - loaded_model = joblib.load(p) - return loaded_model - - -''' -Predict the six properties of the input alloy, -with Linear Regression Models, K-Means Semi-supervised Model and GMM Semi-supervised Model. -''' - -def pred(Al, B, C, Co, Cr, Cu, Fe, Ga, Ge, Hf, Li, Mg, Mn, Mo, N, Nb, Ni, Sc, Si, Sn, Ta, Ti, V, W, Y, Zn, Zr, operation): - # Get the model acceptable composition format. - # print(B) - comp = {"Al": Al, "B": B, "C": C, "Co": Co, "Cr": Cr, "Cu": Cu, "Fe": Fe, "Ga": Ga, "Ge": Ge, "Hf": Hf, "Li": Li, - "Mg": Mg, "Mn": Mn, - "Mo": Mo, "N": N, "Nb": Nb, "Ni": Ni, "Sc": Sc, "Si": Si, "Sn": Sn, "Ta": Ta, "Ti": Ti, "V": V, "W": W, - "Y": Y, "Zn": Zn, "Zr": Zr} - - df_values = normalize_molar_ratios(comp.values()) - df = pd.DataFrame(data=[df_values], - columns=["Al", "B", "C", "Co", "Cr", "Cu", "Fe", "Ga", "Ge", "Hf", "Li", "Mg", "Mn", "Mo", "N", - "Nb", "Ni", - "Sc", "Si", "Sn", "Ta", "Ti", "V", "W", "Y", "Zn", "Zr"]) - # print(df) - - Composition = "" - index = 0 - for k, v in comp.items(): - if v != 0: - Composition = Composition + k + str(round(df_values[index], 2)) - index += 1 - # print(Composition) - # print(comp.values()) - # Using semi_supervisor Label Propagation to predict properties. - Hardness = predict_input(gmm_ssl_hardness).predict(df) - YieldStrength = predict_input(gmm_ssl_yield_strength).predict(df) - TensileStrength = predict_input(gmm_ssl_tensile_strength).predict(df) - Elongation = predict_input(gmm_ssl_elongation).predict(df) - CompressiveStrength = predict_input(gmm_ssl_compressive_strength).predict(df) - Plasticity = predict_input(gmm_ssl_plasticity).predict(df) - - Hardness = round(float(Hardness),2) - YieldStrength = round(float(YieldStrength),2) - TensileStrength = round(float(TensileStrength),2) - Elongation = round(float(Elongation),2) - CompressiveStrength = round(float(CompressiveStrength),2) - Plasticity = round(float(Plasticity),2) - - return Composition, Hardness, YieldStrength, TensileStrength, Elongation, CompressiveStrength, Plasticity - - -''' -Import the function to Calculate the empirical parameters. -''' - -def empirical_parameter(Al, B, C, Co, Cr, Cu, Fe, Ga, Ge, Hf, Li, Mg, Mn, Mo, N, Nb, Ni, Sc, Si, Sn, Ta, Ti, V, W, Y, Zn, Zr): - # Get the model acceptable composition format. 
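-    # The element arguments are raw molar amounts; they are gathered into a dict
-    # and rescaled by normalize_molar_ratios() so the fractions sum to 1 before
-    # any empirical parameter is computed.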
- # print(B) - comp = {"Al": Al, "B": B, "C": C, "Co": Co, "Cr": Cr, "Cu": Cu, "Fe": Fe, "Ga": Ga, "Ge": Ge, "Hf": Hf, "Li": Li, - "Mg": Mg, "Mn": Mn, - "Mo": Mo, "N": N, "Nb": Nb, "Ni": Ni, "Sc": Sc, "Si": Si, "Sn": Sn, "Ta": Ta, "Ti": Ti, "V": V, "W": W, - "Y": Y, "Zn": Zn, "Zr": Zr} - - df_values = normalize_molar_ratios(comp.values()) - print(df_values) - - df_element=[] - df_ratio=[] - index = 0 - for key, value in comp.items(): - if int(value) != 0: - df_element.append(key) - df_ratio.append(df_values[index]) - index += 1 - print(df_element) - print(df_ratio) - - df_elements = [] - for i in df_element: - df_elements.append(Element[i]) - print(df_elements) - - input_ele = EmpiricalParams(element_list=df_elements,mol_ratio=df_ratio) - - # 1. Calculate the entropy mixing. - para1 = round(float(input_ele.entropy_mixing()),2) - - #2. Calculate the average atomic radius. - para2 = round(float(input_ele.mean_atomic_radius()),2) - - #3. Calculate the atomic size difference. - para3 = round(float(input_ele.atomic_size_difference()),2) - - #4. Calculate the enthalpy of mixing. - para4= round(float(input_ele.enthalpy_mixing()),2) - - #5. Calculate the standard deviation of enthalpy. - para5= round(float(input_ele.std_enthalpy_mixing()),2) - - #6. Calculate the average melting point. - para6= round(float(input_ele.average_melting_point()),2) - - #7. Calculate the standard melting point. - para7= round(float(input_ele.std_melting_point()),2) - - #8. Calculate the average electronegativity. - para8= round(float(input_ele.mean_electronegativity()),2) - - #9. Calculate the standard deviation of electronegativity. - para9= round(float(input_ele.std_electronegativity()),2) - - #10. Calculate the valence electron concentration. - para10= round(float(input_ele.average_vec()),2) - - #11. Calculate the standard deviation of valence electron concentration. - para11= round(float(input_ele.std_vec()),2) - - #12. Calculate the omega. - para12= round(float(input_ele.calc_omega()),2) - - #13. Calculate the density. - para13= round(float(input_ele.calc_density()),2) - - #14. Calculate the price. - para14= round(float(input_ele.calc_price()),2) - - return para1, para2, para3, para4,para5, para6, para7, para8, para9, para10, para11, para12, para13, para14 - - -''' -The function of Clear button. -''' - -def clear_input(): - return 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, "",\ - 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 - - -''' -Interface Details -''' - -with gr.Blocks() as demo: - # The title and description of interface. - gr.Markdown("# Multi Principal Element Alloy Property Predictor") - gr.Markdown("Input alloy composition to obtain output of alloy properties (sum of composition should be equal to 100)") - - # The section to input the element ratio. 
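-    # One gr.Number field per supported element; the values are treated as molar
-    # amounts (expected to sum to 100) and the Alloy Composition textbox is
-    # filled in by the Predict callback rather than typed by the user.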
- with gr.Row(): - Al = gr.Number(label="Al") - B = gr.Number(label="B") - C = gr.Number(label="C") - Co = gr.Number(label="Co") - Cr = gr.Number(label="Cr") - Cu = gr.Number(label="Cu") - Fe = gr.Number(label="Fe") - Ga = gr.Number(label="Ga") - Ge = gr.Number(label="Ge") - Hf = gr.Number(label="Hf") - Li = gr.Number(label="Li") - Mg = gr.Number(label="Mg") - Mn = gr.Number(label="Mn") - Mo = gr.Number(label="Mo") - N = gr.Number(label="N") - Nb = gr.Number(label="Nb") - Ni = gr.Number(label="Ni") - Sc = gr.Number(label="Sc") - Si = gr.Number(label="Si") - Sn = gr.Number(label="Sn") - Ta = gr.Number(label="Ta") - Ti = gr.Number(label="Ti") - V = gr.Number(label="V") - W = gr.Number(label="W") - Y = gr.Number(label="Y") - Zn = gr.Number(label="Zn") - Zr = gr.Number(label="Zr") - Composition = gr.Text(label="Alloy Composition") - - #Action buttons.("Clear" and "Prediction") - with gr.Row(): - clear = gr.Button("Clear") - submit = gr.Button("Predict") - - #The prediction result(Six mechanical properties) of ML models. - with gr.Row(): - Hardness = gr.Number(label="Hardness (VHN)") - YieldStrength = gr.Number(label="Yield Strength (MPa)") - TensileStrength = gr.Number(label="Tensile Strength (MPa)") - Elongation = gr.Number(label="Elongation (%)") - CompressiveStrength = gr.Number(label="Compressive Strength (MPa)") - Plasticity = gr.Number(label="Plasticity (from compression)") - - #Calculer resutl of empirical parameters. - with gr.Row(): - entropy_mixing = gr.Number(label="Entropy of Mixing (J/K*mol)") - average_atomic_radius = gr.Number(label="Average Atomic Radius (Angstroms)") - atomic_size_dif = gr.Number(label="Atomic Size Difference") - enthalpy_mixing = gr.Number(label="Enthalpy of Mixing (kJ/mol)") - std_deviation_enthalpy = gr.Number(label="Standard Deviation of Enthalpy") - average_melting_point = gr.Number(label="Average Melting Point (Tm, in Celcius)") - std_deviation_melting_point = gr.Number(label="Standard Deviation of Melting Point") - average_electronegativity = gr.Number(label="Average Electronegativity (X)") - std_deviation_electronegativity= gr.Number(label="Standard Deviation of Electronegativity") - valence_electron_concentration = gr.Number(label="Valence Electron Concentration (VEC)") - std_deviation_valence_electron_concentration = gr.Number(label="Standard Deviation of Valence Electron Concentration (VEC)") - omega = gr.Number(label="The Unitless Parameter Omega") - density = gr.Number(label="Density (g/cm^3)") - price = gr.Number(label="Price (USD/kg)") - - # Define the action of "Clear" button. - clear.click(fn=clear_input, inputs=[], - outputs=[Al, B, C, Co, Cr, Cu, Fe, Ga, Ge, Hf, Li, Mg, Mn, Mo, N, Nb, Ni, Sc, Si, Sn, Ta, Ti, V, W, Y, - Zn, Zr, Composition, - Hardness, YieldStrength, TensileStrength, Elongation, CompressiveStrength, Plasticity, entropy_mixing, average_atomic_radius, atomic_size_dif, enthalpy_mixing, - std_deviation_enthalpy, average_melting_point, std_deviation_melting_point, - average_electronegativity, std_deviation_electronegativity, valence_electron_concentration, - std_deviation_valence_electron_concentration, omega, density, price]) - - # Define the action of "Predict" button. - # 1.Predict the result of "Composition", "Hardness", "YieldStrength", "TensileStrength", "Elongation", "CompressiveStrength", "Plasticity". 
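-    # Two handlers are registered on the same button below, so a single press of
-    # Predict runs both the ML property prediction and the empirical parameter
-    # calculation on the same element inputs.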
- submit.click(fn=pred, - inputs=[Al, B, C, Co, Cr, Cu, Fe, Ga, Ge, Hf, Li, Mg, Mn, Mo, N, Nb, Ni, Sc, Si, Sn, Ta, Ti, V, W, Y, - Zn, Zr], - outputs=[Composition, Hardness, YieldStrength, TensileStrength, Elongation, CompressiveStrength, - Plasticity]) - - # 2.Activate the empirical parameter calculator. - submit.click(fn=empirical_parameter, - inputs=[Al, B, C, Co, Cr, Cu, Fe, Ga, Ge, Hf, Li, Mg, Mn, Mo, N, Nb, Ni, Sc, Si, Sn, Ta, Ti, V, W, Y, - Zn, Zr], - outputs=[entropy_mixing, average_atomic_radius, atomic_size_dif, enthalpy_mixing, - std_deviation_enthalpy, average_melting_point, std_deviation_melting_point, - average_electronegativity, std_deviation_electronegativity, valence_electron_concentration, - std_deviation_valence_electron_concentration, omega, density, price]) - - -''' -Launch the interface. -''' -if __name__ == "__main__": - # Run the Alloy Property Predictor Interface, without public URL. - demo.launch() - - # Run the Alloy Property Predictor Interface, with a public URL. - # demo.launch(share="True") diff --git a/spaces/SHULGIN/MiDaS/README.md b/spaces/SHULGIN/MiDaS/README.md deleted file mode 100644 index 8cbdc0422eadfe3d403a4b25e554e06a73af3acf..0000000000000000000000000000000000000000 --- a/spaces/SHULGIN/MiDaS/README.md +++ /dev/null @@ -1,34 +0,0 @@ ---- -title: MiDaS -emoji: 😻 -colorFrom: pink -colorTo: yellow -sdk: gradio -app_file: app.py -pinned: false -duplicated_from: pytorch/MiDaS ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
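Several of the Spaces deleted in this changeset (the alloy-property predictor above, the Smart_Resume and DCT-Net apps below) share one Gradio wiring pattern: a plain Python function receives the current values of the declared input components, and its return values are assigned, in order, to the declared outputs, either via `Button.click` inside a `gr.Blocks` layout or via `gr.Interface`. The sketch below is a minimal illustration of that pattern rather than code from any of the deleted files; the names `predict_sum`, `x`, `y`, `total`, and `demo` are hypothetical.

import gradio as gr


def predict_sum(a, b):
    # Stand-in for the real prediction callbacks used by the deleted apps.
    return a + b


with gr.Blocks() as demo:
    x = gr.Number(label="x")
    y = gr.Number(label="y")
    total = gr.Number(label="x + y")
    run = gr.Button("Predict")
    # Return values are mapped onto `outputs` positionally, which is why each
    # app's callback returns exactly as many values as it lists in outputs.
    run.click(fn=predict_sum, inputs=[x, y], outputs=[total])

if __name__ == "__main__":
    demo.launch()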
diff --git a/spaces/SIB/Smart_Resume/app.py b/spaces/SIB/Smart_Resume/app.py deleted file mode 100644 index e05a457a6ea2fbde413de6e1d0fd107015626afd..0000000000000000000000000000000000000000 --- a/spaces/SIB/Smart_Resume/app.py +++ /dev/null @@ -1,142 +0,0 @@ - -from gtts import gTTS -import gradio as gr -from PyPDF2 import PdfFileReader -from googletrans import Translator -import googletrans -import numpy as np -import requests -from PIL import Image -import pytesseract -import os -# from docx import Document - -cnt = 0 -langues = googletrans.LANGUAGES - - - -API_URL = "https://api-inference.huggingface.co/models/facebook/bart-large-cnn" -headers = {"Authorization": "Bearer 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"} - -def query(payload): - response = requests.post(API_URL, headers=headers, json=payload) - return response.json() - -def get_key(val): - for key, value in langues.items(): - if val == value: - return key - -def read_article(file_name): - - name = file_name.name.replace("\\",'/') - file = None - article = "" - if name.endswith(".txt"): - file = open(name, "r") - filedata = file.readlines() - for e in filedata : - article = article + e - if name.endswith(".pdf"): - # article = textract.process('document_path.PDF', method='PDFminer') - document = PdfFileReader(name)#open(name, 'rb')) - for page in range(document.numPages): - pageObj = document.getPage(page) - article += pageObj.extractText().replace('\n','') - if name.endswith(".docx"): - pass - # doc = Document(name) - # article = None - # for para in doc.paragraphs: - # article = article + para.text - if name.endswith(".jpg") or name.endswith(".png") or name.endswith(".jpeg"): - img = Image.open(name) - # path where the tesseract module is installed - pytesseract.pytesseract.tesseract_cmd ='C:/Program Files (x86)/Tesseract-OCR/tesseract.exe' - # converts the image to result and saves it into result variable - result = pytesseract.image_to_string(img) - - return article - - -def translate_data(text, final_language): - translator = Translator() - translation = translator.translate(text, dest=get_key(final_language)) - return translation.text - - -def generate_summary(file_name, mode,final_language): - # Step 1 - Read text anc split it - global cnt - sentences = read_article(file_name) - translator = Translator() - # cnt +=1 - if mode == "traduction": - text_translate = translate_data(sentences,final_language) - myobj = gTTS(text=text_translate, lang=get_key(final_language), slow=False) - #nous devrions vérifier si le fichier existe ou non avant de le supprimer. 
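-        # i.e. check whether the audio file already exists before removing it,
-        # so that gTTS can save a fresh file under the same name.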
- if os.path.exists(f"audio_traduce{cnt}.wav"): - os.remove(f"audio_traduce{cnt}.wav") - else: - print("Impossible de supprimer le fichier car il n'existe pas") - myobj.save(f"audio_traduce{cnt}.wav") - return f"audio_traduce{cnt}.wav", text_translate - elif mode=="lecture": - text = translator.translate(sentences) - text_translate = sentences - myobj = gTTS(text=text_translate, lang=get_key(final_language), slow=False) - if os.path.exists(f"audio_lecture{cnt}.wav"): - os.remove(f"audio_lecture{cnt}.wav") - else: - print("Impossible de supprimer le fichier car il n'existe pas") - myobj.save(f"audio_lecture{cnt}.wav") - return f"audio_lecture{cnt}.wav", text_translate - elif mode == "resume_et_traduire": - text_translate = query({"inputs": sentences,}) - text_translate = text_translate[0]['summary_text'] - text = translate_data(text_translate,final_language) - text_translate = text - myobj = gTTS(text=text, lang=get_key(final_language), slow=False) - if os.path.exists(f"audio_resume_traduire{cnt}.wav"): - os.remove(f"audio_resume_traduire{cnt}.wav") - else: - print("Impossible de supprimer le fichier car il n'existe pas") - myobj.save(f"audio_resume_traduire{cnt}.wav") - return f"audio_resume_traduire{cnt}.wav", text_translate - else: - text_translate = query({"inputs": sentences,}) - text_translate = text_translate[0]['summary_text'] - text = translator.translate(text_translate) - myobj = gTTS(text=text_translate, lang=text.src, slow=False) - if os.path.exists(f"audio_resume{cnt}.wav"): - os.remove(f"audio_resume{cnt}.wav") - else: - print("Impossible de supprimer le fichier car il n'existe pas") - myobj.save(f"audio_resume{cnt}.wav") - return f"audio_resume{cnt}.wav", text_translate - - - -iface = gr.Interface( - fn=generate_summary, - inputs=[ - gr.inputs.File( file_count="single",type="file", label="Fichier à Traduire"), - gr.inputs.Radio(['resume', 'traduction','resume_et_traduire','lecture'], label="Choix du mode de fonctionnement"), - gr.inputs.Radio(['afrikaans', 'albanian', 'amharic', 'arabic', 'armenian', 'azerbaijani', - 'basque', 'belarusian', 'bengali', 'bosnian', 'bulgarian', 'catalan', 'cebuano', 'chichewa', - 'chinese (simplified)', 'chinese (traditional)', 'corsican', 'croatian', 'czech', 'danish', - 'dutch', 'english', 'esperanto', 'estonian', 'filipino', 'finnish', 'french', 'frisian', - 'galician', 'georgian', 'german', 'greek', 'gujarati', 'haitian creole', 'hausa', 'hawaiian', - 'hebrew', 'hebrew', 'hindi', 'hmong', 'hungarian', 'icelandic', 'igbo', 'indonesian', 'irish', - 'italian', 'japanese', 'javanese', 'kannada', 'kazakh', 'khmer', 'korean', 'kurdish (kurmanji)', - 'kyrgyz', 'lao', 'latin', 'latvian', 'lithuanian', 'luxembourgish', 'macedonian', 'malagasy', - 'malay', 'malayalam', 'maltese', 'maori', 'marathi', 'mongolian', 'myanmar (burmese)', 'nepali', - 'norwegian', 'odia', 'pashto', 'persian', 'polish', 'portuguese', 'punjabi', 'romanian', 'russian', - 'samoan', 'scots gaelic', 'serbian', 'sesotho', 'shona', 'sindhi', 'sinhala', 'slovak', 'slovenian', - 'somali', 'spanish', 'sundanese', 'swahili', 'swedish', 'tajik', 'tamil', 'telugu', 'thai', 'turkish', - 'ukrainian', 'urdu', 'uyghur', 'uzbek', 'vietnamese', 'welsh', 'xhosa', 'yiddish', 'yoruba', 'zulu'],label="Langage à traduire")], - outputs= [gr.outputs.Audio(type="file", label="Audio du livre") - ,gr.outputs.Textbox(label="resultat")], - theme="dark-seafoam") -iface.launch() \ No newline at end of file diff --git a/spaces/SIGGRAPH2022/DCT-Net/app.py b/spaces/SIGGRAPH2022/DCT-Net/app.py deleted file mode 
100644 index 772d93321bd92a7a257bc189b60d5d6e3fe38ffe..0000000000000000000000000000000000000000 --- a/spaces/SIGGRAPH2022/DCT-Net/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -os.system('pip install "modelscope[cv]" -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html') -import gradio as gr -import numpy as np -from modelscope.outputs import OutputKeys -from modelscope.pipelines import pipeline -from modelscope.utils.constant import Tasks -from modelscope.hub.snapshot_download import snapshot_download - -model_dir = snapshot_download('damo/cv_unet_person-image-cartoon_compound-models', cache_dir='.') - -img_cartoon = pipeline( - Tasks.image_portrait_stylization, - model='damo/cv_unet_person-image-cartoon_compound-models') - - -def infer(image): - result = img_cartoon(image.name) - out = result[OutputKeys.OUTPUT_IMG] - out = np.clip(out, 0, 255).astype(np.uint8) - return out[:, :, ::-1] - - -with gr.Blocks() as demo: - title= gr.Markdown(""" - # Gradio Demo for [DCT-Net: Domain-Calibrated Translation for Portrait Stylization](https://github.com/menyifang/DCT-Net), SIGGRAPH 2022 (TOG); Multi-style cartoonization - """ - ) - with gr.Row(): - image = gr.Image(label='Input', type='file') - result = gr.Image(label='Output') - run_button = gr.Button('Run') - run_button.click(fn=infer, inputs=image, outputs=result) -demo.launch() \ No newline at end of file diff --git a/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/customized_langchain/__init__.py b/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/customized_langchain/__init__.py deleted file mode 100644 index a2f5835bbc2e5b03a8b33464008e8183c04307da..0000000000000000000000000000000000000000 --- a/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/customized_langchain/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -from streamlit_langchain_chat.customized_langchain.docstore.in_memory import InMemoryDocstore -from streamlit_langchain_chat.customized_langchain.vectorstores import FAISS -from streamlit_langchain_chat.customized_langchain.vectorstores import Pinecone - - -__all__ = [ - "FAISS", - "InMemoryDocstore", - "Pinecone", -] diff --git a/spaces/Salesforce/EDICT/my_half_diffusers/utils/__init__.py b/spaces/Salesforce/EDICT/my_half_diffusers/utils/__init__.py deleted file mode 100644 index c00a28e1058fbd47451bfe48e23865876c08ed69..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/EDICT/my_half_diffusers/utils/__init__.py +++ /dev/null @@ -1,53 +0,0 @@ -# Copyright 2022 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- - -import os - -from .import_utils import ( - ENV_VARS_TRUE_AND_AUTO_VALUES, - ENV_VARS_TRUE_VALUES, - USE_JAX, - USE_TF, - USE_TORCH, - DummyObject, - is_flax_available, - is_inflect_available, - is_modelcards_available, - is_onnx_available, - is_scipy_available, - is_tf_available, - is_torch_available, - is_transformers_available, - is_unidecode_available, - requires_backends, -) -from .logging import get_logger -from .outputs import BaseOutput - - -logger = get_logger(__name__) - - -hf_cache_home = os.path.expanduser( - os.getenv("HF_HOME", os.path.join(os.getenv("XDG_CACHE_HOME", "~/.cache"), "huggingface")) -) -default_cache_path = os.path.join(hf_cache_home, "diffusers") - - -CONFIG_NAME = "config.json" -HUGGINGFACE_CO_RESOLVE_ENDPOINT = "https://huggingface.co" -DIFFUSERS_CACHE = default_cache_path -DIFFUSERS_DYNAMIC_MODULE_NAME = "diffusers_modules" -HF_MODULES_CACHE = os.getenv("HF_MODULES_CACHE", os.path.join(hf_cache_home, "modules")) diff --git a/spaces/SarthakSidhant/Go-Cattle/diseases/malignant theileriosis.md b/spaces/SarthakSidhant/Go-Cattle/diseases/malignant theileriosis.md deleted file mode 100644 index ff3bad45dfc666b5021f6f4e6f5a1f5c90867830..0000000000000000000000000000000000000000 --- a/spaces/SarthakSidhant/Go-Cattle/diseases/malignant theileriosis.md +++ /dev/null @@ -1,42 +0,0 @@ -## Malignant theileriosis - -**Information** : Malignant theileriosis is a tick-borne disease of cattle caused by the protozoan parasite Theileria annulata. The parasite is spread through the bite of infected ticks. Malignant theileriosis is a serious disease that can be fatal if not treated. - -**Symptoms** - -The symptoms of malignant theileriosis typically appear within 1-2 weeks of infection and include: - -* Fever -* Depression -* Weight loss -* Pale mucous membranes -* Jaundice -* Increased heart rate and respiratory rate -* Hemoglobinuria (blood in the urine) -* Death - -**Remedies** - -There is no specific treatment for malignant theileriosis. Treatment is usually supportive and may include: - -* Administering fluids and electrolytes -* Treating secondary bacterial infections -* Administering anti-parasitic drugs - -**Causes** - -Malignant theileriosis is caused by the protozoan parasite Theileria annulata. The parasite is spread through the bite of infected ticks. The most common tick vectors for malignant theileriosis in cattle are Rhipicephalus (Boophilus) spp. and Ixodes spp. - -**Prevention** - -There are a number of preventive measures that can be taken to reduce the risk of malignant theileriosis in cattle, such as: - -* Using tick control measures, such as acaricides and tick dips -* Vaccinating cattle against malignant theileriosis -* Testing cattle for malignant theileriosis -* Isolating infected animals from healthy animals -* Treating contaminated feed and water - -**Differential diagnosis** - -Malignant theileriosis can be difficult to distinguish from other diseases that cause fever, weight loss, and anemia, such as anaplasmosis, babesiosis, and leptospirosis. A veterinarian can diagnose malignant theileriosis by testing a sample of the blood for the presence of Theileria annulata. 
diff --git a/spaces/ShreyashNadage/InvestmentCopilot/TopMovers.py b/spaces/ShreyashNadage/InvestmentCopilot/TopMovers.py deleted file mode 100644 index 3505b2621056444180cdb92355bf5331a252bd82..0000000000000000000000000000000000000000 --- a/spaces/ShreyashNadage/InvestmentCopilot/TopMovers.py +++ /dev/null @@ -1,21 +0,0 @@ -from nsetools import Nse -import pandas as pd -from collections import OrderedDict - -nse = Nse() - -def GetTopLosers(): - try: - top5losers = nse.get_top_losers()[:5] - data = OrderedDict({'Symbol': [i['symbol'] for i in top5losers], '%Change': [str(i['netPrice'])+'%' for i in top5losers]}) - return pd.DataFrame(data).set_index('Symbol') - except: - return None - -def GetTopGainers(): - try: - top5gainers = nse.get_top_gainers()[:5] - data = OrderedDict({'Symbol': [i['symbol'] for i in top5gainers], '%Change': [str(i['netPrice'])+'%' for i in top5gainers]}) - return pd.DataFrame(data).set_index('Symbol') - except: - return None diff --git a/spaces/Silentlin/DiffSinger/data_gen/tts/binarizer_zh.py b/spaces/Silentlin/DiffSinger/data_gen/tts/binarizer_zh.py deleted file mode 100644 index 7bd424a1a669ecf1a74cb69d690a690d0d39fe55..0000000000000000000000000000000000000000 --- a/spaces/Silentlin/DiffSinger/data_gen/tts/binarizer_zh.py +++ /dev/null @@ -1,59 +0,0 @@ -import os - -os.environ["OMP_NUM_THREADS"] = "1" - -from data_gen.tts.txt_processors.zh_g2pM import ALL_SHENMU -from data_gen.tts.base_binarizer import BaseBinarizer, BinarizationError -from data_gen.tts.data_gen_utils import get_mel2ph -from utils.hparams import set_hparams, hparams -import numpy as np - - -class ZhBinarizer(BaseBinarizer): - @staticmethod - def get_align(tg_fn, ph, mel, phone_encoded, res): - if tg_fn is not None and os.path.exists(tg_fn): - _, dur = get_mel2ph(tg_fn, ph, mel, hparams) - else: - raise BinarizationError(f"Align not found") - ph_list = ph.split(" ") - assert len(dur) == len(ph_list) - mel2ph = [] - # 分隔符的时长分配给韵母 - dur_cumsum = np.pad(np.cumsum(dur), [1, 0], mode='constant', constant_values=0) - for i in range(len(dur)): - p = ph_list[i] - if p[0] != '<' and not p[0].isalpha(): - uv_ = res['f0'][dur_cumsum[i]:dur_cumsum[i + 1]] == 0 - j = 0 - while j < len(uv_) and not uv_[j]: - j += 1 - dur[i - 1] += j - dur[i] -= j - if dur[i] < 100: - dur[i - 1] += dur[i] - dur[i] = 0 - # 声母和韵母等长 - for i in range(len(dur)): - p = ph_list[i] - if p in ALL_SHENMU: - p_next = ph_list[i + 1] - if not (dur[i] > 0 and p_next[0].isalpha() and p_next not in ALL_SHENMU): - print(f"assert dur[i] > 0 and p_next[0].isalpha() and p_next not in ALL_SHENMU, " - f"dur[i]: {dur[i]}, p: {p}, p_next: {p_next}.") - continue - total = dur[i + 1] + dur[i] - dur[i] = total // 2 - dur[i + 1] = total - dur[i] - for i in range(len(dur)): - mel2ph += [i + 1] * dur[i] - mel2ph = np.array(mel2ph) - if mel2ph.max() - 1 >= len(phone_encoded): - raise BinarizationError(f"| Align does not match: {(mel2ph.max() - 1, len(phone_encoded))}") - res['mel2ph'] = mel2ph - res['dur'] = dur - - -if __name__ == "__main__": - set_hparams() - ZhBinarizer().process() diff --git a/spaces/SriniJalasuthram/SJ-02-H5-AR-VR-IOT/index.html b/spaces/SriniJalasuthram/SJ-02-H5-AR-VR-IOT/index.html deleted file mode 100644 index f64aad6580cd12cbdbb0bcc0321ed7a6486d2a19..0000000000000000000000000000000000000000 --- a/spaces/SriniJalasuthram/SJ-02-H5-AR-VR-IOT/index.html +++ /dev/null @@ -1,66 +0,0 @@ - - - - Dynamic Lights - A-Frame - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git 
a/spaces/Sumit7864/Image-Enhancer/inference_realesrgan copy.py b/spaces/Sumit7864/Image-Enhancer/inference_realesrgan copy.py deleted file mode 100644 index 60bbb4904555b5b108ddabfa1eb7e715ec21b7e7..0000000000000000000000000000000000000000 --- a/spaces/Sumit7864/Image-Enhancer/inference_realesrgan copy.py +++ /dev/null @@ -1,166 +0,0 @@ -import argparse -import cv2 -import glob -import os -from basicsr.archs.rrdbnet_arch import RRDBNet -from basicsr.utils.download_util import load_file_from_url - -from realesrgan import RealESRGANer -from realesrgan.archs.srvgg_arch import SRVGGNetCompact - - -def main(): - """Inference demo for Real-ESRGAN. - """ - parser = argparse.ArgumentParser() - parser.add_argument('-i', '--input', type=str, default='inputs', help='Input image or folder') - parser.add_argument( - '-n', - '--model_name', - type=str, - default='RealESRGAN_x4plus', - help=('Model names: RealESRGAN_x4plus | RealESRNet_x4plus | RealESRGAN_x4plus_anime_6B | RealESRGAN_x2plus | ' - 'realesr-animevideov3 | realesr-general-x4v3')) - parser.add_argument('-o', '--output', type=str, default='results', help='Output folder') - parser.add_argument( - '-dn', - '--denoise_strength', - type=float, - default=0.5, - help=('Denoise strength. 0 for weak denoise (keep noise), 1 for strong denoise ability. ' - 'Only used for the realesr-general-x4v3 model')) - parser.add_argument('-s', '--outscale', type=float, default=4, help='The final upsampling scale of the image') - parser.add_argument( - '--model_path', type=str, default=None, help='[Option] Model path. Usually, you do not need to specify it') - parser.add_argument('--suffix', type=str, default='out', help='Suffix of the restored image') - parser.add_argument('-t', '--tile', type=int, default=0, help='Tile size, 0 for no tile during testing') - parser.add_argument('--tile_pad', type=int, default=10, help='Tile padding') - parser.add_argument('--pre_pad', type=int, default=0, help='Pre padding size at each border') - parser.add_argument('--face_enhance', action='store_true', help='Use GFPGAN to enhance face') - parser.add_argument( - '--fp32', action='store_true',default='--fp32', help='Use fp32 precision during inference. Default: fp16 (half precision).') - parser.add_argument( - '--alpha_upsampler', - type=str, - default='realesrgan', - help='The upsampler for the alpha channels. Options: realesrgan | bicubic') - parser.add_argument( - '--ext', - type=str, - default='auto', - help='Image extension. 
Options: auto | jpg | png, auto means using the same extension as inputs') - parser.add_argument( - '-g', '--gpu-id', type=int, default=None, help='gpu device to use (default=None) can be 0,1,2 for multi-gpu') - - args = parser.parse_args() - - # determine models according to model names - args.model_name = args.model_name.split('.')[0] - if args.model_name == 'RealESRGAN_x4plus': # x4 RRDBNet model - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4) - netscale = 4 - file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth'] - elif args.model_name == 'RealESRNet_x4plus': # x4 RRDBNet model - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4) - netscale = 4 - file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/RealESRNet_x4plus.pth'] - elif args.model_name == 'RealESRGAN_x4plus_anime_6B': # x4 RRDBNet model with 6 blocks - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4) - netscale = 4 - file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth'] - elif args.model_name == 'RealESRGAN_x2plus': # x2 RRDBNet model - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=2) - netscale = 2 - file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth'] - elif args.model_name == 'realesr-animevideov3': # x4 VGG-style model (XS size) - model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4, act_type='prelu') - netscale = 4 - file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth'] - elif args.model_name == 'realesr-general-x4v3': # x4 VGG-style model (S size) - model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=32, upscale=4, act_type='prelu') - netscale = 4 - file_url = [ - 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-wdn-x4v3.pth', - 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-x4v3.pth' - ] - - # determine model paths - if args.model_path is not None: - model_path = args.model_path - else: - model_path = os.path.join('weights', args.model_name + '.pth') - if not os.path.isfile(model_path): - ROOT_DIR = os.path.dirname(os.path.abspath(__file__)) - for url in file_url: - # model_path will be updated - model_path = load_file_from_url( - url=url, model_dir=os.path.join(ROOT_DIR, 'weights'), progress=True, file_name=None) - - # use dni to control the denoise strength - dni_weight = None - if args.model_name == 'realesr-general-x4v3' and args.denoise_strength != 1: - wdn_model_path = model_path.replace('realesr-general-x4v3', 'realesr-general-wdn-x4v3') - model_path = [model_path, wdn_model_path] - dni_weight = [args.denoise_strength, 1 - args.denoise_strength] - - # restorer - upsampler = RealESRGANer( - scale=netscale, - model_path=model_path, - dni_weight=dni_weight, - model=model, - tile=args.tile, - tile_pad=args.tile_pad, - pre_pad=args.pre_pad, - half=not args.fp32, - gpu_id=args.gpu_id) - - if args.face_enhance: # Use GFPGAN for face enhancement - from gfpgan import GFPGANer - face_enhancer = GFPGANer( - model_path='https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth', - upscale=args.outscale, - arch='clean', - channel_multiplier=2, - bg_upsampler=upsampler) - 
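-    # The results directory is created up front; note that in this copy the final
-    # cv2.imwrite call is commented out and the enhanced array is returned instead
-    # of being written to disk.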
os.makedirs(args.output, exist_ok=True) - - if os.path.isfile(args.input): - paths = [args.input] - else: - paths = sorted(glob.glob(os.path.join(args.input, '*'))) - - for idx, path in enumerate(paths): - imgname, extension = os.path.splitext(os.path.basename(path)) - print('Testing', idx, imgname) - - img = cv2.imread(path, cv2.IMREAD_UNCHANGED) - if len(img.shape) == 3 and img.shape[2] == 4: - img_mode = 'RGBA' - else: - img_mode = None - - try: - if args.face_enhance: - _, _, output = face_enhancer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True) - else: - output, _ = upsampler.enhance(img, outscale=args.outscale) - except RuntimeError as error: - print('Error', error) - print('If you encounter CUDA out of memory, try to set --tile with a smaller number.') - else: - if args.ext == 'auto': - extension = extension[1:] - else: - extension = args.ext - if img_mode == 'RGBA': # RGBA images should be saved in png format - extension = 'png' - if args.suffix == '': - save_path = os.path.join(args.output, f'{imgname}.{extension}') - else: - save_path = os.path.join(args.output, f'{imgname}_{args.suffix}.{extension}') - # cv2.imwrite(save_path, output) - return output - -if __name__ == '__main__': - main() diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/_debug_adapter/pydevd_base_schema.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/_debug_adapter/pydevd_base_schema.py deleted file mode 100644 index 0cbb3f5b36386df738e34dd4ebb65e8f46960afd..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/_debug_adapter/pydevd_base_schema.py +++ /dev/null @@ -1,147 +0,0 @@ -from _pydevd_bundle._debug_adapter.pydevd_schema_log import debug_exception -import json -import itertools -from functools import partial - - -class BaseSchema(object): - - @staticmethod - def initialize_ids_translation(): - BaseSchema._dap_id_to_obj_id = {0:0, None:None} - BaseSchema._obj_id_to_dap_id = {0:0, None:None} - BaseSchema._next_dap_id = partial(next, itertools.count(1)) - - def to_json(self): - return json.dumps(self.to_dict()) - - @staticmethod - def _translate_id_to_dap(obj_id): - if obj_id == '*': - return '*' - # Note: we don't invalidate ids, so, if some object starts using the same id - # of another object, the same id will be used. 
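-        # An unseen object id is assigned the next monotonically increasing DAP id,
-        # and the pair is recorded in both maps so that ids arriving from the
-        # client can be translated back by _translate_id_from_dap().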
- dap_id = BaseSchema._obj_id_to_dap_id.get(obj_id) - if dap_id is None: - dap_id = BaseSchema._obj_id_to_dap_id[obj_id] = BaseSchema._next_dap_id() - BaseSchema._dap_id_to_obj_id[dap_id] = obj_id - return dap_id - - @staticmethod - def _translate_id_from_dap(dap_id): - if dap_id == '*': - return '*' - try: - return BaseSchema._dap_id_to_obj_id[dap_id] - except: - raise KeyError('Wrong ID sent from the client: %s' % (dap_id,)) - - @staticmethod - def update_dict_ids_to_dap(dct): - return dct - - @staticmethod - def update_dict_ids_from_dap(dct): - return dct - - -BaseSchema.initialize_ids_translation() - -_requests_to_types = {} -_responses_to_types = {} -_event_to_types = {} -_all_messages = {} - - -def register(cls): - _all_messages[cls.__name__] = cls - return cls - - -def register_request(command): - - def do_register(cls): - _requests_to_types[command] = cls - return cls - - return do_register - - -def register_response(command): - - def do_register(cls): - _responses_to_types[command] = cls - return cls - - return do_register - - -def register_event(event): - - def do_register(cls): - _event_to_types[event] = cls - return cls - - return do_register - - -def from_dict(dct, update_ids_from_dap=False): - msg_type = dct.get('type') - if msg_type is None: - raise ValueError('Unable to make sense of message: %s' % (dct,)) - - if msg_type == 'request': - to_type = _requests_to_types - use = dct['command'] - - elif msg_type == 'response': - to_type = _responses_to_types - use = dct['command'] - - else: - to_type = _event_to_types - use = dct['event'] - - cls = to_type.get(use) - if cls is None: - raise ValueError('Unable to create message from dict: %s. %s not in %s' % (dct, use, sorted(to_type.keys()))) - try: - return cls(update_ids_from_dap=update_ids_from_dap, **dct) - except: - msg = 'Error creating %s from %s' % (cls, dct) - debug_exception(msg) - raise - - -def from_json(json_msg, update_ids_from_dap=False, on_dict_loaded=lambda dct:None): - if isinstance(json_msg, bytes): - json_msg = json_msg.decode('utf-8') - - as_dict = json.loads(json_msg) - on_dict_loaded(as_dict) - try: - return from_dict(as_dict, update_ids_from_dap=update_ids_from_dap) - except: - if as_dict.get('type') == 'response' and not as_dict.get('success'): - # Error messages may not have required body (return as a generic Response). 
- Response = _all_messages['Response'] - return Response(**as_dict) - else: - raise - - -def get_response_class(request): - if request.__class__ == dict: - return _responses_to_types[request['command']] - return _responses_to_types[request.command] - - -def build_response(request, kwargs=None): - if kwargs is None: - kwargs = {'success':True} - else: - if 'success' not in kwargs: - kwargs['success'] = True - response_class = _responses_to_types[request.command] - kwargs.setdefault('seq', -1) # To be overwritten before sending - return response_class(command=request.command, request_seq=request.seq, **kwargs) diff --git a/spaces/Suniilkumaar/SwapMukham/upscaler/RealESRGAN/rrdbnet_arch.py b/spaces/Suniilkumaar/SwapMukham/upscaler/RealESRGAN/rrdbnet_arch.py deleted file mode 100644 index 683fb7ffda5a4641276e555d6ca5ba98be2fd65e..0000000000000000000000000000000000000000 --- a/spaces/Suniilkumaar/SwapMukham/upscaler/RealESRGAN/rrdbnet_arch.py +++ /dev/null @@ -1,121 +0,0 @@ -import torch -from torch import nn as nn -from torch.nn import functional as F - -from .arch_utils import default_init_weights, make_layer, pixel_unshuffle - - -class ResidualDenseBlock(nn.Module): - """Residual Dense Block. - - Used in RRDB block in ESRGAN. - - Args: - num_feat (int): Channel number of intermediate features. - num_grow_ch (int): Channels for each growth. - """ - - def __init__(self, num_feat=64, num_grow_ch=32): - super(ResidualDenseBlock, self).__init__() - self.conv1 = nn.Conv2d(num_feat, num_grow_ch, 3, 1, 1) - self.conv2 = nn.Conv2d(num_feat + num_grow_ch, num_grow_ch, 3, 1, 1) - self.conv3 = nn.Conv2d(num_feat + 2 * num_grow_ch, num_grow_ch, 3, 1, 1) - self.conv4 = nn.Conv2d(num_feat + 3 * num_grow_ch, num_grow_ch, 3, 1, 1) - self.conv5 = nn.Conv2d(num_feat + 4 * num_grow_ch, num_feat, 3, 1, 1) - - self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True) - - # initialization - default_init_weights([self.conv1, self.conv2, self.conv3, self.conv4, self.conv5], 0.1) - - def forward(self, x): - x1 = self.lrelu(self.conv1(x)) - x2 = self.lrelu(self.conv2(torch.cat((x, x1), 1))) - x3 = self.lrelu(self.conv3(torch.cat((x, x1, x2), 1))) - x4 = self.lrelu(self.conv4(torch.cat((x, x1, x2, x3), 1))) - x5 = self.conv5(torch.cat((x, x1, x2, x3, x4), 1)) - # Emperically, we use 0.2 to scale the residual for better performance - return x5 * 0.2 + x - - -class RRDB(nn.Module): - """Residual in Residual Dense Block. - - Used in RRDB-Net in ESRGAN. - - Args: - num_feat (int): Channel number of intermediate features. - num_grow_ch (int): Channels for each growth. - """ - - def __init__(self, num_feat, num_grow_ch=32): - super(RRDB, self).__init__() - self.rdb1 = ResidualDenseBlock(num_feat, num_grow_ch) - self.rdb2 = ResidualDenseBlock(num_feat, num_grow_ch) - self.rdb3 = ResidualDenseBlock(num_feat, num_grow_ch) - - def forward(self, x): - out = self.rdb1(x) - out = self.rdb2(out) - out = self.rdb3(out) - # Emperically, we use 0.2 to scale the residual for better performance - return out * 0.2 + x - - -class RRDBNet(nn.Module): - """Networks consisting of Residual in Residual Dense Block, which is used - in ESRGAN. - - ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks. - - We extend ESRGAN for scale x2 and scale x1. - Note: This is one option for scale 1, scale 2 in RRDBNet. - We first employ the pixel-unshuffle (an inverse operation of pixelshuffle to reduce the spatial size - and enlarge the channel size before feeding inputs into the main ESRGAN architecture. 
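-    Concretely, scale=2 pixel-unshuffles the input by a factor of 2 (so the first
-    conv sees num_in_ch * 4 channels) and scale=1 by a factor of 4 (num_in_ch * 16),
-    which keeps the net output scale equal to the requested `scale`.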
- - Args: - num_in_ch (int): Channel number of inputs. - num_out_ch (int): Channel number of outputs. - num_feat (int): Channel number of intermediate features. - Default: 64 - num_block (int): Block number in the trunk network. Defaults: 23 - num_grow_ch (int): Channels for each growth. Default: 32. - """ - - def __init__(self, num_in_ch, num_out_ch, scale=4, num_feat=64, num_block=23, num_grow_ch=32): - super(RRDBNet, self).__init__() - self.scale = scale - if scale == 2: - num_in_ch = num_in_ch * 4 - elif scale == 1: - num_in_ch = num_in_ch * 16 - self.conv_first = nn.Conv2d(num_in_ch, num_feat, 3, 1, 1) - self.body = make_layer(RRDB, num_block, num_feat=num_feat, num_grow_ch=num_grow_ch) - self.conv_body = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - # upsample - self.conv_up1 = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - self.conv_up2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - if scale == 8: - self.conv_up3 = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - self.conv_hr = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1) - - self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True) - - def forward(self, x): - if self.scale == 2: - feat = pixel_unshuffle(x, scale=2) - elif self.scale == 1: - feat = pixel_unshuffle(x, scale=4) - else: - feat = x - feat = self.conv_first(feat) - body_feat = self.conv_body(self.body(feat)) - feat = feat + body_feat - # upsample - feat = self.lrelu(self.conv_up1(F.interpolate(feat, scale_factor=2, mode='nearest'))) - feat = self.lrelu(self.conv_up2(F.interpolate(feat, scale_factor=2, mode='nearest'))) - if self.scale == 8: - feat = self.lrelu(self.conv_up3(F.interpolate(feat, scale_factor=2, mode='nearest'))) - out = self.conv_last(self.lrelu(self.conv_hr(feat))) - return out diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/datasets/ade.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/datasets/ade.py deleted file mode 100644 index 5913e43775ed4920b6934c855eb5a37c54218ebf..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/datasets/ade.py +++ /dev/null @@ -1,84 +0,0 @@ -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class ADE20KDataset(CustomDataset): - """ADE20K dataset. - - In segmentation map annotation for ADE20K, 0 stands for background, which - is not included in 150 categories. ``reduce_zero_label`` is fixed to True. - The ``img_suffix`` is fixed to '.jpg' and ``seg_map_suffix`` is fixed to - '.png'. 
- """ - CLASSES = ( - 'wall', 'building', 'sky', 'floor', 'tree', 'ceiling', 'road', 'bed ', - 'windowpane', 'grass', 'cabinet', 'sidewalk', 'person', 'earth', - 'door', 'table', 'mountain', 'plant', 'curtain', 'chair', 'car', - 'water', 'painting', 'sofa', 'shelf', 'house', 'sea', 'mirror', 'rug', - 'field', 'armchair', 'seat', 'fence', 'desk', 'rock', 'wardrobe', - 'lamp', 'bathtub', 'railing', 'cushion', 'base', 'box', 'column', - 'signboard', 'chest of drawers', 'counter', 'sand', 'sink', - 'skyscraper', 'fireplace', 'refrigerator', 'grandstand', 'path', - 'stairs', 'runway', 'case', 'pool table', 'pillow', 'screen door', - 'stairway', 'river', 'bridge', 'bookcase', 'blind', 'coffee table', - 'toilet', 'flower', 'book', 'hill', 'bench', 'countertop', 'stove', - 'palm', 'kitchen island', 'computer', 'swivel chair', 'boat', 'bar', - 'arcade machine', 'hovel', 'bus', 'towel', 'light', 'truck', 'tower', - 'chandelier', 'awning', 'streetlight', 'booth', 'television receiver', - 'airplane', 'dirt track', 'apparel', 'pole', 'land', 'bannister', - 'escalator', 'ottoman', 'bottle', 'buffet', 'poster', 'stage', 'van', - 'ship', 'fountain', 'conveyer belt', 'canopy', 'washer', 'plaything', - 'swimming pool', 'stool', 'barrel', 'basket', 'waterfall', 'tent', - 'bag', 'minibike', 'cradle', 'oven', 'ball', 'food', 'step', 'tank', - 'trade name', 'microwave', 'pot', 'animal', 'bicycle', 'lake', - 'dishwasher', 'screen', 'blanket', 'sculpture', 'hood', 'sconce', - 'vase', 'traffic light', 'tray', 'ashcan', 'fan', 'pier', 'crt screen', - 'plate', 'monitor', 'bulletin board', 'shower', 'radiator', 'glass', - 'clock', 'flag') - - PALETTE = [[120, 120, 120], [180, 120, 120], [6, 230, 230], [80, 50, 50], - [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255], - [230, 230, 230], [4, 250, 7], [224, 5, 255], [235, 255, 7], - [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82], - [143, 255, 140], [204, 255, 4], [255, 51, 7], [204, 70, 3], - [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255], - [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220], - [255, 9, 92], [112, 9, 255], [8, 255, 214], [7, 255, 224], - [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255], - [224, 255, 8], [102, 8, 255], [255, 61, 6], [255, 194, 7], - [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153], - [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255], - [140, 140, 140], [250, 10, 15], [20, 255, 0], [31, 255, 0], - [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255], - [255, 71, 0], [0, 235, 255], [0, 173, 255], [31, 0, 255], - [11, 200, 200], [255, 82, 0], [0, 255, 245], [0, 61, 255], - [0, 255, 112], [0, 255, 133], [255, 0, 0], [255, 163, 0], - [255, 102, 0], [194, 255, 0], [0, 143, 255], [51, 255, 0], - [0, 82, 255], [0, 255, 41], [0, 255, 173], [10, 0, 255], - [173, 255, 0], [0, 255, 153], [255, 92, 0], [255, 0, 255], - [255, 0, 245], [255, 0, 102], [255, 173, 0], [255, 0, 20], - [255, 184, 184], [0, 31, 255], [0, 255, 61], [0, 71, 255], - [255, 0, 204], [0, 255, 194], [0, 255, 82], [0, 10, 255], - [0, 112, 255], [51, 0, 255], [0, 194, 255], [0, 122, 255], - [0, 255, 163], [255, 153, 0], [0, 255, 10], [255, 112, 0], - [143, 255, 0], [82, 0, 255], [163, 255, 0], [255, 235, 0], - [8, 184, 170], [133, 0, 255], [0, 255, 92], [184, 0, 255], - [255, 0, 31], [0, 184, 255], [0, 214, 255], [255, 0, 112], - [92, 255, 0], [0, 224, 255], [112, 224, 255], [70, 184, 160], - [163, 0, 255], [153, 0, 255], [71, 255, 0], [255, 0, 163], - [255, 204, 0], [255, 0, 143], [0, 255, 235], [133, 255, 0], - 
[255, 0, 235], [245, 0, 255], [255, 0, 122], [255, 245, 0], - [10, 190, 212], [214, 255, 0], [0, 204, 255], [20, 0, 255], - [255, 255, 0], [0, 153, 255], [0, 41, 255], [0, 255, 204], - [41, 0, 255], [41, 255, 0], [173, 0, 255], [0, 245, 255], - [71, 0, 255], [122, 0, 255], [0, 255, 184], [0, 92, 255], - [184, 255, 0], [0, 133, 255], [255, 214, 0], [25, 194, 194], - [102, 255, 0], [92, 0, 255]] - - def __init__(self, **kwargs): - super(ADE20KDataset, self).__init__( - img_suffix='.jpg', - seg_map_suffix='.png', - reduce_zero_label=True, - **kwargs) diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/decode_heads/da_head.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/decode_heads/da_head.py deleted file mode 100644 index 5cd49fcfdc7c0a70f9485cc71843dcf3e0cb1774..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/decode_heads/da_head.py +++ /dev/null @@ -1,178 +0,0 @@ -import torch -import torch.nn.functional as F -from annotator.uniformer.mmcv.cnn import ConvModule, Scale -from torch import nn - -from annotator.uniformer.mmseg.core import add_prefix -from ..builder import HEADS -from ..utils import SelfAttentionBlock as _SelfAttentionBlock -from .decode_head import BaseDecodeHead - - -class PAM(_SelfAttentionBlock): - """Position Attention Module (PAM) - - Args: - in_channels (int): Input channels of key/query feature. - channels (int): Output channels of key/query transform. - """ - - def __init__(self, in_channels, channels): - super(PAM, self).__init__( - key_in_channels=in_channels, - query_in_channels=in_channels, - channels=channels, - out_channels=in_channels, - share_key_query=False, - query_downsample=None, - key_downsample=None, - key_query_num_convs=1, - key_query_norm=False, - value_out_num_convs=1, - value_out_norm=False, - matmul_norm=False, - with_out=False, - conv_cfg=None, - norm_cfg=None, - act_cfg=None) - - self.gamma = Scale(0) - - def forward(self, x): - """Forward function.""" - out = super(PAM, self).forward(x, x) - - out = self.gamma(out) + x - return out - - -class CAM(nn.Module): - """Channel Attention Module (CAM)""" - - def __init__(self): - super(CAM, self).__init__() - self.gamma = Scale(0) - - def forward(self, x): - """Forward function.""" - batch_size, channels, height, width = x.size() - proj_query = x.view(batch_size, channels, -1) - proj_key = x.view(batch_size, channels, -1).permute(0, 2, 1) - energy = torch.bmm(proj_query, proj_key) - energy_new = torch.max( - energy, -1, keepdim=True)[0].expand_as(energy) - energy - attention = F.softmax(energy_new, dim=-1) - proj_value = x.view(batch_size, channels, -1) - - out = torch.bmm(attention, proj_value) - out = out.view(batch_size, channels, height, width) - - out = self.gamma(out) + x - return out - - -@HEADS.register_module() -class DAHead(BaseDecodeHead): - """Dual Attention Network for Scene Segmentation. - - This head is the implementation of `DANet - `_. - - Args: - pam_channels (int): The channels of Position Attention Module(PAM). 
- """ - - def __init__(self, pam_channels, **kwargs): - super(DAHead, self).__init__(**kwargs) - self.pam_channels = pam_channels - self.pam_in_conv = ConvModule( - self.in_channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.pam = PAM(self.channels, pam_channels) - self.pam_out_conv = ConvModule( - self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.pam_conv_seg = nn.Conv2d( - self.channels, self.num_classes, kernel_size=1) - - self.cam_in_conv = ConvModule( - self.in_channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.cam = CAM() - self.cam_out_conv = ConvModule( - self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.cam_conv_seg = nn.Conv2d( - self.channels, self.num_classes, kernel_size=1) - - def pam_cls_seg(self, feat): - """PAM feature classification.""" - if self.dropout is not None: - feat = self.dropout(feat) - output = self.pam_conv_seg(feat) - return output - - def cam_cls_seg(self, feat): - """CAM feature classification.""" - if self.dropout is not None: - feat = self.dropout(feat) - output = self.cam_conv_seg(feat) - return output - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - pam_feat = self.pam_in_conv(x) - pam_feat = self.pam(pam_feat) - pam_feat = self.pam_out_conv(pam_feat) - pam_out = self.pam_cls_seg(pam_feat) - - cam_feat = self.cam_in_conv(x) - cam_feat = self.cam(cam_feat) - cam_feat = self.cam_out_conv(cam_feat) - cam_out = self.cam_cls_seg(cam_feat) - - feat_sum = pam_feat + cam_feat - pam_cam_out = self.cls_seg(feat_sum) - - return pam_cam_out, pam_out, cam_out - - def forward_test(self, inputs, img_metas, test_cfg): - """Forward function for testing, only ``pam_cam`` is used.""" - return self.forward(inputs)[0] - - def losses(self, seg_logit, seg_label): - """Compute ``pam_cam``, ``pam``, ``cam`` loss.""" - pam_cam_seg_logit, pam_seg_logit, cam_seg_logit = seg_logit - loss = dict() - loss.update( - add_prefix( - super(DAHead, self).losses(pam_cam_seg_logit, seg_label), - 'pam_cam')) - loss.update( - add_prefix( - super(DAHead, self).losses(pam_seg_logit, seg_label), 'pam')) - loss.update( - add_prefix( - super(DAHead, self).losses(cam_seg_logit, seg_label), 'cam')) - return loss diff --git a/spaces/TRI-ML/risk_biased_prediction/risk_biased/config/paths.py b/spaces/TRI-ML/risk_biased_prediction/risk_biased/config/paths.py deleted file mode 100644 index a8c0ec1ac3e9ab33539d9506233095f096eefc57..0000000000000000000000000000000000000000 --- a/spaces/TRI-ML/risk_biased_prediction/risk_biased/config/paths.py +++ /dev/null @@ -1,9 +0,0 @@ -# Data split path: -base_path = "" -data_dir = "interactive_veh_type" # This directory name conditions the model hyperparameters, make sure to set it correctly -sample_dataset_path = base_path + data_dir + "/sample" -val_dataset_path = base_path + data_dir + "/validation" -train_dataset_path = base_path + data_dir + "/training" -test_dataset_path = base_path + data_dir + "/sample" - -log_path = "" diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/_emoji_codes.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/_emoji_codes.py deleted file mode 100644 index 
1f2877bb2bd520253502b1c05bb811bb0d7ef64c..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/_emoji_codes.py +++ /dev/null @@ -1,3610 +0,0 @@ -EMOJI = { - "1st_place_medal": "🥇", - "2nd_place_medal": "🥈", - "3rd_place_medal": "🥉", - "ab_button_(blood_type)": "🆎", - "atm_sign": "🏧", - "a_button_(blood_type)": "🅰", - "afghanistan": "🇦🇫", - "albania": "🇦🇱", - "algeria": "🇩🇿", - "american_samoa": "🇦🇸", - "andorra": "🇦🇩", - "angola": "🇦🇴", - "anguilla": "🇦🇮", - "antarctica": "🇦🇶", - "antigua_&_barbuda": "🇦🇬", - "aquarius": "♒", - "argentina": "🇦🇷", - "aries": "♈", - "armenia": "🇦🇲", - "aruba": "🇦🇼", - "ascension_island": "🇦🇨", - "australia": "🇦🇺", - "austria": "🇦🇹", - "azerbaijan": "🇦🇿", - "back_arrow": "🔙", - "b_button_(blood_type)": "🅱", - "bahamas": "🇧🇸", - "bahrain": "🇧🇭", - "bangladesh": "🇧🇩", - "barbados": "🇧🇧", - "belarus": "🇧🇾", - "belgium": "🇧🇪", - "belize": "🇧🇿", - "benin": "🇧🇯", - "bermuda": "🇧🇲", - "bhutan": "🇧🇹", - "bolivia": "🇧🇴", - "bosnia_&_herzegovina": "🇧🇦", - "botswana": "🇧🇼", - "bouvet_island": "🇧🇻", - "brazil": "🇧🇷", - "british_indian_ocean_territory": "🇮🇴", - "british_virgin_islands": "🇻🇬", - "brunei": "🇧🇳", - "bulgaria": "🇧🇬", - "burkina_faso": "🇧🇫", - "burundi": "🇧🇮", - "cl_button": "🆑", - "cool_button": "🆒", - "cambodia": "🇰🇭", - "cameroon": "🇨🇲", - "canada": "🇨🇦", - "canary_islands": "🇮🇨", - "cancer": "♋", - "cape_verde": "🇨🇻", - "capricorn": "♑", - "caribbean_netherlands": "🇧🇶", - "cayman_islands": "🇰🇾", - "central_african_republic": "🇨🇫", - "ceuta_&_melilla": "🇪🇦", - "chad": "🇹🇩", - "chile": "🇨🇱", - "china": "🇨🇳", - "christmas_island": "🇨🇽", - "christmas_tree": "🎄", - "clipperton_island": "🇨🇵", - "cocos_(keeling)_islands": "🇨🇨", - "colombia": "🇨🇴", - "comoros": "🇰🇲", - "congo_-_brazzaville": "🇨🇬", - "congo_-_kinshasa": "🇨🇩", - "cook_islands": "🇨🇰", - "costa_rica": "🇨🇷", - "croatia": "🇭🇷", - "cuba": "🇨🇺", - "curaçao": "🇨🇼", - "cyprus": "🇨🇾", - "czechia": "🇨🇿", - "côte_d’ivoire": "🇨🇮", - "denmark": "🇩🇰", - "diego_garcia": "🇩🇬", - "djibouti": "🇩🇯", - "dominica": "🇩🇲", - "dominican_republic": "🇩🇴", - "end_arrow": "🔚", - "ecuador": "🇪🇨", - "egypt": "🇪🇬", - "el_salvador": "🇸🇻", - "england": "🏴\U000e0067\U000e0062\U000e0065\U000e006e\U000e0067\U000e007f", - "equatorial_guinea": "🇬🇶", - "eritrea": "🇪🇷", - "estonia": "🇪🇪", - "ethiopia": "🇪🇹", - "european_union": "🇪🇺", - "free_button": "🆓", - "falkland_islands": "🇫🇰", - "faroe_islands": "🇫🇴", - "fiji": "🇫🇯", - "finland": "🇫🇮", - "france": "🇫🇷", - "french_guiana": "🇬🇫", - "french_polynesia": "🇵🇫", - "french_southern_territories": "🇹🇫", - "gabon": "🇬🇦", - "gambia": "🇬🇲", - "gemini": "♊", - "georgia": "🇬🇪", - "germany": "🇩🇪", - "ghana": "🇬🇭", - "gibraltar": "🇬🇮", - "greece": "🇬🇷", - "greenland": "🇬🇱", - "grenada": "🇬🇩", - "guadeloupe": "🇬🇵", - "guam": "🇬🇺", - "guatemala": "🇬🇹", - "guernsey": "🇬🇬", - "guinea": "🇬🇳", - "guinea-bissau": "🇬🇼", - "guyana": "🇬🇾", - "haiti": "🇭🇹", - "heard_&_mcdonald_islands": "🇭🇲", - "honduras": "🇭🇳", - "hong_kong_sar_china": "🇭🇰", - "hungary": "🇭🇺", - "id_button": "🆔", - "iceland": "🇮🇸", - "india": "🇮🇳", - "indonesia": "🇮🇩", - "iran": "🇮🇷", - "iraq": "🇮🇶", - "ireland": "🇮🇪", - "isle_of_man": "🇮🇲", - "israel": "🇮🇱", - "italy": "🇮🇹", - "jamaica": "🇯🇲", - "japan": "🗾", - "japanese_acceptable_button": "🉑", - "japanese_application_button": "🈸", - "japanese_bargain_button": "🉐", - "japanese_castle": "🏯", - "japanese_congratulations_button": "㊗", - "japanese_discount_button": "🈹", - "japanese_dolls": "🎎", - 
"japanese_free_of_charge_button": "🈚", - "japanese_here_button": "🈁", - "japanese_monthly_amount_button": "🈷", - "japanese_no_vacancy_button": "🈵", - "japanese_not_free_of_charge_button": "🈶", - "japanese_open_for_business_button": "🈺", - "japanese_passing_grade_button": "🈴", - "japanese_post_office": "🏣", - "japanese_prohibited_button": "🈲", - "japanese_reserved_button": "🈯", - "japanese_secret_button": "㊙", - "japanese_service_charge_button": "🈂", - "japanese_symbol_for_beginner": "🔰", - "japanese_vacancy_button": "🈳", - "jersey": "🇯🇪", - "jordan": "🇯🇴", - "kazakhstan": "🇰🇿", - "kenya": "🇰🇪", - "kiribati": "🇰🇮", - "kosovo": "🇽🇰", - "kuwait": "🇰🇼", - "kyrgyzstan": "🇰🇬", - "laos": "🇱🇦", - "latvia": "🇱🇻", - "lebanon": "🇱🇧", - "leo": "♌", - "lesotho": "🇱🇸", - "liberia": "🇱🇷", - "libra": "♎", - "libya": "🇱🇾", - "liechtenstein": "🇱🇮", - "lithuania": "🇱🇹", - "luxembourg": "🇱🇺", - "macau_sar_china": "🇲🇴", - "macedonia": "🇲🇰", - "madagascar": "🇲🇬", - "malawi": "🇲🇼", - "malaysia": "🇲🇾", - "maldives": "🇲🇻", - "mali": "🇲🇱", - "malta": "🇲🇹", - "marshall_islands": "🇲🇭", - "martinique": "🇲🇶", - "mauritania": "🇲🇷", - "mauritius": "🇲🇺", - "mayotte": "🇾🇹", - "mexico": "🇲🇽", - "micronesia": "🇫🇲", - "moldova": "🇲🇩", - "monaco": "🇲🇨", - "mongolia": "🇲🇳", - "montenegro": "🇲🇪", - "montserrat": "🇲🇸", - "morocco": "🇲🇦", - "mozambique": "🇲🇿", - "mrs._claus": "🤶", - "mrs._claus_dark_skin_tone": "🤶🏿", - "mrs._claus_light_skin_tone": "🤶🏻", - "mrs._claus_medium-dark_skin_tone": "🤶🏾", - "mrs._claus_medium-light_skin_tone": "🤶🏼", - "mrs._claus_medium_skin_tone": "🤶🏽", - "myanmar_(burma)": "🇲🇲", - "new_button": "🆕", - "ng_button": "🆖", - "namibia": "🇳🇦", - "nauru": "🇳🇷", - "nepal": "🇳🇵", - "netherlands": "🇳🇱", - "new_caledonia": "🇳🇨", - "new_zealand": "🇳🇿", - "nicaragua": "🇳🇮", - "niger": "🇳🇪", - "nigeria": "🇳🇬", - "niue": "🇳🇺", - "norfolk_island": "🇳🇫", - "north_korea": "🇰🇵", - "northern_mariana_islands": "🇲🇵", - "norway": "🇳🇴", - "ok_button": "🆗", - "ok_hand": "👌", - "ok_hand_dark_skin_tone": "👌🏿", - "ok_hand_light_skin_tone": "👌🏻", - "ok_hand_medium-dark_skin_tone": "👌🏾", - "ok_hand_medium-light_skin_tone": "👌🏼", - "ok_hand_medium_skin_tone": "👌🏽", - "on!_arrow": "🔛", - "o_button_(blood_type)": "🅾", - "oman": "🇴🇲", - "ophiuchus": "⛎", - "p_button": "🅿", - "pakistan": "🇵🇰", - "palau": "🇵🇼", - "palestinian_territories": "🇵🇸", - "panama": "🇵🇦", - "papua_new_guinea": "🇵🇬", - "paraguay": "🇵🇾", - "peru": "🇵🇪", - "philippines": "🇵🇭", - "pisces": "♓", - "pitcairn_islands": "🇵🇳", - "poland": "🇵🇱", - "portugal": "🇵🇹", - "puerto_rico": "🇵🇷", - "qatar": "🇶🇦", - "romania": "🇷🇴", - "russia": "🇷🇺", - "rwanda": "🇷🇼", - "réunion": "🇷🇪", - "soon_arrow": "🔜", - "sos_button": "🆘", - "sagittarius": "♐", - "samoa": "🇼🇸", - "san_marino": "🇸🇲", - "santa_claus": "🎅", - "santa_claus_dark_skin_tone": "🎅🏿", - "santa_claus_light_skin_tone": "🎅🏻", - "santa_claus_medium-dark_skin_tone": "🎅🏾", - "santa_claus_medium-light_skin_tone": "🎅🏼", - "santa_claus_medium_skin_tone": "🎅🏽", - "saudi_arabia": "🇸🇦", - "scorpio": "♏", - "scotland": "🏴\U000e0067\U000e0062\U000e0073\U000e0063\U000e0074\U000e007f", - "senegal": "🇸🇳", - "serbia": "🇷🇸", - "seychelles": "🇸🇨", - "sierra_leone": "🇸🇱", - "singapore": "🇸🇬", - "sint_maarten": "🇸🇽", - "slovakia": "🇸🇰", - "slovenia": "🇸🇮", - "solomon_islands": "🇸🇧", - "somalia": "🇸🇴", - "south_africa": "🇿🇦", - "south_georgia_&_south_sandwich_islands": "🇬🇸", - "south_korea": "🇰🇷", - "south_sudan": "🇸🇸", - "spain": "🇪🇸", - "sri_lanka": "🇱🇰", - "st._barthélemy": "🇧🇱", - "st._helena": "🇸🇭", - "st._kitts_&_nevis": "🇰🇳", - 
"st._lucia": "🇱🇨", - "st._martin": "🇲🇫", - "st._pierre_&_miquelon": "🇵🇲", - "st._vincent_&_grenadines": "🇻🇨", - "statue_of_liberty": "🗽", - "sudan": "🇸🇩", - "suriname": "🇸🇷", - "svalbard_&_jan_mayen": "🇸🇯", - "swaziland": "🇸🇿", - "sweden": "🇸🇪", - "switzerland": "🇨🇭", - "syria": "🇸🇾", - "são_tomé_&_príncipe": "🇸🇹", - "t-rex": "🦖", - "top_arrow": "🔝", - "taiwan": "🇹🇼", - "tajikistan": "🇹🇯", - "tanzania": "🇹🇿", - "taurus": "♉", - "thailand": "🇹🇭", - "timor-leste": "🇹🇱", - "togo": "🇹🇬", - "tokelau": "🇹🇰", - "tokyo_tower": "🗼", - "tonga": "🇹🇴", - "trinidad_&_tobago": "🇹🇹", - "tristan_da_cunha": "🇹🇦", - "tunisia": "🇹🇳", - "turkey": "🦃", - "turkmenistan": "🇹🇲", - "turks_&_caicos_islands": "🇹🇨", - "tuvalu": "🇹🇻", - "u.s._outlying_islands": "🇺🇲", - "u.s._virgin_islands": "🇻🇮", - "up!_button": "🆙", - "uganda": "🇺🇬", - "ukraine": "🇺🇦", - "united_arab_emirates": "🇦🇪", - "united_kingdom": "🇬🇧", - "united_nations": "🇺🇳", - "united_states": "🇺🇸", - "uruguay": "🇺🇾", - "uzbekistan": "🇺🇿", - "vs_button": "🆚", - "vanuatu": "🇻🇺", - "vatican_city": "🇻🇦", - "venezuela": "🇻🇪", - "vietnam": "🇻🇳", - "virgo": "♍", - "wales": "🏴\U000e0067\U000e0062\U000e0077\U000e006c\U000e0073\U000e007f", - "wallis_&_futuna": "🇼🇫", - "western_sahara": "🇪🇭", - "yemen": "🇾🇪", - "zambia": "🇿🇲", - "zimbabwe": "🇿🇼", - "abacus": "🧮", - "adhesive_bandage": "🩹", - "admission_tickets": "🎟", - "adult": "🧑", - "adult_dark_skin_tone": "🧑🏿", - "adult_light_skin_tone": "🧑🏻", - "adult_medium-dark_skin_tone": "🧑🏾", - "adult_medium-light_skin_tone": "🧑🏼", - "adult_medium_skin_tone": "🧑🏽", - "aerial_tramway": "🚡", - "airplane": "✈", - "airplane_arrival": "🛬", - "airplane_departure": "🛫", - "alarm_clock": "⏰", - "alembic": "⚗", - "alien": "👽", - "alien_monster": "👾", - "ambulance": "🚑", - "american_football": "🏈", - "amphora": "🏺", - "anchor": "⚓", - "anger_symbol": "💢", - "angry_face": "😠", - "angry_face_with_horns": "👿", - "anguished_face": "😧", - "ant": "🐜", - "antenna_bars": "📶", - "anxious_face_with_sweat": "😰", - "articulated_lorry": "🚛", - "artist_palette": "🎨", - "astonished_face": "😲", - "atom_symbol": "⚛", - "auto_rickshaw": "🛺", - "automobile": "🚗", - "avocado": "🥑", - "axe": "🪓", - "baby": "👶", - "baby_angel": "👼", - "baby_angel_dark_skin_tone": "👼🏿", - "baby_angel_light_skin_tone": "👼🏻", - "baby_angel_medium-dark_skin_tone": "👼🏾", - "baby_angel_medium-light_skin_tone": "👼🏼", - "baby_angel_medium_skin_tone": "👼🏽", - "baby_bottle": "🍼", - "baby_chick": "🐤", - "baby_dark_skin_tone": "👶🏿", - "baby_light_skin_tone": "👶🏻", - "baby_medium-dark_skin_tone": "👶🏾", - "baby_medium-light_skin_tone": "👶🏼", - "baby_medium_skin_tone": "👶🏽", - "baby_symbol": "🚼", - "backhand_index_pointing_down": "👇", - "backhand_index_pointing_down_dark_skin_tone": "👇🏿", - "backhand_index_pointing_down_light_skin_tone": "👇🏻", - "backhand_index_pointing_down_medium-dark_skin_tone": "👇🏾", - "backhand_index_pointing_down_medium-light_skin_tone": "👇🏼", - "backhand_index_pointing_down_medium_skin_tone": "👇🏽", - "backhand_index_pointing_left": "👈", - "backhand_index_pointing_left_dark_skin_tone": "👈🏿", - "backhand_index_pointing_left_light_skin_tone": "👈🏻", - "backhand_index_pointing_left_medium-dark_skin_tone": "👈🏾", - "backhand_index_pointing_left_medium-light_skin_tone": "👈🏼", - "backhand_index_pointing_left_medium_skin_tone": "👈🏽", - "backhand_index_pointing_right": "👉", - "backhand_index_pointing_right_dark_skin_tone": "👉🏿", - "backhand_index_pointing_right_light_skin_tone": "👉🏻", - "backhand_index_pointing_right_medium-dark_skin_tone": "👉🏾", - 
"backhand_index_pointing_right_medium-light_skin_tone": "👉🏼", - "backhand_index_pointing_right_medium_skin_tone": "👉🏽", - "backhand_index_pointing_up": "👆", - "backhand_index_pointing_up_dark_skin_tone": "👆🏿", - "backhand_index_pointing_up_light_skin_tone": "👆🏻", - "backhand_index_pointing_up_medium-dark_skin_tone": "👆🏾", - "backhand_index_pointing_up_medium-light_skin_tone": "👆🏼", - "backhand_index_pointing_up_medium_skin_tone": "👆🏽", - "bacon": "🥓", - "badger": "🦡", - "badminton": "🏸", - "bagel": "🥯", - "baggage_claim": "🛄", - "baguette_bread": "🥖", - "balance_scale": "⚖", - "bald": "🦲", - "bald_man": "👨\u200d🦲", - "bald_woman": "👩\u200d🦲", - "ballet_shoes": "🩰", - "balloon": "🎈", - "ballot_box_with_ballot": "🗳", - "ballot_box_with_check": "☑", - "banana": "🍌", - "banjo": "🪕", - "bank": "🏦", - "bar_chart": "📊", - "barber_pole": "💈", - "baseball": "⚾", - "basket": "🧺", - "basketball": "🏀", - "bat": "🦇", - "bathtub": "🛁", - "battery": "🔋", - "beach_with_umbrella": "🏖", - "beaming_face_with_smiling_eyes": "😁", - "bear_face": "🐻", - "bearded_person": "🧔", - "bearded_person_dark_skin_tone": "🧔🏿", - "bearded_person_light_skin_tone": "🧔🏻", - "bearded_person_medium-dark_skin_tone": "🧔🏾", - "bearded_person_medium-light_skin_tone": "🧔🏼", - "bearded_person_medium_skin_tone": "🧔🏽", - "beating_heart": "💓", - "bed": "🛏", - "beer_mug": "🍺", - "bell": "🔔", - "bell_with_slash": "🔕", - "bellhop_bell": "🛎", - "bento_box": "🍱", - "beverage_box": "🧃", - "bicycle": "🚲", - "bikini": "👙", - "billed_cap": "🧢", - "biohazard": "☣", - "bird": "🐦", - "birthday_cake": "🎂", - "black_circle": "⚫", - "black_flag": "🏴", - "black_heart": "🖤", - "black_large_square": "⬛", - "black_medium-small_square": "◾", - "black_medium_square": "◼", - "black_nib": "✒", - "black_small_square": "▪", - "black_square_button": "🔲", - "blond-haired_man": "👱\u200d♂️", - "blond-haired_man_dark_skin_tone": "👱🏿\u200d♂️", - "blond-haired_man_light_skin_tone": "👱🏻\u200d♂️", - "blond-haired_man_medium-dark_skin_tone": "👱🏾\u200d♂️", - "blond-haired_man_medium-light_skin_tone": "👱🏼\u200d♂️", - "blond-haired_man_medium_skin_tone": "👱🏽\u200d♂️", - "blond-haired_person": "👱", - "blond-haired_person_dark_skin_tone": "👱🏿", - "blond-haired_person_light_skin_tone": "👱🏻", - "blond-haired_person_medium-dark_skin_tone": "👱🏾", - "blond-haired_person_medium-light_skin_tone": "👱🏼", - "blond-haired_person_medium_skin_tone": "👱🏽", - "blond-haired_woman": "👱\u200d♀️", - "blond-haired_woman_dark_skin_tone": "👱🏿\u200d♀️", - "blond-haired_woman_light_skin_tone": "👱🏻\u200d♀️", - "blond-haired_woman_medium-dark_skin_tone": "👱🏾\u200d♀️", - "blond-haired_woman_medium-light_skin_tone": "👱🏼\u200d♀️", - "blond-haired_woman_medium_skin_tone": "👱🏽\u200d♀️", - "blossom": "🌼", - "blowfish": "🐡", - "blue_book": "📘", - "blue_circle": "🔵", - "blue_heart": "💙", - "blue_square": "🟦", - "boar": "🐗", - "bomb": "💣", - "bone": "🦴", - "bookmark": "🔖", - "bookmark_tabs": "📑", - "books": "📚", - "bottle_with_popping_cork": "🍾", - "bouquet": "💐", - "bow_and_arrow": "🏹", - "bowl_with_spoon": "🥣", - "bowling": "🎳", - "boxing_glove": "🥊", - "boy": "👦", - "boy_dark_skin_tone": "👦🏿", - "boy_light_skin_tone": "👦🏻", - "boy_medium-dark_skin_tone": "👦🏾", - "boy_medium-light_skin_tone": "👦🏼", - "boy_medium_skin_tone": "👦🏽", - "brain": "🧠", - "bread": "🍞", - "breast-feeding": "🤱", - "breast-feeding_dark_skin_tone": "🤱🏿", - "breast-feeding_light_skin_tone": "🤱🏻", - "breast-feeding_medium-dark_skin_tone": "🤱🏾", - "breast-feeding_medium-light_skin_tone": "🤱🏼", - "breast-feeding_medium_skin_tone": "🤱🏽", - 
"brick": "🧱", - "bride_with_veil": "👰", - "bride_with_veil_dark_skin_tone": "👰🏿", - "bride_with_veil_light_skin_tone": "👰🏻", - "bride_with_veil_medium-dark_skin_tone": "👰🏾", - "bride_with_veil_medium-light_skin_tone": "👰🏼", - "bride_with_veil_medium_skin_tone": "👰🏽", - "bridge_at_night": "🌉", - "briefcase": "💼", - "briefs": "🩲", - "bright_button": "🔆", - "broccoli": "🥦", - "broken_heart": "💔", - "broom": "🧹", - "brown_circle": "🟤", - "brown_heart": "🤎", - "brown_square": "🟫", - "bug": "🐛", - "building_construction": "🏗", - "bullet_train": "🚅", - "burrito": "🌯", - "bus": "🚌", - "bus_stop": "🚏", - "bust_in_silhouette": "👤", - "busts_in_silhouette": "👥", - "butter": "🧈", - "butterfly": "🦋", - "cactus": "🌵", - "calendar": "📆", - "call_me_hand": "🤙", - "call_me_hand_dark_skin_tone": "🤙🏿", - "call_me_hand_light_skin_tone": "🤙🏻", - "call_me_hand_medium-dark_skin_tone": "🤙🏾", - "call_me_hand_medium-light_skin_tone": "🤙🏼", - "call_me_hand_medium_skin_tone": "🤙🏽", - "camel": "🐫", - "camera": "📷", - "camera_with_flash": "📸", - "camping": "🏕", - "candle": "🕯", - "candy": "🍬", - "canned_food": "🥫", - "canoe": "🛶", - "card_file_box": "🗃", - "card_index": "📇", - "card_index_dividers": "🗂", - "carousel_horse": "🎠", - "carp_streamer": "🎏", - "carrot": "🥕", - "castle": "🏰", - "cat": "🐱", - "cat_face": "🐱", - "cat_face_with_tears_of_joy": "😹", - "cat_face_with_wry_smile": "😼", - "chains": "⛓", - "chair": "🪑", - "chart_decreasing": "📉", - "chart_increasing": "📈", - "chart_increasing_with_yen": "💹", - "cheese_wedge": "🧀", - "chequered_flag": "🏁", - "cherries": "🍒", - "cherry_blossom": "🌸", - "chess_pawn": "♟", - "chestnut": "🌰", - "chicken": "🐔", - "child": "🧒", - "child_dark_skin_tone": "🧒🏿", - "child_light_skin_tone": "🧒🏻", - "child_medium-dark_skin_tone": "🧒🏾", - "child_medium-light_skin_tone": "🧒🏼", - "child_medium_skin_tone": "🧒🏽", - "children_crossing": "🚸", - "chipmunk": "🐿", - "chocolate_bar": "🍫", - "chopsticks": "🥢", - "church": "⛪", - "cigarette": "🚬", - "cinema": "🎦", - "circled_m": "Ⓜ", - "circus_tent": "🎪", - "cityscape": "🏙", - "cityscape_at_dusk": "🌆", - "clamp": "🗜", - "clapper_board": "🎬", - "clapping_hands": "👏", - "clapping_hands_dark_skin_tone": "👏🏿", - "clapping_hands_light_skin_tone": "👏🏻", - "clapping_hands_medium-dark_skin_tone": "👏🏾", - "clapping_hands_medium-light_skin_tone": "👏🏼", - "clapping_hands_medium_skin_tone": "👏🏽", - "classical_building": "🏛", - "clinking_beer_mugs": "🍻", - "clinking_glasses": "🥂", - "clipboard": "📋", - "clockwise_vertical_arrows": "🔃", - "closed_book": "📕", - "closed_mailbox_with_lowered_flag": "📪", - "closed_mailbox_with_raised_flag": "📫", - "closed_umbrella": "🌂", - "cloud": "☁", - "cloud_with_lightning": "🌩", - "cloud_with_lightning_and_rain": "⛈", - "cloud_with_rain": "🌧", - "cloud_with_snow": "🌨", - "clown_face": "🤡", - "club_suit": "♣", - "clutch_bag": "👝", - "coat": "🧥", - "cocktail_glass": "🍸", - "coconut": "🥥", - "coffin": "⚰", - "cold_face": "🥶", - "collision": "💥", - "comet": "☄", - "compass": "🧭", - "computer_disk": "💽", - "computer_mouse": "🖱", - "confetti_ball": "🎊", - "confounded_face": "😖", - "confused_face": "😕", - "construction": "🚧", - "construction_worker": "👷", - "construction_worker_dark_skin_tone": "👷🏿", - "construction_worker_light_skin_tone": "👷🏻", - "construction_worker_medium-dark_skin_tone": "👷🏾", - "construction_worker_medium-light_skin_tone": "👷🏼", - "construction_worker_medium_skin_tone": "👷🏽", - "control_knobs": "🎛", - "convenience_store": "🏪", - "cooked_rice": "🍚", - "cookie": "🍪", - "cooking": "🍳", - "copyright": "©", - 
"couch_and_lamp": "🛋", - "counterclockwise_arrows_button": "🔄", - "couple_with_heart": "💑", - "couple_with_heart_man_man": "👨\u200d❤️\u200d👨", - "couple_with_heart_woman_man": "👩\u200d❤️\u200d👨", - "couple_with_heart_woman_woman": "👩\u200d❤️\u200d👩", - "cow": "🐮", - "cow_face": "🐮", - "cowboy_hat_face": "🤠", - "crab": "🦀", - "crayon": "🖍", - "credit_card": "💳", - "crescent_moon": "🌙", - "cricket": "🦗", - "cricket_game": "🏏", - "crocodile": "🐊", - "croissant": "🥐", - "cross_mark": "❌", - "cross_mark_button": "❎", - "crossed_fingers": "🤞", - "crossed_fingers_dark_skin_tone": "🤞🏿", - "crossed_fingers_light_skin_tone": "🤞🏻", - "crossed_fingers_medium-dark_skin_tone": "🤞🏾", - "crossed_fingers_medium-light_skin_tone": "🤞🏼", - "crossed_fingers_medium_skin_tone": "🤞🏽", - "crossed_flags": "🎌", - "crossed_swords": "⚔", - "crown": "👑", - "crying_cat_face": "😿", - "crying_face": "😢", - "crystal_ball": "🔮", - "cucumber": "🥒", - "cupcake": "🧁", - "cup_with_straw": "🥤", - "curling_stone": "🥌", - "curly_hair": "🦱", - "curly-haired_man": "👨\u200d🦱", - "curly-haired_woman": "👩\u200d🦱", - "curly_loop": "➰", - "currency_exchange": "💱", - "curry_rice": "🍛", - "custard": "🍮", - "customs": "🛃", - "cut_of_meat": "🥩", - "cyclone": "🌀", - "dagger": "🗡", - "dango": "🍡", - "dashing_away": "💨", - "deaf_person": "🧏", - "deciduous_tree": "🌳", - "deer": "🦌", - "delivery_truck": "🚚", - "department_store": "🏬", - "derelict_house": "🏚", - "desert": "🏜", - "desert_island": "🏝", - "desktop_computer": "🖥", - "detective": "🕵", - "detective_dark_skin_tone": "🕵🏿", - "detective_light_skin_tone": "🕵🏻", - "detective_medium-dark_skin_tone": "🕵🏾", - "detective_medium-light_skin_tone": "🕵🏼", - "detective_medium_skin_tone": "🕵🏽", - "diamond_suit": "♦", - "diamond_with_a_dot": "💠", - "dim_button": "🔅", - "direct_hit": "🎯", - "disappointed_face": "😞", - "diving_mask": "🤿", - "diya_lamp": "🪔", - "dizzy": "💫", - "dizzy_face": "😵", - "dna": "🧬", - "dog": "🐶", - "dog_face": "🐶", - "dollar_banknote": "💵", - "dolphin": "🐬", - "door": "🚪", - "dotted_six-pointed_star": "🔯", - "double_curly_loop": "➿", - "double_exclamation_mark": "‼", - "doughnut": "🍩", - "dove": "🕊", - "down-left_arrow": "↙", - "down-right_arrow": "↘", - "down_arrow": "⬇", - "downcast_face_with_sweat": "😓", - "downwards_button": "🔽", - "dragon": "🐉", - "dragon_face": "🐲", - "dress": "👗", - "drooling_face": "🤤", - "drop_of_blood": "🩸", - "droplet": "💧", - "drum": "🥁", - "duck": "🦆", - "dumpling": "🥟", - "dvd": "📀", - "e-mail": "📧", - "eagle": "🦅", - "ear": "👂", - "ear_dark_skin_tone": "👂🏿", - "ear_light_skin_tone": "👂🏻", - "ear_medium-dark_skin_tone": "👂🏾", - "ear_medium-light_skin_tone": "👂🏼", - "ear_medium_skin_tone": "👂🏽", - "ear_of_corn": "🌽", - "ear_with_hearing_aid": "🦻", - "egg": "🍳", - "eggplant": "🍆", - "eight-pointed_star": "✴", - "eight-spoked_asterisk": "✳", - "eight-thirty": "🕣", - "eight_o’clock": "🕗", - "eject_button": "⏏", - "electric_plug": "🔌", - "elephant": "🐘", - "eleven-thirty": "🕦", - "eleven_o’clock": "🕚", - "elf": "🧝", - "elf_dark_skin_tone": "🧝🏿", - "elf_light_skin_tone": "🧝🏻", - "elf_medium-dark_skin_tone": "🧝🏾", - "elf_medium-light_skin_tone": "🧝🏼", - "elf_medium_skin_tone": "🧝🏽", - "envelope": "✉", - "envelope_with_arrow": "📩", - "euro_banknote": "💶", - "evergreen_tree": "🌲", - "ewe": "🐑", - "exclamation_mark": "❗", - "exclamation_question_mark": "⁉", - "exploding_head": "🤯", - "expressionless_face": "😑", - "eye": "👁", - "eye_in_speech_bubble": "👁️\u200d🗨️", - "eyes": "👀", - "face_blowing_a_kiss": "😘", - "face_savoring_food": "😋", - 
"face_screaming_in_fear": "😱", - "face_vomiting": "🤮", - "face_with_hand_over_mouth": "🤭", - "face_with_head-bandage": "🤕", - "face_with_medical_mask": "😷", - "face_with_monocle": "🧐", - "face_with_open_mouth": "😮", - "face_with_raised_eyebrow": "🤨", - "face_with_rolling_eyes": "🙄", - "face_with_steam_from_nose": "😤", - "face_with_symbols_on_mouth": "🤬", - "face_with_tears_of_joy": "😂", - "face_with_thermometer": "🤒", - "face_with_tongue": "😛", - "face_without_mouth": "😶", - "factory": "🏭", - "fairy": "🧚", - "fairy_dark_skin_tone": "🧚🏿", - "fairy_light_skin_tone": "🧚🏻", - "fairy_medium-dark_skin_tone": "🧚🏾", - "fairy_medium-light_skin_tone": "🧚🏼", - "fairy_medium_skin_tone": "🧚🏽", - "falafel": "🧆", - "fallen_leaf": "🍂", - "family": "👪", - "family_man_boy": "👨\u200d👦", - "family_man_boy_boy": "👨\u200d👦\u200d👦", - "family_man_girl": "👨\u200d👧", - "family_man_girl_boy": "👨\u200d👧\u200d👦", - "family_man_girl_girl": "👨\u200d👧\u200d👧", - "family_man_man_boy": "👨\u200d👨\u200d👦", - "family_man_man_boy_boy": "👨\u200d👨\u200d👦\u200d👦", - "family_man_man_girl": "👨\u200d👨\u200d👧", - "family_man_man_girl_boy": "👨\u200d👨\u200d👧\u200d👦", - "family_man_man_girl_girl": "👨\u200d👨\u200d👧\u200d👧", - "family_man_woman_boy": "👨\u200d👩\u200d👦", - "family_man_woman_boy_boy": "👨\u200d👩\u200d👦\u200d👦", - "family_man_woman_girl": "👨\u200d👩\u200d👧", - "family_man_woman_girl_boy": "👨\u200d👩\u200d👧\u200d👦", - "family_man_woman_girl_girl": "👨\u200d👩\u200d👧\u200d👧", - "family_woman_boy": "👩\u200d👦", - "family_woman_boy_boy": "👩\u200d👦\u200d👦", - "family_woman_girl": "👩\u200d👧", - "family_woman_girl_boy": "👩\u200d👧\u200d👦", - "family_woman_girl_girl": "👩\u200d👧\u200d👧", - "family_woman_woman_boy": "👩\u200d👩\u200d👦", - "family_woman_woman_boy_boy": "👩\u200d👩\u200d👦\u200d👦", - "family_woman_woman_girl": "👩\u200d👩\u200d👧", - "family_woman_woman_girl_boy": "👩\u200d👩\u200d👧\u200d👦", - "family_woman_woman_girl_girl": "👩\u200d👩\u200d👧\u200d👧", - "fast-forward_button": "⏩", - "fast_down_button": "⏬", - "fast_reverse_button": "⏪", - "fast_up_button": "⏫", - "fax_machine": "📠", - "fearful_face": "😨", - "female_sign": "♀", - "ferris_wheel": "🎡", - "ferry": "⛴", - "field_hockey": "🏑", - "file_cabinet": "🗄", - "file_folder": "📁", - "film_frames": "🎞", - "film_projector": "📽", - "fire": "🔥", - "fire_extinguisher": "🧯", - "firecracker": "🧨", - "fire_engine": "🚒", - "fireworks": "🎆", - "first_quarter_moon": "🌓", - "first_quarter_moon_face": "🌛", - "fish": "🐟", - "fish_cake_with_swirl": "🍥", - "fishing_pole": "🎣", - "five-thirty": "🕠", - "five_o’clock": "🕔", - "flag_in_hole": "⛳", - "flamingo": "🦩", - "flashlight": "🔦", - "flat_shoe": "🥿", - "fleur-de-lis": "⚜", - "flexed_biceps": "💪", - "flexed_biceps_dark_skin_tone": "💪🏿", - "flexed_biceps_light_skin_tone": "💪🏻", - "flexed_biceps_medium-dark_skin_tone": "💪🏾", - "flexed_biceps_medium-light_skin_tone": "💪🏼", - "flexed_biceps_medium_skin_tone": "💪🏽", - "floppy_disk": "💾", - "flower_playing_cards": "🎴", - "flushed_face": "😳", - "flying_disc": "🥏", - "flying_saucer": "🛸", - "fog": "🌫", - "foggy": "🌁", - "folded_hands": "🙏", - "folded_hands_dark_skin_tone": "🙏🏿", - "folded_hands_light_skin_tone": "🙏🏻", - "folded_hands_medium-dark_skin_tone": "🙏🏾", - "folded_hands_medium-light_skin_tone": "🙏🏼", - "folded_hands_medium_skin_tone": "🙏🏽", - "foot": "🦶", - "footprints": "👣", - "fork_and_knife": "🍴", - "fork_and_knife_with_plate": "🍽", - "fortune_cookie": "🥠", - "fountain": "⛲", - "fountain_pen": "🖋", - "four-thirty": "🕟", - "four_leaf_clover": "🍀", - "four_o’clock": "🕓", - "fox_face": "🦊", - 
"framed_picture": "🖼", - "french_fries": "🍟", - "fried_shrimp": "🍤", - "frog_face": "🐸", - "front-facing_baby_chick": "🐥", - "frowning_face": "☹", - "frowning_face_with_open_mouth": "😦", - "fuel_pump": "⛽", - "full_moon": "🌕", - "full_moon_face": "🌝", - "funeral_urn": "⚱", - "game_die": "🎲", - "garlic": "🧄", - "gear": "⚙", - "gem_stone": "💎", - "genie": "🧞", - "ghost": "👻", - "giraffe": "🦒", - "girl": "👧", - "girl_dark_skin_tone": "👧🏿", - "girl_light_skin_tone": "👧🏻", - "girl_medium-dark_skin_tone": "👧🏾", - "girl_medium-light_skin_tone": "👧🏼", - "girl_medium_skin_tone": "👧🏽", - "glass_of_milk": "🥛", - "glasses": "👓", - "globe_showing_americas": "🌎", - "globe_showing_asia-australia": "🌏", - "globe_showing_europe-africa": "🌍", - "globe_with_meridians": "🌐", - "gloves": "🧤", - "glowing_star": "🌟", - "goal_net": "🥅", - "goat": "🐐", - "goblin": "👺", - "goggles": "🥽", - "gorilla": "🦍", - "graduation_cap": "🎓", - "grapes": "🍇", - "green_apple": "🍏", - "green_book": "📗", - "green_circle": "🟢", - "green_heart": "💚", - "green_salad": "🥗", - "green_square": "🟩", - "grimacing_face": "😬", - "grinning_cat_face": "😺", - "grinning_cat_face_with_smiling_eyes": "😸", - "grinning_face": "😀", - "grinning_face_with_big_eyes": "😃", - "grinning_face_with_smiling_eyes": "😄", - "grinning_face_with_sweat": "😅", - "grinning_squinting_face": "😆", - "growing_heart": "💗", - "guard": "💂", - "guard_dark_skin_tone": "💂🏿", - "guard_light_skin_tone": "💂🏻", - "guard_medium-dark_skin_tone": "💂🏾", - "guard_medium-light_skin_tone": "💂🏼", - "guard_medium_skin_tone": "💂🏽", - "guide_dog": "🦮", - "guitar": "🎸", - "hamburger": "🍔", - "hammer": "🔨", - "hammer_and_pick": "⚒", - "hammer_and_wrench": "🛠", - "hamster_face": "🐹", - "hand_with_fingers_splayed": "🖐", - "hand_with_fingers_splayed_dark_skin_tone": "🖐🏿", - "hand_with_fingers_splayed_light_skin_tone": "🖐🏻", - "hand_with_fingers_splayed_medium-dark_skin_tone": "🖐🏾", - "hand_with_fingers_splayed_medium-light_skin_tone": "🖐🏼", - "hand_with_fingers_splayed_medium_skin_tone": "🖐🏽", - "handbag": "👜", - "handshake": "🤝", - "hatching_chick": "🐣", - "headphone": "🎧", - "hear-no-evil_monkey": "🙉", - "heart_decoration": "💟", - "heart_suit": "♥", - "heart_with_arrow": "💘", - "heart_with_ribbon": "💝", - "heavy_check_mark": "✔", - "heavy_division_sign": "➗", - "heavy_dollar_sign": "💲", - "heavy_heart_exclamation": "❣", - "heavy_large_circle": "⭕", - "heavy_minus_sign": "➖", - "heavy_multiplication_x": "✖", - "heavy_plus_sign": "➕", - "hedgehog": "🦔", - "helicopter": "🚁", - "herb": "🌿", - "hibiscus": "🌺", - "high-heeled_shoe": "👠", - "high-speed_train": "🚄", - "high_voltage": "⚡", - "hiking_boot": "🥾", - "hindu_temple": "🛕", - "hippopotamus": "🦛", - "hole": "🕳", - "honey_pot": "🍯", - "honeybee": "🐝", - "horizontal_traffic_light": "🚥", - "horse": "🐴", - "horse_face": "🐴", - "horse_racing": "🏇", - "horse_racing_dark_skin_tone": "🏇🏿", - "horse_racing_light_skin_tone": "🏇🏻", - "horse_racing_medium-dark_skin_tone": "🏇🏾", - "horse_racing_medium-light_skin_tone": "🏇🏼", - "horse_racing_medium_skin_tone": "🏇🏽", - "hospital": "🏥", - "hot_beverage": "☕", - "hot_dog": "🌭", - "hot_face": "🥵", - "hot_pepper": "🌶", - "hot_springs": "♨", - "hotel": "🏨", - "hourglass_done": "⌛", - "hourglass_not_done": "⏳", - "house": "🏠", - "house_with_garden": "🏡", - "houses": "🏘", - "hugging_face": "🤗", - "hundred_points": "💯", - "hushed_face": "😯", - "ice": "🧊", - "ice_cream": "🍨", - "ice_hockey": "🏒", - "ice_skate": "⛸", - "inbox_tray": "📥", - "incoming_envelope": "📨", - "index_pointing_up": "☝", - 
"index_pointing_up_dark_skin_tone": "☝🏿", - "index_pointing_up_light_skin_tone": "☝🏻", - "index_pointing_up_medium-dark_skin_tone": "☝🏾", - "index_pointing_up_medium-light_skin_tone": "☝🏼", - "index_pointing_up_medium_skin_tone": "☝🏽", - "infinity": "♾", - "information": "ℹ", - "input_latin_letters": "🔤", - "input_latin_lowercase": "🔡", - "input_latin_uppercase": "🔠", - "input_numbers": "🔢", - "input_symbols": "🔣", - "jack-o-lantern": "🎃", - "jeans": "👖", - "jigsaw": "🧩", - "joker": "🃏", - "joystick": "🕹", - "kaaba": "🕋", - "kangaroo": "🦘", - "key": "🔑", - "keyboard": "⌨", - "keycap_#": "#️⃣", - "keycap_*": "*️⃣", - "keycap_0": "0️⃣", - "keycap_1": "1️⃣", - "keycap_10": "🔟", - "keycap_2": "2️⃣", - "keycap_3": "3️⃣", - "keycap_4": "4️⃣", - "keycap_5": "5️⃣", - "keycap_6": "6️⃣", - "keycap_7": "7️⃣", - "keycap_8": "8️⃣", - "keycap_9": "9️⃣", - "kick_scooter": "🛴", - "kimono": "👘", - "kiss": "💋", - "kiss_man_man": "👨\u200d❤️\u200d💋\u200d👨", - "kiss_mark": "💋", - "kiss_woman_man": "👩\u200d❤️\u200d💋\u200d👨", - "kiss_woman_woman": "👩\u200d❤️\u200d💋\u200d👩", - "kissing_cat_face": "😽", - "kissing_face": "😗", - "kissing_face_with_closed_eyes": "😚", - "kissing_face_with_smiling_eyes": "😙", - "kitchen_knife": "🔪", - "kite": "🪁", - "kiwi_fruit": "🥝", - "koala": "🐨", - "lab_coat": "🥼", - "label": "🏷", - "lacrosse": "🥍", - "lady_beetle": "🐞", - "laptop_computer": "💻", - "large_blue_diamond": "🔷", - "large_orange_diamond": "🔶", - "last_quarter_moon": "🌗", - "last_quarter_moon_face": "🌜", - "last_track_button": "⏮", - "latin_cross": "✝", - "leaf_fluttering_in_wind": "🍃", - "leafy_green": "🥬", - "ledger": "📒", - "left-facing_fist": "🤛", - "left-facing_fist_dark_skin_tone": "🤛🏿", - "left-facing_fist_light_skin_tone": "🤛🏻", - "left-facing_fist_medium-dark_skin_tone": "🤛🏾", - "left-facing_fist_medium-light_skin_tone": "🤛🏼", - "left-facing_fist_medium_skin_tone": "🤛🏽", - "left-right_arrow": "↔", - "left_arrow": "⬅", - "left_arrow_curving_right": "↪", - "left_luggage": "🛅", - "left_speech_bubble": "🗨", - "leg": "🦵", - "lemon": "🍋", - "leopard": "🐆", - "level_slider": "🎚", - "light_bulb": "💡", - "light_rail": "🚈", - "link": "🔗", - "linked_paperclips": "🖇", - "lion_face": "🦁", - "lipstick": "💄", - "litter_in_bin_sign": "🚮", - "lizard": "🦎", - "llama": "🦙", - "lobster": "🦞", - "locked": "🔒", - "locked_with_key": "🔐", - "locked_with_pen": "🔏", - "locomotive": "🚂", - "lollipop": "🍭", - "lotion_bottle": "🧴", - "loudly_crying_face": "😭", - "loudspeaker": "📢", - "love-you_gesture": "🤟", - "love-you_gesture_dark_skin_tone": "🤟🏿", - "love-you_gesture_light_skin_tone": "🤟🏻", - "love-you_gesture_medium-dark_skin_tone": "🤟🏾", - "love-you_gesture_medium-light_skin_tone": "🤟🏼", - "love-you_gesture_medium_skin_tone": "🤟🏽", - "love_hotel": "🏩", - "love_letter": "💌", - "luggage": "🧳", - "lying_face": "🤥", - "mage": "🧙", - "mage_dark_skin_tone": "🧙🏿", - "mage_light_skin_tone": "🧙🏻", - "mage_medium-dark_skin_tone": "🧙🏾", - "mage_medium-light_skin_tone": "🧙🏼", - "mage_medium_skin_tone": "🧙🏽", - "magnet": "🧲", - "magnifying_glass_tilted_left": "🔍", - "magnifying_glass_tilted_right": "🔎", - "mahjong_red_dragon": "🀄", - "male_sign": "♂", - "man": "👨", - "man_and_woman_holding_hands": "👫", - "man_artist": "👨\u200d🎨", - "man_artist_dark_skin_tone": "👨🏿\u200d🎨", - "man_artist_light_skin_tone": "👨🏻\u200d🎨", - "man_artist_medium-dark_skin_tone": "👨🏾\u200d🎨", - "man_artist_medium-light_skin_tone": "👨🏼\u200d🎨", - "man_artist_medium_skin_tone": "👨🏽\u200d🎨", - "man_astronaut": "👨\u200d🚀", - "man_astronaut_dark_skin_tone": "👨🏿\u200d🚀", - 
"man_astronaut_light_skin_tone": "👨🏻\u200d🚀", - "man_astronaut_medium-dark_skin_tone": "👨🏾\u200d🚀", - "man_astronaut_medium-light_skin_tone": "👨🏼\u200d🚀", - "man_astronaut_medium_skin_tone": "👨🏽\u200d🚀", - "man_biking": "🚴\u200d♂️", - "man_biking_dark_skin_tone": "🚴🏿\u200d♂️", - "man_biking_light_skin_tone": "🚴🏻\u200d♂️", - "man_biking_medium-dark_skin_tone": "🚴🏾\u200d♂️", - "man_biking_medium-light_skin_tone": "🚴🏼\u200d♂️", - "man_biking_medium_skin_tone": "🚴🏽\u200d♂️", - "man_bouncing_ball": "⛹️\u200d♂️", - "man_bouncing_ball_dark_skin_tone": "⛹🏿\u200d♂️", - "man_bouncing_ball_light_skin_tone": "⛹🏻\u200d♂️", - "man_bouncing_ball_medium-dark_skin_tone": "⛹🏾\u200d♂️", - "man_bouncing_ball_medium-light_skin_tone": "⛹🏼\u200d♂️", - "man_bouncing_ball_medium_skin_tone": "⛹🏽\u200d♂️", - "man_bowing": "🙇\u200d♂️", - "man_bowing_dark_skin_tone": "🙇🏿\u200d♂️", - "man_bowing_light_skin_tone": "🙇🏻\u200d♂️", - "man_bowing_medium-dark_skin_tone": "🙇🏾\u200d♂️", - "man_bowing_medium-light_skin_tone": "🙇🏼\u200d♂️", - "man_bowing_medium_skin_tone": "🙇🏽\u200d♂️", - "man_cartwheeling": "🤸\u200d♂️", - "man_cartwheeling_dark_skin_tone": "🤸🏿\u200d♂️", - "man_cartwheeling_light_skin_tone": "🤸🏻\u200d♂️", - "man_cartwheeling_medium-dark_skin_tone": "🤸🏾\u200d♂️", - "man_cartwheeling_medium-light_skin_tone": "🤸🏼\u200d♂️", - "man_cartwheeling_medium_skin_tone": "🤸🏽\u200d♂️", - "man_climbing": "🧗\u200d♂️", - "man_climbing_dark_skin_tone": "🧗🏿\u200d♂️", - "man_climbing_light_skin_tone": "🧗🏻\u200d♂️", - "man_climbing_medium-dark_skin_tone": "🧗🏾\u200d♂️", - "man_climbing_medium-light_skin_tone": "🧗🏼\u200d♂️", - "man_climbing_medium_skin_tone": "🧗🏽\u200d♂️", - "man_construction_worker": "👷\u200d♂️", - "man_construction_worker_dark_skin_tone": "👷🏿\u200d♂️", - "man_construction_worker_light_skin_tone": "👷🏻\u200d♂️", - "man_construction_worker_medium-dark_skin_tone": "👷🏾\u200d♂️", - "man_construction_worker_medium-light_skin_tone": "👷🏼\u200d♂️", - "man_construction_worker_medium_skin_tone": "👷🏽\u200d♂️", - "man_cook": "👨\u200d🍳", - "man_cook_dark_skin_tone": "👨🏿\u200d🍳", - "man_cook_light_skin_tone": "👨🏻\u200d🍳", - "man_cook_medium-dark_skin_tone": "👨🏾\u200d🍳", - "man_cook_medium-light_skin_tone": "👨🏼\u200d🍳", - "man_cook_medium_skin_tone": "👨🏽\u200d🍳", - "man_dancing": "🕺", - "man_dancing_dark_skin_tone": "🕺🏿", - "man_dancing_light_skin_tone": "🕺🏻", - "man_dancing_medium-dark_skin_tone": "🕺🏾", - "man_dancing_medium-light_skin_tone": "🕺🏼", - "man_dancing_medium_skin_tone": "🕺🏽", - "man_dark_skin_tone": "👨🏿", - "man_detective": "🕵️\u200d♂️", - "man_detective_dark_skin_tone": "🕵🏿\u200d♂️", - "man_detective_light_skin_tone": "🕵🏻\u200d♂️", - "man_detective_medium-dark_skin_tone": "🕵🏾\u200d♂️", - "man_detective_medium-light_skin_tone": "🕵🏼\u200d♂️", - "man_detective_medium_skin_tone": "🕵🏽\u200d♂️", - "man_elf": "🧝\u200d♂️", - "man_elf_dark_skin_tone": "🧝🏿\u200d♂️", - "man_elf_light_skin_tone": "🧝🏻\u200d♂️", - "man_elf_medium-dark_skin_tone": "🧝🏾\u200d♂️", - "man_elf_medium-light_skin_tone": "🧝🏼\u200d♂️", - "man_elf_medium_skin_tone": "🧝🏽\u200d♂️", - "man_facepalming": "🤦\u200d♂️", - "man_facepalming_dark_skin_tone": "🤦🏿\u200d♂️", - "man_facepalming_light_skin_tone": "🤦🏻\u200d♂️", - "man_facepalming_medium-dark_skin_tone": "🤦🏾\u200d♂️", - "man_facepalming_medium-light_skin_tone": "🤦🏼\u200d♂️", - "man_facepalming_medium_skin_tone": "🤦🏽\u200d♂️", - "man_factory_worker": "👨\u200d🏭", - "man_factory_worker_dark_skin_tone": "👨🏿\u200d🏭", - "man_factory_worker_light_skin_tone": "👨🏻\u200d🏭", - "man_factory_worker_medium-dark_skin_tone": 
"👨🏾\u200d🏭", - "man_factory_worker_medium-light_skin_tone": "👨🏼\u200d🏭", - "man_factory_worker_medium_skin_tone": "👨🏽\u200d🏭", - "man_fairy": "🧚\u200d♂️", - "man_fairy_dark_skin_tone": "🧚🏿\u200d♂️", - "man_fairy_light_skin_tone": "🧚🏻\u200d♂️", - "man_fairy_medium-dark_skin_tone": "🧚🏾\u200d♂️", - "man_fairy_medium-light_skin_tone": "🧚🏼\u200d♂️", - "man_fairy_medium_skin_tone": "🧚🏽\u200d♂️", - "man_farmer": "👨\u200d🌾", - "man_farmer_dark_skin_tone": "👨🏿\u200d🌾", - "man_farmer_light_skin_tone": "👨🏻\u200d🌾", - "man_farmer_medium-dark_skin_tone": "👨🏾\u200d🌾", - "man_farmer_medium-light_skin_tone": "👨🏼\u200d🌾", - "man_farmer_medium_skin_tone": "👨🏽\u200d🌾", - "man_firefighter": "👨\u200d🚒", - "man_firefighter_dark_skin_tone": "👨🏿\u200d🚒", - "man_firefighter_light_skin_tone": "👨🏻\u200d🚒", - "man_firefighter_medium-dark_skin_tone": "👨🏾\u200d🚒", - "man_firefighter_medium-light_skin_tone": "👨🏼\u200d🚒", - "man_firefighter_medium_skin_tone": "👨🏽\u200d🚒", - "man_frowning": "🙍\u200d♂️", - "man_frowning_dark_skin_tone": "🙍🏿\u200d♂️", - "man_frowning_light_skin_tone": "🙍🏻\u200d♂️", - "man_frowning_medium-dark_skin_tone": "🙍🏾\u200d♂️", - "man_frowning_medium-light_skin_tone": "🙍🏼\u200d♂️", - "man_frowning_medium_skin_tone": "🙍🏽\u200d♂️", - "man_genie": "🧞\u200d♂️", - "man_gesturing_no": "🙅\u200d♂️", - "man_gesturing_no_dark_skin_tone": "🙅🏿\u200d♂️", - "man_gesturing_no_light_skin_tone": "🙅🏻\u200d♂️", - "man_gesturing_no_medium-dark_skin_tone": "🙅🏾\u200d♂️", - "man_gesturing_no_medium-light_skin_tone": "🙅🏼\u200d♂️", - "man_gesturing_no_medium_skin_tone": "🙅🏽\u200d♂️", - "man_gesturing_ok": "🙆\u200d♂️", - "man_gesturing_ok_dark_skin_tone": "🙆🏿\u200d♂️", - "man_gesturing_ok_light_skin_tone": "🙆🏻\u200d♂️", - "man_gesturing_ok_medium-dark_skin_tone": "🙆🏾\u200d♂️", - "man_gesturing_ok_medium-light_skin_tone": "🙆🏼\u200d♂️", - "man_gesturing_ok_medium_skin_tone": "🙆🏽\u200d♂️", - "man_getting_haircut": "💇\u200d♂️", - "man_getting_haircut_dark_skin_tone": "💇🏿\u200d♂️", - "man_getting_haircut_light_skin_tone": "💇🏻\u200d♂️", - "man_getting_haircut_medium-dark_skin_tone": "💇🏾\u200d♂️", - "man_getting_haircut_medium-light_skin_tone": "💇🏼\u200d♂️", - "man_getting_haircut_medium_skin_tone": "💇🏽\u200d♂️", - "man_getting_massage": "💆\u200d♂️", - "man_getting_massage_dark_skin_tone": "💆🏿\u200d♂️", - "man_getting_massage_light_skin_tone": "💆🏻\u200d♂️", - "man_getting_massage_medium-dark_skin_tone": "💆🏾\u200d♂️", - "man_getting_massage_medium-light_skin_tone": "💆🏼\u200d♂️", - "man_getting_massage_medium_skin_tone": "💆🏽\u200d♂️", - "man_golfing": "🏌️\u200d♂️", - "man_golfing_dark_skin_tone": "🏌🏿\u200d♂️", - "man_golfing_light_skin_tone": "🏌🏻\u200d♂️", - "man_golfing_medium-dark_skin_tone": "🏌🏾\u200d♂️", - "man_golfing_medium-light_skin_tone": "🏌🏼\u200d♂️", - "man_golfing_medium_skin_tone": "🏌🏽\u200d♂️", - "man_guard": "💂\u200d♂️", - "man_guard_dark_skin_tone": "💂🏿\u200d♂️", - "man_guard_light_skin_tone": "💂🏻\u200d♂️", - "man_guard_medium-dark_skin_tone": "💂🏾\u200d♂️", - "man_guard_medium-light_skin_tone": "💂🏼\u200d♂️", - "man_guard_medium_skin_tone": "💂🏽\u200d♂️", - "man_health_worker": "👨\u200d⚕️", - "man_health_worker_dark_skin_tone": "👨🏿\u200d⚕️", - "man_health_worker_light_skin_tone": "👨🏻\u200d⚕️", - "man_health_worker_medium-dark_skin_tone": "👨🏾\u200d⚕️", - "man_health_worker_medium-light_skin_tone": "👨🏼\u200d⚕️", - "man_health_worker_medium_skin_tone": "👨🏽\u200d⚕️", - "man_in_lotus_position": "🧘\u200d♂️", - "man_in_lotus_position_dark_skin_tone": "🧘🏿\u200d♂️", - "man_in_lotus_position_light_skin_tone": "🧘🏻\u200d♂️", - 
"man_in_lotus_position_medium-dark_skin_tone": "🧘🏾\u200d♂️", - "man_in_lotus_position_medium-light_skin_tone": "🧘🏼\u200d♂️", - "man_in_lotus_position_medium_skin_tone": "🧘🏽\u200d♂️", - "man_in_manual_wheelchair": "👨\u200d🦽", - "man_in_motorized_wheelchair": "👨\u200d🦼", - "man_in_steamy_room": "🧖\u200d♂️", - "man_in_steamy_room_dark_skin_tone": "🧖🏿\u200d♂️", - "man_in_steamy_room_light_skin_tone": "🧖🏻\u200d♂️", - "man_in_steamy_room_medium-dark_skin_tone": "🧖🏾\u200d♂️", - "man_in_steamy_room_medium-light_skin_tone": "🧖🏼\u200d♂️", - "man_in_steamy_room_medium_skin_tone": "🧖🏽\u200d♂️", - "man_in_suit_levitating": "🕴", - "man_in_suit_levitating_dark_skin_tone": "🕴🏿", - "man_in_suit_levitating_light_skin_tone": "🕴🏻", - "man_in_suit_levitating_medium-dark_skin_tone": "🕴🏾", - "man_in_suit_levitating_medium-light_skin_tone": "🕴🏼", - "man_in_suit_levitating_medium_skin_tone": "🕴🏽", - "man_in_tuxedo": "🤵", - "man_in_tuxedo_dark_skin_tone": "🤵🏿", - "man_in_tuxedo_light_skin_tone": "🤵🏻", - "man_in_tuxedo_medium-dark_skin_tone": "🤵🏾", - "man_in_tuxedo_medium-light_skin_tone": "🤵🏼", - "man_in_tuxedo_medium_skin_tone": "🤵🏽", - "man_judge": "👨\u200d⚖️", - "man_judge_dark_skin_tone": "👨🏿\u200d⚖️", - "man_judge_light_skin_tone": "👨🏻\u200d⚖️", - "man_judge_medium-dark_skin_tone": "👨🏾\u200d⚖️", - "man_judge_medium-light_skin_tone": "👨🏼\u200d⚖️", - "man_judge_medium_skin_tone": "👨🏽\u200d⚖️", - "man_juggling": "🤹\u200d♂️", - "man_juggling_dark_skin_tone": "🤹🏿\u200d♂️", - "man_juggling_light_skin_tone": "🤹🏻\u200d♂️", - "man_juggling_medium-dark_skin_tone": "🤹🏾\u200d♂️", - "man_juggling_medium-light_skin_tone": "🤹🏼\u200d♂️", - "man_juggling_medium_skin_tone": "🤹🏽\u200d♂️", - "man_lifting_weights": "🏋️\u200d♂️", - "man_lifting_weights_dark_skin_tone": "🏋🏿\u200d♂️", - "man_lifting_weights_light_skin_tone": "🏋🏻\u200d♂️", - "man_lifting_weights_medium-dark_skin_tone": "🏋🏾\u200d♂️", - "man_lifting_weights_medium-light_skin_tone": "🏋🏼\u200d♂️", - "man_lifting_weights_medium_skin_tone": "🏋🏽\u200d♂️", - "man_light_skin_tone": "👨🏻", - "man_mage": "🧙\u200d♂️", - "man_mage_dark_skin_tone": "🧙🏿\u200d♂️", - "man_mage_light_skin_tone": "🧙🏻\u200d♂️", - "man_mage_medium-dark_skin_tone": "🧙🏾\u200d♂️", - "man_mage_medium-light_skin_tone": "🧙🏼\u200d♂️", - "man_mage_medium_skin_tone": "🧙🏽\u200d♂️", - "man_mechanic": "👨\u200d🔧", - "man_mechanic_dark_skin_tone": "👨🏿\u200d🔧", - "man_mechanic_light_skin_tone": "👨🏻\u200d🔧", - "man_mechanic_medium-dark_skin_tone": "👨🏾\u200d🔧", - "man_mechanic_medium-light_skin_tone": "👨🏼\u200d🔧", - "man_mechanic_medium_skin_tone": "👨🏽\u200d🔧", - "man_medium-dark_skin_tone": "👨🏾", - "man_medium-light_skin_tone": "👨🏼", - "man_medium_skin_tone": "👨🏽", - "man_mountain_biking": "🚵\u200d♂️", - "man_mountain_biking_dark_skin_tone": "🚵🏿\u200d♂️", - "man_mountain_biking_light_skin_tone": "🚵🏻\u200d♂️", - "man_mountain_biking_medium-dark_skin_tone": "🚵🏾\u200d♂️", - "man_mountain_biking_medium-light_skin_tone": "🚵🏼\u200d♂️", - "man_mountain_biking_medium_skin_tone": "🚵🏽\u200d♂️", - "man_office_worker": "👨\u200d💼", - "man_office_worker_dark_skin_tone": "👨🏿\u200d💼", - "man_office_worker_light_skin_tone": "👨🏻\u200d💼", - "man_office_worker_medium-dark_skin_tone": "👨🏾\u200d💼", - "man_office_worker_medium-light_skin_tone": "👨🏼\u200d💼", - "man_office_worker_medium_skin_tone": "👨🏽\u200d💼", - "man_pilot": "👨\u200d✈️", - "man_pilot_dark_skin_tone": "👨🏿\u200d✈️", - "man_pilot_light_skin_tone": "👨🏻\u200d✈️", - "man_pilot_medium-dark_skin_tone": "👨🏾\u200d✈️", - "man_pilot_medium-light_skin_tone": "👨🏼\u200d✈️", - 
"man_pilot_medium_skin_tone": "👨🏽\u200d✈️", - "man_playing_handball": "🤾\u200d♂️", - "man_playing_handball_dark_skin_tone": "🤾🏿\u200d♂️", - "man_playing_handball_light_skin_tone": "🤾🏻\u200d♂️", - "man_playing_handball_medium-dark_skin_tone": "🤾🏾\u200d♂️", - "man_playing_handball_medium-light_skin_tone": "🤾🏼\u200d♂️", - "man_playing_handball_medium_skin_tone": "🤾🏽\u200d♂️", - "man_playing_water_polo": "🤽\u200d♂️", - "man_playing_water_polo_dark_skin_tone": "🤽🏿\u200d♂️", - "man_playing_water_polo_light_skin_tone": "🤽🏻\u200d♂️", - "man_playing_water_polo_medium-dark_skin_tone": "🤽🏾\u200d♂️", - "man_playing_water_polo_medium-light_skin_tone": "🤽🏼\u200d♂️", - "man_playing_water_polo_medium_skin_tone": "🤽🏽\u200d♂️", - "man_police_officer": "👮\u200d♂️", - "man_police_officer_dark_skin_tone": "👮🏿\u200d♂️", - "man_police_officer_light_skin_tone": "👮🏻\u200d♂️", - "man_police_officer_medium-dark_skin_tone": "👮🏾\u200d♂️", - "man_police_officer_medium-light_skin_tone": "👮🏼\u200d♂️", - "man_police_officer_medium_skin_tone": "👮🏽\u200d♂️", - "man_pouting": "🙎\u200d♂️", - "man_pouting_dark_skin_tone": "🙎🏿\u200d♂️", - "man_pouting_light_skin_tone": "🙎🏻\u200d♂️", - "man_pouting_medium-dark_skin_tone": "🙎🏾\u200d♂️", - "man_pouting_medium-light_skin_tone": "🙎🏼\u200d♂️", - "man_pouting_medium_skin_tone": "🙎🏽\u200d♂️", - "man_raising_hand": "🙋\u200d♂️", - "man_raising_hand_dark_skin_tone": "🙋🏿\u200d♂️", - "man_raising_hand_light_skin_tone": "🙋🏻\u200d♂️", - "man_raising_hand_medium-dark_skin_tone": "🙋🏾\u200d♂️", - "man_raising_hand_medium-light_skin_tone": "🙋🏼\u200d♂️", - "man_raising_hand_medium_skin_tone": "🙋🏽\u200d♂️", - "man_rowing_boat": "🚣\u200d♂️", - "man_rowing_boat_dark_skin_tone": "🚣🏿\u200d♂️", - "man_rowing_boat_light_skin_tone": "🚣🏻\u200d♂️", - "man_rowing_boat_medium-dark_skin_tone": "🚣🏾\u200d♂️", - "man_rowing_boat_medium-light_skin_tone": "🚣🏼\u200d♂️", - "man_rowing_boat_medium_skin_tone": "🚣🏽\u200d♂️", - "man_running": "🏃\u200d♂️", - "man_running_dark_skin_tone": "🏃🏿\u200d♂️", - "man_running_light_skin_tone": "🏃🏻\u200d♂️", - "man_running_medium-dark_skin_tone": "🏃🏾\u200d♂️", - "man_running_medium-light_skin_tone": "🏃🏼\u200d♂️", - "man_running_medium_skin_tone": "🏃🏽\u200d♂️", - "man_scientist": "👨\u200d🔬", - "man_scientist_dark_skin_tone": "👨🏿\u200d🔬", - "man_scientist_light_skin_tone": "👨🏻\u200d🔬", - "man_scientist_medium-dark_skin_tone": "👨🏾\u200d🔬", - "man_scientist_medium-light_skin_tone": "👨🏼\u200d🔬", - "man_scientist_medium_skin_tone": "👨🏽\u200d🔬", - "man_shrugging": "🤷\u200d♂️", - "man_shrugging_dark_skin_tone": "🤷🏿\u200d♂️", - "man_shrugging_light_skin_tone": "🤷🏻\u200d♂️", - "man_shrugging_medium-dark_skin_tone": "🤷🏾\u200d♂️", - "man_shrugging_medium-light_skin_tone": "🤷🏼\u200d♂️", - "man_shrugging_medium_skin_tone": "🤷🏽\u200d♂️", - "man_singer": "👨\u200d🎤", - "man_singer_dark_skin_tone": "👨🏿\u200d🎤", - "man_singer_light_skin_tone": "👨🏻\u200d🎤", - "man_singer_medium-dark_skin_tone": "👨🏾\u200d🎤", - "man_singer_medium-light_skin_tone": "👨🏼\u200d🎤", - "man_singer_medium_skin_tone": "👨🏽\u200d🎤", - "man_student": "👨\u200d🎓", - "man_student_dark_skin_tone": "👨🏿\u200d🎓", - "man_student_light_skin_tone": "👨🏻\u200d🎓", - "man_student_medium-dark_skin_tone": "👨🏾\u200d🎓", - "man_student_medium-light_skin_tone": "👨🏼\u200d🎓", - "man_student_medium_skin_tone": "👨🏽\u200d🎓", - "man_surfing": "🏄\u200d♂️", - "man_surfing_dark_skin_tone": "🏄🏿\u200d♂️", - "man_surfing_light_skin_tone": "🏄🏻\u200d♂️", - "man_surfing_medium-dark_skin_tone": "🏄🏾\u200d♂️", - "man_surfing_medium-light_skin_tone": "🏄🏼\u200d♂️", - 
"man_surfing_medium_skin_tone": "🏄🏽\u200d♂️", - "man_swimming": "🏊\u200d♂️", - "man_swimming_dark_skin_tone": "🏊🏿\u200d♂️", - "man_swimming_light_skin_tone": "🏊🏻\u200d♂️", - "man_swimming_medium-dark_skin_tone": "🏊🏾\u200d♂️", - "man_swimming_medium-light_skin_tone": "🏊🏼\u200d♂️", - "man_swimming_medium_skin_tone": "🏊🏽\u200d♂️", - "man_teacher": "👨\u200d🏫", - "man_teacher_dark_skin_tone": "👨🏿\u200d🏫", - "man_teacher_light_skin_tone": "👨🏻\u200d🏫", - "man_teacher_medium-dark_skin_tone": "👨🏾\u200d🏫", - "man_teacher_medium-light_skin_tone": "👨🏼\u200d🏫", - "man_teacher_medium_skin_tone": "👨🏽\u200d🏫", - "man_technologist": "👨\u200d💻", - "man_technologist_dark_skin_tone": "👨🏿\u200d💻", - "man_technologist_light_skin_tone": "👨🏻\u200d💻", - "man_technologist_medium-dark_skin_tone": "👨🏾\u200d💻", - "man_technologist_medium-light_skin_tone": "👨🏼\u200d💻", - "man_technologist_medium_skin_tone": "👨🏽\u200d💻", - "man_tipping_hand": "💁\u200d♂️", - "man_tipping_hand_dark_skin_tone": "💁🏿\u200d♂️", - "man_tipping_hand_light_skin_tone": "💁🏻\u200d♂️", - "man_tipping_hand_medium-dark_skin_tone": "💁🏾\u200d♂️", - "man_tipping_hand_medium-light_skin_tone": "💁🏼\u200d♂️", - "man_tipping_hand_medium_skin_tone": "💁🏽\u200d♂️", - "man_vampire": "🧛\u200d♂️", - "man_vampire_dark_skin_tone": "🧛🏿\u200d♂️", - "man_vampire_light_skin_tone": "🧛🏻\u200d♂️", - "man_vampire_medium-dark_skin_tone": "🧛🏾\u200d♂️", - "man_vampire_medium-light_skin_tone": "🧛🏼\u200d♂️", - "man_vampire_medium_skin_tone": "🧛🏽\u200d♂️", - "man_walking": "🚶\u200d♂️", - "man_walking_dark_skin_tone": "🚶🏿\u200d♂️", - "man_walking_light_skin_tone": "🚶🏻\u200d♂️", - "man_walking_medium-dark_skin_tone": "🚶🏾\u200d♂️", - "man_walking_medium-light_skin_tone": "🚶🏼\u200d♂️", - "man_walking_medium_skin_tone": "🚶🏽\u200d♂️", - "man_wearing_turban": "👳\u200d♂️", - "man_wearing_turban_dark_skin_tone": "👳🏿\u200d♂️", - "man_wearing_turban_light_skin_tone": "👳🏻\u200d♂️", - "man_wearing_turban_medium-dark_skin_tone": "👳🏾\u200d♂️", - "man_wearing_turban_medium-light_skin_tone": "👳🏼\u200d♂️", - "man_wearing_turban_medium_skin_tone": "👳🏽\u200d♂️", - "man_with_probing_cane": "👨\u200d🦯", - "man_with_chinese_cap": "👲", - "man_with_chinese_cap_dark_skin_tone": "👲🏿", - "man_with_chinese_cap_light_skin_tone": "👲🏻", - "man_with_chinese_cap_medium-dark_skin_tone": "👲🏾", - "man_with_chinese_cap_medium-light_skin_tone": "👲🏼", - "man_with_chinese_cap_medium_skin_tone": "👲🏽", - "man_zombie": "🧟\u200d♂️", - "mango": "🥭", - "mantelpiece_clock": "🕰", - "manual_wheelchair": "🦽", - "man’s_shoe": "👞", - "map_of_japan": "🗾", - "maple_leaf": "🍁", - "martial_arts_uniform": "🥋", - "mate": "🧉", - "meat_on_bone": "🍖", - "mechanical_arm": "🦾", - "mechanical_leg": "🦿", - "medical_symbol": "⚕", - "megaphone": "📣", - "melon": "🍈", - "memo": "📝", - "men_with_bunny_ears": "👯\u200d♂️", - "men_wrestling": "🤼\u200d♂️", - "menorah": "🕎", - "men’s_room": "🚹", - "mermaid": "🧜\u200d♀️", - "mermaid_dark_skin_tone": "🧜🏿\u200d♀️", - "mermaid_light_skin_tone": "🧜🏻\u200d♀️", - "mermaid_medium-dark_skin_tone": "🧜🏾\u200d♀️", - "mermaid_medium-light_skin_tone": "🧜🏼\u200d♀️", - "mermaid_medium_skin_tone": "🧜🏽\u200d♀️", - "merman": "🧜\u200d♂️", - "merman_dark_skin_tone": "🧜🏿\u200d♂️", - "merman_light_skin_tone": "🧜🏻\u200d♂️", - "merman_medium-dark_skin_tone": "🧜🏾\u200d♂️", - "merman_medium-light_skin_tone": "🧜🏼\u200d♂️", - "merman_medium_skin_tone": "🧜🏽\u200d♂️", - "merperson": "🧜", - "merperson_dark_skin_tone": "🧜🏿", - "merperson_light_skin_tone": "🧜🏻", - "merperson_medium-dark_skin_tone": "🧜🏾", - 
"merperson_medium-light_skin_tone": "🧜🏼", - "merperson_medium_skin_tone": "🧜🏽", - "metro": "🚇", - "microbe": "🦠", - "microphone": "🎤", - "microscope": "🔬", - "middle_finger": "🖕", - "middle_finger_dark_skin_tone": "🖕🏿", - "middle_finger_light_skin_tone": "🖕🏻", - "middle_finger_medium-dark_skin_tone": "🖕🏾", - "middle_finger_medium-light_skin_tone": "🖕🏼", - "middle_finger_medium_skin_tone": "🖕🏽", - "military_medal": "🎖", - "milky_way": "🌌", - "minibus": "🚐", - "moai": "🗿", - "mobile_phone": "📱", - "mobile_phone_off": "📴", - "mobile_phone_with_arrow": "📲", - "money-mouth_face": "🤑", - "money_bag": "💰", - "money_with_wings": "💸", - "monkey": "🐒", - "monkey_face": "🐵", - "monorail": "🚝", - "moon_cake": "🥮", - "moon_viewing_ceremony": "🎑", - "mosque": "🕌", - "mosquito": "🦟", - "motor_boat": "🛥", - "motor_scooter": "🛵", - "motorcycle": "🏍", - "motorized_wheelchair": "🦼", - "motorway": "🛣", - "mount_fuji": "🗻", - "mountain": "⛰", - "mountain_cableway": "🚠", - "mountain_railway": "🚞", - "mouse": "🐭", - "mouse_face": "🐭", - "mouth": "👄", - "movie_camera": "🎥", - "mushroom": "🍄", - "musical_keyboard": "🎹", - "musical_note": "🎵", - "musical_notes": "🎶", - "musical_score": "🎼", - "muted_speaker": "🔇", - "nail_polish": "💅", - "nail_polish_dark_skin_tone": "💅🏿", - "nail_polish_light_skin_tone": "💅🏻", - "nail_polish_medium-dark_skin_tone": "💅🏾", - "nail_polish_medium-light_skin_tone": "💅🏼", - "nail_polish_medium_skin_tone": "💅🏽", - "name_badge": "📛", - "national_park": "🏞", - "nauseated_face": "🤢", - "nazar_amulet": "🧿", - "necktie": "👔", - "nerd_face": "🤓", - "neutral_face": "😐", - "new_moon": "🌑", - "new_moon_face": "🌚", - "newspaper": "📰", - "next_track_button": "⏭", - "night_with_stars": "🌃", - "nine-thirty": "🕤", - "nine_o’clock": "🕘", - "no_bicycles": "🚳", - "no_entry": "⛔", - "no_littering": "🚯", - "no_mobile_phones": "📵", - "no_one_under_eighteen": "🔞", - "no_pedestrians": "🚷", - "no_smoking": "🚭", - "non-potable_water": "🚱", - "nose": "👃", - "nose_dark_skin_tone": "👃🏿", - "nose_light_skin_tone": "👃🏻", - "nose_medium-dark_skin_tone": "👃🏾", - "nose_medium-light_skin_tone": "👃🏼", - "nose_medium_skin_tone": "👃🏽", - "notebook": "📓", - "notebook_with_decorative_cover": "📔", - "nut_and_bolt": "🔩", - "octopus": "🐙", - "oden": "🍢", - "office_building": "🏢", - "ogre": "👹", - "oil_drum": "🛢", - "old_key": "🗝", - "old_man": "👴", - "old_man_dark_skin_tone": "👴🏿", - "old_man_light_skin_tone": "👴🏻", - "old_man_medium-dark_skin_tone": "👴🏾", - "old_man_medium-light_skin_tone": "👴🏼", - "old_man_medium_skin_tone": "👴🏽", - "old_woman": "👵", - "old_woman_dark_skin_tone": "👵🏿", - "old_woman_light_skin_tone": "👵🏻", - "old_woman_medium-dark_skin_tone": "👵🏾", - "old_woman_medium-light_skin_tone": "👵🏼", - "old_woman_medium_skin_tone": "👵🏽", - "older_adult": "🧓", - "older_adult_dark_skin_tone": "🧓🏿", - "older_adult_light_skin_tone": "🧓🏻", - "older_adult_medium-dark_skin_tone": "🧓🏾", - "older_adult_medium-light_skin_tone": "🧓🏼", - "older_adult_medium_skin_tone": "🧓🏽", - "om": "🕉", - "oncoming_automobile": "🚘", - "oncoming_bus": "🚍", - "oncoming_fist": "👊", - "oncoming_fist_dark_skin_tone": "👊🏿", - "oncoming_fist_light_skin_tone": "👊🏻", - "oncoming_fist_medium-dark_skin_tone": "👊🏾", - "oncoming_fist_medium-light_skin_tone": "👊🏼", - "oncoming_fist_medium_skin_tone": "👊🏽", - "oncoming_police_car": "🚔", - "oncoming_taxi": "🚖", - "one-piece_swimsuit": "🩱", - "one-thirty": "🕜", - "one_o’clock": "🕐", - "onion": "🧅", - "open_book": "📖", - "open_file_folder": "📂", - "open_hands": "👐", - "open_hands_dark_skin_tone": "👐🏿", - 
"open_hands_light_skin_tone": "👐🏻", - "open_hands_medium-dark_skin_tone": "👐🏾", - "open_hands_medium-light_skin_tone": "👐🏼", - "open_hands_medium_skin_tone": "👐🏽", - "open_mailbox_with_lowered_flag": "📭", - "open_mailbox_with_raised_flag": "📬", - "optical_disk": "💿", - "orange_book": "📙", - "orange_circle": "🟠", - "orange_heart": "🧡", - "orange_square": "🟧", - "orangutan": "🦧", - "orthodox_cross": "☦", - "otter": "🦦", - "outbox_tray": "📤", - "owl": "🦉", - "ox": "🐂", - "oyster": "🦪", - "package": "📦", - "page_facing_up": "📄", - "page_with_curl": "📃", - "pager": "📟", - "paintbrush": "🖌", - "palm_tree": "🌴", - "palms_up_together": "🤲", - "palms_up_together_dark_skin_tone": "🤲🏿", - "palms_up_together_light_skin_tone": "🤲🏻", - "palms_up_together_medium-dark_skin_tone": "🤲🏾", - "palms_up_together_medium-light_skin_tone": "🤲🏼", - "palms_up_together_medium_skin_tone": "🤲🏽", - "pancakes": "🥞", - "panda_face": "🐼", - "paperclip": "📎", - "parrot": "🦜", - "part_alternation_mark": "〽", - "party_popper": "🎉", - "partying_face": "🥳", - "passenger_ship": "🛳", - "passport_control": "🛂", - "pause_button": "⏸", - "paw_prints": "🐾", - "peace_symbol": "☮", - "peach": "🍑", - "peacock": "🦚", - "peanuts": "🥜", - "pear": "🍐", - "pen": "🖊", - "pencil": "📝", - "penguin": "🐧", - "pensive_face": "😔", - "people_holding_hands": "🧑\u200d🤝\u200d🧑", - "people_with_bunny_ears": "👯", - "people_wrestling": "🤼", - "performing_arts": "🎭", - "persevering_face": "😣", - "person_biking": "🚴", - "person_biking_dark_skin_tone": "🚴🏿", - "person_biking_light_skin_tone": "🚴🏻", - "person_biking_medium-dark_skin_tone": "🚴🏾", - "person_biking_medium-light_skin_tone": "🚴🏼", - "person_biking_medium_skin_tone": "🚴🏽", - "person_bouncing_ball": "⛹", - "person_bouncing_ball_dark_skin_tone": "⛹🏿", - "person_bouncing_ball_light_skin_tone": "⛹🏻", - "person_bouncing_ball_medium-dark_skin_tone": "⛹🏾", - "person_bouncing_ball_medium-light_skin_tone": "⛹🏼", - "person_bouncing_ball_medium_skin_tone": "⛹🏽", - "person_bowing": "🙇", - "person_bowing_dark_skin_tone": "🙇🏿", - "person_bowing_light_skin_tone": "🙇🏻", - "person_bowing_medium-dark_skin_tone": "🙇🏾", - "person_bowing_medium-light_skin_tone": "🙇🏼", - "person_bowing_medium_skin_tone": "🙇🏽", - "person_cartwheeling": "🤸", - "person_cartwheeling_dark_skin_tone": "🤸🏿", - "person_cartwheeling_light_skin_tone": "🤸🏻", - "person_cartwheeling_medium-dark_skin_tone": "🤸🏾", - "person_cartwheeling_medium-light_skin_tone": "🤸🏼", - "person_cartwheeling_medium_skin_tone": "🤸🏽", - "person_climbing": "🧗", - "person_climbing_dark_skin_tone": "🧗🏿", - "person_climbing_light_skin_tone": "🧗🏻", - "person_climbing_medium-dark_skin_tone": "🧗🏾", - "person_climbing_medium-light_skin_tone": "🧗🏼", - "person_climbing_medium_skin_tone": "🧗🏽", - "person_facepalming": "🤦", - "person_facepalming_dark_skin_tone": "🤦🏿", - "person_facepalming_light_skin_tone": "🤦🏻", - "person_facepalming_medium-dark_skin_tone": "🤦🏾", - "person_facepalming_medium-light_skin_tone": "🤦🏼", - "person_facepalming_medium_skin_tone": "🤦🏽", - "person_fencing": "🤺", - "person_frowning": "🙍", - "person_frowning_dark_skin_tone": "🙍🏿", - "person_frowning_light_skin_tone": "🙍🏻", - "person_frowning_medium-dark_skin_tone": "🙍🏾", - "person_frowning_medium-light_skin_tone": "🙍🏼", - "person_frowning_medium_skin_tone": "🙍🏽", - "person_gesturing_no": "🙅", - "person_gesturing_no_dark_skin_tone": "🙅🏿", - "person_gesturing_no_light_skin_tone": "🙅🏻", - "person_gesturing_no_medium-dark_skin_tone": "🙅🏾", - "person_gesturing_no_medium-light_skin_tone": "🙅🏼", - 
"person_gesturing_no_medium_skin_tone": "🙅🏽", - "person_gesturing_ok": "🙆", - "person_gesturing_ok_dark_skin_tone": "🙆🏿", - "person_gesturing_ok_light_skin_tone": "🙆🏻", - "person_gesturing_ok_medium-dark_skin_tone": "🙆🏾", - "person_gesturing_ok_medium-light_skin_tone": "🙆🏼", - "person_gesturing_ok_medium_skin_tone": "🙆🏽", - "person_getting_haircut": "💇", - "person_getting_haircut_dark_skin_tone": "💇🏿", - "person_getting_haircut_light_skin_tone": "💇🏻", - "person_getting_haircut_medium-dark_skin_tone": "💇🏾", - "person_getting_haircut_medium-light_skin_tone": "💇🏼", - "person_getting_haircut_medium_skin_tone": "💇🏽", - "person_getting_massage": "💆", - "person_getting_massage_dark_skin_tone": "💆🏿", - "person_getting_massage_light_skin_tone": "💆🏻", - "person_getting_massage_medium-dark_skin_tone": "💆🏾", - "person_getting_massage_medium-light_skin_tone": "💆🏼", - "person_getting_massage_medium_skin_tone": "💆🏽", - "person_golfing": "🏌", - "person_golfing_dark_skin_tone": "🏌🏿", - "person_golfing_light_skin_tone": "🏌🏻", - "person_golfing_medium-dark_skin_tone": "🏌🏾", - "person_golfing_medium-light_skin_tone": "🏌🏼", - "person_golfing_medium_skin_tone": "🏌🏽", - "person_in_bed": "🛌", - "person_in_bed_dark_skin_tone": "🛌🏿", - "person_in_bed_light_skin_tone": "🛌🏻", - "person_in_bed_medium-dark_skin_tone": "🛌🏾", - "person_in_bed_medium-light_skin_tone": "🛌🏼", - "person_in_bed_medium_skin_tone": "🛌🏽", - "person_in_lotus_position": "🧘", - "person_in_lotus_position_dark_skin_tone": "🧘🏿", - "person_in_lotus_position_light_skin_tone": "🧘🏻", - "person_in_lotus_position_medium-dark_skin_tone": "🧘🏾", - "person_in_lotus_position_medium-light_skin_tone": "🧘🏼", - "person_in_lotus_position_medium_skin_tone": "🧘🏽", - "person_in_steamy_room": "🧖", - "person_in_steamy_room_dark_skin_tone": "🧖🏿", - "person_in_steamy_room_light_skin_tone": "🧖🏻", - "person_in_steamy_room_medium-dark_skin_tone": "🧖🏾", - "person_in_steamy_room_medium-light_skin_tone": "🧖🏼", - "person_in_steamy_room_medium_skin_tone": "🧖🏽", - "person_juggling": "🤹", - "person_juggling_dark_skin_tone": "🤹🏿", - "person_juggling_light_skin_tone": "🤹🏻", - "person_juggling_medium-dark_skin_tone": "🤹🏾", - "person_juggling_medium-light_skin_tone": "🤹🏼", - "person_juggling_medium_skin_tone": "🤹🏽", - "person_kneeling": "🧎", - "person_lifting_weights": "🏋", - "person_lifting_weights_dark_skin_tone": "🏋🏿", - "person_lifting_weights_light_skin_tone": "🏋🏻", - "person_lifting_weights_medium-dark_skin_tone": "🏋🏾", - "person_lifting_weights_medium-light_skin_tone": "🏋🏼", - "person_lifting_weights_medium_skin_tone": "🏋🏽", - "person_mountain_biking": "🚵", - "person_mountain_biking_dark_skin_tone": "🚵🏿", - "person_mountain_biking_light_skin_tone": "🚵🏻", - "person_mountain_biking_medium-dark_skin_tone": "🚵🏾", - "person_mountain_biking_medium-light_skin_tone": "🚵🏼", - "person_mountain_biking_medium_skin_tone": "🚵🏽", - "person_playing_handball": "🤾", - "person_playing_handball_dark_skin_tone": "🤾🏿", - "person_playing_handball_light_skin_tone": "🤾🏻", - "person_playing_handball_medium-dark_skin_tone": "🤾🏾", - "person_playing_handball_medium-light_skin_tone": "🤾🏼", - "person_playing_handball_medium_skin_tone": "🤾🏽", - "person_playing_water_polo": "🤽", - "person_playing_water_polo_dark_skin_tone": "🤽🏿", - "person_playing_water_polo_light_skin_tone": "🤽🏻", - "person_playing_water_polo_medium-dark_skin_tone": "🤽🏾", - "person_playing_water_polo_medium-light_skin_tone": "🤽🏼", - "person_playing_water_polo_medium_skin_tone": "🤽🏽", - "person_pouting": "🙎", - "person_pouting_dark_skin_tone": 
"🙎🏿", - "person_pouting_light_skin_tone": "🙎🏻", - "person_pouting_medium-dark_skin_tone": "🙎🏾", - "person_pouting_medium-light_skin_tone": "🙎🏼", - "person_pouting_medium_skin_tone": "🙎🏽", - "person_raising_hand": "🙋", - "person_raising_hand_dark_skin_tone": "🙋🏿", - "person_raising_hand_light_skin_tone": "🙋🏻", - "person_raising_hand_medium-dark_skin_tone": "🙋🏾", - "person_raising_hand_medium-light_skin_tone": "🙋🏼", - "person_raising_hand_medium_skin_tone": "🙋🏽", - "person_rowing_boat": "🚣", - "person_rowing_boat_dark_skin_tone": "🚣🏿", - "person_rowing_boat_light_skin_tone": "🚣🏻", - "person_rowing_boat_medium-dark_skin_tone": "🚣🏾", - "person_rowing_boat_medium-light_skin_tone": "🚣🏼", - "person_rowing_boat_medium_skin_tone": "🚣🏽", - "person_running": "🏃", - "person_running_dark_skin_tone": "🏃🏿", - "person_running_light_skin_tone": "🏃🏻", - "person_running_medium-dark_skin_tone": "🏃🏾", - "person_running_medium-light_skin_tone": "🏃🏼", - "person_running_medium_skin_tone": "🏃🏽", - "person_shrugging": "🤷", - "person_shrugging_dark_skin_tone": "🤷🏿", - "person_shrugging_light_skin_tone": "🤷🏻", - "person_shrugging_medium-dark_skin_tone": "🤷🏾", - "person_shrugging_medium-light_skin_tone": "🤷🏼", - "person_shrugging_medium_skin_tone": "🤷🏽", - "person_standing": "🧍", - "person_surfing": "🏄", - "person_surfing_dark_skin_tone": "🏄🏿", - "person_surfing_light_skin_tone": "🏄🏻", - "person_surfing_medium-dark_skin_tone": "🏄🏾", - "person_surfing_medium-light_skin_tone": "🏄🏼", - "person_surfing_medium_skin_tone": "🏄🏽", - "person_swimming": "🏊", - "person_swimming_dark_skin_tone": "🏊🏿", - "person_swimming_light_skin_tone": "🏊🏻", - "person_swimming_medium-dark_skin_tone": "🏊🏾", - "person_swimming_medium-light_skin_tone": "🏊🏼", - "person_swimming_medium_skin_tone": "🏊🏽", - "person_taking_bath": "🛀", - "person_taking_bath_dark_skin_tone": "🛀🏿", - "person_taking_bath_light_skin_tone": "🛀🏻", - "person_taking_bath_medium-dark_skin_tone": "🛀🏾", - "person_taking_bath_medium-light_skin_tone": "🛀🏼", - "person_taking_bath_medium_skin_tone": "🛀🏽", - "person_tipping_hand": "💁", - "person_tipping_hand_dark_skin_tone": "💁🏿", - "person_tipping_hand_light_skin_tone": "💁🏻", - "person_tipping_hand_medium-dark_skin_tone": "💁🏾", - "person_tipping_hand_medium-light_skin_tone": "💁🏼", - "person_tipping_hand_medium_skin_tone": "💁🏽", - "person_walking": "🚶", - "person_walking_dark_skin_tone": "🚶🏿", - "person_walking_light_skin_tone": "🚶🏻", - "person_walking_medium-dark_skin_tone": "🚶🏾", - "person_walking_medium-light_skin_tone": "🚶🏼", - "person_walking_medium_skin_tone": "🚶🏽", - "person_wearing_turban": "👳", - "person_wearing_turban_dark_skin_tone": "👳🏿", - "person_wearing_turban_light_skin_tone": "👳🏻", - "person_wearing_turban_medium-dark_skin_tone": "👳🏾", - "person_wearing_turban_medium-light_skin_tone": "👳🏼", - "person_wearing_turban_medium_skin_tone": "👳🏽", - "petri_dish": "🧫", - "pick": "⛏", - "pie": "🥧", - "pig": "🐷", - "pig_face": "🐷", - "pig_nose": "🐽", - "pile_of_poo": "💩", - "pill": "💊", - "pinching_hand": "🤏", - "pine_decoration": "🎍", - "pineapple": "🍍", - "ping_pong": "🏓", - "pirate_flag": "🏴\u200d☠️", - "pistol": "🔫", - "pizza": "🍕", - "place_of_worship": "🛐", - "play_button": "▶", - "play_or_pause_button": "⏯", - "pleading_face": "🥺", - "police_car": "🚓", - "police_car_light": "🚨", - "police_officer": "👮", - "police_officer_dark_skin_tone": "👮🏿", - "police_officer_light_skin_tone": "👮🏻", - "police_officer_medium-dark_skin_tone": "👮🏾", - "police_officer_medium-light_skin_tone": "👮🏼", - "police_officer_medium_skin_tone": "👮🏽", 
- "poodle": "🐩", - "pool_8_ball": "🎱", - "popcorn": "🍿", - "post_office": "🏣", - "postal_horn": "📯", - "postbox": "📮", - "pot_of_food": "🍲", - "potable_water": "🚰", - "potato": "🥔", - "poultry_leg": "🍗", - "pound_banknote": "💷", - "pouting_cat_face": "😾", - "pouting_face": "😡", - "prayer_beads": "📿", - "pregnant_woman": "🤰", - "pregnant_woman_dark_skin_tone": "🤰🏿", - "pregnant_woman_light_skin_tone": "🤰🏻", - "pregnant_woman_medium-dark_skin_tone": "🤰🏾", - "pregnant_woman_medium-light_skin_tone": "🤰🏼", - "pregnant_woman_medium_skin_tone": "🤰🏽", - "pretzel": "🥨", - "probing_cane": "🦯", - "prince": "🤴", - "prince_dark_skin_tone": "🤴🏿", - "prince_light_skin_tone": "🤴🏻", - "prince_medium-dark_skin_tone": "🤴🏾", - "prince_medium-light_skin_tone": "🤴🏼", - "prince_medium_skin_tone": "🤴🏽", - "princess": "👸", - "princess_dark_skin_tone": "👸🏿", - "princess_light_skin_tone": "👸🏻", - "princess_medium-dark_skin_tone": "👸🏾", - "princess_medium-light_skin_tone": "👸🏼", - "princess_medium_skin_tone": "👸🏽", - "printer": "🖨", - "prohibited": "🚫", - "purple_circle": "🟣", - "purple_heart": "💜", - "purple_square": "🟪", - "purse": "👛", - "pushpin": "📌", - "question_mark": "❓", - "rabbit": "🐰", - "rabbit_face": "🐰", - "raccoon": "🦝", - "racing_car": "🏎", - "radio": "📻", - "radio_button": "🔘", - "radioactive": "☢", - "railway_car": "🚃", - "railway_track": "🛤", - "rainbow": "🌈", - "rainbow_flag": "🏳️\u200d🌈", - "raised_back_of_hand": "🤚", - "raised_back_of_hand_dark_skin_tone": "🤚🏿", - "raised_back_of_hand_light_skin_tone": "🤚🏻", - "raised_back_of_hand_medium-dark_skin_tone": "🤚🏾", - "raised_back_of_hand_medium-light_skin_tone": "🤚🏼", - "raised_back_of_hand_medium_skin_tone": "🤚🏽", - "raised_fist": "✊", - "raised_fist_dark_skin_tone": "✊🏿", - "raised_fist_light_skin_tone": "✊🏻", - "raised_fist_medium-dark_skin_tone": "✊🏾", - "raised_fist_medium-light_skin_tone": "✊🏼", - "raised_fist_medium_skin_tone": "✊🏽", - "raised_hand": "✋", - "raised_hand_dark_skin_tone": "✋🏿", - "raised_hand_light_skin_tone": "✋🏻", - "raised_hand_medium-dark_skin_tone": "✋🏾", - "raised_hand_medium-light_skin_tone": "✋🏼", - "raised_hand_medium_skin_tone": "✋🏽", - "raising_hands": "🙌", - "raising_hands_dark_skin_tone": "🙌🏿", - "raising_hands_light_skin_tone": "🙌🏻", - "raising_hands_medium-dark_skin_tone": "🙌🏾", - "raising_hands_medium-light_skin_tone": "🙌🏼", - "raising_hands_medium_skin_tone": "🙌🏽", - "ram": "🐏", - "rat": "🐀", - "razor": "🪒", - "ringed_planet": "🪐", - "receipt": "🧾", - "record_button": "⏺", - "recycling_symbol": "♻", - "red_apple": "🍎", - "red_circle": "🔴", - "red_envelope": "🧧", - "red_hair": "🦰", - "red-haired_man": "👨\u200d🦰", - "red-haired_woman": "👩\u200d🦰", - "red_heart": "❤", - "red_paper_lantern": "🏮", - "red_square": "🟥", - "red_triangle_pointed_down": "🔻", - "red_triangle_pointed_up": "🔺", - "registered": "®", - "relieved_face": "😌", - "reminder_ribbon": "🎗", - "repeat_button": "🔁", - "repeat_single_button": "🔂", - "rescue_worker’s_helmet": "⛑", - "restroom": "🚻", - "reverse_button": "◀", - "revolving_hearts": "💞", - "rhinoceros": "🦏", - "ribbon": "🎀", - "rice_ball": "🍙", - "rice_cracker": "🍘", - "right-facing_fist": "🤜", - "right-facing_fist_dark_skin_tone": "🤜🏿", - "right-facing_fist_light_skin_tone": "🤜🏻", - "right-facing_fist_medium-dark_skin_tone": "🤜🏾", - "right-facing_fist_medium-light_skin_tone": "🤜🏼", - "right-facing_fist_medium_skin_tone": "🤜🏽", - "right_anger_bubble": "🗯", - "right_arrow": "➡", - "right_arrow_curving_down": "⤵", - "right_arrow_curving_left": "↩", - "right_arrow_curving_up": "⤴", - "ring": 
"💍", - "roasted_sweet_potato": "🍠", - "robot_face": "🤖", - "rocket": "🚀", - "roll_of_paper": "🧻", - "rolled-up_newspaper": "🗞", - "roller_coaster": "🎢", - "rolling_on_the_floor_laughing": "🤣", - "rooster": "🐓", - "rose": "🌹", - "rosette": "🏵", - "round_pushpin": "📍", - "rugby_football": "🏉", - "running_shirt": "🎽", - "running_shoe": "👟", - "sad_but_relieved_face": "😥", - "safety_pin": "🧷", - "safety_vest": "🦺", - "salt": "🧂", - "sailboat": "⛵", - "sake": "🍶", - "sandwich": "🥪", - "sari": "🥻", - "satellite": "📡", - "satellite_antenna": "📡", - "sauropod": "🦕", - "saxophone": "🎷", - "scarf": "🧣", - "school": "🏫", - "school_backpack": "🎒", - "scissors": "✂", - "scorpion": "🦂", - "scroll": "📜", - "seat": "💺", - "see-no-evil_monkey": "🙈", - "seedling": "🌱", - "selfie": "🤳", - "selfie_dark_skin_tone": "🤳🏿", - "selfie_light_skin_tone": "🤳🏻", - "selfie_medium-dark_skin_tone": "🤳🏾", - "selfie_medium-light_skin_tone": "🤳🏼", - "selfie_medium_skin_tone": "🤳🏽", - "service_dog": "🐕\u200d🦺", - "seven-thirty": "🕢", - "seven_o’clock": "🕖", - "shallow_pan_of_food": "🥘", - "shamrock": "☘", - "shark": "🦈", - "shaved_ice": "🍧", - "sheaf_of_rice": "🌾", - "shield": "🛡", - "shinto_shrine": "⛩", - "ship": "🚢", - "shooting_star": "🌠", - "shopping_bags": "🛍", - "shopping_cart": "🛒", - "shortcake": "🍰", - "shorts": "🩳", - "shower": "🚿", - "shrimp": "🦐", - "shuffle_tracks_button": "🔀", - "shushing_face": "🤫", - "sign_of_the_horns": "🤘", - "sign_of_the_horns_dark_skin_tone": "🤘🏿", - "sign_of_the_horns_light_skin_tone": "🤘🏻", - "sign_of_the_horns_medium-dark_skin_tone": "🤘🏾", - "sign_of_the_horns_medium-light_skin_tone": "🤘🏼", - "sign_of_the_horns_medium_skin_tone": "🤘🏽", - "six-thirty": "🕡", - "six_o’clock": "🕕", - "skateboard": "🛹", - "skier": "⛷", - "skis": "🎿", - "skull": "💀", - "skull_and_crossbones": "☠", - "skunk": "🦨", - "sled": "🛷", - "sleeping_face": "😴", - "sleepy_face": "😪", - "slightly_frowning_face": "🙁", - "slightly_smiling_face": "🙂", - "slot_machine": "🎰", - "sloth": "🦥", - "small_airplane": "🛩", - "small_blue_diamond": "🔹", - "small_orange_diamond": "🔸", - "smiling_cat_face_with_heart-eyes": "😻", - "smiling_face": "☺", - "smiling_face_with_halo": "😇", - "smiling_face_with_3_hearts": "🥰", - "smiling_face_with_heart-eyes": "😍", - "smiling_face_with_horns": "😈", - "smiling_face_with_smiling_eyes": "😊", - "smiling_face_with_sunglasses": "😎", - "smirking_face": "😏", - "snail": "🐌", - "snake": "🐍", - "sneezing_face": "🤧", - "snow-capped_mountain": "🏔", - "snowboarder": "🏂", - "snowboarder_dark_skin_tone": "🏂🏿", - "snowboarder_light_skin_tone": "🏂🏻", - "snowboarder_medium-dark_skin_tone": "🏂🏾", - "snowboarder_medium-light_skin_tone": "🏂🏼", - "snowboarder_medium_skin_tone": "🏂🏽", - "snowflake": "❄", - "snowman": "☃", - "snowman_without_snow": "⛄", - "soap": "🧼", - "soccer_ball": "⚽", - "socks": "🧦", - "softball": "🥎", - "soft_ice_cream": "🍦", - "spade_suit": "♠", - "spaghetti": "🍝", - "sparkle": "❇", - "sparkler": "🎇", - "sparkles": "✨", - "sparkling_heart": "💖", - "speak-no-evil_monkey": "🙊", - "speaker_high_volume": "🔊", - "speaker_low_volume": "🔈", - "speaker_medium_volume": "🔉", - "speaking_head": "🗣", - "speech_balloon": "💬", - "speedboat": "🚤", - "spider": "🕷", - "spider_web": "🕸", - "spiral_calendar": "🗓", - "spiral_notepad": "🗒", - "spiral_shell": "🐚", - "spoon": "🥄", - "sponge": "🧽", - "sport_utility_vehicle": "🚙", - "sports_medal": "🏅", - "spouting_whale": "🐳", - "squid": "🦑", - "squinting_face_with_tongue": "😝", - "stadium": "🏟", - "star-struck": "🤩", - "star_and_crescent": "☪", - "star_of_david": 
"✡", - "station": "🚉", - "steaming_bowl": "🍜", - "stethoscope": "🩺", - "stop_button": "⏹", - "stop_sign": "🛑", - "stopwatch": "⏱", - "straight_ruler": "📏", - "strawberry": "🍓", - "studio_microphone": "🎙", - "stuffed_flatbread": "🥙", - "sun": "☀", - "sun_behind_cloud": "⛅", - "sun_behind_large_cloud": "🌥", - "sun_behind_rain_cloud": "🌦", - "sun_behind_small_cloud": "🌤", - "sun_with_face": "🌞", - "sunflower": "🌻", - "sunglasses": "😎", - "sunrise": "🌅", - "sunrise_over_mountains": "🌄", - "sunset": "🌇", - "superhero": "🦸", - "supervillain": "🦹", - "sushi": "🍣", - "suspension_railway": "🚟", - "swan": "🦢", - "sweat_droplets": "💦", - "synagogue": "🕍", - "syringe": "💉", - "t-shirt": "👕", - "taco": "🌮", - "takeout_box": "🥡", - "tanabata_tree": "🎋", - "tangerine": "🍊", - "taxi": "🚕", - "teacup_without_handle": "🍵", - "tear-off_calendar": "📆", - "teddy_bear": "🧸", - "telephone": "☎", - "telephone_receiver": "📞", - "telescope": "🔭", - "television": "📺", - "ten-thirty": "🕥", - "ten_o’clock": "🕙", - "tennis": "🎾", - "tent": "⛺", - "test_tube": "🧪", - "thermometer": "🌡", - "thinking_face": "🤔", - "thought_balloon": "💭", - "thread": "🧵", - "three-thirty": "🕞", - "three_o’clock": "🕒", - "thumbs_down": "👎", - "thumbs_down_dark_skin_tone": "👎🏿", - "thumbs_down_light_skin_tone": "👎🏻", - "thumbs_down_medium-dark_skin_tone": "👎🏾", - "thumbs_down_medium-light_skin_tone": "👎🏼", - "thumbs_down_medium_skin_tone": "👎🏽", - "thumbs_up": "👍", - "thumbs_up_dark_skin_tone": "👍🏿", - "thumbs_up_light_skin_tone": "👍🏻", - "thumbs_up_medium-dark_skin_tone": "👍🏾", - "thumbs_up_medium-light_skin_tone": "👍🏼", - "thumbs_up_medium_skin_tone": "👍🏽", - "ticket": "🎫", - "tiger": "🐯", - "tiger_face": "🐯", - "timer_clock": "⏲", - "tired_face": "😫", - "toolbox": "🧰", - "toilet": "🚽", - "tomato": "🍅", - "tongue": "👅", - "tooth": "🦷", - "top_hat": "🎩", - "tornado": "🌪", - "trackball": "🖲", - "tractor": "🚜", - "trade_mark": "™", - "train": "🚋", - "tram": "🚊", - "tram_car": "🚋", - "triangular_flag": "🚩", - "triangular_ruler": "📐", - "trident_emblem": "🔱", - "trolleybus": "🚎", - "trophy": "🏆", - "tropical_drink": "🍹", - "tropical_fish": "🐠", - "trumpet": "🎺", - "tulip": "🌷", - "tumbler_glass": "🥃", - "turtle": "🐢", - "twelve-thirty": "🕧", - "twelve_o’clock": "🕛", - "two-hump_camel": "🐫", - "two-thirty": "🕝", - "two_hearts": "💕", - "two_men_holding_hands": "👬", - "two_o’clock": "🕑", - "two_women_holding_hands": "👭", - "umbrella": "☂", - "umbrella_on_ground": "⛱", - "umbrella_with_rain_drops": "☔", - "unamused_face": "😒", - "unicorn_face": "🦄", - "unlocked": "🔓", - "up-down_arrow": "↕", - "up-left_arrow": "↖", - "up-right_arrow": "↗", - "up_arrow": "⬆", - "upside-down_face": "🙃", - "upwards_button": "🔼", - "vampire": "🧛", - "vampire_dark_skin_tone": "🧛🏿", - "vampire_light_skin_tone": "🧛🏻", - "vampire_medium-dark_skin_tone": "🧛🏾", - "vampire_medium-light_skin_tone": "🧛🏼", - "vampire_medium_skin_tone": "🧛🏽", - "vertical_traffic_light": "🚦", - "vibration_mode": "📳", - "victory_hand": "✌", - "victory_hand_dark_skin_tone": "✌🏿", - "victory_hand_light_skin_tone": "✌🏻", - "victory_hand_medium-dark_skin_tone": "✌🏾", - "victory_hand_medium-light_skin_tone": "✌🏼", - "victory_hand_medium_skin_tone": "✌🏽", - "video_camera": "📹", - "video_game": "🎮", - "videocassette": "📼", - "violin": "🎻", - "volcano": "🌋", - "volleyball": "🏐", - "vulcan_salute": "🖖", - "vulcan_salute_dark_skin_tone": "🖖🏿", - "vulcan_salute_light_skin_tone": "🖖🏻", - "vulcan_salute_medium-dark_skin_tone": "🖖🏾", - "vulcan_salute_medium-light_skin_tone": "🖖🏼", - 
"vulcan_salute_medium_skin_tone": "🖖🏽", - "waffle": "🧇", - "waning_crescent_moon": "🌘", - "waning_gibbous_moon": "🌖", - "warning": "⚠", - "wastebasket": "🗑", - "watch": "⌚", - "water_buffalo": "🐃", - "water_closet": "🚾", - "water_wave": "🌊", - "watermelon": "🍉", - "waving_hand": "👋", - "waving_hand_dark_skin_tone": "👋🏿", - "waving_hand_light_skin_tone": "👋🏻", - "waving_hand_medium-dark_skin_tone": "👋🏾", - "waving_hand_medium-light_skin_tone": "👋🏼", - "waving_hand_medium_skin_tone": "👋🏽", - "wavy_dash": "〰", - "waxing_crescent_moon": "🌒", - "waxing_gibbous_moon": "🌔", - "weary_cat_face": "🙀", - "weary_face": "😩", - "wedding": "💒", - "whale": "🐳", - "wheel_of_dharma": "☸", - "wheelchair_symbol": "♿", - "white_circle": "⚪", - "white_exclamation_mark": "❕", - "white_flag": "🏳", - "white_flower": "💮", - "white_hair": "🦳", - "white-haired_man": "👨\u200d🦳", - "white-haired_woman": "👩\u200d🦳", - "white_heart": "🤍", - "white_heavy_check_mark": "✅", - "white_large_square": "⬜", - "white_medium-small_square": "◽", - "white_medium_square": "◻", - "white_medium_star": "⭐", - "white_question_mark": "❔", - "white_small_square": "▫", - "white_square_button": "🔳", - "wilted_flower": "🥀", - "wind_chime": "🎐", - "wind_face": "🌬", - "wine_glass": "🍷", - "winking_face": "😉", - "winking_face_with_tongue": "😜", - "wolf_face": "🐺", - "woman": "👩", - "woman_artist": "👩\u200d🎨", - "woman_artist_dark_skin_tone": "👩🏿\u200d🎨", - "woman_artist_light_skin_tone": "👩🏻\u200d🎨", - "woman_artist_medium-dark_skin_tone": "👩🏾\u200d🎨", - "woman_artist_medium-light_skin_tone": "👩🏼\u200d🎨", - "woman_artist_medium_skin_tone": "👩🏽\u200d🎨", - "woman_astronaut": "👩\u200d🚀", - "woman_astronaut_dark_skin_tone": "👩🏿\u200d🚀", - "woman_astronaut_light_skin_tone": "👩🏻\u200d🚀", - "woman_astronaut_medium-dark_skin_tone": "👩🏾\u200d🚀", - "woman_astronaut_medium-light_skin_tone": "👩🏼\u200d🚀", - "woman_astronaut_medium_skin_tone": "👩🏽\u200d🚀", - "woman_biking": "🚴\u200d♀️", - "woman_biking_dark_skin_tone": "🚴🏿\u200d♀️", - "woman_biking_light_skin_tone": "🚴🏻\u200d♀️", - "woman_biking_medium-dark_skin_tone": "🚴🏾\u200d♀️", - "woman_biking_medium-light_skin_tone": "🚴🏼\u200d♀️", - "woman_biking_medium_skin_tone": "🚴🏽\u200d♀️", - "woman_bouncing_ball": "⛹️\u200d♀️", - "woman_bouncing_ball_dark_skin_tone": "⛹🏿\u200d♀️", - "woman_bouncing_ball_light_skin_tone": "⛹🏻\u200d♀️", - "woman_bouncing_ball_medium-dark_skin_tone": "⛹🏾\u200d♀️", - "woman_bouncing_ball_medium-light_skin_tone": "⛹🏼\u200d♀️", - "woman_bouncing_ball_medium_skin_tone": "⛹🏽\u200d♀️", - "woman_bowing": "🙇\u200d♀️", - "woman_bowing_dark_skin_tone": "🙇🏿\u200d♀️", - "woman_bowing_light_skin_tone": "🙇🏻\u200d♀️", - "woman_bowing_medium-dark_skin_tone": "🙇🏾\u200d♀️", - "woman_bowing_medium-light_skin_tone": "🙇🏼\u200d♀️", - "woman_bowing_medium_skin_tone": "🙇🏽\u200d♀️", - "woman_cartwheeling": "🤸\u200d♀️", - "woman_cartwheeling_dark_skin_tone": "🤸🏿\u200d♀️", - "woman_cartwheeling_light_skin_tone": "🤸🏻\u200d♀️", - "woman_cartwheeling_medium-dark_skin_tone": "🤸🏾\u200d♀️", - "woman_cartwheeling_medium-light_skin_tone": "🤸🏼\u200d♀️", - "woman_cartwheeling_medium_skin_tone": "🤸🏽\u200d♀️", - "woman_climbing": "🧗\u200d♀️", - "woman_climbing_dark_skin_tone": "🧗🏿\u200d♀️", - "woman_climbing_light_skin_tone": "🧗🏻\u200d♀️", - "woman_climbing_medium-dark_skin_tone": "🧗🏾\u200d♀️", - "woman_climbing_medium-light_skin_tone": "🧗🏼\u200d♀️", - "woman_climbing_medium_skin_tone": "🧗🏽\u200d♀️", - "woman_construction_worker": "👷\u200d♀️", - "woman_construction_worker_dark_skin_tone": "👷🏿\u200d♀️", - 
"woman_construction_worker_light_skin_tone": "👷🏻\u200d♀️", - "woman_construction_worker_medium-dark_skin_tone": "👷🏾\u200d♀️", - "woman_construction_worker_medium-light_skin_tone": "👷🏼\u200d♀️", - "woman_construction_worker_medium_skin_tone": "👷🏽\u200d♀️", - "woman_cook": "👩\u200d🍳", - "woman_cook_dark_skin_tone": "👩🏿\u200d🍳", - "woman_cook_light_skin_tone": "👩🏻\u200d🍳", - "woman_cook_medium-dark_skin_tone": "👩🏾\u200d🍳", - "woman_cook_medium-light_skin_tone": "👩🏼\u200d🍳", - "woman_cook_medium_skin_tone": "👩🏽\u200d🍳", - "woman_dancing": "💃", - "woman_dancing_dark_skin_tone": "💃🏿", - "woman_dancing_light_skin_tone": "💃🏻", - "woman_dancing_medium-dark_skin_tone": "💃🏾", - "woman_dancing_medium-light_skin_tone": "💃🏼", - "woman_dancing_medium_skin_tone": "💃🏽", - "woman_dark_skin_tone": "👩🏿", - "woman_detective": "🕵️\u200d♀️", - "woman_detective_dark_skin_tone": "🕵🏿\u200d♀️", - "woman_detective_light_skin_tone": "🕵🏻\u200d♀️", - "woman_detective_medium-dark_skin_tone": "🕵🏾\u200d♀️", - "woman_detective_medium-light_skin_tone": "🕵🏼\u200d♀️", - "woman_detective_medium_skin_tone": "🕵🏽\u200d♀️", - "woman_elf": "🧝\u200d♀️", - "woman_elf_dark_skin_tone": "🧝🏿\u200d♀️", - "woman_elf_light_skin_tone": "🧝🏻\u200d♀️", - "woman_elf_medium-dark_skin_tone": "🧝🏾\u200d♀️", - "woman_elf_medium-light_skin_tone": "🧝🏼\u200d♀️", - "woman_elf_medium_skin_tone": "🧝🏽\u200d♀️", - "woman_facepalming": "🤦\u200d♀️", - "woman_facepalming_dark_skin_tone": "🤦🏿\u200d♀️", - "woman_facepalming_light_skin_tone": "🤦🏻\u200d♀️", - "woman_facepalming_medium-dark_skin_tone": "🤦🏾\u200d♀️", - "woman_facepalming_medium-light_skin_tone": "🤦🏼\u200d♀️", - "woman_facepalming_medium_skin_tone": "🤦🏽\u200d♀️", - "woman_factory_worker": "👩\u200d🏭", - "woman_factory_worker_dark_skin_tone": "👩🏿\u200d🏭", - "woman_factory_worker_light_skin_tone": "👩🏻\u200d🏭", - "woman_factory_worker_medium-dark_skin_tone": "👩🏾\u200d🏭", - "woman_factory_worker_medium-light_skin_tone": "👩🏼\u200d🏭", - "woman_factory_worker_medium_skin_tone": "👩🏽\u200d🏭", - "woman_fairy": "🧚\u200d♀️", - "woman_fairy_dark_skin_tone": "🧚🏿\u200d♀️", - "woman_fairy_light_skin_tone": "🧚🏻\u200d♀️", - "woman_fairy_medium-dark_skin_tone": "🧚🏾\u200d♀️", - "woman_fairy_medium-light_skin_tone": "🧚🏼\u200d♀️", - "woman_fairy_medium_skin_tone": "🧚🏽\u200d♀️", - "woman_farmer": "👩\u200d🌾", - "woman_farmer_dark_skin_tone": "👩🏿\u200d🌾", - "woman_farmer_light_skin_tone": "👩🏻\u200d🌾", - "woman_farmer_medium-dark_skin_tone": "👩🏾\u200d🌾", - "woman_farmer_medium-light_skin_tone": "👩🏼\u200d🌾", - "woman_farmer_medium_skin_tone": "👩🏽\u200d🌾", - "woman_firefighter": "👩\u200d🚒", - "woman_firefighter_dark_skin_tone": "👩🏿\u200d🚒", - "woman_firefighter_light_skin_tone": "👩🏻\u200d🚒", - "woman_firefighter_medium-dark_skin_tone": "👩🏾\u200d🚒", - "woman_firefighter_medium-light_skin_tone": "👩🏼\u200d🚒", - "woman_firefighter_medium_skin_tone": "👩🏽\u200d🚒", - "woman_frowning": "🙍\u200d♀️", - "woman_frowning_dark_skin_tone": "🙍🏿\u200d♀️", - "woman_frowning_light_skin_tone": "🙍🏻\u200d♀️", - "woman_frowning_medium-dark_skin_tone": "🙍🏾\u200d♀️", - "woman_frowning_medium-light_skin_tone": "🙍🏼\u200d♀️", - "woman_frowning_medium_skin_tone": "🙍🏽\u200d♀️", - "woman_genie": "🧞\u200d♀️", - "woman_gesturing_no": "🙅\u200d♀️", - "woman_gesturing_no_dark_skin_tone": "🙅🏿\u200d♀️", - "woman_gesturing_no_light_skin_tone": "🙅🏻\u200d♀️", - "woman_gesturing_no_medium-dark_skin_tone": "🙅🏾\u200d♀️", - "woman_gesturing_no_medium-light_skin_tone": "🙅🏼\u200d♀️", - "woman_gesturing_no_medium_skin_tone": "🙅🏽\u200d♀️", - "woman_gesturing_ok": "🙆\u200d♀️", - 
"woman_gesturing_ok_dark_skin_tone": "🙆🏿\u200d♀️", - "woman_gesturing_ok_light_skin_tone": "🙆🏻\u200d♀️", - "woman_gesturing_ok_medium-dark_skin_tone": "🙆🏾\u200d♀️", - "woman_gesturing_ok_medium-light_skin_tone": "🙆🏼\u200d♀️", - "woman_gesturing_ok_medium_skin_tone": "🙆🏽\u200d♀️", - "woman_getting_haircut": "💇\u200d♀️", - "woman_getting_haircut_dark_skin_tone": "💇🏿\u200d♀️", - "woman_getting_haircut_light_skin_tone": "💇🏻\u200d♀️", - "woman_getting_haircut_medium-dark_skin_tone": "💇🏾\u200d♀️", - "woman_getting_haircut_medium-light_skin_tone": "💇🏼\u200d♀️", - "woman_getting_haircut_medium_skin_tone": "💇🏽\u200d♀️", - "woman_getting_massage": "💆\u200d♀️", - "woman_getting_massage_dark_skin_tone": "💆🏿\u200d♀️", - "woman_getting_massage_light_skin_tone": "💆🏻\u200d♀️", - "woman_getting_massage_medium-dark_skin_tone": "💆🏾\u200d♀️", - "woman_getting_massage_medium-light_skin_tone": "💆🏼\u200d♀️", - "woman_getting_massage_medium_skin_tone": "💆🏽\u200d♀️", - "woman_golfing": "🏌️\u200d♀️", - "woman_golfing_dark_skin_tone": "🏌🏿\u200d♀️", - "woman_golfing_light_skin_tone": "🏌🏻\u200d♀️", - "woman_golfing_medium-dark_skin_tone": "🏌🏾\u200d♀️", - "woman_golfing_medium-light_skin_tone": "🏌🏼\u200d♀️", - "woman_golfing_medium_skin_tone": "🏌🏽\u200d♀️", - "woman_guard": "💂\u200d♀️", - "woman_guard_dark_skin_tone": "💂🏿\u200d♀️", - "woman_guard_light_skin_tone": "💂🏻\u200d♀️", - "woman_guard_medium-dark_skin_tone": "💂🏾\u200d♀️", - "woman_guard_medium-light_skin_tone": "💂🏼\u200d♀️", - "woman_guard_medium_skin_tone": "💂🏽\u200d♀️", - "woman_health_worker": "👩\u200d⚕️", - "woman_health_worker_dark_skin_tone": "👩🏿\u200d⚕️", - "woman_health_worker_light_skin_tone": "👩🏻\u200d⚕️", - "woman_health_worker_medium-dark_skin_tone": "👩🏾\u200d⚕️", - "woman_health_worker_medium-light_skin_tone": "👩🏼\u200d⚕️", - "woman_health_worker_medium_skin_tone": "👩🏽\u200d⚕️", - "woman_in_lotus_position": "🧘\u200d♀️", - "woman_in_lotus_position_dark_skin_tone": "🧘🏿\u200d♀️", - "woman_in_lotus_position_light_skin_tone": "🧘🏻\u200d♀️", - "woman_in_lotus_position_medium-dark_skin_tone": "🧘🏾\u200d♀️", - "woman_in_lotus_position_medium-light_skin_tone": "🧘🏼\u200d♀️", - "woman_in_lotus_position_medium_skin_tone": "🧘🏽\u200d♀️", - "woman_in_manual_wheelchair": "👩\u200d🦽", - "woman_in_motorized_wheelchair": "👩\u200d🦼", - "woman_in_steamy_room": "🧖\u200d♀️", - "woman_in_steamy_room_dark_skin_tone": "🧖🏿\u200d♀️", - "woman_in_steamy_room_light_skin_tone": "🧖🏻\u200d♀️", - "woman_in_steamy_room_medium-dark_skin_tone": "🧖🏾\u200d♀️", - "woman_in_steamy_room_medium-light_skin_tone": "🧖🏼\u200d♀️", - "woman_in_steamy_room_medium_skin_tone": "🧖🏽\u200d♀️", - "woman_judge": "👩\u200d⚖️", - "woman_judge_dark_skin_tone": "👩🏿\u200d⚖️", - "woman_judge_light_skin_tone": "👩🏻\u200d⚖️", - "woman_judge_medium-dark_skin_tone": "👩🏾\u200d⚖️", - "woman_judge_medium-light_skin_tone": "👩🏼\u200d⚖️", - "woman_judge_medium_skin_tone": "👩🏽\u200d⚖️", - "woman_juggling": "🤹\u200d♀️", - "woman_juggling_dark_skin_tone": "🤹🏿\u200d♀️", - "woman_juggling_light_skin_tone": "🤹🏻\u200d♀️", - "woman_juggling_medium-dark_skin_tone": "🤹🏾\u200d♀️", - "woman_juggling_medium-light_skin_tone": "🤹🏼\u200d♀️", - "woman_juggling_medium_skin_tone": "🤹🏽\u200d♀️", - "woman_lifting_weights": "🏋️\u200d♀️", - "woman_lifting_weights_dark_skin_tone": "🏋🏿\u200d♀️", - "woman_lifting_weights_light_skin_tone": "🏋🏻\u200d♀️", - "woman_lifting_weights_medium-dark_skin_tone": "🏋🏾\u200d♀️", - "woman_lifting_weights_medium-light_skin_tone": "🏋🏼\u200d♀️", - "woman_lifting_weights_medium_skin_tone": "🏋🏽\u200d♀️", - 
"woman_light_skin_tone": "👩🏻", - "woman_mage": "🧙\u200d♀️", - "woman_mage_dark_skin_tone": "🧙🏿\u200d♀️", - "woman_mage_light_skin_tone": "🧙🏻\u200d♀️", - "woman_mage_medium-dark_skin_tone": "🧙🏾\u200d♀️", - "woman_mage_medium-light_skin_tone": "🧙🏼\u200d♀️", - "woman_mage_medium_skin_tone": "🧙🏽\u200d♀️", - "woman_mechanic": "👩\u200d🔧", - "woman_mechanic_dark_skin_tone": "👩🏿\u200d🔧", - "woman_mechanic_light_skin_tone": "👩🏻\u200d🔧", - "woman_mechanic_medium-dark_skin_tone": "👩🏾\u200d🔧", - "woman_mechanic_medium-light_skin_tone": "👩🏼\u200d🔧", - "woman_mechanic_medium_skin_tone": "👩🏽\u200d🔧", - "woman_medium-dark_skin_tone": "👩🏾", - "woman_medium-light_skin_tone": "👩🏼", - "woman_medium_skin_tone": "👩🏽", - "woman_mountain_biking": "🚵\u200d♀️", - "woman_mountain_biking_dark_skin_tone": "🚵🏿\u200d♀️", - "woman_mountain_biking_light_skin_tone": "🚵🏻\u200d♀️", - "woman_mountain_biking_medium-dark_skin_tone": "🚵🏾\u200d♀️", - "woman_mountain_biking_medium-light_skin_tone": "🚵🏼\u200d♀️", - "woman_mountain_biking_medium_skin_tone": "🚵🏽\u200d♀️", - "woman_office_worker": "👩\u200d💼", - "woman_office_worker_dark_skin_tone": "👩🏿\u200d💼", - "woman_office_worker_light_skin_tone": "👩🏻\u200d💼", - "woman_office_worker_medium-dark_skin_tone": "👩🏾\u200d💼", - "woman_office_worker_medium-light_skin_tone": "👩🏼\u200d💼", - "woman_office_worker_medium_skin_tone": "👩🏽\u200d💼", - "woman_pilot": "👩\u200d✈️", - "woman_pilot_dark_skin_tone": "👩🏿\u200d✈️", - "woman_pilot_light_skin_tone": "👩🏻\u200d✈️", - "woman_pilot_medium-dark_skin_tone": "👩🏾\u200d✈️", - "woman_pilot_medium-light_skin_tone": "👩🏼\u200d✈️", - "woman_pilot_medium_skin_tone": "👩🏽\u200d✈️", - "woman_playing_handball": "🤾\u200d♀️", - "woman_playing_handball_dark_skin_tone": "🤾🏿\u200d♀️", - "woman_playing_handball_light_skin_tone": "🤾🏻\u200d♀️", - "woman_playing_handball_medium-dark_skin_tone": "🤾🏾\u200d♀️", - "woman_playing_handball_medium-light_skin_tone": "🤾🏼\u200d♀️", - "woman_playing_handball_medium_skin_tone": "🤾🏽\u200d♀️", - "woman_playing_water_polo": "🤽\u200d♀️", - "woman_playing_water_polo_dark_skin_tone": "🤽🏿\u200d♀️", - "woman_playing_water_polo_light_skin_tone": "🤽🏻\u200d♀️", - "woman_playing_water_polo_medium-dark_skin_tone": "🤽🏾\u200d♀️", - "woman_playing_water_polo_medium-light_skin_tone": "🤽🏼\u200d♀️", - "woman_playing_water_polo_medium_skin_tone": "🤽🏽\u200d♀️", - "woman_police_officer": "👮\u200d♀️", - "woman_police_officer_dark_skin_tone": "👮🏿\u200d♀️", - "woman_police_officer_light_skin_tone": "👮🏻\u200d♀️", - "woman_police_officer_medium-dark_skin_tone": "👮🏾\u200d♀️", - "woman_police_officer_medium-light_skin_tone": "👮🏼\u200d♀️", - "woman_police_officer_medium_skin_tone": "👮🏽\u200d♀️", - "woman_pouting": "🙎\u200d♀️", - "woman_pouting_dark_skin_tone": "🙎🏿\u200d♀️", - "woman_pouting_light_skin_tone": "🙎🏻\u200d♀️", - "woman_pouting_medium-dark_skin_tone": "🙎🏾\u200d♀️", - "woman_pouting_medium-light_skin_tone": "🙎🏼\u200d♀️", - "woman_pouting_medium_skin_tone": "🙎🏽\u200d♀️", - "woman_raising_hand": "🙋\u200d♀️", - "woman_raising_hand_dark_skin_tone": "🙋🏿\u200d♀️", - "woman_raising_hand_light_skin_tone": "🙋🏻\u200d♀️", - "woman_raising_hand_medium-dark_skin_tone": "🙋🏾\u200d♀️", - "woman_raising_hand_medium-light_skin_tone": "🙋🏼\u200d♀️", - "woman_raising_hand_medium_skin_tone": "🙋🏽\u200d♀️", - "woman_rowing_boat": "🚣\u200d♀️", - "woman_rowing_boat_dark_skin_tone": "🚣🏿\u200d♀️", - "woman_rowing_boat_light_skin_tone": "🚣🏻\u200d♀️", - "woman_rowing_boat_medium-dark_skin_tone": "🚣🏾\u200d♀️", - "woman_rowing_boat_medium-light_skin_tone": "🚣🏼\u200d♀️", - 
"woman_rowing_boat_medium_skin_tone": "🚣🏽\u200d♀️", - "woman_running": "🏃\u200d♀️", - "woman_running_dark_skin_tone": "🏃🏿\u200d♀️", - "woman_running_light_skin_tone": "🏃🏻\u200d♀️", - "woman_running_medium-dark_skin_tone": "🏃🏾\u200d♀️", - "woman_running_medium-light_skin_tone": "🏃🏼\u200d♀️", - "woman_running_medium_skin_tone": "🏃🏽\u200d♀️", - "woman_scientist": "👩\u200d🔬", - "woman_scientist_dark_skin_tone": "👩🏿\u200d🔬", - "woman_scientist_light_skin_tone": "👩🏻\u200d🔬", - "woman_scientist_medium-dark_skin_tone": "👩🏾\u200d🔬", - "woman_scientist_medium-light_skin_tone": "👩🏼\u200d🔬", - "woman_scientist_medium_skin_tone": "👩🏽\u200d🔬", - "woman_shrugging": "🤷\u200d♀️", - "woman_shrugging_dark_skin_tone": "🤷🏿\u200d♀️", - "woman_shrugging_light_skin_tone": "🤷🏻\u200d♀️", - "woman_shrugging_medium-dark_skin_tone": "🤷🏾\u200d♀️", - "woman_shrugging_medium-light_skin_tone": "🤷🏼\u200d♀️", - "woman_shrugging_medium_skin_tone": "🤷🏽\u200d♀️", - "woman_singer": "👩\u200d🎤", - "woman_singer_dark_skin_tone": "👩🏿\u200d🎤", - "woman_singer_light_skin_tone": "👩🏻\u200d🎤", - "woman_singer_medium-dark_skin_tone": "👩🏾\u200d🎤", - "woman_singer_medium-light_skin_tone": "👩🏼\u200d🎤", - "woman_singer_medium_skin_tone": "👩🏽\u200d🎤", - "woman_student": "👩\u200d🎓", - "woman_student_dark_skin_tone": "👩🏿\u200d🎓", - "woman_student_light_skin_tone": "👩🏻\u200d🎓", - "woman_student_medium-dark_skin_tone": "👩🏾\u200d🎓", - "woman_student_medium-light_skin_tone": "👩🏼\u200d🎓", - "woman_student_medium_skin_tone": "👩🏽\u200d🎓", - "woman_surfing": "🏄\u200d♀️", - "woman_surfing_dark_skin_tone": "🏄🏿\u200d♀️", - "woman_surfing_light_skin_tone": "🏄🏻\u200d♀️", - "woman_surfing_medium-dark_skin_tone": "🏄🏾\u200d♀️", - "woman_surfing_medium-light_skin_tone": "🏄🏼\u200d♀️", - "woman_surfing_medium_skin_tone": "🏄🏽\u200d♀️", - "woman_swimming": "🏊\u200d♀️", - "woman_swimming_dark_skin_tone": "🏊🏿\u200d♀️", - "woman_swimming_light_skin_tone": "🏊🏻\u200d♀️", - "woman_swimming_medium-dark_skin_tone": "🏊🏾\u200d♀️", - "woman_swimming_medium-light_skin_tone": "🏊🏼\u200d♀️", - "woman_swimming_medium_skin_tone": "🏊🏽\u200d♀️", - "woman_teacher": "👩\u200d🏫", - "woman_teacher_dark_skin_tone": "👩🏿\u200d🏫", - "woman_teacher_light_skin_tone": "👩🏻\u200d🏫", - "woman_teacher_medium-dark_skin_tone": "👩🏾\u200d🏫", - "woman_teacher_medium-light_skin_tone": "👩🏼\u200d🏫", - "woman_teacher_medium_skin_tone": "👩🏽\u200d🏫", - "woman_technologist": "👩\u200d💻", - "woman_technologist_dark_skin_tone": "👩🏿\u200d💻", - "woman_technologist_light_skin_tone": "👩🏻\u200d💻", - "woman_technologist_medium-dark_skin_tone": "👩🏾\u200d💻", - "woman_technologist_medium-light_skin_tone": "👩🏼\u200d💻", - "woman_technologist_medium_skin_tone": "👩🏽\u200d💻", - "woman_tipping_hand": "💁\u200d♀️", - "woman_tipping_hand_dark_skin_tone": "💁🏿\u200d♀️", - "woman_tipping_hand_light_skin_tone": "💁🏻\u200d♀️", - "woman_tipping_hand_medium-dark_skin_tone": "💁🏾\u200d♀️", - "woman_tipping_hand_medium-light_skin_tone": "💁🏼\u200d♀️", - "woman_tipping_hand_medium_skin_tone": "💁🏽\u200d♀️", - "woman_vampire": "🧛\u200d♀️", - "woman_vampire_dark_skin_tone": "🧛🏿\u200d♀️", - "woman_vampire_light_skin_tone": "🧛🏻\u200d♀️", - "woman_vampire_medium-dark_skin_tone": "🧛🏾\u200d♀️", - "woman_vampire_medium-light_skin_tone": "🧛🏼\u200d♀️", - "woman_vampire_medium_skin_tone": "🧛🏽\u200d♀️", - "woman_walking": "🚶\u200d♀️", - "woman_walking_dark_skin_tone": "🚶🏿\u200d♀️", - "woman_walking_light_skin_tone": "🚶🏻\u200d♀️", - "woman_walking_medium-dark_skin_tone": "🚶🏾\u200d♀️", - "woman_walking_medium-light_skin_tone": "🚶🏼\u200d♀️", - 
"woman_walking_medium_skin_tone": "🚶🏽\u200d♀️", - "woman_wearing_turban": "👳\u200d♀️", - "woman_wearing_turban_dark_skin_tone": "👳🏿\u200d♀️", - "woman_wearing_turban_light_skin_tone": "👳🏻\u200d♀️", - "woman_wearing_turban_medium-dark_skin_tone": "👳🏾\u200d♀️", - "woman_wearing_turban_medium-light_skin_tone": "👳🏼\u200d♀️", - "woman_wearing_turban_medium_skin_tone": "👳🏽\u200d♀️", - "woman_with_headscarf": "🧕", - "woman_with_headscarf_dark_skin_tone": "🧕🏿", - "woman_with_headscarf_light_skin_tone": "🧕🏻", - "woman_with_headscarf_medium-dark_skin_tone": "🧕🏾", - "woman_with_headscarf_medium-light_skin_tone": "🧕🏼", - "woman_with_headscarf_medium_skin_tone": "🧕🏽", - "woman_with_probing_cane": "👩\u200d🦯", - "woman_zombie": "🧟\u200d♀️", - "woman’s_boot": "👢", - "woman’s_clothes": "👚", - "woman’s_hat": "👒", - "woman’s_sandal": "👡", - "women_with_bunny_ears": "👯\u200d♀️", - "women_wrestling": "🤼\u200d♀️", - "women’s_room": "🚺", - "woozy_face": "🥴", - "world_map": "🗺", - "worried_face": "😟", - "wrapped_gift": "🎁", - "wrench": "🔧", - "writing_hand": "✍", - "writing_hand_dark_skin_tone": "✍🏿", - "writing_hand_light_skin_tone": "✍🏻", - "writing_hand_medium-dark_skin_tone": "✍🏾", - "writing_hand_medium-light_skin_tone": "✍🏼", - "writing_hand_medium_skin_tone": "✍🏽", - "yarn": "🧶", - "yawning_face": "🥱", - "yellow_circle": "🟡", - "yellow_heart": "💛", - "yellow_square": "🟨", - "yen_banknote": "💴", - "yo-yo": "🪀", - "yin_yang": "☯", - "zany_face": "🤪", - "zebra": "🦓", - "zipper-mouth_face": "🤐", - "zombie": "🧟", - "zzz": "💤", - "åland_islands": "🇦🇽", - "keycap_asterisk": "*⃣", - "keycap_digit_eight": "8⃣", - "keycap_digit_five": "5⃣", - "keycap_digit_four": "4⃣", - "keycap_digit_nine": "9⃣", - "keycap_digit_one": "1⃣", - "keycap_digit_seven": "7⃣", - "keycap_digit_six": "6⃣", - "keycap_digit_three": "3⃣", - "keycap_digit_two": "2⃣", - "keycap_digit_zero": "0⃣", - "keycap_number_sign": "#⃣", - "light_skin_tone": "🏻", - "medium_light_skin_tone": "🏼", - "medium_skin_tone": "🏽", - "medium_dark_skin_tone": "🏾", - "dark_skin_tone": "🏿", - "regional_indicator_symbol_letter_a": "🇦", - "regional_indicator_symbol_letter_b": "🇧", - "regional_indicator_symbol_letter_c": "🇨", - "regional_indicator_symbol_letter_d": "🇩", - "regional_indicator_symbol_letter_e": "🇪", - "regional_indicator_symbol_letter_f": "🇫", - "regional_indicator_symbol_letter_g": "🇬", - "regional_indicator_symbol_letter_h": "🇭", - "regional_indicator_symbol_letter_i": "🇮", - "regional_indicator_symbol_letter_j": "🇯", - "regional_indicator_symbol_letter_k": "🇰", - "regional_indicator_symbol_letter_l": "🇱", - "regional_indicator_symbol_letter_m": "🇲", - "regional_indicator_symbol_letter_n": "🇳", - "regional_indicator_symbol_letter_o": "🇴", - "regional_indicator_symbol_letter_p": "🇵", - "regional_indicator_symbol_letter_q": "🇶", - "regional_indicator_symbol_letter_r": "🇷", - "regional_indicator_symbol_letter_s": "🇸", - "regional_indicator_symbol_letter_t": "🇹", - "regional_indicator_symbol_letter_u": "🇺", - "regional_indicator_symbol_letter_v": "🇻", - "regional_indicator_symbol_letter_w": "🇼", - "regional_indicator_symbol_letter_x": "🇽", - "regional_indicator_symbol_letter_y": "🇾", - "regional_indicator_symbol_letter_z": "🇿", - "airplane_arriving": "🛬", - "space_invader": "👾", - "football": "🏈", - "anger": "💢", - "angry": "😠", - "anguished": "😧", - "signal_strength": "📶", - "arrows_counterclockwise": "🔄", - "arrow_heading_down": "⤵", - "arrow_heading_up": "⤴", - "art": "🎨", - "astonished": "😲", - "athletic_shoe": "👟", - "atm": "🏧", - "car": "🚗", - "red_car": 
"🚗", - "angel": "👼", - "back": "🔙", - "badminton_racquet_and_shuttlecock": "🏸", - "dollar": "💵", - "euro": "💶", - "pound": "💷", - "yen": "💴", - "barber": "💈", - "bath": "🛀", - "bear": "🐻", - "heartbeat": "💓", - "beer": "🍺", - "no_bell": "🔕", - "bento": "🍱", - "bike": "🚲", - "bicyclist": "🚴", - "8ball": "🎱", - "biohazard_sign": "☣", - "birthday": "🎂", - "black_circle_for_record": "⏺", - "clubs": "♣", - "diamonds": "♦", - "arrow_double_down": "⏬", - "hearts": "♥", - "rewind": "⏪", - "black_left__pointing_double_triangle_with_vertical_bar": "⏮", - "arrow_backward": "◀", - "black_medium_small_square": "◾", - "question": "❓", - "fast_forward": "⏩", - "black_right__pointing_double_triangle_with_vertical_bar": "⏭", - "arrow_forward": "▶", - "black_right__pointing_triangle_with_double_vertical_bar": "⏯", - "arrow_right": "➡", - "spades": "♠", - "black_square_for_stop": "⏹", - "sunny": "☀", - "phone": "☎", - "recycle": "♻", - "arrow_double_up": "⏫", - "busstop": "🚏", - "date": "📅", - "flags": "🎏", - "cat2": "🐈", - "joy_cat": "😹", - "smirk_cat": "😼", - "chart_with_downwards_trend": "📉", - "chart_with_upwards_trend": "📈", - "chart": "💹", - "mega": "📣", - "checkered_flag": "🏁", - "accept": "🉑", - "ideograph_advantage": "🉐", - "congratulations": "㊗", - "secret": "㊙", - "m": "Ⓜ", - "city_sunset": "🌆", - "clapper": "🎬", - "clap": "👏", - "beers": "🍻", - "clock830": "🕣", - "clock8": "🕗", - "clock1130": "🕦", - "clock11": "🕚", - "clock530": "🕠", - "clock5": "🕔", - "clock430": "🕟", - "clock4": "🕓", - "clock930": "🕤", - "clock9": "🕘", - "clock130": "🕜", - "clock1": "🕐", - "clock730": "🕢", - "clock7": "🕖", - "clock630": "🕡", - "clock6": "🕕", - "clock1030": "🕥", - "clock10": "🕙", - "clock330": "🕞", - "clock3": "🕒", - "clock1230": "🕧", - "clock12": "🕛", - "clock230": "🕝", - "clock2": "🕑", - "arrows_clockwise": "🔃", - "repeat": "🔁", - "repeat_one": "🔂", - "closed_lock_with_key": "🔐", - "mailbox_closed": "📪", - "mailbox": "📫", - "cloud_with_tornado": "🌪", - "cocktail": "🍸", - "boom": "💥", - "compression": "🗜", - "confounded": "😖", - "confused": "😕", - "rice": "🍚", - "cow2": "🐄", - "cricket_bat_and_ball": "🏏", - "x": "❌", - "cry": "😢", - "curry": "🍛", - "dagger_knife": "🗡", - "dancer": "💃", - "dark_sunglasses": "🕶", - "dash": "💨", - "truck": "🚚", - "derelict_house_building": "🏚", - "diamond_shape_with_a_dot_inside": "💠", - "dart": "🎯", - "disappointed_relieved": "😥", - "disappointed": "😞", - "do_not_litter": "🚯", - "dog2": "🐕", - "flipper": "🐬", - "loop": "➿", - "bangbang": "‼", - "double_vertical_bar": "⏸", - "dove_of_peace": "🕊", - "small_red_triangle_down": "🔻", - "arrow_down_small": "🔽", - "arrow_down": "⬇", - "dromedary_camel": "🐪", - "e__mail": "📧", - "corn": "🌽", - "ear_of_rice": "🌾", - "earth_americas": "🌎", - "earth_asia": "🌏", - "earth_africa": "🌍", - "eight_pointed_black_star": "✴", - "eight_spoked_asterisk": "✳", - "eject_symbol": "⏏", - "bulb": "💡", - "emoji_modifier_fitzpatrick_type__1__2": "🏻", - "emoji_modifier_fitzpatrick_type__3": "🏼", - "emoji_modifier_fitzpatrick_type__4": "🏽", - "emoji_modifier_fitzpatrick_type__5": "🏾", - "emoji_modifier_fitzpatrick_type__6": "🏿", - "end": "🔚", - "email": "✉", - "european_castle": "🏰", - "european_post_office": "🏤", - "interrobang": "⁉", - "expressionless": "😑", - "eyeglasses": "👓", - "massage": "💆", - "yum": "😋", - "scream": "😱", - "kissing_heart": "😘", - "sweat": "😓", - "face_with_head__bandage": "🤕", - "triumph": "😤", - "mask": "😷", - "no_good": "🙅", - "ok_woman": "🙆", - "open_mouth": "😮", - "cold_sweat": "😰", - "stuck_out_tongue": "😛", - 
"stuck_out_tongue_closed_eyes": "😝", - "stuck_out_tongue_winking_eye": "😜", - "joy": "😂", - "no_mouth": "😶", - "santa": "🎅", - "fax": "📠", - "fearful": "😨", - "field_hockey_stick_and_ball": "🏑", - "first_quarter_moon_with_face": "🌛", - "fish_cake": "🍥", - "fishing_pole_and_fish": "🎣", - "facepunch": "👊", - "punch": "👊", - "flag_for_afghanistan": "🇦🇫", - "flag_for_albania": "🇦🇱", - "flag_for_algeria": "🇩🇿", - "flag_for_american_samoa": "🇦🇸", - "flag_for_andorra": "🇦🇩", - "flag_for_angola": "🇦🇴", - "flag_for_anguilla": "🇦🇮", - "flag_for_antarctica": "🇦🇶", - "flag_for_antigua_&_barbuda": "🇦🇬", - "flag_for_argentina": "🇦🇷", - "flag_for_armenia": "🇦🇲", - "flag_for_aruba": "🇦🇼", - "flag_for_ascension_island": "🇦🇨", - "flag_for_australia": "🇦🇺", - "flag_for_austria": "🇦🇹", - "flag_for_azerbaijan": "🇦🇿", - "flag_for_bahamas": "🇧🇸", - "flag_for_bahrain": "🇧🇭", - "flag_for_bangladesh": "🇧🇩", - "flag_for_barbados": "🇧🇧", - "flag_for_belarus": "🇧🇾", - "flag_for_belgium": "🇧🇪", - "flag_for_belize": "🇧🇿", - "flag_for_benin": "🇧🇯", - "flag_for_bermuda": "🇧🇲", - "flag_for_bhutan": "🇧🇹", - "flag_for_bolivia": "🇧🇴", - "flag_for_bosnia_&_herzegovina": "🇧🇦", - "flag_for_botswana": "🇧🇼", - "flag_for_bouvet_island": "🇧🇻", - "flag_for_brazil": "🇧🇷", - "flag_for_british_indian_ocean_territory": "🇮🇴", - "flag_for_british_virgin_islands": "🇻🇬", - "flag_for_brunei": "🇧🇳", - "flag_for_bulgaria": "🇧🇬", - "flag_for_burkina_faso": "🇧🇫", - "flag_for_burundi": "🇧🇮", - "flag_for_cambodia": "🇰🇭", - "flag_for_cameroon": "🇨🇲", - "flag_for_canada": "🇨🇦", - "flag_for_canary_islands": "🇮🇨", - "flag_for_cape_verde": "🇨🇻", - "flag_for_caribbean_netherlands": "🇧🇶", - "flag_for_cayman_islands": "🇰🇾", - "flag_for_central_african_republic": "🇨🇫", - "flag_for_ceuta_&_melilla": "🇪🇦", - "flag_for_chad": "🇹🇩", - "flag_for_chile": "🇨🇱", - "flag_for_china": "🇨🇳", - "flag_for_christmas_island": "🇨🇽", - "flag_for_clipperton_island": "🇨🇵", - "flag_for_cocos__islands": "🇨🇨", - "flag_for_colombia": "🇨🇴", - "flag_for_comoros": "🇰🇲", - "flag_for_congo____brazzaville": "🇨🇬", - "flag_for_congo____kinshasa": "🇨🇩", - "flag_for_cook_islands": "🇨🇰", - "flag_for_costa_rica": "🇨🇷", - "flag_for_croatia": "🇭🇷", - "flag_for_cuba": "🇨🇺", - "flag_for_curaçao": "🇨🇼", - "flag_for_cyprus": "🇨🇾", - "flag_for_czech_republic": "🇨🇿", - "flag_for_côte_d’ivoire": "🇨🇮", - "flag_for_denmark": "🇩🇰", - "flag_for_diego_garcia": "🇩🇬", - "flag_for_djibouti": "🇩🇯", - "flag_for_dominica": "🇩🇲", - "flag_for_dominican_republic": "🇩🇴", - "flag_for_ecuador": "🇪🇨", - "flag_for_egypt": "🇪🇬", - "flag_for_el_salvador": "🇸🇻", - "flag_for_equatorial_guinea": "🇬🇶", - "flag_for_eritrea": "🇪🇷", - "flag_for_estonia": "🇪🇪", - "flag_for_ethiopia": "🇪🇹", - "flag_for_european_union": "🇪🇺", - "flag_for_falkland_islands": "🇫🇰", - "flag_for_faroe_islands": "🇫🇴", - "flag_for_fiji": "🇫🇯", - "flag_for_finland": "🇫🇮", - "flag_for_france": "🇫🇷", - "flag_for_french_guiana": "🇬🇫", - "flag_for_french_polynesia": "🇵🇫", - "flag_for_french_southern_territories": "🇹🇫", - "flag_for_gabon": "🇬🇦", - "flag_for_gambia": "🇬🇲", - "flag_for_georgia": "🇬🇪", - "flag_for_germany": "🇩🇪", - "flag_for_ghana": "🇬🇭", - "flag_for_gibraltar": "🇬🇮", - "flag_for_greece": "🇬🇷", - "flag_for_greenland": "🇬🇱", - "flag_for_grenada": "🇬🇩", - "flag_for_guadeloupe": "🇬🇵", - "flag_for_guam": "🇬🇺", - "flag_for_guatemala": "🇬🇹", - "flag_for_guernsey": "🇬🇬", - "flag_for_guinea": "🇬🇳", - "flag_for_guinea__bissau": "🇬🇼", - "flag_for_guyana": "🇬🇾", - "flag_for_haiti": "🇭🇹", - "flag_for_heard_&_mcdonald_islands": "🇭🇲", - "flag_for_honduras": 
"🇭🇳", - "flag_for_hong_kong": "🇭🇰", - "flag_for_hungary": "🇭🇺", - "flag_for_iceland": "🇮🇸", - "flag_for_india": "🇮🇳", - "flag_for_indonesia": "🇮🇩", - "flag_for_iran": "🇮🇷", - "flag_for_iraq": "🇮🇶", - "flag_for_ireland": "🇮🇪", - "flag_for_isle_of_man": "🇮🇲", - "flag_for_israel": "🇮🇱", - "flag_for_italy": "🇮🇹", - "flag_for_jamaica": "🇯🇲", - "flag_for_japan": "🇯🇵", - "flag_for_jersey": "🇯🇪", - "flag_for_jordan": "🇯🇴", - "flag_for_kazakhstan": "🇰🇿", - "flag_for_kenya": "🇰🇪", - "flag_for_kiribati": "🇰🇮", - "flag_for_kosovo": "🇽🇰", - "flag_for_kuwait": "🇰🇼", - "flag_for_kyrgyzstan": "🇰🇬", - "flag_for_laos": "🇱🇦", - "flag_for_latvia": "🇱🇻", - "flag_for_lebanon": "🇱🇧", - "flag_for_lesotho": "🇱🇸", - "flag_for_liberia": "🇱🇷", - "flag_for_libya": "🇱🇾", - "flag_for_liechtenstein": "🇱🇮", - "flag_for_lithuania": "🇱🇹", - "flag_for_luxembourg": "🇱🇺", - "flag_for_macau": "🇲🇴", - "flag_for_macedonia": "🇲🇰", - "flag_for_madagascar": "🇲🇬", - "flag_for_malawi": "🇲🇼", - "flag_for_malaysia": "🇲🇾", - "flag_for_maldives": "🇲🇻", - "flag_for_mali": "🇲🇱", - "flag_for_malta": "🇲🇹", - "flag_for_marshall_islands": "🇲🇭", - "flag_for_martinique": "🇲🇶", - "flag_for_mauritania": "🇲🇷", - "flag_for_mauritius": "🇲🇺", - "flag_for_mayotte": "🇾🇹", - "flag_for_mexico": "🇲🇽", - "flag_for_micronesia": "🇫🇲", - "flag_for_moldova": "🇲🇩", - "flag_for_monaco": "🇲🇨", - "flag_for_mongolia": "🇲🇳", - "flag_for_montenegro": "🇲🇪", - "flag_for_montserrat": "🇲🇸", - "flag_for_morocco": "🇲🇦", - "flag_for_mozambique": "🇲🇿", - "flag_for_myanmar": "🇲🇲", - "flag_for_namibia": "🇳🇦", - "flag_for_nauru": "🇳🇷", - "flag_for_nepal": "🇳🇵", - "flag_for_netherlands": "🇳🇱", - "flag_for_new_caledonia": "🇳🇨", - "flag_for_new_zealand": "🇳🇿", - "flag_for_nicaragua": "🇳🇮", - "flag_for_niger": "🇳🇪", - "flag_for_nigeria": "🇳🇬", - "flag_for_niue": "🇳🇺", - "flag_for_norfolk_island": "🇳🇫", - "flag_for_north_korea": "🇰🇵", - "flag_for_northern_mariana_islands": "🇲🇵", - "flag_for_norway": "🇳🇴", - "flag_for_oman": "🇴🇲", - "flag_for_pakistan": "🇵🇰", - "flag_for_palau": "🇵🇼", - "flag_for_palestinian_territories": "🇵🇸", - "flag_for_panama": "🇵🇦", - "flag_for_papua_new_guinea": "🇵🇬", - "flag_for_paraguay": "🇵🇾", - "flag_for_peru": "🇵🇪", - "flag_for_philippines": "🇵🇭", - "flag_for_pitcairn_islands": "🇵🇳", - "flag_for_poland": "🇵🇱", - "flag_for_portugal": "🇵🇹", - "flag_for_puerto_rico": "🇵🇷", - "flag_for_qatar": "🇶🇦", - "flag_for_romania": "🇷🇴", - "flag_for_russia": "🇷🇺", - "flag_for_rwanda": "🇷🇼", - "flag_for_réunion": "🇷🇪", - "flag_for_samoa": "🇼🇸", - "flag_for_san_marino": "🇸🇲", - "flag_for_saudi_arabia": "🇸🇦", - "flag_for_senegal": "🇸🇳", - "flag_for_serbia": "🇷🇸", - "flag_for_seychelles": "🇸🇨", - "flag_for_sierra_leone": "🇸🇱", - "flag_for_singapore": "🇸🇬", - "flag_for_sint_maarten": "🇸🇽", - "flag_for_slovakia": "🇸🇰", - "flag_for_slovenia": "🇸🇮", - "flag_for_solomon_islands": "🇸🇧", - "flag_for_somalia": "🇸🇴", - "flag_for_south_africa": "🇿🇦", - "flag_for_south_georgia_&_south_sandwich_islands": "🇬🇸", - "flag_for_south_korea": "🇰🇷", - "flag_for_south_sudan": "🇸🇸", - "flag_for_spain": "🇪🇸", - "flag_for_sri_lanka": "🇱🇰", - "flag_for_st._barthélemy": "🇧🇱", - "flag_for_st._helena": "🇸🇭", - "flag_for_st._kitts_&_nevis": "🇰🇳", - "flag_for_st._lucia": "🇱🇨", - "flag_for_st._martin": "🇲🇫", - "flag_for_st._pierre_&_miquelon": "🇵🇲", - "flag_for_st._vincent_&_grenadines": "🇻🇨", - "flag_for_sudan": "🇸🇩", - "flag_for_suriname": "🇸🇷", - "flag_for_svalbard_&_jan_mayen": "🇸🇯", - "flag_for_swaziland": "🇸🇿", - "flag_for_sweden": "🇸🇪", - "flag_for_switzerland": "🇨🇭", - "flag_for_syria": "🇸🇾", - 
"flag_for_são_tomé_&_príncipe": "🇸🇹", - "flag_for_taiwan": "🇹🇼", - "flag_for_tajikistan": "🇹🇯", - "flag_for_tanzania": "🇹🇿", - "flag_for_thailand": "🇹🇭", - "flag_for_timor__leste": "🇹🇱", - "flag_for_togo": "🇹🇬", - "flag_for_tokelau": "🇹🇰", - "flag_for_tonga": "🇹🇴", - "flag_for_trinidad_&_tobago": "🇹🇹", - "flag_for_tristan_da_cunha": "🇹🇦", - "flag_for_tunisia": "🇹🇳", - "flag_for_turkey": "🇹🇷", - "flag_for_turkmenistan": "🇹🇲", - "flag_for_turks_&_caicos_islands": "🇹🇨", - "flag_for_tuvalu": "🇹🇻", - "flag_for_u.s._outlying_islands": "🇺🇲", - "flag_for_u.s._virgin_islands": "🇻🇮", - "flag_for_uganda": "🇺🇬", - "flag_for_ukraine": "🇺🇦", - "flag_for_united_arab_emirates": "🇦🇪", - "flag_for_united_kingdom": "🇬🇧", - "flag_for_united_states": "🇺🇸", - "flag_for_uruguay": "🇺🇾", - "flag_for_uzbekistan": "🇺🇿", - "flag_for_vanuatu": "🇻🇺", - "flag_for_vatican_city": "🇻🇦", - "flag_for_venezuela": "🇻🇪", - "flag_for_vietnam": "🇻🇳", - "flag_for_wallis_&_futuna": "🇼🇫", - "flag_for_western_sahara": "🇪🇭", - "flag_for_yemen": "🇾🇪", - "flag_for_zambia": "🇿🇲", - "flag_for_zimbabwe": "🇿🇼", - "flag_for_åland_islands": "🇦🇽", - "golf": "⛳", - "fleur__de__lis": "⚜", - "muscle": "💪", - "flushed": "😳", - "frame_with_picture": "🖼", - "fries": "🍟", - "frog": "🐸", - "hatched_chick": "🐥", - "frowning": "😦", - "fuelpump": "⛽", - "full_moon_with_face": "🌝", - "gem": "💎", - "star2": "🌟", - "golfer": "🏌", - "mortar_board": "🎓", - "grimacing": "😬", - "smile_cat": "😸", - "grinning": "😀", - "grin": "😁", - "heartpulse": "💗", - "guardsman": "💂", - "haircut": "💇", - "hamster": "🐹", - "raising_hand": "🙋", - "headphones": "🎧", - "hear_no_evil": "🙉", - "cupid": "💘", - "gift_heart": "💝", - "heart": "❤", - "exclamation": "❗", - "heavy_exclamation_mark": "❗", - "heavy_heart_exclamation_mark_ornament": "❣", - "o": "⭕", - "helm_symbol": "⎈", - "helmet_with_white_cross": "⛑", - "high_heel": "👠", - "bullettrain_side": "🚄", - "bullettrain_front": "🚅", - "high_brightness": "🔆", - "zap": "⚡", - "hocho": "🔪", - "knife": "🔪", - "bee": "🐝", - "traffic_light": "🚥", - "racehorse": "🐎", - "coffee": "☕", - "hotsprings": "♨", - "hourglass": "⌛", - "hourglass_flowing_sand": "⏳", - "house_buildings": "🏘", - "100": "💯", - "hushed": "😯", - "ice_hockey_stick_and_puck": "🏒", - "imp": "👿", - "information_desk_person": "💁", - "information_source": "ℹ", - "capital_abcd": "🔠", - "abc": "🔤", - "abcd": "🔡", - "1234": "🔢", - "symbols": "🔣", - "izakaya_lantern": "🏮", - "lantern": "🏮", - "jack_o_lantern": "🎃", - "dolls": "🎎", - "japanese_goblin": "👺", - "japanese_ogre": "👹", - "beginner": "🔰", - "zero": "0️⃣", - "one": "1️⃣", - "ten": "🔟", - "two": "2️⃣", - "three": "3️⃣", - "four": "4️⃣", - "five": "5️⃣", - "six": "6️⃣", - "seven": "7️⃣", - "eight": "8️⃣", - "nine": "9️⃣", - "couplekiss": "💏", - "kissing_cat": "😽", - "kissing": "😗", - "kissing_closed_eyes": "😚", - "kissing_smiling_eyes": "😙", - "beetle": "🐞", - "large_blue_circle": "🔵", - "last_quarter_moon_with_face": "🌜", - "leaves": "🍃", - "mag": "🔍", - "left_right_arrow": "↔", - "leftwards_arrow_with_hook": "↩", - "arrow_left": "⬅", - "lock": "🔒", - "lock_with_ink_pen": "🔏", - "sob": "😭", - "low_brightness": "🔅", - "lower_left_ballpoint_pen": "🖊", - "lower_left_crayon": "🖍", - "lower_left_fountain_pen": "🖋", - "lower_left_paintbrush": "🖌", - "mahjong": "🀄", - "couple": "👫", - "man_in_business_suit_levitating": "🕴", - "man_with_gua_pi_mao": "👲", - "man_with_turban": "👳", - "mans_shoe": "👞", - "shoe": "👞", - "menorah_with_nine_branches": "🕎", - "mens": "🚹", - "minidisc": "💽", - "iphone": "📱", - "calling": "📲", - 
"money__mouth_face": "🤑", - "moneybag": "💰", - "rice_scene": "🎑", - "mountain_bicyclist": "🚵", - "mouse2": "🐁", - "lips": "👄", - "moyai": "🗿", - "notes": "🎶", - "nail_care": "💅", - "ab": "🆎", - "negative_squared_cross_mark": "❎", - "a": "🅰", - "b": "🅱", - "o2": "🅾", - "parking": "🅿", - "new_moon_with_face": "🌚", - "no_entry_sign": "🚫", - "underage": "🔞", - "non__potable_water": "🚱", - "arrow_upper_right": "↗", - "arrow_upper_left": "↖", - "office": "🏢", - "older_man": "👴", - "older_woman": "👵", - "om_symbol": "🕉", - "on": "🔛", - "book": "📖", - "unlock": "🔓", - "mailbox_with_no_mail": "📭", - "mailbox_with_mail": "📬", - "cd": "💿", - "tada": "🎉", - "feet": "🐾", - "walking": "🚶", - "pencil2": "✏", - "pensive": "😔", - "persevere": "😣", - "bow": "🙇", - "raised_hands": "🙌", - "person_with_ball": "⛹", - "person_with_blond_hair": "👱", - "pray": "🙏", - "person_with_pouting_face": "🙎", - "computer": "💻", - "pig2": "🐖", - "hankey": "💩", - "poop": "💩", - "shit": "💩", - "bamboo": "🎍", - "gun": "🔫", - "black_joker": "🃏", - "rotating_light": "🚨", - "cop": "👮", - "stew": "🍲", - "pouch": "👝", - "pouting_cat": "😾", - "rage": "😡", - "put_litter_in_its_place": "🚮", - "rabbit2": "🐇", - "racing_motorcycle": "🏍", - "radioactive_sign": "☢", - "fist": "✊", - "hand": "✋", - "raised_hand_with_fingers_splayed": "🖐", - "raised_hand_with_part_between_middle_and_ring_fingers": "🖖", - "blue_car": "🚙", - "apple": "🍎", - "relieved": "😌", - "reversed_hand_with_middle_finger_extended": "🖕", - "mag_right": "🔎", - "arrow_right_hook": "↪", - "sweet_potato": "🍠", - "robot": "🤖", - "rolled__up_newspaper": "🗞", - "rowboat": "🚣", - "runner": "🏃", - "running": "🏃", - "running_shirt_with_sash": "🎽", - "boat": "⛵", - "scales": "⚖", - "school_satchel": "🎒", - "scorpius": "♏", - "see_no_evil": "🙈", - "sheep": "🐑", - "stars": "🌠", - "cake": "🍰", - "six_pointed_star": "🔯", - "ski": "🎿", - "sleeping_accommodation": "🛌", - "sleeping": "😴", - "sleepy": "😪", - "sleuth_or_spy": "🕵", - "heart_eyes_cat": "😻", - "smiley_cat": "😺", - "innocent": "😇", - "heart_eyes": "😍", - "smiling_imp": "😈", - "smiley": "😃", - "sweat_smile": "😅", - "smile": "😄", - "laughing": "😆", - "satisfied": "😆", - "blush": "😊", - "smirk": "😏", - "smoking": "🚬", - "snow_capped_mountain": "🏔", - "soccer": "⚽", - "icecream": "🍦", - "soon": "🔜", - "arrow_lower_right": "↘", - "arrow_lower_left": "↙", - "speak_no_evil": "🙊", - "speaker": "🔈", - "mute": "🔇", - "sound": "🔉", - "loud_sound": "🔊", - "speaking_head_in_silhouette": "🗣", - "spiral_calendar_pad": "🗓", - "spiral_note_pad": "🗒", - "shell": "🐚", - "sweat_drops": "💦", - "u5272": "🈹", - "u5408": "🈴", - "u55b6": "🈺", - "u6307": "🈯", - "u6708": "🈷", - "u6709": "🈶", - "u6e80": "🈵", - "u7121": "🈚", - "u7533": "🈸", - "u7981": "🈲", - "u7a7a": "🈳", - "cl": "🆑", - "cool": "🆒", - "free": "🆓", - "id": "🆔", - "koko": "🈁", - "sa": "🈂", - "new": "🆕", - "ng": "🆖", - "ok": "🆗", - "sos": "🆘", - "up": "🆙", - "vs": "🆚", - "steam_locomotive": "🚂", - "ramen": "🍜", - "partly_sunny": "⛅", - "city_sunrise": "🌇", - "surfer": "🏄", - "swimmer": "🏊", - "shirt": "👕", - "tshirt": "👕", - "table_tennis_paddle_and_ball": "🏓", - "tea": "🍵", - "tv": "📺", - "three_button_mouse": "🖱", - "+1": "👍", - "thumbsup": "👍", - "__1": "👎", - "-1": "👎", - "thumbsdown": "👎", - "thunder_cloud_and_rain": "⛈", - "tiger2": "🐅", - "tophat": "🎩", - "top": "🔝", - "tm": "™", - "train2": "🚆", - "triangular_flag_on_post": "🚩", - "trident": "🔱", - "twisted_rightwards_arrows": "🔀", - "unamused": "😒", - "small_red_triangle": "🔺", - "arrow_up_small": "🔼", - "arrow_up_down": "↕", - 
"upside__down_face": "🙃", - "arrow_up": "⬆", - "v": "✌", - "vhs": "📼", - "wc": "🚾", - "ocean": "🌊", - "waving_black_flag": "🏴", - "wave": "👋", - "waving_white_flag": "🏳", - "moon": "🌔", - "scream_cat": "🙀", - "weary": "😩", - "weight_lifter": "🏋", - "whale2": "🐋", - "wheelchair": "♿", - "point_down": "👇", - "grey_exclamation": "❕", - "white_frowning_face": "☹", - "white_check_mark": "✅", - "point_left": "👈", - "white_medium_small_square": "◽", - "star": "⭐", - "grey_question": "❔", - "point_right": "👉", - "relaxed": "☺", - "white_sun_behind_cloud": "🌥", - "white_sun_behind_cloud_with_rain": "🌦", - "white_sun_with_small_cloud": "🌤", - "point_up_2": "👆", - "point_up": "☝", - "wind_blowing_face": "🌬", - "wink": "😉", - "wolf": "🐺", - "dancers": "👯", - "boot": "👢", - "womans_clothes": "👚", - "womans_hat": "👒", - "sandal": "👡", - "womens": "🚺", - "worried": "😟", - "gift": "🎁", - "zipper__mouth_face": "🤐", - "regional_indicator_a": "🇦", - "regional_indicator_b": "🇧", - "regional_indicator_c": "🇨", - "regional_indicator_d": "🇩", - "regional_indicator_e": "🇪", - "regional_indicator_f": "🇫", - "regional_indicator_g": "🇬", - "regional_indicator_h": "🇭", - "regional_indicator_i": "🇮", - "regional_indicator_j": "🇯", - "regional_indicator_k": "🇰", - "regional_indicator_l": "🇱", - "regional_indicator_m": "🇲", - "regional_indicator_n": "🇳", - "regional_indicator_o": "🇴", - "regional_indicator_p": "🇵", - "regional_indicator_q": "🇶", - "regional_indicator_r": "🇷", - "regional_indicator_s": "🇸", - "regional_indicator_t": "🇹", - "regional_indicator_u": "🇺", - "regional_indicator_v": "🇻", - "regional_indicator_w": "🇼", - "regional_indicator_x": "🇽", - "regional_indicator_y": "🇾", - "regional_indicator_z": "🇿", -} diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/importlib_resources/_itertools.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/importlib_resources/_itertools.py deleted file mode 100644 index cce05582ffc6fe6d72027194f4ccc44ee42f1fcd..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/importlib_resources/_itertools.py +++ /dev/null @@ -1,35 +0,0 @@ -from itertools import filterfalse - -from typing import ( - Callable, - Iterable, - Iterator, - Optional, - Set, - TypeVar, - Union, -) - -# Type and type variable definitions -_T = TypeVar('_T') -_U = TypeVar('_U') - - -def unique_everseen( - iterable: Iterable[_T], key: Optional[Callable[[_T], _U]] = None -) -> Iterator[_T]: - "List unique elements, preserving order. Remember all elements ever seen." 
- # unique_everseen('AAAABBBCCDAABBB') --> A B C D - # unique_everseen('ABBCcAD', str.lower) --> A B C D - seen: Set[Union[_T, _U]] = set() - seen_add = seen.add - if key is None: - for element in filterfalse(seen.__contains__, iterable): - seen_add(element) - yield element - else: - for element in iterable: - k = key(element) - if k not in seen: - seen_add(k) - yield element diff --git a/spaces/Tej3/DepthEstimation/models/unet_resnet18.py b/spaces/Tej3/DepthEstimation/models/unet_resnet18.py deleted file mode 100644 index e22ed34e751378992e29e02df973e7ef89c16f36..0000000000000000000000000000000000000000 --- a/spaces/Tej3/DepthEstimation/models/unet_resnet18.py +++ /dev/null @@ -1,94 +0,0 @@ -import torch.nn as nn -from torchinfo import summary -import torchvision.models -import torch - - -def convrelu(in_channels, out_channels, kernel, padding): - return nn.Sequential( - nn.Conv2d(in_channels, out_channels, kernel, padding=padding), - nn.ReLU(inplace=True), - ) - - -class ResNet18UNet(nn.Module): - def __init__(self, max_depth, n_class=1): - super().__init__() - - self.base_model = torchvision.models.resnet18(pretrained=True) - self.base_layers = list(self.base_model.children()) - - self.layer0 = nn.Sequential(*self.base_layers[:3]) # size=(N, 64, x.H/2, x.W/2) - self.layer0_1x1 = convrelu(64, 64, 1, 0) - self.layer1 = nn.Sequential(*self.base_layers[3:5]) # size=(N, 64, x.H/4, x.W/4) - self.layer1_1x1 = convrelu(64, 64, 1, 0) - self.layer2 = self.base_layers[5] # size=(N, 128, x.H/8, x.W/8) - self.layer2_1x1 = convrelu(128, 128, 1, 0) - self.layer3 = self.base_layers[6] # size=(N, 256, x.H/16, x.W/16) - self.layer3_1x1 = convrelu(256, 256, 1, 0) - self.layer4 = self.base_layers[7] # size=(N, 512, x.H/32, x.W/32) - self.layer4_1x1 = convrelu(512, 512, 1, 0) - - self.upsample = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True) - - self.conv_up3 = convrelu(256 + 512, 512, 3, 1) - self.conv_up2 = convrelu(128 + 512, 256, 3, 1) - self.conv_up1 = convrelu(64 + 256, 256, 3, 1) - self.conv_up0 = convrelu(64 + 256, 128, 3, 1) - - self.conv_original_size0 = convrelu(3, 64, 3, 1) - self.conv_original_size1 = convrelu(64, 64, 3, 1) - self.conv_original_size2 = convrelu(64 + 128, 64, 3, 1) - - self.conv_last = nn.Conv2d(64, n_class, 1) - - self.max_depth = max_depth - - def forward(self, input): - x_original = self.conv_original_size0(input) - x_original = self.conv_original_size1(x_original) - - layer0 = self.layer0(input) - layer1 = self.layer1(layer0) - layer2 = self.layer2(layer1) - layer3 = self.layer3(layer2) - layer4 = self.layer4(layer3) - - layer4 = self.layer4_1x1(layer4) - x = self.upsample(layer4) - layer3 = self.layer3_1x1(layer3) - x = torch.cat([x, layer3], dim=1) - x = self.conv_up3(x) - - x = self.upsample(x) - layer2 = self.layer2_1x1(layer2) - print(x.shape) - print(layer2.shape) - x = torch.cat([x, layer2], dim=1) - x = self.conv_up2(x) - - x = self.upsample(x) - layer1 = self.layer1_1x1(layer1) - x = torch.cat([x, layer1], dim=1) - x = self.conv_up1(x) - - x = self.upsample(x) - layer0 = self.layer0_1x1(layer0) - x = torch.cat([x, layer0], dim=1) - x = self.conv_up0(x) - - x = self.upsample(x) - x = torch.cat([x, x_original], dim=1) - x = self.conv_original_size2(x) - - out = self.conv_last(x) - - out_depth = torch.sigmoid(out) * self.max_depth - - return {'pred_d': out_depth} - -if __name__ == "__main__": - model = ResNet18UNet(max_depth=10).cuda() - # print(model) - summary(model, input_size=(1,3,256,256)) - \ No newline at end of file diff --git 
a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/utils/memory.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/utils/memory.py deleted file mode 100644 index bd494780b9dbbd1571688cd270bb9b53d113c13e..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/utils/memory.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import logging -from contextlib import contextmanager -from functools import wraps -import torch - -__all__ = ["retry_if_cuda_oom"] - - -@contextmanager -def _ignore_torch_cuda_oom(): - """ - A context which ignores CUDA OOM exception from pytorch. - """ - try: - yield - except RuntimeError as e: - # NOTE: the string may change? - if "CUDA out of memory. " in str(e): - pass - else: - raise - - -def retry_if_cuda_oom(func): - """ - Makes a function retry itself after encountering - pytorch's CUDA OOM error. - It will first retry after calling `torch.cuda.empty_cache()`. - - If that still fails, it will then retry by trying to convert inputs to CPUs. - In this case, it expects the function to dispatch to CPU implementation. - The return values may become CPU tensors as well and it's user's - responsibility to convert it back to CUDA tensor if needed. - - Args: - func: a stateless callable that takes tensor-like objects as arguments - - Returns: - a callable which retries `func` if OOM is encountered. - - Examples: - :: - output = retry_if_cuda_oom(some_torch_function)(input1, input2) - # output may be on CPU even if inputs are on GPU - - Note: - 1. When converting inputs to CPU, it will only look at each argument and check - if it has `.device` and `.to` for conversion. Nested structures of tensors - are not supported. - - 2. Since the function might be called more than once, it has to be - stateless. - """ - - def maybe_to_cpu(x): - try: - like_gpu_tensor = x.device.type == "cuda" and hasattr(x, "to") - except AttributeError: - like_gpu_tensor = False - if like_gpu_tensor: - return x.to(device="cpu") - else: - return x - - @wraps(func) - def wrapped(*args, **kwargs): - with _ignore_torch_cuda_oom(): - return func(*args, **kwargs) - - # Clear cache and retry - torch.cuda.empty_cache() - with _ignore_torch_cuda_oom(): - return func(*args, **kwargs) - - # Try on CPU. This slows down the code significantly, therefore print a notice. - logger = logging.getLogger(__name__) - logger.info("Attempting to copy inputs of {} to CPU due to CUDA OOM".format(str(func))) - new_args = (maybe_to_cpu(x) for x in args) - new_kwargs = {k: maybe_to_cpu(v) for k, v in kwargs.items()} - return func(*new_args, **new_kwargs) - - return wrapped diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/test_model_analysis.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/test_model_analysis.py deleted file mode 100644 index c01b7af09703c8dad889dee0118d74fcc12ac4b0..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/test_model_analysis.py +++ /dev/null @@ -1,80 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
- - -import unittest -import torch -from torch import nn - -from detectron2.utils.analysis import find_unused_parameters, flop_count_operators, parameter_count -from detectron2.utils.testing import get_model_no_weights - - -class RetinaNetTest(unittest.TestCase): - def setUp(self): - self.model = get_model_no_weights("COCO-Detection/retinanet_R_50_FPN_1x.yaml") - - def test_flop(self): - # RetinaNet supports flop-counting with random inputs - inputs = [{"image": torch.rand(3, 800, 800), "test_unused": "abcd"}] - res = flop_count_operators(self.model, inputs) - self.assertEqual(int(res["conv"]), 146) # 146B flops - - def test_param_count(self): - res = parameter_count(self.model) - self.assertEqual(res[""], 37915572) - self.assertEqual(res["backbone"], 31452352) - - -class FasterRCNNTest(unittest.TestCase): - def setUp(self): - self.model = get_model_no_weights("COCO-Detection/faster_rcnn_R_50_FPN_1x.yaml") - - def test_flop(self): - # Faster R-CNN supports flop-counting with random inputs - inputs = [{"image": torch.rand(3, 800, 800)}] - res = flop_count_operators(self.model, inputs) - - # This only checks flops for backbone & proposal generator - # Flops for box head is not conv, and depends on #proposals, which is - # almost 0 for random inputs. - self.assertEqual(int(res["conv"]), 117) - - def test_flop_with_output_shape(self): - inputs = [{"image": torch.rand(3, 800, 800), "height": 700, "width": 700}] - res = flop_count_operators(self.model, inputs) - self.assertEqual(int(res["conv"]), 117) - - def test_param_count(self): - res = parameter_count(self.model) - self.assertEqual(res[""], 41699936) - self.assertEqual(res["backbone"], 26799296) - - -class MaskRCNNTest(unittest.TestCase): - def setUp(self): - self.model = get_model_no_weights("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml") - - def test_flop(self): - inputs1 = [{"image": torch.rand(3, 800, 800)}] - inputs2 = [{"image": torch.rand(3, 800, 800), "height": 700, "width": 700}] - - for inputs in [inputs1, inputs2]: - res = flop_count_operators(self.model, inputs) - # The mask head could have extra conv flops, so total >= 117 - self.assertGreaterEqual(int(res["conv"]), 117) - - -class UnusedParamTest(unittest.TestCase): - def test_unused(self): - class TestMod(nn.Module): - def __init__(self): - super().__init__() - self.fc1 = nn.Linear(10, 10) - self.t = nn.Linear(10, 10) - - def forward(self, x): - return self.fc1(x).mean() - - m = TestMod() - ret = find_unused_parameters(m, torch.randn(10, 10)) - self.assertEqual(set(ret), {"t.weight", "t.bias"}) diff --git a/spaces/ThomasSimonini/Unity-MLAgents-Pyramids/TemplateData/style.css b/spaces/ThomasSimonini/Unity-MLAgents-Pyramids/TemplateData/style.css deleted file mode 100644 index cdc3477fb8c1c824db96f451631bca7cde305923..0000000000000000000000000000000000000000 --- a/spaces/ThomasSimonini/Unity-MLAgents-Pyramids/TemplateData/style.css +++ /dev/null @@ -1,105 +0,0 @@ -html { - box-sizing: border-box; -} -*, *:before, *:after { - box-sizing: inherit; -} -html, body { - height: 100%; -} -canvas { - display: block; -} -body { - margin: 0; -} -#unity-container { - width: 100%; - height: 100%; -} -#unity-canvas { - width: 100%; - height: 100%; - background: #231F20; -} -#loading-cover { - position: absolute; - top: 0; - left: 0; - width: 100%; - height: 100%; - display: flex; - justify-content: center; - align-items: center; -} -#unity-loading-bar { - flex: 1 1 auto; - display: flex; - flex-direction: column; - justify-content: center; - align-items: center; -} -#unity-logo { - 
text-align: center; -} -#unity-logo img { - max-width: 80%; -} -#unity-progress-bar-empty { - width: 80%; - height: 24px; - margin: 10px 20px 20px 10px; - text-align: left; - border: 1px solid white; - padding: 2px; -} -#unity-progress-bar-full { - width: 0%; - height: 100%; - background: #ffd21e; -} -.light #unity-progress-bar-empty { - border-color: black; -} -.light #unity-progress-bar-full { - background: black; -} - -#unity-fullscreen-button { - position: absolute; - right: 10px; - bottom: 10px; - width: 38px; - height: 38px; - background: url('fullscreen-button.png') no-repeat center; - background-size: contain; -} - -.spinner, -.spinner:after { - border-radius: 50%; - width: 5em; - height: 5em; -} -.spinner { - margin: 10px; - font-size: 10px; - position: relative; - text-indent: -9999em; - border-top: 1.1em solid rgba(255, 255, 255, 0.2); - border-right: 1.1em solid rgba(255, 255, 255, 0.2); - border-bottom: 1.1em solid rgba(255, 255, 255, 0.2); - border-left: 1.1em solid #ffffff; - transform: translateZ(0); - animation: spinner-spin 1.1s infinite linear; -} -@keyframes spinner-spin { - 0% { - transform: rotate(0deg); - } - 100% { - transform: rotate(360deg); - } -} - - diff --git a/spaces/Truepic/ai-content-credentials/static/style.css b/spaces/Truepic/ai-content-credentials/static/style.css deleted file mode 100644 index 9405bcbfb75b662a36f5a1301dda13c5fbd3cb80..0000000000000000000000000000000000000000 --- a/spaces/Truepic/ai-content-credentials/static/style.css +++ /dev/null @@ -1,421 +0,0 @@ -@import url("https://fonts.cdnfonts.com/css/inter"); - -html { - height: 100%; -} - -body { - font-family: "Inter", sans-serif; - margin: 0; -} - -section#header { - padding: 3rem 10rem 1rem 10rem; - border-bottom: 1px solid rgba(227, 234, 240, 1); -} - -section#content { - display: flex; - flex-grow: 1; - height: 100%; -} - -h1 { - font-size: 26px; - font-weight: 600; -} - -p { - font-size: 14px; - font-weight: 400; -} - -#header p { - color: #56687a; -} - -.text-gen-form label { - display: block; - font-weight: 600; - font-size: 14px; - padding-bottom: 0.5rem; -} - -select, -textarea { - border: 1px solid rgba(227, 234, 240, 1); - margin-bottom: 1rem; - width: 100%; - padding: 0.5rem; - resize: none; - font-family: "Inter", sans-serif; -} - -select:focus, -textarea:focus { - outline: none !important; - border-color: #1a6dff; - border-radius: 0; -} - -select { - appearance: none; - -webkit-appearance: none; -} - -button { - font-family: "Inter", sans-serif; -} - -.custom-select { - position: relative; -} - -.custom-select::after { - --size: 0.4rem; - content: ""; - position: absolute; - right: 1rem; - pointer-events: none; - border-left: var(--size) solid transparent; - border-right: var(--size) solid transparent; - border-top: var(--size) solid black; - top: 30%; -} - -textarea { - box-sizing: border-box; - height: 100px; -} - -#column-one { - width: 100%; - border-right: 1px solid rgba(227, 234, 240, 1); -} - -#column-one .container { - padding-left: 10rem; - border-bottom: 1px solid rgba(227, 234, 240, 1); -} - -#column-one .container .parameters { - display: none; - color: rgba(86, 104, 122, 1); - font-size: 12px; - padding-bottom: 2rem; -} - -#logo { - width: 156px; - padding: 2rem 10rem; -} - -#download-link { - display: none; - flex-grow: 1; - padding-right: 2rem; -} - -#download { - width: 113px; - display: block; - margin-left: auto; -} - -#column-two { - width: 440px; - padding-right: 10rem; -} - -#column-two .form, -#column-two .description { - padding: 2rem; -} - -#column-two 
.form { - border-bottom: 1px solid rgba(227, 234, 240, 1); -} - -.output { - padding: 2rem 2rem 16px 0; -} - -.output #image-container { - width: 100%; - background: rgba(247, 249, 250, 1); - display: flex; - justify-content: center; -} - -.output #image-container img { - align-self: start; - width: 100%; - height: auto; -} - -.output #image-container #placeholder, -.output #image-container #spinner { - width: 48px; - height: 48px; - align-self: center; - padding: 235px 0; -} - -#text-gen-submit { - height: 40px; - width: 80px; - background: rgba(206, 206, 206, 1); - color: white; - font-weight: 600; - font-size: 14px; - border: none; -} - -#text-gen-submit.active { - background-color: #1a6dff; -} - -#text-gen-submit.active:hover { - background-color: #165ad9; - cursor: pointer; -} - -#text-gen-submit.active:active { - background: #1247b2; -} - -section.verification-details { - display: none; -} - -section.verification-details nav { - padding-left: 10rem; - padding-top: 1rem; - border-bottom: 1px solid rgba(227, 234, 240, 1); -} - -section.verification-details nav ul { - display: flex; - margin: 0; - padding: 0; - gap: 2.5rem; -} - -section.verification-details nav ul li { - display: block; - color: #56687a; -} - -section.verification-details nav ul li a { - display: block; - padding: 0.6875rem 0; - border-bottom: 3px solid transparent; - cursor: pointer; -} - -section.verification-details nav ul li a.active { - font-weight: 600; - border-bottom-color: #1a6dff; - color: black; -} - -section.verification-details .verification { - margin: 2rem 2rem 0 10rem; -} - -section.verification-details .verification #verification-output { - height: 480px; - background: #f7f9fa; - border: 1px solid #d8dfe5; - overflow-y: auto; -} - -section.verification-details .certificate .details { - margin: 2rem 2rem 0 10rem; - height: 480px; - background: #f7f9fa; - border: 1px solid #d8dfe5; - overflow-y: auto; -} - -section.verification-details .verification pre { - width: 0; -} - -section.verification-details .certificate { - display: none; - margin: 2rem 2rem 0 10rem; -} - -.verification > p, -.certificate > p { - color: rgba(86, 104, 122, 1); - padding-bottom: 1rem; -} - -.verification > p strong, -.certificate > p strong { - color: black; -} - -section.verification-details .certificate .details { - color: rgba(86, 104, 122, 1); - font-size: 12px; - font-weight: 400; - margin: 0; - padding: 1rem; -} - -section.verification-details .certificate .details > div { - clear: both; - padding: 0.4rem 0; -} - -section.verification-details .certificate .details strong { - padding: 0.5rem 0; - display: block; - color: black; -} - -section.verification-details .certificate .details div label { - float: left; - width: 175px; - font-size: 12px; - font-weight: 400; -} - -#certificate-list { - padding: 0; - margin: 0 0 32px 0; - border-bottom: 1px solid #d8dfe5; -} - -#certificate-list li { - border: 1px solid #d8dfe5; - border-bottom: none; - list-style: none; - padding: 0.8rem 0 0.8rem 1.5rem; - font-size: 14px; -} - -#certificate-list li:not(:first-child) { - background: url(images/li.svg) no-repeat 2.5rem 50%; - padding: 0.8rem 0 0.8rem 4.5rem; -} - -#certificate-list li.active { - background-color: #f7f9fa; - font-weight: 600; -} - -#certificate-list li:hover, -#certificate-list li.active:hover { - background-color: #e9f4ff; - cursor: pointer; -} - -.description { - padding-top: 0.5rem; -} - -.description p { - color: rgba(86, 104, 122, 1); - font-size: 12px; - line-height: 16px; - font-weight: 400; -} - -.description 
strong { - color: black; - font-weight: 600; -} - -@media screen and (max-width: 880px) { - #column-one, - #column-two { - width: 50%; - } - - section#header { - padding: 1rem; - } - - #column-one .container { - padding-left: 1rem; - } - - #column-two { - padding-right: 1rem; - } - - #logo { - padding: 2rem 1rem; - } - - section.verification-details nav { - padding-left: 1rem; - } - - section.verification-details .verification, - section.verification-details .certificate .details { - padding-left: 1rem; - margin: 2rem 1rem 0 1rem; - } - - section.verification-details .certificate { - margin-left: 1rem; - } -} - -@media screen and (max-width: 768px) { - section#content { - flex-direction: column; - } - - #column-one, - #column-two { - width: 100%; - } - - #column-one { - border-right: none; - order: 2; - } - - #column-two { - order: 1; - padding: 0; - } - - .output { - padding-right: 1rem; - } - - .text-gen-form { - padding: 1rem; - } -} - -.alert-primary { - background-color: rgba(254, 243, 199, 0.25); - padding: 0 1rem; - margin-bottom: 1rem; - border: 1px solid transparent; - border-radius: 0.25rem; - border-color: #fef3c7; -} - -.alert-primary h4 { - font-weight: 600; -} - -.alert-primary h4, -.alert-primary p { - font-size: 14px; - color: #1f2937 !important; -} - -.alert-primary img { - position: relative; - top: 3px; -} diff --git a/spaces/Vegecken/sovits4dzl/inference/__init__.py b/spaces/Vegecken/sovits4dzl/inference/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Xuan2060320350/Bing-1/README.md b/spaces/Xuan2060320350/Bing-1/README.md deleted file mode 100644 index 9f76cc88e6ebb881d380164beccbea7066b5f651..0000000000000000000000000000000000000000 --- a/spaces/Xuan2060320350/Bing-1/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Bing 1 -emoji: 🤖️ -colorFrom: blue -colorTo: blue -sdk: docker -pinned: false -license: mit -app_port: 8080 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/XzJosh/Jiaran-Bert-VITS2/server.py b/spaces/XzJosh/Jiaran-Bert-VITS2/server.py deleted file mode 100644 index c736ca4f95fec853950eef6654ef79856beffc0a..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Jiaran-Bert-VITS2/server.py +++ /dev/null @@ -1,123 +0,0 @@ -from flask import Flask, request, Response -from io import BytesIO -import torch -from av import open as avopen - -import commons -import utils -from models import SynthesizerTrn -from text.symbols import symbols -from text import cleaned_text_to_sequence, get_bert -from text.cleaner import clean_text -from scipy.io import wavfile - -# Flask Init -app = Flask(__name__) -app.config['JSON_AS_ASCII'] = False -def get_text(text, language_str, hps): - norm_text, phone, tone, word2ph = clean_text(text, language_str) - print([f"{p}{t}" for p, t in zip(phone, tone)]) - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - if hps.data.add_blank: - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert = get_bert(norm_text, word2ph, language_str) - - assert bert.shape[-1] == len(phone) - - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - - return bert, phone, tone, language - -def infer(text, sdp_ratio, 
noise_scale, noise_scale_w,length_scale,sid): - bert, phones, tones, lang_ids = get_text(text,"ZH", hps,) - with torch.no_grad(): - x_tst=phones.to(dev).unsqueeze(0) - tones=tones.to(dev).unsqueeze(0) - lang_ids=lang_ids.to(dev).unsqueeze(0) - bert = bert.to(dev).unsqueeze(0) - x_tst_lengths = torch.LongTensor([phones.size(0)]).to(dev) - speakers = torch.LongTensor([hps.data.spk2id[sid]]).to(dev) - audio = net_g.infer(x_tst, x_tst_lengths, speakers, tones, lang_ids,bert, sdp_ratio=sdp_ratio - , noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale)[0][0,0].data.cpu().float().numpy() - return audio - -def replace_punctuation(text, i=2): - punctuation = ",。?!" - for char in punctuation: - text = text.replace(char, char * i) - return text - -def wav2(i, o, format): - inp = avopen(i, 'rb') - out = avopen(o, 'wb', format=format) - if format == "ogg": format = "libvorbis" - - ostream = out.add_stream(format) - - for frame in inp.decode(audio=0): - for p in ostream.encode(frame): out.mux(p) - - for p in ostream.encode(None): out.mux(p) - - out.close() - inp.close() - -# Load Generator -hps = utils.get_hparams_from_file("./configs/config.json") - -dev='cuda' -net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model).to(dev) -_ = net_g.eval() - -_ = utils.load_checkpoint("logs/G_649000.pth", net_g, None,skip_optimizer=True) - -@app.route("/",methods=['GET','POST']) -def main(): - if request.method == 'GET': - try: - speaker = request.args.get('speaker') - text = request.args.get('text').replace("/n","") - sdp_ratio = float(request.args.get("sdp_ratio", 0.2)) - noise = float(request.args.get("noise", 0.5)) - noisew = float(request.args.get("noisew", 0.6)) - length = float(request.args.get("length", 1.2)) - if length >= 2: - return "Too big length" - if len(text) >=200: - return "Too long text" - fmt = request.args.get("format", "wav") - if None in (speaker, text): - return "Missing Parameter" - if fmt not in ("mp3", "wav", "ogg"): - return "Invalid Format" - except: - return "Invalid Parameter" - - with torch.no_grad(): - audio = infer(text, sdp_ratio=sdp_ratio, noise_scale=noise, noise_scale_w=noisew, length_scale=length, sid=speaker) - - with BytesIO() as wav: - wavfile.write(wav, hps.data.sampling_rate, audio) - torch.cuda.empty_cache() - if fmt == "wav": - return Response(wav.getvalue(), mimetype="audio/wav") - wav.seek(0, 0) - with BytesIO() as ofp: - wav2(wav, ofp, fmt) - return Response( - ofp.getvalue(), - mimetype="audio/mpeg" if fmt == "mp3" else "audio/ogg" - ) diff --git a/spaces/XzJosh/ShanBao-Bert-VITS2/text/english_bert_mock.py b/spaces/XzJosh/ShanBao-Bert-VITS2/text/english_bert_mock.py deleted file mode 100644 index 3b894ced5b6d619a18d6bdd7d7606ba9e6532050..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/ShanBao-Bert-VITS2/text/english_bert_mock.py +++ /dev/null @@ -1,5 +0,0 @@ -import torch - - -def get_bert_feature(norm_text, word2ph): - return torch.zeros(1024, sum(word2ph)) diff --git a/spaces/XzJosh/Spade-Bert-VITS2/train_ms.py b/spaces/XzJosh/Spade-Bert-VITS2/train_ms.py deleted file mode 100644 index 5d109003d40497ea4493e7c73f47c1eb7370a81e..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Spade-Bert-VITS2/train_ms.py +++ /dev/null @@ -1,402 +0,0 @@ -import os -import json -import argparse -import itertools -import math -import torch -import shutil -from torch import nn, optim -from torch.nn import functional as 
F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.multiprocessing as mp -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler -from tqdm import tqdm -import logging -logging.getLogger('numba').setLevel(logging.WARNING) -import commons -import utils -from data_utils import ( - TextAudioSpeakerLoader, - TextAudioSpeakerCollate, - DistributedBucketSampler -) -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, - DurationDiscriminator, -) -from losses import ( - generator_loss, - discriminator_loss, - feature_loss, - kl_loss -) -from mel_processing import mel_spectrogram_torch, spec_to_mel_torch -from text.symbols import symbols - -torch.backends.cudnn.benchmark = True -torch.backends.cuda.matmul.allow_tf32 = True -torch.backends.cudnn.allow_tf32 = True -torch.set_float32_matmul_precision('medium') -global_step = 0 - - -def main(): - """Assume Single Node Multi GPUs Training Only""" - assert torch.cuda.is_available(), "CPU training is not allowed." - - n_gpus = torch.cuda.device_count() - os.environ['MASTER_ADDR'] = 'localhost' - os.environ['MASTER_PORT'] = '65280' - - hps = utils.get_hparams() - if not hps.cont: - shutil.copy('./pretrained_models/D_0.pth','./logs/OUTPUT_MODEL/D_0.pth') - shutil.copy('./pretrained_models/G_0.pth','./logs/OUTPUT_MODEL/G_0.pth') - shutil.copy('./pretrained_models/DUR_0.pth','./logs/OUTPUT_MODEL/DUR_0.pth') - mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,)) - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - - dist.init_process_group(backend= 'gloo' if os.name == 'nt' else 'nccl', init_method='env://', world_size=n_gpus, rank=rank) - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - - train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data) - train_sampler = DistributedBucketSampler( - train_dataset, - hps.train.batch_size, - [32, 300, 400, 500, 600, 700, 800, 900, 1000], - num_replicas=n_gpus, - rank=rank, - shuffle=True) - collate_fn = TextAudioSpeakerCollate() - train_loader = DataLoader(train_dataset, num_workers=2, shuffle=False, pin_memory=True, - collate_fn=collate_fn, batch_sampler=train_sampler) - if rank == 0: - eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data) - eval_loader = DataLoader(eval_dataset, num_workers=0, shuffle=False, - batch_size=1, pin_memory=True, - drop_last=False, collate_fn=collate_fn) - if "use_noise_scaled_mas" in hps.model.keys() and hps.model.use_noise_scaled_mas == True: - print("Using noise scaled MAS for VITS2") - use_noise_scaled_mas = True - mas_noise_scale_initial = 0.01 - noise_scale_delta = 2e-6 - else: - print("Using normal MAS for VITS1") - use_noise_scaled_mas = False - mas_noise_scale_initial = 0.0 - noise_scale_delta = 0.0 - if "use_duration_discriminator" in hps.model.keys() and hps.model.use_duration_discriminator == True: - print("Using duration discriminator for VITS2") - use_duration_discriminator = True - net_dur_disc = DurationDiscriminator( - hps.model.hidden_channels, - hps.model.hidden_channels, - 3, - 0.1, - gin_channels=hps.model.gin_channels if hps.data.n_speakers != 0 else 0, - ).cuda(rank) - if "use_spk_conditioned_encoder" in hps.model.keys() 
and hps.model.use_spk_conditioned_encoder == True: - if hps.data.n_speakers == 0: - raise ValueError("n_speakers must be > 0 when using spk conditioned encoder to train multi-speaker model") - use_spk_conditioned_encoder = True - else: - print("Using normal encoder for VITS1") - use_spk_conditioned_encoder = False - - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - mas_noise_scale_initial = mas_noise_scale_initial, - noise_scale_delta = noise_scale_delta, - **hps.model).cuda(rank) - - freeze_enc = getattr(hps.model, "freeze_enc", False) - if freeze_enc: - print("freeze encoder !!!") - for param in net_g.enc_p.parameters(): - param.requires_grad = False - - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW( - filter(lambda p: p.requires_grad, net_g.parameters()), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - if net_dur_disc is not None: - optim_dur_disc = torch.optim.AdamW( - net_dur_disc.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - else: - optim_dur_disc = None - net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True) - net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True) - if net_dur_disc is not None: - net_dur_disc = DDP(net_dur_disc, device_ids=[rank], find_unused_parameters=True) - - pretrain_dir = None - if pretrain_dir is None: - try: - if net_dur_disc is not None: - _, optim_dur_disc, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "DUR_*.pth"), net_dur_disc, optim_dur_disc, skip_optimizer=not hps.cont) - _, optim_g, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, - optim_g, skip_optimizer=not hps.cont) - _, optim_d, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, - optim_d, skip_optimizer=not hps.cont) - - epoch_str = max(epoch_str, 1) - global_step = (epoch_str - 1) * len(train_loader) - except Exception as e: - print(e) - epoch_str = 1 - global_step = 0 - else: - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "G_*.pth"), net_g, - optim_g, True) - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "D_*.pth"), net_d, - optim_d, True) - - - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - if net_dur_disc is not None: - scheduler_dur_disc = torch.optim.lr_scheduler.ExponentialLR(optim_dur_disc, gamma=hps.train.lr_decay, last_epoch=epoch_str-2) - else: - scheduler_dur_disc = None - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank == 0: - train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, eval_loader], logger, [writer, writer_eval]) - else: - train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, None], None, 
None) - scheduler_g.step() - scheduler_d.step() - if net_dur_disc is not None: - scheduler_dur_disc.step() - - -def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers): - net_g, net_d, net_dur_disc = nets - optim_g, optim_d, optim_dur_disc = optims - scheduler_g, scheduler_d, scheduler_dur_disc = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - if net_dur_disc is not None: - net_dur_disc.train() - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in tqdm(enumerate(train_loader)): - if net_g.module.use_noise_scaled_mas: - current_mas_noise_scale = net_g.module.mas_noise_scale_initial - net_g.module.noise_scale_delta * global_step - net_g.module.current_mas_noise_scale = max(current_mas_noise_scale, 0.0) - x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True) - spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(rank, non_blocking=True) - y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True) - speakers = speakers.cuda(rank, non_blocking=True) - tone = tone.cuda(rank, non_blocking=True) - language = language.cuda(rank, non_blocking=True) - bert = bert.cuda(rank, non_blocking=True) - - with autocast(enabled=hps.train.fp16_run): - y_hat, l_length, attn, ids_slice, x_mask, z_mask, \ - (z, z_p, m_p, logs_p, m_q, logs_q), (hidden_x, logw, logw_) = net_g(x, x_lengths, spec, spec_lengths, speakers, tone, language, bert) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - - y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g) - loss_disc_all = loss_disc - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x.detach(), x_mask.detach(), logw.detach(), logw_.detach()) - with autocast(enabled=False): - # TODO: I think need to mean using the mask, but for now, just mean all - loss_dur_disc, losses_dur_disc_r, losses_dur_disc_g = discriminator_loss(y_dur_hat_r, y_dur_hat_g) - loss_dur_disc_all = loss_dur_disc - optim_dur_disc.zero_grad() - scaler.scale(loss_dur_disc_all).backward() - scaler.unscale_(optim_dur_disc) - grad_norm_dur_disc = commons.clip_grad_value_(net_dur_disc.parameters(), None) - scaler.step(optim_dur_disc) - - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x, x_mask, logw, logw_) - with autocast(enabled=False): - loss_dur = torch.sum(l_length.float()) 
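-                # Descriptive note (added, not in the original source): the generator objective
-                # assembled below sums this duration loss with the c_mel-weighted mel L1 loss, the
-                # c_kl-weighted KL loss, the feature-matching loss and the adversarial generator
-                # loss; when net_dur_disc is enabled, the duration-discriminator generator loss is
-                # added as well.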
- loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl - if net_dur_disc is not None: - loss_dur_gen, losses_dur_gen = generator_loss(y_dur_hat_g) - loss_gen_all += loss_dur_gen - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank == 0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]['lr'] - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl] - logger.info('Train Epoch: {} [{:.0f}%]'.format( - epoch, - 100. * batch_idx / len(train_loader))) - logger.info([x.item() for x in losses] + [global_step, lr]) - - scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, - "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g} - scalar_dict.update( - {"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl}) - scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}) - scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}) - scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}) - - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()), - "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()), - "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()), - "all/attn": utils.plot_alignment_to_numpy(attn[0, 0].data.cpu().numpy()) - } - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(global_step))) - utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "D_{}.pth".format(global_step))) - if net_dur_disc is not None: - utils.save_checkpoint(net_dur_disc, optim_dur_disc, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "DUR_{}.pth".format(global_step))) - keep_ckpts = getattr(hps.train, 'keep_ckpts', 5) - if keep_ckpts > 0: - utils.clean_checkpoints(path_to_models=hps.model_dir, n_ckpts_to_keep=keep_ckpts, sort_by_time=True) - - - global_step += 1 - - if rank == 0: - logger.info('====> Epoch: {}'.format(epoch)) - - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - image_dict = {} - audio_dict = {} - print("Evaluating ...") - with torch.no_grad(): - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in enumerate(eval_loader): - x, x_lengths = x.cuda(), x_lengths.cuda() - spec, spec_lengths = spec.cuda(), spec_lengths.cuda() - y, y_lengths = y.cuda(), y_lengths.cuda() - speakers = speakers.cuda() - bert = bert.cuda() - tone = tone.cuda() - language = language.cuda() - for use_sdp in [True, False]: - y_hat, attn, mask, *_ = generator.module.infer(x, x_lengths, speakers, tone, language, bert, y=spec, max_len=1000, sdp_ratio=0.0 if not use_sdp else 1.0) - y_hat_lengths = mask.sum([1, 2]).long() * hps.data.hop_length - - mel = 
spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - image_dict.update({ - f"gen/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy()) - }) - audio_dict.update({ - f"gen/audio_{batch_idx}_{use_sdp}": y_hat[0, :, :y_hat_lengths[0]] - }) - image_dict.update({f"gt/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())}) - audio_dict.update({f"gt/audio_{batch_idx}": y[0, :, :y_lengths[0]]}) - - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate - ) - generator.train() - -if __name__ == "__main__": - main() diff --git a/spaces/XzJosh/nine2-Bert-VITS2/data_utils.py b/spaces/XzJosh/nine2-Bert-VITS2/data_utils.py deleted file mode 100644 index be3a29a93188c5b3386f22e5db29e5e96d78109a..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/nine2-Bert-VITS2/data_utils.py +++ /dev/null @@ -1,321 +0,0 @@ -import time -import os -import random -import numpy as np -import torch -import torch.utils.data -import commons -from mel_processing import spectrogram_torch, mel_spectrogram_torch, spec_to_mel_torch -from utils import load_wav_to_torch, load_filepaths_and_text -from text import cleaned_text_to_sequence, get_bert - -"""Multi speaker version""" - - -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. 
- """ - - def __init__(self, audiopaths_sid_text, hparams): - self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text) - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - self.spk_map = hparams.spk2id - self.hparams = hparams - - self.use_mel_spec_posterior = getattr(hparams, "use_mel_posterior_encoder", False) - if self.use_mel_spec_posterior: - self.n_mel_channels = getattr(hparams, "n_mel_channels", 80) - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 300) - - random.seed(1234) - random.shuffle(self.audiopaths_sid_text) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_sid_text_new = [] - lengths = [] - skipped = 0 - for _id, spk, language, text, phones, tone, word2ph in self.audiopaths_sid_text: - audiopath = f'{_id}' - if self.min_text_len <= len(phones) and len(phones) <= self.max_text_len: - phones = phones.split(" ") - tone = [int(i) for i in tone.split(" ")] - word2ph = [int(i) for i in word2ph.split(" ")] - audiopaths_sid_text_new.append([audiopath, spk, language, text, phones, tone, word2ph]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - else: - skipped += 1 - print("skipped: ", skipped, ", total: ", len(self.audiopaths_sid_text)) - self.audiopaths_sid_text = audiopaths_sid_text_new - self.lengths = lengths - - def get_audio_text_speaker_pair(self, audiopath_sid_text): - # separate filename, speaker_id and text - audiopath, sid, language, text, phones, tone, word2ph = audiopath_sid_text - - bert, phones, tone, language = self.get_text(text, word2ph, phones, tone, language, audiopath) - - spec, wav = self.get_audio(audiopath) - sid = torch.LongTensor([int(self.spk_map[sid])]) - return (phones, spec, wav, sid, tone, language, bert) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if self.use_mel_spec_posterior: - spec_filename = spec_filename.replace(".spec.pt", ".mel.pt") - try: - spec = torch.load(spec_filename) - except: - if self.use_mel_spec_posterior: - spec = mel_spectrogram_torch(audio_norm, self.filter_length, - self.n_mel_channels, self.sampling_rate, self.hop_length, - self.win_length, self.hparams.mel_fmin, self.hparams.mel_fmax, center=False) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text, word2ph, phone, tone, language_str, wav_path): - pold = phone - w2pho = [i for i in word2ph] - word2ph = [i for i in word2ph] - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - pold2 = phone - - if 
self.add_blank: - p1 = len(phone) - phone = commons.intersperse(phone, 0) - p2 = len(phone) - t1 = len(tone) - tone = commons.intersperse(tone, 0) - t2 = len(tone) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert_path = wav_path.replace(".wav", ".bert.pt") - try: - bert = torch.load(bert_path) - assert bert.shape[-1] == len(phone) - except: - bert = get_bert(text, word2ph, language_str) - torch.save(bert, bert_path) - #print(bert.shape[-1], bert_path, text, pold) - assert bert.shape[-1] == len(phone) - - assert bert.shape[-1] == len(phone), ( - bert.shape, len(phone), sum(word2ph), p1, p2, t1, t2, pold, pold2, word2ph, text, w2pho) - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - return bert, phone, tone, language - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def __getitem__(self, index): - return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index]) - - def __len__(self): - return len(self.audiopaths_sid_text) - - -class TextAudioSpeakerCollate(): - """ Zero-pads model inputs and targets - """ - - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text, audio and speaker identities - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized, sid] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - sid = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - tone_padded = torch.LongTensor(len(batch), max_text_len) - language_padded = torch.LongTensor(len(batch), max_text_len) - bert_padded = torch.FloatTensor(len(batch), 1024, max_text_len) - - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - tone_padded.zero_() - language_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - bert_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - sid[i] = row[3] - - tone = row[4] - tone_padded[i, :tone.size(0)] = tone - - language = row[5] - language_padded[i, :language.size(0)] = language - - bert = row[6] - bert_padded[i, :, :bert.size(1)] = bert - - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, tone_padded, language_padded, bert_padded - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. 
- - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. - """ - - def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, 0, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i + 1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - if (len_bucket == 0): - continue - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)] - - # subsample - ids_bucket = ids_bucket[self.rank::self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): - batch = [bucket[idx] for idx in ids_bucket[j * self.batch_size:(j + 1) * self.batch_size]] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git a/spaces/YESO/YESOdreambooth/convertosd.py b/spaces/YESO/YESOdreambooth/convertosd.py deleted file mode 100644 index e4bec6cbe894dd74b24f633cc66346d687d3f802..0000000000000000000000000000000000000000 --- a/spaces/YESO/YESOdreambooth/convertosd.py +++ /dev/null @@ -1,226 +0,0 @@ -# Script for converting a HF Diffusers saved pipeline to a Stable Diffusion checkpoint. -# *Only* converts the UNet, VAE, and Text Encoder. -# Does not convert optimizer state or any other thing. 
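-# Example invocation (a hedged sketch; the two paths are placeholders, convert() itself is
-# defined at the bottom of this file):
-#   convert("path/to/diffusers_pipeline_dir", "path/to/output_model.ckpt")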
-# Written by jachiam - -import argparse -import os.path as osp - -import torch -import gc - -# =================# -# UNet Conversion # -# =================# - -unet_conversion_map = [ - # (stable-diffusion, HF Diffusers) - ("time_embed.0.weight", "time_embedding.linear_1.weight"), - ("time_embed.0.bias", "time_embedding.linear_1.bias"), - ("time_embed.2.weight", "time_embedding.linear_2.weight"), - ("time_embed.2.bias", "time_embedding.linear_2.bias"), - ("input_blocks.0.0.weight", "conv_in.weight"), - ("input_blocks.0.0.bias", "conv_in.bias"), - ("out.0.weight", "conv_norm_out.weight"), - ("out.0.bias", "conv_norm_out.bias"), - ("out.2.weight", "conv_out.weight"), - ("out.2.bias", "conv_out.bias"), -] - -unet_conversion_map_resnet = [ - # (stable-diffusion, HF Diffusers) - ("in_layers.0", "norm1"), - ("in_layers.2", "conv1"), - ("out_layers.0", "norm2"), - ("out_layers.3", "conv2"), - ("emb_layers.1", "time_emb_proj"), - ("skip_connection", "conv_shortcut"), -] - -unet_conversion_map_layer = [] -# hardcoded number of downblocks and resnets/attentions... -# would need smarter logic for other networks. -for i in range(4): - # loop over downblocks/upblocks - - for j in range(2): - # loop over resnets/attentions for downblocks - hf_down_res_prefix = f"down_blocks.{i}.resnets.{j}." - sd_down_res_prefix = f"input_blocks.{3*i + j + 1}.0." - unet_conversion_map_layer.append((sd_down_res_prefix, hf_down_res_prefix)) - - if i < 3: - # no attention layers in down_blocks.3 - hf_down_atn_prefix = f"down_blocks.{i}.attentions.{j}." - sd_down_atn_prefix = f"input_blocks.{3*i + j + 1}.1." - unet_conversion_map_layer.append((sd_down_atn_prefix, hf_down_atn_prefix)) - - for j in range(3): - # loop over resnets/attentions for upblocks - hf_up_res_prefix = f"up_blocks.{i}.resnets.{j}." - sd_up_res_prefix = f"output_blocks.{3*i + j}.0." - unet_conversion_map_layer.append((sd_up_res_prefix, hf_up_res_prefix)) - - if i > 0: - # no attention layers in up_blocks.0 - hf_up_atn_prefix = f"up_blocks.{i}.attentions.{j}." - sd_up_atn_prefix = f"output_blocks.{3*i + j}.1." - unet_conversion_map_layer.append((sd_up_atn_prefix, hf_up_atn_prefix)) - - if i < 3: - # no downsample in down_blocks.3 - hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0.conv." - sd_downsample_prefix = f"input_blocks.{3*(i+1)}.0.op." - unet_conversion_map_layer.append((sd_downsample_prefix, hf_downsample_prefix)) - - # no upsample in up_blocks.3 - hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0." - sd_upsample_prefix = f"output_blocks.{3*i + 2}.{1 if i == 0 else 2}." - unet_conversion_map_layer.append((sd_upsample_prefix, hf_upsample_prefix)) - -hf_mid_atn_prefix = "mid_block.attentions.0." -sd_mid_atn_prefix = "middle_block.1." -unet_conversion_map_layer.append((sd_mid_atn_prefix, hf_mid_atn_prefix)) - -for j in range(2): - hf_mid_res_prefix = f"mid_block.resnets.{j}." - sd_mid_res_prefix = f"middle_block.{2*j}." - unet_conversion_map_layer.append((sd_mid_res_prefix, hf_mid_res_prefix)) - - -def convert_unet_state_dict(unet_state_dict): - # buyer beware: this is a *brittle* function, - # and correct output requires that all of these pieces interact in - # the exact order in which I have arranged them. 
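-    # Added descriptive comments (not in the original source): start from an identity mapping of
-    # the Diffusers keys, apply the top-level renames, then the resnet-internal renames, then the
-    # per-block prefix renames, and finally re-key the state dict under the Stable Diffusion names.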
- mapping = {k: k for k in unet_state_dict.keys()} - for sd_name, hf_name in unet_conversion_map: - mapping[hf_name] = sd_name - for k, v in mapping.items(): - if "resnets" in k: - for sd_part, hf_part in unet_conversion_map_resnet: - v = v.replace(hf_part, sd_part) - mapping[k] = v - for k, v in mapping.items(): - for sd_part, hf_part in unet_conversion_map_layer: - v = v.replace(hf_part, sd_part) - mapping[k] = v - new_state_dict = {v: unet_state_dict[k] for k, v in mapping.items()} - return new_state_dict - - -# ================# -# VAE Conversion # -# ================# - -vae_conversion_map = [ - # (stable-diffusion, HF Diffusers) - ("nin_shortcut", "conv_shortcut"), - ("norm_out", "conv_norm_out"), - ("mid.attn_1.", "mid_block.attentions.0."), -] - -for i in range(4): - # down_blocks have two resnets - for j in range(2): - hf_down_prefix = f"encoder.down_blocks.{i}.resnets.{j}." - sd_down_prefix = f"encoder.down.{i}.block.{j}." - vae_conversion_map.append((sd_down_prefix, hf_down_prefix)) - - if i < 3: - hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0." - sd_downsample_prefix = f"down.{i}.downsample." - vae_conversion_map.append((sd_downsample_prefix, hf_downsample_prefix)) - - hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0." - sd_upsample_prefix = f"up.{3-i}.upsample." - vae_conversion_map.append((sd_upsample_prefix, hf_upsample_prefix)) - - # up_blocks have three resnets - # also, up blocks in hf are numbered in reverse from sd - for j in range(3): - hf_up_prefix = f"decoder.up_blocks.{i}.resnets.{j}." - sd_up_prefix = f"decoder.up.{3-i}.block.{j}." - vae_conversion_map.append((sd_up_prefix, hf_up_prefix)) - -# this part accounts for mid blocks in both the encoder and the decoder -for i in range(2): - hf_mid_res_prefix = f"mid_block.resnets.{i}." - sd_mid_res_prefix = f"mid.block_{i+1}." 
- vae_conversion_map.append((sd_mid_res_prefix, hf_mid_res_prefix)) - - -vae_conversion_map_attn = [ - # (stable-diffusion, HF Diffusers) - ("norm.", "group_norm."), - ("q.", "query."), - ("k.", "key."), - ("v.", "value."), - ("proj_out.", "proj_attn."), -] - - -def reshape_weight_for_sd(w): - # convert HF linear weights to SD conv2d weights - return w.reshape(*w.shape, 1, 1) - - -def convert_vae_state_dict(vae_state_dict): - mapping = {k: k for k in vae_state_dict.keys()} - for k, v in mapping.items(): - for sd_part, hf_part in vae_conversion_map: - v = v.replace(hf_part, sd_part) - mapping[k] = v - for k, v in mapping.items(): - if "attentions" in k: - for sd_part, hf_part in vae_conversion_map_attn: - v = v.replace(hf_part, sd_part) - mapping[k] = v - new_state_dict = {v: vae_state_dict[k] for k, v in mapping.items()} - weights_to_convert = ["q", "k", "v", "proj_out"] - print("Converting to CKPT ...") - for k, v in new_state_dict.items(): - for weight_name in weights_to_convert: - if f"mid.attn_1.{weight_name}.weight" in k: - new_state_dict[k] = reshape_weight_for_sd(v) - return new_state_dict - - -# =========================# -# Text Encoder Conversion # -# =========================# -# pretty much a no-op - - -def convert_text_enc_state_dict(text_enc_dict): - return text_enc_dict - - -def convert(model_path, checkpoint_path): - unet_path = osp.join(model_path, "unet", "diffusion_pytorch_model.bin") - vae_path = osp.join(model_path, "vae", "diffusion_pytorch_model.bin") - text_enc_path = osp.join(model_path, "text_encoder", "pytorch_model.bin") - - # Convert the UNet model - unet_state_dict = torch.load(unet_path, map_location='cpu') - unet_state_dict = convert_unet_state_dict(unet_state_dict) - unet_state_dict = {"model.diffusion_model." + k: v for k, v in unet_state_dict.items()} - - # Convert the VAE model - vae_state_dict = torch.load(vae_path, map_location='cpu') - vae_state_dict = convert_vae_state_dict(vae_state_dict) - vae_state_dict = {"first_stage_model." + k: v for k, v in vae_state_dict.items()} - - # Convert the text encoder model - text_enc_dict = torch.load(text_enc_path, map_location='cpu') - text_enc_dict = convert_text_enc_state_dict(text_enc_dict) - text_enc_dict = {"cond_stage_model.transformer." 
+ k: v for k, v in text_enc_dict.items()} - - # Put together new checkpoint - state_dict = {**unet_state_dict, **vae_state_dict, **text_enc_dict} - - state_dict = {k:v.half() for k,v in state_dict.items()} - state_dict = {"state_dict": state_dict} - torch.save(state_dict, checkpoint_path) - del state_dict, text_enc_dict, vae_state_dict, unet_state_dict - torch.cuda.empty_cache() - gc.collect() diff --git a/spaces/YUANAI/DiffspeechResearch/modules/commons/normalizing_flow/res_flow.py b/spaces/YUANAI/DiffspeechResearch/modules/commons/normalizing_flow/res_flow.py deleted file mode 100644 index d0d13285704543ec28fe37d82346011240bdcaf8..0000000000000000000000000000000000000000 --- a/spaces/YUANAI/DiffspeechResearch/modules/commons/normalizing_flow/res_flow.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch import nn -from modules.commons.conv import ConditionalConvBlocks -from modules.commons.wavenet import WN - - -class FlipLayer(nn.Module): - def forward(self, x, nonpadding, cond=None, reverse=False): - x = torch.flip(x, [1]) - return x - - -class CouplingLayer(nn.Module): - def __init__(self, c_in, hidden_size, kernel_size, n_layers, p_dropout=0, c_in_g=0, nn_type='wn'): - super().__init__() - self.channels = c_in - self.hidden_size = hidden_size - self.kernel_size = kernel_size - self.n_layers = n_layers - self.c_half = c_in // 2 - - self.pre = nn.Conv1d(self.c_half, hidden_size, 1) - if nn_type == 'wn': - self.enc = WN(hidden_size, kernel_size, 1, n_layers, p_dropout=p_dropout, - c_cond=c_in_g) - elif nn_type == 'conv': - self.enc = ConditionalConvBlocks( - hidden_size, c_in_g, hidden_size, None, kernel_size, - layers_in_block=1, is_BTC=False, num_layers=n_layers) - self.post = nn.Conv1d(hidden_size, self.c_half, 1) - - def forward(self, x, nonpadding, cond=None, reverse=False): - x0, x1 = x[:, :self.c_half], x[:, self.c_half:] - x_ = self.pre(x0) * nonpadding - x_ = self.enc(x_, nonpadding=nonpadding, cond=cond) - m = self.post(x_) - x1 = m + x1 if not reverse else x1 - m - x = torch.cat([x0, x1], 1) - return x * nonpadding - - -class ResFlow(nn.Module): - def __init__(self, - c_in, - hidden_size, - kernel_size, - n_flow_layers, - n_flow_steps=4, - c_cond=0, - nn_type='wn'): - super().__init__() - self.flows = nn.ModuleList() - for i in range(n_flow_steps): - self.flows.append( - CouplingLayer(c_in, hidden_size, kernel_size, n_flow_layers, c_in_g=c_cond, nn_type=nn_type)) - self.flows.append(FlipLayer()) - - def forward(self, x, nonpadding, cond=None, reverse=False): - for flow in (self.flows if not reverse else reversed(self.flows)): - x = flow(x, nonpadding, cond=cond, reverse=reverse) - return x diff --git a/spaces/Yiqin/ChatVID/model/utils/extract_clip_feature.py b/spaces/Yiqin/ChatVID/model/utils/extract_clip_feature.py deleted file mode 100644 index a980bd90e29afc0ed8925a61565f65a7771c81bb..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/utils/extract_clip_feature.py +++ /dev/null @@ -1,121 +0,0 @@ -import clip -import numpy as np -import torch -from mmaction.datasets.transforms import (CenterCrop, DecordDecode, DecordInit, - FormatShape, Resize) -from torchvision import transforms - - -def extract_clip_feature_single_video_fps( - video_path: str, - clip_ckpt_path: str = 'ViT-L-14.pt', - device: str = 'cuda'): - - class SampleFrames1FPS(object): - '''Sample frames at 1 fps. 
- - Required Keys: - - total_frames - - start_index - - avg_fps - - Added Keys: - - frame_interval - - frame_inds - - num_clips - ''' - - def transform(self, video_info: dict) -> dict: - video_info['frame_inds'] = np.arange( - video_info['start_index'], - video_info['total_frames'], - video_info['avg_fps'], - dtype=int) # np.arange(start, stop, step, dtype) - video_info['frame_interval'] = 1 - video_info['num_clips'] = len(video_info['frame_inds']) - return video_info - - class SampleFrames5FPS(object): - '''Sample frames at 5 fps. - - Required Keys: - - total_frames - - start_index - - avg_fps - - Added Keys: - - frame_interval - - frame_inds - - num_clips - ''' - - def transform(self, video_info: dict) -> dict: - video_info['frame_inds'] = np.arange( - video_info['start_index'], - video_info['total_frames'], - video_info['avg_fps'] // 5, - dtype=int) - video_info['frame_interval'] = 1 - video_info['num_clips'] = len(video_info['frame_inds']) - return video_info - - video_info = {'filename': video_path, 'start_index': 0} - video_processors = [ - DecordInit(), - SampleFrames1FPS(), - DecordDecode(), - Resize(scale=(-1, 224)), - CenterCrop(crop_size=224), - FormatShape(input_format='NCHW'), - ] - - # decode video to imgs - for processor in video_processors: - video_info = processor.transform(video_info) - - imgs = torch.from_numpy(video_info['imgs']) # uint8 img tensor - - imgs_transforms = transforms.Compose([ - transforms.ConvertImageDtype(dtype=torch.float32), - transforms.Normalize( - mean=(0.48145466, 0.4578275, 0.40821073), - std=(0.26862954, 0.26130258, 0.27577711), - inplace=False) - ]) - - # uint8 -> float, then normalize - imgs = imgs_transforms(imgs).to(device) - - # load model - clip_model, _ = clip.load(clip_ckpt_path, device) - - # encode imgs get features - with torch.no_grad(): - video_feat = clip_model.encode_image(imgs) - - return video_feat, video_info - - -if __name__ == '__main__': - - device = "cuda" if torch.cuda.is_available() else "cpu" - - video_names = [ - 'cook.mp4', 'latex.mp4', 'nba.mp4', 'temple_of_heaven.mp4', - 'south_pole.mp4', 'tv_series.mp4', 'formula_one.mp4', 'make-up.mp4', - 'police.mp4' - ] - video_dir = '/mnt/petrelfs/wangyiqin/vid_cap/examples/videos/' - - for video_name in video_names: - video_feat = extract_clip_feature_single_video_fps( - video_path=video_dir + video_name, - clip_ckpt_path='ViT-L-14.pt', - device=device) - video_feat = video_feat.cpu() - # compress to one dimension - video_feat = video_feat.numpy() - - np.save('clip_features/20/' + video_name[:-4] + '.npy', video_feat) - print(video_feat.shape) - print(video_name + ' DONE') diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/engine/hooks.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/engine/hooks.py deleted file mode 100644 index 52c321f979726b8aa89ba34874bc6729a75b70b4..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/engine/hooks.py +++ /dev/null @@ -1,686 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. 
- -import datetime -import itertools -import logging -import math -import operator -import os -import tempfile -import time -import warnings -from collections import Counter -import torch -from fvcore.common.checkpoint import Checkpointer -from fvcore.common.checkpoint import PeriodicCheckpointer as _PeriodicCheckpointer -from fvcore.common.param_scheduler import ParamScheduler -from fvcore.common.timer import Timer -from fvcore.nn.precise_bn import get_bn_modules, update_bn_stats - -import detectron2.utils.comm as comm -from detectron2.evaluation.testing import flatten_results_dict -from detectron2.solver import LRMultiplier -from detectron2.utils.events import EventStorage, EventWriter -from detectron2.utils.file_io import PathManager - -from .train_loop import HookBase - -__all__ = [ - "CallbackHook", - "IterationTimer", - "PeriodicWriter", - "PeriodicCheckpointer", - "BestCheckpointer", - "LRScheduler", - "AutogradProfiler", - "EvalHook", - "PreciseBN", - "TorchProfiler", - "TorchMemoryStats", -] - - -""" -Implement some common hooks. -""" - - -class CallbackHook(HookBase): - """ - Create a hook using callback functions provided by the user. - """ - - def __init__(self, *, before_train=None, after_train=None, before_step=None, after_step=None): - """ - Each argument is a function that takes one argument: the trainer. - """ - self._before_train = before_train - self._before_step = before_step - self._after_step = after_step - self._after_train = after_train - - def before_train(self): - if self._before_train: - self._before_train(self.trainer) - - def after_train(self): - if self._after_train: - self._after_train(self.trainer) - # The functions may be closures that hold reference to the trainer - # Therefore, delete them to avoid circular reference. - del self._before_train, self._after_train - del self._before_step, self._after_step - - def before_step(self): - if self._before_step: - self._before_step(self.trainer) - - def after_step(self): - if self._after_step: - self._after_step(self.trainer) - - -class IterationTimer(HookBase): - """ - Track the time spent for each iteration (each run_step call in the trainer). - Print a summary in the end of training. - - This hook uses the time between the call to its :meth:`before_step` - and :meth:`after_step` methods. - Under the convention that :meth:`before_step` of all hooks should only - take negligible amount of time, the :class:`IterationTimer` hook should be - placed at the beginning of the list of hooks to obtain accurate timing. - """ - - def __init__(self, warmup_iter=3): - """ - Args: - warmup_iter (int): the number of iterations at the beginning to exclude - from timing. 
- """ - self._warmup_iter = warmup_iter - self._step_timer = Timer() - self._start_time = time.perf_counter() - self._total_timer = Timer() - - def before_train(self): - self._start_time = time.perf_counter() - self._total_timer.reset() - self._total_timer.pause() - - def after_train(self): - logger = logging.getLogger(__name__) - total_time = time.perf_counter() - self._start_time - total_time_minus_hooks = self._total_timer.seconds() - hook_time = total_time - total_time_minus_hooks - - num_iter = self.trainer.storage.iter + 1 - self.trainer.start_iter - self._warmup_iter - - if num_iter > 0 and total_time_minus_hooks > 0: - # Speed is meaningful only after warmup - # NOTE this format is parsed by grep in some scripts - logger.info( - "Overall training speed: {} iterations in {} ({:.4f} s / it)".format( - num_iter, - str(datetime.timedelta(seconds=int(total_time_minus_hooks))), - total_time_minus_hooks / num_iter, - ) - ) - - logger.info( - "Total training time: {} ({} on hooks)".format( - str(datetime.timedelta(seconds=int(total_time))), - str(datetime.timedelta(seconds=int(hook_time))), - ) - ) - - def before_step(self): - self._step_timer.reset() - self._total_timer.resume() - - def after_step(self): - # +1 because we're in after_step, the current step is done - # but not yet counted - iter_done = self.trainer.storage.iter - self.trainer.start_iter + 1 - if iter_done >= self._warmup_iter: - sec = self._step_timer.seconds() - self.trainer.storage.put_scalars(time=sec) - else: - self._start_time = time.perf_counter() - self._total_timer.reset() - - self._total_timer.pause() - - -class PeriodicWriter(HookBase): - """ - Write events to EventStorage (by calling ``writer.write()``) periodically. - - It is executed every ``period`` iterations and after the last iteration. - Note that ``period`` does not affect how data is smoothed by each writer. - """ - - def __init__(self, writers, period=20): - """ - Args: - writers (list[EventWriter]): a list of EventWriter objects - period (int): - """ - self._writers = writers - for w in writers: - assert isinstance(w, EventWriter), w - self._period = period - - def after_step(self): - if (self.trainer.iter + 1) % self._period == 0 or ( - self.trainer.iter == self.trainer.max_iter - 1 - ): - for writer in self._writers: - writer.write() - - def after_train(self): - for writer in self._writers: - # If any new data is found (e.g. produced by other after_train), - # write them before closing - writer.write() - writer.close() - - -class PeriodicCheckpointer(_PeriodicCheckpointer, HookBase): - """ - Same as :class:`detectron2.checkpoint.PeriodicCheckpointer`, but as a hook. - - Note that when used as a hook, - it is unable to save additional data other than what's defined - by the given `checkpointer`. - - It is executed every ``period`` iterations and after the last iteration. - """ - - def before_train(self): - self.max_iter = self.trainer.max_iter - - def after_step(self): - # No way to use **kwargs - self.step(self.trainer.iter) - - -class BestCheckpointer(HookBase): - """ - Checkpoints best weights based off given metric. - - This hook should be used in conjunction to and executed after the hook - that produces the metric, e.g. `EvalHook`. - """ - - def __init__( - self, - eval_period: int, - checkpointer: Checkpointer, - val_metric: str, - mode: str = "max", - file_prefix: str = "model_best", - ) -> None: - """ - Args: - eval_period (int): the period `EvalHook` is set to run. - checkpointer: the checkpointer object used to save checkpoints. 
- val_metric (str): validation metric to track for best checkpoint, e.g. "bbox/AP50" - mode (str): one of {'max', 'min'}. controls whether the chosen val metric should be - maximized or minimized, e.g. for "bbox/AP50" it should be "max" - file_prefix (str): the prefix of checkpoint's filename, defaults to "model_best" - """ - self._logger = logging.getLogger(__name__) - self._period = eval_period - self._val_metric = val_metric - assert mode in [ - "max", - "min", - ], f'Mode "{mode}" to `BestCheckpointer` is unknown. It should be one of {"max", "min"}.' - if mode == "max": - self._compare = operator.gt - else: - self._compare = operator.lt - self._checkpointer = checkpointer - self._file_prefix = file_prefix - self.best_metric = None - self.best_iter = None - - def _update_best(self, val, iteration): - if math.isnan(val) or math.isinf(val): - return False - self.best_metric = val - self.best_iter = iteration - return True - - def _best_checking(self): - metric_tuple = self.trainer.storage.latest().get(self._val_metric) - if metric_tuple is None: - self._logger.warning( - f"Given val metric {self._val_metric} does not seem to be computed/stored." - "Will not be checkpointing based on it." - ) - return - else: - latest_metric, metric_iter = metric_tuple - - if self.best_metric is None: - if self._update_best(latest_metric, metric_iter): - additional_state = {"iteration": metric_iter} - self._checkpointer.save(f"{self._file_prefix}", **additional_state) - self._logger.info( - f"Saved first model at {self.best_metric:0.5f} @ {self.best_iter} steps" - ) - elif self._compare(latest_metric, self.best_metric): - additional_state = {"iteration": metric_iter} - self._checkpointer.save(f"{self._file_prefix}", **additional_state) - self._logger.info( - f"Saved best model as latest eval score for {self._val_metric} is " - f"{latest_metric:0.5f}, better than last best score " - f"{self.best_metric:0.5f} @ iteration {self.best_iter}." - ) - self._update_best(latest_metric, metric_iter) - else: - self._logger.info( - f"Not saving as latest eval score for {self._val_metric} is {latest_metric:0.5f}, " - f"not better than best score {self.best_metric:0.5f} @ iteration {self.best_iter}." - ) - - def after_step(self): - # same conditions as `EvalHook` - next_iter = self.trainer.iter + 1 - if ( - self._period > 0 - and next_iter % self._period == 0 - and next_iter != self.trainer.max_iter - ): - self._best_checking() - - def after_train(self): - # same conditions as `EvalHook` - if self.trainer.iter + 1 >= self.trainer.max_iter: - self._best_checking() - - -class LRScheduler(HookBase): - """ - A hook which executes a torch builtin LR scheduler and summarizes the LR. - It is executed after every iteration. - """ - - def __init__(self, optimizer=None, scheduler=None): - """ - Args: - optimizer (torch.optim.Optimizer): - scheduler (torch.optim.LRScheduler or fvcore.common.param_scheduler.ParamScheduler): - if a :class:`ParamScheduler` object, it defines the multiplier over the base LR - in the optimizer. - - If any argument is not given, will try to obtain it from the trainer. 
- """ - self._optimizer = optimizer - self._scheduler = scheduler - - def before_train(self): - self._optimizer = self._optimizer or self.trainer.optimizer - if isinstance(self.scheduler, ParamScheduler): - self._scheduler = LRMultiplier( - self._optimizer, - self.scheduler, - self.trainer.max_iter, - last_iter=self.trainer.iter - 1, - ) - self._best_param_group_id = LRScheduler.get_best_param_group_id(self._optimizer) - - @staticmethod - def get_best_param_group_id(optimizer): - # NOTE: some heuristics on what LR to summarize - # summarize the param group with most parameters - largest_group = max(len(g["params"]) for g in optimizer.param_groups) - - if largest_group == 1: - # If all groups have one parameter, - # then find the most common initial LR, and use it for summary - lr_count = Counter([g["lr"] for g in optimizer.param_groups]) - lr = lr_count.most_common()[0][0] - for i, g in enumerate(optimizer.param_groups): - if g["lr"] == lr: - return i - else: - for i, g in enumerate(optimizer.param_groups): - if len(g["params"]) == largest_group: - return i - - def after_step(self): - lr = self._optimizer.param_groups[self._best_param_group_id]["lr"] - self.trainer.storage.put_scalar("lr", lr, smoothing_hint=False) - self.scheduler.step() - - @property - def scheduler(self): - return self._scheduler or self.trainer.scheduler - - def state_dict(self): - if isinstance(self.scheduler, torch.optim.lr_scheduler._LRScheduler): - return self.scheduler.state_dict() - return {} - - def load_state_dict(self, state_dict): - if isinstance(self.scheduler, torch.optim.lr_scheduler._LRScheduler): - logger = logging.getLogger(__name__) - logger.info("Loading scheduler from state_dict ...") - self.scheduler.load_state_dict(state_dict) - - -class TorchProfiler(HookBase): - """ - A hook which runs `torch.profiler.profile`. - - Examples: - :: - hooks.TorchProfiler( - lambda trainer: 10 < trainer.iter < 20, self.cfg.OUTPUT_DIR - ) - - The above example will run the profiler for iteration 10~20 and dump - results to ``OUTPUT_DIR``. We did not profile the first few iterations - because they are typically slower than the rest. - The result files can be loaded in the ``chrome://tracing`` page in chrome browser, - and the tensorboard visualizations can be visualized using - ``tensorboard --logdir OUTPUT_DIR/log`` - """ - - def __init__(self, enable_predicate, output_dir, *, activities=None, save_tensorboard=True): - """ - Args: - enable_predicate (callable[trainer -> bool]): a function which takes a trainer, - and returns whether to enable the profiler. - It will be called once every step, and can be used to select which steps to profile. - output_dir (str): the output directory to dump tracing files. - activities (iterable): same as in `torch.profiler.profile`. 
- save_tensorboard (bool): whether to save tensorboard visualizations at (output_dir)/log/ - """ - self._enable_predicate = enable_predicate - self._activities = activities - self._output_dir = output_dir - self._save_tensorboard = save_tensorboard - - def before_step(self): - if self._enable_predicate(self.trainer): - if self._save_tensorboard: - on_trace_ready = torch.profiler.tensorboard_trace_handler( - os.path.join( - self._output_dir, - "log", - "profiler-tensorboard-iter{}".format(self.trainer.iter), - ), - f"worker{comm.get_rank()}", - ) - else: - on_trace_ready = None - self._profiler = torch.profiler.profile( - activities=self._activities, - on_trace_ready=on_trace_ready, - record_shapes=True, - profile_memory=True, - with_stack=True, - with_flops=True, - ) - self._profiler.__enter__() - else: - self._profiler = None - - def after_step(self): - if self._profiler is None: - return - self._profiler.__exit__(None, None, None) - if not self._save_tensorboard: - PathManager.mkdirs(self._output_dir) - out_file = os.path.join( - self._output_dir, "profiler-trace-iter{}.json".format(self.trainer.iter) - ) - if "://" not in out_file: - self._profiler.export_chrome_trace(out_file) - else: - # Support non-posix filesystems - with tempfile.TemporaryDirectory(prefix="detectron2_profiler") as d: - tmp_file = os.path.join(d, "tmp.json") - self._profiler.export_chrome_trace(tmp_file) - with open(tmp_file) as f: - content = f.read() - with PathManager.open(out_file, "w") as f: - f.write(content) - - -class AutogradProfiler(TorchProfiler): - """ - A hook which runs `torch.autograd.profiler.profile`. - - Examples: - :: - hooks.AutogradProfiler( - lambda trainer: 10 < trainer.iter < 20, self.cfg.OUTPUT_DIR - ) - - The above example will run the profiler for iteration 10~20 and dump - results to ``OUTPUT_DIR``. We did not profile the first few iterations - because they are typically slower than the rest. - The result files can be loaded in the ``chrome://tracing`` page in chrome browser. - - Note: - When used together with NCCL on older version of GPUs, - autograd profiler may cause deadlock because it unnecessarily allocates - memory on every device it sees. The memory management calls, if - interleaved with NCCL calls, lead to deadlock on GPUs that do not - support ``cudaLaunchCooperativeKernelMultiDevice``. - """ - - def __init__(self, enable_predicate, output_dir, *, use_cuda=True): - """ - Args: - enable_predicate (callable[trainer -> bool]): a function which takes a trainer, - and returns whether to enable the profiler. - It will be called once every step, and can be used to select which steps to profile. - output_dir (str): the output directory to dump tracing files. - use_cuda (bool): same as in `torch.autograd.profiler.profile`. - """ - warnings.warn("AutogradProfiler has been deprecated in favor of TorchProfiler.") - self._enable_predicate = enable_predicate - self._use_cuda = use_cuda - self._output_dir = output_dir - - def before_step(self): - if self._enable_predicate(self.trainer): - self._profiler = torch.autograd.profiler.profile(use_cuda=self._use_cuda) - self._profiler.__enter__() - else: - self._profiler = None - - -class EvalHook(HookBase): - """ - Run an evaluation function periodically, and at the end of training. - - It is executed every ``eval_period`` iterations and after the last iteration. - """ - - def __init__(self, eval_period, eval_function): - """ - Args: - eval_period (int): the period to run `eval_function`. 
Set to 0 to - not evaluate periodically (but still after the last iteration). - eval_function (callable): a function which takes no arguments, and - returns a nested dict of evaluation metrics. - - Note: - This hook must be enabled in all or none workers. - If you would like only certain workers to perform evaluation, - give other workers a no-op function (`eval_function=lambda: None`). - """ - self._period = eval_period - self._func = eval_function - - def _do_eval(self): - results = self._func() - - if results: - assert isinstance( - results, dict - ), "Eval function must return a dict. Got {} instead.".format(results) - - flattened_results = flatten_results_dict(results) - for k, v in flattened_results.items(): - try: - v = float(v) - except Exception as e: - raise ValueError( - "[EvalHook] eval_function should return a nested dict of float. " - "Got '{}: {}' instead.".format(k, v) - ) from e - self.trainer.storage.put_scalars(**flattened_results, smoothing_hint=False) - - # Evaluation may take different time among workers. - # A barrier make them start the next iteration together. - comm.synchronize() - - def after_step(self): - next_iter = self.trainer.iter + 1 - if self._period > 0 and next_iter % self._period == 0: - # do the last eval in after_train - if next_iter != self.trainer.max_iter: - self._do_eval() - - def after_train(self): - # This condition is to prevent the eval from running after a failed training - if self.trainer.iter + 1 >= self.trainer.max_iter: - self._do_eval() - # func is likely a closure that holds reference to the trainer - # therefore we clean it to avoid circular reference in the end - del self._func - - -class PreciseBN(HookBase): - """ - The standard implementation of BatchNorm uses EMA in inference, which is - sometimes suboptimal. - This class computes the true average of statistics rather than the moving average, - and put true averages to every BN layer in the given model. - - It is executed every ``period`` iterations and after the last iteration. - """ - - def __init__(self, period, model, data_loader, num_iter): - """ - Args: - period (int): the period this hook is run, or 0 to not run during training. - The hook will always run in the end of training. - model (nn.Module): a module whose all BN layers in training mode will be - updated by precise BN. - Note that user is responsible for ensuring the BN layers to be - updated are in training mode when this hook is triggered. - data_loader (iterable): it will produce data to be run by `model(data)`. - num_iter (int): number of iterations used to compute the precise - statistics. - """ - self._logger = logging.getLogger(__name__) - if len(get_bn_modules(model)) == 0: - self._logger.info( - "PreciseBN is disabled because model does not contain BN layers in training mode." - ) - self._disabled = True - return - - self._model = model - self._data_loader = data_loader - self._num_iter = num_iter - self._period = period - self._disabled = False - - self._data_iter = None - - def after_step(self): - next_iter = self.trainer.iter + 1 - is_final = next_iter == self.trainer.max_iter - if is_final or (self._period > 0 and next_iter % self._period == 0): - self.update_stats() - - def update_stats(self): - """ - Update the model with precise statistics. Users can manually call this method. 
- """ - if self._disabled: - return - - if self._data_iter is None: - self._data_iter = iter(self._data_loader) - - def data_loader(): - for num_iter in itertools.count(1): - if num_iter % 100 == 0: - self._logger.info( - "Running precise-BN ... {}/{} iterations.".format(num_iter, self._num_iter) - ) - # This way we can reuse the same iterator - yield next(self._data_iter) - - with EventStorage(): # capture events in a new storage to discard them - self._logger.info( - "Running precise-BN for {} iterations... ".format(self._num_iter) - + "Note that this could produce different statistics every time." - ) - update_bn_stats(self._model, data_loader(), self._num_iter) - - -class TorchMemoryStats(HookBase): - """ - Writes pytorch's cuda memory statistics periodically. - """ - - def __init__(self, period=20, max_runs=10): - """ - Args: - period (int): Output stats each 'period' iterations - max_runs (int): Stop the logging after 'max_runs' - """ - - self._logger = logging.getLogger(__name__) - self._period = period - self._max_runs = max_runs - self._runs = 0 - - def after_step(self): - if self._runs > self._max_runs: - return - - if (self.trainer.iter + 1) % self._period == 0 or ( - self.trainer.iter == self.trainer.max_iter - 1 - ): - if torch.cuda.is_available(): - max_reserved_mb = torch.cuda.max_memory_reserved() / 1024.0 / 1024.0 - reserved_mb = torch.cuda.memory_reserved() / 1024.0 / 1024.0 - max_allocated_mb = torch.cuda.max_memory_allocated() / 1024.0 / 1024.0 - allocated_mb = torch.cuda.memory_allocated() / 1024.0 / 1024.0 - - self._logger.info( - ( - " iter: {} " - " max_reserved_mem: {:.0f}MB " - " reserved_mem: {:.0f}MB " - " max_allocated_mem: {:.0f}MB " - " allocated_mem: {:.0f}MB " - ).format( - self.trainer.iter, - max_reserved_mb, - reserved_mb, - max_allocated_mb, - allocated_mb, - ) - ) - - self._runs += 1 - if self._runs == self._max_runs: - mem_summary = torch.cuda.memory_summary() - self._logger.info("\n" + mem_summary) - - torch.cuda.reset_peak_memory_stats() diff --git a/spaces/YlcldKlns/bing/src/components/ui/tooltip.tsx b/spaces/YlcldKlns/bing/src/components/ui/tooltip.tsx deleted file mode 100644 index af1d48beb90dd5ae311796539843700871052cae..0000000000000000000000000000000000000000 --- a/spaces/YlcldKlns/bing/src/components/ui/tooltip.tsx +++ /dev/null @@ -1,30 +0,0 @@ -'use client' - -import * as React from 'react' -import * as TooltipPrimitive from '@radix-ui/react-tooltip' - -import { cn } from '@/lib/utils' - -const TooltipProvider = TooltipPrimitive.Provider - -const Tooltip = TooltipPrimitive.Root - -const TooltipTrigger = TooltipPrimitive.Trigger - -const TooltipContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, sideOffset = 4, ...props }, ref) => ( - -)) -TooltipContent.displayName = TooltipPrimitive.Content.displayName - -export { Tooltip, TooltipTrigger, TooltipContent, TooltipProvider } diff --git a/spaces/YuAnthony/Audio-Caption/coco_caption/pycocotools/__init__.py b/spaces/YuAnthony/Audio-Caption/coco_caption/pycocotools/__init__.py deleted file mode 100644 index 3f7d85bba884ea8f83fc6ab2a1e6ade80d98d4d9..0000000000000000000000000000000000000000 --- a/spaces/YuAnthony/Audio-Caption/coco_caption/pycocotools/__init__.py +++ /dev/null @@ -1 +0,0 @@ -__author__ = 'tylin' diff --git a/spaces/YuAnthony/Audio-Caption/processes/method.py b/spaces/YuAnthony/Audio-Caption/processes/method.py deleted file mode 100644 index 5cbbac808cd6834171389b0c30f94f00a9366e4e..0000000000000000000000000000000000000000 --- 
a/spaces/YuAnthony/Audio-Caption/processes/method.py +++ /dev/null @@ -1,485 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- - -from pathlib import Path -import pickle -from time import time -from typing import MutableMapping, MutableSequence,\ - Any, Union, List, Dict, Tuple - -from torch import Tensor, no_grad, save as pt_save, \ - load as pt_load, randperm -from torch.nn import CrossEntropyLoss, Module -from torch.optim import Adam -from torch.nn.functional import softmax -from loguru import logger - -from tools import file_io, printing -from tools.argument_parsing import get_argument_parser -from tools.model import module_epoch_passing, get_model,\ - get_device -from data_handling.clotho_data_loader import get_clotho_loader -from eval_metrics import evaluate_metrics - - -__author__ = 'Konstantinos Drossos -- Tampere University' -__docformat__ = 'reStructuredText' -__all__ = ['method'] - - -def _decode_outputs(predicted_outputs: MutableSequence[Tensor], - ground_truth_outputs: MutableSequence[Tensor], - indices_object: MutableSequence[str], - file_names: MutableSequence[Path], - eos_token: str, - print_to_console: bool) \ - -> Tuple[List[Dict[str, str]], List[Dict[str, str]]]: - """Decodes predicted output to string. - - :param predicted_outputs: Predicted outputs. - :type predicted_outputs: list[torch.Tensor] - :param ground_truth_outputs: Ground truth outputs. - :type ground_truth_outputs: list[torch.Tensor] - :param indices_object: Object to map indices to text (words or chars). - :type indices_object: list[str] - :param file_names: List of ile names used. - :type file_names: list[pathlib.Path] - :param eos_token: End of sequence token to be used. - :type eos_token: str - :param print_to_console: Print captions to console? - :type print_to_console: bool - :return: Predicted and ground truth captions for scoring. 
- :rtype: (list[dict[str, str]], list[dict[str, str]]) - """ - caption_logger = logger.bind(is_caption=True, indent=None) - main_logger = logger.bind(is_caption=False, indent=0) - caption_logger.info('Captions start') - main_logger.info('Starting decoding of captions') - text_sep = '-' * 100 - - captions_pred: List[Dict] = [] - captions_gt: List[Dict] = [] - f_names: List[str] = [] - - if print_to_console: - main_logger.info(f'{text_sep}\n{text_sep}\n{text_sep}\n\n') - - for gt_words, b_predictions, f_name in zip( - ground_truth_outputs, predicted_outputs, file_names): - predicted_words = softmax(b_predictions, dim=-1).argmax(1) - - predicted_caption = [indices_object[i.item()] - for i in predicted_words] - gt_caption = [indices_object[i.item()] - for i in gt_words] - - gt_caption = gt_caption[:gt_caption.index(eos_token)] - try: - predicted_caption = predicted_caption[ - :predicted_caption.index(eos_token)] - except ValueError: - pass - - predicted_caption = ' '.join(predicted_caption) - gt_caption = ' '.join(gt_caption) - - f_n = f_name.stem.split('.')[0] - - if f_n not in f_names: - f_names.append(f_n) - captions_pred.append({ - 'file_name': f_n, - 'caption_predicted': predicted_caption}) - captions_gt.append({ - 'file_name': f_n, - 'caption_1': gt_caption}) - else: - for d_i, d in enumerate(captions_gt): - if f_n == d['file_name']: - len_captions = len([i_c for i_c in d.keys() - if i_c.startswith('caption_')]) + 1 - d.update({f'caption_{len_captions}': gt_caption}) - captions_gt[d_i] = d - break - - log_strings = [f'Captions for file {f_name.stem}: ', - f'\tPredicted caption: {predicted_caption}', - f'\tOriginal caption: {gt_caption}\n\n'] - - [caption_logger.info(log_string) - for log_string in log_strings] - - if print_to_console: - [main_logger.info(log_string) - for log_string in log_strings] - - if print_to_console: - main_logger.info(f'{text_sep}\n{text_sep}\n{text_sep}\n\n') - - logger.bind(is_caption=False, indent=0).info( - 'Decoding of captions ended') - - return captions_pred, captions_gt - - -def _do_evaluation(model: Module, - settings_data: MutableMapping[str, Any], - settings_io: MutableMapping[str, Any], - indices_list: MutableSequence[str]) \ - -> None: - """Evaluation of an optimized model. - - :param model: Model to use. - :type model: torch.nn.Module - :param settings_data: Data settings to use. - :type settings_data: dict - :param indices_list: Sequence with the words of the captions. 
- :type indices_list: list[str] - """ - model.eval() - logger_main = logger.bind(is_caption=False, indent=1) - - data_path_evaluation = Path( - settings_io['root_dirs']['data'], - settings_io['dataset']['features_dirs']['output'], - settings_io['dataset']['features_dirs']['evaluation']) - - logger_main.info('Getting evaluation data') - validation_data = get_clotho_loader( - settings_io['dataset']['features_dirs']['evaluation'], - is_training=False, - settings_data=settings_data, - settings_io=settings_io) - logger_main.info('Done') - - text_sep = '-' * 100 - starting_text = 'Starting evaluation on evaluation data' - - logger_main.info(starting_text) - logger.bind(is_caption=True, indent=0).info( - f'{text_sep}\n{text_sep}\n{text_sep}\n\n') - logger.bind(is_caption=True, indent=0).info( - f'{starting_text}.\n\n') - - with no_grad(): - evaluation_outputs = module_epoch_passing( - data=validation_data, module=model, - objective=None, optimizer=None) - - captions_pred, captions_gt = _decode_outputs( - evaluation_outputs[1], - evaluation_outputs[2], - indices_object=indices_list, - file_names=list(data_path_evaluation.iterdir()), - eos_token='', - print_to_console=False) - - logger_main.info('Evaluation done') - - metrics = evaluate_metrics(captions_pred, captions_gt) - - for metric, values in metrics.items(): - logger_main.info(f'{metric:<7s}: {values["score"]:7.4f}') - - -def _do_training(model: Module, - settings_training: MutableMapping[ - str, Union[Any, MutableMapping[str, Any]]], - settings_data: MutableMapping[ - str, Union[Any, MutableMapping[str, Any]]], - settings_io: MutableMapping[ - str, Union[Any, MutableMapping[str, Any]]], - model_file_name: str, - model_dir: Path, - indices_list: MutableSequence[str]) \ - -> None: - """Optimization of the model. - - :param model: Model to optimize. - :type model: torch.nn.Module - :param settings_training: Training settings to use. - :type settings_training: dict - :param settings_data: Training data settings to use. - :type settings_data: dict - :param settings_io: Data I/O settings to use. - :type settings_io: dict - :param model_file_name: File name of the model. - :type model_file_name: str - :param model_dir: Directory to serialize the model to. - :type model_dir: pathlib.Path - :param indices_list: A sequence with the words. 
- :type indices_list: list[str] - """ - # Initialize variables for the training process - prv_training_loss = 1e8 - patience: int = settings_training['patience'] - loss_thr: float = settings_training['loss_thr'] - patience_counter = 0 - best_epoch = 0 - - # Initialize logger - logger_main = logger.bind(is_caption=False, indent=1) - - # Inform that we start getting the data - logger_main.info('Getting training data') - - # Get training data and count the amount of batches - training_data = get_clotho_loader( - settings_io['dataset']['features_dirs']['development'], - is_training=True, - settings_data=settings_data, - settings_io=settings_io) - - logger_main.info('Done') - - # Initialize loss and optimizer objects - objective = CrossEntropyLoss() - optimizer = Adam(params=model.parameters(), - lr=settings_training['optimizer']['lr']) - - # Inform that we start training - logger_main.info('Starting training') - - model.train() - for epoch in range(settings_training['nb_epochs']): - - # Log starting time - start_time = time() - - # Do a complete pass over our training data - epoch_output = module_epoch_passing( - data=training_data, - module=model, - objective=objective, - optimizer=optimizer, - grad_norm=settings_training['grad_norm']['norm'], - grad_norm_val=settings_training['grad_norm']['value']) - objective_output, output_y_hat, output_y, f_names = epoch_output - - # Get mean loss of training and print it with logger - training_loss = objective_output.mean().item() - - logger_main.info(f'Epoch: {epoch:05d} -- ' - f'Training loss: {training_loss:>7.4f} | ' - f'Time: {time() - start_time:>5.3f}') - - # Check if we have to decode captions for the current epoch - if divmod(epoch + 1, - settings_training['text_output_every_nb_epochs'])[-1] == 0: - - # Get the subset of files for decoding their captions - sampling_indices = sorted(randperm(len(output_y_hat)) - [:settings_training['nb_examples_to_sample']] - .tolist()) - - # Do the decoding - _decode_outputs(*zip(*[[output_y_hat[i], output_y[i]] - for i in sampling_indices]), - indices_object=indices_list, - file_names=[Path(f_names[i_f_name]) - for i_f_name in sampling_indices], - eos_token='', - print_to_console=False) - - # Check improvement of loss - if prv_training_loss - training_loss > loss_thr: - # Log the current loss - prv_training_loss = training_loss - - # Log the current epoch - best_epoch = epoch - - # Serialize the model keeping the epoch - pt_save( - model.state_dict(), - str(model_dir.joinpath( - f'epoch_{best_epoch:05d}_{model_file_name}'))) - - # Zero out the patience - patience_counter = 0 - - else: - - # Increase patience counter - patience_counter += 1 - - # Serialize the model and optimizer. - for pt_obj, save_str in zip([model, optimizer], ['', '_optimizer']): - pt_save( - pt_obj.state_dict(), - str(model_dir.joinpath( - f'latest{save_str}_{model_file_name}'))) - - # Check for stopping criteria - if patience_counter >= patience: - logger_main.info('No lower training loss for ' - f'{patience_counter} epochs. ' - 'Training stops.') - - # Inform that we are done - logger_main.info('Training done') - - # Load best model - model.load_state_dict(pt_load( - str(model_dir.joinpath( - f'epoch_{best_epoch:05d}_{model_file_name}')))) - - -def _get_nb_output_classes(settings: MutableMapping[str, Any]) \ - -> int: - """Gets the amount of output classes. - - :param settings: Settings to use. - :type settings: dict - :return: Amount of output classes. 
- :rtype: int - """ - f_name_field = 'words_list_file_name' \ - if settings['data']['output_field_name'].startswith('words') \ - else 'characters_list_file_name' - - f_name = settings['data']['files'][f_name_field] - path = Path( - settings['data']['files']['root_dir'], - settings['data']['files']['dataset_dir'], - f_name) - - with path.open('rb') as f: - return len(pickle.load(f)) - - -def _load_indices_file(settings_files: MutableMapping[str, Any], - settings_data: MutableMapping[str, Any]) \ - -> MutableSequence[str]: - """Loads and returns the indices file. - - :param settings_files: Settings of file i/o to be used. - :type settings_files: dict - :param settings_data: Settings of data to be used. . - :type settings_data: dict - :return: The indices file. - :rtype: list[str] - """ - path = Path( - settings_files['root_dirs']['data'], - settings_files['dataset']['pickle_files_dir']) - p_field = 'words_list_file_name' \ - if settings_data['output_field_name'].startswith('words') \ - else 'characters_list_file_name' - return file_io.load_pickle_file( - path.joinpath(settings_files['dataset']['files'][p_field])) - - -def method(settings: MutableMapping[str, Any]) \ - -> None: - """Baseline method. - - :param settings: Settings to be used. - :type settings: dict - """ - logger_main = logger.bind(is_caption=False, indent=0) - logger_main.info('Bootstrapping method') - pretty_printer = printing.get_pretty_printer() - logger_inner = logger.bind(is_caption=False, indent=1) - device, device_name = get_device( - settings['dnn_training_settings']['training']['force_cpu']) - - model_dir = Path( - settings['dirs_and_files']['root_dirs']['outputs'], - settings['dirs_and_files']['model']['model_dir']) - - model_dir.mkdir(parents=True, exist_ok=True) - - model_file_name = f'{settings["dirs_and_files"]["model"]["checkpoint_model_name"]}' - - logger_inner.info(f'Process on {device_name}\n') - - logger_inner.info('Settings:\n' - f'{pretty_printer.pformat(settings)}\n') - - logger_inner.info('Loading indices file') - indices_list = _load_indices_file( - settings['dirs_and_files'], - settings['dnn_training_settings']['data']) - logger_inner.info('Done') - - model: Union[Module, None] = None - - logger_main.info('Bootstrapping done') - - if settings['workflow']['dnn_training']: - logger_main.info('Doing training') - logger_inner.info('Setting up model') - model = get_model( - settings_model=settings['dnn_training_settings']['model'], - settings_io=settings['dirs_and_files'], - output_classes=len(indices_list), - device=device) - model.to(device) - logger_inner.info('Done\n') - - logger_inner.info(f'Model:\n{model}\n') - logger_inner.info('Total amount of parameters: ' - f'{sum([i.numel() for i in model.parameters()])}') - logger_inner.info('Starting training') - _do_training( - model=model, - settings_training=settings['dnn_training_settings']['training'], - settings_data=settings['dnn_training_settings']['data'], - settings_io=settings['dirs_and_files'], - model_file_name=model_file_name, - model_dir=model_dir, - indices_list=indices_list) - logger_inner.info('Training done') - - if settings['workflow']['dnn_evaluation']: - logger_main.info('Doing evaluation') - if model is None: - if not settings['dnn_training_settings']['model']['use_pre_trained_model']: - raise AttributeError('Mode is set to only evaluation, but' - 'is specified not to use a pre-trained model.') - - logger_inner.info('Setting up model') - model = get_model( - settings_model=settings['dnn_training_settings']['model'], - 
settings_io=settings['dirs_and_files'], - output_classes=len(indices_list), - device=device) - model.to(device) - logger_inner.info('Model ready') - - logger_inner.info('Starting evaluation') - _do_evaluation( - model=model, - settings_data=settings['dnn_training_settings']['data'], - settings_io=settings['dirs_and_files'], - indices_list=indices_list) - logger_inner.info('Evaluation done') - - -def main(): - args = get_argument_parser().parse_args() - - file_dir = args.file_dir - config_file = args.config_file - file_ext = args.file_ext - verbose = args.verbose - - settings = file_io.load_yaml_file(Path( - file_dir, f'{config_file}.{file_ext}')) - - printing.init_loggers( - verbose=verbose, - settings=settings['dirs_and_files']) - - logger_main = logger.bind(is_caption=False, indent=0) - - logger_main.info('Starting method only') - method(settings) - logger_main.info('Method\'s done') - - -if __name__ == '__main__': - main() - -# EOF diff --git a/spaces/Yuliang/ECON/lib/pymafx/models/pose_resnet.py b/spaces/Yuliang/ECON/lib/pymafx/models/pose_resnet.py deleted file mode 100644 index 16b22e815f715d2ae8e5f217431055ee2ba57ddf..0000000000000000000000000000000000000000 --- a/spaces/Yuliang/ECON/lib/pymafx/models/pose_resnet.py +++ /dev/null @@ -1,301 +0,0 @@ -# ------------------------------------------------------------------------------ -# Copyright (c) Microsoft -# Licensed under the MIT License. -# Written by Bin Xiao (Bin.Xiao@microsoft.com) -# ------------------------------------------------------------------------------ - -from __future__ import absolute_import, division, print_function - -import logging -import os - -import torch -import torch.nn as nn - -BN_MOMENTUM = 0.1 -logger = logging.getLogger(__name__) - - -def conv3x3(in_planes, out_planes, stride=1): - """3x3 convolution with padding""" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None): - super(BasicBlock, self).__init__() - self.conv1 = conv3x3(inplanes, planes, stride) - self.bn1 = nn.BatchNorm2d(planes, momentum=BN_MOMENTUM) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - self.bn2 = nn.BatchNorm2d(planes, momentum=BN_MOMENTUM) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, inplanes, planes, stride=1, downsample=None): - super(Bottleneck, self).__init__() - self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False) - self.bn1 = nn.BatchNorm2d(planes, momentum=BN_MOMENTUM) - self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(planes, momentum=BN_MOMENTUM) - self.conv3 = nn.Conv2d(planes, planes * self.expansion, kernel_size=1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * self.expansion, momentum=BN_MOMENTUM) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) 
- out = self.bn3(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class PoseResNet(nn.Module): - def __init__(self, block, layers, cfg, global_mode, **kwargs): - self.inplanes = 64 - extra = cfg.POSE_RES_MODEL.EXTRA - self.extra = extra - self.deconv_with_bias = extra.DECONV_WITH_BIAS - - super(PoseResNet, self).__init__() - self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False) - self.bn1 = nn.BatchNorm2d(64, momentum=BN_MOMENTUM) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - self.layer1 = self._make_layer(block, 64, layers[0]) - self.layer2 = self._make_layer(block, 128, layers[1], stride=2) - self.layer3 = self._make_layer(block, 256, layers[2], stride=2) - self.layer4 = self._make_layer(block, 512, layers[3], stride=2) - - self.global_mode = global_mode - if self.global_mode: - self.avgpool = nn.AvgPool2d(7, stride=1) - self.deconv_layers = None - else: - # used for deconv layers - self.deconv_layers = self._make_deconv_layer( - extra.NUM_DECONV_LAYERS, - extra.NUM_DECONV_FILTERS, - extra.NUM_DECONV_KERNELS, - ) - - # self.final_layer = nn.Conv2d( - # in_channels=extra.NUM_DECONV_FILTERS[-1], - # out_channels=17, - # kernel_size=extra.FINAL_CONV_KERNEL, - # stride=1, - # padding=1 if extra.FINAL_CONV_KERNEL == 3 else 0 - # ) - self.final_layer = None - - def _make_layer(self, block, planes, blocks, stride=1): - downsample = None - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d( - self.inplanes, - planes * block.expansion, - kernel_size=1, - stride=stride, - bias=False - ), - nn.BatchNorm2d(planes * block.expansion, momentum=BN_MOMENTUM), - ) - - layers = [] - layers.append(block(self.inplanes, planes, stride, downsample)) - self.inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append(block(self.inplanes, planes)) - - return nn.Sequential(*layers) - - def _get_deconv_cfg(self, deconv_kernel, index): - if deconv_kernel == 4: - padding = 1 - output_padding = 0 - elif deconv_kernel == 3: - padding = 1 - output_padding = 1 - elif deconv_kernel == 2: - padding = 0 - output_padding = 0 - - return deconv_kernel, padding, output_padding - - def _make_deconv_layer(self, num_layers, num_filters, num_kernels): - assert num_layers == len(num_filters), \ - 'ERROR: num_deconv_layers is different len(num_deconv_filters)' - assert num_layers == len(num_kernels), \ - 'ERROR: num_deconv_layers is different len(num_deconv_filters)' - - layers = [] - for i in range(num_layers): - kernel, padding, output_padding = \ - self._get_deconv_cfg(num_kernels[i], i) - - planes = num_filters[i] - layers.append( - nn.ConvTranspose2d( - in_channels=self.inplanes, - out_channels=planes, - kernel_size=kernel, - stride=2, - padding=padding, - output_padding=output_padding, - bias=self.deconv_with_bias - ) - ) - layers.append(nn.BatchNorm2d(planes, momentum=BN_MOMENTUM)) - layers.append(nn.ReLU(inplace=True)) - self.inplanes = planes - - return nn.Sequential(*layers) - - def forward(self, x): - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.maxpool(x) - - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - - # x = self.deconv_layers(x) - # x = self.final_layer(x) - - if self.global_mode: - g_feat = self.avgpool(x) - g_feat = g_feat.view(g_feat.size(0), -1) - s_feat_list = [g_feat] - else: - g_feat = None - if 
self.extra.NUM_DECONV_LAYERS == 3: - deconv_blocks = [ - self.deconv_layers[0:3], self.deconv_layers[3:6], self.deconv_layers[6:9] - ] - - s_feat_list = [] - s_feat = x - for i in range(self.extra.NUM_DECONV_LAYERS): - s_feat = deconv_blocks[i](s_feat) - s_feat_list.append(s_feat) - - return s_feat_list, g_feat - - def init_weights(self, pretrained=''): - if os.path.isfile(pretrained): - # logger.info('=> init deconv weights from normal distribution') - if self.deconv_layers is not None: - for name, m in self.deconv_layers.named_modules(): - if isinstance(m, nn.ConvTranspose2d): - # logger.info('=> init {}.weight as normal(0, 0.001)'.format(name)) - # logger.info('=> init {}.bias as 0'.format(name)) - nn.init.normal_(m.weight, std=0.001) - if self.deconv_with_bias: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.BatchNorm2d): - # logger.info('=> init {}.weight as 1'.format(name)) - # logger.info('=> init {}.bias as 0'.format(name)) - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - if self.final_layer is not None: - logger.info('=> init final conv weights from normal distribution') - for m in self.final_layer.modules(): - if isinstance(m, nn.Conv2d): - # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - logger.info('=> init {}.weight as normal(0, 0.001)'.format(name)) - logger.info('=> init {}.bias as 0'.format(name)) - nn.init.normal_(m.weight, std=0.001) - nn.init.constant_(m.bias, 0) - - pretrained_state_dict = torch.load(pretrained) - logger.info('=> loading pretrained model {}'.format(pretrained)) - self.load_state_dict(pretrained_state_dict, strict=False) - elif pretrained: - logger.error('=> please download pre-trained models first!') - raise ValueError('{} is not exist!'.format(pretrained)) - else: - logger.info('=> init weights from normal distribution') - for m in self.modules(): - if isinstance(m, nn.Conv2d): - # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - nn.init.normal_(m.weight, std=0.001) - # nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.BatchNorm2d): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.ConvTranspose2d): - nn.init.normal_(m.weight, std=0.001) - if self.deconv_with_bias: - nn.init.constant_(m.bias, 0) - - -resnet_spec = { - 18: (BasicBlock, [2, 2, 2, 2]), 34: (BasicBlock, [3, 4, 6, 3]), 50: (Bottleneck, [3, 4, 6, 3]), - 101: (Bottleneck, [3, 4, 23, 3]), 152: (Bottleneck, [3, 8, 36, 3]) -} - - -def get_resnet_encoder(cfg, init_weight=True, global_mode=False, **kwargs): - num_layers = cfg.POSE_RES_MODEL.EXTRA.NUM_LAYERS - - block_class, layers = resnet_spec[num_layers] - - model = PoseResNet(block_class, layers, cfg, global_mode, **kwargs) - - if init_weight: - if num_layers == 50: - if cfg.POSE_RES_MODEL.PRETR_SET in ['imagenet']: - model.init_weights(cfg.POSE_RES_MODEL.PRETRAINED_IM) - logger.info('loaded ResNet imagenet pretrained model') - elif cfg.POSE_RES_MODEL.PRETR_SET in ['coco']: - model.init_weights(cfg.POSE_RES_MODEL.PRETRAINED_COCO) - logger.info('loaded ResNet coco pretrained model') - else: - raise NotImplementedError - - return model diff --git a/spaces/YuxinJ/Scenimefy/Scenimefy/data/base_dataset.py b/spaces/YuxinJ/Scenimefy/Scenimefy/data/base_dataset.py deleted file mode 100644 index 5748a9da2bcfb8126b3f91e50309eace78344e7b..0000000000000000000000000000000000000000 --- a/spaces/YuxinJ/Scenimefy/Scenimefy/data/base_dataset.py +++ /dev/null @@ -1,230 +0,0 @@ -"""This module implements an abstract base class (ABC) 'BaseDataset' for 
datasets. - -It also includes common transformation functions (e.g., get_transform, __scale_width), which can be later used in subclasses. -""" -import random -import numpy as np -import torch.utils.data as data -from PIL import Image -import torchvision.transforms as transforms -from abc import ABC, abstractmethod - - -class BaseDataset(data.Dataset, ABC): - """This class is an abstract base class (ABC) for datasets. - - To create a subclass, you need to implement the following four functions: - -- <__init__>: initialize the class, first call BaseDataset.__init__(self, opt). - -- <__len__>: return the size of dataset. - -- <__getitem__>: get a data point. - -- : (optionally) add dataset-specific options and set default options. - """ - - def __init__(self, opt): - """Initialize the class; save the options in the class - - Parameters: - opt (Option class)-- stores all the experiment flags; needs to be a subclass of BaseOptions - """ - self.opt = opt - self.root = opt.dataroot - self.current_epoch = 0 - - @staticmethod - def modify_commandline_options(parser, is_train): - """Add new dataset-specific options, and rewrite default values for existing options. - - Parameters: - parser -- original option parser - is_train (bool) -- whether training phase or test phase. You can use this flag to add training-specific or test-specific options. - - Returns: - the modified parser. - """ - return parser - - @abstractmethod - def __len__(self): - """Return the total number of images in the dataset.""" - return 0 - - @abstractmethod - def __getitem__(self, index): - """Return a data point and its metadata information. - - Parameters: - index - - a random integer for data indexing - - Returns: - a dictionary of data with their names. It ususally contains the data itself and its metadata information. 
- """ - pass - - -def get_params(opt, size): - w, h = size - new_h = h - new_w = w - if opt.preprocess == 'resize_and_crop': - new_h = new_w = opt.load_size - elif opt.preprocess == 'scale_width_and_crop': - new_w = opt.load_size - new_h = opt.load_size * h // w - - x = random.randint(0, np.maximum(0, new_w - opt.crop_size)) - y = random.randint(0, np.maximum(0, new_h - opt.crop_size)) - - flip = random.random() > 0.5 - - return {'crop_pos': (x, y), 'flip': flip} - - -def get_transform(opt, params=None, grayscale=False, method=Image.BICUBIC, convert=True): - transform_list = [] - if grayscale: - transform_list.append(transforms.Grayscale(1)) - if 'fixsize' in opt.preprocess: - transform_list.append(transforms.Resize(params["size"], method)) - if 'resize' in opt.preprocess: - osize = [opt.load_size, opt.load_size] - if "gta2cityscapes" in opt.dataroot: - osize[0] = opt.load_size // 2 - transform_list.append(transforms.Resize(osize, method)) - elif 'scale_width' in opt.preprocess: - transform_list.append(transforms.Lambda(lambda img: __scale_width(img, opt.load_size, opt.crop_size, method))) - elif 'scale_shortside' in opt.preprocess: - transform_list.append(transforms.Lambda(lambda img: __scale_shortside(img, opt.load_size, opt.crop_size, method))) - - if 'zoom' in opt.preprocess: - if params is None: - transform_list.append(transforms.Lambda(lambda img: __random_zoom(img, opt.load_size, opt.crop_size, method))) - else: - transform_list.append(transforms.Lambda(lambda img: __random_zoom(img, opt.load_size, opt.crop_size, method, factor=params["scale_factor"]))) - - if 'crop' in opt.preprocess: - if params is None or 'crop_pos' not in params: - transform_list.append(transforms.RandomCrop(opt.crop_size)) - else: - transform_list.append(transforms.Lambda(lambda img: __crop(img, params['crop_pos'], opt.crop_size))) - - if 'patch' in opt.preprocess: - transform_list.append(transforms.Lambda(lambda img: __patch(img, params['patch_index'], opt.crop_size))) - - if 'trim' in opt.preprocess: - transform_list.append(transforms.Lambda(lambda img: __trim(img, opt.crop_size))) - - # if opt.preprocess == 'none': - transform_list.append(transforms.Lambda(lambda img: __make_power_2(img, base=4, method=method))) - - if not opt.no_flip: - if params is None or 'flip' not in params: - transform_list.append(transforms.RandomHorizontalFlip()) - elif 'flip' in params: - transform_list.append(transforms.Lambda(lambda img: __flip(img, params['flip']))) - - if convert: - transform_list += [transforms.ToTensor()] - if grayscale: - transform_list += [transforms.Normalize((0.5,), (0.5,))] - else: - transform_list += [transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))] - return transforms.Compose(transform_list) - - -def __make_power_2(img, base, method=Image.BICUBIC): - ow, oh = img.size - h = int(round(oh / base) * base) - w = int(round(ow / base) * base) - if h == oh and w == ow: - return img - - return img.resize((w, h), method) - - -def __random_zoom(img, target_width, crop_width, method=Image.BICUBIC, factor=None): - if factor is None: - zoom_level = np.random.uniform(0.8, 1.0, size=[2]) - else: - zoom_level = (factor[0], factor[1]) - iw, ih = img.size - zoomw = max(crop_width, iw * zoom_level[0]) - zoomh = max(crop_width, ih * zoom_level[1]) - img = img.resize((int(round(zoomw)), int(round(zoomh))), method) - return img - - -def __scale_shortside(img, target_width, crop_width, method=Image.BICUBIC): - ow, oh = img.size - shortside = min(ow, oh) - if shortside >= target_width: - return img - else: - scale = 
target_width / shortside - return img.resize((round(ow * scale), round(oh * scale)), method) - - -def __trim(img, trim_width): - ow, oh = img.size - if ow > trim_width: - xstart = np.random.randint(ow - trim_width) - xend = xstart + trim_width - else: - xstart = 0 - xend = ow - if oh > trim_width: - ystart = np.random.randint(oh - trim_width) - yend = ystart + trim_width - else: - ystart = 0 - yend = oh - return img.crop((xstart, ystart, xend, yend)) - - -def __scale_width(img, target_width, crop_width, method=Image.BICUBIC): - ow, oh = img.size - if ow == target_width and oh >= crop_width: - return img - w = target_width - h = int(max(target_width * oh / ow, crop_width)) - return img.resize((w, h), method) - - -def __crop(img, pos, size): - ow, oh = img.size - x1, y1 = pos - tw = th = size - if (ow > tw or oh > th): - return img.crop((x1, y1, x1 + tw, y1 + th)) - return img - - -def __patch(img, index, size): - ow, oh = img.size - nw, nh = ow // size, oh // size - roomx = ow - nw * size - roomy = oh - nh * size - startx = np.random.randint(int(roomx) + 1) - starty = np.random.randint(int(roomy) + 1) - - index = index % (nw * nh) - ix = index // nh - iy = index % nh - gridx = startx + ix * size - gridy = starty + iy * size - return img.crop((gridx, gridy, gridx + size, gridy + size)) - - -def __flip(img, flip): - if flip: - return img.transpose(Image.FLIP_LEFT_RIGHT) - return img - - -def __print_size_warning(ow, oh, w, h): - """Print warning information about image size(only print once)""" - if not hasattr(__print_size_warning, 'has_printed'): - print("The image size needs to be a multiple of 4. " - "The loaded image size was (%d, %d), so it was adjusted to " - "(%d, %d). This adjustment will be done to all images " - "whose sizes are not multiples of 4" % (ow, oh, w, h)) - __print_size_warning.has_printed = True diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/datasets/dataset_wrappers.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/datasets/dataset_wrappers.py deleted file mode 100644 index 55ad5cb60e581a96bdbd1fbbeebc2f46f8c4e899..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/datasets/dataset_wrappers.py +++ /dev/null @@ -1,282 +0,0 @@ -import bisect -import math -from collections import defaultdict - -import numpy as np -from mmcv.utils import print_log -from torch.utils.data.dataset import ConcatDataset as _ConcatDataset - -from .builder import DATASETS -from .coco import CocoDataset - - -@DATASETS.register_module() -class ConcatDataset(_ConcatDataset): - """A wrapper of concatenated dataset. - - Same as :obj:`torch.utils.data.dataset.ConcatDataset`, but - concat the group flag for image aspect ratio. - - Args: - datasets (list[:obj:`Dataset`]): A list of datasets. - separate_eval (bool): Whether to evaluate the results - separately if it is used as validation dataset. - Defaults to True. - """ - - def __init__(self, datasets, separate_eval=True): - super(ConcatDataset, self).__init__(datasets) - self.CLASSES = datasets[0].CLASSES - self.separate_eval = separate_eval - if not separate_eval: - if any([isinstance(ds, CocoDataset) for ds in datasets]): - raise NotImplementedError( - 'Evaluating concatenated CocoDataset as a whole is not' - ' supported! 
Please set "separate_eval=True"') - elif len(set([type(ds) for ds in datasets])) != 1: - raise NotImplementedError( - 'All the datasets should have same types') - - if hasattr(datasets[0], 'flag'): - flags = [] - for i in range(0, len(datasets)): - flags.append(datasets[i].flag) - self.flag = np.concatenate(flags) - - def get_cat_ids(self, idx): - """Get category ids of concatenated dataset by index. - - Args: - idx (int): Index of data. - - Returns: - list[int]: All categories in the image of specified index. - """ - - if idx < 0: - if -idx > len(self): - raise ValueError( - 'absolute value of index should not exceed dataset length') - idx = len(self) + idx - dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx) - if dataset_idx == 0: - sample_idx = idx - else: - sample_idx = idx - self.cumulative_sizes[dataset_idx - 1] - return self.datasets[dataset_idx].get_cat_ids(sample_idx) - - def evaluate(self, results, logger=None, **kwargs): - """Evaluate the results. - - Args: - results (list[list | tuple]): Testing results of the dataset. - logger (logging.Logger | str | None): Logger used for printing - related information during evaluation. Default: None. - - Returns: - dict[str: float]: AP results of the total dataset or each separate - dataset if `self.separate_eval=True`. - """ - assert len(results) == self.cumulative_sizes[-1], \ - ('Dataset and results have different sizes: ' - f'{self.cumulative_sizes[-1]} v.s. {len(results)}') - - # Check whether all the datasets support evaluation - for dataset in self.datasets: - assert hasattr(dataset, 'evaluate'), \ - f'{type(dataset)} does not implement evaluate function' - - if self.separate_eval: - dataset_idx = -1 - total_eval_results = dict() - for size, dataset in zip(self.cumulative_sizes, self.datasets): - start_idx = 0 if dataset_idx == -1 else \ - self.cumulative_sizes[dataset_idx] - end_idx = self.cumulative_sizes[dataset_idx + 1] - - results_per_dataset = results[start_idx:end_idx] - print_log( - f'\nEvaluateing {dataset.ann_file} with ' - f'{len(results_per_dataset)} images now', - logger=logger) - - eval_results_per_dataset = dataset.evaluate( - results_per_dataset, logger=logger, **kwargs) - dataset_idx += 1 - for k, v in eval_results_per_dataset.items(): - total_eval_results.update({f'{dataset_idx}_{k}': v}) - - return total_eval_results - elif any([isinstance(ds, CocoDataset) for ds in self.datasets]): - raise NotImplementedError( - 'Evaluating concatenated CocoDataset as a whole is not' - ' supported! Please set "separate_eval=True"') - elif len(set([type(ds) for ds in self.datasets])) != 1: - raise NotImplementedError( - 'All the datasets should have same types') - else: - original_data_infos = self.datasets[0].data_infos - self.datasets[0].data_infos = sum( - [dataset.data_infos for dataset in self.datasets], []) - eval_results = self.datasets[0].evaluate( - results, logger=logger, **kwargs) - self.datasets[0].data_infos = original_data_infos - return eval_results - - -@DATASETS.register_module() -class RepeatDataset(object): - """A wrapper of repeated dataset. - - The length of repeated dataset will be `times` larger than the original - dataset. This is useful when the data loading time is long but the dataset - is small. Using RepeatDataset can reduce the data loading time between - epochs. - - Args: - dataset (:obj:`Dataset`): The dataset to be repeated. - times (int): Repeat times. 
- """ - - def __init__(self, dataset, times): - self.dataset = dataset - self.times = times - self.CLASSES = dataset.CLASSES - if hasattr(self.dataset, 'flag'): - self.flag = np.tile(self.dataset.flag, times) - - self._ori_len = len(self.dataset) - - def __getitem__(self, idx): - return self.dataset[idx % self._ori_len] - - def get_cat_ids(self, idx): - """Get category ids of repeat dataset by index. - - Args: - idx (int): Index of data. - - Returns: - list[int]: All categories in the image of specified index. - """ - - return self.dataset.get_cat_ids(idx % self._ori_len) - - def __len__(self): - """Length after repetition.""" - return self.times * self._ori_len - - -# Modified from https://github.com/facebookresearch/detectron2/blob/41d475b75a230221e21d9cac5d69655e3415e3a4/detectron2/data/samplers/distributed_sampler.py#L57 # noqa -@DATASETS.register_module() -class ClassBalancedDataset(object): - """A wrapper of repeated dataset with repeat factor. - - Suitable for training on class imbalanced datasets like LVIS. Following - the sampling strategy in the `paper `_, - in each epoch, an image may appear multiple times based on its - "repeat factor". - The repeat factor for an image is a function of the frequency the rarest - category labeled in that image. The "frequency of category c" in [0, 1] - is defined by the fraction of images in the training set (without repeats) - in which category c appears. - The dataset needs to instantiate :func:`self.get_cat_ids` to support - ClassBalancedDataset. - - The repeat factor is computed as followed. - - 1. For each category c, compute the fraction # of images - that contain it: :math:`f(c)` - 2. For each category c, compute the category-level repeat factor: - :math:`r(c) = max(1, sqrt(t/f(c)))` - 3. For each image I, compute the image-level repeat factor: - :math:`r(I) = max_{c in I} r(c)` - - Args: - dataset (:obj:`CustomDataset`): The dataset to be repeated. - oversample_thr (float): frequency threshold below which data is - repeated. For categories with ``f_c >= oversample_thr``, there is - no oversampling. For categories with ``f_c < oversample_thr``, the - degree of oversampling following the square-root inverse frequency - heuristic above. - filter_empty_gt (bool, optional): If set true, images without bounding - boxes will not be oversampled. Otherwise, they will be categorized - as the pure background class and involved into the oversampling. - Default: True. - """ - - def __init__(self, dataset, oversample_thr, filter_empty_gt=True): - self.dataset = dataset - self.oversample_thr = oversample_thr - self.filter_empty_gt = filter_empty_gt - self.CLASSES = dataset.CLASSES - - repeat_factors = self._get_repeat_factors(dataset, oversample_thr) - repeat_indices = [] - for dataset_idx, repeat_factor in enumerate(repeat_factors): - repeat_indices.extend([dataset_idx] * math.ceil(repeat_factor)) - self.repeat_indices = repeat_indices - - flags = [] - if hasattr(self.dataset, 'flag'): - for flag, repeat_factor in zip(self.dataset.flag, repeat_factors): - flags.extend([flag] * int(math.ceil(repeat_factor))) - assert len(flags) == len(repeat_indices) - self.flag = np.asarray(flags, dtype=np.uint8) - - def _get_repeat_factors(self, dataset, repeat_thr): - """Get repeat factor for each images in the dataset. - - Args: - dataset (:obj:`CustomDataset`): The dataset - repeat_thr (float): The threshold of frequency. If an image - contains the categories whose frequency below the threshold, - it would be repeated. 
- - Returns: - list[float]: The repeat factors for each images in the dataset. - """ - - # 1. For each category c, compute the fraction # of images - # that contain it: f(c) - category_freq = defaultdict(int) - num_images = len(dataset) - for idx in range(num_images): - cat_ids = set(self.dataset.get_cat_ids(idx)) - if len(cat_ids) == 0 and not self.filter_empty_gt: - cat_ids = set([len(self.CLASSES)]) - for cat_id in cat_ids: - category_freq[cat_id] += 1 - for k, v in category_freq.items(): - category_freq[k] = v / num_images - - # 2. For each category c, compute the category-level repeat factor: - # r(c) = max(1, sqrt(t/f(c))) - category_repeat = { - cat_id: max(1.0, math.sqrt(repeat_thr / cat_freq)) - for cat_id, cat_freq in category_freq.items() - } - - # 3. For each image I, compute the image-level repeat factor: - # r(I) = max_{c in I} r(c) - repeat_factors = [] - for idx in range(num_images): - cat_ids = set(self.dataset.get_cat_ids(idx)) - if len(cat_ids) == 0 and not self.filter_empty_gt: - cat_ids = set([len(self.CLASSES)]) - repeat_factor = 1 - if len(cat_ids) > 0: - repeat_factor = max( - {category_repeat[cat_id] - for cat_id in cat_ids}) - repeat_factors.append(repeat_factor) - - return repeat_factors - - def __getitem__(self, idx): - ori_index = self.repeat_indices[idx] - return self.dataset[ori_index] - - def __len__(self): - """Length after repetition.""" - return len(self.repeat_indices) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/utils/registry.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/utils/registry.py deleted file mode 100644 index fa9df39bc9f3d8d568361e7250ab35468f2b74e0..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/utils/registry.py +++ /dev/null @@ -1,315 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import inspect -import warnings -from functools import partial - -from .misc import is_seq_of - - -def build_from_cfg(cfg, registry, default_args=None): - """Build a module from config dict. - - Args: - cfg (dict): Config dict. It should at least contain the key "type". - registry (:obj:`Registry`): The registry to search the type from. - default_args (dict, optional): Default initialization arguments. - - Returns: - object: The constructed object. - """ - if not isinstance(cfg, dict): - raise TypeError(f'cfg must be a dict, but got {type(cfg)}') - if 'type' not in cfg: - if default_args is None or 'type' not in default_args: - raise KeyError( - '`cfg` or `default_args` must contain the key "type", ' - f'but got {cfg}\n{default_args}') - if not isinstance(registry, Registry): - raise TypeError('registry must be an mmcv.Registry object, ' - f'but got {type(registry)}') - if not (isinstance(default_args, dict) or default_args is None): - raise TypeError('default_args must be a dict or None, ' - f'but got {type(default_args)}') - - args = cfg.copy() - - if default_args is not None: - for name, value in default_args.items(): - args.setdefault(name, value) - - obj_type = args.pop('type') - if isinstance(obj_type, str): - obj_cls = registry.get(obj_type) - if obj_cls is None: - raise KeyError( - f'{obj_type} is not in the {registry.name} registry') - elif inspect.isclass(obj_type): - obj_cls = obj_type - else: - raise TypeError( - f'type must be a str or valid type, but got {type(obj_type)}') - try: - return obj_cls(**args) - except Exception as e: - # Normal TypeError does not print class name. 
- raise type(e)(f'{obj_cls.__name__}: {e}') - - -class Registry: - """A registry to map strings to classes. - - Registered object could be built from registry. - Example: - >>> MODELS = Registry('models') - >>> @MODELS.register_module() - >>> class ResNet: - >>> pass - >>> resnet = MODELS.build(dict(type='ResNet')) - - Please refer to - https://mmcv.readthedocs.io/en/latest/understand_mmcv/registry.html for - advanced usage. - - Args: - name (str): Registry name. - build_func(func, optional): Build function to construct instance from - Registry, func:`build_from_cfg` is used if neither ``parent`` or - ``build_func`` is specified. If ``parent`` is specified and - ``build_func`` is not given, ``build_func`` will be inherited - from ``parent``. Default: None. - parent (Registry, optional): Parent registry. The class registered in - children registry could be built from parent. Default: None. - scope (str, optional): The scope of registry. It is the key to search - for children registry. If not specified, scope will be the name of - the package where class is defined, e.g. mmdet, mmcls, mmseg. - Default: None. - """ - - def __init__(self, name, build_func=None, parent=None, scope=None): - self._name = name - self._module_dict = dict() - self._children = dict() - self._scope = self.infer_scope() if scope is None else scope - - # self.build_func will be set with the following priority: - # 1. build_func - # 2. parent.build_func - # 3. build_from_cfg - if build_func is None: - if parent is not None: - self.build_func = parent.build_func - else: - self.build_func = build_from_cfg - else: - self.build_func = build_func - if parent is not None: - assert isinstance(parent, Registry) - parent._add_children(self) - self.parent = parent - else: - self.parent = None - - def __len__(self): - return len(self._module_dict) - - def __contains__(self, key): - return self.get(key) is not None - - def __repr__(self): - format_str = self.__class__.__name__ + \ - f'(name={self._name}, ' \ - f'items={self._module_dict})' - return format_str - - @staticmethod - def infer_scope(): - """Infer the scope of registry. - - The name of the package where registry is defined will be returned. - - Example: - # in mmdet/models/backbone/resnet.py - >>> MODELS = Registry('models') - >>> @MODELS.register_module() - >>> class ResNet: - >>> pass - The scope of ``ResNet`` will be ``mmdet``. - - - Returns: - scope (str): The inferred scope name. - """ - # inspect.stack() trace where this function is called, the index-2 - # indicates the frame where `infer_scope()` is called - filename = inspect.getmodule(inspect.stack()[2][0]).__name__ - split_filename = filename.split('.') - return split_filename[0] - - @staticmethod - def split_scope_key(key): - """Split scope and key. - - The first scope will be split from key. - - Examples: - >>> Registry.split_scope_key('mmdet.ResNet') - 'mmdet', 'ResNet' - >>> Registry.split_scope_key('ResNet') - None, 'ResNet' - - Return: - scope (str, None): The first scope. - key (str): The remaining key. - """ - split_index = key.find('.') - if split_index != -1: - return key[:split_index], key[split_index + 1:] - else: - return None, key - - @property - def name(self): - return self._name - - @property - def scope(self): - return self._scope - - @property - def module_dict(self): - return self._module_dict - - @property - def children(self): - return self._children - - def get(self, key): - """Get the registry record. - - Args: - key (str): The class name in string format. 
- - Returns: - class: The corresponding class. - """ - scope, real_key = self.split_scope_key(key) - if scope is None or scope == self._scope: - # get from self - if real_key in self._module_dict: - return self._module_dict[real_key] - else: - # get from self._children - if scope in self._children: - return self._children[scope].get(real_key) - else: - # goto root - parent = self.parent - while parent.parent is not None: - parent = parent.parent - return parent.get(key) - - def build(self, *args, **kwargs): - return self.build_func(*args, **kwargs, registry=self) - - def _add_children(self, registry): - """Add children for a registry. - - The ``registry`` will be added as children based on its scope. - The parent registry could build objects from children registry. - - Example: - >>> models = Registry('models') - >>> mmdet_models = Registry('models', parent=models) - >>> @mmdet_models.register_module() - >>> class ResNet: - >>> pass - >>> resnet = models.build(dict(type='mmdet.ResNet')) - """ - - assert isinstance(registry, Registry) - assert registry.scope is not None - assert registry.scope not in self.children, \ - f'scope {registry.scope} exists in {self.name} registry' - self.children[registry.scope] = registry - - def _register_module(self, module_class, module_name=None, force=False): - if not inspect.isclass(module_class): - raise TypeError('module must be a class, ' - f'but got {type(module_class)}') - - if module_name is None: - module_name = module_class.__name__ - if isinstance(module_name, str): - module_name = [module_name] - for name in module_name: - if not force and name in self._module_dict: - raise KeyError(f'{name} is already registered ' - f'in {self.name}') - self._module_dict[name] = module_class - - def deprecated_register_module(self, cls=None, force=False): - warnings.warn( - 'The old API of register_module(module, force=False) ' - 'is deprecated and will be removed, please use the new API ' - 'register_module(name=None, force=False, module=None) instead.') - if cls is None: - return partial(self.deprecated_register_module, force=force) - self._register_module(cls, force=force) - return cls - - def register_module(self, name=None, force=False, module=None): - """Register a module. - - A record will be added to `self._module_dict`, whose key is the class - name or the specified name, and value is the class itself. - It can be used as a decorator or a normal function. - - Example: - >>> backbones = Registry('backbone') - >>> @backbones.register_module() - >>> class ResNet: - >>> pass - - >>> backbones = Registry('backbone') - >>> @backbones.register_module(name='mnet') - >>> class MobileNet: - >>> pass - - >>> backbones = Registry('backbone') - >>> class ResNet: - >>> pass - >>> backbones.register_module(ResNet) - - Args: - name (str | None): The module name to be registered. If not - specified, the class name will be used. - force (bool, optional): Whether to override an existing class with - the same name. Default: False. - module (type): Module class to be registered. - """ - if not isinstance(force, bool): - raise TypeError(f'force must be a boolean, but got {type(force)}') - # NOTE: This is a walkaround to be compatible with the old api, - # while it may introduce unexpected bugs. 
- if isinstance(name, type): - return self.deprecated_register_module(name, force=force) - - # raise the error ahead of time - if not (name is None or isinstance(name, str) or is_seq_of(name, str)): - raise TypeError( - 'name must be either of None, an instance of str or a sequence' - f' of str, but got {type(name)}') - - # use it as a normal method: x.register_module(module=SomeClass) - if module is not None: - self._register_module( - module_class=module, module_name=name, force=force) - return module - - # use it as a decorator: @x.register_module() - def _register(cls): - self._register_module( - module_class=cls, module_name=name, force=force) - return cls - - return _register diff --git a/spaces/adirik/OWL-ViT/app.py b/spaces/adirik/OWL-ViT/app.py deleted file mode 100644 index 549af428acfe3469e430d8ae25e6d5bf804b2d7e..0000000000000000000000000000000000000000 --- a/spaces/adirik/OWL-ViT/app.py +++ /dev/null @@ -1,69 +0,0 @@ -import torch -import cv2 -import gradio as gr -import numpy as np -from transformers import OwlViTProcessor, OwlViTForObjectDetection - - -# Use GPU if available -if torch.cuda.is_available(): - device = torch.device("cuda") -else: - device = torch.device("cpu") - -model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32").to(device) -model.eval() -processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32") - - -def query_image(img, text_queries, score_threshold): - text_queries = text_queries - text_queries = text_queries.split(",") - - target_sizes = torch.Tensor([img.shape[:2]]) - inputs = processor(text=text_queries, images=img, return_tensors="pt").to(device) - - with torch.no_grad(): - outputs = model(**inputs) - - outputs.logits = outputs.logits.cpu() - outputs.pred_boxes = outputs.pred_boxes.cpu() - results = processor.post_process(outputs=outputs, target_sizes=target_sizes) - boxes, scores, labels = results[0]["boxes"], results[0]["scores"], results[0]["labels"] - - font = cv2.FONT_HERSHEY_SIMPLEX - - for box, score, label in zip(boxes, scores, labels): - box = [int(i) for i in box.tolist()] - - if score >= score_threshold: - img = cv2.rectangle(img, box[:2], box[2:], (255,0,0), 5) - if box[3] + 25 > 768: - y = box[3] - 10 - else: - y = box[3] + 25 - - img = cv2.putText( - img, text_queries[label], (box[0], y), font, 1, (255,0,0), 2, cv2.LINE_AA - ) - return img - - -description = """ -Gradio demo for OWL-ViT, introduced in Simple Open-Vocabulary Object Detection with Vision Transformers. -\n\nYou can use OWL-ViT to query images with text descriptions of any object. To use it, simply upload an image and enter comma separated text descriptions of objects you want to query the image for. You can also use the score threshold slider to set a threshold to filter out low probability predictions. -\n\nOWL-ViT is trained on text templates, hence you can get better predictions by querying the image with text templates used in training the original model: *"photo of a star-spangled banner"*, *"image of a shoe"*. Refer to the CLIP paper to see the full list of text templates used to augment the training data. 
\n\nColab demo -""" -demo = gr.Interface( - query_image, - inputs=[gr.Image(), "text", gr.Slider(0, 1, value=0.1)], - outputs="image", - title="Zero-Shot Object Detection with OWL-ViT", - description=description, - examples=[ - ["assets/astronaut.png", "human face, rocket, star-spangled banner, nasa badge", 0.11], - ["assets/coffee.png", "coffee mug, spoon, plate", 0.1], - ["assets/butterflies.jpeg", "orange butterfly", 0.3], - ], -) -demo.launch() \ No newline at end of file diff --git a/spaces/aijack/jojo/e4e/criteria/w_norm.py b/spaces/aijack/jojo/e4e/criteria/w_norm.py deleted file mode 100644 index a45ab6f67d8a3f7051be4b7236fa2f38446fd2c1..0000000000000000000000000000000000000000 --- a/spaces/aijack/jojo/e4e/criteria/w_norm.py +++ /dev/null @@ -1,14 +0,0 @@ -import torch -from torch import nn - - -class WNormLoss(nn.Module): - - def __init__(self, start_from_latent_avg=True): - super(WNormLoss, self).__init__() - self.start_from_latent_avg = start_from_latent_avg - - def forward(self, latent, latent_avg=None): - if self.start_from_latent_avg: - latent = latent - latent_avg - return torch.sum(latent.norm(2, dim=(1, 2))) / latent.shape[0] diff --git a/spaces/akhaliq/JoJoGAN/e4e/scripts/inference.py b/spaces/akhaliq/JoJoGAN/e4e/scripts/inference.py deleted file mode 100644 index 185b9b34db85dcd97b9793bd5dbfc9d1ca046549..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/JoJoGAN/e4e/scripts/inference.py +++ /dev/null @@ -1,133 +0,0 @@ -import argparse - -import torch -import numpy as np -import sys -import os -import dlib - -sys.path.append(".") -sys.path.append("..") - -from configs import data_configs, paths_config -from datasets.inference_dataset import InferenceDataset -from torch.utils.data import DataLoader -from utils.model_utils import setup_model -from utils.common import tensor2im -from utils.alignment import align_face -from PIL import Image - - -def main(args): - net, opts = setup_model(args.ckpt, device) - is_cars = 'cars_' in opts.dataset_type - generator = net.decoder - generator.eval() - args, data_loader = setup_data_loader(args, opts) - - # Check if latents exist - latents_file_path = os.path.join(args.save_dir, 'latents.pt') - if os.path.exists(latents_file_path): - latent_codes = torch.load(latents_file_path).to(device) - else: - latent_codes = get_all_latents(net, data_loader, args.n_sample, is_cars=is_cars) - torch.save(latent_codes, latents_file_path) - - if not args.latents_only: - generate_inversions(args, generator, latent_codes, is_cars=is_cars) - - -def setup_data_loader(args, opts): - dataset_args = data_configs.DATASETS[opts.dataset_type] - transforms_dict = dataset_args['transforms'](opts).get_transforms() - images_path = args.images_dir if args.images_dir is not None else dataset_args['test_source_root'] - print(f"images path: {images_path}") - align_function = None - if args.align: - align_function = run_alignment - test_dataset = InferenceDataset(root=images_path, - transform=transforms_dict['transform_test'], - preprocess=align_function, - opts=opts) - - data_loader = DataLoader(test_dataset, - batch_size=args.batch, - shuffle=False, - num_workers=2, - drop_last=True) - - print(f'dataset length: {len(test_dataset)}') - - if args.n_sample is None: - args.n_sample = len(test_dataset) - return args, data_loader - - -def get_latents(net, x, is_cars=False): - codes = net.encoder(x) - if net.opts.start_from_latent_avg: - if codes.ndim == 2: - codes = codes + net.latent_avg.repeat(codes.shape[0], 1, 1)[:, 0, :] - else: - codes = codes + 
net.latent_avg.repeat(codes.shape[0], 1, 1) - if codes.shape[1] == 18 and is_cars: - codes = codes[:, :16, :] - return codes - - -def get_all_latents(net, data_loader, n_images=None, is_cars=False): - all_latents = [] - i = 0 - with torch.no_grad(): - for batch in data_loader: - if n_images is not None and i > n_images: - break - x = batch - inputs = x.to(device).float() - latents = get_latents(net, inputs, is_cars) - all_latents.append(latents) - i += len(latents) - return torch.cat(all_latents) - - -def save_image(img, save_dir, idx): - result = tensor2im(img) - im_save_path = os.path.join(save_dir, f"{idx:05d}.jpg") - Image.fromarray(np.array(result)).save(im_save_path) - - -@torch.no_grad() -def generate_inversions(args, g, latent_codes, is_cars): - print('Saving inversion images') - inversions_directory_path = os.path.join(args.save_dir, 'inversions') - os.makedirs(inversions_directory_path, exist_ok=True) - for i in range(args.n_sample): - imgs, _ = g([latent_codes[i].unsqueeze(0)], input_is_latent=True, randomize_noise=False, return_latents=True) - if is_cars: - imgs = imgs[:, :, 64:448, :] - save_image(imgs[0], inversions_directory_path, i + 1) - - -def run_alignment(image_path): - predictor = dlib.shape_predictor(paths_config.model_paths['shape_predictor']) - aligned_image = align_face(filepath=image_path, predictor=predictor) - print("Aligned image has shape: {}".format(aligned_image.size)) - return aligned_image - - -if __name__ == "__main__": - device = "cuda" - - parser = argparse.ArgumentParser(description="Inference") - parser.add_argument("--images_dir", type=str, default=None, - help="The directory of the images to be inverted") - parser.add_argument("--save_dir", type=str, default=None, - help="The directory to save the latent codes and inversion images. 
(default: images_dir") - parser.add_argument("--batch", type=int, default=1, help="batch size for the generator") - parser.add_argument("--n_sample", type=int, default=None, help="number of the samples to infer.") - parser.add_argument("--latents_only", action="store_true", help="infer only the latent codes of the directory") - parser.add_argument("--align", action="store_true", help="align face images before inference") - parser.add_argument("ckpt", metavar="CHECKPOINT", help="path to generator checkpoint") - - args = parser.parse_args() - main(args) diff --git a/spaces/akhaliq/PaintTransformer/train/models/networks.py b/spaces/akhaliq/PaintTransformer/train/models/networks.py deleted file mode 100644 index 77fda334ac6acb666ea329d6d4d75ffc257fa7ba..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/PaintTransformer/train/models/networks.py +++ /dev/null @@ -1,143 +0,0 @@ -import torch -import torch.nn as nn -from torch.nn import init -from torch.optim import lr_scheduler - - -def get_scheduler(optimizer, opt): - if opt.lr_policy == 'linear': - def lambda_rule(epoch): - # lr_l = 1.0 - max(0, epoch + opt.epoch_count - opt.n_epochs) / float(opt.n_epochs_decay + 1) - lr_l = 0.3 ** max(0, (epoch + opt.epoch_count - opt.n_epochs) // 5) - return lr_l - - scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda_rule) - elif opt.lr_policy == 'step': - scheduler = lr_scheduler.StepLR(optimizer, step_size=opt.lr_decay_iters, gamma=0.1) - elif opt.lr_policy == 'plateau': - scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.2, threshold=0.01, patience=5) - elif opt.lr_policy == 'cosine': - scheduler = lr_scheduler.CosineAnnealingLR(optimizer, T_max=opt.n_epochs, eta_min=0) - else: - return NotImplementedError('learning rate policy [%s] is not implemented', opt.lr_policy) - return scheduler - - -def init_weights(net, init_type='normal', init_gain=0.02): - def init_func(m): - classname = m.__class__.__name__ - if hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1): - if init_type == 'normal': - init.normal_(m.weight.data, 0.0, init_gain) - elif init_type == 'xavier': - init.xavier_normal_(m.weight.data, gain=init_gain) - elif init_type == 'kaiming': - init.kaiming_normal_(m.weight.data, a=0, mode='fan_in') - elif init_type == 'orthogonal': - init.orthogonal_(m.weight.data, gain=init_gain) - else: - raise NotImplementedError('initialization method [%s] is not implemented' % init_type) - if hasattr(m, 'bias') and m.bias is not None: - init.constant_(m.bias.data, 0.0) - elif classname.find('BatchNorm2d') != -1: - init.normal_(m.weight.data, 1.0, init_gain) - init.constant_(m.bias.data, 0.0) - - print('initialize network with %s' % init_type) - net.apply(init_func) - - -def init_net(net, init_type='normal', init_gain=0.02, gpu_ids=()): - if len(gpu_ids) > 0: - assert (torch.cuda.is_available()) - net.to(gpu_ids[0]) - net = torch.nn.DataParallel(net, gpu_ids) # multi-GPUs - init_weights(net, init_type, init_gain=init_gain) - return net - - -class SignWithSigmoidGrad(torch.autograd.Function): - - @staticmethod - def forward(ctx, x): - result = (x > 0).float() - sigmoid_result = torch.sigmoid(x) - ctx.save_for_backward(sigmoid_result) - return result - - @staticmethod - def backward(ctx, grad_result): - (sigmoid_result,) = ctx.saved_tensors - if ctx.needs_input_grad[0]: - grad_input = grad_result * sigmoid_result * (1 - sigmoid_result) - else: - grad_input = None - return grad_input - - -class Painter(nn.Module): - - def 
__init__(self, param_per_stroke, total_strokes, hidden_dim, n_heads=8, n_enc_layers=3, n_dec_layers=3): - super().__init__() - self.enc_img = nn.Sequential( - nn.ReflectionPad2d(1), - nn.Conv2d(3, 32, 3, 1), - nn.BatchNorm2d(32), - nn.ReLU(True), - nn.ReflectionPad2d(1), - nn.Conv2d(32, 64, 3, 2), - nn.BatchNorm2d(64), - nn.ReLU(True), - nn.ReflectionPad2d(1), - nn.Conv2d(64, 128, 3, 2), - nn.BatchNorm2d(128), - nn.ReLU(True)) - self.enc_canvas = nn.Sequential( - nn.ReflectionPad2d(1), - nn.Conv2d(3, 32, 3, 1), - nn.BatchNorm2d(32), - nn.ReLU(True), - nn.ReflectionPad2d(1), - nn.Conv2d(32, 64, 3, 2), - nn.BatchNorm2d(64), - nn.ReLU(True), - nn.ReflectionPad2d(1), - nn.Conv2d(64, 128, 3, 2), - nn.BatchNorm2d(128), - nn.ReLU(True)) - self.conv = nn.Conv2d(128 * 2, hidden_dim, 1) - self.transformer = nn.Transformer(hidden_dim, n_heads, n_enc_layers, n_dec_layers) - self.linear_param = nn.Sequential( - nn.Linear(hidden_dim, hidden_dim), - nn.ReLU(True), - nn.Linear(hidden_dim, hidden_dim), - nn.ReLU(True), - nn.Linear(hidden_dim, param_per_stroke)) - self.linear_decider = nn.Linear(hidden_dim, 1) - self.query_pos = nn.Parameter(torch.rand(total_strokes, hidden_dim)) - self.row_embed = nn.Parameter(torch.rand(8, hidden_dim // 2)) - self.col_embed = nn.Parameter(torch.rand(8, hidden_dim // 2)) - - def forward(self, img, canvas): - b, _, H, W = img.shape - img_feat = self.enc_img(img) - canvas_feat = self.enc_canvas(canvas) - h, w = img_feat.shape[-2:] - feat = torch.cat([img_feat, canvas_feat], dim=1) - feat_conv = self.conv(feat) - - pos_embed = torch.cat([ - self.col_embed[:w].unsqueeze(0).contiguous().repeat(h, 1, 1), - self.row_embed[:h].unsqueeze(1).contiguous().repeat(1, w, 1), - ], dim=-1).flatten(0, 1).unsqueeze(1) - hidden_state = self.transformer(pos_embed + feat_conv.flatten(2).permute(2, 0, 1).contiguous(), - self.query_pos.unsqueeze(1).contiguous().repeat(1, b, 1)) - hidden_state = hidden_state.permute(1, 0, 2).contiguous() - param = self.linear_param(hidden_state) - s = hidden_state.shape[1] - grid = param[:, :, :2].view(b * s, 1, 1, 2).contiguous() - img_temp = img.unsqueeze(1).contiguous().repeat(1, s, 1, 1, 1).view(b * s, 3, H, W).contiguous() - color = nn.functional.grid_sample(img_temp, 2 * grid - 1, align_corners=False).view(b, s, 3).contiguous() - decision = self.linear_decider(hidden_state) - return torch.cat([param, color, color, torch.rand(b, s, 1, device=img.device)], dim=-1), decision - diff --git a/spaces/akhaliq/lama/saicinpainting/training/__init__.py b/spaces/akhaliq/lama/saicinpainting/training/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/alamin655/websurfx/docs/README.md b/spaces/alamin655/websurfx/docs/README.md deleted file mode 100644 index 02f304bb8e2df34c952a23a24301224af9c3f196..0000000000000000000000000000000000000000 --- a/spaces/alamin655/websurfx/docs/README.md +++ /dev/null @@ -1,17 +0,0 @@ -

    Websurfx Docs

    - -# General - -- [Introduction](./introduction.md) -- [**FAQ**](./faq.md) - -# Users - -- [Installation](./installation.md) -- [Configuration](./configuration.md) -- [Theming](./theming.md) - -# Developers - -- [**Contribute**](https://github.com/neon-mmd/websurfx/blob/master/CONTRIBUTING.md) -- [**Coding style**](https://rust-lang.github.io/api-guidelines/naming.html) diff --git a/spaces/alamin655/websurfx/public/templates/about.html b/spaces/alamin655/websurfx/public/templates/about.html deleted file mode 100644 index 9c4cbb052ae4c433ae9648c4ff4d21975ec2c657..0000000000000000000000000000000000000000 --- a/spaces/alamin655/websurfx/public/templates/about.html +++ /dev/null @@ -1,29 +0,0 @@ -{{>header this}} -
    -
    -
    -

    Websurfx

    -
    -
    -

    A modern-looking, lightning-fast, privacy-respecting, secure meta search engine written in Rust. It provides a fast and secure search experience while respecting user privacy.
    It aggregates results from multiple search engines and presents them in an unbiased manner, filtering out trackers and ads. -

    - -

    Some of the Top Features:

    - -
      Lightning fast - Results load within milliseconds for an instant search experience.
    - -
      Secure search - All searches are performed over an encrypted connection to prevent snooping.
    - -
      Ad-free results - All search results are ad-free and clutter-free for a clean search experience.
    - -
      Privacy focused - Websurfx does not track, store or sell your search data. Your privacy is our priority.
    - -
      Free and Open source - The entire project's code is open source and available for free on GitHub under the GNU Affero General Public License.
    - -
      Highly customizable - Websurfx comes with 9 built-in color themes and supports creating custom themes effortlessly.
    -
    - -

    Developed by: Websurfx team

    -
    -{{>footer}} - diff --git a/spaces/allknowingroger/Image-Models-Test43/README.md b/spaces/allknowingroger/Image-Models-Test43/README.md deleted file mode 100644 index 40fce8099c9bb7c9015bac913dc79102d7037f7c..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test43/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Image Models -emoji: 👀 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: true -duplicated_from: allknowingroger/Image-Models-Test42 ---- - - \ No newline at end of file diff --git a/spaces/allknowingroger/Image-Models-Test88/app.py b/spaces/allknowingroger/Image-Models-Test88/app.py deleted file mode 100644 index dbeb57b2ebd49c701713ba1951ad96f787b93452..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test88/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "DmatryMakeev/ponteleich-v1-3500s", - "Aayan2586/ayn-yt2", - "swapnabyna/OUTPUT", - "minimaxir/sdxl-ugly-sonic-lora", - "bellagio-ai/WalterNgo-face-xl-dreambooth-512", - "Yntec/MangledMerge3_768", - "MakAttack/653b80bfe3adbe5935e7e488", - "digiplay/OnlyReal-Black-Mix", - "LinoyTsaban/lora-trained-xl-colab-person-0.0001-1000", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - 
tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/alvanlii/domain-expansion/torch_utils/ops/upfirdn2d.py b/spaces/alvanlii/domain-expansion/torch_utils/ops/upfirdn2d.py deleted file mode 100644 index ceeac2b9834e33b7c601c28bf27f32aa91c69256..0000000000000000000000000000000000000000 --- a/spaces/alvanlii/domain-expansion/torch_utils/ops/upfirdn2d.py +++ /dev/null @@ -1,384 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Custom PyTorch ops for efficient resampling of 2D images.""" - -import os -import warnings -import numpy as np -import torch -import traceback - -from .. import custom_ops -from .. import misc -from . import conv2d_gradfix - -#---------------------------------------------------------------------------- - -_inited = False -_plugin = None - -def _init(): - global _inited, _plugin - if not _inited: - sources = ['upfirdn2d.cpp', 'upfirdn2d.cu'] - sources = [os.path.join(os.path.dirname(__file__), s) for s in sources] - try: - _plugin = custom_ops.get_plugin('upfirdn2d_plugin', sources=sources, extra_cuda_cflags=['--use_fast_math']) - except: - warnings.warn('Failed to build CUDA kernels for upfirdn2d. Falling back to slow reference implementation. 
Details:\n\n' + traceback.format_exc()) - return _plugin is not None - -def _parse_scaling(scaling): - if isinstance(scaling, int): - scaling = [scaling, scaling] - assert isinstance(scaling, (list, tuple)) - assert all(isinstance(x, int) for x in scaling) - sx, sy = scaling - assert sx >= 1 and sy >= 1 - return sx, sy - -def _parse_padding(padding): - if isinstance(padding, int): - padding = [padding, padding] - assert isinstance(padding, (list, tuple)) - assert all(isinstance(x, int) for x in padding) - if len(padding) == 2: - padx, pady = padding - padding = [padx, padx, pady, pady] - padx0, padx1, pady0, pady1 = padding - return padx0, padx1, pady0, pady1 - -def _get_filter_size(f): - if f is None: - return 1, 1 - assert isinstance(f, torch.Tensor) and f.ndim in [1, 2] - fw = f.shape[-1] - fh = f.shape[0] - with misc.suppress_tracer_warnings(): - fw = int(fw) - fh = int(fh) - misc.assert_shape(f, [fh, fw][:f.ndim]) - assert fw >= 1 and fh >= 1 - return fw, fh - -#---------------------------------------------------------------------------- - -def setup_filter(f, device=torch.device('cpu'), normalize=True, flip_filter=False, gain=1, separable=None): - r"""Convenience function to setup 2D FIR filter for `upfirdn2d()`. - - Args: - f: Torch tensor, numpy array, or python list of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), - `[]` (impulse), or - `None` (identity). - device: Result device (default: cpu). - normalize: Normalize the filter so that it retains the magnitude - for constant input signal (DC)? (default: True). - flip_filter: Flip the filter? (default: False). - gain: Overall scaling factor for signal magnitude (default: 1). - separable: Return a separable filter? (default: select automatically). - - Returns: - Float32 tensor of the shape - `[filter_height, filter_width]` (non-separable) or - `[filter_taps]` (separable). - """ - # Validate. - if f is None: - f = 1 - f = torch.as_tensor(f, dtype=torch.float32) - assert f.ndim in [0, 1, 2] - assert f.numel() > 0 - if f.ndim == 0: - f = f[np.newaxis] - - # Separable? - if separable is None: - separable = (f.ndim == 1 and f.numel() >= 8) - if f.ndim == 1 and not separable: - f = f.ger(f) - assert f.ndim == (1 if separable else 2) - - # Apply normalize, flip, gain, and device. - if normalize: - f /= f.sum() - if flip_filter: - f = f.flip(list(range(f.ndim))) - f = f * (gain ** (f.ndim / 2)) - f = f.to(device=device) - return f - -#---------------------------------------------------------------------------- - -def upfirdn2d(x, f, up=1, down=1, padding=0, flip_filter=False, gain=1, impl='cuda'): - r"""Pad, upsample, filter, and downsample a batch of 2D images. - - Performs the following sequence of operations for each channel: - - 1. Upsample the image by inserting N-1 zeros after each pixel (`up`). - - 2. Pad the image with the specified number of zeros on each side (`padding`). - Negative padding corresponds to cropping the image. - - 3. Convolve the image with the specified 2D FIR filter (`f`), shrinking it - so that the footprint of all output pixels lies within the input image. - - 4. Downsample the image by keeping every Nth pixel (`down`). - - This sequence of operations bears close resemblance to scipy.signal.upfirdn(). - The fused op is considerably more efficient than performing the same calculation - using standard PyTorch ops. It supports gradients of arbitrary order. 
- - Args: - x: Float32/float64/float16 input tensor of the shape - `[batch_size, num_channels, in_height, in_width]`. - f: Float32 FIR filter of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), or - `None` (identity). - up: Integer upsampling factor. Can be a single int or a list/tuple - `[x, y]` (default: 1). - down: Integer downsampling factor. Can be a single int or a list/tuple - `[x, y]` (default: 1). - padding: Padding with respect to the upsampled image. Can be a single number - or a list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - flip_filter: False = convolution, True = correlation (default: False). - gain: Overall scaling factor for signal magnitude (default: 1). - impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. - """ - assert isinstance(x, torch.Tensor) - assert impl in ['ref', 'cuda'] - if impl == 'cuda' and x.device.type == 'cuda' and _init(): - return _upfirdn2d_cuda(up=up, down=down, padding=padding, flip_filter=flip_filter, gain=gain).apply(x, f) - return _upfirdn2d_ref(x, f, up=up, down=down, padding=padding, flip_filter=flip_filter, gain=gain) - -#---------------------------------------------------------------------------- - -@misc.profiled_function -def _upfirdn2d_ref(x, f, up=1, down=1, padding=0, flip_filter=False, gain=1): - """Slow reference implementation of `upfirdn2d()` using standard PyTorch ops. - """ - # Validate arguments. - assert isinstance(x, torch.Tensor) and x.ndim == 4 - if f is None: - f = torch.ones([1, 1], dtype=torch.float32, device=x.device) - assert isinstance(f, torch.Tensor) and f.ndim in [1, 2] - assert f.dtype == torch.float32 and not f.requires_grad - batch_size, num_channels, in_height, in_width = x.shape - upx, upy = _parse_scaling(up) - downx, downy = _parse_scaling(down) - padx0, padx1, pady0, pady1 = _parse_padding(padding) - - # Upsample by inserting zeros. - x = x.reshape([batch_size, num_channels, in_height, 1, in_width, 1]) - x = torch.nn.functional.pad(x, [0, upx - 1, 0, 0, 0, upy - 1]) - x = x.reshape([batch_size, num_channels, in_height * upy, in_width * upx]) - - # Pad or crop. - x = torch.nn.functional.pad(x, [max(padx0, 0), max(padx1, 0), max(pady0, 0), max(pady1, 0)]) - x = x[:, :, max(-pady0, 0) : x.shape[2] - max(-pady1, 0), max(-padx0, 0) : x.shape[3] - max(-padx1, 0)] - - # Setup filter. - f = f * (gain ** (f.ndim / 2)) - f = f.to(x.dtype) - if not flip_filter: - f = f.flip(list(range(f.ndim))) - - # Convolve with the filter. - f = f[np.newaxis, np.newaxis].repeat([num_channels, 1] + [1] * f.ndim) - if f.ndim == 4: - x = conv2d_gradfix.conv2d(input=x, weight=f, groups=num_channels) - else: - x = conv2d_gradfix.conv2d(input=x, weight=f.unsqueeze(2), groups=num_channels) - x = conv2d_gradfix.conv2d(input=x, weight=f.unsqueeze(3), groups=num_channels) - - # Downsample by throwing away pixels. - x = x[:, :, ::downy, ::downx] - return x - -#---------------------------------------------------------------------------- - -_upfirdn2d_cuda_cache = dict() - -def _upfirdn2d_cuda(up=1, down=1, padding=0, flip_filter=False, gain=1): - """Fast CUDA implementation of `upfirdn2d()` using custom ops. - """ - # Parse arguments. - upx, upy = _parse_scaling(up) - downx, downy = _parse_scaling(down) - padx0, padx1, pady0, pady1 = _parse_padding(padding) - - # Lookup from cache. 
- key = (upx, upy, downx, downy, padx0, padx1, pady0, pady1, flip_filter, gain) - if key in _upfirdn2d_cuda_cache: - return _upfirdn2d_cuda_cache[key] - - # Forward op. - class Upfirdn2dCuda(torch.autograd.Function): - @staticmethod - def forward(ctx, x, f): # pylint: disable=arguments-differ - assert isinstance(x, torch.Tensor) and x.ndim == 4 - if f is None: - f = torch.ones([1, 1], dtype=torch.float32, device=x.device) - assert isinstance(f, torch.Tensor) and f.ndim in [1, 2] - y = x - if f.ndim == 2: - y = _plugin.upfirdn2d(y, f, upx, upy, downx, downy, padx0, padx1, pady0, pady1, flip_filter, gain) - else: - y = _plugin.upfirdn2d(y, f.unsqueeze(0), upx, 1, downx, 1, padx0, padx1, 0, 0, flip_filter, np.sqrt(gain)) - y = _plugin.upfirdn2d(y, f.unsqueeze(1), 1, upy, 1, downy, 0, 0, pady0, pady1, flip_filter, np.sqrt(gain)) - ctx.save_for_backward(f) - ctx.x_shape = x.shape - return y - - @staticmethod - def backward(ctx, dy): # pylint: disable=arguments-differ - f, = ctx.saved_tensors - _, _, ih, iw = ctx.x_shape - _, _, oh, ow = dy.shape - fw, fh = _get_filter_size(f) - p = [ - fw - padx0 - 1, - iw * upx - ow * downx + padx0 - upx + 1, - fh - pady0 - 1, - ih * upy - oh * downy + pady0 - upy + 1, - ] - dx = None - df = None - - if ctx.needs_input_grad[0]: - dx = _upfirdn2d_cuda(up=down, down=up, padding=p, flip_filter=(not flip_filter), gain=gain).apply(dy, f) - - assert not ctx.needs_input_grad[1] - return dx, df - - # Add to cache. - _upfirdn2d_cuda_cache[key] = Upfirdn2dCuda - return Upfirdn2dCuda - -#---------------------------------------------------------------------------- - -def filter2d(x, f, padding=0, flip_filter=False, gain=1, impl='cuda'): - r"""Filter a batch of 2D images using the given 2D FIR filter. - - By default, the result is padded so that its shape matches the input. - User-specified padding is applied on top of that, with negative values - indicating cropping. Pixels outside the image are assumed to be zero. - - Args: - x: Float32/float64/float16 input tensor of the shape - `[batch_size, num_channels, in_height, in_width]`. - f: Float32 FIR filter of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), or - `None` (identity). - padding: Padding with respect to the output. Can be a single number or a - list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - flip_filter: False = convolution, True = correlation (default: False). - gain: Overall scaling factor for signal magnitude (default: 1). - impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. - """ - padx0, padx1, pady0, pady1 = _parse_padding(padding) - fw, fh = _get_filter_size(f) - p = [ - padx0 + fw // 2, - padx1 + (fw - 1) // 2, - pady0 + fh // 2, - pady1 + (fh - 1) // 2, - ] - return upfirdn2d(x, f, padding=p, flip_filter=flip_filter, gain=gain, impl=impl) - -#---------------------------------------------------------------------------- - -def upsample2d(x, f, up=2, padding=0, flip_filter=False, gain=1, impl='cuda'): - r"""Upsample a batch of 2D images using the given 2D FIR filter. - - By default, the result is padded so that its shape is a multiple of the input. - User-specified padding is applied on top of that, with negative values - indicating cropping. Pixels outside the image are assumed to be zero. - - Args: - x: Float32/float64/float16 input tensor of the shape - `[batch_size, num_channels, in_height, in_width]`. 
- f: Float32 FIR filter of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), or - `None` (identity). - up: Integer upsampling factor. Can be a single int or a list/tuple - `[x, y]` (default: 1). - padding: Padding with respect to the output. Can be a single number or a - list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - flip_filter: False = convolution, True = correlation (default: False). - gain: Overall scaling factor for signal magnitude (default: 1). - impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. - """ - upx, upy = _parse_scaling(up) - padx0, padx1, pady0, pady1 = _parse_padding(padding) - fw, fh = _get_filter_size(f) - p = [ - padx0 + (fw + upx - 1) // 2, - padx1 + (fw - upx) // 2, - pady0 + (fh + upy - 1) // 2, - pady1 + (fh - upy) // 2, - ] - return upfirdn2d(x, f, up=up, padding=p, flip_filter=flip_filter, gain=gain*upx*upy, impl=impl) - -#---------------------------------------------------------------------------- - -def downsample2d(x, f, down=2, padding=0, flip_filter=False, gain=1, impl='cuda'): - r"""Downsample a batch of 2D images using the given 2D FIR filter. - - By default, the result is padded so that its shape is a fraction of the input. - User-specified padding is applied on top of that, with negative values - indicating cropping. Pixels outside the image are assumed to be zero. - - Args: - x: Float32/float64/float16 input tensor of the shape - `[batch_size, num_channels, in_height, in_width]`. - f: Float32 FIR filter of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), or - `None` (identity). - down: Integer downsampling factor. Can be a single int or a list/tuple - `[x, y]` (default: 1). - padding: Padding with respect to the input. Can be a single number or a - list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - flip_filter: False = convolution, True = correlation (default: False). - gain: Overall scaling factor for signal magnitude (default: 1). - impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. 
- """ - downx, downy = _parse_scaling(down) - padx0, padx1, pady0, pady1 = _parse_padding(padding) - fw, fh = _get_filter_size(f) - p = [ - padx0 + (fw - downx + 1) // 2, - padx1 + (fw - downx) // 2, - pady0 + (fh - downy + 1) // 2, - pady1 + (fh - downy) // 2, - ] - return upfirdn2d(x, f, down=down, padding=p, flip_filter=flip_filter, gain=gain, impl=impl) - -#---------------------------------------------------------------------------- diff --git a/spaces/andreped/AeroPath/LICENSE.md b/spaces/andreped/AeroPath/LICENSE.md deleted file mode 100644 index abf831ecf47782d04286a47533618914fb5aaa91..0000000000000000000000000000000000000000 --- a/spaces/andreped/AeroPath/LICENSE.md +++ /dev/null @@ -1,21 +0,0 @@ -MIT License - -Copyright (c) 2023 André Pedersen - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. \ No newline at end of file diff --git a/spaces/anekcb/Bee4Med/app.py b/spaces/anekcb/Bee4Med/app.py deleted file mode 100644 index b77774ebb070a99fa185398342fec6c663406648..0000000000000000000000000000000000000000 --- a/spaces/anekcb/Bee4Med/app.py +++ /dev/null @@ -1,56 +0,0 @@ -import gradio as gr -import tensorflow as tf -from tensorflow.keras.applications.mobilenet_v2 import preprocess_input -from tensorflow.keras.models import load_model -from tensorflow.keras.preprocessing import image -import numpy as np -from PIL import Image - - -# Load the model -model = load_model("keras_model.h5", compile=False) - -# Load the labels -class_names = open("labels.txt", "r").readlines() - - -def predict_image(img): - # Resize the image - img = Image.fromarray(img.astype('uint8'), 'RGB') - img = img.resize((224, 224), resample=Image.BILINEAR) - - # Preprocess the image - img_array = image.img_to_array(img) - img_array = preprocess_input(img_array) - - # Expand the dimensions to create a batch of size 1 - img_batch = tf.expand_dims(img_array, axis=0) - - # Predict the class probabilities - preds = model.predict(img_batch) - class_idx = tf.argmax(preds, axis=1)[0] - class_name = class_names[class_idx].strip() - confidence_score = float(preds[0][class_idx]) # convert to float - - if class_idx == 5: - return "We couldn't detect anything from this image. Please try with a different image." - elif confidence_score >= 0.70: - return f"There is a {confidence_score*100:.2f}% chance for this image to be in {class_name}. Even though it has good accuracy, please consult a doctor for confirmation." 
- elif 0.50 <= confidence_score < 0.70: - return f"There is a {confidence_score*100:.2f}% chance for this image to be in {class_name}, but considering the accuracy, it's better to consult a doctor before using our service." - else: - return f"There is a {confidence_score*100:.2f}% chance for this image to be in {class_name}. Since the accuracy is very low, please consider a doctor's advice and we recommend you not to rely on our predictions." - - -# Launch the Gradio interface -iface = gr.Interface(fn=predict_image, inputs="image", outputs="text", title="Bee4Med - Skin Disease Classifier", - description="""This is a machine learning model that predicts skin disease from an image(limited dataset). Which is --->Acne and Rosacea category --->Eczema(most probably atopic dermatitis) category --->Bullous Disease category --->Eczema category --->Alopecia, Fungus, and other Nail Diseases category -However, please note that there are chances that the predictions may go wrong, and we strongly recommend you to consult a doctor for confirmation. Please provide a closer pic for better accuracy""") - -# Launch the interface -iface.launch() \ No newline at end of file diff --git a/spaces/animesh651/ChatAPT_v1/README.md b/spaces/animesh651/ChatAPT_v1/README.md deleted file mode 100644 index 487e2bd989a57ccc2a119efb6262cd7269cbd8cc..0000000000000000000000000000000000000000 --- a/spaces/animesh651/ChatAPT_v1/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ChatAPT V1 -emoji: 😻 -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -license: creativeml-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/aphenx/bingo/src/app/loading.css b/spaces/aphenx/bingo/src/app/loading.css deleted file mode 100644 index eaaab6a86a228334c4eca3c5368ae6f0f593d405..0000000000000000000000000000000000000000 --- a/spaces/aphenx/bingo/src/app/loading.css +++ /dev/null @@ -1,68 +0,0 @@ -::-webkit-scrollbar { - width: 10px; - height: 10px; - display: none; -} - -::-webkit-scrollbar-button:start:decrement, -::-webkit-scrollbar-button:end:increment { - height: 30px; - background-color: transparent; -} - -::-webkit-scrollbar-track-piece { - background-color: #3b3b3b; - -webkit-border-radius: 16px; -} - -::-webkit-scrollbar-thumb:vertical { - height: 50px; - background-color: #666; - border: 1px solid #eee; - -webkit-border-radius: 6px; -} - -/* loading start */ -.loading-spinner { - display: flex; - justify-content: center; - align-items: center; - height: 100vh; - opacity: 1; - transition: opacity .8s ease-out; -} - -.loading-spinner.hidden { - opacity: 0; -} - -.loading-spinner>div { - width: 30px; - height: 30px; - background: linear-gradient(90deg, #2870EA 10.79%, #1B4AEF 87.08%); - - border-radius: 100%; - display: inline-block; - animation: sk-bouncedelay 1.4s infinite ease-in-out both; -} - -.loading-spinner .bounce1 { - animation-delay: -0.32s; -} - -.loading-spinner .bounce2 { - animation-delay: -0.16s; -} - -@keyframes sk-bouncedelay { - - 0%, - 80%, - 100% { - transform: scale(0); - } - - 40% { - transform: scale(1.0); - } -} diff --git a/spaces/arch-123/bingo/src/components/ui/badge.tsx b/spaces/arch-123/bingo/src/components/ui/badge.tsx deleted file mode 100644 index d9a84b394090e5b4b3bd34f6135b9a2f2ead0aa2..0000000000000000000000000000000000000000 --- a/spaces/arch-123/bingo/src/components/ui/badge.tsx +++ /dev/null @@ -1,36 +0,0 @@ -import * as React from 'react' -import { cva, type 
VariantProps } from 'class-variance-authority' - -import { cn } from '@/lib/utils' - -const badgeVariants = cva( - 'inline-flex items-center rounded-full border px-2.5 py-0.5 text-xs font-semibold transition-colors focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2', - { - variants: { - variant: { - default: - 'border-transparent bg-primary text-primary-foreground hover:bg-primary/80', - secondary: - 'border-transparent bg-secondary text-secondary-foreground hover:bg-secondary/80', - destructive: - 'border-transparent bg-destructive text-destructive-foreground hover:bg-destructive/80', - outline: 'text-foreground' - } - }, - defaultVariants: { - variant: 'default' - } - } -) - -export interface BadgeProps - extends React.HTMLAttributes, - VariantProps {} - -function Badge({ className, variant, ...props }: BadgeProps) { - return ( -
    - ) -} - -export { Badge, badgeVariants } diff --git a/spaces/ardha27/rvc_TTS/lib/infer_pack/models.py b/spaces/ardha27/rvc_TTS/lib/infer_pack/models.py deleted file mode 100644 index 3665d03bc0514a6ed07d3372ea24717dae1e0a65..0000000000000000000000000000000000000000 --- a/spaces/ardha27/rvc_TTS/lib/infer_pack/models.py +++ /dev/null @@ -1,1142 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from lib.infer_pack import modules -from lib.infer_pack import attentions -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from lib.infer_pack.commons import init_weights -import numpy as np -from lib.infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class 
ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in 
range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - 
"""SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in 
range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, 
rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - nsff0 = nsff0[:, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask 
- if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - nsff0 = nsff0[:, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, 
- upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class 
MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/utils/data.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/utils/data.py deleted file mode 100644 index 22e46b683adfc7f6c7c8a57fb5b697e422cd915c..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/utils/data.py +++ /dev/null @@ -1,79 +0,0 @@ -import bisect - -import 
numpy as np -import torch - - -def _pad_data(x, length): - _pad = 0 - assert x.ndim == 1 - return np.pad(x, (0, length - x.shape[0]), mode="constant", constant_values=_pad) - - -def prepare_data(inputs): - max_len = max((len(x) for x in inputs)) - return np.stack([_pad_data(x, max_len) for x in inputs]) - - -def _pad_tensor(x, length): - _pad = 0.0 - assert x.ndim == 2 - x = np.pad(x, [[0, 0], [0, length - x.shape[1]]], mode="constant", constant_values=_pad) - return x - - -def prepare_tensor(inputs, out_steps): - max_len = max((x.shape[1] for x in inputs)) - remainder = max_len % out_steps - pad_len = max_len + (out_steps - remainder) if remainder > 0 else max_len - return np.stack([_pad_tensor(x, pad_len) for x in inputs]) - - -def _pad_stop_target(x: np.ndarray, length: int, pad_val=1) -> np.ndarray: - """Pad stop target array. - - Args: - x (np.ndarray): Stop target array. - length (int): Length after padding. - pad_val (int, optional): Padding value. Defaults to 1. - - Returns: - np.ndarray: Padded stop target array. - """ - assert x.ndim == 1 - return np.pad(x, (0, length - x.shape[0]), mode="constant", constant_values=pad_val) - - -def prepare_stop_target(inputs, out_steps): - """Pad row vectors with 1.""" - max_len = max((x.shape[0] for x in inputs)) - remainder = max_len % out_steps - pad_len = max_len + (out_steps - remainder) if remainder > 0 else max_len - return np.stack([_pad_stop_target(x, pad_len) for x in inputs]) - - -def pad_per_step(inputs, pad_len): - return np.pad(inputs, [[0, 0], [0, 0], [0, pad_len]], mode="constant", constant_values=0.0) - - -def get_length_balancer_weights(items: list, num_buckets=10): - # get all durations - audio_lengths = np.array([item["audio_length"] for item in items]) - # create the $num_buckets buckets classes based in the dataset max and min length - max_length = int(max(audio_lengths)) - min_length = int(min(audio_lengths)) - step = int((max_length - min_length) / num_buckets) + 1 - buckets_classes = [i + step for i in range(min_length, (max_length - step) + num_buckets + 1, step)] - # add each sample in their respective length bucket - buckets_names = np.array( - [buckets_classes[bisect.bisect_left(buckets_classes, item["audio_length"])] for item in items] - ) - # count and compute the weights_bucket for each sample - unique_buckets_names = np.unique(buckets_names).tolist() - bucket_ids = [unique_buckets_names.index(l) for l in buckets_names] - bucket_count = np.array([len(np.where(buckets_names == l)[0]) for l in unique_buckets_names]) - weight_bucket = 1.0 / bucket_count - dataset_samples_weight = np.array([weight_bucket[l] for l in bucket_ids]) - # normalize - dataset_samples_weight = dataset_samples_weight / np.linalg.norm(dataset_samples_weight) - return torch.from_numpy(dataset_samples_weight).float() diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/bubble_plot.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/bubble_plot.py deleted file mode 100644 index 44b74bced4654c696211a9da654d5c56c15fd5db..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/bubble_plot.py +++ /dev/null @@ -1,16 +0,0 @@ -""" -Bubble Plot ------------------ -This example shows how to make a bubble plot. 
-""" -# category: scatter plots -import altair as alt -from vega_datasets import data - -source = data.cars() - -alt.Chart(source).mark_point().encode( - x='Horsepower', - y='Miles_per_Gallon', - size='Acceleration' -) diff --git a/spaces/asifhugs/InfiniteGPT/README.md b/spaces/asifhugs/InfiniteGPT/README.md deleted file mode 100644 index 5d0a66009955a69c55022e487e0b2c202a243659..0000000000000000000000000000000000000000 --- a/spaces/asifhugs/InfiniteGPT/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: InfiniteGPT -emoji: 🚀 -colorFrom: indigo -colorTo: yellow -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: true ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/auto-academic/auto-draft/latex_templates/AAAI2023/methodology.tex b/spaces/auto-academic/auto-draft/latex_templates/AAAI2023/methodology.tex deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/awacke1/HuggingfaceEvolution/app.py b/spaces/awacke1/HuggingfaceEvolution/app.py deleted file mode 100644 index 87014a74a5efe1bc55873844893d37e2fbb2ec57..0000000000000000000000000000000000000000 --- a/spaces/awacke1/HuggingfaceEvolution/app.py +++ /dev/null @@ -1,177 +0,0 @@ -# List of URLs provided by the user -urls = [ - "https://huggingface.co/spaces/awacke1/CB-GR-Chatbot-Blenderbot", - "https://huggingface.co/spaces/awacke1/TTS-STT-Blocks", - "https://huggingface.co/spaces/awacke1/Prompt-Refinery-Text-to-Image-Generation", - "https://huggingface.co/spaces/awacke1/Video-Summary", - "https://huggingface.co/spaces/awacke1/AI-MovieMaker-Comedy", - "https://huggingface.co/spaces/awacke1/ChatGPT-Memory-Chat-Story-Generator", - "https://huggingface.co/spaces/awacke1/CloneAnyVoice", - "https://huggingface.co/spaces/awacke1/ChatGPT-Streamlit-2", - "https://huggingface.co/spaces/awacke1/WikipediaUltimateAISearch", - "https://huggingface.co/spaces/awacke1/RLHF.Cognitive.Episodic.Semantic.Memory", - "https://huggingface.co/spaces/awacke1/Memory-Shared", - "https://huggingface.co/spaces/awacke1/VideoSwap", - "https://huggingface.co/spaces/awacke1/AI-Wikipedia-Search", - "https://huggingface.co/spaces/awacke1/AutoMLUsingStreamlit-Plotly", - "https://huggingface.co/spaces/awacke1/NLP-Lyric-Chorus-Image", - "https://huggingface.co/spaces/awacke1/OpenAssistant-Chatbot-FTW-Open-Source", - "https://huggingface.co/spaces/awacke1/ChatGPTStreamlit7", - "https://huggingface.co/spaces/awacke1/MultiPDF-QA-ChatGPT-Langchain", - "https://huggingface.co/spaces/awacke1/SOTA-Plan", - "https://huggingface.co/spaces/awacke1/AIandSmartTools", - "https://huggingface.co/spaces/awacke1/3DVirtualFood", - "https://huggingface.co/spaces/awacke1/Gradio-Gallery-Health-Medical-Icon-Sets", - "https://huggingface.co/spaces/awacke1/DatasetAnalyzer", - "https://huggingface.co/spaces/awacke1/PrompTart", - "https://huggingface.co/spaces/awacke1/sileod-deberta-v3-base-tasksource-nli", - "https://huggingface.co/spaces/awacke1/File-Memory-Operations-Human-Feedback-Gradio", - "https://huggingface.co/spaces/awacke1/Bloom.Big.Science.Continual.Generator", - "https://huggingface.co/spaces/awacke1/Ontology-Gradio", - "https://huggingface.co/spaces/awacke1/HTML5-Aframe-3dMap-Flight", - "https://huggingface.co/spaces/awacke1/Bloom.Generative.Writer", - "https://huggingface.co/spaces/awacke1/Voice-ChatGPT-Streamlit-12", - "https://huggingface.co/spaces/awacke1/HTML5-AR-VR", - 
"https://huggingface.co/spaces/awacke1/AnimationAI", - "https://huggingface.co/spaces/awacke1/GenerativeWordsandImages", - "https://huggingface.co/spaces/awacke1/AR-VR-IOT-Demo", - "https://huggingface.co/spaces/awacke1/ArtStyleFoodsandNutrition", - "https://huggingface.co/spaces/awacke1/CarePlanQnAWithContext", - "https://huggingface.co/spaces/awacke1/VideoSummaryYoutube3", - "https://huggingface.co/spaces/awacke1/AW-01ST-CSV-Dataset-Analyzer", - "https://huggingface.co/spaces/awacke1/Try.Playing.Learning.Sharing.On.This", - "https://huggingface.co/spaces/awacke1/google-flan-t5-base", - "https://huggingface.co/spaces/awacke1/PubMed-Parrot-Paraphraser-on-T5", - "https://huggingface.co/spaces/awacke1/Writing-Grammar-And-Paraphrase-w-Pegasus", - "https://huggingface.co/spaces/awacke1/runwayml-stable-diffusion-v1-5", - "https://huggingface.co/spaces/awacke1/DockerGoFlanT5", - "https://huggingface.co/spaces/awacke1/GradioContinualGenerator", - "https://huggingface.co/spaces/awacke1/StreamlitSuperPowerCheatSheet" -] - -# Extract the last part of each URL (after the last '/') to serve as the name of the button -url_names = [url.split('/')[-1] for url in urls] - -# Associate each URL with a relevant emoji based on keywords in its name -emoji_mapping = { - "Chatbot": "🤖", - "TTS": "🗣️", - "STT": "👂", - "Video": "🎥", - "MovieMaker": "🍿", - "ChatGPT": "💬", - "Voice": "🎙️", - "Wikipedia": "📖", - "Memory": "🧠", - "AI": "🧠", - "OpenAssistant": "🤝", - "3D": "🕶️", - "AR": "👓", - "VR": "🕶️", - "Animation": "🖌️", - "Dataset": "📊", - "Gradio": "📻", - "HTML5": "🌐", - "Writing": "✍️", - "Grammar": "🖋️", - "Paraphrase": "🔄", - "Streamlit": "🌠" -} - -# Map each URL name to its most relevant emoji -url_emojis = [] -for name in url_names: - associated_emoji = "🔗" # Default emoji - for keyword, emoji in emoji_mapping.items(): - if keyword in name: - associated_emoji = emoji - break - url_emojis.append(associated_emoji) - -#url_emojis[:5], url_names[:5] # Display the first 5 URL names with their associated emojis - -import streamlit as st -import json -import webbrowser - -# Function to load the history of clicks from the text file -def load_history(): - try: - with open("click_history.txt", "r") as f: - return json.load(f) - except FileNotFoundError: - return {url: 0 for url in urls} - -# Function to save the updated history of clicks to the text file -def save_history(history): - with open("click_history.txt", "w") as f: - json.dump(history, f) - -# Load the history of clicks -history = load_history() - -# Display the buttons for each URL -for url, name, emoji in zip(urls, url_names, url_emojis): - if st.button(f"{emoji} {name}"): - # Open the URL in a new browser tab using JavaScript - st.write('', unsafe_allow_html=True) - # Update the history of clicks - history[url] += 1 - save_history(history) - # Display the number of times the URL was opened below its corresponding button - st.write(f"Clicked: {history[url]} times") - -import time -from bokeh.plotting import figure -from bokeh.models import ColumnDataSource - -# ... [rest of the initial code remains unchanged] ... 
- -# Streamlit app -def main(): - - # Session state to hold the value of AutoRepeat button across reruns - if "auto_repeat" not in st.session_state: - st.session_state.auto_repeat = "On" - if "current_index" not in st.session_state: - st.session_state.current_index = 0 # Use 0 as a default index - - # Load the history of clicks - history = load_history() - - # Display the buttons for each URL - for url, name, emoji in zip(urls, url_names, url_emojis): - #if st.button(f"{emoji} {name}"): - if st.button(f"{emoji} {name}", key=url): # using the URL as the unique key - # Open the URL in a new browser tab using JavaScript - st.write('', unsafe_allow_html=True) - # Update the history of clicks - history[url] += 1 - save_history(history) - # Display the number of times the URL was opened below its corresponding button - st.write(f"Clicked: {history[url]} times") - - # Timer logic - if st.session_state.auto_repeat == "On": - timer_placeholder = st.empty() - for i in range(10, 0, -1): - timer_placeholder.text(f"Reloading in {i} seconds...") - time.sleep(1) - history = load_history() # Reload the history after the countdown - - # Display the Bokeh graph showing the click counts - non_zero_urls = [name for url, name in zip(urls, url_names) if history[url] > 0] - non_zero_counts = [history[url] for url in urls if history[url] > 0] - - source = ColumnDataSource(data=dict(urls=non_zero_urls, counts=non_zero_counts)) - - p = figure(x_range=non_zero_urls, plot_height=350, title="Click Counts per URL", - toolbar_location=None, tools="") - p.vbar(x='urls', top='counts', width=0.9, source=source) - p.xaxis.major_label_orientation = 1.2 - - st.bokeh_chart(p) - -if __name__ == "__main__": - main() - diff --git a/spaces/awacke1/chatgpt-demo/README.md b/spaces/awacke1/chatgpt-demo/README.md deleted file mode 100644 index 03b2af020bba8606e9f67f44b17571e7e23a5921..0000000000000000000000000000000000000000 --- a/spaces/awacke1/chatgpt-demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Chatgpt Demo -emoji: 🐠 -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.20.1 -app_file: app.py -pinned: false -duplicated_from: anzorq/chatgpt-demo ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/b-monroe/rvc-VoiceAI/vc_infer_pipeline.py b/spaces/b-monroe/rvc-VoiceAI/vc_infer_pipeline.py deleted file mode 100644 index c26d45068f9b6bf2b194b13c3c89f8a06347c124..0000000000000000000000000000000000000000 --- a/spaces/b-monroe/rvc-VoiceAI/vc_infer_pipeline.py +++ /dev/null @@ -1,306 +0,0 @@ -import numpy as np, parselmouth, torch, pdb -from time import time as ttime -import torch.nn.functional as F -from config import x_pad, x_query, x_center, x_max -import scipy.signal as signal -import pyworld, os, traceback, faiss -from scipy import signal - -bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000) - - -class VC(object): - def __init__(self, tgt_sr, device, is_half): - self.sr = 16000 # hubert输入采样率 - self.window = 160 # 每帧点数 - self.t_pad = self.sr * x_pad # 每条前后pad时间 - self.t_pad_tgt = tgt_sr * x_pad - self.t_pad2 = self.t_pad * 2 - self.t_query = self.sr * x_query # 查询切点前后查询时间 - self.t_center = self.sr * x_center # 查询切点位置 - self.t_max = self.sr * x_max # 免查询时长阈值 - self.device = device - self.is_half = is_half - - def get_f0(self, x, p_len, f0_up_key, f0_method, inp_f0=None): - time_step = self.window / self.sr * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) 
- if f0_method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif f0_method == "harvest": - f0, t = pyworld.harvest( - x.astype(np.double), - fs=self.sr, - f0_ceil=f0_max, - f0_floor=f0_min, - frame_period=10, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr) - f0 = signal.medfilt(f0, 3) - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)].shape[0] - f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)] = replace_f0[:shape] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0bak # 1-0 - - def vc( - self, - model, - net_g, - sid, - audio0, - pitch, - pitchf, - times, - index, - big_npy, - index_rate, - ): # ,file_index,file_big_npy - feats = torch.from_numpy(audio0) - if self.is_half: - feats = feats.half() - else: - feats = feats.float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False) - - inputs = { - "source": feats.to(self.device), - "padding_mask": padding_mask, - "output_layer": 9, # layer 9 - } - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) - - if ( - isinstance(index, type(None)) == False - and isinstance(big_npy, type(None)) == False - and index_rate != 0 - ): - npy = feats[0].cpu().numpy() - if self.is_half: - npy = npy.astype("float32") - _, I = index.search(npy, 1) - npy = big_npy[I.squeeze()] - if self.is_half: - npy = npy.astype("float16") - feats = ( - torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate - + (1 - index_rate) * feats - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - t1 = ttime() - p_len = audio0.shape[0] // self.window - if feats.shape[1] < p_len: - p_len = feats.shape[1] - if pitch != None and pitchf != None: - pitch = pitch[:, :p_len] - pitchf = pitchf[:, :p_len] - p_len = torch.tensor([p_len], device=self.device).long() - with torch.no_grad(): - if pitch != None and pitchf != None: - audio1 = ( - (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0] * 32768) - .data.cpu() - .float() - .numpy() - .astype(np.int16) - ) - else: - audio1 = ( - (net_g.infer(feats, p_len, sid)[0][0, 0] * 32768) - .data.cpu() - .float() - .numpy() - .astype(np.int16) - ) - del feats, p_len, padding_mask - if torch.cuda.is_available(): - torch.cuda.empty_cache() - t2 = ttime() - times[0] += t1 - t0 - times[2] += t2 - t1 - return audio1 - - def pipeline( - self, - model, - net_g, - sid, - 
audio, - times, - f0_up_key, - f0_method, - file_index, - file_big_npy, - index_rate, - if_f0, - f0_file=None, - ): - if ( - file_big_npy != "" - and file_index != "" - and os.path.exists(file_big_npy) == True - and os.path.exists(file_index) == True - and index_rate != 0 - ): - try: - index = faiss.read_index(file_index) - big_npy = np.load(file_big_npy) - except: - traceback.print_exc() - index = big_npy = None - else: - index = big_npy = None - print("Feature retrieval library doesn't exist or ratio is 0") - audio = signal.filtfilt(bh, ah, audio) - audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect") - opt_ts = [] - if audio_pad.shape[0] > self.t_max: - audio_sum = np.zeros_like(audio) - for i in range(self.window): - audio_sum += audio_pad[i : i - self.window] - for t in range(self.t_center, audio.shape[0], self.t_center): - opt_ts.append( - t - - self.t_query - + np.where( - np.abs(audio_sum[t - self.t_query : t + self.t_query]) - == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min() - )[0][0] - ) - s = 0 - audio_opt = [] - t = None - t1 = ttime() - audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect") - p_len = audio_pad.shape[0] // self.window - inp_f0 = None - if hasattr(f0_file, "name") == True: - try: - with open(f0_file.name, "r") as f: - lines = f.read().strip("\n").split("\n") - inp_f0 = [] - for line in lines: - inp_f0.append([float(i) for i in line.split(",")]) - inp_f0 = np.array(inp_f0, dtype="float32") - except: - traceback.print_exc() - sid = torch.tensor(sid, device=self.device).unsqueeze(0).long() - pitch, pitchf = None, None - if if_f0 == 1: - pitch, pitchf = self.get_f0(audio_pad, p_len, f0_up_key, f0_method, inp_f0) - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long() - pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float() - t2 = ttime() - times[1] += t2 - t1 - for t in opt_ts: - t = t // self.window * self.window - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - pitch[:, s // self.window : (t + self.t_pad2) // self.window], - pitchf[:, s // self.window : (t + self.t_pad2) // self.window], - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - None, - None, - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - s = t - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - pitch[:, t // self.window :] if t is not None else pitch, - pitchf[:, t // self.window :] if t is not None else pitchf, - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - None, - None, - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - audio_opt = np.concatenate(audio_opt) - del pitch, pitchf, sid - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return audio_opt diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/renderers/CSS2DRenderer.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/renderers/CSS2DRenderer.js deleted file mode 100644 index bb034c3195b71c3ae98ef3ac055891c75f2c7f46..0000000000000000000000000000000000000000 --- 
a/spaces/banana-projects/web3d/node_modules/three/examples/js/renderers/CSS2DRenderer.js +++ /dev/null @@ -1,174 +0,0 @@ -/** - * @author mrdoob / http://mrdoob.com/ - */ - -THREE.CSS2DObject = function ( element ) { - - THREE.Object3D.call( this ); - - this.element = element; - this.element.style.position = 'absolute'; - - this.addEventListener( 'removed', function () { - - if ( this.element.parentNode !== null ) { - - this.element.parentNode.removeChild( this.element ); - - } - - } ); - -}; - -THREE.CSS2DObject.prototype = Object.create( THREE.Object3D.prototype ); -THREE.CSS2DObject.prototype.constructor = THREE.CSS2DObject; - -// - -THREE.CSS2DRenderer = function () { - - console.log( 'THREE.CSS2DRenderer', THREE.REVISION ); - - var _width, _height; - var _widthHalf, _heightHalf; - - var vector = new THREE.Vector3(); - var viewMatrix = new THREE.Matrix4(); - var viewProjectionMatrix = new THREE.Matrix4(); - - var cache = { - objects: new WeakMap() - }; - - var domElement = document.createElement( 'div' ); - domElement.style.overflow = 'hidden'; - - this.domElement = domElement; - - this.getSize = function () { - - return { - width: _width, - height: _height - }; - - }; - - this.setSize = function ( width, height ) { - - _width = width; - _height = height; - - _widthHalf = _width / 2; - _heightHalf = _height / 2; - - domElement.style.width = width + 'px'; - domElement.style.height = height + 'px'; - - }; - - var renderObject = function ( object, camera ) { - - if ( object instanceof THREE.CSS2DObject ) { - - vector.setFromMatrixPosition( object.matrixWorld ); - vector.applyMatrix4( viewProjectionMatrix ); - - var element = object.element; - var style = 'translate(-50%,-50%) translate(' + ( vector.x * _widthHalf + _widthHalf ) + 'px,' + ( - vector.y * _heightHalf + _heightHalf ) + 'px)'; - - element.style.WebkitTransform = style; - element.style.MozTransform = style; - element.style.oTransform = style; - element.style.transform = style; - element.style.display = ( vector.z < - 1 || vector.z > 1 ) ? 
'none' : ''; - - var objectData = { - distanceToCameraSquared: getDistanceToSquared( camera, object ) - }; - - cache.objects.set( object, objectData ); - - if ( element.parentNode !== domElement ) { - - domElement.appendChild( element ); - - } - - } - - for ( var i = 0, l = object.children.length; i < l; i ++ ) { - - renderObject( object.children[ i ], camera ); - - } - - }; - - var getDistanceToSquared = function () { - - var a = new THREE.Vector3(); - var b = new THREE.Vector3(); - - return function ( object1, object2 ) { - - a.setFromMatrixPosition( object1.matrixWorld ); - b.setFromMatrixPosition( object2.matrixWorld ); - - return a.distanceToSquared( b ); - - }; - - }(); - - var filterAndFlatten = function ( scene ) { - - var result = []; - - scene.traverse( function ( object ) { - - if ( object instanceof THREE.CSS2DObject ) result.push( object ); - - } ); - - return result; - - }; - - var zOrder = function ( scene ) { - - var sorted = filterAndFlatten( scene ).sort( function ( a, b ) { - - var distanceA = cache.objects.get( a ).distanceToCameraSquared; - var distanceB = cache.objects.get( b ).distanceToCameraSquared; - - return distanceA - distanceB; - - } ); - - var zMax = sorted.length; - - for ( var i = 0, l = sorted.length; i < l; i ++ ) { - - sorted[ i ].element.style.zIndex = zMax - i; - - } - - }; - - this.render = function ( scene, camera ) { - - scene.updateMatrixWorld(); - - if ( camera.parent === null ) camera.updateMatrixWorld(); - - viewMatrix.copy( camera.matrixWorldInverse ); - viewProjectionMatrix.multiplyMatrices( camera.projectionMatrix, viewMatrix ); - - renderObject( scene, camera ); - zOrder( scene ); - - }; - -}; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/clipping_planes_pars_vertex.glsl.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/clipping_planes_pars_vertex.glsl.js deleted file mode 100644 index 44ade682b25dfced9f5c00c401af440b570b4ec3..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/clipping_planes_pars_vertex.glsl.js +++ /dev/null @@ -1,5 +0,0 @@ -export default /* glsl */` -#if NUM_CLIPPING_PLANES > 0 && ! defined( PHYSICAL ) && ! defined( PHONG ) && ! 
defined( MATCAP ) - varying vec3 vViewPosition; -#endif -`; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/default_fragment.glsl.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/default_fragment.glsl.js deleted file mode 100644 index a68ac369b71e9a89219f971b064b95b4d94ef8cc..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/default_fragment.glsl.js +++ /dev/null @@ -1,5 +0,0 @@ -export default /* glsl */` -void main() { - gl_FragColor = vec4( 1.0, 0.0, 0.0, 1.0 ); -} -`; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/logdepthbuf_pars_vertex.glsl.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/logdepthbuf_pars_vertex.glsl.js deleted file mode 100644 index 748289ea356e7862c07014c2238f8532cfa97efe..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/logdepthbuf_pars_vertex.glsl.js +++ /dev/null @@ -1,15 +0,0 @@ -export default /* glsl */` -#ifdef USE_LOGDEPTHBUF - - #ifdef USE_LOGDEPTHBUF_EXT - - varying float vFragDepth; - - #else - - uniform float logDepthBufFC; - - #endif - -#endif -`; diff --git a/spaces/beihai/PDF-Table-Extractor/.history/app_20220620231703.py b/spaces/beihai/PDF-Table-Extractor/.history/app_20220620231703.py deleted file mode 100644 index 7b20ef1cd574d79afd48e2e208d5a7cc7de3fd5c..0000000000000000000000000000000000000000 --- a/spaces/beihai/PDF-Table-Extractor/.history/app_20220620231703.py +++ /dev/null @@ -1,31 +0,0 @@ -#-*- coding : utf-8-*- -import base64 -from subprocess import STDOUT -import streamlit as st -import pandas as pd -import camelot as cam # extracting tables from PDFs - -st.title("PDF Table Extractor") - -input_pdf = st.file_uploader(label = "", type = 'pdf') - -page_number = st.text_input("请填写表格所在PDF页码,eg: 3", value = 1) - -if input_pdf is not None: - # byte object into a PDF file - with open("input.pdf", "wb") as f: - base64_pdf = base64.b64encode(input_pdf.read()).decode('utf-8') - f.write(base64.b64decode(base64_pdf)) - f.close() - - # read the pdf and parse it using stream - tables = cam.read_pdf("input.pdf", pages=page_number) - result = pd.ExcelWriter('result.xlsx', engine='xlsxwriter') - tables[0].to_excel(result,index=False) - # for i in range(0,len(tables)): - # table = tables[i].df - # sheetname = str(i) - # table.to_excel(result, sheetname,index=False) - - with open('result.xlsx','rb') as f: - st.download_button('提取完成,点击下载!', f,file_name='result.xlsx',mime="application/vnd.ms-excel") \ No newline at end of file diff --git a/spaces/bejar111/cursoia/README.md b/spaces/bejar111/cursoia/README.md deleted file mode 100644 index 9a0cefc12db9bf2f09ab235f5b174a3844151982..0000000000000000000000000000000000000000 --- a/spaces/bejar111/cursoia/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Cursoia -emoji: 📚 -colorFrom: gray -colorTo: blue -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bergrozen1213/3d-obj/README.md b/spaces/bergrozen1213/3d-obj/README.md deleted file mode 100644 index b942444fc6ad2e9308d68c6755810d5707d1c78e..0000000000000000000000000000000000000000 --- a/spaces/bergrozen1213/3d-obj/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Dpt Depth Estimation + 3D -emoji: ⚡ -colorFrom: blue -colorTo: red 
-sdk: gradio -sdk_version: 3.0b8 -app_file: app.py -pinned: false -duplicated_from: radames/dpt-depth-estimation-3d-obj ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/bioriAsaeru/text-to-voice/Download Film Mr Bean Holiday Subtitle Indonesia Download.md b/spaces/bioriAsaeru/text-to-voice/Download Film Mr Bean Holiday Subtitle Indonesia Download.md deleted file mode 100644 index 6c86dc049edaf0e37c9f733d2cfc01a96a8b53b2..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Download Film Mr Bean Holiday Subtitle Indonesia Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

    download film mr bean holiday subtitle indonesia download


    Download File 🌟 https://urloso.com/2uyPAR



    - -Watch Mr Bean's Holiday with Indonesian subtitles. Watch and download streaming of the latest and most complete movies from Indonesia, Thailand, India, and Japan online at Rebahin. 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/Finecut 8 Illustrator Cc Crack [BEST].md b/spaces/bioriAsaeru/text-to-voice/Finecut 8 Illustrator Cc Crack [BEST].md deleted file mode 100644 index 0a02e172d2c0390b7e6738ab5fc0aa79cc97e0e5..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Finecut 8 Illustrator Cc Crack [BEST].md +++ /dev/null @@ -1,11 +0,0 @@ -
    -

    fine cut 8 for illustrator mimaki fine cut 8 for illustrator mimaki fine cut 8 for illustrator mimaki fine cut 8 for illustrator mimaki fine cut 8. finecut 8 for illustrator mimaki fine cut 8 for illustrator mimaki fine cut 8 for illustrator mimaki fine cut 8 for illustrator mimaki fine cut 8. fine cut 8 for illustrator mimaki fine cut 8 for illustrator mimaki fine cut 8 for illustrator mimaki fine cut 8 for illustrator mimaki fine cut 8.

    -

    finecut 8 illustrator cc crack


    Download Zip ->->->-> https://urloso.com/2uyRD0



    -

    finecut cs6 download
    finecut 8 illustrator cc 2013
    cs5 2012
    2014
    finecut illustrator crack 2015 
    finecut for illustrator
    IMPORTANT!. FineCut/Coat9 for Illustrator Ver2.4 Mac (Illustrator 2019. 8* .

    -

    giorgio mario finecut 8 illustrator cc crack. finecut for illustrator iphone. FineCut/Coat9 for Illustrator Ver2.4 Mac (Illustrator 2019. 8killustrator 2020 finecut update fast finecut illustrator cc 2019 finecut for illustrator windows. Coat 8 for illustrator cc 2015

    -

    cs6 serial number finecut illustrator 7 for mac finecut 8 illustrator cc 5 finecut illustrator cs2. FineCut/Coat9 for Illustrator Ver2.4 Mac (Illustrator 2019. finecut for illustrator ist for mac win 7 / xp. finecut 8 illustrator ccrack. finecut 8 illustrator

    -

    mimaki finecut for illustrator 8. FineCut/Coat9 for Illustrator Ver2.4 Mac (Illustrator 2019. find and replace FineCut. Using FineCut with Illustrator CS4. FineCut CS4 (If you are working with CS4. finecut for illustrator

    -

    that finecut 8 for illustrator cc serial 32 compatible finecut 8 for illustrator cc serial 32 finecut 8 for illustrator cc serial 32. that finecut 8 for illustrator cc serial 32 compatible finecut 8 for illustrator cc serial 32 finecut 8 for illustrator cc serial 32. The program is a simple MIDI sequencer with audio recording and playback, audio recording and playback. Mimaki Fine Cut 8 serial and crack for illustrator. 0: Apr 22, 2015 - Crack Download Mimaki Fine Cut 8 Crack Download. Mimaki Fine Cut 8 for Adobe Illustrator. Mimaki Fine Cut 8 full serial.

    -

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Jodi-No1-Movie-720p-Extra-Quality-Download-Utorrent-Movies.md b/spaces/bioriAsaeru/text-to-voice/Jodi-No1-Movie-720p-Extra-Quality-Download-Utorrent-Movies.md deleted file mode 100644 index 68308fcc31a331d39a09fb690945b9fda10c4c2e..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Jodi-No1-Movie-720p-Extra-Quality-Download-Utorrent-Movies.md +++ /dev/null @@ -1,70 +0,0 @@ -## Jodi No1 Movie 720p Download Utorrent Movies - - - - - - ![Jodi No1 Movie 720p \[Extra Quality\] Download Utorrent Movies](https://i.ytimg.com/vi/QdtZgbSuDOo/maxresdefault.jpg) - - - - - -**Jodi No1 Movie 720p Download Utorrent Movies --->>> [https://tinourl.com/2txnLb](https://tinourl.com/2txnLb)** - - - - - - - - - - - - Here is a possible title and article with html formatting for the keyword "Jodi No1 Movie 720p Download Utorrent Movies": - -# How to Download Jodi No. 1 Movie in 720p Quality Using Utorrent - - - -Jodi No. 1 is a 2001 Bollywood comedy film directed by David Dhawan and starring Sanjay Dutt, Govinda, Twinkle Khanna and Monica Bedi. The film follows the adventures of two con men who pose as rich businessmen and fall in love with the daughters of their target. The film was a box office hit and received mixed reviews from critics. - - - -If you want to watch Jodi No. 1 movie in high definition quality, you can download it using Utorrent, a popular peer-to-peer file sharing software. Here are the steps to download Jodi No. 1 movie in 720p quality using Utorrent: - - - -1. Download and install Utorrent on your device from [https://www.utorrent.com/](https://www.utorrent.com/). - -2. Go to [https://baconsilockcomcont.wixsite.com/elflatlandpor/post/jodi-no-1-movie-720p-download-utorrent-movies](https://baconsilockcomcont.wixsite.com/elflatlandpor/post/jodi-no-1-movie-720p-download-utorrent-movies) and click on the download link for Jodi No. 1 movie in 720p quality. - -3. Open the downloaded torrent file with Utorrent and choose a location to save the movie file. - -4. Wait for the download to complete. You can check the progress and speed of the download on Utorrent. - -5. Once the download is finished, you can enjoy watching Jodi No. 1 movie in 720p quality on your device. - - - -Note: Downloading movies from unauthorized sources may be illegal in your country. Please check the laws and regulations before downloading any content. We do not endorse or promote piracy in any way. - -Here is a possible continuation of the article: - -Jodi No. 1 movie is a comedy of errors that will make you laugh and entertain you. The film has some hilarious scenes and dialogues that will tickle your funny bone. The film also has some romantic and emotional moments that will touch your heart. The film has a star-studded cast that delivers a great performance. Sanjay Dutt and Govinda are the perfect pair of con men who have a great chemistry and comic timing. Twinkle Khanna and Monica Bedi are the beautiful and charming love interests who add glamour and romance to the film. The film also features Anupam Kher, Ashish Vidyarthi, Shakti Kapoor, Mukesh Rishi and others in supporting roles. - - - -The film has a catchy and melodious soundtrack composed by Anand Raj Anand and Himesh Reshammiya. The film has some popular songs like "Jodi No. 1", "Andde Ka Funda", "Dostana", "Teri Bindiya Uda Ke Le Gayi" and others that will make you groove to the music. 
The film also has some stunning locations and sets that add to the visual appeal of the film. - - - -Jodi No. 1 movie is a fun-filled and enjoyable film that you can watch with your family and friends. The film is a perfect blend of comedy, romance, action and drama that will keep you engaged and entertained throughout. The film is a must-watch for the fans of Sanjay Dutt, Govinda and David Dhawan. - - dfd1c89656 - - - - - diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/evaluation/evaluator.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/evaluation/evaluator.py deleted file mode 100644 index baf996002b2fddc8c1952408d450b5bf69394f0a..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/evaluation/evaluator.py +++ /dev/null @@ -1,224 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import datetime -import logging -import time -from collections import OrderedDict, abc -from contextlib import ExitStack, contextmanager -from typing import List, Union -import torch -from torch import nn - -from detectron2.utils.comm import get_world_size, is_main_process -from detectron2.utils.logger import log_every_n_seconds - - -class DatasetEvaluator: - """ - Base class for a dataset evaluator. - - The function :func:`inference_on_dataset` runs the model over - all samples in the dataset, and have a DatasetEvaluator to process the inputs/outputs. - - This class will accumulate information of the inputs/outputs (by :meth:`process`), - and produce evaluation results in the end (by :meth:`evaluate`). - """ - - def reset(self): - """ - Preparation for a new round of evaluation. - Should be called before starting a round of evaluation. - """ - pass - - def process(self, inputs, outputs): - """ - Process the pair of inputs and outputs. - If they contain batches, the pairs can be consumed one-by-one using `zip`: - - .. code-block:: python - - for input_, output in zip(inputs, outputs): - # do evaluation on single input/output pair - ... - - Args: - inputs (list): the inputs that's used to call the model. - outputs (list): the return value of `model(inputs)` - """ - pass - - def evaluate(self): - """ - Evaluate/summarize the performance, after processing all input/output pairs. - - Returns: - dict: - A new evaluator class can return a dict of arbitrary format - as long as the user can process the results. - In our train_net.py, we expect the following format: - - * key: the name of the task (e.g., bbox) - * value: a dict of {metric name: score}, e.g.: {"AP50": 80} - """ - pass - - -class DatasetEvaluators(DatasetEvaluator): - """ - Wrapper class to combine multiple :class:`DatasetEvaluator` instances. - - This class dispatches every evaluation call to - all of its :class:`DatasetEvaluator`. - """ - - def __init__(self, evaluators): - """ - Args: - evaluators (list): the evaluators to combine. 
- """ - super().__init__() - self._evaluators = evaluators - - def reset(self): - for evaluator in self._evaluators: - evaluator.reset() - - def process(self, inputs, outputs): - for evaluator in self._evaluators: - evaluator.process(inputs, outputs) - - def evaluate(self): - results = OrderedDict() - for evaluator in self._evaluators: - result = evaluator.evaluate() - if is_main_process() and result is not None: - for k, v in result.items(): - assert ( - k not in results - ), "Different evaluators produce results with the same key {}".format(k) - results[k] = v - return results - - -def inference_on_dataset( - model, data_loader, evaluator: Union[DatasetEvaluator, List[DatasetEvaluator], None] -): - """ - Run model on the data_loader and evaluate the metrics with evaluator. - Also benchmark the inference speed of `model.__call__` accurately. - The model will be used in eval mode. - - Args: - model (callable): a callable which takes an object from - `data_loader` and returns some outputs. - - If it's an nn.Module, it will be temporarily set to `eval` mode. - If you wish to evaluate a model in `training` mode instead, you can - wrap the given model and override its behavior of `.eval()` and `.train()`. - data_loader: an iterable object with a length. - The elements it generates will be the inputs to the model. - evaluator: the evaluator(s) to run. Use `None` if you only want to benchmark, - but don't want to do any evaluation. - - Returns: - The return value of `evaluator.evaluate()` - """ - num_devices = get_world_size() - logger = logging.getLogger(__name__) - logger.info("Start inference on {} batches".format(len(data_loader))) - - total = len(data_loader) # inference data loader must have a fixed length - if evaluator is None: - # create a no-op evaluator - evaluator = DatasetEvaluators([]) - if isinstance(evaluator, abc.MutableSequence): - evaluator = DatasetEvaluators(evaluator) - evaluator.reset() - - num_warmup = min(5, total - 1) - start_time = time.perf_counter() - total_data_time = 0 - total_compute_time = 0 - total_eval_time = 0 - with ExitStack() as stack: - if isinstance(model, nn.Module): - stack.enter_context(inference_context(model)) - stack.enter_context(torch.no_grad()) - - start_data_time = time.perf_counter() - for idx, inputs in enumerate(data_loader): - total_data_time += time.perf_counter() - start_data_time - if idx == num_warmup: - start_time = time.perf_counter() - total_data_time = 0 - total_compute_time = 0 - total_eval_time = 0 - - start_compute_time = time.perf_counter() - outputs = model(inputs) - if torch.cuda.is_available(): - torch.cuda.synchronize() - total_compute_time += time.perf_counter() - start_compute_time - - start_eval_time = time.perf_counter() - evaluator.process(inputs, outputs) - total_eval_time += time.perf_counter() - start_eval_time - - iters_after_start = idx + 1 - num_warmup * int(idx >= num_warmup) - data_seconds_per_iter = total_data_time / iters_after_start - compute_seconds_per_iter = total_compute_time / iters_after_start - eval_seconds_per_iter = total_eval_time / iters_after_start - total_seconds_per_iter = (time.perf_counter() - start_time) / iters_after_start - if idx >= num_warmup * 2 or compute_seconds_per_iter > 5: - eta = datetime.timedelta(seconds=int(total_seconds_per_iter * (total - idx - 1))) - log_every_n_seconds( - logging.INFO, - ( - f"Inference done {idx + 1}/{total}. " - f"Dataloading: {data_seconds_per_iter:.4f} s/iter. " - f"Inference: {compute_seconds_per_iter:.4f} s/iter. 
" - f"Eval: {eval_seconds_per_iter:.4f} s/iter. " - f"Total: {total_seconds_per_iter:.4f} s/iter. " - f"ETA={eta}" - ), - n=5, - ) - start_data_time = time.perf_counter() - - # Measure the time only for this worker (before the synchronization barrier) - total_time = time.perf_counter() - start_time - total_time_str = str(datetime.timedelta(seconds=total_time)) - # NOTE this format is parsed by grep - logger.info( - "Total inference time: {} ({:.6f} s / iter per device, on {} devices)".format( - total_time_str, total_time / (total - num_warmup), num_devices - ) - ) - total_compute_time_str = str(datetime.timedelta(seconds=int(total_compute_time))) - logger.info( - "Total inference pure compute time: {} ({:.6f} s / iter per device, on {} devices)".format( - total_compute_time_str, total_compute_time / (total - num_warmup), num_devices - ) - ) - - results = evaluator.evaluate() - # An evaluator may return None when not in main process. - # Replace it by an empty dict instead to make it easier for downstream code to handle - if results is None: - results = {} - return results - - -@contextmanager -def inference_context(model): - """ - A context where the model is temporarily changed to eval mode, - and restored to previous mode afterwards. - - Args: - model: a torch Module - """ - training_mode = model.training - model.eval() - yield - model.train(training_mode) diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/dev/run_inference_tests.sh b/spaces/brjathu/HMR2.0/vendor/detectron2/dev/run_inference_tests.sh deleted file mode 100644 index bc9dcc56f06f79fc5efa42c04ffdc07c2787e3ac..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/dev/run_inference_tests.sh +++ /dev/null @@ -1,44 +0,0 @@ -#!/bin/bash -e -# Copyright (c) Facebook, Inc. and its affiliates. - -BIN="python tools/train_net.py" -OUTPUT="inference_test_output" -NUM_GPUS=2 - -CFG_LIST=( "${@:1}" ) - -if [ ${#CFG_LIST[@]} -eq 0 ]; then - CFG_LIST=( ./configs/quick_schedules/*inference_acc_test.yaml ) -fi - -echo "========================================================================" -echo "Configs to run:" -echo "${CFG_LIST[@]}" -echo "========================================================================" - - -for cfg in "${CFG_LIST[@]}"; do - echo "========================================================================" - echo "Running $cfg ..." - echo "========================================================================" - $BIN \ - --eval-only \ - --num-gpus $NUM_GPUS \ - --config-file "$cfg" \ - OUTPUT_DIR $OUTPUT - rm -rf $OUTPUT -done - - -echo "========================================================================" -echo "Running demo.py ..." 
-echo "========================================================================" -DEMO_BIN="python demo/demo.py" -COCO_DIR=datasets/coco/val2014 -mkdir -pv $OUTPUT - -set -v - -$DEMO_BIN --config-file ./configs/quick_schedules/panoptic_fpn_R_50_inference_acc_test.yaml \ - --input $COCO_DIR/COCO_val2014_0000001933* --output $OUTPUT -rm -rf $OUTPUT diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/cse/vertex_direct_embedder.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/cse/vertex_direct_embedder.py deleted file mode 100644 index 60fba277bf4c5bcb98cbd170dad168c4308bc0b4..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/cse/vertex_direct_embedder.py +++ /dev/null @@ -1,64 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -import pickle -import torch -from torch import nn - -from detectron2.utils.file_io import PathManager - -from .utils import normalize_embeddings - - -class VertexDirectEmbedder(nn.Module): - """ - Class responsible for embedding vertices. Vertex embeddings take - the form of a tensor of size [N, D], where - N = number of vertices - D = number of dimensions in the embedding space - """ - - def __init__(self, num_vertices: int, embed_dim: int): - """ - Initialize embedder, set random embeddings - - Args: - num_vertices (int): number of vertices to embed - embed_dim (int): number of dimensions in the embedding space - """ - super(VertexDirectEmbedder, self).__init__() - self.embeddings = nn.Parameter(torch.Tensor(num_vertices, embed_dim)) - self.reset_parameters() - - @torch.no_grad() - def reset_parameters(self): - """ - Reset embeddings to random values - """ - self.embeddings.zero_() - - def forward(self) -> torch.Tensor: - """ - Produce vertex embeddings, a tensor of shape [N, D] where: - N = number of vertices - D = number of dimensions in the embedding space - - Return: - Full vertex embeddings, a tensor of shape [N, D] - """ - return normalize_embeddings(self.embeddings) - - @torch.no_grad() - def load(self, fpath: str): - """ - Load data from a file - - Args: - fpath (str): file path to load data from - """ - with PathManager.open(fpath, "rb") as hFile: - data = pickle.load(hFile) # pyre-ignore[6] - for name in ["embeddings"]: - if name in data: - getattr(self, name).copy_( - torch.tensor(data[name]).float().to(device=getattr(self, name).device) - ) diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/setup.py b/spaces/brjathu/HMR2.0/vendor/detectron2/setup.py deleted file mode 100644 index 986c20c2375bc4b08fe145dde69471a2ce702180..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/setup.py +++ /dev/null @@ -1,219 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. 
- -import glob -import os -import shutil -from os import path -from setuptools import find_packages, setup -from typing import List -import torch -from torch.utils.cpp_extension import CUDA_HOME, CppExtension, CUDAExtension - -torch_ver = [int(x) for x in torch.__version__.split(".")[:2]] -assert torch_ver >= [1, 8], "Requires PyTorch >= 1.8" - - -def get_version(): - init_py_path = path.join(path.abspath(path.dirname(__file__)), "detectron2", "__init__.py") - init_py = open(init_py_path, "r").readlines() - version_line = [l.strip() for l in init_py if l.startswith("__version__")][0] - version = version_line.split("=")[-1].strip().strip("'\"") - - # The following is used to build release packages. - # Users should never use it. - suffix = os.getenv("D2_VERSION_SUFFIX", "") - version = version + suffix - if os.getenv("BUILD_NIGHTLY", "0") == "1": - from datetime import datetime - - date_str = datetime.today().strftime("%y%m%d") - version = version + ".dev" + date_str - - new_init_py = [l for l in init_py if not l.startswith("__version__")] - new_init_py.append('__version__ = "{}"\n'.format(version)) - with open(init_py_path, "w") as f: - f.write("".join(new_init_py)) - return version - - -def get_extensions(): - this_dir = path.dirname(path.abspath(__file__)) - extensions_dir = path.join(this_dir, "detectron2", "layers", "csrc") - - main_source = path.join(extensions_dir, "vision.cpp") - sources = glob.glob(path.join(extensions_dir, "**", "*.cpp")) - - from torch.utils.cpp_extension import ROCM_HOME - - is_rocm_pytorch = ( - True if ((torch.version.hip is not None) and (ROCM_HOME is not None)) else False - ) - if is_rocm_pytorch: - assert torch_ver >= [1, 8], "ROCM support requires PyTorch >= 1.8!" - - # common code between cuda and rocm platforms, for hipify version [1,0,0] and later. - source_cuda = glob.glob(path.join(extensions_dir, "**", "*.cu")) + glob.glob( - path.join(extensions_dir, "*.cu") - ) - sources = [main_source] + sources - - extension = CppExtension - - extra_compile_args = {"cxx": []} - define_macros = [] - - if (torch.cuda.is_available() and ((CUDA_HOME is not None) or is_rocm_pytorch)) or os.getenv( - "FORCE_CUDA", "0" - ) == "1": - extension = CUDAExtension - sources += source_cuda - - if not is_rocm_pytorch: - define_macros += [("WITH_CUDA", None)] - extra_compile_args["nvcc"] = [ - "-O3", - "-DCUDA_HAS_FP16=1", - "-D__CUDA_NO_HALF_OPERATORS__", - "-D__CUDA_NO_HALF_CONVERSIONS__", - "-D__CUDA_NO_HALF2_OPERATORS__", - ] - else: - define_macros += [("WITH_HIP", None)] - extra_compile_args["nvcc"] = [] - - nvcc_flags_env = os.getenv("NVCC_FLAGS", "") - if nvcc_flags_env != "": - extra_compile_args["nvcc"].extend(nvcc_flags_env.split(" ")) - - if torch_ver < [1, 7]: - # supported by https://github.com/pytorch/pytorch/pull/43931 - CC = os.environ.get("CC", None) - if CC is not None: - extra_compile_args["nvcc"].append("-ccbin={}".format(CC)) - - include_dirs = [extensions_dir] - - ext_modules = [ - extension( - "detectron2._C", - sources, - include_dirs=include_dirs, - define_macros=define_macros, - extra_compile_args=extra_compile_args, - ) - ] - - return ext_modules - - -def get_model_zoo_configs() -> List[str]: - """ - Return a list of configs to include in package for model zoo. Copy over these configs inside - detectron2/model_zoo. - """ - - # Use absolute paths while symlinking. 
- source_configs_dir = path.join(path.dirname(path.realpath(__file__)), "configs") - destination = path.join( - path.dirname(path.realpath(__file__)), "detectron2", "model_zoo", "configs" - ) - # Symlink the config directory inside package to have a cleaner pip install. - - # Remove stale symlink/directory from a previous build. - if path.exists(source_configs_dir): - if path.islink(destination): - os.unlink(destination) - elif path.isdir(destination): - shutil.rmtree(destination) - - if not path.exists(destination): - try: - os.symlink(source_configs_dir, destination) - except OSError: - # Fall back to copying if symlink fails: ex. on Windows. - shutil.copytree(source_configs_dir, destination) - - config_paths = glob.glob("configs/**/*.yaml", recursive=True) + glob.glob( - "configs/**/*.py", recursive=True - ) - return config_paths - - -# For projects that are relative small and provide features that are very close -# to detectron2's core functionalities, we install them under detectron2.projects -PROJECTS = { - "detectron2.projects.point_rend": "projects/PointRend/point_rend", - "detectron2.projects.deeplab": "projects/DeepLab/deeplab", - "detectron2.projects.panoptic_deeplab": "projects/Panoptic-DeepLab/panoptic_deeplab", -} - -setup( - name="detectron2", - version=get_version(), - author="FAIR", - url="https://github.com/facebookresearch/detectron2", - description="Detectron2 is FAIR's next-generation research " - "platform for object detection and segmentation.", - packages=find_packages(exclude=("configs", "tests*")) + list(PROJECTS.keys()), - package_dir=PROJECTS, - package_data={"detectron2.model_zoo": get_model_zoo_configs()}, - python_requires=">=3.7", - install_requires=[ - # These dependencies are not pure-python. - # In general, avoid adding dependencies that are not pure-python because they are not - # guaranteed to be installable by `pip install` on all platforms. - "Pillow>=7.1", # or use pillow-simd for better performance - "matplotlib", # TODO move it to optional after we add opencv visualization - "pycocotools>=2.0.2", # corresponds to https://github.com/ppwwyyxx/cocoapi - # Do not add opencv here. Just like pytorch, user should install - # opencv themselves, preferrably by OS's package manager, or by - # choosing the proper pypi package name at https://github.com/skvark/opencv-python - # Also, avoid adding dependencies that transitively depend on pytorch or opencv. - # ------------------------------------------------------------ - # The following are pure-python dependencies that should be easily installable. - # But still be careful when adding more: fewer people are able to use the software - # with every new dependency added. - "termcolor>=1.1", - "yacs>=0.1.8", - "tabulate", - "cloudpickle", - "tqdm>4.29.0", - "tensorboard", - # Lock version of fvcore/iopath because they may have breaking changes - # NOTE: when updating fvcore/iopath version, make sure fvcore depends - # on compatible version of iopath. - "fvcore>=0.1.5,<0.1.6", # required like this to make it pip installable - "iopath>=0.1.7,<0.1.10", - "dataclasses; python_version<'3.7'", - "omegaconf>=2.1", - "hydra-core>=1.1", - "black", - "packaging", - # NOTE: When adding new dependencies, if it is required at import time (in addition - # to runtime), it probably needs to appear in docs/requirements.txt, or as a mock - # in docs/conf.py - ], - extras_require={ - # optional dependencies, required by some features - "all": [ - "fairscale", - "timm", # Used by a few ViT models. 
- "scipy>1.5.1", - "shapely", - "pygments>=2.2", - "psutil", - "panopticapi @ https://github.com/cocodataset/panopticapi/archive/master.zip", - ], - # dev dependencies. Install them by `pip install 'detectron2[dev]'` - "dev": [ - "flake8==3.8.1", - "isort==4.3.21", - "flake8-bugbear", - "flake8-comprehensions", - "black==22.3.0", - ], - }, - ext_modules=get_extensions(), - cmdclass={"build_ext": torch.utils.cpp_extension.BuildExtension}, -) diff --git a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/general.py b/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/general.py deleted file mode 100644 index a3e242d78a174677c3b8e62e7bb989994f01d326..0000000000000000000000000000000000000000 --- a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/general.py +++ /dev/null @@ -1,1018 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -General utils -""" - -import contextlib -import glob -import inspect -import logging -import math -import os -import platform -import random -import re -import shutil -import signal -import threading -import time -import urllib -from datetime import datetime -from itertools import repeat -from multiprocessing.pool import ThreadPool -from pathlib import Path -from subprocess import check_output -from typing import Optional -from zipfile import ZipFile - -import cv2 -import numpy as np -import pandas as pd -import pkg_resources as pkg -import torch -import torchvision -import yaml - -from utils.downloads import gsutil_getsize -from utils.metrics import box_iou, fitness - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[1] # YOLOv5 root directory -RANK = int(os.getenv('RANK', -1)) - -# Settings -DATASETS_DIR = ROOT.parent / 'datasets' # YOLOv5 datasets directory -NUM_THREADS = min(8, max(1, os.cpu_count() - 1)) # number of YOLOv5 multiprocessing threads -AUTOINSTALL = str(os.getenv('YOLOv5_AUTOINSTALL', True)).lower() == 'true' # global auto-install mode -VERBOSE = str(os.getenv('YOLOv5_VERBOSE', True)).lower() == 'true' # global verbose mode -FONT = 'Arial.ttf' # https://ultralytics.com/assets/Arial.ttf - -torch.set_printoptions(linewidth=320, precision=5, profile='long') -np.set_printoptions(linewidth=320, formatter={'float_kind': '{:11.5g}'.format}) # format short g, %precision=5 -pd.options.display.max_columns = 10 -cv2.setNumThreads(0) # prevent OpenCV from multithreading (incompatible with PyTorch DataLoader) -os.environ['NUMEXPR_MAX_THREADS'] = str(NUM_THREADS) # NumExpr max threads -os.environ['OMP_NUM_THREADS'] = str(NUM_THREADS) # OpenMP max threads (PyTorch and SciPy) - - -def is_kaggle(): - # Is environment a Kaggle Notebook? 
- try: - assert os.environ.get('PWD') == '/kaggle/working' - assert os.environ.get('KAGGLE_URL_BASE') == 'https://www.kaggle.com' - return True - except AssertionError: - return False - - -def is_writeable(dir, test=False): - # Return True if directory has write permissions, test opening a file with write permissions if test=True - if not test: - return os.access(dir, os.R_OK) # possible issues on Windows - file = Path(dir) / 'tmp.txt' - try: - with open(file, 'w'): # open file with write permissions - pass - file.unlink() # remove file - return True - except OSError: - return False - - -def set_logging(name=None, verbose=VERBOSE): - # Sets level and returns logger - if is_kaggle(): - for h in logging.root.handlers: - logging.root.removeHandler(h) # remove all handlers associated with the root logger object - rank = int(os.getenv('RANK', -1)) # rank in world for Multi-GPU trainings - level = logging.INFO if verbose and rank in {-1, 0} else logging.ERROR - log = logging.getLogger(name) - log.setLevel(level) - handler = logging.StreamHandler() - handler.setFormatter(logging.Formatter("%(message)s")) - handler.setLevel(level) - log.addHandler(handler) - - -set_logging() # run before defining LOGGER -LOGGER = logging.getLogger("yolov5") # define globally (used in train.py, val.py, detect.py, etc.) - - -def user_config_dir(dir='Ultralytics', env_var='YOLOV5_CONFIG_DIR'): - # Return path of user configuration directory. Prefer environment variable if exists. Make dir if required. - env = os.getenv(env_var) - if env: - path = Path(env) # use environment variable - else: - cfg = {'Windows': 'AppData/Roaming', 'Linux': '.config', 'Darwin': 'Library/Application Support'} # 3 OS dirs - path = Path.home() / cfg.get(platform.system(), '') # OS-specific config dir - path = (path if is_writeable(path) else Path('/tmp')) / dir # GCP and AWS lambda fix, only /tmp is writeable - path.mkdir(exist_ok=True) # make if required - return path - - -CONFIG_DIR = user_config_dir() # Ultralytics settings dir - - -class Profile(contextlib.ContextDecorator): - # Usage: @Profile() decorator or 'with Profile():' context manager - def __enter__(self): - self.start = time.time() - - def __exit__(self, type, value, traceback): - print(f'Profile results: {time.time() - self.start:.5f}s') - - -class Timeout(contextlib.ContextDecorator): - # Usage: @Timeout(seconds) decorator or 'with Timeout(seconds):' context manager - def __init__(self, seconds, *, timeout_msg='', suppress_timeout_errors=True): - self.seconds = int(seconds) - self.timeout_message = timeout_msg - self.suppress = bool(suppress_timeout_errors) - - def _timeout_handler(self, signum, frame): - raise TimeoutError(self.timeout_message) - - def __enter__(self): - if platform.system() != 'Windows': # not supported on Windows - signal.signal(signal.SIGALRM, self._timeout_handler) # Set handler for SIGALRM - signal.alarm(self.seconds) # start countdown for SIGALRM to be raised - - def __exit__(self, exc_type, exc_val, exc_tb): - if platform.system() != 'Windows': - signal.alarm(0) # Cancel SIGALRM if it's scheduled - if self.suppress and exc_type is TimeoutError: # Suppress TimeoutError - return True - - -class WorkingDirectory(contextlib.ContextDecorator): - # Usage: @WorkingDirectory(dir) decorator or 'with WorkingDirectory(dir):' context manager - def __init__(self, new_dir): - self.dir = new_dir # new dir - self.cwd = Path.cwd().resolve() # current dir - - def __enter__(self): - os.chdir(self.dir) - - def __exit__(self, exc_type, exc_val, exc_tb): - 
os.chdir(self.cwd) - - -def try_except(func): - # try-except function. Usage: @try_except decorator - def handler(*args, **kwargs): - try: - func(*args, **kwargs) - except Exception as e: - print(e) - - return handler - - -def threaded(func): - # Multi-threads a target function and returns thread. Usage: @threaded decorator - def wrapper(*args, **kwargs): - thread = threading.Thread(target=func, args=args, kwargs=kwargs, daemon=True) - thread.start() - return thread - - return wrapper - - -def methods(instance): - # Get class/instance methods - return [f for f in dir(instance) if callable(getattr(instance, f)) and not f.startswith("__")] - - -def print_args(args: Optional[dict] = None, show_file=True, show_fcn=False): - # Print function arguments (optional args dict) - x = inspect.currentframe().f_back # previous frame - file, _, fcn, _, _ = inspect.getframeinfo(x) - if args is None: # get args automatically - args, _, _, frm = inspect.getargvalues(x) - args = {k: v for k, v in frm.items() if k in args} - s = (f'{Path(file).stem}: ' if show_file else '') + (f'{fcn}: ' if show_fcn else '') - LOGGER.info(colorstr(s) + ', '.join(f'{k}={v}' for k, v in args.items())) - - -def init_seeds(seed=0): - # Initialize random number generator (RNG) seeds https://pytorch.org/docs/stable/notes/randomness.html - # cudnn seed 0 settings are slower and more reproducible, else faster and less reproducible - import torch.backends.cudnn as cudnn - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - cudnn.benchmark, cudnn.deterministic = (False, True) if seed == 0 else (True, False) - - -def intersect_dicts(da, db, exclude=()): - # Dictionary intersection of matching keys and shapes, omitting 'exclude' keys, using da values - return {k: v for k, v in da.items() if k in db and not any(x in k for x in exclude) and v.shape == db[k].shape} - - -def get_latest_run(search_dir='.'): - # Return path to most recent 'last.pt' in /runs (i.e. to --resume from) - last_list = glob.glob(f'{search_dir}/**/last*.pt', recursive=True) - return max(last_list, key=os.path.getctime) if last_list else '' - - -def is_docker(): - # Is environment a Docker container? - return Path('/workspace').exists() # or Path('/.dockerenv').exists() - - -def is_colab(): - # Is environment a Google Colab instance? - try: - import google.colab - return True - except ImportError: - return False - - -def is_pip(): - # Is file in a pip package? - return 'site-packages' in Path(__file__).resolve().parts - - -def is_ascii(s=''): - # Is string composed of all ASCII (no UTF) characters? (note str().isascii() introduced in python 3.7) - s = str(s) # convert list, tuple, None, etc. to str - return len(s.encode().decode('ascii', 'ignore')) == len(s) - - -def is_chinese(s='人工智能'): - # Is string composed of any Chinese characters? - return bool(re.search('[\u4e00-\u9fff]', str(s))) - - -def emojis(str=''): - # Return platform-dependent emoji-safe version of string - return str.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else str - - -def file_age(path=__file__): - # Return days since last file update - dt = (datetime.now() - datetime.fromtimestamp(Path(path).stat().st_mtime)) # delta - return dt.days # + dt.seconds / 86400 # fractional days - - -def file_date(path=__file__): - # Return human-readable file modification date, i.e. 
'2021-3-26' - t = datetime.fromtimestamp(Path(path).stat().st_mtime) - return f'{t.year}-{t.month}-{t.day}' - - -def file_size(path): - # Return file/dir size (MB) - mb = 1 << 20 # bytes to MiB (1024 ** 2) - path = Path(path) - if path.is_file(): - return path.stat().st_size / mb - elif path.is_dir(): - return sum(f.stat().st_size for f in path.glob('**/*') if f.is_file()) / mb - else: - return 0.0 - - -def check_online(): - # Check internet connectivity - import socket - try: - socket.create_connection(("1.1.1.1", 443), 5) # check host accessibility - return True - except OSError: - return False - - -def git_describe(path=ROOT): # path must be a directory - # Return human-readable git description, i.e. v5.0-5-g3e25f1e https://git-scm.com/docs/git-describe - try: - assert (Path(path) / '.git').is_dir() - return check_output(f'git -C {path} describe --tags --long --always', shell=True).decode()[:-1] - except Exception: - return '' - - -@try_except -@WorkingDirectory(ROOT) -def check_git_status(): - # Recommend 'git pull' if code is out of date - msg = ', for updates see https://github.com/ultralytics/yolov5' - s = colorstr('github: ') # string - assert Path('.git').exists(), s + 'skipping check (not a git repository)' + msg - assert not is_docker(), s + 'skipping check (Docker image)' + msg - assert check_online(), s + 'skipping check (offline)' + msg - - cmd = 'git fetch && git config --get remote.origin.url' - url = check_output(cmd, shell=True, timeout=5).decode().strip().rstrip('.git') # git fetch - branch = check_output('git rev-parse --abbrev-ref HEAD', shell=True).decode().strip() # checked out - n = int(check_output(f'git rev-list {branch}..origin/master --count', shell=True)) # commits behind - if n > 0: - s += f"⚠️ YOLOv5 is out of date by {n} commit{'s' * (n > 1)}. Use `git pull` or `git clone {url}` to update." - else: - s += f'up to date with {url} ✅' - LOGGER.info(emojis(s)) # emoji-safe - - -def check_python(minimum='3.7.0'): - # Check current python version vs. required python version - check_version(platform.python_version(), minimum, name='Python ', hard=True) - - -def check_version(current='0.0.0', minimum='0.0.0', name='version ', pinned=False, hard=False, verbose=False): - # Check version vs. required version - current, minimum = (pkg.parse_version(x) for x in (current, minimum)) - result = (current == minimum) if pinned else (current >= minimum) # bool - s = f'{name}{minimum} required by YOLOv5, but {name}{current} is currently installed' # string - if hard: - assert result, s # assert min requirements met - if verbose and not result: - LOGGER.warning(s) - return result - - -@try_except -def check_requirements(requirements=ROOT / 'requirements.txt', exclude=(), install=True, cmds=()): - # Check installed dependencies meet requirements (pass *.txt file or list of packages) - prefix = colorstr('red', 'bold', 'requirements:') - check_python() # check python version - if isinstance(requirements, (str, Path)): # requirements.txt file - file = Path(requirements) - assert file.exists(), f"{prefix} {file.resolve()} not found, check failed." 
- with file.open() as f: - requirements = [f'{x.name}{x.specifier}' for x in pkg.parse_requirements(f) if x.name not in exclude] - else: # list or tuple of packages - requirements = [x for x in requirements if x not in exclude] - - n = 0 # number of packages updates - for i, r in enumerate(requirements): - try: - pkg.require(r) - except Exception: # DistributionNotFound or VersionConflict if requirements not met - s = f"{prefix} {r} not found and is required by YOLOv5" - if install and AUTOINSTALL: # check environment variable - LOGGER.info(f"{s}, attempting auto-update...") - try: - assert check_online(), f"'pip install {r}' skipped (offline)" - LOGGER.info(check_output(f'pip install "{r}" {cmds[i] if cmds else ""}', shell=True).decode()) - n += 1 - except Exception as e: - LOGGER.warning(f'{prefix} {e}') - else: - LOGGER.info(f'{s}. Please install and rerun your command.') - - if n: # if packages updated - source = file.resolve() if 'file' in locals() else requirements - s = f"{prefix} {n} package{'s' * (n > 1)} updated per {source}\n" \ - f"{prefix} ⚠️ {colorstr('bold', 'Restart runtime or rerun command for updates to take effect')}\n" - LOGGER.info(emojis(s)) - - -def check_img_size(imgsz, s=32, floor=0): - # Verify image size is a multiple of stride s in each dimension - if isinstance(imgsz, int): # integer i.e. img_size=640 - new_size = max(make_divisible(imgsz, int(s)), floor) - else: # list i.e. img_size=[640, 480] - imgsz = list(imgsz) # convert to list if tuple - new_size = [max(make_divisible(x, int(s)), floor) for x in imgsz] - if new_size != imgsz: - LOGGER.warning(f'WARNING: --img-size {imgsz} must be multiple of max stride {s}, updating to {new_size}') - return new_size - - -def check_imshow(): - # Check if environment supports image displays - try: - assert not is_docker(), 'cv2.imshow() is disabled in Docker environments' - assert not is_colab(), 'cv2.imshow() is disabled in Google Colab environments' - cv2.imshow('test', np.zeros((1, 1, 3))) - cv2.waitKey(1) - cv2.destroyAllWindows() - cv2.waitKey(1) - return True - except Exception as e: - LOGGER.warning(f'WARNING: Environment does not support cv2.imshow() or PIL Image.show() image displays\n{e}') - return False - - -def check_suffix(file='yolov5s.pt', suffix=('.pt',), msg=''): - # Check file(s) for acceptable suffix - if file and suffix: - if isinstance(suffix, str): - suffix = [suffix] - for f in file if isinstance(file, (list, tuple)) else [file]: - s = Path(f).suffix.lower() # file suffix - if len(s): - assert s in suffix, f"{msg}{f} acceptable suffix is {suffix}" - - -def check_yaml(file, suffix=('.yaml', '.yml')): - # Search/download YAML file (if necessary) and return path, checking suffix - return check_file(file, suffix) - - -def check_file(file, suffix=''): - # Search/download file (if necessary) and return path - check_suffix(file, suffix) # optional - file = str(file) # convert to str() - if Path(file).is_file() or not file: # exists - return file - elif file.startswith(('http:/', 'https:/')): # download - url = file # warning: Pathlib turns :// -> :/ - file = Path(urllib.parse.unquote(file).split('?')[0]).name # '%2F' to '/', split https://url.com/file.txt?auth - if Path(file).is_file(): - LOGGER.info(f'Found {url} locally at {file}') # file already exists - else: - LOGGER.info(f'Downloading {url} to {file}...') - torch.hub.download_url_to_file(url, file) - assert Path(file).exists() and Path(file).stat().st_size > 0, f'File download failed: {url}' # check - return file - else: # search - files = [] - for d 
in 'data', 'models', 'utils': # search directories - files.extend(glob.glob(str(ROOT / d / '**' / file), recursive=True)) # find file - assert len(files), f'File not found: {file}' # assert file was found - assert len(files) == 1, f"Multiple files match '{file}', specify exact path: {files}" # assert unique - return files[0] # return file - - -def check_font(font=FONT, progress=False): - # Download font to CONFIG_DIR if necessary - font = Path(font) - file = CONFIG_DIR / font.name - if not font.exists() and not file.exists(): - url = "https://ultralytics.com/assets/" + font.name - LOGGER.info(f'Downloading {url} to {file}...') - torch.hub.download_url_to_file(url, str(file), progress=progress) - - -def check_dataset(data, autodownload=True): - # Download, check and/or unzip dataset if not found locally - - # Download (optional) - extract_dir = '' - if isinstance(data, (str, Path)) and str(data).endswith('.zip'): # i.e. gs://bucket/dir/coco128.zip - download(data, dir=DATASETS_DIR, unzip=True, delete=False, curl=False, threads=1) - data = next((DATASETS_DIR / Path(data).stem).rglob('*.yaml')) - extract_dir, autodownload = data.parent, False - - # Read yaml (optional) - if isinstance(data, (str, Path)): - with open(data, errors='ignore') as f: - data = yaml.safe_load(f) # dictionary - - # Checks - for k in 'train', 'val', 'nc': - assert k in data, emojis(f"data.yaml '{k}:' field missing ❌") - if 'names' not in data: - LOGGER.warning(emojis("data.yaml 'names:' field missing ⚠, assigning default names 'class0', 'class1', etc.")) - data['names'] = [f'class{i}' for i in range(data['nc'])] # default names - - # Resolve paths - path = Path(extract_dir or data.get('path') or '') # optional 'path' default to '.' - if not path.is_absolute(): - path = (ROOT / path).resolve() - for k in 'train', 'val', 'test': - if data.get(k): # prepend path - data[k] = str(path / data[k]) if isinstance(data[k], str) else [str(path / x) for x in data[k]] - - # Parse yaml - train, val, test, s = (data.get(x) for x in ('train', 'val', 'test', 'download')) - if val: - val = [Path(x).resolve() for x in (val if isinstance(val, list) else [val])] # val path - if not all(x.exists() for x in val): - LOGGER.info(emojis('\nDataset not found ⚠, missing paths %s' % [str(x) for x in val if not x.exists()])) - if not s or not autodownload: - raise Exception(emojis('Dataset not found ❌')) - t = time.time() - root = path.parent if 'path' in data else '..' # unzip directory i.e. '../' - if s.startswith('http') and s.endswith('.zip'): # URL - f = Path(s).name # filename - LOGGER.info(f'Downloading {s} to {f}...') - torch.hub.download_url_to_file(s, f) - Path(root).mkdir(parents=True, exist_ok=True) # create root - ZipFile(f).extractall(path=root) # unzip - Path(f).unlink() # remove zip - r = None # success - elif s.startswith('bash '): # bash script - LOGGER.info(f'Running {s} ...') - r = os.system(s) - else: # python script - r = exec(s, {'yaml': data}) # return None - dt = f'({round(time.time() - t, 1)}s)' - s = f"success ✅ {dt}, saved to {colorstr('bold', root)}" if r in (0, None) else f"failure {dt} ❌" - LOGGER.info(emojis(f"Dataset download {s}")) - check_font('Arial.ttf' if is_ascii(data['names']) else 'Arial.Unicode.ttf', progress=True) # download fonts - return data # dictionary - - -def check_amp(model): - # Check PyTorch Automatic Mixed Precision (AMP) functionality. 
Return True on correct operation - from models.common import AutoShape, DetectMultiBackend - - def amp_allclose(model, im): - # All close FP32 vs AMP results - m = AutoShape(model, verbose=False) # model - a = m(im).xywhn[0] # FP32 inference - m.amp = True - b = m(im).xywhn[0] # AMP inference - return a.shape == b.shape and torch.allclose(a, b, atol=0.1) # close to 10% absolute tolerance - - prefix = colorstr('AMP: ') - device = next(model.parameters()).device # get model device - if device.type == 'cpu': - return False # AMP disabled on CPU - f = ROOT / 'data' / 'images' / 'bus.jpg' # image to check - im = f if f.exists() else 'https://ultralytics.com/images/bus.jpg' if check_online() else np.ones((640, 640, 3)) - try: - assert amp_allclose(model, im) or amp_allclose(DetectMultiBackend('yolov5n.pt', device), im) - LOGGER.info(emojis(f'{prefix}checks passed ✅')) - return True - except Exception: - help_url = 'https://github.com/ultralytics/yolov5/issues/7908' - LOGGER.warning(emojis(f'{prefix}checks failed ❌, disabling Automatic Mixed Precision. See {help_url}')) - return False - - -def url2file(url): - # Convert URL to filename, i.e. https://url.com/file.txt?auth -> file.txt - url = str(Path(url)).replace(':/', '://') # Pathlib turns :// -> :/ - return Path(urllib.parse.unquote(url)).name.split('?')[0] # '%2F' to '/', split https://url.com/file.txt?auth - - -def download(url, dir='.', unzip=True, delete=True, curl=False, threads=1, retry=3): - # Multi-threaded file download and unzip function, used in data.yaml for autodownload - def download_one(url, dir): - # Download 1 file - success = True - f = dir / Path(url).name # filename - if Path(url).is_file(): # exists in current path - Path(url).rename(f) # move to dir - elif not f.exists(): - LOGGER.info(f'Downloading {url} to {f}...') - for i in range(retry + 1): - if curl: - s = 'sS' if threads > 1 else '' # silent - r = os.system(f'curl -{s}L "{url}" -o "{f}" --retry 9 -C -') # curl download with retry, continue - success = r == 0 - else: - torch.hub.download_url_to_file(url, f, progress=threads == 1) # torch download - success = f.is_file() - if success: - break - elif i < retry: - LOGGER.warning(f'Download failure, retrying {i + 1}/{retry} {url}...') - else: - LOGGER.warning(f'Failed to download {url}...') - - if unzip and success and f.suffix in ('.zip', '.gz'): - LOGGER.info(f'Unzipping {f}...') - if f.suffix == '.zip': - ZipFile(f).extractall(path=dir) # unzip - elif f.suffix == '.gz': - os.system(f'tar xfz {f} --directory {f.parent}') # unzip - if delete: - f.unlink() # remove zip - - dir = Path(dir) - dir.mkdir(parents=True, exist_ok=True) # make directory - if threads > 1: - pool = ThreadPool(threads) - pool.imap(lambda x: download_one(*x), zip(url, repeat(dir))) # multi-threaded - pool.close() - pool.join() - else: - for u in [url] if isinstance(url, (str, Path)) else url: - download_one(u, dir) - - -def make_divisible(x, divisor): - # Returns nearest x divisible by divisor - if isinstance(divisor, torch.Tensor): - divisor = int(divisor.max()) # to int - return math.ceil(x / divisor) * divisor - - -def clean_str(s): - # Cleans a string by replacing special characters with underscore _ - return re.sub(pattern="[|@#!¡·$€%&()=?¿^*;:,¨´><+]", repl="_", string=s) - - -def one_cycle(y1=0.0, y2=1.0, steps=100): - # lambda function for sinusoidal ramp from y1 to y2 https://arxiv.org/pdf/1812.01187.pdf - return lambda x: ((1 - math.cos(x * math.pi / steps)) / 2) * (y2 - y1) + y1 - - -def colorstr(*input): - # Colors a string 
https://en.wikipedia.org/wiki/ANSI_escape_code, i.e. colorstr('blue', 'hello world') - *args, string = input if len(input) > 1 else ('blue', 'bold', input[0]) # color arguments, string - colors = { - 'black': '\033[30m', # basic colors - 'red': '\033[31m', - 'green': '\033[32m', - 'yellow': '\033[33m', - 'blue': '\033[34m', - 'magenta': '\033[35m', - 'cyan': '\033[36m', - 'white': '\033[37m', - 'bright_black': '\033[90m', # bright colors - 'bright_red': '\033[91m', - 'bright_green': '\033[92m', - 'bright_yellow': '\033[93m', - 'bright_blue': '\033[94m', - 'bright_magenta': '\033[95m', - 'bright_cyan': '\033[96m', - 'bright_white': '\033[97m', - 'end': '\033[0m', # misc - 'bold': '\033[1m', - 'underline': '\033[4m'} - return ''.join(colors[x] for x in args) + f'{string}' + colors['end'] - - -def labels_to_class_weights(labels, nc=80): - # Get class weights (inverse frequency) from training labels - if labels[0] is None: # no labels loaded - return torch.Tensor() - - labels = np.concatenate(labels, 0) # labels.shape = (866643, 5) for COCO - classes = labels[:, 0].astype(np.int) # labels = [class xywh] - weights = np.bincount(classes, minlength=nc) # occurrences per class - - # Prepend gridpoint count (for uCE training) - # gpi = ((320 / 32 * np.array([1, 2, 4])) ** 2 * 3).sum() # gridpoints per image - # weights = np.hstack([gpi * len(labels) - weights.sum() * 9, weights * 9]) ** 0.5 # prepend gridpoints to start - - weights[weights == 0] = 1 # replace empty bins with 1 - weights = 1 / weights # number of targets per class - weights /= weights.sum() # normalize - return torch.from_numpy(weights) - - -def labels_to_image_weights(labels, nc=80, class_weights=np.ones(80)): - # Produces image weights based on class_weights and image contents - # Usage: index = random.choices(range(n), weights=image_weights, k=1) # weighted image sample - class_counts = np.array([np.bincount(x[:, 0].astype(np.int), minlength=nc) for x in labels]) - return (class_weights.reshape(1, nc) * class_counts).sum(1) - - -def coco80_to_coco91_class(): # converts 80-index (val2014) to 91-index (paper) - # https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/ - # a = np.loadtxt('data/coco.names', dtype='str', delimiter='\n') - # b = np.loadtxt('data/coco_paper.names', dtype='str', delimiter='\n') - # x1 = [list(a[i] == b).index(True) + 1 for i in range(80)] # darknet to coco - # x2 = [list(b[i] == a).index(True) if any(b[i] == a) else None for i in range(91)] # coco to darknet - return [ - 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 28, 31, 32, 33, 34, - 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, - 64, 65, 67, 70, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90] - - -def xyxy2xywh(x): - # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] where xy1=top-left, xy2=bottom-right - y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[:, 0] = (x[:, 0] + x[:, 2]) / 2 # x center - y[:, 1] = (x[:, 1] + x[:, 3]) / 2 # y center - y[:, 2] = x[:, 2] - x[:, 0] # width - y[:, 3] = x[:, 3] - x[:, 1] # height - return y - - -def xywh2xyxy(x): - # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right - y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[:, 0] = x[:, 0] - x[:, 2] / 2 # top left x - y[:, 1] = x[:, 1] - x[:, 3] / 2 # top left y - y[:, 2] = x[:, 0] + x[:, 2] / 2 # bottom right x - y[:, 3] = 
x[:, 1] + x[:, 3] / 2 # bottom right y - return y - - -def xywhn2xyxy(x, w=640, h=640, padw=0, padh=0): - # Convert nx4 boxes from [x, y, w, h] normalized to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right - y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[:, 0] = w * (x[:, 0] - x[:, 2] / 2) + padw # top left x - y[:, 1] = h * (x[:, 1] - x[:, 3] / 2) + padh # top left y - y[:, 2] = w * (x[:, 0] + x[:, 2] / 2) + padw # bottom right x - y[:, 3] = h * (x[:, 1] + x[:, 3] / 2) + padh # bottom right y - return y - - -def xyxy2xywhn(x, w=640, h=640, clip=False, eps=0.0): - # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] normalized where xy1=top-left, xy2=bottom-right - if clip: - clip_coords(x, (h - eps, w - eps)) # warning: inplace clip - y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[:, 0] = ((x[:, 0] + x[:, 2]) / 2) / w # x center - y[:, 1] = ((x[:, 1] + x[:, 3]) / 2) / h # y center - y[:, 2] = (x[:, 2] - x[:, 0]) / w # width - y[:, 3] = (x[:, 3] - x[:, 1]) / h # height - return y - - -def xyn2xy(x, w=640, h=640, padw=0, padh=0): - # Convert normalized segments into pixel segments, shape (n,2) - y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[:, 0] = w * x[:, 0] + padw # top left x - y[:, 1] = h * x[:, 1] + padh # top left y - return y - - -def segment2box(segment, width=640, height=640): - # Convert 1 segment label to 1 box label, applying inside-image constraint, i.e. (xy1, xy2, ...) to (xyxy) - x, y = segment.T # segment xy - inside = (x >= 0) & (y >= 0) & (x <= width) & (y <= height) - x, y, = x[inside], y[inside] - return np.array([x.min(), y.min(), x.max(), y.max()]) if any(x) else np.zeros((1, 4)) # xyxy - - -def segments2boxes(segments): - # Convert segment labels to box labels, i.e. (cls, xy1, xy2, ...) 
to (cls, xywh) - boxes = [] - for s in segments: - x, y = s.T # segment xy - boxes.append([x.min(), y.min(), x.max(), y.max()]) # cls, xyxy - return xyxy2xywh(np.array(boxes)) # cls, xywh - - -def resample_segments(segments, n=1000): - # Up-sample an (n,2) segment - for i, s in enumerate(segments): - s = np.concatenate((s, s[0:1, :]), axis=0) - x = np.linspace(0, len(s) - 1, n) - xp = np.arange(len(s)) - segments[i] = np.concatenate([np.interp(x, xp, s[:, i]) for i in range(2)]).reshape(2, -1).T # segment xy - return segments - - -def scale_coords(img1_shape, coords, img0_shape, ratio_pad=None): - # Rescale coords (xyxy) from img1_shape to img0_shape - if ratio_pad is None: # calculate from img0_shape - gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1]) # gain = old / new - pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2 # wh padding - else: - gain = ratio_pad[0][0] - pad = ratio_pad[1] - - coords[:, [0, 2]] -= pad[0] # x padding - coords[:, [1, 3]] -= pad[1] # y padding - coords[:, :4] /= gain - clip_coords(coords, img0_shape) - return coords - - -def clip_coords(boxes, shape): - # Clip bounding xyxy bounding boxes to image shape (height, width) - if isinstance(boxes, torch.Tensor): # faster individually - boxes[:, 0].clamp_(0, shape[1]) # x1 - boxes[:, 1].clamp_(0, shape[0]) # y1 - boxes[:, 2].clamp_(0, shape[1]) # x2 - boxes[:, 3].clamp_(0, shape[0]) # y2 - else: # np.array (faster grouped) - boxes[:, [0, 2]] = boxes[:, [0, 2]].clip(0, shape[1]) # x1, x2 - boxes[:, [1, 3]] = boxes[:, [1, 3]].clip(0, shape[0]) # y1, y2 - - -def non_max_suppression(prediction, - conf_thres=0.25, - iou_thres=0.45, - classes=None, - agnostic=False, - multi_label=False, - labels=(), - max_det=300): - """Non-Maximum Suppression (NMS) on inference results to reject overlapping bounding boxes - - Returns: - list of detections, on (n,6) tensor per image [xyxy, conf, cls] - """ - - bs = prediction.shape[0] # batch size - nc = prediction.shape[2] - 5 # number of classes - xc = prediction[..., 4] > conf_thres # candidates - - # Checks - assert 0 <= conf_thres <= 1, f'Invalid Confidence threshold {conf_thres}, valid values are between 0.0 and 1.0' - assert 0 <= iou_thres <= 1, f'Invalid IoU {iou_thres}, valid values are between 0.0 and 1.0' - - # Settings - # min_wh = 2 # (pixels) minimum box width and height - max_wh = 7680 # (pixels) maximum box width and height - max_nms = 30000 # maximum number of boxes into torchvision.ops.nms() - time_limit = 0.3 + 0.03 * bs # seconds to quit after - redundant = True # require redundant detections - multi_label &= nc > 1 # multiple labels per box (adds 0.5ms/img) - merge = False # use merge-NMS - - t = time.time() - output = [torch.zeros((0, 6), device=prediction.device)] * bs - for xi, x in enumerate(prediction): # image index, image inference - # Apply constraints - # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height - x = x[xc[xi]] # confidence - - # Cat apriori labels if autolabelling - if labels and len(labels[xi]): - lb = labels[xi] - v = torch.zeros((len(lb), nc + 5), device=x.device) - v[:, :4] = lb[:, 1:5] # box - v[:, 4] = 1.0 # conf - v[range(len(lb)), lb[:, 0].long() + 5] = 1.0 # cls - x = torch.cat((x, v), 0) - - # If none remain process next image - if not x.shape[0]: - continue - - # Compute conf - x[:, 5:] *= x[:, 4:5] # conf = obj_conf * cls_conf - - # Box (center x, center y, width, height) to (x1, y1, x2, y2) - box = xywh2xyxy(x[:, :4]) - - # Detections matrix 
nx6 (xyxy, conf, cls) - if multi_label: - i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T - x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1) - else: # best class only - conf, j = x[:, 5:].max(1, keepdim=True) - x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres] - - # Filter by class - if classes is not None: - x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)] - - # Apply finite constraint - # if not torch.isfinite(x).all(): - # x = x[torch.isfinite(x).all(1)] - - # Check shape - n = x.shape[0] # number of boxes - if not n: # no boxes - continue - elif n > max_nms: # excess boxes - x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence - - # Batched NMS - c = x[:, 5:6] * (0 if agnostic else max_wh) # classes - boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores - i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS - if i.shape[0] > max_det: # limit detections - i = i[:max_det] - if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean) - # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4) - iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix - weights = iou * scores[None] # box weights - x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes - if redundant: - i = i[iou.sum(1) > 1] # require redundancy - - output[xi] = x[i] - if (time.time() - t) > time_limit: - LOGGER.warning(f'WARNING: NMS time limit {time_limit:.3f}s exceeded') - break # time limit exceeded - - return output - - -def strip_optimizer(f='best.pt', s=''): # from utils.general import *; strip_optimizer() - # Strip optimizer from 'f' to finalize training, optionally save as 's' - x = torch.load(f, map_location=torch.device('cpu')) - if x.get('ema'): - x['model'] = x['ema'] # replace model with ema - for k in 'optimizer', 'best_fitness', 'wandb_id', 'ema', 'updates': # keys - x[k] = None - x['epoch'] = -1 - x['model'].half() # to FP16 - for p in x['model'].parameters(): - p.requires_grad = False - torch.save(x, s or f) - mb = os.path.getsize(s or f) / 1E6 # filesize - LOGGER.info(f"Optimizer stripped from {f},{f' saved as {s},' if s else ''} {mb:.1f}MB") - - -def print_mutation(results, hyp, save_dir, bucket, prefix=colorstr('evolve: ')): - evolve_csv = save_dir / 'evolve.csv' - evolve_yaml = save_dir / 'hyp_evolve.yaml' - keys = ('metrics/precision', 'metrics/recall', 'metrics/mAP_0.5', 'metrics/mAP_0.5:0.95', 'val/box_loss', - 'val/obj_loss', 'val/cls_loss') + tuple(hyp.keys()) # [results + hyps] - keys = tuple(x.strip() for x in keys) - vals = results + tuple(hyp.values()) - n = len(keys) - - # Download (optional) - if bucket: - url = f'gs://{bucket}/evolve.csv' - if gsutil_getsize(url) > (evolve_csv.stat().st_size if evolve_csv.exists() else 0): - os.system(f'gsutil cp {url} {save_dir}') # download evolve.csv if larger than local - - # Log to evolve.csv - s = '' if evolve_csv.exists() else (('%20s,' * n % keys).rstrip(',') + '\n') # add header - with open(evolve_csv, 'a') as f: - f.write(s + ('%20.5g,' * n % vals).rstrip(',') + '\n') - - # Save yaml - with open(evolve_yaml, 'w') as f: - data = pd.read_csv(evolve_csv) - data = data.rename(columns=lambda x: x.strip()) # strip keys - i = np.argmax(fitness(data.values[:, :4])) # - generations = len(data) - f.write('# YOLOv5 Hyperparameter Evolution Results\n' + f'# Best generation: {i}\n' + - f'# Last generation: {generations - 1}\n' + '# ' + ', '.join(f'{x.strip():>20s}' for x in keys[:7]) + - '\n' + '# ' + ', 
'.join(f'{x:>20.5g}' for x in data.values[i, :7]) + '\n\n') - yaml.safe_dump(data.loc[i][7:].to_dict(), f, sort_keys=False) - - # Print to screen - LOGGER.info(prefix + f'{generations} generations finished, current result:\n' + prefix + - ', '.join(f'{x.strip():>20s}' for x in keys) + '\n' + prefix + ', '.join(f'{x:20.5g}' - for x in vals) + '\n\n') - - if bucket: - os.system(f'gsutil cp {evolve_csv} {evolve_yaml} gs://{bucket}') # upload - - -def apply_classifier(x, model, img, im0): - # Apply a second stage classifier to YOLO outputs - # Example model = torchvision.models.__dict__['efficientnet_b0'](pretrained=True).to(device).eval() - im0 = [im0] if isinstance(im0, np.ndarray) else im0 - for i, d in enumerate(x): # per image - if d is not None and len(d): - d = d.clone() - - # Reshape and pad cutouts - b = xyxy2xywh(d[:, :4]) # boxes - b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1) # rectangle to square - b[:, 2:] = b[:, 2:] * 1.3 + 30 # pad - d[:, :4] = xywh2xyxy(b).long() - - # Rescale boxes from img_size to im0 size - scale_coords(img.shape[2:], d[:, :4], im0[i].shape) - - # Classes - pred_cls1 = d[:, 5].long() - ims = [] - for a in d: - cutout = im0[i][int(a[1]):int(a[3]), int(a[0]):int(a[2])] - im = cv2.resize(cutout, (224, 224)) # BGR - - im = im[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416 - im = np.ascontiguousarray(im, dtype=np.float32) # uint8 to float32 - im /= 255 # 0 - 255 to 0.0 - 1.0 - ims.append(im) - - pred_cls2 = model(torch.Tensor(ims).to(d.device)).argmax(1) # classifier prediction - x[i] = x[i][pred_cls1 == pred_cls2] # retain matching class detections - - return x - - -def increment_path(path, exist_ok=False, sep='', mkdir=False): - # Increment file or directory path, i.e. runs/exp --> runs/exp{sep}2, runs/exp{sep}3, ... etc. 
- path = Path(path) # os-agnostic - if path.exists() and not exist_ok: - path, suffix = (path.with_suffix(''), path.suffix) if path.is_file() else (path, '') - - # Method 1 - for n in range(2, 9999): - p = f'{path}{sep}{n}{suffix}' # increment path - if not os.path.exists(p): # - break - path = Path(p) - - # Method 2 (deprecated) - # dirs = glob.glob(f"{path}{sep}*") # similar paths - # matches = [re.search(rf"{path.stem}{sep}(\d+)", d) for d in dirs] - # i = [int(m.groups()[0]) for m in matches if m] # indices - # n = max(i) + 1 if i else 2 # increment number - # path = Path(f"{path}{sep}{n}{suffix}") # increment path - - if mkdir: - path.mkdir(parents=True, exist_ok=True) # make directory - - return path - - -# OpenCV Chinese-friendly functions ------------------------------------------------------------------------------------ -imshow_ = cv2.imshow # copy to avoid recursion errors - - -def imread(path, flags=cv2.IMREAD_COLOR): - return cv2.imdecode(np.fromfile(path, np.uint8), flags) - - -def imwrite(path, im): - try: - cv2.imencode(Path(path).suffix, im)[1].tofile(path) - return True - except Exception: - return False - - -def imshow(path, im): - imshow_(path.encode('unicode_escape').decode(), im) - - -cv2.imread, cv2.imwrite, cv2.imshow = imread, imwrite, imshow # redefine - -# Variables ------------------------------------------------------------------------------------------------------------ -NCOLS = 0 if is_docker() else shutil.get_terminal_size().columns # terminal window size for tqdm diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/Hdf5StubImagePlugin.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/Hdf5StubImagePlugin.py deleted file mode 100644 index bba05ed65a72c6b859f1722cefd0c75a59c43a37..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/Hdf5StubImagePlugin.py +++ /dev/null @@ -1,73 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# HDF5 stub adapter -# -# Copyright (c) 2000-2003 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -from . import Image, ImageFile - -_handler = None - - -def register_handler(handler): - """ - Install application-specific HDF5 image handler. - - :param handler: Handler object. 
- """ - global _handler - _handler = handler - - -# -------------------------------------------------------------------- -# Image adapter - - -def _accept(prefix): - return prefix[:8] == b"\x89HDF\r\n\x1a\n" - - -class HDF5StubImageFile(ImageFile.StubImageFile): - format = "HDF5" - format_description = "HDF5" - - def _open(self): - offset = self.fp.tell() - - if not _accept(self.fp.read(8)): - msg = "Not an HDF file" - raise SyntaxError(msg) - - self.fp.seek(offset) - - # make something up - self.mode = "F" - self._size = 1, 1 - - loader = self._load() - if loader: - loader.open(self) - - def _load(self): - return _handler - - -def _save(im, fp, filename): - if _handler is None or not hasattr(_handler, "save"): - msg = "HDF5 save handler not installed" - raise OSError(msg) - _handler.save(im, fp, filename) - - -# -------------------------------------------------------------------- -# Registry - -Image.register_open(HDF5StubImageFile.format, HDF5StubImageFile, _accept) -Image.register_save(HDF5StubImageFile.format, _save) - -Image.register_extensions(HDF5StubImageFile.format, [".h5", ".hdf"]) diff --git a/spaces/canaxx/donut-mrz/donut/__init__.py b/spaces/canaxx/donut-mrz/donut/__init__.py deleted file mode 100644 index 513eb6ad0e23b9eed71304ff44aeb3b4dd427784..0000000000000000000000000000000000000000 --- a/spaces/canaxx/donut-mrz/donut/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -""" -Donut -Copyright (c) 2022-present NAVER Corp. -MIT License -""" -from .model import DonutConfig, DonutModel -from .util import DonutDataset, JSONParseEvaluator, load_json, save_json - -__all__ = [ - "DonutConfig", - "DonutModel", - "DonutDataset", - "JSONParseEvaluator", - "load_json", - "save_json", -] diff --git a/spaces/caojiachen1/ChatGPT/crazy_functions/test_project/cpp/libJPG/jpge.h b/spaces/caojiachen1/ChatGPT/crazy_functions/test_project/cpp/libJPG/jpge.h deleted file mode 100644 index a46c805ab80aab491f7f9508b3a008b149866bee..0000000000000000000000000000000000000000 --- a/spaces/caojiachen1/ChatGPT/crazy_functions/test_project/cpp/libJPG/jpge.h +++ /dev/null @@ -1,172 +0,0 @@ - -// jpge.h - C++ class for JPEG compression. -// Public domain, Rich Geldreich -// Alex Evans: Added RGBA support, linear memory allocator. -#ifndef JPEG_ENCODER_H -#define JPEG_ENCODER_H - -#include - -namespace jpge -{ - typedef unsigned char uint8; - typedef signed short int16; - typedef signed int int32; - typedef unsigned short uint16; - typedef unsigned int uint32; - typedef unsigned int uint; - - // JPEG chroma subsampling factors. Y_ONLY (grayscale images) and H2V2 (color images) are the most common. - enum subsampling_t { Y_ONLY = 0, H1V1 = 1, H2V1 = 2, H2V2 = 3 }; - - // JPEG compression parameters structure. - struct params - { - inline params() : m_quality(85), m_subsampling(H2V2), m_no_chroma_discrim_flag(false), m_two_pass_flag(false) { } - - inline bool check_valid() const - { - if ((m_quality < 1) || (m_quality > 100)) return false; - if ((uint)m_subsampling > (uint)H2V2) return false; - return true; - } - - // Quality: 1-100, higher is better. Typical values are around 50-95. - int m_quality; - - // m_subsampling: - // 0 = Y (grayscale) only - // 1 = YCbCr, no subsampling (H1V1, YCbCr 1x1x1, 3 blocks per MCU) - // 2 = YCbCr, H2V1 subsampling (YCbCr 2x1x1, 4 blocks per MCU) - // 3 = YCbCr, H2V2 subsampling (YCbCr 4x1x1, 6 blocks per MCU-- very common) - subsampling_t m_subsampling; - - // Disables CbCr discrimination - only intended for testing. 
- // If true, the Y quantization table is also used for the CbCr channels. - bool m_no_chroma_discrim_flag; - - bool m_two_pass_flag; - }; - - // Writes JPEG image to a file. - // num_channels must be 1 (Y) or 3 (RGB), image pitch must be width*num_channels. - bool compress_image_to_jpeg_file(const char *pFilename, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params()); - - // Writes JPEG image to memory buffer. - // On entry, buf_size is the size of the output buffer pointed at by pBuf, which should be at least ~1024 bytes. - // If return value is true, buf_size will be set to the size of the compressed data. - bool compress_image_to_jpeg_file_in_memory(void *pBuf, int64_t &buf_size, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params()); - - // Output stream abstract class - used by the jpeg_encoder class to write to the output stream. - // put_buf() is generally called with len==JPGE_OUT_BUF_SIZE bytes, but for headers it'll be called with smaller amounts. - class output_stream - { - public: - virtual ~output_stream() { }; - virtual bool put_buf(const void* Pbuf, int64_t len) = 0; - template inline bool put_obj(const T& obj) { return put_buf(&obj, sizeof(T)); } - }; - - // Lower level jpeg_encoder class - useful if more control is needed than the above helper functions. - class jpeg_encoder - { - public: - jpeg_encoder(); - ~jpeg_encoder(); - - // Initializes the compressor. - // pStream: The stream object to use for writing compressed data. - // params - Compression parameters structure, defined above. - // width, height - Image dimensions. - // channels - May be 1, or 3. 1 indicates grayscale, 3 indicates RGB source data. - // Returns false on out of memory or if a stream write fails. - bool init(output_stream *pStream, int64_t width, int64_t height, int64_t src_channels, const params &comp_params = params()); - - const params &get_params() const { return m_params; } - - // Deinitializes the compressor, freeing any allocated memory. May be called at any time. - void deinit(); - - uint get_total_passes() const { return m_params.m_two_pass_flag ? 2 : 1; } - inline uint get_cur_pass() { return m_pass_num; } - - // Call this method with each source scanline. - // width * src_channels bytes per scanline is expected (RGB or Y format). - // You must call with NULL after all scanlines are processed to finish compression. - // Returns false on out of memory or if a stream write fails. 
- bool process_scanline(const void* pScanline); - - private: - jpeg_encoder(const jpeg_encoder &); - jpeg_encoder &operator =(const jpeg_encoder &); - - typedef int32 sample_array_t; - - output_stream *m_pStream; - params m_params; - uint8 m_num_components; - uint8 m_comp_h_samp[3], m_comp_v_samp[3]; - int m_image_x, m_image_y, m_image_bpp, m_image_bpl; - int m_image_x_mcu, m_image_y_mcu; - int m_image_bpl_xlt, m_image_bpl_mcu; - int m_mcus_per_row; - int m_mcu_x, m_mcu_y; - uint8 *m_mcu_lines[16]; - uint8 m_mcu_y_ofs; - sample_array_t m_sample_array[64]; - int16 m_coefficient_array[64]; - int32 m_quantization_tables[2][64]; - uint m_huff_codes[4][256]; - uint8 m_huff_code_sizes[4][256]; - uint8 m_huff_bits[4][17]; - uint8 m_huff_val[4][256]; - uint32 m_huff_count[4][256]; - int m_last_dc_val[3]; - enum { JPGE_OUT_BUF_SIZE = 2048 }; - uint8 m_out_buf[JPGE_OUT_BUF_SIZE]; - uint8 *m_pOut_buf; - uint m_out_buf_left; - uint32 m_bit_buffer; - uint m_bits_in; - uint8 m_pass_num; - bool m_all_stream_writes_succeeded; - - void optimize_huffman_table(int table_num, int table_len); - void emit_byte(uint8 i); - void emit_word(uint i); - void emit_marker(int marker); - void emit_jfif_app0(); - void emit_dqt(); - void emit_sof(); - void emit_dht(uint8 *bits, uint8 *val, int index, bool ac_flag); - void emit_dhts(); - void emit_sos(); - void emit_markers(); - void compute_huffman_table(uint *codes, uint8 *code_sizes, uint8 *bits, uint8 *val); - void compute_quant_table(int32 *dst, int16 *src); - void adjust_quant_table(int32 *dst, int32 *src); - void first_pass_init(); - bool second_pass_init(); - bool jpg_open(int p_x_res, int p_y_res, int src_channels); - void load_block_8_8_grey(int x); - void load_block_8_8(int x, int y, int c); - void load_block_16_8(int x, int c); - void load_block_16_8_8(int x, int c); - void load_quantized_coefficients(int component_num); - void flush_output_buffer(); - void put_bits(uint bits, uint len); - void code_coefficients_pass_one(int component_num); - void code_coefficients_pass_two(int component_num); - void code_block(int component_num); - void process_mcu_row(); - bool terminate_pass_one(); - bool terminate_pass_two(); - bool process_end_of_image(); - void load_mcu(const void* src); - void clear(); - void init(); - }; - -} // namespace jpge - -#endif // JPEG_ENCODER \ No newline at end of file diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/structures/list.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/structures/list.py deleted file mode 100644 index 3dc40b0a7c04c7144c8e33c826a7354bf5d59819..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/structures/list.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import torch - -from densepose.structures.data_relative import DensePoseDataRelative - - -class DensePoseList(object): - - _TORCH_DEVICE_CPU = torch.device("cpu") - - def __init__(self, densepose_datas, boxes_xyxy_abs, image_size_hw, device=_TORCH_DEVICE_CPU): - assert len(densepose_datas) == len( - boxes_xyxy_abs - ), "Attempt to initialize DensePoseList with {} DensePose datas " "and {} boxes".format( - len(densepose_datas), len(boxes_xyxy_abs) - ) - self.densepose_datas = [] - for densepose_data in densepose_datas: - assert isinstance(densepose_data, DensePoseDataRelative) or densepose_data is None, ( - "Attempt to initialize DensePoseList with DensePose datas " - "of type {}, expected DensePoseDataRelative".format(type(densepose_data)) - ) - densepose_data_ondevice = ( - densepose_data.to(device) if densepose_data is not None else None - ) - self.densepose_datas.append(densepose_data_ondevice) - self.boxes_xyxy_abs = boxes_xyxy_abs.to(device) - self.image_size_hw = image_size_hw - self.device = device - - def to(self, device): - if self.device == device: - return self - return DensePoseList(self.densepose_datas, self.boxes_xyxy_abs, self.image_size_hw, device) - - def __iter__(self): - return iter(self.densepose_datas) - - def __len__(self): - return len(self.densepose_datas) - - def __repr__(self): - s = self.__class__.__name__ + "(" - s += "num_instances={}, ".format(len(self.densepose_datas)) - s += "image_width={}, ".format(self.image_size_hw[1]) - s += "image_height={})".format(self.image_size_hw[0]) - return s - - def __getitem__(self, item): - if isinstance(item, int): - densepose_data_rel = self.densepose_datas[item] - return densepose_data_rel - elif isinstance(item, slice): - densepose_datas_rel = self.densepose_datas[item] - boxes_xyxy_abs = self.boxes_xyxy_abs[item] - return DensePoseList( - densepose_datas_rel, boxes_xyxy_abs, self.image_size_hw, self.device - ) - elif isinstance(item, torch.Tensor) and (item.dtype == torch.bool): - densepose_datas_rel = [self.densepose_datas[i] for i, x in enumerate(item) if x > 0] - boxes_xyxy_abs = self.boxes_xyxy_abs[item] - return DensePoseList( - densepose_datas_rel, boxes_xyxy_abs, self.image_size_hw, self.device - ) - else: - densepose_datas_rel = [self.densepose_datas[i] for i in item] - boxes_xyxy_abs = self.boxes_xyxy_abs[item] - return DensePoseList( - densepose_datas_rel, boxes_xyxy_abs, self.image_size_hw, self.device - ) diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/TridentNet/train_net.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/TridentNet/train_net.py deleted file mode 100644 index 143289a10514cb87059f62425d79aa3812bc0c98..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/TridentNet/train_net.py +++ /dev/null @@ -1,67 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. - -""" -TridentNet Training Script. - -This script is a simplified version of the training script in detectron2/tools. 
-""" - -import os - -from detectron2.checkpoint import DetectionCheckpointer -from detectron2.config import get_cfg -from detectron2.engine import DefaultTrainer, default_argument_parser, default_setup, launch -from detectron2.evaluation import COCOEvaluator - -from tridentnet import add_tridentnet_config - - -class Trainer(DefaultTrainer): - @classmethod - def build_evaluator(cls, cfg, dataset_name, output_folder=None): - if output_folder is None: - output_folder = os.path.join(cfg.OUTPUT_DIR, "inference") - return COCOEvaluator(dataset_name, output_dir=output_folder) - - -def setup(args): - """ - Create configs and perform basic setups. - """ - cfg = get_cfg() - add_tridentnet_config(cfg) - cfg.merge_from_file(args.config_file) - cfg.merge_from_list(args.opts) - cfg.freeze() - default_setup(cfg, args) - return cfg - - -def main(args): - cfg = setup(args) - - if args.eval_only: - model = Trainer.build_model(cfg) - DetectionCheckpointer(model, save_dir=cfg.OUTPUT_DIR).resume_or_load( - cfg.MODEL.WEIGHTS, resume=args.resume - ) - res = Trainer.test(cfg, model) - return res - - trainer = Trainer(cfg) - trainer.resume_or_load(resume=args.resume) - return trainer.train() - - -if __name__ == "__main__": - args = default_argument_parser().parse_args() - print("Command Line Args:", args) - launch( - main, - args.num_gpus, - num_machines=args.num_machines, - machine_rank=args.machine_rank, - dist_url=args.dist_url, - args=(args,), - ) diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/ViTDet/configs/LVIS/cascade_mask_rcnn_vitdet_l_100ep.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/ViTDet/configs/LVIS/cascade_mask_rcnn_vitdet_l_100ep.py deleted file mode 100644 index ebaf526ab7735309d5f50527136ad6207ce9d58b..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/ViTDet/configs/LVIS/cascade_mask_rcnn_vitdet_l_100ep.py +++ /dev/null @@ -1,51 +0,0 @@ -from detectron2.config import LazyCall as L -from detectron2.data.detection_utils import get_fed_loss_cls_weights -from detectron2.layers import ShapeSpec -from detectron2.modeling.box_regression import Box2BoxTransform -from detectron2.modeling.matcher import Matcher -from detectron2.modeling.roi_heads import FastRCNNOutputLayers, FastRCNNConvFCHead, CascadeROIHeads - -from .mask_rcnn_vitdet_l_100ep import ( - dataloader, - lr_multiplier, - model, - optimizer, - train, -) - -# arguments that don't exist for Cascade R-CNN -[model.roi_heads.pop(k) for k in ["box_head", "box_predictor", "proposal_matcher"]] - -model.roi_heads.update( - _target_=CascadeROIHeads, - num_classes=1203, - box_heads=[ - L(FastRCNNConvFCHead)( - input_shape=ShapeSpec(channels=256, height=7, width=7), - conv_dims=[256, 256, 256, 256], - fc_dims=[1024], - conv_norm="LN", - ) - for _ in range(3) - ], - box_predictors=[ - L(FastRCNNOutputLayers)( - input_shape=ShapeSpec(channels=1024), - box2box_transform=L(Box2BoxTransform)(weights=(w1, w1, w2, w2)), - num_classes="${...num_classes}", - test_score_thresh=0.02, - test_topk_per_image=300, - cls_agnostic_bbox_reg=True, - use_sigmoid_ce=True, - use_fed_loss=True, - get_fed_loss_cls_weights=lambda: get_fed_loss_cls_weights( - dataloader.train.dataset.names, 0.5 - ), - ) - for (w1, w2) in [(10, 5), (20, 10), (30, 15)] - ], - proposal_matchers=[ - L(Matcher)(thresholds=[th], labels=[0, 1], allow_low_quality_matches=False) - for th in [0.5, 0.6, 0.7] - ], -) diff --git 
a/spaces/ccolas/TastyPiano/src/music/utilities/representation_learning_utilities/lr_scheduling.py b/spaces/ccolas/TastyPiano/src/music/utilities/representation_learning_utilities/lr_scheduling.py deleted file mode 100644 index 16f01ebe55904da3cdcb28bce792c1cd29711bf3..0000000000000000000000000000000000000000 --- a/spaces/ccolas/TastyPiano/src/music/utilities/representation_learning_utilities/lr_scheduling.py +++ /dev/null @@ -1,127 +0,0 @@ -#Library Imports -import math -import numpy as np -import matplotlib.pyplot as plt -#Using Adam optimizer with -#Beta_1=0.9, Beta_2=0.98, and Epsilon=10^-9 - -#Learning rate varies over course of training -#lrate = sqrt(d_model)*min((1/sqrt(step_num)), step_num*(1/warmup_steps*sqrt(warmup_steps))) - -# def lr_plot(steps, target_up=7e-4, param=1, d_model=512, asymptot=1e-4, warmup_steps=10000): -# scaled_target = target_up * np.sqrt(d_model) -# asymptot_scale = asymptot * np.sqrt(d_model) -# slope = scaled_target / warmup_steps -# p1 = - param * np.log(scaled_target - asymptot_scale) - warmup_steps -# out = np.zeros(steps.size) -# out[:warmup_steps] = slope * steps[:warmup_steps] * 1/np.sqrt(d_model) -# out[warmup_steps:] = (np.exp(-(steps[warmup_steps:] + p1) / param) + asymptot_scale) * 1/np.sqrt(d_model) -# -# # out[warmup_steps:] = ((steps[warmup_steps:] - (warmup_steps - (scaled_target-asymptot_scale)**-2)) **-0.5 + asymptot_scale) * 1/np.sqrt(d_model) -# plt.figure() -# plt.plot(out) -# plt.ylim([np.min(out), 1e-3]) - - -# LrStepTracker -class MyLrStepTracker: - """ - """ - - def __init__(self, model_dim=512, warmup_steps=4000, asymptot=1e-4, target_up=8e-4, exp_slope=1e4, init_steps=0): - # Store Values - self.warmup_steps = warmup_steps - self.model_dim = model_dim - self.asymptot = asymptot - self.exp_slope = exp_slope - self.init_steps = init_steps - - # Begin Calculations - self.invsqrt_dim = 1 / math.sqrt(model_dim) - self.scaled_target = target_up * math.sqrt(model_dim) - self.asymptot_scale = asymptot * math.sqrt(model_dim) - self.constant = - exp_slope * math.log(self.scaled_target - self.asymptot_scale) - warmup_steps - self.invsqrt_warmup = warmup_steps**(-1.5) - self.slope = self.scaled_target / warmup_steps - - - # step - def step(self, step): - """ - ---------- - Author: Ryan Marshall - Modified: Damon Gwinn - ---------- - Method to pass to LambdaLR. Increments the step and computes the new learn rate. - ---------- - """ - - step += self.init_steps - if(step <= self.warmup_steps): - return self.invsqrt_dim * self.slope * step # linear warmup - else: - return self.invsqrt_dim * (math.exp(-(step + self.constant) / self.exp_slope) + self.asymptot_scale) - -# steps = np.arange(1, 60*2000) -# tracker = MyLrStepTracker(warmup_steps=4000, asymptot=1e-4, target_up=8e-4, exp_slope=2e4) -# out = [tracker.step(s) for s in steps] -# plt.figure() -# plt.plot(out) -# plt.show() -# LrStepTracker -class LrStepTracker: - """ - ---------- - Author: Ryan Marshall - Modified: Damon Gwinn - ---------- - Class for custom learn rate scheduler (to be used by torch.optim.lr_scheduler.LambdaLR). 
- - Learn rate for each step (batch) given the warmup steps is: - lr = [ 1/sqrt(d_model) ] * min[ 1/sqrt(step) , step * (warmup_steps)^-1.5 ] - - This is from Attention is All you Need (https://arxiv.org/abs/1706.03762) - ---------- - """ - - def __init__(self, model_dim=512, warmup_steps=4000, baseline=0, init_steps=0): - # Store Values - self.warmup_steps = warmup_steps - self.model_dim = model_dim - self.baseline = baseline - self.init_steps = init_steps - - # Begin Calculations - self.invsqrt_dim = (1 / math.sqrt(model_dim)) - self.invsqrt_warmup = warmup_steps**(-1.5) - - # step - def step(self, step): - """ - ---------- - Author: Ryan Marshall - Modified: Damon Gwinn - ---------- - Method to pass to LambdaLR. Increments the step and computes the new learn rate. - ---------- - """ - - step += self.init_steps - if(step <= self.warmup_steps): - return self.invsqrt_dim * self.invsqrt_warmup * step + self.baseline - else: - invsqrt_step = (1 / math.sqrt(step)) - return self.invsqrt_dim * invsqrt_step + self.baseline - -# get_lr -def get_lr(optimizer): - """ - ---------- - Author: Damon Gwinn - ---------- - Hack to get the current learn rate of the model - ---------- - """ - - for param_group in optimizer.param_groups: - return param_group['lr'] diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/coloredlogs/demo.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/coloredlogs/demo.py deleted file mode 100644 index 7d38af556c79b07aa39d57b50e6f61d1209fba05..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/coloredlogs/demo.py +++ /dev/null @@ -1,49 +0,0 @@ -# Demonstration of the coloredlogs package. -# -# Author: Peter Odding -# Last Change: January 14, 2018 -# URL: https://coloredlogs.readthedocs.io - -"""A simple demonstration of the `coloredlogs` package.""" - -# Standard library modules. -import os -import time - -# Modules included in our package. -import coloredlogs - -# If my verbose logger is installed, we'll use that for the demo. -try: - from verboselogs import VerboseLogger as getLogger -except ImportError: - from logging import getLogger - -# Initialize a logger for this module. -logger = getLogger(__name__) - -DEMO_DELAY = float(os.environ.get('COLOREDLOGS_DEMO_DELAY', '1')) -"""The number of seconds between each message emitted by :func:`demonstrate_colored_logging()`.""" - - -def demonstrate_colored_logging(): - """Interactively demonstrate the :mod:`coloredlogs` package.""" - # Determine the available logging levels and order them by numeric value. - decorated_levels = [] - defined_levels = coloredlogs.find_defined_levels() - normalizer = coloredlogs.NameNormalizer() - for name, level in defined_levels.items(): - if name != 'NOTSET': - item = (level, normalizer.normalize_name(name)) - if item not in decorated_levels: - decorated_levels.append(item) - ordered_levels = sorted(decorated_levels) - # Initialize colored output to the terminal, default to the most - # verbose logging level but enable the user the customize it. - coloredlogs.install(level=os.environ.get('COLOREDLOGS_LOG_LEVEL', ordered_levels[0][1])) - # Print some examples with different timestamps. 
- for level, name in ordered_levels: - log_method = getattr(logger, name, None) - if log_method: - log_method("message with level %s (%i)", name, level) - time.sleep(DEMO_DELAY) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I__5.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I__5.py deleted file mode 100644 index 5edc86a9cbc9a0b710cfc014a3910f671f791e54..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I__5.py +++ /dev/null @@ -1,46 +0,0 @@ -""" TSI{0,1,2,3,5} are private tables used by Microsoft Visual TrueType (VTT) -tool to store its hinting source data. - -TSI5 contains the VTT character groups. -""" -from fontTools.misc.textTools import safeEval -from . import DefaultTable -import sys -import array - - -class table_T_S_I__5(DefaultTable.DefaultTable): - def decompile(self, data, ttFont): - numGlyphs = ttFont["maxp"].numGlyphs - assert len(data) == 2 * numGlyphs - a = array.array("H") - a.frombytes(data) - if sys.byteorder != "big": - a.byteswap() - self.glyphGrouping = {} - for i in range(numGlyphs): - self.glyphGrouping[ttFont.getGlyphName(i)] = a[i] - - def compile(self, ttFont): - glyphNames = ttFont.getGlyphOrder() - a = array.array("H") - for i in range(len(glyphNames)): - a.append(self.glyphGrouping.get(glyphNames[i], 0)) - if sys.byteorder != "big": - a.byteswap() - return a.tobytes() - - def toXML(self, writer, ttFont): - names = sorted(self.glyphGrouping.keys()) - for glyphName in names: - writer.simpletag( - "glyphgroup", name=glyphName, value=self.glyphGrouping[glyphName] - ) - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if not hasattr(self, "glyphGrouping"): - self.glyphGrouping = {} - if name != "glyphgroup": - return - self.glyphGrouping[attrs["name"]] = safeEval(attrs["value"]) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_g_v_a_r.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_g_v_a_r.py deleted file mode 100644 index 11485bf09aee04a15307d094fdead26e7e4572ea..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_g_v_a_r.py +++ /dev/null @@ -1,284 +0,0 @@ -from collections import UserDict, deque -from functools import partial -from fontTools.misc import sstruct -from fontTools.misc.textTools import safeEval -from . 
import DefaultTable -import array -import itertools -import logging -import struct -import sys -import fontTools.ttLib.tables.TupleVariation as tv - - -log = logging.getLogger(__name__) -TupleVariation = tv.TupleVariation - - -# https://www.microsoft.com/typography/otspec/gvar.htm -# https://www.microsoft.com/typography/otspec/otvarcommonformats.htm -# -# Apple's documentation of 'gvar': -# https://developer.apple.com/fonts/TrueType-Reference-Manual/RM06/Chap6gvar.html -# -# FreeType2 source code for parsing 'gvar': -# http://git.savannah.gnu.org/cgit/freetype/freetype2.git/tree/src/truetype/ttgxvar.c - -GVAR_HEADER_FORMAT = """ - > # big endian - version: H - reserved: H - axisCount: H - sharedTupleCount: H - offsetToSharedTuples: I - glyphCount: H - flags: H - offsetToGlyphVariationData: I -""" - -GVAR_HEADER_SIZE = sstruct.calcsize(GVAR_HEADER_FORMAT) - - -class _LazyDict(UserDict): - def __init__(self, data): - super().__init__() - self.data = data - - def __getitem__(self, k): - v = self.data[k] - if callable(v): - v = v() - self.data[k] = v - return v - - -class table__g_v_a_r(DefaultTable.DefaultTable): - dependencies = ["fvar", "glyf"] - - def __init__(self, tag=None): - DefaultTable.DefaultTable.__init__(self, tag) - self.version, self.reserved = 1, 0 - self.variations = {} - - def compile(self, ttFont): - axisTags = [axis.axisTag for axis in ttFont["fvar"].axes] - sharedTuples = tv.compileSharedTuples( - axisTags, itertools.chain(*self.variations.values()) - ) - sharedTupleIndices = {coord: i for i, coord in enumerate(sharedTuples)} - sharedTupleSize = sum([len(c) for c in sharedTuples]) - compiledGlyphs = self.compileGlyphs_(ttFont, axisTags, sharedTupleIndices) - offset = 0 - offsets = [] - for glyph in compiledGlyphs: - offsets.append(offset) - offset += len(glyph) - offsets.append(offset) - compiledOffsets, tableFormat = self.compileOffsets_(offsets) - - header = {} - header["version"] = self.version - header["reserved"] = self.reserved - header["axisCount"] = len(axisTags) - header["sharedTupleCount"] = len(sharedTuples) - header["offsetToSharedTuples"] = GVAR_HEADER_SIZE + len(compiledOffsets) - header["glyphCount"] = len(compiledGlyphs) - header["flags"] = tableFormat - header["offsetToGlyphVariationData"] = ( - header["offsetToSharedTuples"] + sharedTupleSize - ) - compiledHeader = sstruct.pack(GVAR_HEADER_FORMAT, header) - - result = [compiledHeader, compiledOffsets] - result.extend(sharedTuples) - result.extend(compiledGlyphs) - return b"".join(result) - - def compileGlyphs_(self, ttFont, axisTags, sharedCoordIndices): - result = [] - glyf = ttFont["glyf"] - for glyphName in ttFont.getGlyphOrder(): - variations = self.variations.get(glyphName, []) - if not variations: - result.append(b"") - continue - pointCountUnused = 0 # pointCount is actually unused by compileGlyph - result.append( - compileGlyph_( - variations, pointCountUnused, axisTags, sharedCoordIndices - ) - ) - return result - - def decompile(self, data, ttFont): - axisTags = [axis.axisTag for axis in ttFont["fvar"].axes] - glyphs = ttFont.getGlyphOrder() - sstruct.unpack(GVAR_HEADER_FORMAT, data[0:GVAR_HEADER_SIZE], self) - assert len(glyphs) == self.glyphCount - assert len(axisTags) == self.axisCount - offsets = self.decompileOffsets_( - data[GVAR_HEADER_SIZE:], - tableFormat=(self.flags & 1), - glyphCount=self.glyphCount, - ) - sharedCoords = tv.decompileSharedTuples( - axisTags, self.sharedTupleCount, data, self.offsetToSharedTuples - ) - variations = {} - offsetToData = self.offsetToGlyphVariationData - 
glyf = ttFont["glyf"] - - def decompileVarGlyph(glyphName, gid): - gvarData = data[ - offsetToData + offsets[gid] : offsetToData + offsets[gid + 1] - ] - if not gvarData: - return [] - glyph = glyf[glyphName] - numPointsInGlyph = self.getNumPoints_(glyph) - return decompileGlyph_(numPointsInGlyph, sharedCoords, axisTags, gvarData) - - for gid in range(self.glyphCount): - glyphName = glyphs[gid] - variations[glyphName] = partial(decompileVarGlyph, glyphName, gid) - self.variations = _LazyDict(variations) - - if ttFont.lazy is False: # Be lazy for None and True - self.ensureDecompiled() - - def ensureDecompiled(self, recurse=False): - # The recurse argument is unused, but part of the signature of - # ensureDecompiled across the library. - # Use a zero-length deque to consume the lazy dict - deque(self.variations.values(), maxlen=0) - - @staticmethod - def decompileOffsets_(data, tableFormat, glyphCount): - if tableFormat == 0: - # Short format: array of UInt16 - offsets = array.array("H") - offsetsSize = (glyphCount + 1) * 2 - else: - # Long format: array of UInt32 - offsets = array.array("I") - offsetsSize = (glyphCount + 1) * 4 - offsets.frombytes(data[0:offsetsSize]) - if sys.byteorder != "big": - offsets.byteswap() - - # In the short format, offsets need to be multiplied by 2. - # This is not documented in Apple's TrueType specification, - # but can be inferred from the FreeType implementation, and - # we could verify it with two sample GX fonts. - if tableFormat == 0: - offsets = [off * 2 for off in offsets] - - return offsets - - @staticmethod - def compileOffsets_(offsets): - """Packs a list of offsets into a 'gvar' offset table. - - Returns a pair (bytestring, tableFormat). Bytestring is the - packed offset table. Format indicates whether the table - uses short (tableFormat=0) or long (tableFormat=1) integers. - The returned tableFormat should get packed into the flags field - of the 'gvar' header. 
- """ - assert len(offsets) >= 2 - for i in range(1, len(offsets)): - assert offsets[i - 1] <= offsets[i] - if max(offsets) <= 0xFFFF * 2: - packed = array.array("H", [n >> 1 for n in offsets]) - tableFormat = 0 - else: - packed = array.array("I", offsets) - tableFormat = 1 - if sys.byteorder != "big": - packed.byteswap() - return (packed.tobytes(), tableFormat) - - def toXML(self, writer, ttFont): - writer.simpletag("version", value=self.version) - writer.newline() - writer.simpletag("reserved", value=self.reserved) - writer.newline() - axisTags = [axis.axisTag for axis in ttFont["fvar"].axes] - for glyphName in ttFont.getGlyphNames(): - variations = self.variations.get(glyphName) - if not variations: - continue - writer.begintag("glyphVariations", glyph=glyphName) - writer.newline() - for gvar in variations: - gvar.toXML(writer, axisTags) - writer.endtag("glyphVariations") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if name == "version": - self.version = safeEval(attrs["value"]) - elif name == "reserved": - self.reserved = safeEval(attrs["value"]) - elif name == "glyphVariations": - if not hasattr(self, "variations"): - self.variations = {} - glyphName = attrs["glyph"] - glyph = ttFont["glyf"][glyphName] - numPointsInGlyph = self.getNumPoints_(glyph) - glyphVariations = [] - for element in content: - if isinstance(element, tuple): - name, attrs, content = element - if name == "tuple": - gvar = TupleVariation({}, [None] * numPointsInGlyph) - glyphVariations.append(gvar) - for tupleElement in content: - if isinstance(tupleElement, tuple): - tupleName, tupleAttrs, tupleContent = tupleElement - gvar.fromXML(tupleName, tupleAttrs, tupleContent) - self.variations[glyphName] = glyphVariations - - @staticmethod - def getNumPoints_(glyph): - NUM_PHANTOM_POINTS = 4 - - if glyph.isComposite(): - return len(glyph.components) + NUM_PHANTOM_POINTS - elif glyph.isVarComposite(): - count = 0 - for component in glyph.components: - count += component.getPointCount() - return count + NUM_PHANTOM_POINTS - else: - # Empty glyphs (eg. space, nonmarkingreturn) have no "coordinates" attribute. 
- return len(getattr(glyph, "coordinates", [])) + NUM_PHANTOM_POINTS - - -def compileGlyph_(variations, pointCount, axisTags, sharedCoordIndices): - tupleVariationCount, tuples, data = tv.compileTupleVariationStore( - variations, pointCount, axisTags, sharedCoordIndices - ) - if tupleVariationCount == 0: - return b"" - result = [struct.pack(">HH", tupleVariationCount, 4 + len(tuples)), tuples, data] - if (len(tuples) + len(data)) % 2 != 0: - result.append(b"\0") # padding - return b"".join(result) - - -def decompileGlyph_(pointCount, sharedTuples, axisTags, data): - if len(data) < 4: - return [] - tupleVariationCount, offsetToData = struct.unpack(">HH", data[:4]) - dataPos = offsetToData - return tv.decompileTupleVariationStore( - "gvar", - axisTags, - tupleVariationCount, - pointCount, - sharedTuples, - data, - 4, - offsetToData, - ) diff --git a/spaces/cihyFjudo/fairness-paper-search/Anurag Photoshop Software Free Download with Crack What You Need to Know Before Downloading and Installing.md b/spaces/cihyFjudo/fairness-paper-search/Anurag Photoshop Software Free Download with Crack What You Need to Know Before Downloading and Installing.md deleted file mode 100644 index 6bc77c607f989dc41ef1468ce29f498935f2a4ef..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Anurag Photoshop Software Free Download with Crack What You Need to Know Before Downloading and Installing.md +++ /dev/null @@ -1,6 +0,0 @@ -

    anurag photoshop software free download with crack


    Download Zip: https://tinurli.com/2uwjuJ



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/FL STUDIO Producer Edition 15.3.3 Crack Full Version.md b/spaces/cihyFjudo/fairness-paper-search/FL STUDIO Producer Edition 15.3.3 Crack Full Version.md deleted file mode 100644 index 7d2a5e51f8279d824eb9053c2449b430c015293e..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/FL STUDIO Producer Edition 15.3.3 Crack Full Version.md +++ /dev/null @@ -1,6 +0,0 @@ -

    FL STUDIO Producer Edition 15.3.3 Crack Full Version


    Download ->>> https://tinurli.com/2uwiVG



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Welcome 2 movie free download in english mp4 The easiest and safest way to watch online.md b/spaces/cihyFjudo/fairness-paper-search/Welcome 2 movie free download in english mp4 The easiest and safest way to watch online.md deleted file mode 100644 index 9272abfc8060df01656b356d2f4571965b2d8c70..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Welcome 2 movie free download in english mp4 The easiest and safest way to watch online.md +++ /dev/null @@ -1,6 +0,0 @@ -
    -

    Nowadays, going to Youtube to watch movies and listen to music is a daily necessity for many people. Therefore, the need to download favorite movies or songs to a computer or mobile device, so they can be stored and watched offline without a 5G or wifi connection, is increasingly common.

    -

    In addition to downloading video from Youtube, Yt5s.io also supports downloading mp3 from Youtube, downloading video from Facebook, and many other features waiting for you to discover, completely free of charge.

    -

    Welcome 2 movie free download in english mp4


    Download Zip ✑ ✑ ✑ https://tinurli.com/2uwhJP



    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Woh Shakti Hame Do Dayanidhe The Story and Message of this Powerful Prayer.md b/spaces/cihyFjudo/fairness-paper-search/Woh Shakti Hame Do Dayanidhe The Story and Message of this Powerful Prayer.md deleted file mode 100644 index c6352418c0ae2c78a21837abd7e9dc1aee5df6c9..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Woh Shakti Hame Do Dayanidhe The Story and Message of this Powerful Prayer.md +++ /dev/null @@ -1,5 +0,0 @@ - -

    Watch the Vah shakti hame do daya nidhe kartabya video before converting or downloading: you can preview it by clicking the Watch Video button, the Download MP3 button will convert it to mp3, and the Download MP4 button will convert it to mp4; SavefromNets.com allows you to download any video from the supported websites into MP3, MP4, and more formats.

    -

    woh shakti hame do dayanidhe free download


    Download: https://tinurli.com/2uwjfc



    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/DdsImagePlugin.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/DdsImagePlugin.py deleted file mode 100644 index a946daeaa6b9a5946fc5492443dfddbb10881c99..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/DdsImagePlugin.py +++ /dev/null @@ -1,291 +0,0 @@ -""" -A Pillow loader for .dds files (S3TC-compressed aka DXTC) -Jerome Leclanche - -Documentation: - https://web.archive.org/web/20170802060935/http://oss.sgi.com/projects/ogl-sample/registry/EXT/texture_compression_s3tc.txt - -The contents of this file are hereby released in the public domain (CC0) -Full text of the CC0 license: - https://creativecommons.org/publicdomain/zero/1.0/ -""" - -import struct -from io import BytesIO - -from . import Image, ImageFile -from ._binary import o32le as o32 - -# Magic ("DDS ") -DDS_MAGIC = 0x20534444 - -# DDS flags -DDSD_CAPS = 0x1 -DDSD_HEIGHT = 0x2 -DDSD_WIDTH = 0x4 -DDSD_PITCH = 0x8 -DDSD_PIXELFORMAT = 0x1000 -DDSD_MIPMAPCOUNT = 0x20000 -DDSD_LINEARSIZE = 0x80000 -DDSD_DEPTH = 0x800000 - -# DDS caps -DDSCAPS_COMPLEX = 0x8 -DDSCAPS_TEXTURE = 0x1000 -DDSCAPS_MIPMAP = 0x400000 - -DDSCAPS2_CUBEMAP = 0x200 -DDSCAPS2_CUBEMAP_POSITIVEX = 0x400 -DDSCAPS2_CUBEMAP_NEGATIVEX = 0x800 -DDSCAPS2_CUBEMAP_POSITIVEY = 0x1000 -DDSCAPS2_CUBEMAP_NEGATIVEY = 0x2000 -DDSCAPS2_CUBEMAP_POSITIVEZ = 0x4000 -DDSCAPS2_CUBEMAP_NEGATIVEZ = 0x8000 -DDSCAPS2_VOLUME = 0x200000 - -# Pixel Format -DDPF_ALPHAPIXELS = 0x1 -DDPF_ALPHA = 0x2 -DDPF_FOURCC = 0x4 -DDPF_PALETTEINDEXED8 = 0x20 -DDPF_RGB = 0x40 -DDPF_LUMINANCE = 0x20000 - - -# dds.h - -DDS_FOURCC = DDPF_FOURCC -DDS_RGB = DDPF_RGB -DDS_RGBA = DDPF_RGB | DDPF_ALPHAPIXELS -DDS_LUMINANCE = DDPF_LUMINANCE -DDS_LUMINANCEA = DDPF_LUMINANCE | DDPF_ALPHAPIXELS -DDS_ALPHA = DDPF_ALPHA -DDS_PAL8 = DDPF_PALETTEINDEXED8 - -DDS_HEADER_FLAGS_TEXTURE = DDSD_CAPS | DDSD_HEIGHT | DDSD_WIDTH | DDSD_PIXELFORMAT -DDS_HEADER_FLAGS_MIPMAP = DDSD_MIPMAPCOUNT -DDS_HEADER_FLAGS_VOLUME = DDSD_DEPTH -DDS_HEADER_FLAGS_PITCH = DDSD_PITCH -DDS_HEADER_FLAGS_LINEARSIZE = DDSD_LINEARSIZE - -DDS_HEIGHT = DDSD_HEIGHT -DDS_WIDTH = DDSD_WIDTH - -DDS_SURFACE_FLAGS_TEXTURE = DDSCAPS_TEXTURE -DDS_SURFACE_FLAGS_MIPMAP = DDSCAPS_COMPLEX | DDSCAPS_MIPMAP -DDS_SURFACE_FLAGS_CUBEMAP = DDSCAPS_COMPLEX - -DDS_CUBEMAP_POSITIVEX = DDSCAPS2_CUBEMAP | DDSCAPS2_CUBEMAP_POSITIVEX -DDS_CUBEMAP_NEGATIVEX = DDSCAPS2_CUBEMAP | DDSCAPS2_CUBEMAP_NEGATIVEX -DDS_CUBEMAP_POSITIVEY = DDSCAPS2_CUBEMAP | DDSCAPS2_CUBEMAP_POSITIVEY -DDS_CUBEMAP_NEGATIVEY = DDSCAPS2_CUBEMAP | DDSCAPS2_CUBEMAP_NEGATIVEY -DDS_CUBEMAP_POSITIVEZ = DDSCAPS2_CUBEMAP | DDSCAPS2_CUBEMAP_POSITIVEZ -DDS_CUBEMAP_NEGATIVEZ = DDSCAPS2_CUBEMAP | DDSCAPS2_CUBEMAP_NEGATIVEZ - - -# DXT1 -DXT1_FOURCC = 0x31545844 - -# DXT3 -DXT3_FOURCC = 0x33545844 - -# DXT5 -DXT5_FOURCC = 0x35545844 - - -# dxgiformat.h - -DXGI_FORMAT_R8G8B8A8_TYPELESS = 27 -DXGI_FORMAT_R8G8B8A8_UNORM = 28 -DXGI_FORMAT_R8G8B8A8_UNORM_SRGB = 29 -DXGI_FORMAT_BC5_TYPELESS = 82 -DXGI_FORMAT_BC5_UNORM = 83 -DXGI_FORMAT_BC5_SNORM = 84 -DXGI_FORMAT_BC6H_UF16 = 95 -DXGI_FORMAT_BC6H_SF16 = 96 -DXGI_FORMAT_BC7_TYPELESS = 97 -DXGI_FORMAT_BC7_UNORM = 98 -DXGI_FORMAT_BC7_UNORM_SRGB = 99 - - -class DdsImageFile(ImageFile.ImageFile): - format = "DDS" - format_description = "DirectDraw Surface" - - def _open(self): - if not _accept(self.fp.read(4)): - msg = "not a DDS file" - raise SyntaxError(msg) - 
(header_size,) = struct.unpack("`_ - table, which contains outlines for glyphs in TrueType format. In many cases, - it is easier to access and manipulate glyph outlines through the ``GlyphSet`` - object returned from :py:meth:`fontTools.ttLib.ttFont.getGlyphSet`:: - - >> from fontTools.pens.boundsPen import BoundsPen - >> glyphset = font.getGlyphSet() - >> bp = BoundsPen(glyphset) - >> glyphset["A"].draw(bp) - >> bp.bounds - (19, 0, 633, 716) - - However, this class can be used for low-level access to the ``glyf`` table data. - Objects of this class support dictionary-like access, mapping glyph names to - :py:class:`Glyph` objects:: - - >> glyf = font["glyf"] - >> len(glyf["Aacute"].components) - 2 - - Note that when adding glyphs to the font via low-level access to the ``glyf`` - table, the new glyphs must also be added to the ``hmtx``/``vmtx`` table:: - - >> font["glyf"]["divisionslash"] = Glyph() - >> font["hmtx"]["divisionslash"] = (640, 0) - - """ - - dependencies = ["fvar"] - - # this attribute controls the amount of padding applied to glyph data upon compile. - # Glyph lenghts are aligned to multiples of the specified value. - # Allowed values are (0, 1, 2, 4). '0' means no padding; '1' (default) also means - # no padding, except for when padding would allow to use short loca offsets. - padding = 1 - - def decompile(self, data, ttFont): - self.axisTags = ( - [axis.axisTag for axis in ttFont["fvar"].axes] if "fvar" in ttFont else [] - ) - loca = ttFont["loca"] - pos = int(loca[0]) - nextPos = 0 - noname = 0 - self.glyphs = {} - self.glyphOrder = glyphOrder = ttFont.getGlyphOrder() - for i in range(0, len(loca) - 1): - try: - glyphName = glyphOrder[i] - except IndexError: - noname = noname + 1 - glyphName = "ttxautoglyph%s" % i - nextPos = int(loca[i + 1]) - glyphdata = data[pos:nextPos] - if len(glyphdata) != (nextPos - pos): - raise ttLib.TTLibError("not enough 'glyf' table data") - glyph = Glyph(glyphdata) - self.glyphs[glyphName] = glyph - pos = nextPos - if len(data) - nextPos >= 4: - log.warning( - "too much 'glyf' table data: expected %d, received %d bytes", - nextPos, - len(data), - ) - if noname: - log.warning("%s glyphs have no name", noname) - if ttFont.lazy is False: # Be lazy for None and True - self.ensureDecompiled() - - def ensureDecompiled(self, recurse=False): - # The recurse argument is unused, but part of the signature of - # ensureDecompiled across the library. - for glyph in self.glyphs.values(): - glyph.expand(self) - - def compile(self, ttFont): - self.axisTags = ( - [axis.axisTag for axis in ttFont["fvar"].axes] if "fvar" in ttFont else [] - ) - if not hasattr(self, "glyphOrder"): - self.glyphOrder = ttFont.getGlyphOrder() - padding = self.padding - assert padding in (0, 1, 2, 4) - locations = [] - currentLocation = 0 - dataList = [] - recalcBBoxes = ttFont.recalcBBoxes - for glyphName in self.glyphOrder: - glyph = self.glyphs[glyphName] - glyphData = glyph.compile(self, recalcBBoxes) - if padding > 1: - glyphData = pad(glyphData, size=padding) - locations.append(currentLocation) - currentLocation = currentLocation + len(glyphData) - dataList.append(glyphData) - locations.append(currentLocation) - - if padding == 1 and currentLocation < 0x20000: - # See if we can pad any odd-lengthed glyphs to allow loca - # table to use the short offsets. - indices = [ - i for i, glyphData in enumerate(dataList) if len(glyphData) % 2 == 1 - ] - if indices and currentLocation + len(indices) < 0x20000: - # It fits. Do it. 
- for i in indices: - dataList[i] += b"\0" - currentLocation = 0 - for i, glyphData in enumerate(dataList): - locations[i] = currentLocation - currentLocation += len(glyphData) - locations[len(dataList)] = currentLocation - - data = b"".join(dataList) - if "loca" in ttFont: - ttFont["loca"].set(locations) - if "maxp" in ttFont: - ttFont["maxp"].numGlyphs = len(self.glyphs) - if not data: - # As a special case when all glyph in the font are empty, add a zero byte - # to the table, so that OTS doesn’t reject it, and to make the table work - # on Windows as well. - # See https://github.com/khaledhosny/ots/issues/52 - data = b"\0" - return data - - def toXML(self, writer, ttFont, splitGlyphs=False): - notice = ( - "The xMin, yMin, xMax and yMax values\n" - "will be recalculated by the compiler." - ) - glyphNames = ttFont.getGlyphNames() - if not splitGlyphs: - writer.newline() - writer.comment(notice) - writer.newline() - writer.newline() - numGlyphs = len(glyphNames) - if splitGlyphs: - path, ext = os.path.splitext(writer.file.name) - existingGlyphFiles = set() - for glyphName in glyphNames: - glyph = self.get(glyphName) - if glyph is None: - log.warning("glyph '%s' does not exist in glyf table", glyphName) - continue - if glyph.numberOfContours: - if splitGlyphs: - glyphPath = userNameToFileName( - tostr(glyphName, "utf-8"), - existingGlyphFiles, - prefix=path + ".", - suffix=ext, - ) - existingGlyphFiles.add(glyphPath.lower()) - glyphWriter = xmlWriter.XMLWriter( - glyphPath, - idlefunc=writer.idlefunc, - newlinestr=writer.newlinestr, - ) - glyphWriter.begintag("ttFont", ttLibVersion=version) - glyphWriter.newline() - glyphWriter.begintag("glyf") - glyphWriter.newline() - glyphWriter.comment(notice) - glyphWriter.newline() - writer.simpletag("TTGlyph", src=os.path.basename(glyphPath)) - else: - glyphWriter = writer - glyphWriter.begintag( - "TTGlyph", - [ - ("name", glyphName), - ("xMin", glyph.xMin), - ("yMin", glyph.yMin), - ("xMax", glyph.xMax), - ("yMax", glyph.yMax), - ], - ) - glyphWriter.newline() - glyph.toXML(glyphWriter, ttFont) - glyphWriter.endtag("TTGlyph") - glyphWriter.newline() - if splitGlyphs: - glyphWriter.endtag("glyf") - glyphWriter.newline() - glyphWriter.endtag("ttFont") - glyphWriter.newline() - glyphWriter.close() - else: - writer.simpletag("TTGlyph", name=glyphName) - writer.comment("contains no outline data") - if not splitGlyphs: - writer.newline() - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if name != "TTGlyph": - return - if not hasattr(self, "glyphs"): - self.glyphs = {} - if not hasattr(self, "glyphOrder"): - self.glyphOrder = ttFont.getGlyphOrder() - glyphName = attrs["name"] - log.debug("unpacking glyph '%s'", glyphName) - glyph = Glyph() - for attr in ["xMin", "yMin", "xMax", "yMax"]: - setattr(glyph, attr, safeEval(attrs.get(attr, "0"))) - self.glyphs[glyphName] = glyph - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - glyph.fromXML(name, attrs, content, ttFont) - if not ttFont.recalcBBoxes: - glyph.compact(self, 0) - - def setGlyphOrder(self, glyphOrder): - """Sets the glyph order - - Args: - glyphOrder ([str]): List of glyph names in order. - """ - self.glyphOrder = glyphOrder - - def getGlyphName(self, glyphID): - """Returns the name for the glyph with the given ID. - - Raises a ``KeyError`` if the glyph name is not found in the font. - """ - return self.glyphOrder[glyphID] - - def getGlyphID(self, glyphName): - """Returns the ID of the glyph with the given name. 
- - Raises a ``ValueError`` if the glyph is not found in the font. - """ - # XXX optimize with reverse dict!!! - return self.glyphOrder.index(glyphName) - - def removeHinting(self): - """Removes TrueType hints from all glyphs in the glyphset. - - See :py:meth:`Glyph.removeHinting`. - """ - for glyph in self.glyphs.values(): - glyph.removeHinting() - - def keys(self): - return self.glyphs.keys() - - def has_key(self, glyphName): - return glyphName in self.glyphs - - __contains__ = has_key - - def get(self, glyphName, default=None): - glyph = self.glyphs.get(glyphName, default) - if glyph is not None: - glyph.expand(self) - return glyph - - def __getitem__(self, glyphName): - glyph = self.glyphs[glyphName] - glyph.expand(self) - return glyph - - def __setitem__(self, glyphName, glyph): - self.glyphs[glyphName] = glyph - if glyphName not in self.glyphOrder: - self.glyphOrder.append(glyphName) - - def __delitem__(self, glyphName): - del self.glyphs[glyphName] - self.glyphOrder.remove(glyphName) - - def __len__(self): - assert len(self.glyphOrder) == len(self.glyphs) - return len(self.glyphs) - - def _getPhantomPoints(self, glyphName, hMetrics, vMetrics=None): - """Compute the four "phantom points" for the given glyph from its bounding box - and the horizontal and vertical advance widths and sidebearings stored in the - ttFont's "hmtx" and "vmtx" tables. - - 'hMetrics' should be ttFont['hmtx'].metrics. - - 'vMetrics' should be ttFont['vmtx'].metrics if there is "vmtx" or None otherwise. - If there is no vMetrics passed in, vertical phantom points are set to the zero coordinate. - - https://docs.microsoft.com/en-us/typography/opentype/spec/tt_instructing_glyphs#phantoms - """ - glyph = self[glyphName] - if not hasattr(glyph, "xMin"): - glyph.recalcBounds(self) - - horizontalAdvanceWidth, leftSideBearing = hMetrics[glyphName] - leftSideX = glyph.xMin - leftSideBearing - rightSideX = leftSideX + horizontalAdvanceWidth - - if vMetrics: - verticalAdvanceWidth, topSideBearing = vMetrics[glyphName] - topSideY = topSideBearing + glyph.yMax - bottomSideY = topSideY - verticalAdvanceWidth - else: - bottomSideY = topSideY = 0 - - return [ - (leftSideX, 0), - (rightSideX, 0), - (0, topSideY), - (0, bottomSideY), - ] - - def _getCoordinatesAndControls( - self, glyphName, hMetrics, vMetrics=None, *, round=otRound - ): - """Return glyph coordinates and controls as expected by "gvar" table. - - The coordinates includes four "phantom points" for the glyph metrics, - as mandated by the "gvar" spec. - - The glyph controls is a namedtuple with the following attributes: - - numberOfContours: -1 for composite glyphs. - - endPts: list of indices of end points for each contour in simple - glyphs, or component indices in composite glyphs (used for IUP - optimization). - - flags: array of contour point flags for simple glyphs (None for - composite glyphs). - - components: list of base glyph names (str) for each component in - composite glyphs (None for simple glyphs). - - The "hMetrics" and vMetrics are used to compute the "phantom points" (see - the "_getPhantomPoints" method). - - Return None if the requested glyphName is not present. 
- """ - glyph = self.get(glyphName) - if glyph is None: - return None - if glyph.isComposite(): - coords = GlyphCoordinates( - [(getattr(c, "x", 0), getattr(c, "y", 0)) for c in glyph.components] - ) - controls = _GlyphControls( - numberOfContours=glyph.numberOfContours, - endPts=list(range(len(glyph.components))), - flags=None, - components=[ - (c.glyphName, getattr(c, "transform", None)) - for c in glyph.components - ], - ) - elif glyph.isVarComposite(): - coords = [] - controls = [] - - for component in glyph.components: - ( - componentCoords, - componentControls, - ) = component.getCoordinatesAndControls() - coords.extend(componentCoords) - controls.extend(componentControls) - - coords = GlyphCoordinates(coords) - - controls = _GlyphControls( - numberOfContours=glyph.numberOfContours, - endPts=list(range(len(coords))), - flags=None, - components=[ - (c.glyphName, getattr(c, "flags", None)) for c in glyph.components - ], - ) - - else: - coords, endPts, flags = glyph.getCoordinates(self) - coords = coords.copy() - controls = _GlyphControls( - numberOfContours=glyph.numberOfContours, - endPts=endPts, - flags=flags, - components=None, - ) - # Add phantom points for (left, right, top, bottom) positions. - phantomPoints = self._getPhantomPoints(glyphName, hMetrics, vMetrics) - coords.extend(phantomPoints) - coords.toInt(round=round) - return coords, controls - - def _setCoordinates(self, glyphName, coord, hMetrics, vMetrics=None): - """Set coordinates and metrics for the given glyph. - - "coord" is an array of GlyphCoordinates which must include the "phantom - points" as the last four coordinates. - - Both the horizontal/vertical advances and left/top sidebearings in "hmtx" - and "vmtx" tables (if any) are updated from four phantom points and - the glyph's bounding boxes. - - The "hMetrics" and vMetrics are used to propagate "phantom points" - into "hmtx" and "vmtx" tables if desired. (see the "_getPhantomPoints" - method). - """ - glyph = self[glyphName] - - # Handle phantom points for (left, right, top, bottom) positions. - assert len(coord) >= 4 - leftSideX = coord[-4][0] - rightSideX = coord[-3][0] - topSideY = coord[-2][1] - bottomSideY = coord[-1][1] - - coord = coord[:-4] - - if glyph.isComposite(): - assert len(coord) == len(glyph.components) - for p, comp in zip(coord, glyph.components): - if hasattr(comp, "x"): - comp.x, comp.y = p - elif glyph.isVarComposite(): - for comp in glyph.components: - coord = comp.setCoordinates(coord) - assert not coord - elif glyph.numberOfContours == 0: - assert len(coord) == 0 - else: - assert len(coord) == len(glyph.coordinates) - glyph.coordinates = GlyphCoordinates(coord) - - glyph.recalcBounds(self) - - horizontalAdvanceWidth = otRound(rightSideX - leftSideX) - if horizontalAdvanceWidth < 0: - # unlikely, but it can happen, see: - # https://github.com/fonttools/fonttools/pull/1198 - horizontalAdvanceWidth = 0 - leftSideBearing = otRound(glyph.xMin - leftSideX) - hMetrics[glyphName] = horizontalAdvanceWidth, leftSideBearing - - if vMetrics is not None: - verticalAdvanceWidth = otRound(topSideY - bottomSideY) - if verticalAdvanceWidth < 0: # unlikely but do the same as horizontal - verticalAdvanceWidth = 0 - topSideBearing = otRound(topSideY - glyph.yMax) - vMetrics[glyphName] = verticalAdvanceWidth, topSideBearing - - # Deprecated - - def _synthesizeVMetrics(self, glyphName, ttFont, defaultVerticalOrigin): - """This method is wrong and deprecated. 
- For rationale see: - https://github.com/fonttools/fonttools/pull/2266/files#r613569473 - """ - vMetrics = getattr(ttFont.get("vmtx"), "metrics", None) - if vMetrics is None: - verticalAdvanceWidth = ttFont["head"].unitsPerEm - topSideY = getattr(ttFont.get("hhea"), "ascent", None) - if topSideY is None: - if defaultVerticalOrigin is not None: - topSideY = defaultVerticalOrigin - else: - topSideY = verticalAdvanceWidth - glyph = self[glyphName] - glyph.recalcBounds(self) - topSideBearing = otRound(topSideY - glyph.yMax) - vMetrics = {glyphName: (verticalAdvanceWidth, topSideBearing)} - return vMetrics - - @deprecateFunction("use '_getPhantomPoints' instead", category=DeprecationWarning) - def getPhantomPoints(self, glyphName, ttFont, defaultVerticalOrigin=None): - """Old public name for self._getPhantomPoints(). - See: https://github.com/fonttools/fonttools/pull/2266""" - hMetrics = ttFont["hmtx"].metrics - vMetrics = self._synthesizeVMetrics(glyphName, ttFont, defaultVerticalOrigin) - return self._getPhantomPoints(glyphName, hMetrics, vMetrics) - - @deprecateFunction( - "use '_getCoordinatesAndControls' instead", category=DeprecationWarning - ) - def getCoordinatesAndControls(self, glyphName, ttFont, defaultVerticalOrigin=None): - """Old public name for self._getCoordinatesAndControls(). - See: https://github.com/fonttools/fonttools/pull/2266""" - hMetrics = ttFont["hmtx"].metrics - vMetrics = self._synthesizeVMetrics(glyphName, ttFont, defaultVerticalOrigin) - return self._getCoordinatesAndControls(glyphName, hMetrics, vMetrics) - - @deprecateFunction("use '_setCoordinates' instead", category=DeprecationWarning) - def setCoordinates(self, glyphName, ttFont): - """Old public name for self._setCoordinates(). - See: https://github.com/fonttools/fonttools/pull/2266""" - hMetrics = ttFont["hmtx"].metrics - vMetrics = getattr(ttFont.get("vmtx"), "metrics", None) - self._setCoordinates(glyphName, hMetrics, vMetrics) - - -_GlyphControls = namedtuple( - "_GlyphControls", "numberOfContours endPts flags components" -) - - -glyphHeaderFormat = """ - > # big endian - numberOfContours: h - xMin: h - yMin: h - xMax: h - yMax: h -""" - -# flags -flagOnCurve = 0x01 -flagXShort = 0x02 -flagYShort = 0x04 -flagRepeat = 0x08 -flagXsame = 0x10 -flagYsame = 0x20 -flagOverlapSimple = 0x40 -flagCubic = 0x80 - -# These flags are kept for XML output after decompiling the coordinates -keepFlags = flagOnCurve + flagOverlapSimple + flagCubic - -_flagSignBytes = { - 0: 2, - flagXsame: 0, - flagXShort | flagXsame: +1, - flagXShort: -1, - flagYsame: 0, - flagYShort | flagYsame: +1, - flagYShort: -1, -} - - -def flagBest(x, y, onCurve): - """For a given x,y delta pair, returns the flag that packs this pair - most efficiently, as well as the number of byte cost of such flag.""" - - flag = flagOnCurve if onCurve else 0 - cost = 0 - # do x - if x == 0: - flag = flag | flagXsame - elif -255 <= x <= 255: - flag = flag | flagXShort - if x > 0: - flag = flag | flagXsame - cost += 1 - else: - cost += 2 - # do y - if y == 0: - flag = flag | flagYsame - elif -255 <= y <= 255: - flag = flag | flagYShort - if y > 0: - flag = flag | flagYsame - cost += 1 - else: - cost += 2 - return flag, cost - - -def flagFits(newFlag, oldFlag, mask): - newBytes = _flagSignBytes[newFlag & mask] - oldBytes = _flagSignBytes[oldFlag & mask] - return newBytes == oldBytes or abs(newBytes) > abs(oldBytes) - - -def flagSupports(newFlag, oldFlag): - return ( - (oldFlag & flagOnCurve) == (newFlag & flagOnCurve) - and flagFits(newFlag, oldFlag, flagXsame | 
flagXShort) - and flagFits(newFlag, oldFlag, flagYsame | flagYShort) - ) - - -def flagEncodeCoord(flag, mask, coord, coordBytes): - byteCount = _flagSignBytes[flag & mask] - if byteCount == 1: - coordBytes.append(coord) - elif byteCount == -1: - coordBytes.append(-coord) - elif byteCount == 2: - coordBytes.extend(struct.pack(">h", coord)) - - -def flagEncodeCoords(flag, x, y, xBytes, yBytes): - flagEncodeCoord(flag, flagXsame | flagXShort, x, xBytes) - flagEncodeCoord(flag, flagYsame | flagYShort, y, yBytes) - - -ARG_1_AND_2_ARE_WORDS = 0x0001 # if set args are words otherwise they are bytes -ARGS_ARE_XY_VALUES = 0x0002 # if set args are xy values, otherwise they are points -ROUND_XY_TO_GRID = 0x0004 # for the xy values if above is true -WE_HAVE_A_SCALE = 0x0008 # Sx = Sy, otherwise scale == 1.0 -NON_OVERLAPPING = 0x0010 # set to same value for all components (obsolete!) -MORE_COMPONENTS = 0x0020 # indicates at least one more glyph after this one -WE_HAVE_AN_X_AND_Y_SCALE = 0x0040 # Sx, Sy -WE_HAVE_A_TWO_BY_TWO = 0x0080 # t00, t01, t10, t11 -WE_HAVE_INSTRUCTIONS = 0x0100 # instructions follow -USE_MY_METRICS = 0x0200 # apply these metrics to parent glyph -OVERLAP_COMPOUND = 0x0400 # used by Apple in GX fonts -SCALED_COMPONENT_OFFSET = 0x0800 # composite designed to have the component offset scaled (designed for Apple) -UNSCALED_COMPONENT_OFFSET = 0x1000 # composite designed not to have the component offset scaled (designed for MS) - - -CompositeMaxpValues = namedtuple( - "CompositeMaxpValues", ["nPoints", "nContours", "maxComponentDepth"] -) - - -class Glyph(object): - """This class represents an individual TrueType glyph. - - TrueType glyph objects come in two flavours: simple and composite. Simple - glyph objects contain contours, represented via the ``.coordinates``, - ``.flags``, ``.numberOfContours``, and ``.endPtsOfContours`` attributes; - composite glyphs contain components, available through the ``.components`` - attributes. - - Because the ``.coordinates`` attribute (and other simple glyph attributes mentioned - above) is only set on simple glyphs and the ``.components`` attribute is only - set on composite glyphs, it is necessary to use the :py:meth:`isComposite` - method to test whether a glyph is simple or composite before attempting to - access its data. - - For a composite glyph, the components can also be accessed via array-like access:: - - >> assert(font["glyf"]["Aacute"].isComposite()) - >> font["glyf"]["Aacute"][0] - - - """ - - def __init__(self, data=b""): - if not data: - # empty char - self.numberOfContours = 0 - return - self.data = data - - def compact(self, glyfTable, recalcBBoxes=True): - data = self.compile(glyfTable, recalcBBoxes) - self.__dict__.clear() - self.data = data - - def expand(self, glyfTable): - if not hasattr(self, "data"): - # already unpacked - return - if not self.data: - # empty char - del self.data - self.numberOfContours = 0 - return - dummy, data = sstruct.unpack2(glyphHeaderFormat, self.data, self) - del self.data - # Some fonts (eg. Neirizi.ttf) have a 0 for numberOfContours in - # some glyphs; decompileCoordinates assumes that there's at least - # one, so short-circuit here. 
- if self.numberOfContours == 0: - return - if self.isComposite(): - self.decompileComponents(data, glyfTable) - elif self.isVarComposite(): - self.decompileVarComponents(data, glyfTable) - else: - self.decompileCoordinates(data) - - def compile(self, glyfTable, recalcBBoxes=True): - if hasattr(self, "data"): - if recalcBBoxes: - # must unpack glyph in order to recalculate bounding box - self.expand(glyfTable) - else: - return self.data - if self.numberOfContours == 0: - return b"" - if recalcBBoxes: - self.recalcBounds(glyfTable) - data = sstruct.pack(glyphHeaderFormat, self) - if self.isComposite(): - data = data + self.compileComponents(glyfTable) - elif self.isVarComposite(): - data = data + self.compileVarComponents(glyfTable) - else: - data = data + self.compileCoordinates() - return data - - def toXML(self, writer, ttFont): - if self.isComposite(): - for compo in self.components: - compo.toXML(writer, ttFont) - haveInstructions = hasattr(self, "program") - elif self.isVarComposite(): - for compo in self.components: - compo.toXML(writer, ttFont) - haveInstructions = False - else: - last = 0 - for i in range(self.numberOfContours): - writer.begintag("contour") - writer.newline() - for j in range(last, self.endPtsOfContours[i] + 1): - attrs = [ - ("x", self.coordinates[j][0]), - ("y", self.coordinates[j][1]), - ("on", self.flags[j] & flagOnCurve), - ] - if self.flags[j] & flagOverlapSimple: - # Apple's rasterizer uses flagOverlapSimple in the first contour/first pt to flag glyphs that contain overlapping contours - attrs.append(("overlap", 1)) - if self.flags[j] & flagCubic: - attrs.append(("cubic", 1)) - writer.simpletag("pt", attrs) - writer.newline() - last = self.endPtsOfContours[i] + 1 - writer.endtag("contour") - writer.newline() - haveInstructions = self.numberOfContours > 0 - if haveInstructions: - if self.program: - writer.begintag("instructions") - writer.newline() - self.program.toXML(writer, ttFont) - writer.endtag("instructions") - else: - writer.simpletag("instructions") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if name == "contour": - if self.numberOfContours < 0: - raise ttLib.TTLibError("can't mix composites and contours in glyph") - self.numberOfContours = self.numberOfContours + 1 - coordinates = GlyphCoordinates() - flags = bytearray() - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - if name != "pt": - continue # ignore anything but "pt" - coordinates.append((safeEval(attrs["x"]), safeEval(attrs["y"]))) - flag = bool(safeEval(attrs["on"])) - if "overlap" in attrs and bool(safeEval(attrs["overlap"])): - flag |= flagOverlapSimple - if "cubic" in attrs and bool(safeEval(attrs["cubic"])): - flag |= flagCubic - flags.append(flag) - if not hasattr(self, "coordinates"): - self.coordinates = coordinates - self.flags = flags - self.endPtsOfContours = [len(coordinates) - 1] - else: - self.coordinates.extend(coordinates) - self.flags.extend(flags) - self.endPtsOfContours.append(len(self.coordinates) - 1) - elif name == "component": - if self.numberOfContours > 0: - raise ttLib.TTLibError("can't mix composites and contours in glyph") - self.numberOfContours = -1 - if not hasattr(self, "components"): - self.components = [] - component = GlyphComponent() - self.components.append(component) - component.fromXML(name, attrs, content, ttFont) - elif name == "varComponent": - if self.numberOfContours > 0: - raise ttLib.TTLibError("can't mix composites and contours in glyph") - 
self.numberOfContours = -2 - if not hasattr(self, "components"): - self.components = [] - component = GlyphVarComponent() - self.components.append(component) - component.fromXML(name, attrs, content, ttFont) - elif name == "instructions": - self.program = ttProgram.Program() - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - self.program.fromXML(name, attrs, content, ttFont) - - def getCompositeMaxpValues(self, glyfTable, maxComponentDepth=1): - assert self.isComposite() or self.isVarComposite() - nContours = 0 - nPoints = 0 - initialMaxComponentDepth = maxComponentDepth - for compo in self.components: - baseGlyph = glyfTable[compo.glyphName] - if baseGlyph.numberOfContours == 0: - continue - elif baseGlyph.numberOfContours > 0: - nP, nC = baseGlyph.getMaxpValues() - else: - nP, nC, componentDepth = baseGlyph.getCompositeMaxpValues( - glyfTable, initialMaxComponentDepth + 1 - ) - maxComponentDepth = max(maxComponentDepth, componentDepth) - nPoints = nPoints + nP - nContours = nContours + nC - return CompositeMaxpValues(nPoints, nContours, maxComponentDepth) - - def getMaxpValues(self): - assert self.numberOfContours > 0 - return len(self.coordinates), len(self.endPtsOfContours) - - def decompileComponents(self, data, glyfTable): - self.components = [] - more = 1 - haveInstructions = 0 - while more: - component = GlyphComponent() - more, haveInstr, data = component.decompile(data, glyfTable) - haveInstructions = haveInstructions | haveInstr - self.components.append(component) - if haveInstructions: - (numInstructions,) = struct.unpack(">h", data[:2]) - data = data[2:] - self.program = ttProgram.Program() - self.program.fromBytecode(data[:numInstructions]) - data = data[numInstructions:] - if len(data) >= 4: - log.warning( - "too much glyph data at the end of composite glyph: %d excess bytes", - len(data), - ) - - def decompileVarComponents(self, data, glyfTable): - self.components = [] - while len(data) >= GlyphVarComponent.MIN_SIZE: - component = GlyphVarComponent() - data = component.decompile(data, glyfTable) - self.components.append(component) - - def decompileCoordinates(self, data): - endPtsOfContours = array.array("H") - endPtsOfContours.frombytes(data[: 2 * self.numberOfContours]) - if sys.byteorder != "big": - endPtsOfContours.byteswap() - self.endPtsOfContours = endPtsOfContours.tolist() - - pos = 2 * self.numberOfContours - (instructionLength,) = struct.unpack(">h", data[pos : pos + 2]) - self.program = ttProgram.Program() - self.program.fromBytecode(data[pos + 2 : pos + 2 + instructionLength]) - pos += 2 + instructionLength - nCoordinates = self.endPtsOfContours[-1] + 1 - flags, xCoordinates, yCoordinates = self.decompileCoordinatesRaw( - nCoordinates, data, pos - ) - - # fill in repetitions and apply signs - self.coordinates = coordinates = GlyphCoordinates.zeros(nCoordinates) - xIndex = 0 - yIndex = 0 - for i in range(nCoordinates): - flag = flags[i] - # x coordinate - if flag & flagXShort: - if flag & flagXsame: - x = xCoordinates[xIndex] - else: - x = -xCoordinates[xIndex] - xIndex = xIndex + 1 - elif flag & flagXsame: - x = 0 - else: - x = xCoordinates[xIndex] - xIndex = xIndex + 1 - # y coordinate - if flag & flagYShort: - if flag & flagYsame: - y = yCoordinates[yIndex] - else: - y = -yCoordinates[yIndex] - yIndex = yIndex + 1 - elif flag & flagYsame: - y = 0 - else: - y = yCoordinates[yIndex] - yIndex = yIndex + 1 - coordinates[i] = (x, y) - assert xIndex == len(xCoordinates) - assert yIndex == len(yCoordinates) - 
coordinates.relativeToAbsolute() - # discard all flags except "keepFlags" - for i in range(len(flags)): - flags[i] &= keepFlags - self.flags = flags - - def decompileCoordinatesRaw(self, nCoordinates, data, pos=0): - # unpack flags and prepare unpacking of coordinates - flags = bytearray(nCoordinates) - # Warning: deep Python trickery going on. We use the struct module to unpack - # the coordinates. We build a format string based on the flags, so we can - # unpack the coordinates in one struct.unpack() call. - xFormat = ">" # big endian - yFormat = ">" # big endian - j = 0 - while True: - flag = data[pos] - pos += 1 - repeat = 1 - if flag & flagRepeat: - repeat = data[pos] + 1 - pos += 1 - for k in range(repeat): - if flag & flagXShort: - xFormat = xFormat + "B" - elif not (flag & flagXsame): - xFormat = xFormat + "h" - if flag & flagYShort: - yFormat = yFormat + "B" - elif not (flag & flagYsame): - yFormat = yFormat + "h" - flags[j] = flag - j = j + 1 - if j >= nCoordinates: - break - assert j == nCoordinates, "bad glyph flags" - # unpack raw coordinates, krrrrrr-tching! - xDataLen = struct.calcsize(xFormat) - yDataLen = struct.calcsize(yFormat) - if len(data) - pos - (xDataLen + yDataLen) >= 4: - log.warning( - "too much glyph data: %d excess bytes", - len(data) - pos - (xDataLen + yDataLen), - ) - xCoordinates = struct.unpack(xFormat, data[pos : pos + xDataLen]) - yCoordinates = struct.unpack( - yFormat, data[pos + xDataLen : pos + xDataLen + yDataLen] - ) - return flags, xCoordinates, yCoordinates - - def compileComponents(self, glyfTable): - data = b"" - lastcomponent = len(self.components) - 1 - more = 1 - haveInstructions = 0 - for i in range(len(self.components)): - if i == lastcomponent: - haveInstructions = hasattr(self, "program") - more = 0 - compo = self.components[i] - data = data + compo.compile(more, haveInstructions, glyfTable) - if haveInstructions: - instructions = self.program.getBytecode() - data = data + struct.pack(">h", len(instructions)) + instructions - return data - - def compileVarComponents(self, glyfTable): - return b"".join(c.compile(glyfTable) for c in self.components) - - def compileCoordinates(self): - assert len(self.coordinates) == len(self.flags) - data = [] - endPtsOfContours = array.array("H", self.endPtsOfContours) - if sys.byteorder != "big": - endPtsOfContours.byteswap() - data.append(endPtsOfContours.tobytes()) - instructions = self.program.getBytecode() - data.append(struct.pack(">h", len(instructions))) - data.append(instructions) - - deltas = self.coordinates.copy() - deltas.toInt() - deltas.absoluteToRelative() - - # TODO(behdad): Add a configuration option for this? - deltas = self.compileDeltasGreedy(self.flags, deltas) - # deltas = self.compileDeltasOptimal(self.flags, deltas) - - data.extend(deltas) - return b"".join(data) - - def compileDeltasGreedy(self, flags, deltas): - # Implements greedy algorithm for packing coordinate deltas: - # uses shortest representation one coordinate at a time. 
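- # For example, a delta pair (x, y) = (12, -300) packs x as one unsigned byte with
- # flagXShort|flagXsame set (positive and within 255), while y gets neither Y flag
- # and is stored as a signed 16-bit word: three coordinate bytes plus the flag byte,
- # before any repeat-compression of identical flags (see flagBest above).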
- compressedFlags = bytearray() - compressedXs = bytearray() - compressedYs = bytearray() - lastflag = None - repeat = 0 - for flag, (x, y) in zip(flags, deltas): - # Oh, the horrors of TrueType - # do x - if x == 0: - flag = flag | flagXsame - elif -255 <= x <= 255: - flag = flag | flagXShort - if x > 0: - flag = flag | flagXsame - else: - x = -x - compressedXs.append(x) - else: - compressedXs.extend(struct.pack(">h", x)) - # do y - if y == 0: - flag = flag | flagYsame - elif -255 <= y <= 255: - flag = flag | flagYShort - if y > 0: - flag = flag | flagYsame - else: - y = -y - compressedYs.append(y) - else: - compressedYs.extend(struct.pack(">h", y)) - # handle repeating flags - if flag == lastflag and repeat != 255: - repeat = repeat + 1 - if repeat == 1: - compressedFlags.append(flag) - else: - compressedFlags[-2] = flag | flagRepeat - compressedFlags[-1] = repeat - else: - repeat = 0 - compressedFlags.append(flag) - lastflag = flag - return (compressedFlags, compressedXs, compressedYs) - - def compileDeltasOptimal(self, flags, deltas): - # Implements optimal, dynaic-programming, algorithm for packing coordinate - # deltas. The savings are negligible :(. - candidates = [] - bestTuple = None - bestCost = 0 - repeat = 0 - for flag, (x, y) in zip(flags, deltas): - # Oh, the horrors of TrueType - flag, coordBytes = flagBest(x, y, flag) - bestCost += 1 + coordBytes - newCandidates = [ - (bestCost, bestTuple, flag, coordBytes), - (bestCost + 1, bestTuple, (flag | flagRepeat), coordBytes), - ] - for lastCost, lastTuple, lastFlag, coordBytes in candidates: - if ( - lastCost + coordBytes <= bestCost + 1 - and (lastFlag & flagRepeat) - and (lastFlag < 0xFF00) - and flagSupports(lastFlag, flag) - ): - if (lastFlag & 0xFF) == ( - flag | flagRepeat - ) and lastCost == bestCost + 1: - continue - newCandidates.append( - (lastCost + coordBytes, lastTuple, lastFlag + 256, coordBytes) - ) - candidates = newCandidates - bestTuple = min(candidates, key=lambda t: t[0]) - bestCost = bestTuple[0] - - flags = [] - while bestTuple: - cost, bestTuple, flag, coordBytes = bestTuple - flags.append(flag) - flags.reverse() - - compressedFlags = bytearray() - compressedXs = bytearray() - compressedYs = bytearray() - coords = iter(deltas) - ff = [] - for flag in flags: - repeatCount, flag = flag >> 8, flag & 0xFF - compressedFlags.append(flag) - if flag & flagRepeat: - assert repeatCount > 0 - compressedFlags.append(repeatCount) - else: - assert repeatCount == 0 - for i in range(1 + repeatCount): - x, y = next(coords) - flagEncodeCoords(flag, x, y, compressedXs, compressedYs) - ff.append(flag) - try: - next(coords) - raise Exception("internal error") - except StopIteration: - pass - - return (compressedFlags, compressedXs, compressedYs) - - def recalcBounds(self, glyfTable): - """Recalculates the bounds of the glyph. - - Each glyph object stores its bounding box in the - ``xMin``/``yMin``/``xMax``/``yMax`` attributes. These bounds must be - recomputed when the ``coordinates`` change. The ``table__g_l_y_f`` bounds - must be provided to resolve component bounds. 
- """ - try: - coords, endPts, flags = self.getCoordinates(glyfTable) - self.xMin, self.yMin, self.xMax, self.yMax = calcIntBounds(coords) - except NotImplementedError: - pass - - def isComposite(self): - """Test whether a glyph has components""" - if hasattr(self, "data"): - return struct.unpack(">h", self.data[:2])[0] == -1 if self.data else False - else: - return self.numberOfContours == -1 - - def isVarComposite(self): - """Test whether a glyph has variable components""" - if hasattr(self, "data"): - return struct.unpack(">h", self.data[:2])[0] == -2 if self.data else False - else: - return self.numberOfContours == -2 - - def getCoordinates(self, glyfTable): - """Return the coordinates, end points and flags - - This method returns three values: A :py:class:`GlyphCoordinates` object, - a list of the indexes of the final points of each contour (allowing you - to split up the coordinates list into contours) and a list of flags. - - On simple glyphs, this method returns information from the glyph's own - contours; on composite glyphs, it "flattens" all components recursively - to return a list of coordinates representing all the components involved - in the glyph. - - To interpret the flags for each point, see the "Simple Glyph Flags" - section of the `glyf table specification `. - """ - - if self.numberOfContours > 0: - return self.coordinates, self.endPtsOfContours, self.flags - elif self.isComposite(): - # it's a composite - allCoords = GlyphCoordinates() - allFlags = bytearray() - allEndPts = [] - for compo in self.components: - g = glyfTable[compo.glyphName] - try: - coordinates, endPts, flags = g.getCoordinates(glyfTable) - except RecursionError: - raise ttLib.TTLibError( - "glyph '%s' contains a recursive component reference" - % compo.glyphName - ) - coordinates = GlyphCoordinates(coordinates) - if hasattr(compo, "firstPt"): - # component uses two reference points: we apply the transform _before_ - # computing the offset between the points - if hasattr(compo, "transform"): - coordinates.transform(compo.transform) - x1, y1 = allCoords[compo.firstPt] - x2, y2 = coordinates[compo.secondPt] - move = x1 - x2, y1 - y2 - coordinates.translate(move) - else: - # component uses XY offsets - move = compo.x, compo.y - if not hasattr(compo, "transform"): - coordinates.translate(move) - else: - apple_way = compo.flags & SCALED_COMPONENT_OFFSET - ms_way = compo.flags & UNSCALED_COMPONENT_OFFSET - assert not (apple_way and ms_way) - if not (apple_way or ms_way): - scale_component_offset = ( - SCALE_COMPONENT_OFFSET_DEFAULT # see top of this file - ) - else: - scale_component_offset = apple_way - if scale_component_offset: - # the Apple way: first move, then scale (ie. scale the component offset) - coordinates.translate(move) - coordinates.transform(compo.transform) - else: - # the MS way: first scale, then move - coordinates.transform(compo.transform) - coordinates.translate(move) - offset = len(allCoords) - allEndPts.extend(e + offset for e in endPts) - allCoords.extend(coordinates) - allFlags.extend(flags) - return allCoords, allEndPts, allFlags - elif self.isVarComposite(): - raise NotImplementedError("use TTGlyphSet to draw VarComposite glyphs") - else: - return GlyphCoordinates(), [], bytearray() - - def getComponentNames(self, glyfTable): - """Returns a list of names of component glyphs used in this glyph - - This method can be used on simple glyphs (in which case it returns an - empty list) or composite glyphs. 
- """ - if hasattr(self, "data") and self.isVarComposite(): - # TODO(VarComposite) Add implementation without expanding glyph - self.expand(glyfTable) - - if not hasattr(self, "data"): - if self.isComposite() or self.isVarComposite(): - return [c.glyphName for c in self.components] - else: - return [] - - # Extract components without expanding glyph - - if not self.data or struct.unpack(">h", self.data[:2])[0] >= 0: - return [] # Not composite - - data = self.data - i = 10 - components = [] - more = 1 - while more: - flags, glyphID = struct.unpack(">HH", data[i : i + 4]) - i += 4 - flags = int(flags) - components.append(glyfTable.getGlyphName(int(glyphID))) - - if flags & ARG_1_AND_2_ARE_WORDS: - i += 4 - else: - i += 2 - if flags & WE_HAVE_A_SCALE: - i += 2 - elif flags & WE_HAVE_AN_X_AND_Y_SCALE: - i += 4 - elif flags & WE_HAVE_A_TWO_BY_TWO: - i += 8 - more = flags & MORE_COMPONENTS - - return components - - def trim(self, remove_hinting=False): - """Remove padding and, if requested, hinting, from a glyph. - This works on both expanded and compacted glyphs, without - expanding it.""" - if not hasattr(self, "data"): - if remove_hinting: - if self.isComposite(): - if hasattr(self, "program"): - del self.program - elif self.isVarComposite(): - pass # Doesn't have hinting - else: - self.program = ttProgram.Program() - self.program.fromBytecode([]) - # No padding to trim. - return - if not self.data: - return - numContours = struct.unpack(">h", self.data[:2])[0] - data = bytearray(self.data) - i = 10 - if numContours >= 0: - i += 2 * numContours # endPtsOfContours - nCoordinates = ((data[i - 2] << 8) | data[i - 1]) + 1 - instructionLen = (data[i] << 8) | data[i + 1] - if remove_hinting: - # Zero instruction length - data[i] = data[i + 1] = 0 - i += 2 - if instructionLen: - # Splice it out - data = data[:i] + data[i + instructionLen :] - instructionLen = 0 - else: - i += 2 + instructionLen - - coordBytes = 0 - j = 0 - while True: - flag = data[i] - i = i + 1 - repeat = 1 - if flag & flagRepeat: - repeat = data[i] + 1 - i = i + 1 - xBytes = yBytes = 0 - if flag & flagXShort: - xBytes = 1 - elif not (flag & flagXsame): - xBytes = 2 - if flag & flagYShort: - yBytes = 1 - elif not (flag & flagYsame): - yBytes = 2 - coordBytes += (xBytes + yBytes) * repeat - j += repeat - if j >= nCoordinates: - break - assert j == nCoordinates, "bad glyph flags" - i += coordBytes - # Remove padding - data = data[:i] - elif self.isComposite(): - more = 1 - we_have_instructions = False - while more: - flags = (data[i] << 8) | data[i + 1] - if remove_hinting: - flags &= ~WE_HAVE_INSTRUCTIONS - if flags & WE_HAVE_INSTRUCTIONS: - we_have_instructions = True - data[i + 0] = flags >> 8 - data[i + 1] = flags & 0xFF - i += 4 - flags = int(flags) - - if flags & ARG_1_AND_2_ARE_WORDS: - i += 4 - else: - i += 2 - if flags & WE_HAVE_A_SCALE: - i += 2 - elif flags & WE_HAVE_AN_X_AND_Y_SCALE: - i += 4 - elif flags & WE_HAVE_A_TWO_BY_TWO: - i += 8 - more = flags & MORE_COMPONENTS - if we_have_instructions: - instructionLen = (data[i] << 8) | data[i + 1] - i += 2 + instructionLen - # Remove padding - data = data[:i] - elif self.isVarComposite(): - i = 0 - MIN_SIZE = GlyphVarComponent.MIN_SIZE - while len(data[i : i + MIN_SIZE]) >= MIN_SIZE: - size = GlyphVarComponent.getSize(data[i : i + MIN_SIZE]) - i += size - data = data[:i] - - self.data = data - - def removeHinting(self): - """Removes TrueType hinting instructions from the glyph.""" - self.trim(remove_hinting=True) - - def draw(self, pen, glyfTable, offset=0): - """Draws the 
glyph using the supplied pen object. - - Arguments: - pen: An object conforming to the pen protocol. - glyfTable: A :py:class:`table__g_l_y_f` object, to resolve components. - offset (int): A horizontal offset. If provided, all coordinates are - translated by this offset. - """ - - if self.isComposite(): - for component in self.components: - glyphName, transform = component.getComponentInfo() - pen.addComponent(glyphName, transform) - return - - coordinates, endPts, flags = self.getCoordinates(glyfTable) - if offset: - coordinates = coordinates.copy() - coordinates.translate((offset, 0)) - start = 0 - maybeInt = lambda v: int(v) if v == int(v) else v - for end in endPts: - end = end + 1 - contour = coordinates[start:end] - cFlags = [flagOnCurve & f for f in flags[start:end]] - cuFlags = [flagCubic & f for f in flags[start:end]] - start = end - if 1 not in cFlags: - assert all(cuFlags) or not any(cuFlags) - cubic = all(cuFlags) - if cubic: - count = len(contour) - assert count % 2 == 0, "Odd number of cubic off-curves undefined" - l = contour[-1] - f = contour[0] - p0 = (maybeInt((l[0] + f[0]) * 0.5), maybeInt((l[1] + f[1]) * 0.5)) - pen.moveTo(p0) - for i in range(0, count, 2): - p1 = contour[i] - p2 = contour[i + 1] - p4 = contour[i + 2 if i + 2 < count else 0] - p3 = ( - maybeInt((p2[0] + p4[0]) * 0.5), - maybeInt((p2[1] + p4[1]) * 0.5), - ) - pen.curveTo(p1, p2, p3) - else: - # There is not a single on-curve point on the curve, - # use pen.qCurveTo's special case by specifying None - # as the on-curve point. - contour.append(None) - pen.qCurveTo(*contour) - else: - # Shuffle the points so that the contour is guaranteed - # to *end* in an on-curve point, which we'll use for - # the moveTo. - firstOnCurve = cFlags.index(1) + 1 - contour = contour[firstOnCurve:] + contour[:firstOnCurve] - cFlags = cFlags[firstOnCurve:] + cFlags[:firstOnCurve] - cuFlags = cuFlags[firstOnCurve:] + cuFlags[:firstOnCurve] - pen.moveTo(contour[-1]) - while contour: - nextOnCurve = cFlags.index(1) + 1 - if nextOnCurve == 1: - # Skip a final lineTo(), as it is implied by - # pen.closePath() - if len(contour) > 1: - pen.lineTo(contour[0]) - else: - cubicFlags = [f for f in cuFlags[: nextOnCurve - 1]] - assert all(cubicFlags) or not any(cubicFlags) - cubic = any(cubicFlags) - if cubic: - assert all( - cubicFlags - ), "Mixed cubic and quadratic segment undefined" - - count = nextOnCurve - assert ( - count >= 3 - ), "At least two cubic off-curve points required" - assert ( - count - 1 - ) % 2 == 0, "Odd number of cubic off-curves undefined" - for i in range(0, count - 3, 2): - p1 = contour[i] - p2 = contour[i + 1] - p4 = contour[i + 2] - p3 = ( - maybeInt((p2[0] + p4[0]) * 0.5), - maybeInt((p2[1] + p4[1]) * 0.5), - ) - lastOnCurve = p3 - pen.curveTo(p1, p2, p3) - pen.curveTo(*contour[count - 3 : count]) - else: - pen.qCurveTo(*contour[:nextOnCurve]) - contour = contour[nextOnCurve:] - cFlags = cFlags[nextOnCurve:] - cuFlags = cuFlags[nextOnCurve:] - pen.closePath() - - def drawPoints(self, pen, glyfTable, offset=0): - """Draw the glyph using the supplied pointPen. As opposed to Glyph.draw(), - this will not change the point indices. 
- """ - - if self.isComposite(): - for component in self.components: - glyphName, transform = component.getComponentInfo() - pen.addComponent(glyphName, transform) - return - - coordinates, endPts, flags = self.getCoordinates(glyfTable) - if offset: - coordinates = coordinates.copy() - coordinates.translate((offset, 0)) - start = 0 - for end in endPts: - end = end + 1 - contour = coordinates[start:end] - cFlags = flags[start:end] - start = end - pen.beginPath() - # Start with the appropriate segment type based on the final segment - - if cFlags[-1] & flagOnCurve: - segmentType = "line" - elif cFlags[-1] & flagCubic: - segmentType = "curve" - else: - segmentType = "qcurve" - for i, pt in enumerate(contour): - if cFlags[i] & flagOnCurve: - pen.addPoint(pt, segmentType=segmentType) - segmentType = "line" - else: - pen.addPoint(pt) - segmentType = "curve" if cFlags[i] & flagCubic else "qcurve" - pen.endPath() - - def __eq__(self, other): - if type(self) != type(other): - return NotImplemented - return self.__dict__ == other.__dict__ - - def __ne__(self, other): - result = self.__eq__(other) - return result if result is NotImplemented else not result - - -# Vector.__round__ uses the built-in (Banker's) `round` but we want -# to use otRound below -_roundv = partial(Vector.__round__, round=otRound) - - -def _is_mid_point(p0: tuple, p1: tuple, p2: tuple) -> bool: - # True if p1 is in the middle of p0 and p2, either before or after rounding - p0 = Vector(p0) - p1 = Vector(p1) - p2 = Vector(p2) - return ((p0 + p2) * 0.5).isclose(p1) or _roundv(p0) + _roundv(p2) == _roundv(p1) * 2 - - -def dropImpliedOnCurvePoints(*interpolatable_glyphs: Glyph) -> Set[int]: - """Drop impliable on-curve points from the (simple) glyph or glyphs. - - In TrueType glyf outlines, on-curve points can be implied when they are located at - the midpoint of the line connecting two consecutive off-curve points. - - If more than one glyphs are passed, these are assumed to be interpolatable masters - of the same glyph impliable, and thus only the on-curve points that are impliable - for all of them will actually be implied. - Composite glyphs or empty glyphs are skipped, only simple glyphs with 1 or more - contours are considered. - The input glyph(s) is/are modified in-place. - - Args: - interpolatable_glyphs: The glyph or glyphs to modify in-place. - - Returns: - The set of point indices that were dropped if any. - - Raises: - ValueError if simple glyphs are not in fact interpolatable because they have - different point flags or number of contours. 
- - Reference: - https://developer.apple.com/fonts/TrueType-Reference-Manual/RM01/Chap1.html - """ - staticAttributes = SimpleNamespace( - numberOfContours=None, flags=None, endPtsOfContours=None - ) - drop = None - simple_glyphs = [] - for i, glyph in enumerate(interpolatable_glyphs): - if glyph.numberOfContours < 1: - # ignore composite or empty glyphs - continue - - for attr in staticAttributes.__dict__: - expected = getattr(staticAttributes, attr) - found = getattr(glyph, attr) - if expected is None: - setattr(staticAttributes, attr, found) - elif expected != found: - raise ValueError( - f"Incompatible {attr} for glyph at master index {i}: " - f"expected {expected}, found {found}" - ) - - may_drop = set() - start = 0 - coords = glyph.coordinates - flags = staticAttributes.flags - endPtsOfContours = staticAttributes.endPtsOfContours - for last in endPtsOfContours: - for i in range(start, last + 1): - if not (flags[i] & flagOnCurve): - continue - prv = i - 1 if i > start else last - nxt = i + 1 if i < last else start - if (flags[prv] & flagOnCurve) or flags[prv] != flags[nxt]: - continue - # we may drop the ith on-curve if halfway between previous/next off-curves - if not _is_mid_point(coords[prv], coords[i], coords[nxt]): - continue - - may_drop.add(i) - start = last + 1 - # we only want to drop if ALL interpolatable glyphs have the same implied oncurves - if drop is None: - drop = may_drop - else: - drop.intersection_update(may_drop) - - simple_glyphs.append(glyph) - - if drop: - # Do the actual dropping - flags = staticAttributes.flags - assert flags is not None - newFlags = array.array( - "B", (flags[i] for i in range(len(flags)) if i not in drop) - ) - - endPts = staticAttributes.endPtsOfContours - assert endPts is not None - newEndPts = [] - i = 0 - delta = 0 - for d in sorted(drop): - while d > endPts[i]: - newEndPts.append(endPts[i] - delta) - i += 1 - delta += 1 - while i < len(endPts): - newEndPts.append(endPts[i] - delta) - i += 1 - - for glyph in simple_glyphs: - coords = glyph.coordinates - glyph.coordinates = GlyphCoordinates( - coords[i] for i in range(len(coords)) if i not in drop - ) - glyph.flags = newFlags - glyph.endPtsOfContours = newEndPts - - return drop if drop is not None else set() - - -class GlyphComponent(object): - """Represents a component within a composite glyph. - - The component is represented internally with four attributes: ``glyphName``, - ``x``, ``y`` and ``transform``. If there is no "two-by-two" matrix (i.e - no scaling, reflection, or rotation; only translation), the ``transform`` - attribute is not present. - """ - - # The above documentation is not *completely* true, but is *true enough* because - # the rare firstPt/lastPt attributes are not totally supported and nobody seems to - # mind - see below. - - def __init__(self): - pass - - def getComponentInfo(self): - """Return information about the component - - This method returns a tuple of two values: the glyph name of the component's - base glyph, and a transformation matrix. As opposed to accessing the attributes - directly, ``getComponentInfo`` always returns a six-element tuple of the - component's transformation matrix, even when the two-by-two ``.transform`` - matrix is not present. - """ - # XXX Ignoring self.firstPt & self.lastpt for now: I need to implement - # something equivalent in fontTools.objects.glyph (I'd rather not - # convert it to an absolute offset, since it is valuable information). - # This method will now raise "AttributeError: x" on glyphs that use - # this TT feature. 
- if hasattr(self, "transform"): - [[xx, xy], [yx, yy]] = self.transform - trans = (xx, xy, yx, yy, self.x, self.y) - else: - trans = (1, 0, 0, 1, self.x, self.y) - return self.glyphName, trans - - def decompile(self, data, glyfTable): - flags, glyphID = struct.unpack(">HH", data[:4]) - self.flags = int(flags) - glyphID = int(glyphID) - self.glyphName = glyfTable.getGlyphName(int(glyphID)) - data = data[4:] - - if self.flags & ARG_1_AND_2_ARE_WORDS: - if self.flags & ARGS_ARE_XY_VALUES: - self.x, self.y = struct.unpack(">hh", data[:4]) - else: - x, y = struct.unpack(">HH", data[:4]) - self.firstPt, self.secondPt = int(x), int(y) - data = data[4:] - else: - if self.flags & ARGS_ARE_XY_VALUES: - self.x, self.y = struct.unpack(">bb", data[:2]) - else: - x, y = struct.unpack(">BB", data[:2]) - self.firstPt, self.secondPt = int(x), int(y) - data = data[2:] - - if self.flags & WE_HAVE_A_SCALE: - (scale,) = struct.unpack(">h", data[:2]) - self.transform = [ - [fi2fl(scale, 14), 0], - [0, fi2fl(scale, 14)], - ] # fixed 2.14 - data = data[2:] - elif self.flags & WE_HAVE_AN_X_AND_Y_SCALE: - xscale, yscale = struct.unpack(">hh", data[:4]) - self.transform = [ - [fi2fl(xscale, 14), 0], - [0, fi2fl(yscale, 14)], - ] # fixed 2.14 - data = data[4:] - elif self.flags & WE_HAVE_A_TWO_BY_TWO: - (xscale, scale01, scale10, yscale) = struct.unpack(">hhhh", data[:8]) - self.transform = [ - [fi2fl(xscale, 14), fi2fl(scale01, 14)], - [fi2fl(scale10, 14), fi2fl(yscale, 14)], - ] # fixed 2.14 - data = data[8:] - more = self.flags & MORE_COMPONENTS - haveInstructions = self.flags & WE_HAVE_INSTRUCTIONS - self.flags = self.flags & ( - ROUND_XY_TO_GRID - | USE_MY_METRICS - | SCALED_COMPONENT_OFFSET - | UNSCALED_COMPONENT_OFFSET - | NON_OVERLAPPING - | OVERLAP_COMPOUND - ) - return more, haveInstructions, data - - def compile(self, more, haveInstructions, glyfTable): - data = b"" - - # reset all flags we will calculate ourselves - flags = self.flags & ( - ROUND_XY_TO_GRID - | USE_MY_METRICS - | SCALED_COMPONENT_OFFSET - | UNSCALED_COMPONENT_OFFSET - | NON_OVERLAPPING - | OVERLAP_COMPOUND - ) - if more: - flags = flags | MORE_COMPONENTS - if haveInstructions: - flags = flags | WE_HAVE_INSTRUCTIONS - - if hasattr(self, "firstPt"): - if (0 <= self.firstPt <= 255) and (0 <= self.secondPt <= 255): - data = data + struct.pack(">BB", self.firstPt, self.secondPt) - else: - data = data + struct.pack(">HH", self.firstPt, self.secondPt) - flags = flags | ARG_1_AND_2_ARE_WORDS - else: - x = otRound(self.x) - y = otRound(self.y) - flags = flags | ARGS_ARE_XY_VALUES - if (-128 <= x <= 127) and (-128 <= y <= 127): - data = data + struct.pack(">bb", x, y) - else: - data = data + struct.pack(">hh", x, y) - flags = flags | ARG_1_AND_2_ARE_WORDS - - if hasattr(self, "transform"): - transform = [[fl2fi(x, 14) for x in row] for row in self.transform] - if transform[0][1] or transform[1][0]: - flags = flags | WE_HAVE_A_TWO_BY_TWO - data = data + struct.pack( - ">hhhh", - transform[0][0], - transform[0][1], - transform[1][0], - transform[1][1], - ) - elif transform[0][0] != transform[1][1]: - flags = flags | WE_HAVE_AN_X_AND_Y_SCALE - data = data + struct.pack(">hh", transform[0][0], transform[1][1]) - else: - flags = flags | WE_HAVE_A_SCALE - data = data + struct.pack(">h", transform[0][0]) - - glyphID = glyfTable.getGlyphID(self.glyphName) - return struct.pack(">HH", flags, glyphID) + data - - def toXML(self, writer, ttFont): - attrs = [("glyphName", self.glyphName)] - if not hasattr(self, "firstPt"): - attrs = attrs + [("x", self.x), 
("y", self.y)] - else: - attrs = attrs + [("firstPt", self.firstPt), ("secondPt", self.secondPt)] - - if hasattr(self, "transform"): - transform = self.transform - if transform[0][1] or transform[1][0]: - attrs = attrs + [ - ("scalex", fl2str(transform[0][0], 14)), - ("scale01", fl2str(transform[0][1], 14)), - ("scale10", fl2str(transform[1][0], 14)), - ("scaley", fl2str(transform[1][1], 14)), - ] - elif transform[0][0] != transform[1][1]: - attrs = attrs + [ - ("scalex", fl2str(transform[0][0], 14)), - ("scaley", fl2str(transform[1][1], 14)), - ] - else: - attrs = attrs + [("scale", fl2str(transform[0][0], 14))] - attrs = attrs + [("flags", hex(self.flags))] - writer.simpletag("component", attrs) - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - self.glyphName = attrs["glyphName"] - if "firstPt" in attrs: - self.firstPt = safeEval(attrs["firstPt"]) - self.secondPt = safeEval(attrs["secondPt"]) - else: - self.x = safeEval(attrs["x"]) - self.y = safeEval(attrs["y"]) - if "scale01" in attrs: - scalex = str2fl(attrs["scalex"], 14) - scale01 = str2fl(attrs["scale01"], 14) - scale10 = str2fl(attrs["scale10"], 14) - scaley = str2fl(attrs["scaley"], 14) - self.transform = [[scalex, scale01], [scale10, scaley]] - elif "scalex" in attrs: - scalex = str2fl(attrs["scalex"], 14) - scaley = str2fl(attrs["scaley"], 14) - self.transform = [[scalex, 0], [0, scaley]] - elif "scale" in attrs: - scale = str2fl(attrs["scale"], 14) - self.transform = [[scale, 0], [0, scale]] - self.flags = safeEval(attrs["flags"]) - - def __eq__(self, other): - if type(self) != type(other): - return NotImplemented - return self.__dict__ == other.__dict__ - - def __ne__(self, other): - result = self.__eq__(other) - return result if result is NotImplemented else not result - - -# -# Variable Composite glyphs -# https://github.com/harfbuzz/boring-expansion-spec/blob/main/glyf1.md -# - - -class VarComponentFlags(IntFlag): - USE_MY_METRICS = 0x0001 - AXIS_INDICES_ARE_SHORT = 0x0002 - UNIFORM_SCALE = 0x0004 - HAVE_TRANSLATE_X = 0x0008 - HAVE_TRANSLATE_Y = 0x0010 - HAVE_ROTATION = 0x0020 - HAVE_SCALE_X = 0x0040 - HAVE_SCALE_Y = 0x0080 - HAVE_SKEW_X = 0x0100 - HAVE_SKEW_Y = 0x0200 - HAVE_TCENTER_X = 0x0400 - HAVE_TCENTER_Y = 0x0800 - GID_IS_24BIT = 0x1000 - AXES_HAVE_VARIATION = 0x2000 - RESET_UNSPECIFIED_AXES = 0x4000 - - -VarComponentTransformMappingValues = namedtuple( - "VarComponentTransformMappingValues", - ["flag", "fractionalBits", "scale", "defaultValue"], -) - -VAR_COMPONENT_TRANSFORM_MAPPING = { - "translateX": VarComponentTransformMappingValues( - VarComponentFlags.HAVE_TRANSLATE_X, 0, 1, 0 - ), - "translateY": VarComponentTransformMappingValues( - VarComponentFlags.HAVE_TRANSLATE_Y, 0, 1, 0 - ), - "rotation": VarComponentTransformMappingValues( - VarComponentFlags.HAVE_ROTATION, 12, 180, 0 - ), - "scaleX": VarComponentTransformMappingValues( - VarComponentFlags.HAVE_SCALE_X, 10, 1, 1 - ), - "scaleY": VarComponentTransformMappingValues( - VarComponentFlags.HAVE_SCALE_Y, 10, 1, 1 - ), - "skewX": VarComponentTransformMappingValues( - VarComponentFlags.HAVE_SKEW_X, 12, -180, 0 - ), - "skewY": VarComponentTransformMappingValues( - VarComponentFlags.HAVE_SKEW_Y, 12, 180, 0 - ), - "tCenterX": VarComponentTransformMappingValues( - VarComponentFlags.HAVE_TCENTER_X, 0, 1, 0 - ), - "tCenterY": VarComponentTransformMappingValues( - VarComponentFlags.HAVE_TCENTER_Y, 0, 1, 0 - ), -} - - -class GlyphVarComponent(object): - MIN_SIZE = 5 - - def __init__(self): - self.location = {} - self.transform = 
DecomposedTransform() - - @staticmethod - def getSize(data): - size = 5 - flags = struct.unpack(">H", data[:2])[0] - numAxes = int(data[2]) - - if flags & VarComponentFlags.GID_IS_24BIT: - size += 1 - - size += numAxes - if flags & VarComponentFlags.AXIS_INDICES_ARE_SHORT: - size += 2 * numAxes - else: - axisIndices = array.array("B", data[:numAxes]) - size += numAxes - - for attr_name, mapping_values in VAR_COMPONENT_TRANSFORM_MAPPING.items(): - if flags & mapping_values.flag: - size += 2 - - return size - - def decompile(self, data, glyfTable): - flags = struct.unpack(">H", data[:2])[0] - self.flags = int(flags) - data = data[2:] - - numAxes = int(data[0]) - data = data[1:] - - if flags & VarComponentFlags.GID_IS_24BIT: - glyphID = int(struct.unpack(">L", b"\0" + data[:3])[0]) - data = data[3:] - flags ^= VarComponentFlags.GID_IS_24BIT - else: - glyphID = int(struct.unpack(">H", data[:2])[0]) - data = data[2:] - self.glyphName = glyfTable.getGlyphName(int(glyphID)) - - if flags & VarComponentFlags.AXIS_INDICES_ARE_SHORT: - axisIndices = array.array("H", data[: 2 * numAxes]) - if sys.byteorder != "big": - axisIndices.byteswap() - data = data[2 * numAxes :] - flags ^= VarComponentFlags.AXIS_INDICES_ARE_SHORT - else: - axisIndices = array.array("B", data[:numAxes]) - data = data[numAxes:] - assert len(axisIndices) == numAxes - axisIndices = list(axisIndices) - - axisValues = array.array("h", data[: 2 * numAxes]) - if sys.byteorder != "big": - axisValues.byteswap() - data = data[2 * numAxes :] - assert len(axisValues) == numAxes - axisValues = [fi2fl(v, 14) for v in axisValues] - - self.location = { - glyfTable.axisTags[i]: v for i, v in zip(axisIndices, axisValues) - } - - def read_transform_component(data, values): - if flags & values.flag: - return ( - data[2:], - fi2fl(struct.unpack(">h", data[:2])[0], values.fractionalBits) - * values.scale, - ) - else: - return data, values.defaultValue - - for attr_name, mapping_values in VAR_COMPONENT_TRANSFORM_MAPPING.items(): - data, value = read_transform_component(data, mapping_values) - setattr(self.transform, attr_name, value) - - if flags & VarComponentFlags.UNIFORM_SCALE: - if flags & VarComponentFlags.HAVE_SCALE_X and not ( - flags & VarComponentFlags.HAVE_SCALE_Y - ): - self.transform.scaleY = self.transform.scaleX - flags |= VarComponentFlags.HAVE_SCALE_Y - flags ^= VarComponentFlags.UNIFORM_SCALE - - return data - - def compile(self, glyfTable): - data = b"" - - if not hasattr(self, "flags"): - flags = 0 - # Calculate optimal transform component flags - for attr_name, mapping in VAR_COMPONENT_TRANSFORM_MAPPING.items(): - value = getattr(self.transform, attr_name) - if fl2fi(value / mapping.scale, mapping.fractionalBits) != fl2fi( - mapping.defaultValue / mapping.scale, mapping.fractionalBits - ): - flags |= mapping.flag - else: - flags = self.flags - - if ( - flags & VarComponentFlags.HAVE_SCALE_X - and flags & VarComponentFlags.HAVE_SCALE_Y - and fl2fi(self.transform.scaleX, 10) == fl2fi(self.transform.scaleY, 10) - ): - flags |= VarComponentFlags.UNIFORM_SCALE - flags ^= VarComponentFlags.HAVE_SCALE_Y - - numAxes = len(self.location) - - data = data + struct.pack(">B", numAxes) - - glyphID = glyfTable.getGlyphID(self.glyphName) - if glyphID > 65535: - flags |= VarComponentFlags.GID_IS_24BIT - data = data + struct.pack(">L", glyphID)[1:] - else: - data = data + struct.pack(">H", glyphID) - - axisIndices = [glyfTable.axisTags.index(tag) for tag in self.location.keys()] - if all(a <= 255 for a in axisIndices): - axisIndices = 
array.array("B", axisIndices) - else: - axisIndices = array.array("H", axisIndices) - if sys.byteorder != "big": - axisIndices.byteswap() - flags |= VarComponentFlags.AXIS_INDICES_ARE_SHORT - data = data + bytes(axisIndices) - - axisValues = self.location.values() - axisValues = array.array("h", (fl2fi(v, 14) for v in axisValues)) - if sys.byteorder != "big": - axisValues.byteswap() - data = data + bytes(axisValues) - - def write_transform_component(data, value, values): - if flags & values.flag: - return data + struct.pack( - ">h", fl2fi(value / values.scale, values.fractionalBits) - ) - else: - return data - - for attr_name, mapping_values in VAR_COMPONENT_TRANSFORM_MAPPING.items(): - value = getattr(self.transform, attr_name) - data = write_transform_component(data, value, mapping_values) - - return struct.pack(">H", flags) + data - - def toXML(self, writer, ttFont): - attrs = [("glyphName", self.glyphName)] - - if hasattr(self, "flags"): - attrs = attrs + [("flags", hex(self.flags))] - - for attr_name, mapping in VAR_COMPONENT_TRANSFORM_MAPPING.items(): - v = getattr(self.transform, attr_name) - if v != mapping.defaultValue: - attrs.append((attr_name, fl2str(v, mapping.fractionalBits))) - - writer.begintag("varComponent", attrs) - writer.newline() - - writer.begintag("location") - writer.newline() - for tag, v in self.location.items(): - writer.simpletag("axis", [("tag", tag), ("value", fl2str(v, 14))]) - writer.newline() - writer.endtag("location") - writer.newline() - - writer.endtag("varComponent") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - self.glyphName = attrs["glyphName"] - - if "flags" in attrs: - self.flags = safeEval(attrs["flags"]) - - for attr_name, mapping in VAR_COMPONENT_TRANSFORM_MAPPING.items(): - if attr_name not in attrs: - continue - v = str2fl(safeEval(attrs[attr_name]), mapping.fractionalBits) - setattr(self.transform, attr_name, v) - - for c in content: - if not isinstance(c, tuple): - continue - name, attrs, content = c - if name != "location": - continue - for c in content: - if not isinstance(c, tuple): - continue - name, attrs, content = c - assert name == "axis" - assert not content - self.location[attrs["tag"]] = str2fl(safeEval(attrs["value"]), 14) - - def getPointCount(self): - assert hasattr(self, "flags"), "VarComponent with variations must have flags" - - count = 0 - - if self.flags & VarComponentFlags.AXES_HAVE_VARIATION: - count += len(self.location) - - if self.flags & ( - VarComponentFlags.HAVE_TRANSLATE_X | VarComponentFlags.HAVE_TRANSLATE_Y - ): - count += 1 - if self.flags & VarComponentFlags.HAVE_ROTATION: - count += 1 - if self.flags & ( - VarComponentFlags.HAVE_SCALE_X | VarComponentFlags.HAVE_SCALE_Y - ): - count += 1 - if self.flags & (VarComponentFlags.HAVE_SKEW_X | VarComponentFlags.HAVE_SKEW_Y): - count += 1 - if self.flags & ( - VarComponentFlags.HAVE_TCENTER_X | VarComponentFlags.HAVE_TCENTER_Y - ): - count += 1 - - return count - - def getCoordinatesAndControls(self): - coords = [] - controls = [] - - if self.flags & VarComponentFlags.AXES_HAVE_VARIATION: - for tag, v in self.location.items(): - controls.append(tag) - coords.append((fl2fi(v, 14), 0)) - - if self.flags & ( - VarComponentFlags.HAVE_TRANSLATE_X | VarComponentFlags.HAVE_TRANSLATE_Y - ): - controls.append("translate") - coords.append((self.transform.translateX, self.transform.translateY)) - if self.flags & VarComponentFlags.HAVE_ROTATION: - controls.append("rotation") - coords.append((fl2fi(self.transform.rotation / 180, 12), 0)) - if 
self.flags & ( - VarComponentFlags.HAVE_SCALE_X | VarComponentFlags.HAVE_SCALE_Y - ): - controls.append("scale") - coords.append( - (fl2fi(self.transform.scaleX, 10), fl2fi(self.transform.scaleY, 10)) - ) - if self.flags & (VarComponentFlags.HAVE_SKEW_X | VarComponentFlags.HAVE_SKEW_Y): - controls.append("skew") - coords.append( - ( - fl2fi(self.transform.skewX / -180, 12), - fl2fi(self.transform.skewY / 180, 12), - ) - ) - if self.flags & ( - VarComponentFlags.HAVE_TCENTER_X | VarComponentFlags.HAVE_TCENTER_Y - ): - controls.append("tCenter") - coords.append((self.transform.tCenterX, self.transform.tCenterY)) - - return coords, controls - - def setCoordinates(self, coords): - i = 0 - - if self.flags & VarComponentFlags.AXES_HAVE_VARIATION: - newLocation = {} - for tag in self.location: - newLocation[tag] = fi2fl(coords[i][0], 14) - i += 1 - self.location = newLocation - - self.transform = DecomposedTransform() - if self.flags & ( - VarComponentFlags.HAVE_TRANSLATE_X | VarComponentFlags.HAVE_TRANSLATE_Y - ): - self.transform.translateX, self.transform.translateY = coords[i] - i += 1 - if self.flags & VarComponentFlags.HAVE_ROTATION: - self.transform.rotation = fi2fl(coords[i][0], 12) * 180 - i += 1 - if self.flags & ( - VarComponentFlags.HAVE_SCALE_X | VarComponentFlags.HAVE_SCALE_Y - ): - self.transform.scaleX, self.transform.scaleY = fi2fl( - coords[i][0], 10 - ), fi2fl(coords[i][1], 10) - i += 1 - if self.flags & (VarComponentFlags.HAVE_SKEW_X | VarComponentFlags.HAVE_SKEW_Y): - self.transform.skewX, self.transform.skewY = ( - fi2fl(coords[i][0], 12) * -180, - fi2fl(coords[i][1], 12) * 180, - ) - i += 1 - if self.flags & ( - VarComponentFlags.HAVE_TCENTER_X | VarComponentFlags.HAVE_TCENTER_Y - ): - self.transform.tCenterX, self.transform.tCenterY = coords[i] - i += 1 - - return coords[i:] - - def __eq__(self, other): - if type(self) != type(other): - return NotImplemented - return self.__dict__ == other.__dict__ - - def __ne__(self, other): - result = self.__eq__(other) - return result if result is NotImplemented else not result - - -class GlyphCoordinates(object): - """A list of glyph coordinates. - - Unlike an ordinary list, this is a numpy-like matrix object which supports - matrix addition, scalar multiplication and other operations described below. 
- """ - - def __init__(self, iterable=[]): - self._a = array.array("d") - self.extend(iterable) - - @property - def array(self): - """Returns the underlying array of coordinates""" - return self._a - - @staticmethod - def zeros(count): - """Creates a new ``GlyphCoordinates`` object with all coordinates set to (0,0)""" - g = GlyphCoordinates() - g._a.frombytes(bytes(count * 2 * g._a.itemsize)) - return g - - def copy(self): - """Creates a new ``GlyphCoordinates`` object which is a copy of the current one.""" - c = GlyphCoordinates() - c._a.extend(self._a) - return c - - def __len__(self): - """Returns the number of coordinates in the array.""" - return len(self._a) // 2 - - def __getitem__(self, k): - """Returns a two element tuple (x,y)""" - if isinstance(k, slice): - indices = range(*k.indices(len(self))) - return [self[i] for i in indices] - a = self._a - x = a[2 * k] - y = a[2 * k + 1] - return (int(x) if x.is_integer() else x, int(y) if y.is_integer() else y) - - def __setitem__(self, k, v): - """Sets a point's coordinates to a two element tuple (x,y)""" - if isinstance(k, slice): - indices = range(*k.indices(len(self))) - # XXX This only works if len(v) == len(indices) - for j, i in enumerate(indices): - self[i] = v[j] - return - self._a[2 * k], self._a[2 * k + 1] = v - - def __delitem__(self, i): - """Removes a point from the list""" - i = (2 * i) % len(self._a) - del self._a[i] - del self._a[i] - - def __repr__(self): - return "GlyphCoordinates([" + ",".join(str(c) for c in self) + "])" - - def append(self, p): - self._a.extend(tuple(p)) - - def extend(self, iterable): - for p in iterable: - self._a.extend(p) - - def toInt(self, *, round=otRound): - if round is noRound: - return - a = self._a - for i in range(len(a)): - a[i] = round(a[i]) - - def relativeToAbsolute(self): - a = self._a - x, y = 0, 0 - for i in range(0, len(a), 2): - a[i] = x = a[i] + x - a[i + 1] = y = a[i + 1] + y - - def absoluteToRelative(self): - a = self._a - x, y = 0, 0 - for i in range(0, len(a), 2): - nx = a[i] - ny = a[i + 1] - a[i] = nx - x - a[i + 1] = ny - y - x = nx - y = ny - - def translate(self, p): - """ - >>> GlyphCoordinates([(1,2)]).translate((.5,0)) - """ - x, y = p - if x == 0 and y == 0: - return - a = self._a - for i in range(0, len(a), 2): - a[i] += x - a[i + 1] += y - - def scale(self, p): - """ - >>> GlyphCoordinates([(1,2)]).scale((.5,0)) - """ - x, y = p - if x == 1 and y == 1: - return - a = self._a - for i in range(0, len(a), 2): - a[i] *= x - a[i + 1] *= y - - def transform(self, t): - """ - >>> GlyphCoordinates([(1,2)]).transform(((.5,0),(.2,.5))) - """ - a = self._a - for i in range(0, len(a), 2): - x = a[i] - y = a[i + 1] - px = x * t[0][0] + y * t[1][0] - py = x * t[0][1] + y * t[1][1] - a[i] = px - a[i + 1] = py - - def __eq__(self, other): - """ - >>> g = GlyphCoordinates([(1,2)]) - >>> g2 = GlyphCoordinates([(1.0,2)]) - >>> g3 = GlyphCoordinates([(1.5,2)]) - >>> g == g2 - True - >>> g == g3 - False - >>> g2 == g3 - False - """ - if type(self) != type(other): - return NotImplemented - return self._a == other._a - - def __ne__(self, other): - """ - >>> g = GlyphCoordinates([(1,2)]) - >>> g2 = GlyphCoordinates([(1.0,2)]) - >>> g3 = GlyphCoordinates([(1.5,2)]) - >>> g != g2 - False - >>> g != g3 - True - >>> g2 != g3 - True - """ - result = self.__eq__(other) - return result if result is NotImplemented else not result - - # Math operations - - def __pos__(self): - """ - >>> g = GlyphCoordinates([(1,2)]) - >>> g - GlyphCoordinates([(1, 2)]) - >>> g2 = +g - >>> g2 - 
GlyphCoordinates([(1, 2)]) - >>> g2.translate((1,0)) - >>> g2 - GlyphCoordinates([(2, 2)]) - >>> g - GlyphCoordinates([(1, 2)]) - """ - return self.copy() - - def __neg__(self): - """ - >>> g = GlyphCoordinates([(1,2)]) - >>> g - GlyphCoordinates([(1, 2)]) - >>> g2 = -g - >>> g2 - GlyphCoordinates([(-1, -2)]) - >>> g - GlyphCoordinates([(1, 2)]) - """ - r = self.copy() - a = r._a - for i in range(len(a)): - a[i] = -a[i] - return r - - def __round__(self, *, round=otRound): - r = self.copy() - r.toInt(round=round) - return r - - def __add__(self, other): - return self.copy().__iadd__(other) - - def __sub__(self, other): - return self.copy().__isub__(other) - - def __mul__(self, other): - return self.copy().__imul__(other) - - def __truediv__(self, other): - return self.copy().__itruediv__(other) - - __radd__ = __add__ - __rmul__ = __mul__ - - def __rsub__(self, other): - return other + (-self) - - def __iadd__(self, other): - """ - >>> g = GlyphCoordinates([(1,2)]) - >>> g += (.5,0) - >>> g - GlyphCoordinates([(1.5, 2)]) - >>> g2 = GlyphCoordinates([(3,4)]) - >>> g += g2 - >>> g - GlyphCoordinates([(4.5, 6)]) - """ - if isinstance(other, tuple): - assert len(other) == 2 - self.translate(other) - return self - if isinstance(other, GlyphCoordinates): - other = other._a - a = self._a - assert len(a) == len(other) - for i in range(len(a)): - a[i] += other[i] - return self - return NotImplemented - - def __isub__(self, other): - """ - >>> g = GlyphCoordinates([(1,2)]) - >>> g -= (.5,0) - >>> g - GlyphCoordinates([(0.5, 2)]) - >>> g2 = GlyphCoordinates([(3,4)]) - >>> g -= g2 - >>> g - GlyphCoordinates([(-2.5, -2)]) - """ - if isinstance(other, tuple): - assert len(other) == 2 - self.translate((-other[0], -other[1])) - return self - if isinstance(other, GlyphCoordinates): - other = other._a - a = self._a - assert len(a) == len(other) - for i in range(len(a)): - a[i] -= other[i] - return self - return NotImplemented - - def __imul__(self, other): - """ - >>> g = GlyphCoordinates([(1,2)]) - >>> g *= (2,.5) - >>> g *= 2 - >>> g - GlyphCoordinates([(4, 2)]) - >>> g = GlyphCoordinates([(1,2)]) - >>> g *= 2 - >>> g - GlyphCoordinates([(2, 4)]) - """ - if isinstance(other, tuple): - assert len(other) == 2 - self.scale(other) - return self - if isinstance(other, Number): - if other == 1: - return self - a = self._a - for i in range(len(a)): - a[i] *= other - return self - return NotImplemented - - def __itruediv__(self, other): - """ - >>> g = GlyphCoordinates([(1,3)]) - >>> g /= (.5,1.5) - >>> g /= 2 - >>> g - GlyphCoordinates([(1, 1)]) - """ - if isinstance(other, Number): - other = (other, other) - if isinstance(other, tuple): - if other == (1, 1): - return self - assert len(other) == 2 - self.scale((1.0 / other[0], 1.0 / other[1])) - return self - return NotImplemented - - def __bool__(self): - """ - >>> g = GlyphCoordinates([]) - >>> bool(g) - False - >>> g = GlyphCoordinates([(0,0), (0.,0)]) - >>> bool(g) - True - >>> g = GlyphCoordinates([(0,0), (1,0)]) - >>> bool(g) - True - >>> g = GlyphCoordinates([(0,.5), (0,0)]) - >>> bool(g) - True - """ - return bool(self._a) - - __nonzero__ = __bool__ - - -if __name__ == "__main__": - import doctest, sys - - sys.exit(doctest.testmod().failed) diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mdec.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mdec.c deleted file mode 100644 index 640b671a0fe5b47ce49ba00855e0c9d351a6ccf1..0000000000000000000000000000000000000000 --- 
a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mdec.c +++ /dev/null @@ -1,261 +0,0 @@ -/* - * Sony PlayStation MDEC (Motion DECoder) - * Copyright (c) 2003 Michael Niedermayer - * - * based upon code from Sebastian Jedruszkiewicz - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * Sony PlayStation MDEC (Motion DECoder) - * This is very similar to intra-only MPEG-1. - */ - -#include "libavutil/mem_internal.h" - -#include "avcodec.h" -#include "blockdsp.h" -#include "bswapdsp.h" -#include "codec_internal.h" -#include "idctdsp.h" -#include "mpeg12data.h" -#include "mpeg12dec.h" -#include "thread.h" - -typedef struct MDECContext { - AVCodecContext *avctx; - BlockDSPContext bdsp; - BswapDSPContext bbdsp; - IDCTDSPContext idsp; - GetBitContext gb; - uint8_t permutated_scantable[64]; - int version; - int qscale; - int last_dc[3]; - int mb_width; - int mb_height; - int mb_x, mb_y; - DECLARE_ALIGNED(32, int16_t, block)[6][64]; - DECLARE_ALIGNED(16, uint16_t, quant_matrix)[64]; - uint8_t *bitstream_buffer; - unsigned int bitstream_buffer_size; - int block_last_index[6]; -} MDECContext; - -//very similar to MPEG-1 -static inline int mdec_decode_block_intra(MDECContext *a, int16_t *block, int n) -{ - int level, diff, i, j, run; - int component; - const uint8_t *const scantable = a->permutated_scantable; - const uint16_t *quant_matrix = a->quant_matrix; - const int qscale = a->qscale; - - /* DC coefficient */ - if (a->version == 2) { - block[0] = 2 * get_sbits(&a->gb, 10) + 1024; - } else { - component = (n <= 3 ? 
0 : n - 4 + 1); - diff = decode_dc(&a->gb, component); - a->last_dc[component] += diff; - block[0] = a->last_dc[component] * (1 << 3); - } - - i = 0; - { - OPEN_READER(re, &a->gb); - /* now quantify & encode AC coefficients */ - for (;;) { - UPDATE_CACHE(re, &a->gb); - GET_RL_VLC(level, run, re, &a->gb, ff_mpeg1_rl_vlc, TEX_VLC_BITS, 2, 0); - - if (level == 127) { - break; - } else if (level != 0) { - i += run; - if (i > 63) { - av_log(a->avctx, AV_LOG_ERROR, - "ac-tex damaged at %d %d\n", a->mb_x, a->mb_y); - return AVERROR_INVALIDDATA; - } - j = scantable[i]; - level = (level * qscale * quant_matrix[j]) >> 3; - level = (level ^ SHOW_SBITS(re, &a->gb, 1)) - SHOW_SBITS(re, &a->gb, 1); - LAST_SKIP_BITS(re, &a->gb, 1); - } else { - /* escape */ - run = SHOW_UBITS(re, &a->gb, 6)+1; LAST_SKIP_BITS(re, &a->gb, 6); - UPDATE_CACHE(re, &a->gb); - level = SHOW_SBITS(re, &a->gb, 10); SKIP_BITS(re, &a->gb, 10); - i += run; - if (i > 63) { - av_log(a->avctx, AV_LOG_ERROR, - "ac-tex damaged at %d %d\n", a->mb_x, a->mb_y); - return AVERROR_INVALIDDATA; - } - j = scantable[i]; - if (level < 0) { - level = -level; - level = (level * (unsigned)qscale * quant_matrix[j]) >> 3; - level = (level - 1) | 1; - level = -level; - } else { - level = (level * (unsigned)qscale * quant_matrix[j]) >> 3; - level = (level - 1) | 1; - } - } - - block[j] = level; - } - CLOSE_READER(re, &a->gb); - } - a->block_last_index[n] = i; - return 0; -} - -static inline int decode_mb(MDECContext *a, int16_t block[6][64]) -{ - int i, ret; - static const int block_index[6] = { 5, 4, 0, 1, 2, 3 }; - - a->bdsp.clear_blocks(block[0]); - - for (i = 0; i < 6; i++) { - if ((ret = mdec_decode_block_intra(a, block[block_index[i]], - block_index[i])) < 0) - return ret; - if (get_bits_left(&a->gb) < 0) - return AVERROR_INVALIDDATA; - } - return 0; -} - -static inline void idct_put(MDECContext *a, AVFrame *frame, int mb_x, int mb_y) -{ - int16_t (*block)[64] = a->block; - int linesize = frame->linesize[0]; - - uint8_t *dest_y = frame->data[0] + (mb_y * 16* linesize ) + mb_x * 16; - uint8_t *dest_cb = frame->data[1] + (mb_y * 8 * frame->linesize[1]) + mb_x * 8; - uint8_t *dest_cr = frame->data[2] + (mb_y * 8 * frame->linesize[2]) + mb_x * 8; - - a->idsp.idct_put(dest_y, linesize, block[0]); - a->idsp.idct_put(dest_y + 8, linesize, block[1]); - a->idsp.idct_put(dest_y + 8 * linesize, linesize, block[2]); - a->idsp.idct_put(dest_y + 8 * linesize + 8, linesize, block[3]); - - if (!(a->avctx->flags & AV_CODEC_FLAG_GRAY)) { - a->idsp.idct_put(dest_cb, frame->linesize[1], block[4]); - a->idsp.idct_put(dest_cr, frame->linesize[2], block[5]); - } -} - -static int decode_frame(AVCodecContext *avctx, AVFrame *frame, - int *got_frame, AVPacket *avpkt) -{ - MDECContext * const a = avctx->priv_data; - const uint8_t *buf = avpkt->data; - int buf_size = avpkt->size; - int ret; - - if ((ret = ff_thread_get_buffer(avctx, frame, 0)) < 0) - return ret; - frame->pict_type = AV_PICTURE_TYPE_I; - frame->key_frame = 1; - - av_fast_padded_malloc(&a->bitstream_buffer, &a->bitstream_buffer_size, buf_size); - if (!a->bitstream_buffer) - return AVERROR(ENOMEM); - a->bbdsp.bswap16_buf((uint16_t *)a->bitstream_buffer, (uint16_t *)buf, (buf_size + 1) / 2); - if ((ret = init_get_bits8(&a->gb, a->bitstream_buffer, buf_size)) < 0) - return ret; - - /* skip over 4 preamble bytes in stream (typically 0xXX 0xXX 0x00 0x38) */ - skip_bits(&a->gb, 32); - - a->qscale = get_bits(&a->gb, 16); - a->version = get_bits(&a->gb, 16); - - a->last_dc[0] = a->last_dc[1] = a->last_dc[2] = 128; - - 
for (a->mb_x = 0; a->mb_x < a->mb_width; a->mb_x++) { - for (a->mb_y = 0; a->mb_y < a->mb_height; a->mb_y++) { - if ((ret = decode_mb(a, a->block)) < 0) - return ret; - - idct_put(a, frame, a->mb_x, a->mb_y); - } - } - - *got_frame = 1; - - return (get_bits_count(&a->gb) + 31) / 32 * 4; -} - -static av_cold int decode_init(AVCodecContext *avctx) -{ - MDECContext * const a = avctx->priv_data; - int i; - - a->mb_width = (avctx->coded_width + 15) / 16; - a->mb_height = (avctx->coded_height + 15) / 16; - - a->avctx = avctx; - - ff_blockdsp_init(&a->bdsp); - ff_bswapdsp_init(&a->bbdsp); - ff_idctdsp_init(&a->idsp, avctx); - ff_mpeg12_init_vlcs(); - ff_permute_scantable(a->permutated_scantable, ff_zigzag_direct, - a->idsp.idct_permutation); - - avctx->pix_fmt = AV_PIX_FMT_YUVJ420P; - avctx->color_range = AVCOL_RANGE_JPEG; - - /* init q matrix */ - for (i = 0; i < 64; i++) { - int j = a->idsp.idct_permutation[i]; - - a->quant_matrix[j] = ff_mpeg1_default_intra_matrix[i]; - } - - return 0; -} - -static av_cold int decode_end(AVCodecContext *avctx) -{ - MDECContext * const a = avctx->priv_data; - - av_freep(&a->bitstream_buffer); - a->bitstream_buffer_size = 0; - - return 0; -} - -const FFCodec ff_mdec_decoder = { - .p.name = "mdec", - CODEC_LONG_NAME("Sony PlayStation MDEC (Motion DECoder)"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_MDEC, - .priv_data_size = sizeof(MDECContext), - .init = decode_init, - .close = decode_end, - FF_CODEC_DECODE_CB(decode_frame), - .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_FRAME_THREADS, -}; diff --git a/spaces/congsaPfin/Manga-OCR/logs/ARK Survival Evolved APK Old Version How to Get the Original Version of the Game for Android.md b/spaces/congsaPfin/Manga-OCR/logs/ARK Survival Evolved APK Old Version How to Get the Original Version of the Game for Android.md deleted file mode 100644 index f94e6d0dbbac917379f94af119c30e5c009e3bfa..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/ARK Survival Evolved APK Old Version How to Get the Original Version of the Game for Android.md +++ /dev/null @@ -1,97 +0,0 @@ -
    -

    Download ARK Survival Evolved APK Old Version

    -

    If you are a fan of survival games, you might have heard of ARK Survival Evolved, a popular game that lets you explore a vast island full of dinosaurs and other creatures. You can fight, tame, breed, and ride these creatures, as well as craft weapons, tools, and structures to survive. You can also team up with other players online or play solo in offline mode.

    -

    ARK Survival Evolved is available for various platforms, including Android devices. However, some players prefer to download the ARK Survival Evolved APK old version instead of the latest one. Why is that? And how can you do it? In this article, we will answer these questions and more.

    -

    download ark survival evolved apk old version


    Downloadhttps://urlca.com/2uO4Us



    -

    What is ARK Survival Evolved?

    -

    ARK Survival Evolved is an action-adventure game developed by Studio Wildcard, in collaboration with Instinct Games, Efecto Studios, and Virtual Basement. It was released for Android devices in 2018, after being available for PC, PlayStation 4, and Xbox One since 2015.

    -

    In ARK Survival Evolved, you are stranded on a mysterious island called ARK, where you have to survive by harvesting resources, crafting items, building shelters, and hunting or taming dinosaurs and other creatures. You can also interact with other players online or play solo in offline mode.

    -


    -

    Features of ARK Survival Evolved

    -

    Some of the features that make ARK Survival Evolved an exciting game are:

    -
      -
    • 80+ dinosaurs and creatures: You can encounter, fight, tame, breed, and ride various types of dinosaurs and creatures, such as Tyrannosaurus Rex, Triceratops, Velociraptor, Mammoth, Dragon, and more.
    • -
    • Crafting and building: You can use the resources you gather to craft weapons, tools, armor, clothes, and other items. You can also build structures such as houses, fences, towers, traps, and more.
    • -
    • Exploration and adventure: You can explore the vast island of ARK, which has different biomes such as jungles, mountains, caves, swamps, volcanoes, and more. You can also find hidden secrets, artifacts, and bosses.
    • -
    • Multiplayer and solo modes: You can join or create your own tribe with other players online and cooperate or compete with them. You can also play solo in offline mode and customize the game settings to your liking.
    • -
    • Stunning graphics and sound: The game has high-quality graphics that make the island and the creatures look realistic and immersive. The game also has realistic sound effects that enhance the gameplay experience.
    • -
    -

    Why download ARK Survival Evolved APK old version?

    -

    While the latest version of ARK Survival Evolved may have more features and improvements than the old versions, some players may prefer to download the old versions for various reasons. Here are some of them:

    -

    Pros of ARK Survival Evolved APK old version

    -
      -
    • Compatibility: Some older devices may not be able to run the latest version of the game smoothly or at all. Downloading an older version may solve this problem and allow you to play the game without any issues.
    • -
    • Nostalgia: Some players may have fond memories of playing an older version of the game and want to relive them. Downloading an older version may bring back those memories and feelings.
    • -
    • V

      Cons of ARK Survival Evolved APK old version

      -
        -
      • Missing features: Some older versions of the game may not have the features that the latest version has, such as new dinosaurs, items, biomes, modes, and more. You may miss out on some of the fun and excitement that the latest version offers.
      • -
      • Bugs and glitches: Some older versions of the game may have bugs and glitches that affect the gameplay quality and performance. The latest version may have fixed these issues and made the game more stable and smooth.
      • -
      • Security risks: Some older versions of the game may have security vulnerabilities that expose your device and data to hackers and malware. The latest version may have patched these holes and made the game more secure and safe.
      • -
      -

      How to download ARK Survival Evolved APK old version?

      -

      If you still want to download ARK Survival Evolved APK old version, you need to follow these steps:

      -

      Step 1: Find a reliable source

      -

      The first step is to find a reliable source that offers the ARK Survival Evolved APK old version that you want. You can search online for websites or forums that provide links to download the APK file. However, you need to be careful and avoid downloading from shady or unknown sources that may contain viruses or malware. You can also check the reviews and ratings of the source to see if it is trustworthy and safe.
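
      If the source you pick publishes a checksum for its files, you can compare it against what you actually downloaded before installing anything. The short Python sketch below only illustrates that idea; the file name is a placeholder, not an official ARK download.

      # Hypothetical sketch: print the SHA-256 of a downloaded APK so it can be
      # compared with a checksum published by the download source (if any).
      # The file name below is a placeholder.
      import hashlib

      def sha256_of(path):
          digest = hashlib.sha256()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(8192), b""):
                  digest.update(chunk)
          return digest.hexdigest()

      print(sha256_of("ark-survival-evolved-old.apk"))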

      -

      Step 2: Download the APK file

      -

      The second step is to download the APK file from the source that you have chosen. You need to make sure that you have enough storage space on your device to download the file. You also need to enable the option to install apps from unknown sources on your device settings. This will allow you to install the APK file without any problems.

      -

      Step 3: Install the APK file

      -

      The third step is to install the APK file on your device. You need to locate the file in your device's file manager and tap on it to start the installation process. You may need to grant some permissions and accept some terms and conditions before the installation is complete.
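
      If you have a computer handy, sideloading with Android's adb tool is an alternative to tapping the file on the phone. This is only a sketch: it assumes adb is installed and on your PATH, USB debugging is enabled on the device, and the APK path is a placeholder.

      # Hypothetical sketch: install a downloaded APK over USB with adb.
      import subprocess

      apk_path = "Downloads/ark-survival-evolved-old.apk"  # placeholder path
      subprocess.run(["adb", "install", "-r", apk_path], check=True)  # -r replaces an existing install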

      -

      Step 4: Enjoy the game

      -

      The final step is to enjoy the game. You can launch the game from your device's app drawer or home screen and start playing. You can also check if the game is working properly and has all the features that you want.

      -

      Conclusion

      -

      In this article, we have explained what ARK Survival Evolved is, why some players prefer to download ARK Survival Evolved APK old version, and how to do it. We have also listed some of the pros and cons of downloading ARK Survival Evolved APK old version. We hope that this article has been helpful and informative for you.

      -

      If you have any questions or comments, feel free to leave them below. We would love to hear from you.

      -

      FAQs

      -
        -
      • Q: Is ARK Survival Evolved free to play?
      • -
      • A: Yes, ARK Survival Evolved is free to play on Android devices. However, it may contain in-app purchases and ads that require real money.
      • -
      • Q: What are the minimum requirements to play ARK Survival Evolved on Android devices?
      • -
      • A: According to the official website, you need an Android device with at least 3 GB of RAM, a quad-core processor, a GPU with OpenGL ES 3.1 support, and Android 7.0 or higher.
      • -
      • Q: Can I play ARK Survival Evolved offline?
      • -
      • A: Yes, you can play ARK Survival Evolved offline in solo mode. However, you need an internet connection to play online with other players or access some features such as cloud saving.
      • -
      • Q: Can I transfer my progress from one device to another?
      • -
      • A: Yes, you can transfer your progress from one device to another by using cloud saving or Google Play Games services. However, you need an internet connection and a Google account to do so.
      • -
      • Q: Can I play ARK Survival Evolved with a controller?
      • -
      • A: Yes, you can play ARK Survival Evolved with a controller if your device supports it. You can also customize the controller settings in the game options.
      • -

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Become a Bus Driver in Bus Simulator Indonesia Enjoy the unique culture and scenery of Indonesia in this amazing bus game..md b/spaces/congsaPfin/Manga-OCR/logs/Become a Bus Driver in Bus Simulator Indonesia Enjoy the unique culture and scenery of Indonesia in this amazing bus game..md deleted file mode 100644 index c09bb94435438be5dd50279e1e5085f088c79e2a..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Become a Bus Driver in Bus Simulator Indonesia Enjoy the unique culture and scenery of Indonesia in this amazing bus game..md +++ /dev/null @@ -1,150 +0,0 @@ - -

      Bus Real Game Download: How to Enjoy the Best Bus Simulator Games on Your Device

      -

      Do you love driving buses and transporting passengers from one place to another? Do you want to experience the thrill of driving realistic buses on different roads and cities? If yes, then you should try out some of the best bus simulator games available for download on your device. In this article, we will show you how to download and play bus real game on your device, and also recommend some of the top bus simulator games that you can enjoy.

      -

      bus real game download


      Download ✦✦✦ https://urlca.com/2uOgdg



      -

      Introduction

      -

      What are bus simulator games and why are they popular?

      -

      Bus simulator games are simulation games that allow you to drive various types of buses, such as city buses, intercity buses, coach buses, school buses, etc. You can choose from different bus models, customize them with liveries, horns, lights, etc., and drive them on realistic maps inspired by real-world locations. You can also follow traffic rules, transport passengers, earn money, manage your own bus company, and more.

      -

      Bus simulator games are popular because they offer a fun and immersive way of experiencing the life of a bus driver. You can learn how to drive different buses, explore different places, face different challenges, and have fun along the way. You can also play with other players online, compete in leaderboards, join leagues, and share your achievements. Bus simulator games are suitable for anyone who loves driving games, simulation games, or just wants to try something new.

      -

      How to download and play bus real game on your device?

      -

      To download and play bus real game on your device, you need to follow these simple steps:

      -


      -
        -
      1. Go to the Google Play Store or the App Store on your device and search for "bus simulator games".
      2. Choose a bus simulator game that you like from the list of results. You can read the description, reviews, ratings, screenshots, etc. to help you decide.
      3. Tap on the "Install" button to download and install the game on your device.
      4. Once the game is installed, tap on the "Open" button to launch the game.
      5. Follow the instructions on the screen to set up your profile, choose your settings, select your bus, etc.
      6. Start driving your bus and enjoy the game!
      -

      Top 3 Bus Simulator Games to Try Out

      -

      Mobile Bus Simulator by LOCOS

      -

      Features and benefits of Mobile Bus Simulator

      -

      Mobile Bus Simulator is one of the most popular bus simulator games on the Google Play Store. It has over 50 million downloads and a 4.0-star rating. Here are some of the features and benefits of Mobile Bus Simulator:

      -
        -
      • You can drive realistic buses with detailed interiors and exteriors.
      • -
      • You can customize your bus with various liveries, horns, lights, bumpers, wheels, etc.
      • -
      • You can transport passengers from one city to another city terminal through amazing places and landscapes.
      • -
      • You can follow traffic rules, use indicators, open/close doors, honk for children, etc.
      • -
      • You can experience different weather conditions (sunny and rain) and day/night cycle.
      • -
      • You can choose from different control options (tilt, buttons or steering wheel) and camera modes (first person, top, rear, orbital, etc.).
      • -
      • You can play online with other players and chat with them.
      • -
      • You can earn money and buy new buses and upgrades.
      • -
      • You can join leagues and compete with other players in leaderboards.
      • -
      -

      How to download and install Mobile Bus Simulator on your device?

      -

      To download and install Mobile Bus Simulator on your device, you need to follow these simple steps:

      -
        -
      1. Go to the Google Play Store on your device and search for "Mobile Bus Simulator".
      2. Tap on the "Install" button to download and install the game on your device.
      3. Once the game is installed, tap on the "Open" button to launch the game.
      4. Follow the instructions on the screen to set up your profile, choose your settings, select your bus, etc.
      5. Start driving your bus and enjoy the game!
      -

      Bus Simulator : Ultimate by Zuuks Games

      -

      Features and benefits of Bus Simulator : Ultimate

      -

      Bus Simulator : Ultimate is another popular bus simulator game on the Google Play Store and the App Store. It has over 100 million downloads and a 4.2-star rating. Here are some of the features and benefits of Bus Simulator : Ultimate:

      -
        -
      • You can drive realistic buses with high-quality graphics and animations.
      • -
      • You can create your own bus company and expand your business across the world.
      • -
      • You can transport passengers from different countries such as Germany, Turkey, Italy, Spain, France, Netherlands, Brazil, Azerbaijan, etc.
      • -
      • You can follow realistic traffic rules, use signals, obey speed limits, etc.
      • -
      • You can experience different weather conditions (rainy, snowy, sunny, foggy) and time of day (morning, noon, evening, night).
      • -
      • You can choose from different control options (tilt, buttons or steering wheel) and camera modes (cockpit, outside, free camera).
      • -
      • You can play online multiplayer mode with your friends or other players.
      • -
      • You can earn money and buy new buses and upgrades.
      • -
      • You can get feedback from passengers based on your driving performance.
      • -

      How to download and install Bus Simulator : Ultimate on your device?

      -

      To download and install Bus Simulator : Ultimate on your device, you need to follow these simple steps:

      -
        -
      1. Go to the Google Play Store or the App Store on your device and search for "Bus Simulator : Ultimate".
      2. Tap on the "Install" button to download and install the game on your device.
      3. Once the game is installed, tap on the "Open" button to launch the game.
      4. Follow the instructions on the screen to set up your profile, choose your settings, select your bus, etc.
      5. Start driving your bus and enjoy the game!
      -

      Bus Simulator : Ultimate on PC by BlueStacks

      -

      Features and benefits of Bus Simulator : Ultimate on PC

      -

      If you want to play Bus Simulator : Ultimate on a bigger screen and with better performance, you can try playing it on your PC using BlueStacks. BlueStacks is a powerful Android emulator that allows you to run Android apps and games on your PC. Here are some of the features and benefits of Bus Simulator : Ultimate on PC by BlueStacks:

      -
        -
      • You can enjoy the game on a larger screen with high-resolution graphics and smooth gameplay.
      • -
      • You can use your keyboard and mouse or a gamepad to control your bus with ease and accuracy.
      • -
      • You can customize your settings, such as graphics, sound, language, etc., according to your preferences.
      • -
      • You can use the Multi-Instance feature to play multiple games or accounts at the same time.
      • -
      • You can use the Macro feature to automate repetitive tasks or create shortcuts for complex actions.
      • -
      • You can use the Screen Recorder feature to record your gameplay and share it with others.
      • -
      • You can use the Chat feature to communicate with other players or friends while playing.
      • -
      -

      How to download and install Bus Simulator : Ultimate on PC using BlueStacks?

      -

      To download and install Bus Simulator : Ultimate on PC using BlueStacks, you need to follow these simple steps:

      -
        -
      1. Go to the BlueStacks website and download the latest version of BlueStacks for your PC.
      2. Run the installer and follow the instructions to install BlueStacks on your PC.
      3. Launch BlueStacks and sign in with your Google account or create a new one.
      4. Go to the Google Play Store on BlueStacks and search for "Bus Simulator : Ultimate".
      5. Tap on the "Install" button to download and install the game on BlueStacks.
      6. Once the game is installed, tap on the "Open" button to launch the game.
      7. Follow the instructions on the screen to set up your profile, choose your settings, select your bus, etc.
      8. Start driving your bus and enjoy the game!

      Conclusion

      -

      In conclusion, bus simulator games are a great way to enjoy the thrill of driving realistic buses on different roads and cities. You can download and play bus real game on your device by following the simple steps we have shown you. You can also try out some of the top bus simulator games that we have recommended, such as Mobile Bus Simulator, Bus Simulator : Ultimate, and Bus Simulator : Ultimate on PC by BlueStacks. These games offer various features and benefits that will make your bus driving experience more fun and immersive. We hope you have found this article helpful and informative. Happy bus driving!

      -

      FAQs

      -

      Here are some of the frequently asked questions about bus simulator games:

      -
        -
      1. Q: What are the minimum requirements to play bus simulator games on my device?
        -A: The minimum requirements may vary depending on the game and the device, but generally, you need to have a device that runs on Android 4.4 or higher or iOS 9.0 or higher, with at least 2 GB of RAM and 500 MB of free storage space.
      2. -
      3. Q: How can I update my bus simulator game to the latest version?
        -A: You can update your bus simulator game to the latest version by going to the Google Play Store or the App Store on your device, finding the game, and tapping on the "Update" button. Alternatively, you can enable automatic updates for your apps in your device settings.
      4. -
      5. Q: How can I contact the developers of my bus simulator game if I have any issues or suggestions?
        -A: You can contact the developers of your bus simulator game by going to the game's page on the Google Play Store or the App Store, scrolling down to the bottom, and tapping on the "Contact Developer" or "Developer Website" option. You can also find their email address, social media links, or website in the game's settings or menu.
      6. -
      7. Q: How can I improve my bus driving skills in bus simulator games?
        -A: You can improve your bus driving skills in bus simulator games by practicing regularly, following traffic rules, using indicators, obeying speed limits, etc. You can also watch tutorials, tips, and tricks videos on YouTube or other platforms to learn from other players.
      8. -
      9. Q: How can I share my bus simulator game achievements with others?
        -A: You can share your bus simulator game achievements with others by taking screenshots or recording videos of your gameplay and posting them on social media platforms such as Facebook, Instagram, Twitter, etc. You can also use the in-game chat feature to communicate with other players and share your achievements.
      10. -

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download CarX Highway Racing APK for Android - Latest Version.md b/spaces/congsaPfin/Manga-OCR/logs/Download CarX Highway Racing APK for Android - Latest Version.md deleted file mode 100644 index 6b2600253e09abf62cc44f459d093775670ebd49..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download CarX Highway Racing APK for Android - Latest Version.md +++ /dev/null @@ -1,149 +0,0 @@ - -

      CarX Highway Racing: A Thrilling Mobile Racing Game

      -

      If you are a fan of racing games, you might have heard of CarX Highway Racing, one of the most popular and realistic mobile racing games available. In this article, we will tell you everything you need to know about this game, including its features, how to download it from Uptodown, and some tips and tricks to master it. So buckle up and get ready for an adrenaline-fueled ride!

      -

      carx highway racing apk download uptodown


      Download Filehttps://urlca.com/2uOaKo



      -

      Introduction

      -

      What is CarX Highway Racing?

      -

      CarX Highway Racing is a mobile racing game developed by CarX Technologies, the same company behind the successful CarX Drift Racing series. The game was released in 2017 and has since gained millions of fans worldwide. The game is based on realistic physics and offers an unprecedented driving experience on traffic-packed highways. You can compete against numerous rivals, escape from relentless police, and discover unlimited number of new roads. You can also immerse yourself in the world of street racing, where you can uncover the secrets of secret organizations, destroy Winston's empire, and make new friends who can help you in your endeavors.

      -

      Why should you play CarX Highway Racing?

      -

      There are many reasons why you should play CarX Highway Racing, but here are some of the main ones:

      -
        -
      • The game has stunning graphics and sound effects that will make you feel like you are driving a real car.
      • -
      • The game has a variety of cars to choose from, ranging from pickup trucks to hypercars. You can also customize your car with different colors, decals, wheels, and performance parts.
      • -
      • The game has different game modes to suit your preferences, such as campaign mode, time attack mode, police mode, free ride mode, and online mode.
      • -
      • The game has a dynamic day/night cycle and weather system that will affect your driving conditions.
      • -
      • The game has a challenging AI that will not let you win easily. You will have to use your skills and strategy to outsmart your opponents.
      • -
      • The game has a social aspect that will allow you to connect with other players around the world. You can join or create a club, chat with other racers, and compete in online races and leaderboards.
      • -
      -

      Features of CarX Highway Racing

      -

      Realistic physics and graphics

      -

      One of the main features of CarX Highway Racing is its realistic physics engine that will make you feel every horsepower of your car. The game uses the same physics engine as CarX Drift Racing 2, which is known for its lifelike drifting mechanics. You will have to adjust your car's settings to make sure it drives the way you want it. You will also have to deal with factors such as traction, suspension, aerodynamics, and damage.

      -

      The game also boasts of eye-catching graphics that will impress you with their details and quality. The game has high-resolution textures, realistic lighting and shadows, smooth animations, and particle effects. The game also supports 60 FPS on high-end devices for a smoother gameplay experience.

      -

      Various cars and customization options

      -

      The game has over 40 sports cars that are waiting for your command. You can choose from sports classics, regular vehicles, muscle cars, and powerful supercars. Each car has its own characteristics and stats that will affect your performance and handling. You can also upgrade your car with different parts to improve its speed, acceleration, braking, and stability. You can also change the appearance of your car with various paint colors, decals, wheels, and spoilers. You can create your own unique style and show it off to other players.

      -

      Different game modes and challenges

      -

      The game has several game modes that will keep you entertained for hours. You can play the campaign mode, where you will follow the story of a street racer who wants to take down the corrupt Winston and his empire. You will have to complete various missions and events, such as races, chases, escapes, and deliveries. You will also meet different characters who will help you or hinder you along the way. The campaign mode has over 100 missions and 12 chapters to complete.

      -


      -

      You can also play the time attack mode, where you will have to race against the clock and beat your own records. You can choose from different tracks and weather conditions, and try to get the best time possible. You can also compare your results with other players on the global leaderboards.

      -

      You can also play the police mode, where you will have to either chase or escape from the cops. You can choose to be a cop or a racer, and use different tactics and strategies to win. You can use nitro, ramming, blocking, or evading to get an advantage over your opponent. You can also unlock different police cars and upgrade them with sirens, lights, and armor.

      -

      You can also play the free ride mode, where you can explore the open world of CarX Highway Racing without any restrictions. You can drive anywhere you want, discover new roads and locations, and enjoy the scenery. You can also switch between day and night, and change the weather as you wish.

      -

      You can also play the online mode, where you can race against other players from around the world in real time. You can join or create a room, choose a track and a car, and start the race. You can also chat with other players before or after the race, and send them friend requests or challenges. You can also join or create a club, where you can team up with other racers and compete in club races and tournaments.

      -

      Online racing and leaderboards

      -

      The game has a competitive online racing system that will test your skills and rank you among other players. The game has a ranking system that will assign you a rating based on your performance in online races. The higher your rating, the higher your rank. The game has six ranks: Rookie, Amateur, Pro, Expert, Master, and Legend. You can also earn trophies by winning races and reaching certain milestones. The more trophies you have, the higher your position on the leaderboards.

      -

      The game has different leaderboards that will show you how you compare with other players in various aspects of the game. You can check the leaderboards for time attack mode, police mode, online mode, club races, trophies, rating, and more. You can also filter the leaderboards by region, country, club, or friends. You can also view your own profile and statistics, such as wins, losses, best times, best cars, etc.

      -

      How to download CarX Highway Racing APK from Uptodown

      -

      What is Uptodown and why use it?

      -

      Uptodown is a website that allows you to download APK files of various apps and games for free. APK files are Android application packages that contain all the necessary files to install an app or a game on your device. Uptodown is a safe and reliable source of APK files that are verified by antivirus software and checked by human editors. Uptodown also offers some advantages over other sources of APK files:

      -
        -
      • Uptodown has a large catalog of apps and games that are updated regularly.
      • -
      • Uptodown allows you to download older versions of apps and games that may not be available on other platforms.
      • -
      • Uptodown does not require you to register or log in to download APK files.
      • -
      • Uptodown does not have any geographical restrictions or censorship issues.
      • -
      • Uptodown supports multiple languages and currencies.
      • -
      -
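
      As noted above, an APK bundles all the files an app needs, and the format is an ordinary ZIP container, so you can peek inside one with a few lines of Python. This is just a rough sketch; the file name is a placeholder for whatever you downloaded yourself.

      # Rough sketch: an APK is a ZIP archive, so the standard library can list
      # what it contains (manifest, dex code, resources, ...).
      import zipfile

      with zipfile.ZipFile("carx-highway-racing.apk") as apk:  # placeholder file name
          for name in apk.namelist()[:15]:
              print(name)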

      Steps to download and install CarX Highway Racing APK from Uptodown

      -

      If you want to download CarX Highway Racing APK from Uptodown, you will need to follow these simple steps:

      -
        -
      1. Go to https://carx-highway-racing.en.uptodown.com/android, which is the official page of CarX Highway Racing on Uptodown.
      2. Click on the green button that says "Download" to start the download process. You can also choose a different version of the game if you want.
      3. Wait for the download to finish. You will see a notification on your device when the download is complete.
      4. Go to your device's settings and enable the option to install apps from unknown sources. This will allow you to install APK files that are not from the Google Play Store.
      5. Go to your device's file manager and locate the downloaded APK file. It should be in the "Downloads" folder or in the "Uptodown" folder.
      6. Tap on the APK file and follow the instructions to install CarX Highway Racing on your device.
      7. Enjoy playing CarX Highway Racing!
      -

      Tips and tricks to master CarX Highway Racing

      -

      Choose the right car for each race

      -

      One of the most important factors that will affect your performance in CarX Highway Racing is your choice of car. Each car has its own strengths and weaknesses, and you will need to choose the one that suits your style and the track conditions. For example, some cars are faster but less stable, while others are more agile but less powerful. Some cars are better for drifting, while others are better for straight-line speed. You can also upgrade your car with different parts to improve its stats and performance.

      -

      You can check the stats of each car by tapping on it in the garage menu. You can see its top speed, acceleration, braking, handling, and weight. You can also see its class, which ranges from C to S, with S being the highest. You can also compare different cars by tapping on the compare button. You can also test drive any car before buying it by tapping on the test drive button.

      -

      Learn how to drift and use nitro effectively

      -

      Another key skill that you will need to master in CarX Highway Racing is drifting. Drifting is a technique that involves sliding your car sideways around corners, which allows you to maintain speed and control. Drifting is essential for winning races, as it will help you gain an edge over your rivals and avoid crashing into obstacles. Drifting will also fill up your nitro meter, which will give you a boost of speed when you need it.

      -

      To drift in CarX Highway Racing, you will need to tap on the brake button while turning your car. You will see a yellow arrow indicating the direction of your drift. You will need to adjust your steering and throttle to maintain your drift and balance your car. You will also need to release the brake button at the right time to exit your drift smoothly. You can also use the handbrake button to initiate a sharper drift, but be careful not to overdo it or you might spin out.

      -

      To use nitro in CarX Highway Racing, you will need to tap on the nitro button when your nitro meter is full or partially full. You will see a blue bar indicating the amount of nitro you have left. You can use nitro to accelerate faster, overtake your opponents, or escape from the police. However, you should not use nitro all the time, as it will make your car harder to control and more prone to overheating. You should use nitro strategically, such as when you are on a straight road, when you are behind an opponent, or when you are in danger of being caught by the police.

      -

      Avoid traffic and police cars

      -

      One of the main challenges that you will face in CarX Highway Racing is dealing with traffic and police cars. Traffic cars are civilian vehicles that are driving on the highway along with you. They can be a nuisance or a hazard, depending on how you handle them. Police cars are law enforcement vehicles that are trying to stop you from racing or escaping. They can be very aggressive and persistent, depending on your wanted level.

      -

      To avoid traffic and police cars in CarX Highway Racing, you will need to be alert and careful. You will need to watch out for traffic signs and signals that indicate the direction and speed of traffic cars. You will also need to use your mini-map and rear-view mirror to see where traffic and police cars are coming from. You will need to avoid colliding with traffic and police cars, as they will slow you down, damage your car, and increase your wanted level. You will also need to avoid roadblocks, spike strips, helicopters, and other obstacles that police cars might use against you.

      -

      Complete missions and events to earn rewards

      -

One of the best ways to progress in CarX Highway Racing is to complete missions and events that are available in the game. Missions are tasks that are related to the campaign mode, such as races, chases, escapes, deliveries, and more. Events are tasks that are not related to the campaign mode, such as time trials, police chases, club races, and more. Completing missions and events will reward you with cash, gold, experience points, and other items. You can use these rewards to buy new cars, upgrade your existing cars, unlock new tracks, and access new features. You can also earn achievements and trophies by completing certain missions and events.

      -

      You can find the missions and events in the main menu of the game. You can see the details, requirements, and rewards of each mission and event by tapping on them. You can also see your progress and status of each mission and event by tapping on the icons on the top of the screen. You can also replay any mission or event that you have already completed by tapping on the replay button.

      -

      Conclusion

      -

      Summary of the main points

      -

      CarX Highway Racing is a mobile racing game that offers a realistic and thrilling driving experience on traffic-packed highways. The game has many features that make it stand out from other racing games, such as:

      -
        -
• Realistic physics and graphics that will make you feel like you are driving a real car.
• Various cars and customization options that will allow you to create your own unique style.
• Different game modes and challenges that will keep you entertained for hours.
• Online racing and leaderboards that will test your skills and rank you among other players.
      -

      The game is available for free on the Google Play Store, but you can also download it from Uptodown, a website that offers APK files of various apps and games for free. Uptodown is a safe and reliable source of APK files that has some advantages over other platforms, such as:

      -
        -
• Uptodown has a large catalog of apps and games that are updated regularly.
• Uptodown allows you to download older versions of apps and games that may not be available on other platforms.
• Uptodown does not require you to register or log in to download APK files.
• Uptodown does not have any geographical restrictions or censorship issues.
• Uptodown supports multiple languages and currencies.
      -

      To download CarX Highway Racing APK from Uptodown, you will need to follow some simple steps, such as:

      -
        -
1. Go to https://carx-highway-racing.en.uptodown.com/android, which is the official page of CarX Highway Racing on Uptodown.
2. Click on the green button that says "Download" to start the download process.
3. Wait for the download to finish.
4. Go to your device's settings and enable the option to install apps from unknown sources.
5. Go to your device's file manager and locate the downloaded APK file.
6. Tap on the APK file and follow the instructions to install CarX Highway Racing on your device.
7. Enjoy playing CarX Highway Racing!
      -

      Call to action

      -

      If you are looking for a mobile racing game that will give you a realistic and thrilling driving experience on traffic-packed highways, then CarX Highway Racing is the game for you. Download it now from Uptodown and join millions of fans worldwide who are enjoying this game. You will not regret it!

      -

      Frequently Asked Questions

      -

      Here are some of the most frequently asked questions about CarX Highway Racing:

      -
        -
1. How do I change the language of the game?
   You can change the language of the game by going to the settings menu and tapping on the language option. You can choose from English, Spanish, French, German, Russian, Turkish, Portuguese, Arabic, Chinese, Japanese, Korean, Indonesian, Thai, Vietnamese, Hindi, or Malay.
2. How do I save my progress in the game?
   You can save your progress in the game by logging in with your Facebook or Google account. This will also allow you to sync your progress across different devices and access online features.
3. How do I get more gold in the game?
   You can get more gold in the game by completing missions and events, watching ads, participating in online races, joining a club, or buying it with real money.
4. How do I unlock new cars in the game?
   You can unlock new cars in the game by buying them with cash or gold in the garage menu. You can also unlock some cars by completing certain missions or events.
5. How do I upgrade my car in the game?
   You can upgrade your car in the game by buying different parts in the garage menu. You can buy parts for the engine, transmission, brakes, suspension, tires, and nitro. You can also tune your car by adjusting the settings for the gear ratio, camber angle, brake balance, and nitro pressure.

        -
      -

      I hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy racing!

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Become the Biggest Snake in Snake.io.md b/spaces/congsaPfin/Manga-OCR/logs/How to Become the Biggest Snake in Snake.io.md deleted file mode 100644 index f77121f89d045d2da428df47cac5edff363da1a6..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Become the Biggest Snake in Snake.io.md +++ /dev/null @@ -1,140 +0,0 @@ - -

      Snake.io: How to Play the Best Snake Game Online Without Downloading Anything

      -

      Introduction

      -

      Do you love snake games? Do you want to play a fun and addictive snake game online without downloading anything? If you answered yes, then you should try Snake.io, the ultimate snake game that combines trendy art with classic gameplay. In this article, we will tell you everything you need to know about Snake.io, how to play it online, and how it compares with other popular snake games. Let's get started!

      -

      snake io no download


      DOWNLOAD >> https://urlca.com/2uOghN



      -

      What is Snake.io?

      -

Snake.io is a multiplayer game where you play as a snake fighting to survive on a battlefield of snakes. You have to eat colorful bits of food to grow bigger and take down other snakes to become an absolute unit to be reckoned with. The game was developed by Kooapps and released in 2016 for iOS and Android devices, and in 2021 for web browsers.

      -

      Why play Snake.io online?

      -

      There are many reasons why you should play Snake.io online instead of downloading it on your device. Here are some of them:

      -
        -
• You can play it on any device that has a web browser, whether it's a desktop, laptop, tablet, or smartphone.
• You don't have to worry about storage space, updates, or compatibility issues.
• You can access the game anytime and anywhere, as long as you have an internet connection.
• You can enjoy the same features and graphics as the mobile version, but with a larger screen and better controls.
• You can challenge your friends and other players from around the world in real-time.
      -

      How to play Snake.io online

      -

      Controls and movement

      -

      Playing Snake.io online is very easy and intuitive. You can use your mouse or keyboard to control your snake's movement. Here are the basic controls:

| Input | Action |
| --- | --- |
| Move mouse / WASD / arrow keys | Change direction |
| Hold left-click / spacebar | Go faster |
      -

      Your goal is to slither around the arena and eat as many food pieces as you can to grow bigger. You can also kill other snakes by making them crash into your body. When you kill another snake, they leave behind all the food for you to munch on. But be careful, because other snakes can do the same to you!

      -

      Tips and tricks

      -

      If you want to master Snake.io online, here are some tips and tricks that will help you:

      -

      -
        -
• Move fast and take more risks when you're small. You can easily dodge other snakes and sneak up on them.
• Follow the crown icon to find the largest snake in the game. You can try to take them down or avoid them depending on your strategy.
• Trap other snakes in a circle when you get big enough. You can use your size to intimidate them and make them run out of space.
• Become an absolute unit and use your size to dominate the arena. You can block other snakes' paths and force them to crash into you.
      -

      Skins and achievements

      -

Once you've mastered how to play Snake.io online, you can start collecting the numerous skins available in the skins shop. You can unlock various skins by completing achievements like high scores and playing on consecutive days. Some of the skins include animals, fruits, flags, emojis, and more. You can also customize your snake's name and color.

Comparison with other snake games

      Snake.io is not the only snake game available online. There are many other snake games that you can play for free, such as Slither.io, Wormate.io, and Little Big Snake. How does Snake.io compare with these games? Let's find out.

      -

      Snake.io vs Slither.io

      -

      Slither.io is one of the most popular snake games online, with over 100 million downloads on Google Play. It was released in 2016 by Lowtech Studios and has a similar gameplay to Snake.io. However, there are some differences between the two games:

      -
        -
• Slither.io has more realistic graphics and physics, while Snake.io has more cartoonish and colorful graphics.
• Slither.io has more lag and ads, while Snake.io has less lag and fewer ads.
• Slither.io has more modes and features, such as team mode, custom skins, and leaderboards, while Snake.io has fewer modes and features.
      -

      Overall, Slither.io is more challenging and competitive, while Snake.io is more casual and relaxing.

      -

      Snake.io vs Wormate.io

      -

      Wormate.io is another popular snake game online, with over 50 million downloads on Google Play. It was released in 2016 by Oleksandr Godoba and has a similar gameplay to Snake.io. However, there are some differences between the two games:

      -
        -
• Wormate.io has more food options and power-ups, such as cakes, donuts, candies, magnets, and potions, while Snake.io has fewer food options and power-ups.
• Wormate.io has more skins and accessories, such as hats, glasses, masks, and tails, while Snake.io has fewer skins and accessories.
• Wormate.io has more maps and themes, such as Halloween, Christmas, Easter, and Summer, while Snake.io has fewer maps and themes.
      -

      Overall, Wormate.io is more fun and varied, while Snake.io is more simple and straightforward.

      -

      Snake.io vs Little Big Snake

      -

      Little Big Snake is another popular snake game online, with over 10 million downloads on Google Play. It was released in 2018 by LittleBIGsnake.com and has a different gameplay from Snake.io. Here are some of the differences between the two games:

      -
        -
• Little Big Snake has two modes: snake mode and fly mode. In snake mode, you play as a snake that can eat other snakes and food. In fly mode, you play as a bug that can fly around and collect nectar.
• Little Big Snake has more objectives and quests, such as daily tasks, achievements, missions, and events. You can also join clans and chat with other players.
• Little Big Snake has more upgrades and customization options. You can level up your snake or bug and unlock new abilities. You can also buy gems and gold to get premium skins and items.
      -

      Overall, Little Big Snake is more complex and immersive, while Snake.io is more simple and easy.

      -

      Conclusion

      -

      Summary of main points

      -

      In conclusion, Snake.io is a great snake game that you can play online without downloading anything. It has the following features:

      -
        -
• It combines trendy art with classic gameplay.
• You can play it on any device that has a web browser.
• You can enjoy the same features and graphics as the mobile version.
• You can challenge your friends and other players from around the world in real-time.
• You can unlock various skins by completing achievements.
      -

      Call to action

      -

      If you are looking for a fun and addictive snake game online that doesn't require any download or installation, then you should definitely try Snake.io. It's free to play and easy to learn. You can start playing right now by visiting this link: [Play Snake.io Online]. Have fun!

      -

      FAQs

      -
        -
1. What is the highest score in Snake.io?
   The highest score in Snake.io depends on the server you are playing on. However, some players have reported scores of over 100k points.
2. How do I change my name in Snake.io?
   You can change your name in Snake.io by clicking on the name box at the top of the screen before you start playing. You can enter any name you want up to 15 characters.
3. How do I get more skins in Snake.io?
   You can get more skins in Snake.io by completing achievements, such as reaching a certain score, playing for a certain time, or killing a certain number of snakes. You can also buy some skins with real money.
4. Is Snake.io safe to play online?
   Yes, Snake.io is safe to play online. It does not require any download or installation, and it does not collect any personal information from the players. However, you should be careful of clicking on any ads or links that may appear on the game page.
5. Can I play Snake.io offline?
   No, you cannot play Snake.io offline. You need an internet connection to access the game and play with other players. If you want to play a snake game offline, you can download the mobile version of Snake.io from Google Play or App Store.

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Kamusi Ya Karne Ya 21 APK - The Ultimate Swahili Resource by Longhorn Publishers.md b/spaces/congsaPfin/Manga-OCR/logs/Kamusi Ya Karne Ya 21 APK - The Ultimate Swahili Resource by Longhorn Publishers.md deleted file mode 100644 index 00a9fe83939be1b22fe7fbb55ab9d1a042c50fad..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Kamusi Ya Karne Ya 21 APK - The Ultimate Swahili Resource by Longhorn Publishers.md +++ /dev/null @@ -1,99 +0,0 @@ -
      -

      Kamusi Ya Karne Ya 21 APK: A Digital Swahili Dictionary for Everyone

      -

      Do you want to learn Swahili, one of the most widely spoken languages in Africa? Do you need a reliable and comprehensive dictionary that can help you with vocabulary, pronunciation, and grammar? If yes, then you should try Kamusi Ya Karne Ya 21 APK, a free app that offers a digital Swahili dictionary of Longhorn Publishers Limited. In this article, we will tell you what Kamusi Ya Karne Ya 21 APK is, what features it has, how to download and install it, why you should use it, and some tips for using it effectively.

      -

      kamusi ya karne ya 21 apk


      Download File > https://urlca.com/2uOds7



      -

      What is Kamusi Ya Karne Ya 21 APK?

      -

      Kamusi Ya Karne Ya 21 APK is an android app that provides a digital Swahili dictionary of Longhorn Publishers Limited. The app is suitable for use by primary school pupils, secondary school students, university students, Kiswahili teachers and lecturers, Kiswahili experts, researchers and all Kiswahili speakers in the world. The app is also recommended for Swahili learners across the globe.

      -

      The app was developed by Longhorn Publishers Limited, a leading educational publisher in East Africa. The authors of Kamusi Ya Karne Ya 21 are great Kiswahili experts who have taught Swahili language in a number of universities in East and Central Africa. They are: Prof. James Mdee, Prof. Kimani Njogu, Prof. Mohamed Abdulaziz, Prof. Clara Momanyi, Prof. Kitula King'ei, Prof. Rocha Chimera, Prof. Kithaka wa Mberia, Dr. Leonard Chacha Mwita, Dr. Nathan Oyori Ogechi and Dr. Zaja Omboga.

      -

      Features of Kamusi Ya Karne Ya 21 APK

      -

      Kamusi Ya Karne Ya 21 APK has many features that make it a useful and user-friendly app for learning Swahili. Here are some of them:

      -

      - More than 25,000 English and Swahili words

      -

      The app contains more than 25,000 English words with Swahili meaning. You can search both English and Swahili words in the app. You can also see the word's part of speech, usage examples, related words, and etymology.

      -

      - Pronunciation and voice search

      -

      The app also includes pronunciation in English and Swahili. You can listen to how the words are pronounced by native speakers. You can also use the speech to text feature to search words by speaking them.

      -


      -

      - Offline mode and web search

      -

      You can use this app when you have no internet connection. The app works offline and does not require any data or wifi. However, if you want to access more information or resources on the web, you can use the web search option to find relevant websites.

      -

      - Antonyms, synonyms, and word games

The app also offers antonyms, synonyms, and word games to help you expand your vocabulary and have fun while learning. You can find words that have opposite or similar meanings to the ones you search. You can also play various games such as matching, conversation, crossword, word search, and more. These games will test your knowledge and skills in the Swahili language.

      -

      How to download and install Kamusi Ya Karne Ya 21 APK?

      -

      If you want to use this app on your device, you need to download and install it first. Here are the steps to do so:

      -

      - For Android devices

      -

1. Go to the Google Play Store and search for Kamusi Ya Karne Ya 21 APK or click on this link.
2. Tap on the Install button and wait for the app to download.
3. Once the app is downloaded, tap on the Open button and enjoy using it.

      -

      - For iOS devices

      -

1. Go to the App Store and search for Kamusi Ya Karne Ya 21 or click on this link.
2. Tap on the Get button and enter your Apple ID password if prompted.
3. Wait for the app to download and install on your device.
4. Once the app is installed, tap on it and start using it.

      -

      Why use Kamusi Ya Karne Ya 21 APK?

      -

      You might be wondering why you should use this app instead of other Swahili dictionaries or apps. Well, there are many reasons why Kamusi Ya Karne Ya 21 APK is a great choice for learning Swahili. Here are some of them:

      -

      Benefits of learning Swahili language

      -

      Swahili is a beautiful and useful language that can enrich your life in many ways. Here are some of the benefits of learning Swahili:

      -

      - It is widely spoken in East Africa and beyond

      -

      Swahili is the official language of Tanzania, Kenya, Uganda, Rwanda, Burundi, and the Democratic Republic of Congo. It is also spoken in other countries such as Somalia, Mozambique, Malawi, Zambia, Zimbabwe, Comoros, Madagascar, and more. It is estimated that more than 100 million people speak Swahili as their first or second language.

      -

      If you learn Swahili, you will be able to communicate with millions of people across Africa and beyond. You will also be able to access a rich variety of media, literature, music, art, and culture in Swahili.

      -

      - It is rich in culture and history

      -

      Swahili is a language that has a long and fascinating history. It originated from the interaction of Bantu speakers with Arab traders along the East African coast since the 10th century. It was influenced by Arabic, Persian, Portuguese, English, Hindi, and other languages over time. It also developed its own unique script called Ajami.

      -

      Swahili is a language that reflects the diversity and complexity of African cultures and histories. It has many proverbs, idioms, poems, stories, songs, and expressions that convey wisdom, humor, values, beliefs, and traditions. Learning Swahili will help you appreciate and understand the richness and beauty of African cultures.

      -

      - It is easy to learn and fun to speak

      -

      Swahili is a language that is relatively easy to learn compared to other languages. It has a simple grammar system that follows regular rules and patterns. It has a phonetic spelling system that makes it easy to read and write. It has a lot of cognates with English and other languages that make it easy to remember words.

      -

      Swahili is also a language that is fun to speak because it has a musical and rhythmic sound. It has many words that sound like what they mean such as "kucha" (nail), "nyuki" (bee), "ng'ombe" (cow), "chura" (frog), etc. It also has many words that are playful and expressive such as "bwana" (sir), "mambo" (hello), "pole" (sorry), "sawa" (okay), etc.

      -

      Tips for using Kamusi Ya Karne Ya 21 APK effectively

      -

      If you want to make the most out of this app and improve your Swahili skills faster, here are some tips for you:

      -

- Use the app regularly and review the words you learn

      -

- Use the app regularly and review the words you learn. The more you use the app, the more you will learn new words and phrases in Swahili. Try to use the app every day for at least 15 minutes. Review the words you learn by using the flashcards, quizzes, and games in the app. This will help you reinforce your memory and recall.

      -

      - Practice speaking and writing Swahili with native speakers or learners

      -

      - Practice speaking and writing Swahili with native speakers or learners. The best way to improve your Swahili skills is to use them in real situations. Find a language partner or a tutor who can help you practice speaking and writing Swahili. You can also join online communities or forums where you can chat with other Swahili learners or speakers. This will help you improve your fluency, accuracy, and confidence.

      -

      - Explore the app's interactive materials and hyperlinks

      -

      - Explore the app's interactive materials and hyperlinks. The app has many features that can help you learn more about Swahili language and culture. You can access audio clips, videos, images, maps, charts, and other multimedia materials that can enhance your learning experience. You can also follow the hyperlinks that can lead you to more information or resources on the web. This will help you broaden your knowledge and curiosity.

      -

      Conclusion

      -

      Kamusi Ya Karne Ya 21 APK is a digital Swahili dictionary that can help you learn Swahili language in a fun and easy way. It has many features that can help you with vocabulary, pronunciation, grammar, and culture. It is suitable for anyone who wants to learn Swahili for personal or professional reasons. You can download and install it on your Android or iOS device for free. You can also use it offline or online depending on your preference. If you want to improve your Swahili skills faster, you should use the app regularly, practice speaking and writing Swahili with others, and explore the app's interactive materials and hyperlinks.

      -

      We hope this article has given you a clear overview of Kamusi Ya Karne Ya 21 APK and how to use it effectively. If you have any questions or feedback, please feel free to contact us or leave a comment below. Asante sana (thank you very much) for reading!

      -

      FAQs

      -

      Here are some frequently asked questions about Kamusi Ya Karne Ya 21 APK:

      -

      Q: What does Kamusi Ya Karne Ya 21 mean?

      -

      A: Kamusi Ya Karne Ya 21 means Dictionary of the 21st Century in Swahili.

      -

      Q: Is Kamusi Ya Karne Ya 21 APK free?

      -

      A: Yes, Kamusi Ya Karne Ya 21 APK is free to download and use.

      -

      Q: How can I update Kamusi Ya Karne Ya 21 APK?

      -

      A: You can update Kamusi Ya Karne Ya 21 APK by going to the Google Play Store or App Store and checking for updates.

      -

      Q: How can I contact the developers of Kamusi Ya Karne Ya 21 APK?

      -

      A: You can contact the developers of Kamusi Ya Karne Ya 21 APK by emailing them at info@longhornpublishers.com or calling them at +254-20-210-4670.

      -

      Q: How can I rate and review Kamusi Ya Karne Ya 21 APK?

      -

      A: You can rate and review Kamusi Ya Karne Ya 21 APK by going to the Google Play Store or App Store and leaving your feedback.

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Lightroom Pro APK 2023 Why You Should Download and Install This Amazing Photo Editing App for Android.md b/spaces/congsaPfin/Manga-OCR/logs/Lightroom Pro APK 2023 Why You Should Download and Install This Amazing Photo Editing App for Android.md deleted file mode 100644 index 4d2855b505d20584b1c404085d634ae542c1c111..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Lightroom Pro APK 2023 Why You Should Download and Install This Amazing Photo Editing App for Android.md +++ /dev/null @@ -1,134 +0,0 @@ -
      -

      Lightroom Pro APK Download 2023: A Complete Guide

      -

      If you are looking for a powerful and easy-to-use photo editor and camera app for your Android device, you might want to consider downloading Lightroom Pro APK. This is a modified version of the official Adobe Lightroom app that gives you access to all the premium features without paying a monthly subscription fee. In this article, we will tell you everything you need to know about Lightroom Pro APK, including its features, benefits, comparison, reviews, installation, safety, and alternatives. Read on to find out how you can enhance your photos and videos with this amazing app.

      -

      What is Lightroom Pro APK?

      -

      Lightroom Pro APK is a photo editor and camera app that allows you to capture, edit, and organize your photos and videos on your Android device. It is based on the official Adobe Lightroom app, which is one of the most popular and trusted photo editing software in the world. However, unlike the official app, which requires a monthly subscription fee to access all the premium features, Lightroom Pro APK gives you everything for free. You can enjoy unlimited editing tools, presets, filters, effects, and more with this app.

      -

      lightroom pro apk download 2023


      Download Zip »»» https://urlca.com/2uO6pK



      -

      Features of Lightroom Pro APK

      -

      Lightroom Pro APK has many features that make it stand out from other photo editing apps. Here are some of the most notable ones:

      -

      Premium presets

      -

      Lightroom Pro APK offers you a wide range of industry-standard presets for different types of edits, such as portraits, travel, cinematic, vintage, and more. You can apply these presets with one tap to transform your photos in seconds. You can also create your own custom presets and save them for future use.

      -


      -

      Healing brush

      -

      Lightroom Pro APK has a powerful healing brush tool that lets you remove any unwanted objects or blemishes from your photos. You can simply tap on the area you want to fix and let the app do the rest. The healing brush tool can make anything disappear from your photos, no matter how big or small.

      -

      Selective adjustments

      -

      Lightroom Pro APK allows you to make precise edits to any part of your photo with selective adjustments. You can use the brush selection tool, the radial selection tool, or the linear selection tool to apply adjustments to specific areas of your photo. You can adjust the exposure, contrast, color, sharpness, and more with these tools.

      -

      Geometry tools

      -

      Lightroom Pro APK has advanced geometry tools that help you correct perspective and distortion issues in your photos. You can use the upright tool, the guided upright tool, or the geometry slider tool to straighten tilted or skewed lines in your photos. You can also crop and rotate your photos to fit any aspect ratio or orientation.

      -

      Raw editing

      -

      Lightroom Pro APK supports raw editing on your Android device. You can import and edit raw files from your camera or other sources with this app. Raw editing gives you more control over the quality and details of your photos. You can adjust the white balance, exposure, color temperature, tint, and more with raw editing.

      -

      Benefits of Lightroom Pro APK

      -

      Lightroom Pro APK has many benefits that make it worth downloading on your Android device. Here are some of them:

      -

      No ads

      -

One of the benefits of using Lightroom Pro APK is that you can enjoy a smooth and uninterrupted editing experience. Unlike the official app, which shows ads and pop-ups, Lightroom Pro APK does not have any ads or banners. You can focus on your creativity and productivity without any distractions.

      -

      Color gradients

      -

      Another benefit of using Lightroom Pro APK is that you can create stunning color gradients in your photos. You can use the gradient tool to apply different colors to different parts of your photo. You can also adjust the intensity, angle, and blend mode of the gradient. Color gradients can add depth, mood, and style to your photos.

      -

      Shoot-through presets

      -

      Lightroom Pro APK also lets you apply presets while taking photos with the camera mode. You can preview how your photos will look like with different presets before capturing them. You can also adjust the exposure, focus, and white balance of the camera. Shoot-through presets can help you save time and storage space by editing your photos on the go.

      -

      Full raw HDR capture mode

      -

      Lightroom Pro APK has a full raw HDR capture mode that allows you to take high-quality photos with a wide dynamic range. This mode combines multiple exposures into one raw file that preserves more details in the shadows and highlights. You can then edit the raw file with Lightroom Pro APK to bring out the best in your photos.

      -

      Comparison of Lightroom Pro APK with other plans and software

      -

      Lightroom Pro APK is not the only option for photo editing on your Android device. There are other plans and software that you can choose from depending on your needs and preferences. Here is a comparison of Lightroom Pro APK with some of them:

      -

      Lightroom vs Lightroom Classic vs Photoshop

      -

      Lightroom, Lightroom Classic, and Photoshop are all products of Adobe that offer photo editing solutions for different platforms and purposes. Lightroom is a cloud-based service that works on mobile devices, web browsers, and desktop computers. It allows you to sync your photos across all your devices and access them anywhere. Lightroom Classic is a desktop-based software that works on Windows and Mac computers. It allows you to organize and edit your photos locally on your computer. Photoshop is also a desktop-based software that works on Windows and Mac computers. It allows you to create and edit complex graphics, illustrations, and designs.

      -

      The main difference between Lightroom Pro APK and these products is that Lightroom Pro APK is free and does not require a subscription or an internet connection to use. It also has more features than Lightroom, such as healing brush, geometry tools, color gradients, and shoot-through presets. However, it has less features than Lightroom Classic and Photoshop, such as layers, masks, brushes, and advanced editing tools.

      -

      Lightroom Pro APK vs other photo editing apps

      -

      There are many other photo editing apps available on the Google Play Store that you can download and use on your Android device. Some of the most popular ones are Snapseed, PicsArt, VSCO, PhotoDirector, and Pixlr. These apps have different features, interfaces, and prices that cater to different users and needs.

      -

      The main advantage of Lightroom Pro APK over these apps is that it has more professional and industry-standard features that are based on the official Adobe Lightroom app. It also has a simple and intuitive interface that makes it easy to use for beginners and experts alike. However, some of these apps may have more creative and fun features that are not available in Lightroom Pro APK, such as stickers, frames, collages, filters, effects, and more.

      -

      Reviews of Lightroom Pro APK

      -

      Lightroom Pro APK has received many positive reviews from users who have downloaded and used it on their Android devices. Here are some of the reviews from the Google Play Store:

| Name | Rating | Review |
| --- | --- | --- |
| Alexander Smith | 5 stars | This app is amazing! I love how I can edit my photos like a pro without paying anything. The presets are awesome and the tools are easy to use. I highly recommend this app to anyone who loves photography. |
| Lisa Jones | 4 stars | I really like this app for editing my photos on my phone. It has many features that other apps don't have, such as healing brush, selective adjustments, and raw editing. The only thing I wish it had was layers and masks. |
| Mohammed Ali | 5 stars | This app is the best photo editor I have ever used on my phone. It has everything I need to make my photos look amazing. The quality is very high and the app is fast and smooth. I love it! |
| Sarah Lee | 3 stars | This app is good for basic editing, but it lacks some features that I need for more advanced editing. For example, it does not have layers, masks, brushes, or text tools. It also crashes sometimes when I try to save or share my photos. I hope they fix these issues soon. |
| David Brown | 4 stars | This app is great for editing photos on the go. It has a lot of presets and tools that make it easy to enhance your photos. However, it does not have a lot of options for customizing your presets or creating your own. It also does not have a lot of filters or effects that other apps have. |
      -

      How to install Lightroom Pro APK on your device

      -

      If you want to install Lightroom Pro APK on your Android device, you need to follow these steps:

      -
        -
1. Download the Lightroom Pro APK file from a trusted source. You can find the link at the end of this article.
2. Go to your device settings and enable the option to install apps from unknown sources. This will allow you to install apps that are not from the Google Play Store.
3. Locate the Lightroom Pro APK file on your device and tap on it to start the installation process.
4. Follow the instructions on the screen and wait for the installation to finish.
5. Launch the Lightroom Pro APK app and enjoy editing your photos and videos.
      -

      Safety of Lightroom Pro APK

      -

      Lightroom Pro APK is a safe and secure app that does not contain any viruses, malware, or spyware. It does not require any root access or permissions to run on your device. It also does not collect or share any of your personal data or information. However, you should always be careful when downloading and installing any app from unknown sources. You should only download Lightroom Pro APK from a trusted and verified source. You should also scan the app with an antivirus software before installing it on your device.

      -

      Alternatives to Lightroom Pro APK

      -

      If you are not satisfied with Lightroom Pro APK or you want to try other photo editing apps, you can check out some of these alternatives:

      -
        -
• Snapseed: Snapseed is a free photo editor and camera app that offers a variety of tools, filters, effects, and presets for enhancing your photos. You can also edit raw files and use selective adjustments with this app.
• PicsArt: PicsArt is a free photo editor and collage maker that offers a lot of creative and fun features, such as stickers, frames, backgrounds, text, drawings, and more. You can also join a community of artists and share your creations with others.
• VSCO: VSCO is a free photo editor and camera app that offers a range of elegant and artistic presets, filters, and effects for transforming your photos. You can also explore and follow other photographers and join challenges with this app.
• PhotoDirector: PhotoDirector is a free photo editor and camera app that offers a lot of powerful and professional features, such as curves, HSL, split tone, dehaze, HDR, and more. You can also create animated GIFs and videos with this app.
• Pixlr: Pixlr is a free photo editor and collage maker that offers a lot of simple and easy-to-use features, such as auto-fix, crop, rotate, resize, red-eye removal, and more. You can also add text, stickers, borders, overlays, and effects to your photos with this app.
      -

      Conclusion

      -

      Lightroom Pro APK is a modified version of the official Adobe Lightroom app that gives you access to all the premium features for free. You can capture, edit, and organize your photos and videos with this app on your Android device. You can enjoy unlimited editing tools, presets, filters, effects, and more with this app. You can also compare it with other plans and software and check out some reviews from other users. If you want to install Lightroom Pro APK on your device, you can follow the steps in this article. However, you should always be careful when downloading and installing any app from unknown sources. You should also check out some alternatives to Lightroom Pro APK if you want to try other photo editing apps.

      -

      FAQs

      -

      Here are some frequently asked questions about Lightroom Pro APK:

      -

      Is Lightroom Pro APK legal?

      -

Lightroom Pro APK is not an official product of Adobe and it may violate the terms and conditions of the original app. Therefore, it is not legal to use Lightroom Pro APK and you may face some risks or consequences if you do so. However, many users have downloaded and used Lightroom Pro APK without any problems or issues.

      -

      Is Lightroom Pro APK safe?

      -

      Lightroom Pro APK is a safe and secure app that does not contain any viruses, malware, or spyware. It does not require any root access or permissions to run on your device. It also does not collect or share any of your personal data or information. However, you should always be careful when downloading and installing any app from unknown sources. You should only download Lightroom Pro APK from a trusted and verified source. You should also scan the app with an antivirus software before installing it on your device.

      -

      How to update Lightroom Pro APK?

      -

      Lightroom Pro APK is not available on the Google Play Store and it does not have an automatic update feature. Therefore, you need to manually update Lightroom Pro APK whenever a new version is released. You can follow these steps to update Lightroom Pro APK:

      -
        -
1. Uninstall the old version of Lightroom Pro APK from your device.
2. Download the latest version of Lightroom Pro APK from a trusted source. You can find the link at the end of this article.
3. Install the new version of Lightroom Pro APK on your device following the same steps as before.
4. Launch the new version of Lightroom Pro APK and enjoy the updated features.
      -

      How to uninstall Lightroom Pro APK?

      -

      If you want to uninstall Lightroom Pro APK from your device, you can follow these steps:

      -
        -
1. Go to your device settings and find the apps section.
2. Find and tap on Lightroom Pro APK in the list of apps.
3. Tap on the uninstall button and confirm your action.
4. Wait for the app to be uninstalled from your device.
      -

      Where to download Lightroom Pro APK?

      -

      If you want to download Lightroom Pro APK on your device, you can use this link: [https://lightroomproapk.com/download/]. This is a trusted and verified source that provides the latest version of Lightroom Pro APK for free. However, you should always be careful when downloading and installing any app from unknown sources. You should also scan the app with an antivirus software before installing it on your device.

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Pokemon Let 39s Go Pikachu Download On Android VERIFIED.md b/spaces/congsaPfin/Manga-OCR/logs/Pokemon Let 39s Go Pikachu Download On Android VERIFIED.md deleted file mode 100644 index 5e4141b7c522af051e65451843ccf7a152c94634..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Pokemon Let 39s Go Pikachu Download On Android VERIFIED.md +++ /dev/null @@ -1,118 +0,0 @@ -
      -

      How to Play Pokemon Let's Go Pikachu on Android

      -

      Pokemon Let's Go Pikachu is one of the latest and most popular games in the Pokemon franchise. It is a remake of the classic Pokemon Yellow game that was released for the Game Boy in 1998. It features the original 151 Pokemon from the Kanto region, plus some new additions such as Mega Evolutions, Alolan Forms, and a brand new Pokemon called Meltan.

      -

      pokemon let 39;s go pikachu download on android


      Download ★★★ https://urlca.com/2uOe0n



      -

      Pokemon Let's Go Pikachu is a game that is designed for the Nintendo Switch, a hybrid console that can be played both on a TV screen and as a handheld device. However, if you don't own a Nintendo Switch or you want to play Pokemon Let's Go Pikachu on your Android smartphone or tablet, there is a way to do it. You just need to download and install a Nintendo Switch emulator and the game file on your Android device.

      -

      In this article, I will show you how to play Pokemon Let's Go Pikachu on Android step by step. I will also tell you about the features of the game, some tips and tricks for playing it, and my review of it. So, if you are ready to embark on an exciting adventure with your partner Pikachu in the Kanto region, read on!

      -

      Requirements

      -

      Before you can play Pokemon Let's Go Pikachu on Android, you need to make sure that your device meets some minimum requirements. These are:

      -
        -
• An Android device running at least Android 6.0 Marshmallow or higher.
• At least 2 GB of RAM and enough storage space to accommodate the emulator and the game file.
• A stable internet connection to download the emulator and the game file.
• A compatible controller or touch screen controls to play the game.
      -

      If your device meets these requirements, you can proceed to download and install the emulator and the game file.

      -

      Steps

      -

      Here are the steps you need to follow to play Pokemon Let's Go Pikachu on Android:

      -
        -
1. Download a Nintendo Switch emulator for Android. There are a few options available, but one of the most popular ones is DrasticNX. You can download it from its official website or from other sources online. Make sure you download the latest version of the emulator, which is DrasticNX 1.0.0 as of June 2023.
2. Install the emulator on your Android device. You may need to enable the installation of apps from unknown sources in your device settings. To do this, go to Settings > Security > Unknown Sources and toggle it on. Then, locate the downloaded emulator file and tap on it to install it.
3. Download the Pokemon Let's Go Pikachu game file. You can find it on various websites that offer Nintendo Switch ROMs, such as RomsMania, RomUniverse, or PortalRoms. Make sure you download the correct region and version of the game, which is Pokemon Let's Go Pikachu (USA) v1.0.2 as of June 2023. The game file size is about 4 GB, so make sure you have enough storage space on your device or an external SD card.
4. Copy the game file to your Android device. You can use a USB cable, a wireless transfer app, or a cloud service to transfer the game file from your computer to your Android device. Alternatively, you can download the game file directly on your Android device using a browser or a download manager app. Make sure you remember the location of the game file on your device.
5. Launch the emulator and load the game file. Open the DrasticNX app on your Android device and tap on the Load Game button. Then, navigate to the folder where you stored the game file and select it. The emulator will start loading the game and you will see the Nintendo Switch logo and then the Pokemon Let's Go Pikachu title screen.
6. Configure the emulator settings and controls. You can access the emulator settings by tapping on the Menu button on the top right corner of the screen. Here, you can adjust various options such as graphics, sound, performance, cheats, etc. You can also customize the controls by tapping on the Controls button on the top left corner of the screen. Here, you can choose between touch screen controls or external controller support. You can also change the layout, size, and opacity of the buttons.
7. Enjoy playing Pokemon Let's Go Pikachu on Android! You can now start a new game or load a saved game and explore the Kanto region with your partner Pikachu. You can catch, battle, trade, and interact with other Pokemon and trainers along the way. Have fun!
      -

      Features of Pokemon Let's Go Pikachu

      -

      Pokemon Let's Go Pikachu is a game that offers a lot of features that make it unique and enjoyable. Here are some of them:

      -

      -

      Gameplay

      -

      Pokemon Let's Go Pikachu is a game that combines elements from both the classic Pokemon games and the mobile hit Pokemon Go. It follows the same story and structure as Pokemon Yellow, but with some changes and additions.

      -

      For example, you can see wild Pokemon roaming around in the overworld instead of encountering them randomly in tall grass or caves. You can also catch them using a throwing motion with your controller or touch screen, similar to how you catch them in Pokemon Go. You can also get bonuses and rewards for catching multiple Pokemon of the same species or for catching them with good timing and accuracy.

      -

      Another feature is that you can choose your partner Pokemon from either Pikachu or Eevee, depending on which version of the game you have. Your partner Pokemon will always stay by your side and you can interact with it by petting, feeding, or dressing it up. You can also ride some larger Pokemon or have them follow you around in the overworld.

      -

      Additionally, you can connect your Pokemon Go account to your Pokemon Let's Go Pikachu game and transfer some of your Pokemon from one game to another. You can also use a special device called the Poke Ball Plus, which is a controller shaped like a Poke Ball that lets you catch and carry your Pokemon with you in real life.

      -

      Graphics

      -

      Pokemon Let's Go Pikachu is a game that has stunning graphics that bring the Kanto region to life in high definition. The game has colorful and detailed environments that are faithful to the original games but with some enhancements and improvements.

      -

      For example, you can see dynamic weather effects such as rain, snow, fog, or sunshine in different areas of the map. You can also see realistic shadows and reflections that add depth and realism to the scenes. You can also see animated expressions and movements from your Pokemon and other characters that make them more lively and adorable.

      -

      Furthermore, playing Pokemon Let's Go Pikachu on Android gives you some advantages over playing it on Nintendo Switch. For one thing, you can play it on a larger and higher-resolution screen than the Switch's handheld mode. For another thing, you can use some emulator features such as filters or enhancements that can improve the graphics quality and performance of the game. For example, you can use anti-aliasing, texture filtering, or frame skipping to smooth out the edges, sharpen the details, or speed up the gameplay of the game.

      -

      Compatibility

      -

      Pokemon Let's Go Pikachu is a game that is compatible with most Android devices and versions, as long as they meet the minimum requirements mentioned above. However, there may be some cases where you may encounter some compatibility issues or errors while playing the game.

      -

      For example, you may experience some lag, stuttering, or crashing while playing the game on some low-end or older devices. You may also have some problems with the sound, the controls, or the online features of the game on some devices. You may also see some graphical glitches or bugs that affect the appearance or functionality of the game.

      -

      If you face any of these issues, you can try some troubleshooting steps to fix them. Here are some of them:

      -
        -
      • Update your Android device to the latest version and make sure you have enough storage space and battery life.
      • Update the emulator and the game file to the latest version and make sure they are from reliable sources.
      • Adjust the emulator settings and controls to suit your device and preferences.
      • Clear the cache and data of the emulator and restart your device (a small adb-based helper script is sketched after this list).
      • Check your internet connection and make sure it is stable and secure.
      • Contact the emulator or game developers for support or feedback.
      -
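If your device has USB debugging enabled, a few of these steps can also be scripted from a computer using adb. The sketch below is a minimal example under that assumption, not an official tool; the emulator package name is a hypothetical placeholder (the article does not name a specific emulator app), so you would have to substitute the real application id.

```python
import subprocess

# Placeholder only: the article does not name a specific emulator app, so substitute
# the real application id (list installed ids with `adb shell pm list packages`).
EMULATOR_PACKAGE = "org.example.switchemulator"

def adb(*args: str) -> str:
    """Run an adb command and return its stdout (raises CalledProcessError on failure)."""
    result = subprocess.run(["adb", *args], capture_output=True, text=True, check=True)
    return result.stdout.strip()

def health_check() -> None:
    """Show battery status and free storage, two of the checks suggested above."""
    print(adb("shell", "dumpsys", "battery"))
    print(adb("shell", "df", "/data"))

def clear_emulator_data() -> None:
    """Clear the emulator app's cache and data, then reboot the device."""
    print(adb("shell", "pm", "clear", EMULATOR_PACKAGE))  # prints "Success" when it works
    adb("reboot")

if __name__ == "__main__":
    health_check()
    clear_emulator_data()
```

Running it prints the battery report and the free space on /data, then clears the emulator app's stored data and reboots the phone, which covers the "clear cache and data and restart" step in one go.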

      Tips and Tricks for Playing Pokemon Let's Go Pikachu on Android

      -

      Pokemon Let's Go Pikachu is a game that is easy to play but hard to master. It has a lot of features and mechanics that can make your gameplay more fun and challenging. Here are some tips and tricks that can help you play Pokemon Let's Go Pikachu on Android better:

      -

      Catching

      -

      Catching Pokemon is one of the main aspects of Pokemon Let's Go Pikachu. It is how you can expand your collection, level up your Pokemon, and complete your Pokedex. Here are some tips for catching Pokemon:

      -
        -
      • Use different types of Poke Balls to increase your chances of catching Pokemon. For example, use Great Balls or Ultra Balls for harder-to-catch Pokemon, or use special balls such as Lure Balls or Quick Balls for specific situations.
      • Use berries to make catching Pokemon easier. For example, use Razz Berries to make Pokemon less likely to run away, or use Pinap Berries to get more candies from catching Pokemon.
      • Use different throwing techniques to catch Pokemon. For example, use curveballs to get more XP and catch bonuses, or use excellent throws to get higher catch rates.
      • Use motion controls or touch screen controls to catch Pokemon. For example, use motion controls to aim and throw Poke Balls with more accuracy and realism, or use touch screen controls to swipe and tap Poke Balls with more convenience and speed.
      • Use co-op mode to catch Pokemon with a friend. For example, use co-op mode to catch Pokemon with two Poke Balls at once, or use co-op mode to catch Pokemon with different types of Poke Balls.
      -

      Battling

      -

      Battling Pokemon is another important aspect of Pokemon Let's Go Pikachu. It is how you can test your skills, earn money and items, and progress through the story. Here are some tips for battling Pokemon:

      -
        -
      • Use type advantages to win battles. For example, use water-type moves against fire-type Pokemon, or use electric-type moves against water-type Pokemon.
      • Use status effects to weaken your opponents. For example, use poison-type moves to inflict damage over time, or use moves that put the target to sleep to prevent your opponents from attacking.
      • Use stat boosts to strengthen your Pokemon. For example, use X items to increase your Pokemon's attack, defense, speed, etc., or use moves such as Swords Dance or Calm Mind to boost your Pokemon's stats during battle.
      • Use Mega Evolutions to unleash your Pokemon's full potential. For example, use Mega Evolutions to transform your Pokemon into more powerful forms with enhanced stats and abilities.
      • Use co-op mode to battle with a friend. For example, use co-op mode to battle against other trainers in double battles, or use co-op mode to battle against gym leaders and Team Rocket in tag battles.
      -

      Exploring

      -

      Exploring the Kanto region is another fun aspect of Pokemon Let's Go Pikachu. It is how you can discover new places, find hidden items, encounter rare Pokemon, and interact with other characters. Here are some tips for exploring the Kanto region:

    • Use the map to navigate the Kanto region. For example, use the map to see your current location, your destination, and the routes and landmarks in between.
    • Use the Pokedex to track your Pokemon collection. For example, use the Pokedex to see the Pokemon you have caught, seen, or missed, and their information and locations.
    • Use the bag to manage your items. For example, use the bag to see the items you have obtained, use them, or discard them.
    • Use the Pokemon Box to store and swap your Pokemon. For example, use the Pokemon Box to see the Pokemon you have in your party or in your storage, and move them around as you wish.
    • Use the Poke Ball Plus to take your Pokemon with you. For example, use the Poke Ball Plus to transfer a Pokemon from your game to the device, and carry it with you in real life. You can also interact with it by shaking, pressing, or tilting the device.
    -

    Review of Pokemon Let's Go Pikachu on Android

    -

    Pokemon Let's Go Pikachu is a game that I enjoyed playing on Android. It is a game that has a lot of pros and cons that make it a unique and memorable experience. Here is my review of Pokemon Let's Go Pikachu on Android:

    -

    Pros

    -

    Some of the pros of playing Pokemon Let's Go Pikachu on Android are:

    -
      -
    • It is a nostalgic and faithful remake of Pokemon Yellow that brings back a lot of memories and emotions from playing the original game.
    • It is a modern and innovative game that introduces a lot of new features and mechanics that make it more accessible and fun for both new and old players.
    • It is a beautiful and immersive game that has amazing graphics and sound that make the Kanto region look and feel more alive and realistic than ever before.
    • It is a social and interactive game that has a lot of online and offline features that allow you to connect and play with other players around the world or with your friends nearby.
    • It is a versatile and adaptable game that can be played on different devices and platforms with different settings and controls that suit your preferences and needs.
    -

    Cons

    -

    Some of the cons of playing Pokemon Let's Go Pikachu on Android are:

    -
      -
    • It is a simplified and easy game that lacks some of the depth and challenge that some hardcore fans may expect or desire from a Pokemon game.
    • It is a limited and restricted game that only features the original 151 Pokemon plus some extras, which may disappoint or bore some players who want more variety and diversity in their Pokemon collection.
    • It is a buggy and glitchy game that may have some compatibility issues or errors on some Android devices or versions, which may affect the gameplay quality or performance of the game.
    • It is an expensive and legally questionable way to play, since it requires you to download and install an emulator and a game file from unofficial sources, which may cost you money or expose you to legal risks or malware threats.
    -

    Rating

    -

    Based on these pros and cons, I would rate Pokemon Let's Go Pikachu on Android as follows:

    | Criteria | Rating |
    | --- | --- |
    | Gameplay | 8/10 |
    | Graphics | 9/10 |
    | Sound | 9/10 |
    | Replay Value | 7/10 |
    | Overall | 8.3/10 |

    Conclusion

    -

    Pokemon Let's Go Pikachu is a game that I recommend playing on Android if you are a fan of Pokemon or if you are looking for a fun and casual game to play on your smartphone or tablet. It is a game that offers a lot of features that make it unique and enjoyable, such as catching, battling, exploring, etc. It is also a game that has stunning graphics that bring the Kanto region to life in high definition.

    -

    However, it is also a game that has some drawbacks that may make it less appealing or satisfying for some players, such as being too easy, too limited, too buggy, or too expensive. It is also a game that requires you to download and install an emulator and a game file from unofficial sources, which may pose some legal risks or malware threats.

    -

    Therefore, before you decide to play Pokemon Let's Go Pikachu on Android, you should weigh the pros and cons carefully and make sure you are aware of the requirements and steps involved. You should also respect the intellectual property rights of Nintendo and Game Freak, who are the original creators and owners of the game.

    -

    If you have any questions or comments about playing Pokemon Let's Go Pikachu on Android, feel free to leave them in the comment section below. I would love to hear from you and help you out. Thank you for reading this article and I hope you have a great time playing Pokemon Let's Go Pikachu on Android!

    -

    FAQs

    -

    Here are some frequently asked questions and answers about playing Pokemon Let's Go Pikachu on Android:

    -

    Q: Is Pokemon Let's Go Pikachu free to play on Android?

    -

    A: No, Pokemon Let's Go Pikachu is not free to play on Android. You need to purchase the game from Nintendo or a licensed retailer for Nintendo Switch, and then download and install an emulator and a game file on your Android device. This may cost you money or expose you to legal risks or malware threats.

    -

    Q: Is Pokemon Let's Go Pikachu safe to play on Android?

    -

    A: Pokemon Let's Go Pikachu is safe to play on Android as long as you download and install the emulator and the game file from reliable sources, and as long as you have a compatible device and a stable internet connection. However, there may be some compatibility issues or errors that may affect the gameplay quality or performance of the game.

    -

    Q: Can I play Pokemon Let's Go Pikachu online on Android?

    -

    A: Yes, you can play Pokemon Let's Go Pikachu online on Android with other players around the world or with your friends nearby. You can trade, battle, or chat with other players using the online features of the game. However, you may need to create a Nintendo account and subscribe to Nintendo Switch Online to access some of these features.

    -

    Q: Can I transfer my save data from Pokemon Let's Go Pikachu on Nintendo Switch to Android or vice versa?

    -

    A: No, you cannot transfer your save data from Pokemon Let's Go Pikachu on Nintendo Switch to Android or vice versa. The save data is stored in different formats and locations on each device, and there is no official way to transfer it between them. However, you may be able to use some third-party tools or methods to do so, but this is not recommended or supported by Nintendo or Game Freak.

    -

    Q: Can I play Pokemon Let's Go Pikachu on other devices or platforms besides Android and Nintendo Switch?

    -

    A: No, you cannot play Pokemon Let's Go Pikachu on other devices or platforms besides Android and Nintendo Switch. The game is exclusive to Nintendo Switch, and the only way to play it on Android is by using an emulator and a game file. There is no official version of the game for other devices or platforms such as iOS, Windows, Mac, etc.

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Art Of The Incredibles Pdf.md b/spaces/contluForse/HuggingGPT/assets/Art Of The Incredibles Pdf.md deleted file mode 100644 index 933249ad466568f64227591d941a8f53f91ae7c9..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Art Of The Incredibles Pdf.md +++ /dev/null @@ -1,6 +0,0 @@ -

    art of the incredibles pdf


    Download ---> https://ssurll.com/2uzwyY



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/contluForse/HuggingGPT/assets/Chhota Bheem And The Throne Of Bali Tamil Dubbed Movies.md b/spaces/contluForse/HuggingGPT/assets/Chhota Bheem And The Throne Of Bali Tamil Dubbed Movies.md deleted file mode 100644 index 6beb17156fa74eedfb8b8feb04842744de5109a0..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Chhota Bheem And The Throne Of Bali Tamil Dubbed Movies.md +++ /dev/null @@ -1,29 +0,0 @@ -
    -

    Chhota Bheem And The Throne Of Bali Tamil Dubbed Movies: A Fun-Filled Adventure For Kids

    -

    If you are looking for a fun and entertaining movie for your kids, you might want to check out Chhota Bheem And The Throne Of Bali Tamil Dubbed Movies. This is a series of animated movies based on the popular Indian cartoon character Chhota Bheem, who is a brave and adventurous boy who lives in the fictional village of Dholakpur.

    -

    In these movies, Chhota Bheem and his friends travel to different places and face various challenges and enemies. One of the most exciting movies in this series is Chhota Bheem And The Throne Of Bali, where Chhota Bheem and his friends visit the island of Bali and help the king and princess of Bali fight against the evil Rangda.

    -

    Chhota Bheem And The Throne Of Bali Tamil Dubbed Movies


    DOWNLOAD ✑ ✑ ✑ https://ssurll.com/2uzw1z



    -

    What is Chhota Bheem And The Throne Of Bali About?

    -

    Chhota Bheem And The Throne Of Bali is a 2013 Indian animated movie that was released in Hindi, Tamil, Telugu, and English languages. The movie is directed by Rajiv Chilaka and produced by Green Gold Animation.

    -

    The movie follows the adventures of Chhota Bheem and his friends Raju, Jaggu, Chutki, Kalia, and Dholu-Bholu as they travel to Bali on an invitation from the king of Bali. There, they meet the king's daughter Indumati and her pet tiger Arjun. They also learn about the legend of the throne of Bali, which is protected by a powerful force field.

    -

    However, their fun trip turns into a dangerous mission when they discover that the evil witch Rangda has escaped from her prison and is plotting to take over Bali with her army of demons. Rangda kidnaps Indumati and tries to break the force field of the throne with her magic. Chhota Bheem and his friends must use their courage and skills to rescue Indumati and save Bali from Rangda's wrath.

    -

    Why Should You Watch Chhota Bheem And The Throne Of Bali Tamil Dubbed Movies?

    -

    There are many reasons why you should watch Chhota Bheem And The Throne Of Bali Tamil Dubbed Movies with your kids. Here are some of them:

    -
      -
    • The movie is full of action, comedy, and drama that will keep your kids engaged and entertained.
    • The movie showcases the rich culture and heritage of Bali, such as its music, dance, art, architecture, and mythology.
    • The movie teaches valuable lessons about friendship, loyalty, bravery, teamwork, and respect for others.
    • The movie features catchy songs and colorful animation that will appeal to your kids' senses.
    • The movie is available in the Tamil language, which will make it easier for your kids to understand and enjoy.
    -

    Where Can You Watch Chhota Bheem And The Throne Of Bali Tamil Dubbed Movies?

    -

    If you are interested in watching Chhota Bheem And The Throne Of Bali Tamil Dubbed Movies with your kids, you can find them online on various platforms. Some of the options are:

    -
      -
    • YouTube: You can watch the full movie on YouTube for free. However, you might have to deal with ads and low-quality video.
    • Netflix: You can watch the movie on Netflix with a subscription. You can also enjoy high-quality video and audio.
    • Prime Video: You can watch the movie on Prime Video with a subscription or by renting or buying it. You can also access other features such as subtitles and offline viewing.
    -

    You can also

    -

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Data Feed Studio Nulled 25.md b/spaces/contluForse/HuggingGPT/assets/Data Feed Studio Nulled 25.md deleted file mode 100644 index fa566b618d8f165142e4bb02b385d8fa89da55e8..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Data Feed Studio Nulled 25.md +++ /dev/null @@ -1,5 +0,0 @@ - -

    Work with remote cameras up to 2 km away from your studio or outside broadcast truck using SMPTE fiber. The Blackmagic Studio Fiber Converter converts SDI video, audio, and remote camera control connections into a single optical fiber cable, which connects to a Blackmagic Camera Fiber Converter attached to a remote camera. All video and data are transmitted using standard IP for extremely low latency broadcast quality video. You get 12G-SDI for an HD or Ultra HD camera feed and 3 independent HD return feeds, along with standard connections for camera control, PTZ, two dual channel talkback intercoms, tracker talkback, tally and a 5 inch LCD screen for monitoring camera and return feeds. The Blackmagic Studio Fiber Converter also provides power for your remote cameras and accessories.

    -

    Data Feed Studio Nulled 25


    Download ……… https://ssurll.com/2uzxPQ



    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cscan/CodeFormer/CodeFormer/facelib/detection/yolov5face/utils/__init__.py b/spaces/cscan/CodeFormer/CodeFormer/facelib/detection/yolov5face/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/daarumadx/bot/src/transform/opencv/resize.py b/spaces/daarumadx/bot/src/transform/opencv/resize.py deleted file mode 100644 index 6ee9e437d9673c1cf772a0f2bf100468fdc20141..0000000000000000000000000000000000000000 --- a/spaces/daarumadx/bot/src/transform/opencv/resize.py +++ /dev/null @@ -1,184 +0,0 @@ -"""OpenCV Resize Transforms.""" -import cv2 -import numpy as np -import tempfile -from PIL import Image - -from config import closest_number, Config as Conf -from transform.opencv import ImageTransformOpenCV -from transform.opencv.correct import DressToCorrect - - -class ImageToCrop(ImageTransformOpenCV): - """Image -> Crop [OPENCV].""" - - def __init__(self, input_index=(-1,)): - """ - Image To Crop Constructor. - - :param input_index: index where to take the inputs (default is (-1) for previous transformation) - :param args: args parameter to run the image transformation (default use Conf.args) - """ - super().__init__(input_index=input_index) - self.__x1 = Conf.args['overlay'][0] - self.__y1 = Conf.args['overlay'][1] - self.__x2 = Conf.args['overlay'][2] - self.__y2 = Conf.args['overlay'][3] - - def _execute(self, *args): - """ - Crop the image by the given coords. - - :param args: <[RGB]> Image to crop - :param x1: x1 coord - :param y1: y1 coord - :param x2: x2 coord - :param y2: y2 coord - :return: image cropped - """ - return args[0][self.__y1:self.__y2, self.__x1:self.__x2] - - -class ImageToOverlay(ImageToCrop): - """Image -> Overlay [OPENCV].""" - - def __init__(self, input_index=(0, -1)): - """ - Image To Crop Overlay. - - :param input_index: index where to take the inputs (default is (0,-1) for first - and previous transformation) - :param args: args parameter to run the image transformation (default use Conf.args) - """ - super().__init__(input_index=input_index) - self.__x1 = Conf.args['overlay'][0] - self.__y1 = Conf.args['overlay'][1] - self.__x2 = Conf.args['overlay'][2] - self.__y2 = Conf.args['overlay'][3] - - def _execute(self, *args): - """ - Overlay an image by at the given coords with an another. 
- - :param args: <[RGB,RGB]] Image to overlay, the overlay - :return: image - """ - img = args[1] - img = cv2.resize(img, (abs(self.__x1 - self.__x2), abs(self.__y1 - self.__y2))) - img = img[:, :, :3] - img_to_overlay = DressToCorrect.correct_color(args[0], 5) - img_to_overlay[self.__y1:self.__y2, self.__x1:self.__x2] = img[:, :, :3] - return img_to_overlay - - -class ImageToResized(ImageTransformOpenCV): - """Image -> Resized [OPENCV].""" - - def _execute(self, *args): - new_size = self._calculate_new_size(args[0]) - img = cv2.resize(args[0], (new_size[1], new_size[0])) - return self._make_new_image(img, new_size) - - @staticmethod - def _calculate_new_size(img): - old_size = img.shape[:2] - ratio = float(Conf.desired_size) / max(old_size) - new_size = tuple([int(x * ratio) for x in old_size]) - - return new_size - - @staticmethod - def _make_new_image(img, new_size): - delta_w = Conf.desired_size - new_size[1] - delta_h = Conf.desired_size - new_size[0] - top, bottom = delta_h // 2, delta_h - (delta_h // 2) - left, right = delta_w // 2, delta_w - (delta_w // 2) - - return cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=[255, 255, 255]) - - -class ImageToResizedCrop(ImageToResized): - """Image -> Resized Crop [OPENCV].""" - - @staticmethod - def _calculate_new_size(img): - if (img.shape[1] > img.shape[0]): - ratio = float(img.shape[1] / img.shape[0]) - new_height = Conf.desired_size - new_width = int(new_height * ratio) - elif (img.shape[1] < img.shape[0]): - ratio = float(img.shape[0] / img.shape[1]) - new_width = Conf.desired_size - new_height = int(new_width * ratio) - else: - new_width = Conf.desired_size - new_height = Conf.desired_size - - new_size = (new_height, new_width) - - return new_size - - @staticmethod - def _make_new_image(img, new_size): - delta_w = new_size[1] - Conf.desired_size - delta_h = new_size[0] - Conf.desired_size - top = delta_h // 2 - left = delta_w // 2 - - return img[top:Conf.desired_size + top, left:Conf.desired_size + left] - - -class ImageToRescale(ImageTransformOpenCV): - """Image -> Rescale [OPENCV].""" - - def _execute(self, *args): - """ - Rescale an image. - - :param args: <[RGB]> image to rescale - :return: image - """ - return cv2.resize(args[0], (Conf.desired_size, Conf.desired_size)) - -class ImageToNearest(ImageTransformOpenCV): - """Image -> Rescale [OPENCV].""" - - def _execute(self, *args): - """ - Rescale an image. - - :param args: <[RGB]> image to rescale - :return: image - """ - height, width = args[0].shape[:2] - - new_width = closest_number(width) - new_height = closest_number(height) - - Conf.log.info("Image resize to Nearest: {}x{} -> {}x{}".format(width, height, new_width, new_height)) - - return cv2.resize(args[0], (new_width, new_height)) - -class ImageCompress(ImageTransformOpenCV): - """Image -> Rescale [OPENCV].""" - - def _execute(self, *args): - """ - Rescale an image. 
- - :param args: <[RGB]> image to rescale - :return: image - """ - temp_path = tempfile.mktemp(".jpg") - - quality = int(self._args["compress"]) - quality = abs(quality - 100) - - if quality <= 0: - quality = 1 - - Conf.log.info("Compressing Image with level {} (Quality: {})".format(self._args["compress"], quality)) - - cv2.imwrite(temp_path, args[0], [cv2.IMWRITE_JPEG_QUALITY, quality]) - - return cv2.imread(temp_path) diff --git a/spaces/dawood/audioldm-text-to-audio-generation/audioldm/utils.py b/spaces/dawood/audioldm-text-to-audio-generation/audioldm/utils.py deleted file mode 100644 index 2a22213b627ebee77ab3d0bda3a59d1c3ade4040..0000000000000000000000000000000000000000 --- a/spaces/dawood/audioldm-text-to-audio-generation/audioldm/utils.py +++ /dev/null @@ -1,73 +0,0 @@ -import importlib - -from inspect import isfunction - -import os -import soundfile as sf - -def seed_everything(seed): - import random, os - import numpy as np - import torch - - random.seed(seed) - os.environ['PYTHONHASHSEED'] = str(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed(seed) - torch.backends.cudnn.deterministic = True - torch.backends.cudnn.benchmark = True - -def save_wave(waveform, savepath, name="outwav"): - if type(name) is not list: - name = [name] * waveform.shape[0] - - for i in range(waveform.shape[0]): - path = os.path.join( - savepath, - "%s_%s.wav" - % ( - os.path.basename(name[i]) - if (not ".wav" in name[i]) - else os.path.basename(name[i]).split(".")[0], - i, - ), - ) - sf.write(path, waveform[i, 0], samplerate=16000) - -def exists(x): - return x is not None - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -def count_params(model, verbose=False): - total_params = sum(p.numel() for p in model.parameters()) - if verbose: - print(f"{model.__class__.__name__} has {total_params * 1.e-6:.2f} M params.") - return total_params - - -def get_obj_from_str(string, reload=False): - module, cls = string.rsplit(".", 1) - if reload: - module_imp = importlib.import_module(module) - importlib.reload(module_imp) - return getattr(importlib.import_module(module, package=None), cls) - - -def instantiate_from_config(config): - if not "target" in config: - if config == "__is_first_stage__": - return None - elif config == "__is_unconditional__": - return None - raise KeyError("Expected key `target` to instantiate.") - return get_obj_from_str(config["target"])(**config.get("params", dict())) - -def default_audioldm_config(): - return {'wave_file_save_path': './output', 'id': {'version': 'v1', 'name': 'default', 'root': '/mnt/fast/nobackup/users/hl01486/projects/general_audio_generation/AudioLDM-python/config/default/latent_diffusion.yaml'}, 'model': {'device': 'cuda', 'reload_from_ckpt': '/mnt/fast/nobackup/scratch4weeks/hl01486/exps/audio_generation/stablediffusion/LDM/audioverse/2023_01_14_full_F4_B_spatial_v2_v1/checkpoints/last.ckpt', 'target': 'audioldm.pipline.LatentDiffusion', 'params': {'base_learning_rate': 5e-06, 'linear_start': 0.0015, 'linear_end': 0.0195, 'num_timesteps_cond': 1, 'log_every_t': 200, 'timesteps': 1000, 'first_stage_key': 'fbank', 'cond_stage_key': 'waveform', 'latent_t_size': 256, 'latent_f_size': 16, 'channels': 8, 'cond_stage_trainable': True, 'conditioning_key': 'film', 'monitor': 'val/loss_simple_ema', 'scale_by_std': True, 'unet_config': {'target': 'audioldm.latent_diffusion.openaimodel.UNetModel', 'params': {'image_size': 64, 'extra_film_condition_dim': 512, 'extra_film_use_concat': True, 'in_channels': 
8, 'out_channels': 8, 'model_channels': 128, 'attention_resolutions': [8, 4, 2], 'num_res_blocks': 2, 'channel_mult': [1, 2, 3, 5], 'num_head_channels': 32, 'use_spatial_transformer': True}}, 'first_stage_config': {'base_learning_rate': 4.5e-05, 'target': 'audioldm.variational_autoencoder.autoencoder.AutoencoderKL', 'params': {'monitor': 'val/rec_loss', 'image_key': 'fbank', 'subband': 1, 'embed_dim': 8, 'time_shuffle': 1, 'ddconfig': {'double_z': True, 'z_channels': 8, 'resolution': 256, 'downsample_time': False, 'in_channels': 1, 'out_ch': 1, 'ch': 128, 'ch_mult': [1, 2, 4], 'num_res_blocks': 2, 'attn_resolutions': [], 'dropout': 0.0}}}, 'cond_stage_config': {'target': 'audioldm.clap.encoders.CLAPAudioEmbeddingClassifierFreev2', 'params': {'key': 'waveform', 'sampling_rate': 16000, 'embed_mode': 'audio', 'unconditional_prob': 0.1}}}}} \ No newline at end of file diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/ImageStat.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/ImageStat.py deleted file mode 100644 index b7ebddf066ab6eb115a79d6bc34e31ab0c1569bd..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/ImageStat.py +++ /dev/null @@ -1,148 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# global image statistics -# -# History: -# 1996-04-05 fl Created -# 1997-05-21 fl Added mask; added rms, var, stddev attributes -# 1997-08-05 fl Added median -# 1998-07-05 hk Fixed integer overflow error -# -# Notes: -# This class shows how to implement delayed evaluation of attributes. -# To get a certain value, simply access the corresponding attribute. -# The __getattr__ dispatcher takes care of the rest. -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1996-97. -# -# See the README file for information on usage and redistribution. 
-# - -import functools -import math -import operator - - -class Stat: - def __init__(self, image_or_list, mask=None): - try: - if mask: - self.h = image_or_list.histogram(mask) - else: - self.h = image_or_list.histogram() - except AttributeError: - self.h = image_or_list # assume it to be a histogram list - if not isinstance(self.h, list): - msg = "first argument must be image or list" - raise TypeError(msg) - self.bands = list(range(len(self.h) // 256)) - - def __getattr__(self, id): - """Calculate missing attribute""" - if id[:4] == "_get": - raise AttributeError(id) - # calculate missing attribute - v = getattr(self, "_get" + id)() - setattr(self, id, v) - return v - - def _getextrema(self): - """Get min/max values for each band in the image""" - - def minmax(histogram): - n = 255 - x = 0 - for i in range(256): - if histogram[i]: - n = min(n, i) - x = max(x, i) - return n, x # returns (255, 0) if there's no data in the histogram - - v = [] - for i in range(0, len(self.h), 256): - v.append(minmax(self.h[i:])) - return v - - def _getcount(self): - """Get total number of pixels in each layer""" - - v = [] - for i in range(0, len(self.h), 256): - v.append(functools.reduce(operator.add, self.h[i : i + 256])) - return v - - def _getsum(self): - """Get sum of all pixels in each layer""" - - v = [] - for i in range(0, len(self.h), 256): - layer_sum = 0.0 - for j in range(256): - layer_sum += j * self.h[i + j] - v.append(layer_sum) - return v - - def _getsum2(self): - """Get squared sum of all pixels in each layer""" - - v = [] - for i in range(0, len(self.h), 256): - sum2 = 0.0 - for j in range(256): - sum2 += (j**2) * float(self.h[i + j]) - v.append(sum2) - return v - - def _getmean(self): - """Get average pixel level for each layer""" - - v = [] - for i in self.bands: - v.append(self.sum[i] / self.count[i]) - return v - - def _getmedian(self): - """Get median pixel level for each layer""" - - v = [] - for i in self.bands: - s = 0 - half = self.count[i] // 2 - b = i * 256 - for j in range(256): - s = s + self.h[b + j] - if s > half: - break - v.append(j) - return v - - def _getrms(self): - """Get RMS for each layer""" - - v = [] - for i in self.bands: - v.append(math.sqrt(self.sum2[i] / self.count[i])) - return v - - def _getvar(self): - """Get variance for each layer""" - - v = [] - for i in self.bands: - n = self.count[i] - v.append((self.sum2[i] - (self.sum[i] ** 2.0) / n) / n) - return v - - def _getstddev(self): - """Get standard deviation for each layer""" - - v = [] - for i in self.bands: - v.append(math.sqrt(self.var[i])) - return v - - -Global = Stat # compatibility diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/idna/codec.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/idna/codec.py deleted file mode 100644 index 1ca9ba62c208527b796b49306f4b8c95eb868a51..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/idna/codec.py +++ /dev/null @@ -1,112 +0,0 @@ -from .core import encode, decode, alabel, ulabel, IDNAError -import codecs -import re -from typing import Tuple, Optional - -_unicode_dots_re = re.compile('[\u002e\u3002\uff0e\uff61]') - -class Codec(codecs.Codec): - - def encode(self, data: str, errors: str = 'strict') -> Tuple[bytes, int]: - if errors != 'strict': - raise IDNAError('Unsupported error handling \"{}\"'.format(errors)) - - if not data: - return b"", 0 - - return encode(data), len(data) - - def decode(self, data: bytes, errors: 
str = 'strict') -> Tuple[str, int]: - if errors != 'strict': - raise IDNAError('Unsupported error handling \"{}\"'.format(errors)) - - if not data: - return '', 0 - - return decode(data), len(data) - -class IncrementalEncoder(codecs.BufferedIncrementalEncoder): - def _buffer_encode(self, data: str, errors: str, final: bool) -> Tuple[str, int]: # type: ignore - if errors != 'strict': - raise IDNAError('Unsupported error handling \"{}\"'.format(errors)) - - if not data: - return "", 0 - - labels = _unicode_dots_re.split(data) - trailing_dot = '' - if labels: - if not labels[-1]: - trailing_dot = '.' - del labels[-1] - elif not final: - # Keep potentially unfinished label until the next call - del labels[-1] - if labels: - trailing_dot = '.' - - result = [] - size = 0 - for label in labels: - result.append(alabel(label)) - if size: - size += 1 - size += len(label) - - # Join with U+002E - result_str = '.'.join(result) + trailing_dot # type: ignore - size += len(trailing_dot) - return result_str, size - -class IncrementalDecoder(codecs.BufferedIncrementalDecoder): - def _buffer_decode(self, data: str, errors: str, final: bool) -> Tuple[str, int]: # type: ignore - if errors != 'strict': - raise IDNAError('Unsupported error handling \"{}\"'.format(errors)) - - if not data: - return ('', 0) - - labels = _unicode_dots_re.split(data) - trailing_dot = '' - if labels: - if not labels[-1]: - trailing_dot = '.' - del labels[-1] - elif not final: - # Keep potentially unfinished label until the next call - del labels[-1] - if labels: - trailing_dot = '.' - - result = [] - size = 0 - for label in labels: - result.append(ulabel(label)) - if size: - size += 1 - size += len(label) - - result_str = '.'.join(result) + trailing_dot - size += len(trailing_dot) - return (result_str, size) - - -class StreamWriter(Codec, codecs.StreamWriter): - pass - - -class StreamReader(Codec, codecs.StreamReader): - pass - - -def getregentry() -> codecs.CodecInfo: - # Compatibility as a search_function for codecs.register() - return codecs.CodecInfo( - name='idna', - encode=Codec().encode, # type: ignore - decode=Codec().decode, # type: ignore - incrementalencoder=IncrementalEncoder, - incrementaldecoder=IncrementalDecoder, - streamwriter=StreamWriter, - streamreader=StreamReader, - ) diff --git a/spaces/declare-lab/tango/diffusers/tests/schedulers/test_scheduler_flax.py b/spaces/declare-lab/tango/diffusers/tests/schedulers/test_scheduler_flax.py deleted file mode 100644 index 8f7ad59d285eb50a42ab5809ce60dd0bf26e026c..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/tests/schedulers/test_scheduler_flax.py +++ /dev/null @@ -1,919 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-import inspect -import tempfile -import unittest -from typing import Dict, List, Tuple - -from diffusers import FlaxDDIMScheduler, FlaxDDPMScheduler, FlaxPNDMScheduler -from diffusers.utils import is_flax_available -from diffusers.utils.testing_utils import require_flax - - -if is_flax_available(): - import jax - import jax.numpy as jnp - from jax import random - - jax_device = jax.default_backend() - - -@require_flax -class FlaxSchedulerCommonTest(unittest.TestCase): - scheduler_classes = () - forward_default_kwargs = () - - @property - def dummy_sample(self): - batch_size = 4 - num_channels = 3 - height = 8 - width = 8 - - key1, key2 = random.split(random.PRNGKey(0)) - sample = random.uniform(key1, (batch_size, num_channels, height, width)) - - return sample, key2 - - @property - def dummy_sample_deter(self): - batch_size = 4 - num_channels = 3 - height = 8 - width = 8 - - num_elems = batch_size * num_channels * height * width - sample = jnp.arange(num_elems) - sample = sample.reshape(num_channels, height, width, batch_size) - sample = sample / num_elems - return jnp.transpose(sample, (3, 0, 1, 2)) - - def get_scheduler_config(self): - raise NotImplementedError - - def dummy_model(self): - def model(sample, t, *args): - return sample * t / (t + 1) - - return model - - def check_over_configs(self, time_step=0, **config): - kwargs = dict(self.forward_default_kwargs) - - num_inference_steps = kwargs.pop("num_inference_steps", None) - - for scheduler_class in self.scheduler_classes: - sample, key = self.dummy_sample - residual = 0.1 * sample - - scheduler_config = self.get_scheduler_config(**config) - scheduler = scheduler_class(**scheduler_config) - state = scheduler.create_state() - - with tempfile.TemporaryDirectory() as tmpdirname: - scheduler.save_config(tmpdirname) - new_scheduler, new_state = scheduler_class.from_pretrained(tmpdirname) - - if num_inference_steps is not None and hasattr(scheduler, "set_timesteps"): - state = scheduler.set_timesteps(state, num_inference_steps) - new_state = new_scheduler.set_timesteps(new_state, num_inference_steps) - elif num_inference_steps is not None and not hasattr(scheduler, "set_timesteps"): - kwargs["num_inference_steps"] = num_inference_steps - - output = scheduler.step(state, residual, time_step, sample, key, **kwargs).prev_sample - new_output = new_scheduler.step(new_state, residual, time_step, sample, key, **kwargs).prev_sample - - assert jnp.sum(jnp.abs(output - new_output)) < 1e-5, "Scheduler outputs are not identical" - - def check_over_forward(self, time_step=0, **forward_kwargs): - kwargs = dict(self.forward_default_kwargs) - kwargs.update(forward_kwargs) - - num_inference_steps = kwargs.pop("num_inference_steps", None) - - for scheduler_class in self.scheduler_classes: - sample, key = self.dummy_sample - residual = 0.1 * sample - - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - state = scheduler.create_state() - - with tempfile.TemporaryDirectory() as tmpdirname: - scheduler.save_config(tmpdirname) - new_scheduler, new_state = scheduler_class.from_pretrained(tmpdirname) - - if num_inference_steps is not None and hasattr(scheduler, "set_timesteps"): - state = scheduler.set_timesteps(state, num_inference_steps) - new_state = new_scheduler.set_timesteps(new_state, num_inference_steps) - elif num_inference_steps is not None and not hasattr(scheduler, "set_timesteps"): - kwargs["num_inference_steps"] = num_inference_steps - - output = scheduler.step(state, residual, time_step, sample, 
key, **kwargs).prev_sample - new_output = new_scheduler.step(new_state, residual, time_step, sample, key, **kwargs).prev_sample - - assert jnp.sum(jnp.abs(output - new_output)) < 1e-5, "Scheduler outputs are not identical" - - def test_from_save_pretrained(self): - kwargs = dict(self.forward_default_kwargs) - - num_inference_steps = kwargs.pop("num_inference_steps", None) - - for scheduler_class in self.scheduler_classes: - sample, key = self.dummy_sample - residual = 0.1 * sample - - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - state = scheduler.create_state() - - with tempfile.TemporaryDirectory() as tmpdirname: - scheduler.save_config(tmpdirname) - new_scheduler, new_state = scheduler_class.from_pretrained(tmpdirname) - - if num_inference_steps is not None and hasattr(scheduler, "set_timesteps"): - state = scheduler.set_timesteps(state, num_inference_steps) - new_state = new_scheduler.set_timesteps(new_state, num_inference_steps) - elif num_inference_steps is not None and not hasattr(scheduler, "set_timesteps"): - kwargs["num_inference_steps"] = num_inference_steps - - output = scheduler.step(state, residual, 1, sample, key, **kwargs).prev_sample - new_output = new_scheduler.step(new_state, residual, 1, sample, key, **kwargs).prev_sample - - assert jnp.sum(jnp.abs(output - new_output)) < 1e-5, "Scheduler outputs are not identical" - - def test_step_shape(self): - kwargs = dict(self.forward_default_kwargs) - - num_inference_steps = kwargs.pop("num_inference_steps", None) - - for scheduler_class in self.scheduler_classes: - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - state = scheduler.create_state() - - sample, key = self.dummy_sample - residual = 0.1 * sample - - if num_inference_steps is not None and hasattr(scheduler, "set_timesteps"): - state = scheduler.set_timesteps(state, num_inference_steps) - elif num_inference_steps is not None and not hasattr(scheduler, "set_timesteps"): - kwargs["num_inference_steps"] = num_inference_steps - - output_0 = scheduler.step(state, residual, 0, sample, key, **kwargs).prev_sample - output_1 = scheduler.step(state, residual, 1, sample, key, **kwargs).prev_sample - - self.assertEqual(output_0.shape, sample.shape) - self.assertEqual(output_0.shape, output_1.shape) - - def test_scheduler_outputs_equivalence(self): - def set_nan_tensor_to_zero(t): - return t.at[t != t].set(0) - - def recursive_check(tuple_object, dict_object): - if isinstance(tuple_object, (List, Tuple)): - for tuple_iterable_value, dict_iterable_value in zip(tuple_object, dict_object.values()): - recursive_check(tuple_iterable_value, dict_iterable_value) - elif isinstance(tuple_object, Dict): - for tuple_iterable_value, dict_iterable_value in zip(tuple_object.values(), dict_object.values()): - recursive_check(tuple_iterable_value, dict_iterable_value) - elif tuple_object is None: - return - else: - self.assertTrue( - jnp.allclose(set_nan_tensor_to_zero(tuple_object), set_nan_tensor_to_zero(dict_object), atol=1e-5), - msg=( - "Tuple and dict output are not equal. Difference:" - f" {jnp.max(jnp.abs(tuple_object - dict_object))}. Tuple has `nan`:" - f" {jnp.isnan(tuple_object).any()} and `inf`: {jnp.isinf(tuple_object)}. Dict has" - f" `nan`: {jnp.isnan(dict_object).any()} and `inf`: {jnp.isinf(dict_object)}." 
- ), - ) - - kwargs = dict(self.forward_default_kwargs) - num_inference_steps = kwargs.pop("num_inference_steps", None) - - for scheduler_class in self.scheduler_classes: - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - state = scheduler.create_state() - - sample, key = self.dummy_sample - residual = 0.1 * sample - - if num_inference_steps is not None and hasattr(scheduler, "set_timesteps"): - state = scheduler.set_timesteps(state, num_inference_steps) - elif num_inference_steps is not None and not hasattr(scheduler, "set_timesteps"): - kwargs["num_inference_steps"] = num_inference_steps - - outputs_dict = scheduler.step(state, residual, 0, sample, key, **kwargs) - - if num_inference_steps is not None and hasattr(scheduler, "set_timesteps"): - state = scheduler.set_timesteps(state, num_inference_steps) - elif num_inference_steps is not None and not hasattr(scheduler, "set_timesteps"): - kwargs["num_inference_steps"] = num_inference_steps - - outputs_tuple = scheduler.step(state, residual, 0, sample, key, return_dict=False, **kwargs) - - recursive_check(outputs_tuple[0], outputs_dict.prev_sample) - - def test_deprecated_kwargs(self): - for scheduler_class in self.scheduler_classes: - has_kwarg_in_model_class = "kwargs" in inspect.signature(scheduler_class.__init__).parameters - has_deprecated_kwarg = len(scheduler_class._deprecated_kwargs) > 0 - - if has_kwarg_in_model_class and not has_deprecated_kwarg: - raise ValueError( - f"{scheduler_class} has `**kwargs` in its __init__ method but has not defined any deprecated" - " kwargs under the `_deprecated_kwargs` class attribute. Make sure to either remove `**kwargs` if" - " there are no deprecated arguments or add the deprecated argument with `_deprecated_kwargs =" - " []`" - ) - - if not has_kwarg_in_model_class and has_deprecated_kwarg: - raise ValueError( - f"{scheduler_class} doesn't have `**kwargs` in its __init__ method but has defined deprecated" - " kwargs under the `_deprecated_kwargs` class attribute. 
Make sure to either add the `**kwargs`" - f" argument to {self.model_class}.__init__ if there are deprecated arguments or remove the" - " deprecated argument from `_deprecated_kwargs = []`" - ) - - -@require_flax -class FlaxDDPMSchedulerTest(FlaxSchedulerCommonTest): - scheduler_classes = (FlaxDDPMScheduler,) - - def get_scheduler_config(self, **kwargs): - config = { - "num_train_timesteps": 1000, - "beta_start": 0.0001, - "beta_end": 0.02, - "beta_schedule": "linear", - "variance_type": "fixed_small", - "clip_sample": True, - } - - config.update(**kwargs) - return config - - def test_timesteps(self): - for timesteps in [1, 5, 100, 1000]: - self.check_over_configs(num_train_timesteps=timesteps) - - def test_betas(self): - for beta_start, beta_end in zip([0.0001, 0.001, 0.01, 0.1], [0.002, 0.02, 0.2, 2]): - self.check_over_configs(beta_start=beta_start, beta_end=beta_end) - - def test_schedules(self): - for schedule in ["linear", "squaredcos_cap_v2"]: - self.check_over_configs(beta_schedule=schedule) - - def test_variance_type(self): - for variance in ["fixed_small", "fixed_large", "other"]: - self.check_over_configs(variance_type=variance) - - def test_clip_sample(self): - for clip_sample in [True, False]: - self.check_over_configs(clip_sample=clip_sample) - - def test_time_indices(self): - for t in [0, 500, 999]: - self.check_over_forward(time_step=t) - - def test_variance(self): - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - state = scheduler.create_state() - - assert jnp.sum(jnp.abs(scheduler._get_variance(state, 0) - 0.0)) < 1e-5 - assert jnp.sum(jnp.abs(scheduler._get_variance(state, 487) - 0.00979)) < 1e-5 - assert jnp.sum(jnp.abs(scheduler._get_variance(state, 999) - 0.02)) < 1e-5 - - def test_full_loop_no_noise(self): - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - state = scheduler.create_state() - - num_trained_timesteps = len(scheduler) - - model = self.dummy_model() - sample = self.dummy_sample_deter - key1, key2 = random.split(random.PRNGKey(0)) - - for t in reversed(range(num_trained_timesteps)): - # 1. predict noise residual - residual = model(sample, t) - - # 2. 
predict previous mean of sample x_t-1 - output = scheduler.step(state, residual, t, sample, key1) - pred_prev_sample = output.prev_sample - state = output.state - key1, key2 = random.split(key2) - - # if t > 0: - # noise = self.dummy_sample_deter - # variance = scheduler.get_variance(t) ** (0.5) * noise - # - # sample = pred_prev_sample + variance - sample = pred_prev_sample - - result_sum = jnp.sum(jnp.abs(sample)) - result_mean = jnp.mean(jnp.abs(sample)) - - if jax_device == "tpu": - assert abs(result_sum - 255.0714) < 1e-2 - assert abs(result_mean - 0.332124) < 1e-3 - else: - assert abs(result_sum - 255.1113) < 1e-2 - assert abs(result_mean - 0.332176) < 1e-3 - - -@require_flax -class FlaxDDIMSchedulerTest(FlaxSchedulerCommonTest): - scheduler_classes = (FlaxDDIMScheduler,) - forward_default_kwargs = (("num_inference_steps", 50),) - - def get_scheduler_config(self, **kwargs): - config = { - "num_train_timesteps": 1000, - "beta_start": 0.0001, - "beta_end": 0.02, - "beta_schedule": "linear", - } - - config.update(**kwargs) - return config - - def full_loop(self, **config): - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config(**config) - scheduler = scheduler_class(**scheduler_config) - state = scheduler.create_state() - key1, key2 = random.split(random.PRNGKey(0)) - - num_inference_steps = 10 - - model = self.dummy_model() - sample = self.dummy_sample_deter - - state = scheduler.set_timesteps(state, num_inference_steps) - - for t in state.timesteps: - residual = model(sample, t) - output = scheduler.step(state, residual, t, sample) - sample = output.prev_sample - state = output.state - key1, key2 = random.split(key2) - - return sample - - def check_over_configs(self, time_step=0, **config): - kwargs = dict(self.forward_default_kwargs) - - num_inference_steps = kwargs.pop("num_inference_steps", None) - - for scheduler_class in self.scheduler_classes: - sample, _ = self.dummy_sample - residual = 0.1 * sample - - scheduler_config = self.get_scheduler_config(**config) - scheduler = scheduler_class(**scheduler_config) - state = scheduler.create_state() - - with tempfile.TemporaryDirectory() as tmpdirname: - scheduler.save_config(tmpdirname) - new_scheduler, new_state = scheduler_class.from_pretrained(tmpdirname) - - if num_inference_steps is not None and hasattr(scheduler, "set_timesteps"): - state = scheduler.set_timesteps(state, num_inference_steps) - new_state = new_scheduler.set_timesteps(new_state, num_inference_steps) - elif num_inference_steps is not None and not hasattr(scheduler, "set_timesteps"): - kwargs["num_inference_steps"] = num_inference_steps - - output = scheduler.step(state, residual, time_step, sample, **kwargs).prev_sample - new_output = new_scheduler.step(new_state, residual, time_step, sample, **kwargs).prev_sample - - assert jnp.sum(jnp.abs(output - new_output)) < 1e-5, "Scheduler outputs are not identical" - - def test_from_save_pretrained(self): - kwargs = dict(self.forward_default_kwargs) - - num_inference_steps = kwargs.pop("num_inference_steps", None) - - for scheduler_class in self.scheduler_classes: - sample, _ = self.dummy_sample - residual = 0.1 * sample - - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - state = scheduler.create_state() - - with tempfile.TemporaryDirectory() as tmpdirname: - scheduler.save_config(tmpdirname) - new_scheduler, new_state = scheduler_class.from_pretrained(tmpdirname) - - if num_inference_steps is not None and hasattr(scheduler, 
"set_timesteps"): - state = scheduler.set_timesteps(state, num_inference_steps) - new_state = new_scheduler.set_timesteps(new_state, num_inference_steps) - elif num_inference_steps is not None and not hasattr(scheduler, "set_timesteps"): - kwargs["num_inference_steps"] = num_inference_steps - - output = scheduler.step(state, residual, 1, sample, **kwargs).prev_sample - new_output = new_scheduler.step(new_state, residual, 1, sample, **kwargs).prev_sample - - assert jnp.sum(jnp.abs(output - new_output)) < 1e-5, "Scheduler outputs are not identical" - - def check_over_forward(self, time_step=0, **forward_kwargs): - kwargs = dict(self.forward_default_kwargs) - kwargs.update(forward_kwargs) - - num_inference_steps = kwargs.pop("num_inference_steps", None) - - for scheduler_class in self.scheduler_classes: - sample, _ = self.dummy_sample - residual = 0.1 * sample - - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - state = scheduler.create_state() - - with tempfile.TemporaryDirectory() as tmpdirname: - scheduler.save_config(tmpdirname) - new_scheduler, new_state = scheduler_class.from_pretrained(tmpdirname) - - if num_inference_steps is not None and hasattr(scheduler, "set_timesteps"): - state = scheduler.set_timesteps(state, num_inference_steps) - new_state = new_scheduler.set_timesteps(new_state, num_inference_steps) - elif num_inference_steps is not None and not hasattr(scheduler, "set_timesteps"): - kwargs["num_inference_steps"] = num_inference_steps - - output = scheduler.step(state, residual, time_step, sample, **kwargs).prev_sample - new_output = new_scheduler.step(new_state, residual, time_step, sample, **kwargs).prev_sample - - assert jnp.sum(jnp.abs(output - new_output)) < 1e-5, "Scheduler outputs are not identical" - - def test_scheduler_outputs_equivalence(self): - def set_nan_tensor_to_zero(t): - return t.at[t != t].set(0) - - def recursive_check(tuple_object, dict_object): - if isinstance(tuple_object, (List, Tuple)): - for tuple_iterable_value, dict_iterable_value in zip(tuple_object, dict_object.values()): - recursive_check(tuple_iterable_value, dict_iterable_value) - elif isinstance(tuple_object, Dict): - for tuple_iterable_value, dict_iterable_value in zip(tuple_object.values(), dict_object.values()): - recursive_check(tuple_iterable_value, dict_iterable_value) - elif tuple_object is None: - return - else: - self.assertTrue( - jnp.allclose(set_nan_tensor_to_zero(tuple_object), set_nan_tensor_to_zero(dict_object), atol=1e-5), - msg=( - "Tuple and dict output are not equal. Difference:" - f" {jnp.max(jnp.abs(tuple_object - dict_object))}. Tuple has `nan`:" - f" {jnp.isnan(tuple_object).any()} and `inf`: {jnp.isinf(tuple_object)}. Dict has" - f" `nan`: {jnp.isnan(dict_object).any()} and `inf`: {jnp.isinf(dict_object)}." 
- ), - ) - - kwargs = dict(self.forward_default_kwargs) - num_inference_steps = kwargs.pop("num_inference_steps", None) - - for scheduler_class in self.scheduler_classes: - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - state = scheduler.create_state() - - sample, _ = self.dummy_sample - residual = 0.1 * sample - - if num_inference_steps is not None and hasattr(scheduler, "set_timesteps"): - state = scheduler.set_timesteps(state, num_inference_steps) - elif num_inference_steps is not None and not hasattr(scheduler, "set_timesteps"): - kwargs["num_inference_steps"] = num_inference_steps - - outputs_dict = scheduler.step(state, residual, 0, sample, **kwargs) - - if num_inference_steps is not None and hasattr(scheduler, "set_timesteps"): - state = scheduler.set_timesteps(state, num_inference_steps) - elif num_inference_steps is not None and not hasattr(scheduler, "set_timesteps"): - kwargs["num_inference_steps"] = num_inference_steps - - outputs_tuple = scheduler.step(state, residual, 0, sample, return_dict=False, **kwargs) - - recursive_check(outputs_tuple[0], outputs_dict.prev_sample) - - def test_step_shape(self): - kwargs = dict(self.forward_default_kwargs) - - num_inference_steps = kwargs.pop("num_inference_steps", None) - - for scheduler_class in self.scheduler_classes: - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - state = scheduler.create_state() - - sample, _ = self.dummy_sample - residual = 0.1 * sample - - if num_inference_steps is not None and hasattr(scheduler, "set_timesteps"): - state = scheduler.set_timesteps(state, num_inference_steps) - elif num_inference_steps is not None and not hasattr(scheduler, "set_timesteps"): - kwargs["num_inference_steps"] = num_inference_steps - - output_0 = scheduler.step(state, residual, 0, sample, **kwargs).prev_sample - output_1 = scheduler.step(state, residual, 1, sample, **kwargs).prev_sample - - self.assertEqual(output_0.shape, sample.shape) - self.assertEqual(output_0.shape, output_1.shape) - - def test_timesteps(self): - for timesteps in [100, 500, 1000]: - self.check_over_configs(num_train_timesteps=timesteps) - - def test_steps_offset(self): - for steps_offset in [0, 1]: - self.check_over_configs(steps_offset=steps_offset) - - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config(steps_offset=1) - scheduler = scheduler_class(**scheduler_config) - state = scheduler.create_state() - state = scheduler.set_timesteps(state, 5) - assert jnp.equal(state.timesteps, jnp.array([801, 601, 401, 201, 1])).all() - - def test_betas(self): - for beta_start, beta_end in zip([0.0001, 0.001, 0.01, 0.1], [0.002, 0.02, 0.2, 2]): - self.check_over_configs(beta_start=beta_start, beta_end=beta_end) - - def test_schedules(self): - for schedule in ["linear", "squaredcos_cap_v2"]: - self.check_over_configs(beta_schedule=schedule) - - def test_time_indices(self): - for t in [1, 10, 49]: - self.check_over_forward(time_step=t) - - def test_inference_steps(self): - for t, num_inference_steps in zip([1, 10, 50], [10, 50, 500]): - self.check_over_forward(time_step=t, num_inference_steps=num_inference_steps) - - def test_variance(self): - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - state = scheduler.create_state() - - assert jnp.sum(jnp.abs(scheduler._get_variance(state, 0, 0) - 0.0)) < 1e-5 - assert 
jnp.sum(jnp.abs(scheduler._get_variance(state, 420, 400) - 0.14771)) < 1e-5 - assert jnp.sum(jnp.abs(scheduler._get_variance(state, 980, 960) - 0.32460)) < 1e-5 - assert jnp.sum(jnp.abs(scheduler._get_variance(state, 0, 0) - 0.0)) < 1e-5 - assert jnp.sum(jnp.abs(scheduler._get_variance(state, 487, 486) - 0.00979)) < 1e-5 - assert jnp.sum(jnp.abs(scheduler._get_variance(state, 999, 998) - 0.02)) < 1e-5 - - def test_full_loop_no_noise(self): - sample = self.full_loop() - - result_sum = jnp.sum(jnp.abs(sample)) - result_mean = jnp.mean(jnp.abs(sample)) - - assert abs(result_sum - 172.0067) < 1e-2 - assert abs(result_mean - 0.223967) < 1e-3 - - def test_full_loop_with_set_alpha_to_one(self): - # We specify different beta, so that the first alpha is 0.99 - sample = self.full_loop(set_alpha_to_one=True, beta_start=0.01) - result_sum = jnp.sum(jnp.abs(sample)) - result_mean = jnp.mean(jnp.abs(sample)) - - if jax_device == "tpu": - assert abs(result_sum - 149.8409) < 1e-2 - assert abs(result_mean - 0.1951) < 1e-3 - else: - assert abs(result_sum - 149.8295) < 1e-2 - assert abs(result_mean - 0.1951) < 1e-3 - - def test_full_loop_with_no_set_alpha_to_one(self): - # We specify different beta, so that the first alpha is 0.99 - sample = self.full_loop(set_alpha_to_one=False, beta_start=0.01) - result_sum = jnp.sum(jnp.abs(sample)) - result_mean = jnp.mean(jnp.abs(sample)) - - if jax_device == "tpu": - pass - # FIXME: both result_sum and result_mean are nan on TPU - # assert jnp.isnan(result_sum) - # assert jnp.isnan(result_mean) - else: - assert abs(result_sum - 149.0784) < 1e-2 - assert abs(result_mean - 0.1941) < 1e-3 - - def test_prediction_type(self): - for prediction_type in ["epsilon", "sample", "v_prediction"]: - self.check_over_configs(prediction_type=prediction_type) - - -@require_flax -class FlaxPNDMSchedulerTest(FlaxSchedulerCommonTest): - scheduler_classes = (FlaxPNDMScheduler,) - forward_default_kwargs = (("num_inference_steps", 50),) - - def get_scheduler_config(self, **kwargs): - config = { - "num_train_timesteps": 1000, - "beta_start": 0.0001, - "beta_end": 0.02, - "beta_schedule": "linear", - } - - config.update(**kwargs) - return config - - def check_over_configs(self, time_step=0, **config): - kwargs = dict(self.forward_default_kwargs) - num_inference_steps = kwargs.pop("num_inference_steps", None) - sample, _ = self.dummy_sample - residual = 0.1 * sample - dummy_past_residuals = jnp.array([residual + 0.2, residual + 0.15, residual + 0.1, residual + 0.05]) - - for scheduler_class in self.scheduler_classes: - scheduler_config = self.get_scheduler_config(**config) - scheduler = scheduler_class(**scheduler_config) - state = scheduler.create_state() - state = scheduler.set_timesteps(state, num_inference_steps, shape=sample.shape) - # copy over dummy past residuals - state = state.replace(ets=dummy_past_residuals[:]) - - with tempfile.TemporaryDirectory() as tmpdirname: - scheduler.save_config(tmpdirname) - new_scheduler, new_state = scheduler_class.from_pretrained(tmpdirname) - new_state = new_scheduler.set_timesteps(new_state, num_inference_steps, shape=sample.shape) - # copy over dummy past residuals - new_state = new_state.replace(ets=dummy_past_residuals[:]) - - (prev_sample, state) = scheduler.step_prk(state, residual, time_step, sample, **kwargs) - (new_prev_sample, new_state) = new_scheduler.step_prk(new_state, residual, time_step, sample, **kwargs) - - assert jnp.sum(jnp.abs(prev_sample - new_prev_sample)) < 1e-5, "Scheduler outputs are not identical" - - output, _ = 
scheduler.step_plms(state, residual, time_step, sample, **kwargs) - new_output, _ = new_scheduler.step_plms(new_state, residual, time_step, sample, **kwargs) - - assert jnp.sum(jnp.abs(output - new_output)) < 1e-5, "Scheduler outputs are not identical" - - def test_from_save_pretrained(self): - pass - - def test_scheduler_outputs_equivalence(self): - def set_nan_tensor_to_zero(t): - return t.at[t != t].set(0) - - def recursive_check(tuple_object, dict_object): - if isinstance(tuple_object, (List, Tuple)): - for tuple_iterable_value, dict_iterable_value in zip(tuple_object, dict_object.values()): - recursive_check(tuple_iterable_value, dict_iterable_value) - elif isinstance(tuple_object, Dict): - for tuple_iterable_value, dict_iterable_value in zip(tuple_object.values(), dict_object.values()): - recursive_check(tuple_iterable_value, dict_iterable_value) - elif tuple_object is None: - return - else: - self.assertTrue( - jnp.allclose(set_nan_tensor_to_zero(tuple_object), set_nan_tensor_to_zero(dict_object), atol=1e-5), - msg=( - "Tuple and dict output are not equal. Difference:" - f" {jnp.max(jnp.abs(tuple_object - dict_object))}. Tuple has `nan`:" - f" {jnp.isnan(tuple_object).any()} and `inf`: {jnp.isinf(tuple_object)}. Dict has" - f" `nan`: {jnp.isnan(dict_object).any()} and `inf`: {jnp.isinf(dict_object)}." - ), - ) - - kwargs = dict(self.forward_default_kwargs) - num_inference_steps = kwargs.pop("num_inference_steps", None) - - for scheduler_class in self.scheduler_classes: - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - state = scheduler.create_state() - - sample, _ = self.dummy_sample - residual = 0.1 * sample - - if num_inference_steps is not None and hasattr(scheduler, "set_timesteps"): - state = scheduler.set_timesteps(state, num_inference_steps, shape=sample.shape) - elif num_inference_steps is not None and not hasattr(scheduler, "set_timesteps"): - kwargs["num_inference_steps"] = num_inference_steps - - outputs_dict = scheduler.step(state, residual, 0, sample, **kwargs) - - if num_inference_steps is not None and hasattr(scheduler, "set_timesteps"): - state = scheduler.set_timesteps(state, num_inference_steps, shape=sample.shape) - elif num_inference_steps is not None and not hasattr(scheduler, "set_timesteps"): - kwargs["num_inference_steps"] = num_inference_steps - - outputs_tuple = scheduler.step(state, residual, 0, sample, return_dict=False, **kwargs) - - recursive_check(outputs_tuple[0], outputs_dict.prev_sample) - - def check_over_forward(self, time_step=0, **forward_kwargs): - kwargs = dict(self.forward_default_kwargs) - num_inference_steps = kwargs.pop("num_inference_steps", None) - sample, _ = self.dummy_sample - residual = 0.1 * sample - dummy_past_residuals = jnp.array([residual + 0.2, residual + 0.15, residual + 0.1, residual + 0.05]) - - for scheduler_class in self.scheduler_classes: - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - state = scheduler.create_state() - state = scheduler.set_timesteps(state, num_inference_steps, shape=sample.shape) - - # copy over dummy past residuals (must be after setting timesteps) - scheduler.ets = dummy_past_residuals[:] - - with tempfile.TemporaryDirectory() as tmpdirname: - scheduler.save_config(tmpdirname) - new_scheduler, new_state = scheduler_class.from_pretrained(tmpdirname) - # copy over dummy past residuals - new_state = new_scheduler.set_timesteps(new_state, num_inference_steps, shape=sample.shape) - - # copy over dummy 
past residual (must be after setting timesteps) - new_state.replace(ets=dummy_past_residuals[:]) - - output, state = scheduler.step_prk(state, residual, time_step, sample, **kwargs) - new_output, new_state = new_scheduler.step_prk(new_state, residual, time_step, sample, **kwargs) - - assert jnp.sum(jnp.abs(output - new_output)) < 1e-5, "Scheduler outputs are not identical" - - output, _ = scheduler.step_plms(state, residual, time_step, sample, **kwargs) - new_output, _ = new_scheduler.step_plms(new_state, residual, time_step, sample, **kwargs) - - assert jnp.sum(jnp.abs(output - new_output)) < 1e-5, "Scheduler outputs are not identical" - - def full_loop(self, **config): - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config(**config) - scheduler = scheduler_class(**scheduler_config) - state = scheduler.create_state() - - num_inference_steps = 10 - model = self.dummy_model() - sample = self.dummy_sample_deter - state = scheduler.set_timesteps(state, num_inference_steps, shape=sample.shape) - - for i, t in enumerate(state.prk_timesteps): - residual = model(sample, t) - sample, state = scheduler.step_prk(state, residual, t, sample) - - for i, t in enumerate(state.plms_timesteps): - residual = model(sample, t) - sample, state = scheduler.step_plms(state, residual, t, sample) - - return sample - - def test_step_shape(self): - kwargs = dict(self.forward_default_kwargs) - - num_inference_steps = kwargs.pop("num_inference_steps", None) - - for scheduler_class in self.scheduler_classes: - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - state = scheduler.create_state() - - sample, _ = self.dummy_sample - residual = 0.1 * sample - - if num_inference_steps is not None and hasattr(scheduler, "set_timesteps"): - state = scheduler.set_timesteps(state, num_inference_steps, shape=sample.shape) - elif num_inference_steps is not None and not hasattr(scheduler, "set_timesteps"): - kwargs["num_inference_steps"] = num_inference_steps - - # copy over dummy past residuals (must be done after set_timesteps) - dummy_past_residuals = jnp.array([residual + 0.2, residual + 0.15, residual + 0.1, residual + 0.05]) - state = state.replace(ets=dummy_past_residuals[:]) - - output_0, state = scheduler.step_prk(state, residual, 0, sample, **kwargs) - output_1, state = scheduler.step_prk(state, residual, 1, sample, **kwargs) - - self.assertEqual(output_0.shape, sample.shape) - self.assertEqual(output_0.shape, output_1.shape) - - output_0, state = scheduler.step_plms(state, residual, 0, sample, **kwargs) - output_1, state = scheduler.step_plms(state, residual, 1, sample, **kwargs) - - self.assertEqual(output_0.shape, sample.shape) - self.assertEqual(output_0.shape, output_1.shape) - - def test_timesteps(self): - for timesteps in [100, 1000]: - self.check_over_configs(num_train_timesteps=timesteps) - - def test_steps_offset(self): - for steps_offset in [0, 1]: - self.check_over_configs(steps_offset=steps_offset) - - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config(steps_offset=1) - scheduler = scheduler_class(**scheduler_config) - state = scheduler.create_state() - state = scheduler.set_timesteps(state, 10, shape=()) - assert jnp.equal( - state.timesteps, - jnp.array([901, 851, 851, 801, 801, 751, 751, 701, 701, 651, 651, 601, 601, 501, 401, 301, 201, 101, 1]), - ).all() - - def test_betas(self): - for beta_start, beta_end in zip([0.0001, 0.001], [0.002, 0.02]): - 
self.check_over_configs(beta_start=beta_start, beta_end=beta_end) - - def test_schedules(self): - for schedule in ["linear", "squaredcos_cap_v2"]: - self.check_over_configs(beta_schedule=schedule) - - def test_time_indices(self): - for t in [1, 5, 10]: - self.check_over_forward(time_step=t) - - def test_inference_steps(self): - for t, num_inference_steps in zip([1, 5, 10], [10, 50, 100]): - self.check_over_forward(num_inference_steps=num_inference_steps) - - def test_pow_of_3_inference_steps(self): - # earlier version of set_timesteps() caused an error indexing alpha's with inference steps as power of 3 - num_inference_steps = 27 - - for scheduler_class in self.scheduler_classes: - sample, _ = self.dummy_sample - residual = 0.1 * sample - - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - state = scheduler.create_state() - - state = scheduler.set_timesteps(state, num_inference_steps, shape=sample.shape) - - # before power of 3 fix, would error on first step, so we only need to do two - for i, t in enumerate(state.prk_timesteps[:2]): - sample, state = scheduler.step_prk(state, residual, t, sample) - - def test_inference_plms_no_past_residuals(self): - with self.assertRaises(ValueError): - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - state = scheduler.create_state() - - scheduler.step_plms(state, self.dummy_sample, 1, self.dummy_sample).prev_sample - - def test_full_loop_no_noise(self): - sample = self.full_loop() - result_sum = jnp.sum(jnp.abs(sample)) - result_mean = jnp.mean(jnp.abs(sample)) - - if jax_device == "tpu": - assert abs(result_sum - 198.1275) < 1e-2 - assert abs(result_mean - 0.2580) < 1e-3 - else: - assert abs(result_sum - 198.1318) < 1e-2 - assert abs(result_mean - 0.2580) < 1e-3 - - def test_full_loop_with_set_alpha_to_one(self): - # We specify different beta, so that the first alpha is 0.99 - sample = self.full_loop(set_alpha_to_one=True, beta_start=0.01) - result_sum = jnp.sum(jnp.abs(sample)) - result_mean = jnp.mean(jnp.abs(sample)) - - if jax_device == "tpu": - assert abs(result_sum - 186.83226) < 1e-2 - assert abs(result_mean - 0.24327) < 1e-3 - else: - assert abs(result_sum - 186.9466) < 1e-2 - assert abs(result_mean - 0.24342) < 1e-3 - - def test_full_loop_with_no_set_alpha_to_one(self): - # We specify different beta, so that the first alpha is 0.99 - sample = self.full_loop(set_alpha_to_one=False, beta_start=0.01) - result_sum = jnp.sum(jnp.abs(sample)) - result_mean = jnp.mean(jnp.abs(sample)) - - if jax_device == "tpu": - assert abs(result_sum - 186.83226) < 1e-2 - assert abs(result_mean - 0.24327) < 1e-3 - else: - assert abs(result_sum - 186.9482) < 1e-2 - assert abs(result_mean - 0.2434) < 1e-3 diff --git a/spaces/deelerb/3dselfie/PIFu/apps/prt_util.py b/spaces/deelerb/3dselfie/PIFu/apps/prt_util.py deleted file mode 100644 index 7eba32fa0b396f420b2e332abbb67135dbc14d6b..0000000000000000000000000000000000000000 --- a/spaces/deelerb/3dselfie/PIFu/apps/prt_util.py +++ /dev/null @@ -1,142 +0,0 @@ -import os -import trimesh -import numpy as np -import math -from scipy.special import sph_harm -import argparse -from tqdm import tqdm - -def factratio(N, D): - if N >= D: - prod = 1.0 - for i in range(D+1, N+1): - prod *= i - return prod - else: - prod = 1.0 - for i in range(N+1, D+1): - prod *= i - return 1.0 / prod - -def KVal(M, L): - return math.sqrt(((2 * L + 1) / (4 * math.pi)) * (factratio(L - M, L + M))) - -def 
AssociatedLegendre(M, L, x): - if M < 0 or M > L or np.max(np.abs(x)) > 1.0: - return np.zeros_like(x) - - pmm = np.ones_like(x) - if M > 0: - somx2 = np.sqrt((1.0 + x) * (1.0 - x)) - fact = 1.0 - for i in range(1, M+1): - pmm = -pmm * fact * somx2 - fact = fact + 2 - - if L == M: - return pmm - else: - pmmp1 = x * (2 * M + 1) * pmm - if L == M+1: - return pmmp1 - else: - pll = np.zeros_like(x) - for i in range(M+2, L+1): - pll = (x * (2 * i - 1) * pmmp1 - (i + M - 1) * pmm) / (i - M) - pmm = pmmp1 - pmmp1 = pll - return pll - -def SphericalHarmonic(M, L, theta, phi): - if M > 0: - return math.sqrt(2.0) * KVal(M, L) * np.cos(M * phi) * AssociatedLegendre(M, L, np.cos(theta)) - elif M < 0: - return math.sqrt(2.0) * KVal(-M, L) * np.sin(-M * phi) * AssociatedLegendre(-M, L, np.cos(theta)) - else: - return KVal(0, L) * AssociatedLegendre(0, L, np.cos(theta)) - -def save_obj(mesh_path, verts): - file = open(mesh_path, 'w') - for v in verts: - file.write('v %.4f %.4f %.4f\n' % (v[0], v[1], v[2])) - file.close() - -def sampleSphericalDirections(n): - xv = np.random.rand(n,n) - yv = np.random.rand(n,n) - theta = np.arccos(1-2 * xv) - phi = 2.0 * math.pi * yv - - phi = phi.reshape(-1) - theta = theta.reshape(-1) - - vx = -np.sin(theta) * np.cos(phi) - vy = -np.sin(theta) * np.sin(phi) - vz = np.cos(theta) - return np.stack([vx, vy, vz], 1), phi, theta - -def getSHCoeffs(order, phi, theta): - shs = [] - for n in range(0, order+1): - for m in range(-n,n+1): - s = SphericalHarmonic(m, n, theta, phi) - shs.append(s) - - return np.stack(shs, 1) - -def computePRT(mesh_path, n, order): - mesh = trimesh.load(mesh_path, process=False) - vectors_orig, phi, theta = sampleSphericalDirections(n) - SH_orig = getSHCoeffs(order, phi, theta) - - w = 4.0 * math.pi / (n*n) - - origins = mesh.vertices - normals = mesh.vertex_normals - n_v = origins.shape[0] - - origins = np.repeat(origins[:,None], n, axis=1).reshape(-1,3) - normals = np.repeat(normals[:,None], n, axis=1).reshape(-1,3) - PRT_all = None - for i in tqdm(range(n)): - SH = np.repeat(SH_orig[None,(i*n):((i+1)*n)], n_v, axis=0).reshape(-1,SH_orig.shape[1]) - vectors = np.repeat(vectors_orig[None,(i*n):((i+1)*n)], n_v, axis=0).reshape(-1,3) - - dots = (vectors * normals).sum(1) - front = (dots > 0.0) - - delta = 1e-3*min(mesh.bounding_box.extents) - hits = mesh.ray.intersects_any(origins + delta * normals, vectors) - nohits = np.logical_and(front, np.logical_not(hits)) - - PRT = (nohits.astype(np.float) * dots)[:,None] * SH - - if PRT_all is not None: - PRT_all += (PRT.reshape(-1, n, SH.shape[1]).sum(1)) - else: - PRT_all = (PRT.reshape(-1, n, SH.shape[1]).sum(1)) - - PRT = w * PRT_all - - # NOTE: trimesh sometimes break the original vertex order, but topology will not change. - # when loading PRT in other program, use the triangle list from trimesh. 
- return PRT, mesh.faces - -def testPRT(dir_path, n=40): - if dir_path[-1] == '/': - dir_path = dir_path[:-1] - sub_name = dir_path.split('/')[-1][:-4] - obj_path = os.path.join(dir_path, sub_name + '_100k.obj') - os.makedirs(os.path.join(dir_path, 'bounce'), exist_ok=True) - - PRT, F = computePRT(obj_path, n, 2) - np.savetxt(os.path.join(dir_path, 'bounce', 'bounce0.txt'), PRT, fmt='%.8f') - np.save(os.path.join(dir_path, 'bounce', 'face.npy'), F) - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('-i', '--input', type=str, default='/home/shunsuke/Downloads/rp_dennis_posed_004_OBJ') - parser.add_argument('-n', '--n_sample', type=int, default=40, help='squared root of number of sampling. the higher, the more accurate, but slower') - args = parser.parse_args() - - testPRT(args.input) diff --git a/spaces/dhansmair/flamingo-mini-cap/README.md b/spaces/dhansmair/flamingo-mini-cap/README.md deleted file mode 100644 index d8073e80d18fc38910548b3d2bf5913b613b557c..0000000000000000000000000000000000000000 --- a/spaces/dhansmair/flamingo-mini-cap/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Flamingo Mini Image Captioning -emoji: 🦩🖼️💬 -colorFrom: indigo -colorTo: blue -sdk: gradio -sdk_version: 3.1.7 -app_file: app.py -pinned: true -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/diacanFperku/AutoGPT/MikologidasardanterapanpdfVERIFIED Download.md b/spaces/diacanFperku/AutoGPT/MikologidasardanterapanpdfVERIFIED Download.md deleted file mode 100644 index 8a0bc3bc89a7adf2f40f1fcd38ae0bc9c72a3775..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/MikologidasardanterapanpdfVERIFIED Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

-mikologidasardanterapanpdfdownload
-Download File ⚙⚙⚙ https://gohhs.com/2uFVrN
- d5da3c52bf
-
-
    diff --git a/spaces/digitalxingtong/Xingtong-Longread-Bert-VITS2/start.bat b/spaces/digitalxingtong/Xingtong-Longread-Bert-VITS2/start.bat deleted file mode 100644 index 418d21233dbf720b0dd09821904d9d6a31b123a2..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Xingtong-Longread-Bert-VITS2/start.bat +++ /dev/null @@ -1,2 +0,0 @@ -set PYTHON=venv\python.exe -start cmd /k "set PYTHON=%PYTHON%" \ No newline at end of file diff --git a/spaces/dineshreddy/WALT/mmdet/core/bbox/iou_calculators/builder.py b/spaces/dineshreddy/WALT/mmdet/core/bbox/iou_calculators/builder.py deleted file mode 100644 index 09094d7ece46a9f18a28ed0960feac2afa9331bb..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/core/bbox/iou_calculators/builder.py +++ /dev/null @@ -1,8 +0,0 @@ -from mmcv.utils import Registry, build_from_cfg - -IOU_CALCULATORS = Registry('IoU calculator') - - -def build_iou_calculator(cfg, default_args=None): - """Builder of IoU calculator.""" - return build_from_cfg(cfg, IOU_CALCULATORS, default_args) diff --git a/spaces/erbanku/gpt-academic/crazy_functions/test_project/cpp/cppipc/ipc.cpp b/spaces/erbanku/gpt-academic/crazy_functions/test_project/cpp/cppipc/ipc.cpp deleted file mode 100644 index c713b852ea5a51fbeb4729b64561da482caaf351..0000000000000000000000000000000000000000 --- a/spaces/erbanku/gpt-academic/crazy_functions/test_project/cpp/cppipc/ipc.cpp +++ /dev/null @@ -1,701 +0,0 @@ - -#include -#include -#include -#include // std::pair, std::move, std::forward -#include -#include // aligned_storage_t -#include -#include -#include -#include - -#include "libipc/ipc.h" -#include "libipc/def.h" -#include "libipc/shm.h" -#include "libipc/pool_alloc.h" -#include "libipc/queue.h" -#include "libipc/policy.h" -#include "libipc/rw_lock.h" -#include "libipc/waiter.h" - -#include "libipc/utility/log.h" -#include "libipc/utility/id_pool.h" -#include "libipc/utility/scope_guard.h" -#include "libipc/utility/utility.h" - -#include "libipc/memory/resource.h" -#include "libipc/platform/detail.h" -#include "libipc/circ/elem_array.h" - -namespace { - -using msg_id_t = std::uint32_t; -using acc_t = std::atomic; - -template -struct msg_t; - -template -struct msg_t<0, AlignSize> { - msg_id_t cc_id_; - msg_id_t id_; - std::int32_t remain_; - bool storage_; -}; - -template -struct msg_t : msg_t<0, AlignSize> { - std::aligned_storage_t data_ {}; - - msg_t() = default; - msg_t(msg_id_t cc_id, msg_id_t id, std::int32_t remain, void const * data, std::size_t size) - : msg_t<0, AlignSize> {cc_id, id, remain, (data == nullptr) || (size == 0)} { - if (this->storage_) { - if (data != nullptr) { - // copy storage-id - *reinterpret_cast(&data_) = - *static_cast(data); - } - } - else std::memcpy(&data_, data, size); - } -}; - -template -ipc::buff_t make_cache(T& data, std::size_t size) { - auto ptr = ipc::mem::alloc(size); - std::memcpy(ptr, &data, (ipc::detail::min)(sizeof(data), size)); - return { ptr, size, ipc::mem::free }; -} - -struct cache_t { - std::size_t fill_; - ipc::buff_t buff_; - - cache_t(std::size_t f, ipc::buff_t && b) - : fill_(f), buff_(std::move(b)) - {} - - void append(void const * data, std::size_t size) { - if (fill_ >= buff_.size() || data == nullptr || size == 0) return; - auto new_fill = (ipc::detail::min)(fill_ + size, buff_.size()); - std::memcpy(static_cast(buff_.data()) + fill_, data, new_fill - fill_); - fill_ = new_fill; - } -}; - -auto cc_acc() { - static ipc::shm::handle acc_h("__CA_CONN__", sizeof(acc_t)); - return static_cast(acc_h.get()); 
-} - -IPC_CONSTEXPR_ std::size_t align_chunk_size(std::size_t size) noexcept { - return (((size - 1) / ipc::large_msg_align) + 1) * ipc::large_msg_align; -} - -IPC_CONSTEXPR_ std::size_t calc_chunk_size(std::size_t size) noexcept { - return ipc::make_align(alignof(std::max_align_t), align_chunk_size( - ipc::make_align(alignof(std::max_align_t), sizeof(std::atomic)) + size)); -} - -struct chunk_t { - std::atomic &conns() noexcept { - return *reinterpret_cast *>(this); - } - - void *data() noexcept { - return reinterpret_cast(this) - + ipc::make_align(alignof(std::max_align_t), sizeof(std::atomic)); - } -}; - -struct chunk_info_t { - ipc::id_pool<> pool_; - ipc::spin_lock lock_; - - IPC_CONSTEXPR_ static std::size_t chunks_mem_size(std::size_t chunk_size) noexcept { - return ipc::id_pool<>::max_count * chunk_size; - } - - ipc::byte_t *chunks_mem() noexcept { - return reinterpret_cast(this + 1); - } - - chunk_t *at(std::size_t chunk_size, ipc::storage_id_t id) noexcept { - if (id < 0) return nullptr; - return reinterpret_cast(chunks_mem() + (chunk_size * id)); - } -}; - -auto& chunk_storages() { - class chunk_handle_t { - ipc::shm::handle handle_; - - public: - chunk_info_t *get_info(std::size_t chunk_size) { - if (!handle_.valid() && - !handle_.acquire( ("__CHUNK_INFO__" + ipc::to_string(chunk_size)).c_str(), - sizeof(chunk_info_t) + chunk_info_t::chunks_mem_size(chunk_size) )) { - ipc::error("[chunk_storages] chunk_shm.id_info_.acquire failed: chunk_size = %zd\n", chunk_size); - return nullptr; - } - auto info = static_cast(handle_.get()); - if (info == nullptr) { - ipc::error("[chunk_storages] chunk_shm.id_info_.get failed: chunk_size = %zd\n", chunk_size); - return nullptr; - } - return info; - } - }; - static ipc::map chunk_hs; - return chunk_hs; -} - -chunk_info_t *chunk_storage_info(std::size_t chunk_size) { - auto &storages = chunk_storages(); - std::decay_t::iterator it; - { - static ipc::rw_lock lock; - IPC_UNUSED_ std::shared_lock guard {lock}; - if ((it = storages.find(chunk_size)) == storages.end()) { - using chunk_handle_t = std::decay_t::value_type::second_type; - guard.unlock(); - IPC_UNUSED_ std::lock_guard guard {lock}; - it = storages.emplace(chunk_size, chunk_handle_t{}).first; - } - } - return it->second.get_info(chunk_size); -} - -std::pair acquire_storage(std::size_t size, ipc::circ::cc_t conns) { - std::size_t chunk_size = calc_chunk_size(size); - auto info = chunk_storage_info(chunk_size); - if (info == nullptr) return {}; - - info->lock_.lock(); - info->pool_.prepare(); - // got an unique id - auto id = info->pool_.acquire(); - info->lock_.unlock(); - - auto chunk = info->at(chunk_size, id); - if (chunk == nullptr) return {}; - chunk->conns().store(conns, std::memory_order_relaxed); - return { id, chunk->data() }; -} - -void *find_storage(ipc::storage_id_t id, std::size_t size) { - if (id < 0) { - ipc::error("[find_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size); - return nullptr; - } - std::size_t chunk_size = calc_chunk_size(size); - auto info = chunk_storage_info(chunk_size); - if (info == nullptr) return nullptr; - return info->at(chunk_size, id)->data(); -} - -void release_storage(ipc::storage_id_t id, std::size_t size) { - if (id < 0) { - ipc::error("[release_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size); - return; - } - std::size_t chunk_size = calc_chunk_size(size); - auto info = chunk_storage_info(chunk_size); - if (info == nullptr) return; - info->lock_.lock(); - info->pool_.release(id); - info->lock_.unlock(); -} - 
-template -bool sub_rc(ipc::wr, - std::atomic &/*conns*/, ipc::circ::cc_t /*curr_conns*/, ipc::circ::cc_t /*conn_id*/) noexcept { - return true; -} - -template -bool sub_rc(ipc::wr, - std::atomic &conns, ipc::circ::cc_t curr_conns, ipc::circ::cc_t conn_id) noexcept { - auto last_conns = curr_conns & ~conn_id; - for (unsigned k = 0;;) { - auto chunk_conns = conns.load(std::memory_order_acquire); - if (conns.compare_exchange_weak(chunk_conns, chunk_conns & last_conns, std::memory_order_release)) { - return (chunk_conns & last_conns) == 0; - } - ipc::yield(k); - } -} - -template -void recycle_storage(ipc::storage_id_t id, std::size_t size, ipc::circ::cc_t curr_conns, ipc::circ::cc_t conn_id) { - if (id < 0) { - ipc::error("[recycle_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size); - return; - } - std::size_t chunk_size = calc_chunk_size(size); - auto info = chunk_storage_info(chunk_size); - if (info == nullptr) return; - - auto chunk = info->at(chunk_size, id); - if (chunk == nullptr) return; - - if (!sub_rc(Flag{}, chunk->conns(), curr_conns, conn_id)) { - return; - } - info->lock_.lock(); - info->pool_.release(id); - info->lock_.unlock(); -} - -template -bool clear_message(void* p) { - auto msg = static_cast(p); - if (msg->storage_) { - std::int32_t r_size = static_cast(ipc::data_length) + msg->remain_; - if (r_size <= 0) { - ipc::error("[clear_message] invalid msg size: %d\n", (int)r_size); - return true; - } - release_storage( - *reinterpret_cast(&msg->data_), - static_cast(r_size)); - } - return true; -} - -struct conn_info_head { - - ipc::string name_; - msg_id_t cc_id_; // connection-info id - ipc::detail::waiter cc_waiter_, wt_waiter_, rd_waiter_; - ipc::shm::handle acc_h_; - - conn_info_head(char const * name) - : name_ {name} - , cc_id_ {(cc_acc() == nullptr) ? 
0 : cc_acc()->fetch_add(1, std::memory_order_relaxed)} - , cc_waiter_{("__CC_CONN__" + name_).c_str()} - , wt_waiter_{("__WT_CONN__" + name_).c_str()} - , rd_waiter_{("__RD_CONN__" + name_).c_str()} - , acc_h_ {("__AC_CONN__" + name_).c_str(), sizeof(acc_t)} { - } - - void quit_waiting() { - cc_waiter_.quit_waiting(); - wt_waiter_.quit_waiting(); - rd_waiter_.quit_waiting(); - } - - auto acc() { - return static_cast(acc_h_.get()); - } - - auto& recv_cache() { - thread_local ipc::unordered_map tls; - return tls; - } -}; - -template -bool wait_for(W& waiter, F&& pred, std::uint64_t tm) { - if (tm == 0) return !pred(); - for (unsigned k = 0; pred();) { - bool ret = true; - ipc::sleep(k, [&k, &ret, &waiter, &pred, tm] { - ret = waiter.wait_if(std::forward(pred), tm); - k = 0; - }); - if (!ret) return false; // timeout or fail - if (k == 0) break; // k has been reset - } - return true; -} - -template -struct queue_generator { - - using queue_t = ipc::queue, Policy>; - - struct conn_info_t : conn_info_head { - queue_t que_; - - conn_info_t(char const * name) - : conn_info_head{name} - , que_{("__QU_CONN__" + - ipc::to_string(DataSize) + "__" + - ipc::to_string(AlignSize) + "__" + name).c_str()} { - } - - void disconnect_receiver() { - bool dis = que_.disconnect(); - this->quit_waiting(); - if (dis) { - this->recv_cache().clear(); - } - } - }; -}; - -template -struct detail_impl { - -using policy_t = Policy; -using flag_t = typename policy_t::flag_t; -using queue_t = typename queue_generator::queue_t; -using conn_info_t = typename queue_generator::conn_info_t; - -constexpr static conn_info_t* info_of(ipc::handle_t h) noexcept { - return static_cast(h); -} - -constexpr static queue_t* queue_of(ipc::handle_t h) noexcept { - return (info_of(h) == nullptr) ? nullptr : &(info_of(h)->que_); -} - -/* API implementations */ - -static void disconnect(ipc::handle_t h) { - auto que = queue_of(h); - if (que == nullptr) { - return; - } - que->shut_sending(); - assert(info_of(h) != nullptr); - info_of(h)->disconnect_receiver(); -} - -static bool reconnect(ipc::handle_t * ph, bool start_to_recv) { - assert(ph != nullptr); - assert(*ph != nullptr); - auto que = queue_of(*ph); - if (que == nullptr) { - return false; - } - if (start_to_recv) { - que->shut_sending(); - if (que->connect()) { // wouldn't connect twice - info_of(*ph)->cc_waiter_.broadcast(); - return true; - } - return false; - } - // start_to_recv == false - if (que->connected()) { - info_of(*ph)->disconnect_receiver(); - } - return que->ready_sending(); -} - -static bool connect(ipc::handle_t * ph, char const * name, bool start_to_recv) { - assert(ph != nullptr); - if (*ph == nullptr) { - *ph = ipc::mem::alloc(name); - } - return reconnect(ph, start_to_recv); -} - -static void destroy(ipc::handle_t h) { - disconnect(h); - ipc::mem::free(info_of(h)); -} - -static std::size_t recv_count(ipc::handle_t h) noexcept { - auto que = queue_of(h); - if (que == nullptr) { - return ipc::invalid_value; - } - return que->conn_count(); -} - -static bool wait_for_recv(ipc::handle_t h, std::size_t r_count, std::uint64_t tm) { - auto que = queue_of(h); - if (que == nullptr) { - return false; - } - return wait_for(info_of(h)->cc_waiter_, [que, r_count] { - return que->conn_count() < r_count; - }, tm); -} - -template -static bool send(F&& gen_push, ipc::handle_t h, void const * data, std::size_t size) { - if (data == nullptr || size == 0) { - ipc::error("fail: send(%p, %zd)\n", data, size); - return false; - } - auto que = queue_of(h); - if (que == nullptr) { - 
ipc::error("fail: send, queue_of(h) == nullptr\n"); - return false; - } - if (que->elems() == nullptr) { - ipc::error("fail: send, queue_of(h)->elems() == nullptr\n"); - return false; - } - if (!que->ready_sending()) { - ipc::error("fail: send, que->ready_sending() == false\n"); - return false; - } - ipc::circ::cc_t conns = que->elems()->connections(std::memory_order_relaxed); - if (conns == 0) { - ipc::error("fail: send, there is no receiver on this connection.\n"); - return false; - } - // calc a new message id - auto acc = info_of(h)->acc(); - if (acc == nullptr) { - ipc::error("fail: send, info_of(h)->acc() == nullptr\n"); - return false; - } - auto msg_id = acc->fetch_add(1, std::memory_order_relaxed); - auto try_push = std::forward(gen_push)(info_of(h), que, msg_id); - if (size > ipc::large_msg_limit) { - auto dat = acquire_storage(size, conns); - void * buf = dat.second; - if (buf != nullptr) { - std::memcpy(buf, data, size); - return try_push(static_cast(size) - - static_cast(ipc::data_length), &(dat.first), 0); - } - // try using message fragment - //ipc::log("fail: shm::handle for big message. msg_id: %zd, size: %zd\n", msg_id, size); - } - // push message fragment - std::int32_t offset = 0; - for (std::int32_t i = 0; i < static_cast(size / ipc::data_length); ++i, offset += ipc::data_length) { - if (!try_push(static_cast(size) - offset - static_cast(ipc::data_length), - static_cast(data) + offset, ipc::data_length)) { - return false; - } - } - // if remain > 0, this is the last message fragment - std::int32_t remain = static_cast(size) - offset; - if (remain > 0) { - if (!try_push(remain - static_cast(ipc::data_length), - static_cast(data) + offset, - static_cast(remain))) { - return false; - } - } - return true; -} - -static bool send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) { - return send([tm](auto info, auto que, auto msg_id) { - return [tm, info, que, msg_id](std::int32_t remain, void const * data, std::size_t size) { - if (!wait_for(info->wt_waiter_, [&] { - return !que->push( - [](void*) { return true; }, - info->cc_id_, msg_id, remain, data, size); - }, tm)) { - ipc::log("force_push: msg_id = %zd, remain = %d, size = %zd\n", msg_id, remain, size); - if (!que->force_push( - clear_message, - info->cc_id_, msg_id, remain, data, size)) { - return false; - } - } - info->rd_waiter_.broadcast(); - return true; - }; - }, h, data, size); -} - -static bool try_send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) { - return send([tm](auto info, auto que, auto msg_id) { - return [tm, info, que, msg_id](std::int32_t remain, void const * data, std::size_t size) { - if (!wait_for(info->wt_waiter_, [&] { - return !que->push( - [](void*) { return true; }, - info->cc_id_, msg_id, remain, data, size); - }, tm)) { - return false; - } - info->rd_waiter_.broadcast(); - return true; - }; - }, h, data, size); -} - -static ipc::buff_t recv(ipc::handle_t h, std::uint64_t tm) { - auto que = queue_of(h); - if (que == nullptr) { - ipc::error("fail: recv, queue_of(h) == nullptr\n"); - return {}; - } - if (!que->connected()) { - // hasn't connected yet, just return. - return {}; - } - auto& rc = info_of(h)->recv_cache(); - for (;;) { - // pop a new message - typename queue_t::value_t msg; - if (!wait_for(info_of(h)->rd_waiter_, [que, &msg] { - return !que->pop(msg); - }, tm)) { - // pop failed, just return. 
- return {}; - } - info_of(h)->wt_waiter_.broadcast(); - if ((info_of(h)->acc() != nullptr) && (msg.cc_id_ == info_of(h)->cc_id_)) { - continue; // ignore message to self - } - // msg.remain_ may minus & abs(msg.remain_) < data_length - std::int32_t r_size = static_cast(ipc::data_length) + msg.remain_; - if (r_size <= 0) { - ipc::error("fail: recv, r_size = %d\n", (int)r_size); - return {}; - } - std::size_t msg_size = static_cast(r_size); - // large message - if (msg.storage_) { - ipc::storage_id_t buf_id = *reinterpret_cast(&msg.data_); - void* buf = find_storage(buf_id, msg_size); - if (buf != nullptr) { - struct recycle_t { - ipc::storage_id_t storage_id; - ipc::circ::cc_t curr_conns; - ipc::circ::cc_t conn_id; - } *r_info = ipc::mem::alloc(recycle_t{ - buf_id, que->elems()->connections(std::memory_order_relaxed), que->connected_id() - }); - if (r_info == nullptr) { - ipc::log("fail: ipc::mem::alloc.\n"); - return ipc::buff_t{buf, msg_size}; // no recycle - } else { - return ipc::buff_t{buf, msg_size, [](void* p_info, std::size_t size) { - auto r_info = static_cast(p_info); - IPC_UNUSED_ auto finally = ipc::guard([r_info] { - ipc::mem::free(r_info); - }); - recycle_storage(r_info->storage_id, size, r_info->curr_conns, r_info->conn_id); - }, r_info}; - } - } else { - ipc::log("fail: shm::handle for large message. msg_id: %zd, buf_id: %zd, size: %zd\n", msg.id_, buf_id, msg_size); - continue; - } - } - // find cache with msg.id_ - auto cac_it = rc.find(msg.id_); - if (cac_it == rc.end()) { - if (msg_size <= ipc::data_length) { - return make_cache(msg.data_, msg_size); - } - // gc - if (rc.size() > 1024) { - std::vector need_del; - for (auto const & pair : rc) { - auto cmp = std::minmax(msg.id_, pair.first); - if (cmp.second - cmp.first > 8192) { - need_del.push_back(pair.first); - } - } - for (auto id : need_del) rc.erase(id); - } - // cache the first message fragment - rc.emplace(msg.id_, cache_t { ipc::data_length, make_cache(msg.data_, msg_size) }); - } - // has cached before this message - else { - auto& cac = cac_it->second; - // this is the last message fragment - if (msg.remain_ <= 0) { - cac.append(&(msg.data_), msg_size); - // finish this message, erase it from cache - auto buff = std::move(cac.buff_); - rc.erase(cac_it); - return buff; - } - // there are remain datas after this message - cac.append(&(msg.data_), ipc::data_length); - } - } -} - -static ipc::buff_t try_recv(ipc::handle_t h) { - return recv(h, 0); -} - -}; // detail_impl - -template -using policy_t = ipc::policy::choose; - -} // internal-linkage - -namespace ipc { - -template -ipc::handle_t chan_impl::inited() { - ipc::detail::waiter::init(); - return nullptr; -} - -template -bool chan_impl::connect(ipc::handle_t * ph, char const * name, unsigned mode) { - return detail_impl>::connect(ph, name, mode & receiver); -} - -template -bool chan_impl::reconnect(ipc::handle_t * ph, unsigned mode) { - return detail_impl>::reconnect(ph, mode & receiver); -} - -template -void chan_impl::disconnect(ipc::handle_t h) { - detail_impl>::disconnect(h); -} - -template -void chan_impl::destroy(ipc::handle_t h) { - detail_impl>::destroy(h); -} - -template -char const * chan_impl::name(ipc::handle_t h) { - auto info = detail_impl>::info_of(h); - return (info == nullptr) ? 
nullptr : info->name_.c_str(); -} - -template -std::size_t chan_impl::recv_count(ipc::handle_t h) { - return detail_impl>::recv_count(h); -} - -template -bool chan_impl::wait_for_recv(ipc::handle_t h, std::size_t r_count, std::uint64_t tm) { - return detail_impl>::wait_for_recv(h, r_count, tm); -} - -template -bool chan_impl::send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) { - return detail_impl>::send(h, data, size, tm); -} - -template -buff_t chan_impl::recv(ipc::handle_t h, std::uint64_t tm) { - return detail_impl>::recv(h, tm); -} - -template -bool chan_impl::try_send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) { - return detail_impl>::try_send(h, data, size, tm); -} - -template -buff_t chan_impl::try_recv(ipc::handle_t h) { - return detail_impl>::try_recv(h); -} - -template struct chan_impl>; -// template struct chan_impl>; // TBD -// template struct chan_impl>; // TBD -template struct chan_impl>; -template struct chan_impl>; - -} // namespace ipc diff --git a/spaces/erbanku/gpt-academic/theme.py b/spaces/erbanku/gpt-academic/theme.py deleted file mode 100644 index 5433124b308a6679a0d73376adf7e4cf7c016453..0000000000000000000000000000000000000000 --- a/spaces/erbanku/gpt-academic/theme.py +++ /dev/null @@ -1,342 +0,0 @@ -import gradio as gr -from toolbox import get_conf -CODE_HIGHLIGHT, = get_conf('CODE_HIGHLIGHT') -# gradio可用颜色列表 -# gr.themes.utils.colors.slate (石板色) -# gr.themes.utils.colors.gray (灰色) -# gr.themes.utils.colors.zinc (锌色) -# gr.themes.utils.colors.neutral (中性色) -# gr.themes.utils.colors.stone (石头色) -# gr.themes.utils.colors.red (红色) -# gr.themes.utils.colors.orange (橙色) -# gr.themes.utils.colors.amber (琥珀色) -# gr.themes.utils.colors.yellow (黄色) -# gr.themes.utils.colors.lime (酸橙色) -# gr.themes.utils.colors.green (绿色) -# gr.themes.utils.colors.emerald (祖母绿) -# gr.themes.utils.colors.teal (青蓝色) -# gr.themes.utils.colors.cyan (青色) -# gr.themes.utils.colors.sky (天蓝色) -# gr.themes.utils.colors.blue (蓝色) -# gr.themes.utils.colors.indigo (靛蓝色) -# gr.themes.utils.colors.violet (紫罗兰色) -# gr.themes.utils.colors.purple (紫色) -# gr.themes.utils.colors.fuchsia (洋红色) -# gr.themes.utils.colors.pink (粉红色) -# gr.themes.utils.colors.rose (玫瑰色) - - -def adjust_theme(): - try: - color_er = gr.themes.utils.colors.fuchsia - set_theme = gr.themes.Default( - primary_hue=gr.themes.utils.colors.orange, - neutral_hue=gr.themes.utils.colors.gray, - font=["sans-serif", "Microsoft YaHei", "ui-sans-serif", "system-ui", - "sans-serif", gr.themes.utils.fonts.GoogleFont("Source Sans Pro")], - font_mono=["ui-monospace", "Consolas", "monospace", gr.themes.utils.fonts.GoogleFont("IBM Plex Mono")]) - set_theme.set( - # Colors - input_background_fill_dark="*neutral_800", - # Transition - button_transition="none", - # Shadows - button_shadow="*shadow_drop", - button_shadow_hover="*shadow_drop_lg", - button_shadow_active="*shadow_inset", - input_shadow="0 0 0 *shadow_spread transparent, *shadow_inset", - input_shadow_focus="0 0 0 *shadow_spread *secondary_50, *shadow_inset", - input_shadow_focus_dark="0 0 0 *shadow_spread *neutral_700, *shadow_inset", - checkbox_label_shadow="*shadow_drop", - block_shadow="*shadow_drop", - form_gap_width="1px", - # Button borders - input_border_width="1px", - input_background_fill="white", - # Gradients - stat_background_fill="linear-gradient(to right, *primary_400, *primary_200)", - stat_background_fill_dark="linear-gradient(to right, *primary_400, *primary_600)", - error_background_fill=f"linear-gradient(to right, 
{color_er.c100}, *background_fill_secondary)", - error_background_fill_dark="*background_fill_primary", - checkbox_label_background_fill="linear-gradient(to top, *neutral_50, white)", - checkbox_label_background_fill_dark="linear-gradient(to top, *neutral_900, *neutral_800)", - checkbox_label_background_fill_hover="linear-gradient(to top, *neutral_100, white)", - checkbox_label_background_fill_hover_dark="linear-gradient(to top, *neutral_900, *neutral_800)", - button_primary_background_fill="linear-gradient(to bottom right, *primary_100, *primary_300)", - button_primary_background_fill_dark="linear-gradient(to bottom right, *primary_500, *primary_600)", - button_primary_background_fill_hover="linear-gradient(to bottom right, *primary_100, *primary_200)", - button_primary_background_fill_hover_dark="linear-gradient(to bottom right, *primary_500, *primary_500)", - button_primary_border_color_dark="*primary_500", - button_secondary_background_fill="linear-gradient(to bottom right, *neutral_100, *neutral_200)", - button_secondary_background_fill_dark="linear-gradient(to bottom right, *neutral_600, *neutral_700)", - button_secondary_background_fill_hover="linear-gradient(to bottom right, *neutral_100, *neutral_100)", - button_secondary_background_fill_hover_dark="linear-gradient(to bottom right, *neutral_600, *neutral_600)", - button_cancel_background_fill=f"linear-gradient(to bottom right, {color_er.c100}, {color_er.c200})", - button_cancel_background_fill_dark=f"linear-gradient(to bottom right, {color_er.c600}, {color_er.c700})", - button_cancel_background_fill_hover=f"linear-gradient(to bottom right, {color_er.c100}, {color_er.c100})", - button_cancel_background_fill_hover_dark=f"linear-gradient(to bottom right, {color_er.c600}, {color_er.c600})", - button_cancel_border_color=color_er.c200, - button_cancel_border_color_dark=color_er.c600, - button_cancel_text_color=color_er.c600, - button_cancel_text_color_dark="white", - ) - except: - set_theme = None - print('gradio版本较旧, 不能自定义字体和颜色') - return set_theme - - -advanced_css = """ -/* 设置表格的外边距为1em,内部单元格之间边框合并,空单元格显示. */ -.markdown-body table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} - -/* 设置表格单元格的内边距为5px,边框粗细为1.2px,颜色为--border-color-primary. */ -.markdown-body th, .markdown-body td { - border: 1.2px solid var(--border-color-primary); - padding: 5px; -} - -/* 设置表头背景颜色为rgba(175,184,193,0.2),透明度为0.2. */ -.markdown-body thead { - background-color: rgba(175,184,193,0.2); -} - -/* 设置表头单元格的内边距为0.5em和0.2em. */ -.markdown-body thead th { - padding: .5em .2em; -} - -/* 去掉列表前缀的默认间距,使其与文本线对齐. */ -.markdown-body ol, .markdown-body ul { - padding-inline-start: 2em !important; -} - -/* 设定聊天气泡的样式,包括圆角、最大宽度和阴影等. */ -[class *= "message"] { - border-radius: var(--radius-xl) !important; - /* padding: var(--spacing-xl) !important; */ - /* font-size: var(--text-md) !important; */ - /* line-height: var(--line-md) !important; */ - /* min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); */ - /* min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); */ -} -[data-testid = "bot"] { - max-width: 95%; - /* width: auto !important; */ - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 100%; - /* width: auto !important; */ - border-bottom-right-radius: 0 !important; -} - -/* 行内代码的背景设为淡灰色,设定圆角和间距. 
*/ -.markdown-body code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(13, 17, 23, 0.95); - color: #c9d1d9; -} - -.dark .markdown-body code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} - -/* 设定代码块的样式,包括背景颜色、内、外边距、圆角。 */ -.markdown-body pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: rgba(13, 17, 23, 0.95); - border-radius: 10px; - padding: 1em; - margin: 1em 2em 1em 0.5em; -} - -.dark .markdown-body pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: rgba(175,184,193,0.2); - border-radius: 10px; - padding: 1em; - margin: 1em 2em 1em 0.5em; -} - -""" - -if CODE_HIGHLIGHT: - advanced_css += """ - -.codehilite .hll { background-color: #6e7681 } -.codehilite .c { color: #8b949e; font-style: italic } /* Comment */ -.codehilite .err { color: #f85149 } /* Error */ -.codehilite .esc { color: #c9d1d9 } /* Escape */ -.codehilite .g { color: #c9d1d9 } /* Generic */ -.codehilite .k { color: #ff7b72 } /* Keyword */ -.codehilite .l { color: #a5d6ff } /* Literal */ -.codehilite .n { color: #c9d1d9 } /* Name */ -.codehilite .o { color: #ff7b72; font-weight: bold } /* Operator */ -.codehilite .x { color: #c9d1d9 } /* Other */ -.codehilite .p { color: #c9d1d9 } /* Punctuation */ -.codehilite .ch { color: #8b949e; font-style: italic } /* Comment.Hashbang */ -.codehilite .cm { color: #8b949e; font-style: italic } /* Comment.Multiline */ -.codehilite .cp { color: #8b949e; font-weight: bold; font-style: italic } /* Comment.Preproc */ -.codehilite .cpf { color: #8b949e; font-style: italic } /* Comment.PreprocFile */ -.codehilite .c1 { color: #8b949e; font-style: italic } /* Comment.Single */ -.codehilite .cs { color: #8b949e; font-weight: bold; font-style: italic } /* Comment.Special */ -.codehilite .gd { color: #ffa198; background-color: #490202 } /* Generic.Deleted */ -.codehilite .ge { color: #c9d1d9; font-style: italic } /* Generic.Emph */ -.codehilite .gr { color: #ffa198 } /* Generic.Error */ -.codehilite .gh { color: #79c0ff; font-weight: bold } /* Generic.Heading */ -.codehilite .gi { color: #56d364; background-color: #0f5323 } /* Generic.Inserted */ -.codehilite .go { color: #8b949e } /* Generic.Output */ -.codehilite .gp { color: #8b949e } /* Generic.Prompt */ -.codehilite .gs { color: #c9d1d9; font-weight: bold } /* Generic.Strong */ -.codehilite .gu { color: #79c0ff } /* Generic.Subheading */ -.codehilite .gt { color: #ff7b72 } /* Generic.Traceback */ -.codehilite .g-Underline { color: #c9d1d9; text-decoration: underline } /* Generic.Underline */ -.codehilite .kc { color: #79c0ff } /* Keyword.Constant */ -.codehilite .kd { color: #ff7b72 } /* Keyword.Declaration */ -.codehilite .kn { color: #ff7b72 } /* Keyword.Namespace */ -.codehilite .kp { color: #79c0ff } /* Keyword.Pseudo */ -.codehilite .kr { color: #ff7b72 } /* Keyword.Reserved */ -.codehilite .kt { color: #ff7b72 } /* Keyword.Type */ -.codehilite .ld { color: #79c0ff } /* Literal.Date */ -.codehilite .m { color: #a5d6ff } /* Literal.Number */ -.codehilite .s { color: #a5d6ff } /* Literal.String */ -.codehilite .na { color: #c9d1d9 } /* Name.Attribute */ -.codehilite .nb { color: #c9d1d9 } /* Name.Builtin */ -.codehilite .nc { color: #f0883e; font-weight: bold } /* Name.Class */ -.codehilite .no { color: #79c0ff; font-weight: bold } /* 
Name.Constant */ -.codehilite .nd { color: #d2a8ff; font-weight: bold } /* Name.Decorator */ -.codehilite .ni { color: #ffa657 } /* Name.Entity */ -.codehilite .ne { color: #f0883e; font-weight: bold } /* Name.Exception */ -.codehilite .nf { color: #d2a8ff; font-weight: bold } /* Name.Function */ -.codehilite .nl { color: #79c0ff; font-weight: bold } /* Name.Label */ -.codehilite .nn { color: #ff7b72 } /* Name.Namespace */ -.codehilite .nx { color: #c9d1d9 } /* Name.Other */ -.codehilite .py { color: #79c0ff } /* Name.Property */ -.codehilite .nt { color: #7ee787 } /* Name.Tag */ -.codehilite .nv { color: #79c0ff } /* Name.Variable */ -.codehilite .ow { color: #ff7b72; font-weight: bold } /* Operator.Word */ -.codehilite .pm { color: #c9d1d9 } /* Punctuation.Marker */ -.codehilite .w { color: #6e7681 } /* Text.Whitespace */ -.codehilite .mb { color: #a5d6ff } /* Literal.Number.Bin */ -.codehilite .mf { color: #a5d6ff } /* Literal.Number.Float */ -.codehilite .mh { color: #a5d6ff } /* Literal.Number.Hex */ -.codehilite .mi { color: #a5d6ff } /* Literal.Number.Integer */ -.codehilite .mo { color: #a5d6ff } /* Literal.Number.Oct */ -.codehilite .sa { color: #79c0ff } /* Literal.String.Affix */ -.codehilite .sb { color: #a5d6ff } /* Literal.String.Backtick */ -.codehilite .sc { color: #a5d6ff } /* Literal.String.Char */ -.codehilite .dl { color: #79c0ff } /* Literal.String.Delimiter */ -.codehilite .sd { color: #a5d6ff } /* Literal.String.Doc */ -.codehilite .s2 { color: #a5d6ff } /* Literal.String.Double */ -.codehilite .se { color: #79c0ff } /* Literal.String.Escape */ -.codehilite .sh { color: #79c0ff } /* Literal.String.Heredoc */ -.codehilite .si { color: #a5d6ff } /* Literal.String.Interpol */ -.codehilite .sx { color: #a5d6ff } /* Literal.String.Other */ -.codehilite .sr { color: #79c0ff } /* Literal.String.Regex */ -.codehilite .s1 { color: #a5d6ff } /* Literal.String.Single */ -.codehilite .ss { color: #a5d6ff } /* Literal.String.Symbol */ -.codehilite .bp { color: #c9d1d9 } /* Name.Builtin.Pseudo */ -.codehilite .fm { color: #d2a8ff; font-weight: bold } /* Name.Function.Magic */ -.codehilite .vc { color: #79c0ff } /* Name.Variable.Class */ -.codehilite .vg { color: #79c0ff } /* Name.Variable.Global */ -.codehilite .vi { color: #79c0ff } /* Name.Variable.Instance */ -.codehilite .vm { color: #79c0ff } /* Name.Variable.Magic */ -.codehilite .il { color: #a5d6ff } /* Literal.Number.Integer.Long */ - -.dark .codehilite .hll { background-color: #2C3B41 } -.dark .codehilite .c { color: #79d618; font-style: italic } /* Comment */ -.dark .codehilite .err { color: #FF5370 } /* Error */ -.dark .codehilite .esc { color: #89DDFF } /* Escape */ -.dark .codehilite .g { color: #EEFFFF } /* Generic */ -.dark .codehilite .k { color: #BB80B3 } /* Keyword */ -.dark .codehilite .l { color: #C3E88D } /* Literal */ -.dark .codehilite .n { color: #EEFFFF } /* Name */ -.dark .codehilite .o { color: #89DDFF } /* Operator */ -.dark .codehilite .p { color: #89DDFF } /* Punctuation */ -.dark .codehilite .ch { color: #79d618; font-style: italic } /* Comment.Hashbang */ -.dark .codehilite .cm { color: #79d618; font-style: italic } /* Comment.Multiline */ -.dark .codehilite .cp { color: #79d618; font-style: italic } /* Comment.Preproc */ -.dark .codehilite .cpf { color: #79d618; font-style: italic } /* Comment.PreprocFile */ -.dark .codehilite .c1 { color: #79d618; font-style: italic } /* Comment.Single */ -.dark .codehilite .cs { color: #79d618; font-style: italic } /* Comment.Special */ -.dark .codehilite .gd { 
color: #FF5370 } /* Generic.Deleted */ -.dark .codehilite .ge { color: #89DDFF } /* Generic.Emph */ -.dark .codehilite .gr { color: #FF5370 } /* Generic.Error */ -.dark .codehilite .gh { color: #C3E88D } /* Generic.Heading */ -.dark .codehilite .gi { color: #C3E88D } /* Generic.Inserted */ -.dark .codehilite .go { color: #79d618 } /* Generic.Output */ -.dark .codehilite .gp { color: #FFCB6B } /* Generic.Prompt */ -.dark .codehilite .gs { color: #FF5370 } /* Generic.Strong */ -.dark .codehilite .gu { color: #89DDFF } /* Generic.Subheading */ -.dark .codehilite .gt { color: #FF5370 } /* Generic.Traceback */ -.dark .codehilite .kc { color: #89DDFF } /* Keyword.Constant */ -.dark .codehilite .kd { color: #BB80B3 } /* Keyword.Declaration */ -.dark .codehilite .kn { color: #89DDFF; font-style: italic } /* Keyword.Namespace */ -.dark .codehilite .kp { color: #89DDFF } /* Keyword.Pseudo */ -.dark .codehilite .kr { color: #BB80B3 } /* Keyword.Reserved */ -.dark .codehilite .kt { color: #BB80B3 } /* Keyword.Type */ -.dark .codehilite .ld { color: #C3E88D } /* Literal.Date */ -.dark .codehilite .m { color: #F78C6C } /* Literal.Number */ -.dark .codehilite .s { color: #C3E88D } /* Literal.String */ -.dark .codehilite .na { color: #BB80B3 } /* Name.Attribute */ -.dark .codehilite .nb { color: #82AAFF } /* Name.Builtin */ -.dark .codehilite .nc { color: #FFCB6B } /* Name.Class */ -.dark .codehilite .no { color: #EEFFFF } /* Name.Constant */ -.dark .codehilite .nd { color: #82AAFF } /* Name.Decorator */ -.dark .codehilite .ni { color: #89DDFF } /* Name.Entity */ -.dark .codehilite .ne { color: #FFCB6B } /* Name.Exception */ -.dark .codehilite .nf { color: #82AAFF } /* Name.Function */ -.dark .codehilite .nl { color: #82AAFF } /* Name.Label */ -.dark .codehilite .nn { color: #FFCB6B } /* Name.Namespace */ -.dark .codehilite .nx { color: #EEFFFF } /* Name.Other */ -.dark .codehilite .py { color: #FFCB6B } /* Name.Property */ -.dark .codehilite .nt { color: #FF5370 } /* Name.Tag */ -.dark .codehilite .nv { color: #89DDFF } /* Name.Variable */ -.dark .codehilite .ow { color: #89DDFF; font-style: italic } /* Operator.Word */ -.dark .codehilite .pm { color: #89DDFF } /* Punctuation.Marker */ -.dark .codehilite .w { color: #EEFFFF } /* Text.Whitespace */ -.dark .codehilite .mb { color: #F78C6C } /* Literal.Number.Bin */ -.dark .codehilite .mf { color: #F78C6C } /* Literal.Number.Float */ -.dark .codehilite .mh { color: #F78C6C } /* Literal.Number.Hex */ -.dark .codehilite .mi { color: #F78C6C } /* Literal.Number.Integer */ -.dark .codehilite .mo { color: #F78C6C } /* Literal.Number.Oct */ -.dark .codehilite .sa { color: #BB80B3 } /* Literal.String.Affix */ -.dark .codehilite .sb { color: #C3E88D } /* Literal.String.Backtick */ -.dark .codehilite .sc { color: #C3E88D } /* Literal.String.Char */ -.dark .codehilite .dl { color: #EEFFFF } /* Literal.String.Delimiter */ -.dark .codehilite .sd { color: #79d618; font-style: italic } /* Literal.String.Doc */ -.dark .codehilite .s2 { color: #C3E88D } /* Literal.String.Double */ -.dark .codehilite .se { color: #EEFFFF } /* Literal.String.Escape */ -.dark .codehilite .sh { color: #C3E88D } /* Literal.String.Heredoc */ -.dark .codehilite .si { color: #89DDFF } /* Literal.String.Interpol */ -.dark .codehilite .sx { color: #C3E88D } /* Literal.String.Other */ -.dark .codehilite .sr { color: #89DDFF } /* Literal.String.Regex */ -.dark .codehilite .s1 { color: #C3E88D } /* Literal.String.Single */ -.dark .codehilite .ss { color: #89DDFF } /* Literal.String.Symbol */ -.dark 
.codehilite .bp { color: #89DDFF } /* Name.Builtin.Pseudo */ -.dark .codehilite .fm { color: #82AAFF } /* Name.Function.Magic */ -.dark .codehilite .vc { color: #89DDFF } /* Name.Variable.Class */ -.dark .codehilite .vg { color: #89DDFF } /* Name.Variable.Global */ -.dark .codehilite .vi { color: #89DDFF } /* Name.Variable.Instance */ -.dark .codehilite .vm { color: #82AAFF } /* Name.Variable.Magic */ -.dark .codehilite .il { color: #F78C6C } /* Literal.Number.Integer.Long */ - -""" diff --git a/spaces/eson/tokenizer-arena/vocab/gpt_neox_chinese_v1/trouble-shooting.md b/spaces/eson/tokenizer-arena/vocab/gpt_neox_chinese_v1/trouble-shooting.md deleted file mode 100644 index 727b44131b713077cf6ccfb615566c8fd2bcde9c..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/vocab/gpt_neox_chinese_v1/trouble-shooting.md +++ /dev/null @@ -1,22 +0,0 @@ - - -## Exception: data did not match any variant of untagged enum ModelWrapper at line 108219 column 3 - - - - -## The OrderedVocab you are attempting to save contains a hole for index 50254, your vocabulary could be corrupted ! - - -``` -The OrderedVocab you are attempting to save contains a hole for index 50254, your vocabulary could be corrupted ! -The OrderedVocab you are attempting to save contains a hole for index 50255, your vocabulary could be corrupted ! -The OrderedVocab you are attempting to save contains a hole for index 50256, your vocabulary could be corrupted ! -``` - - -原因:50254 这些token并未在vocab中定义,只在 `added_tokens` 里定义了。 - -## ss - - diff --git a/spaces/evaluate-metric/competition_math/competition_math.py b/spaces/evaluate-metric/competition_math/competition_math.py deleted file mode 100644 index 9a82eb40b656dfe892aa5f3ebfef30c21329e92d..0000000000000000000000000000000000000000 --- a/spaces/evaluate-metric/competition_math/competition_math.py +++ /dev/null @@ -1,95 +0,0 @@ -# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Accuracy metric for the Mathematics Aptitude Test of Heuristics (MATH) dataset.""" - -import datasets -import math_equivalence # From: git+https://github.com/hendrycks/math.git - -import evaluate - - -_CITATION = """\ -@article{hendrycksmath2021, - title={Measuring Mathematical Problem Solving With the MATH Dataset}, - author={Dan Hendrycks - and Collin Burns - and Saurav Kadavath - and Akul Arora - and Steven Basart - and Eric Tang - and Dawn Song - and Jacob Steinhardt}, - journal={arXiv preprint arXiv:2103.03874}, - year={2021} -} -""" - - -_DESCRIPTION = """\ -This metric is used to assess performance on the Mathematics Aptitude Test of Heuristics (MATH) dataset. -It first canonicalizes the inputs (e.g., converting "1/2" to "\\frac{1}{2}") and then computes accuracy. -""" - - -_KWARGS_DESCRIPTION = r""" -Calculates accuracy after canonicalizing inputs. - -Args: - predictions: list of predictions to score. Each prediction - is a string that contains natural language and LaTex. 
- references: list of reference for each prediction. Each - reference is a string that contains natural language - and LaTex. -Returns: - accuracy: accuracy after canonicalizing inputs - (e.g., converting "1/2" to "\\frac{1}{2}") - -Examples: - >>> metric = evaluate.load("competition_math") - >>> results = metric.compute(references=["\\frac{1}{2}"], predictions=["1/2"]) - >>> print(results) - {'accuracy': 1.0} -""" - - -@datasets.utils.file_utils.add_end_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION) -class CompetitionMathMetric(evaluate.Metric): - """Accuracy metric for the MATH dataset.""" - - def _info(self): - return evaluate.MetricInfo( - description=_DESCRIPTION, - citation=_CITATION, - inputs_description=_KWARGS_DESCRIPTION, - features=datasets.Features( - { - "predictions": datasets.Value("string"), - "references": datasets.Value("string"), - } - ), - # Homepage of the metric for documentation - homepage="https://github.com/hendrycks/math", - # Additional links to the codebase or references - codebase_urls=["https://github.com/hendrycks/math"], - ) - - def _compute(self, predictions, references): - """Returns the scores""" - n_correct = 0.0 - for i, j in zip(predictions, references): - n_correct += 1.0 if math_equivalence.is_equiv(i, j) else 0.0 - accuracy = n_correct / len(predictions) - return { - "accuracy": accuracy, - } diff --git a/spaces/facebook/MusicGen/model_cards/AUDIOGEN_MODEL_CARD.md b/spaces/facebook/MusicGen/model_cards/AUDIOGEN_MODEL_CARD.md deleted file mode 100644 index 5dcd23d8276d8f474043976672ea249d8b2a9dd1..0000000000000000000000000000000000000000 --- a/spaces/facebook/MusicGen/model_cards/AUDIOGEN_MODEL_CARD.md +++ /dev/null @@ -1,79 +0,0 @@ -# AudioGen Model Card - -## Model details -**Organization developing the model:** The FAIR team of Meta AI. - -**Model date:** This version of AudioGen was trained between July 2023 and August 2023. - -**Model version:** This is version 2 of the model, not to be confused with the original AudioGen model published in ["AudioGen: Textually Guided Audio Generation"][audiogen]. -In this version (v2), AudioGen was trained on the same data, but with some other differences: -1. This model was trained on 10 seconds (vs. 5 seconds in v1). -2. The discrete representation used under the hood is extracted using a retrained EnCodec model on the environmental sound data, following the EnCodec setup detailed in the ["Simple and Controllable Music Generation" paper][musicgen]. -3. No audio mixing augmentations. - -**Model type:** AudioGen consists of an EnCodec model for audio tokenization, and an auto-regressive language model based on the transformer architecture for audio modeling. The released model has 1.5B parameters. - -**Paper or resource for more information:** More information can be found in the paper [AudioGen: Textually Guided Audio Generation](https://arxiv.org/abs/2209.15352). - -**Citation details:** See [AudioGen paper][audiogen] - -**License:** Code is released under MIT, model weights are released under CC-BY-NC 4.0. - -**Where to send questions or comments about the model:** Questions and comments about AudioGen can be sent via the [GitHub repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue. 
- -## Intended use -**Primary intended use:** The primary use of AudioGen is research on AI-based audio generation, including: -- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science -- Generation of sound guided by text to understand current abilities of generative AI models by machine learning amateurs - -**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models. - -**Out-of-scope use cases:** The model should not be used in downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate audio pieces that create hostile or alienating environments for people. This includes generating audio that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. - -## Metrics - -**Model performance measures:** We used the following objective measures to evaluate the model on a standard audio benchmark: -- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish) -- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST) - -Additionally, we ran qualitative studies with human participants, evaluating the performance of the model along the following axes: -- Overall quality of the audio samples; -- Text relevance to the provided text input; - -More details on performance measures and human studies can be found in the paper. - -**Decision thresholds:** Not applicable. - -## Evaluation datasets - -The model was evaluated on the [AudioCaps benchmark](https://audiocaps.github.io/). - -## Training datasets - -The model was trained on the following data sources: a subset of AudioSet (Gemmeke et al., 2017), [BBC sound effects](https://sound-effects.bbcrewind.co.uk/), AudioCaps (Kim et al., 2019), Clotho v2 (Drossos et al., 2020), VGG-Sound (Chen et al., 2020), FSD50K (Fonseca et al., 2021), [Free To Use Sounds](https://www.freetousesounds.com/all-in-one-bundle/), [Sonniss Game Effects](https://sonniss.com/gameaudiogdc), [WeSoundEffects](https://wesoundeffects.com/we-sound-effects-bundle-2020/), [Paramount Motion - Odeon Cinematic Sound Effects](https://www.paramountmotion.com/odeon-sound-effects). - -## Evaluation results - -Below are the objective metrics obtained with the released model on AudioCaps (consisting of 10-second long samples). Note that the model differs from the original AudioGen model introduced in the paper, hence the difference in the metrics. - -| Model | Frechet Audio Distance | KLD | Text consistency | -|---|---|---|---| -| facebook/audiogen-medium | 1.77 | 1.58 | 0.30 | - -More information can be found in the paper [AudioGen: Textually Guided Audio Generation][audiogen], in the Experiments section. - -## Limitations and biases - -**Limitations:** -- The model is not able to generate realistic vocals. -- The model has been trained with English descriptions and will not perform as well in other languages. -- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results. - -**Biases:** The datasets used for training may be lacking in diversity and are not representative of all possible sound events. 
The generated samples from the model will reflect the biases from the training data. - -**Risks and harms:** Biases and limitations of the model may lead to the generation of samples that may be considered biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will help broaden the application to new and more representative data. - -**Use cases:** Users must be aware of the biases, limitations and risks of the model. AudioGen is a model developed for artificial intelligence research on audio generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks. - -[musicgen]: https://arxiv.org/abs/2306.05284 -[audiogen]: https://arxiv.org/abs/2209.15352 diff --git a/spaces/falterWliame/Face_Mask_Detection/((EXCLUSIVE)) Downloadsniperghostwarriorunlockcodepc.md b/spaces/falterWliame/Face_Mask_Detection/((EXCLUSIVE)) Downloadsniperghostwarriorunlockcodepc.md deleted file mode 100644 index 242bc1bb74cab12ac003053d47f496da2a17d92e..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/((EXCLUSIVE)) Downloadsniperghostwarriorunlockcodepc.md +++ /dev/null @@ -1,12 +0,0 @@ -
    -

    https://coub.com/stories/3047124-downloadsniperghostwarriorunlockcodepc-exclusive https://coub.com/stories/3047123-adobe-acrobat-pro-license-code-darlann https://trello.com/c/fQ1awD4D/22-hack-dll-filescom-fixer-v27722024-full-version https://browkinmortproc.weebly.com/downloadsniperghostwarriorunlockcodepc.html

    -

    downloadsniperghostwarriorunlockcodepc


    Download Ziphttps://urlca.com/2uDcL5



    -

    https://coub.com/stories/3047124-downloadsniperghostwarriorunlockcodepc-exclusive https://coub.com/stories/3047123-adobe-acrobat-pro-license-code-darlann https://trello.com/c/ohHbnl3c/45-wowsnooperdiagnosticsoftware https://trello.com/c/SmKl1nY8/42-downloadsniperghostwarriorunlockcodepc

    -

    http://appdlo.com/downloadsniperghostwarriorunlockcodepc.html https://trello.com/c/ohHbnl3c/45-wowsnooperdiagnosticsoftware https://trello.com/c/SmKl1nY8/42-downloadsniperghostwarriorunlockcodepc

    -

    http://appdlo.com/downloadsniperghostwarriorunlockcodepc.html https://trello.com/c/fQ1awD4D/22-hack-dll-filescom-fixer-v27722024-full-version https://browkinmortproc.weebly.com/downloadsniperghostwarriorunlockcodepc.html

    -

    [FULL] Crack CarryMap V 2.3,downloadsniperghostwarriorunlockcodepc,HD Kaho Na Pyar Hai Full [/FULL] Solucionario De Venero Matematica Basica Pdf 129,downloadsniperghostwarriorunlockcodepc,desmumek 4bd6d2c6ca.

    -

    -

    https://coub.com/stories/3047123-adobe-acrobat-pro-license-code-darlann https://trello.com/c/bqzjmjmq/22-kannadare-redux-v176448663-mod-for-windows-7/ https://trello.com/c/SmKl1nY8/42-downloadsniperghostwarriorunlockcodepc

    -

    https://coub.com/stories/3047124-downloadsniperghostwarriorunlockcodepc-exclusive https://trello.com/c/2a8jZRmv/7-nietzsche-walks-in-the-woods-to-his-death-complete-edition-a-book-about-nietzsche https://coub.com/stories/3047123-adobe-acrobat-pro-license-code-darlann https://trello.com/c/OhSmfgQ2/36-simple-iis-configuration-and-server-setup https://coub.com/stories/3047124-downloadsniperghostwarriorunlockcodepc-exclusive

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Cmo instalar y configurar Call of Duty Mobile APK para PC sin problemas.md b/spaces/fatiXbelha/sd/Cmo instalar y configurar Call of Duty Mobile APK para PC sin problemas.md deleted file mode 100644 index a556fe9e1baf136a21e48ccd3e66a59f32b98009..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Cmo instalar y configurar Call of Duty Mobile APK para PC sin problemas.md +++ /dev/null @@ -1,225 +0,0 @@ - -

    Call of Duty Mobile APK para PC: Cómo descargar y jugar este juego de acción en tu computadora

    -

    ¿Te gustan los juegos de acción multijugador? ¿Quieres disfrutar de una experiencia de combate realista y competitiva en tu PC o Mac? Entonces te interesará saber cómo descargar y jugar Call of Duty Mobile APK para PC.

    -

    call of duty mobile apk para pc


    Download Filehttps://urllie.com/2uNHfA



    -

    Call of Duty Mobile es uno de los juegos más populares del momento, con millones de jugadores en todo el mundo. Se trata de un juego que te ofrece una variedad de modos de juego, mapas icónicos,

    armas personalizables, personajes legendarios y una acción frenética. Además, el juego se actualiza constantemente con nuevos contenidos y eventos.

    -

    Qué es Call of Duty Mobile y por qué deberías jugarlo en PC

    -

    Call of Duty Mobile es un juego de acción multijugador que te ofrece una experiencia de combate realista y competitiva

    -

    Call of Duty Mobile es un juego gratuito que puedes descargar en tu dispositivo móvil Android o iOS. El juego se basa en la exitosa franquicia de Call of Duty, que ha sido una de las más vendidas y aclamadas en el género de los shooters.

    -

    En Call of Duty Mobile puedes elegir entre diferentes modos de juego, como duelo por equipos, dominio, baja confirmada, battle royale y más. Cada modo tiene sus propias reglas y objetivos, que te pondrán a prueba en diferentes escenarios de combate.

    -

    También puedes explorar los diferentes mapas disponibles, que son recreaciones de los más icónicos de la saga, como Nuketown, Crash, Hijacked y más. Cada mapa tiene sus propias características y zonas estratégicas, que deberás aprovechar para ganar ventaja sobre tus enemigos.

    -

    Además, puedes personalizar tus armas, equipamiento y personajes con los diferentes camuflajes, accesorios y trajes que puedes obtener o comprar. Puedes crear tu propio estilo de juego y mostrar tu personalidad en el campo de batalla.

    -

    call of duty mobile pc download apk
    -call of duty mobile apk for pc windows 10
    -call of duty mobile emulator apk pc
    -call of duty mobile pc version apk
    -call of duty mobile apk on pc bluestacks
    -call of duty mobile apk para pc gratis
    -call of duty mobile apk para pc sin emulador
    -call of duty mobile apk para pc gameloop
    -call of duty mobile apk para pc mega
    -call of duty mobile apk para pc mumu
    -call of duty mobile apk para pc requisitos
    -call of duty mobile apk para pc 2023
    -call of duty mobile apk para pc descargar
    -call of duty mobile apk para pc online
    -call of duty mobile apk para pc nox
    -call of duty mobile apk para pc mediafıre
    -call of duty mobile apk para pc uptodown
    -call of duty mobile apk para pc season 5
    -call of duty mobile apk para pc 32 bits
    -call of duty mobile apk para pc fraco
    -call of duty mobile apk para pc baixar
    -call of duty mobile apk para pc hack
    -call of duty mobile apk para pc gameplay
    -call of duty mobile apk para pc full hd
    -call of duty mobile apk para pc 4k
    -call of duty mobile apk para pc low end
    -call of duty mobile apk para pc mod menu
    -call of duty mobile apk para pc zombies mode
    -call of duty mobile apk para pc battle royale
    -call of duty mobile apk para pc controller support
    -call of duty mobile apk para pc keyboard and mouse
    -call of duty mobile apk para pc graphics settings
    -call of duty mobile apk para pc best emulator
    -call of duty mobile apk para pc high fps
    -call of duty mobile apk para pc tips and tricks
    -call of duty mobile apk para pc review
    -call of duty mobile apk para pc reddit
    -call of duty mobile apk para pc youtube
    -call of duty mobile apk para pc facebook
    -call of duty mobile apk para pc twitter
    -call of duty mobile apk para pc discord
    -call of duty mobile apk para pc official website
    -call of duty mobile apk para pc latest update
    -call of duty mobile apk para pc new map and mode
    -call of duty mobile apk para pc free download link
    -call of duty mobile apk para pc how to install guide

    -

    Jugar Call of Duty Mobile en PC te ofrece varias ventajas, como una pantalla más grande, un mejor rendimiento, un control más preciso y una mayor comodidad

    -

    Aunque Call of Duty Mobile está diseñado para dispositivos móviles, eso no significa que no puedas jugarlo en tu PC o Mac. De hecho, hay varias razones por las que jugar Call of Duty Mobile en PC es una buena idea.

    -

    Una de las ventajas de jugar Call of Duty Mobile en PC es que puedes disfrutar de una pantalla más grande, lo que te permite apreciar mejor los detalles gráficos del juego y tener una mejor visión del entorno. Esto puede mejorar tu experiencia de juego y tu rendimiento en las partidas.

    -

    Otra ventaja de jugar Call of Duty Mobile en PC es que puedes aprovechar el mejor rendimiento de tu computadora, lo que te permite jugar con una mayor fluidez y estabilidad. Esto puede evitar los problemas de lag o caídas de FPS que pueden afectar a tu dispositivo móvil.

    -

    Asimismo, jugar Call of Duty Mobile en PC te ofrece un control más preciso y cómodo, ya que puedes usar el teclado, el mouse o el gamepad para manejar a tu personaje y disparar a tus enemigos. Esto puede darte una mayor precisión y rapidez en tus movimientos y acciones.

    -

    Por último, jugar Call of Duty Mobile en PC te ofrece una mayor comodidad, ya que no tienes que depender de la batería o la conexión de tu dispositivo móvil. Además, puedes evitar el cansancio visual o la fatiga muscular que puede provocar el uso prolongado del móvil.

    -

    Cómo descargar y jugar Call of Duty Mobile APK para PC con un emulador de Android

    -

    Un emulador de Android es un software que te permite ejecutar aplicaciones de Android en tu PC o Mac

    -

    Para poder jugar Call of Duty Mobile APK para PC necesitas usar un emulador de Android. Un emulador de Android es un software que te permite ejecutar aplicaciones de Android en tu PC o Mac, simulando el sistema operativo y el hardware de un dispositivo móvil.

    -

    Usando un emulador de Android puedes acceder a la Play Store y descargar e instalar cualquier aplicación o juego de Android en tu computadora. De esta manera, puedes disfrutar de tus juegos favoritos de Android en una pantalla más grande y con un mejor rendimiento.

    -

    Hay varios emuladores de Android disponibles, pero te recomendamos BlueStacks, GameLoop o Nox Player

    -

    Hay varios emuladores de Android disponibles en el mercado, pero no todos son iguales. Algunos pueden ofrecerte una mejor experiencia de juego que otros, dependiendo de sus características y funcionalidades.

    -

    Por eso, te recomendamos usar uno de estos tres emuladores: BlueStacks, GameLoop o Nox Player. Estos son algunos de los mejores emuladores de Android que puedes encontrar en Internet, y que te ofrecen varias características y ventajas para jugar Call of Duty Mobile APK para PC.

    -

    BlueStacks es una plataforma de juegos Android que te ofrece varias características, como controles inteligentes, sincronización de instancias múltiples, soporte nativo para gamepad y modos de rendimiento

    -

    BlueStacks es una de las plataformas de juegos Android más populares y confiables del mundo, con más de 500 millones de usuarios. BlueStacks te ofrece varias características que mejoran tu experiencia de juego, como:

    -
      -
    • Controles inteligentes: te permite cambiar automáticamente entre el modo táctil y el modo teclado y mouse según el juego que estés jugando, lo que te ahorra tiempo y esfuerzo.
    • -
    • Sincronización de instancias múltiples: te permite ejecutar varias instancias del mismo juego o de diferentes juegos al mismo tiempo, lo que te permite jugar con varias cuentas o en diferentes servidores.
    • -
    • Soporte nativo para gamepad: te permite conectar tu gamepad favorito a tu PC o Mac y usarlo para jugar a Call of Duty Mobile APK para PC con una mayor precisión y comodidad.
    • -
    • Modos de rendimiento: te permite ajustar el rendimiento del emulador según tus preferencias y las especificaciones de tu computadora, lo que te permite jugar con una mayor fluidez y estabilidad.
    • -
    -

    GameLoop es el emulador oficial de Call of Duty Mobile que te ofrece una optimización exclusiva, una interfaz simple, un modo disparo y una traducción en tiempo real

    -

    GameLoop es el emulador oficial de Call of Duty Mobile, desarrollado por Tencent, la misma empresa que creó el juego. GameLoop te ofrece una optimización exclusiva para Call of Duty Mobile APK para PC, lo que te garantiza una experiencia de juego óptima. Además, GameLoop te ofrece otras ventajas, como:

    -
      -
    • Interfaz simple: te ofrece una interfaz sencilla e intuitiva, que te permite acceder fácilmente a las funciones del emulador y al juego.
    • -
    • Modo disparo: te permite activar un modo especial que te permite disparar con el clic derecho del mouse, lo que te da una mayor rapidez y precisión en tus disparos.
    • -
    • Traducción en tiempo real: te permite traducir el chat del juego al idioma que prefieras, lo que te facilita la comunicación con otros jugadores.
    • -
    -

    Nox Player es un emulador ligero y rápido que te ofrece una personalización avanzada, una compatibilidad amplia, un modo MOBA y una función de captura de pantalla

    -

    Nox Player es un emulador ligero y rápido que te ofrece una personalización avanzada, una compatibilidad amplia, un modo MOBA y una función de captura de pantalla. Nox Player te ofrece algunas características interesantes, como:

    -
      -
    • Personalización avanzada: te permite configurar los parámetros del emulador según tus necesidades, como la resolución, la memoria RAM, la CPU, los gráficos y más.
    • -
    • Compatibilidad amplia: te permite ejecutar cualquier aplicación o juego de Android en tu PC o Mac, sin importar la versión o el modelo de tu dispositivo móvil.
    • -
    • Modo MOBA: te permite activar un modo especial que te permite usar el mouse para mover a tu personaje y el teclado para usar las habilidades, lo que te da una mayor facilidad y fluidez en los juegos del género MOBA.
    • -
    • Función de captura de pantalla: te permite tomar capturas de pantalla del juego con solo presionar un botón, lo que te permite guardar tus momentos favoritos o compartirlos con otros.
    • -
    -

    Los pasos para descargar y jugar Call of Duty Mobile APK para PC con un emulador de Android son los siguientes:

    -

    Descarga e instala el emulador de Android de tu preferencia en tu PC o Mac

    -

    El primer paso para jugar Call of Duty Mobile APK para PC es descargar e instalar el emulador de Android de tu preferencia en tu PC o Mac. Para ello, solo tienes que seguir estos pasos:

    -
      -
    1. Visita la página web oficial del emulador que quieras usar (BlueStacks, GameLoop o Nox Player) y haz clic en el botón de descarga.
    2. -
    3. Ejec

      uta el archivo descargado y sigue las instrucciones de instalación que aparecen en la pantalla.

    4. -
    5. Espera a que el emulador se instale y se inicie en tu PC o Mac.
    6. -
    -

    Inicia el emulador y accede a la Play Store con tu cuenta de Google

    -

    El segundo paso para jugar Call of Duty Mobile APK para PC es iniciar el emulador y acceder a la Play Store con tu cuenta de Google. Para ello, solo tienes que seguir estos pasos:

    -
      -
    1. Inicia el emulador que hayas instalado en tu PC o Mac.
    2. -
    3. Busca el icono de la Play Store en la pantalla de inicio del emulador y haz clic en él.
    4. -
    5. Ingresa con tu cuenta de Google o crea una nueva si no tienes una.
    6. -
    7. Acepta los términos y condiciones de la Play Store.
    8. -
    -

    Busca Call of Duty Mobile en la Play Store y haz clic en instalar

    -

    El tercer paso para jugar Call of Duty Mobile APK para PC es buscar Call of Duty Mobile en la Play Store y hacer clic en instalar. Para ello, solo tienes que seguir estos pasos:

    -
      -
    1. En la Play Store, escribe Call of Duty Mobile en el buscador y presiona enter.
    2. -
    3. Selecciona el juego que dice Call of Duty: Mobile - Temporada 6: El calor del verano, desarrollado por Activision Publishing, Inc.
    4. -
    5. Haz clic en el botón verde que dice Instalar y espera a que el juego se descargue e instale en tu PC o Mac.
    6. -
    -

    Una vez instalado el juego, haz clic en el icono de Call of Duty Mobile en la pantalla de inicio del emulador para empezar a jugar

    -

    El cuarto y último paso para jugar Call of Duty Mobile APK para PC es hacer clic en el icono de Call of Duty Mobile en la pantalla de inicio del emulador para empezar a jugar. Para ello, solo tienes que seguir estos pasos:

    -
      -
    1. Haz clic en el icono de Call of Duty Mobile que aparece en la pantalla de inicio del emulador.
    2. -
    3. Espera a que el juego se cargue y te muestre la pantalla de inicio.
    4. -
    5. Haz clic en el botón que dice Aceptar para aceptar los términos de servicio y la política de privacidad del juego.
    6. -
    7. Haz clic en el botón que dice Iniciar sesión con Activision, Facebook o Apple para ingresar con tu cuenta o crear una nueva si no tienes una.
    8. -
    9. Haz clic en el botón que dice Jugar para entrar al juego y disfrutar de Call of Duty Mobile APK para PC.
    10. -
    -

    Consejos y trucos para jugar Call of Duty Mobile APK para PC como un profesional

    -

    Configura los controles del juego según tu preferencia y comodidad, puedes usar el teclado, el mouse o el gamepad

    -

    Uno de los consejos más importantes para jugar Call of Duty Mobile APK para PC es configurar los controles del juego según tu preferencia y comodidad. Puedes usar el teclado, el mouse o el gamepad para manejar a tu personaje y disparar a tus enemigos. Para configurar los controles del juego, solo tienes que seguir estos pasos:

    -
      -
    1. Haz clic en el icono de ajustes que aparece en la esquina superior derecha de la pantalla del juego.
    2. -
    3. Haz clic en la pestaña que dice Controles y luego selecciona el modo de control que quieras usar: simple, avanzado o personalizado.
    4. -
    5. En el modo simple, solo tienes que apuntar al enemigo y el juego disparará automáticamente. En el modo avanzado, tienes que apuntar y disparar manualmente. En el modo personalizado, puedes ajustar los botones y las funciones según tu gusto.
    6. -
    7. También puedes cambiar los controles del teclado, del mouse o del gamepad haciendo clic en los iconos correspondientes que aparecen debajo de la pestaña Controles. Puedes arrastrar y soltar los botones en la pantalla o asignarles las teclas o botones que quieras.
    8. -
    9. Cuando hayas terminado de configurar los controles del juego, haz clic en el botón que dice Aplicar para guardar los cambios.
    10. -

    Ajusta los gráficos del juego según el rendimiento de tu PC o Mac, puedes elegir entre bajo, medio, alto o ultra

    -

    Otro consejo para jugar Call of Duty Mobile APK para PC es ajustar los gráficos del juego según el rendimiento de tu PC o Mac. Puedes elegir entre bajo, medio, alto o ultra, dependiendo de las especificaciones de tu computadora y de la calidad de imagen que quieras. Para ajustar los gráficos del juego, solo tienes que seguir estos pasos:

    -
      -
    1. Haz clic en el icono de ajustes que aparece en la esquina superior derecha de la pantalla del juego.
    2. -
    3. Haz clic en la pestaña que dice Gráficos y luego selecciona el nivel de gráficos que quieras usar: bajo, medio, alto o ultra.
    4. -
    5. En el nivel bajo, el juego tendrá una menor calidad de imagen, pero un mayor rendimiento. En el nivel medio, el juego tendrá una calidad de imagen aceptable y un rendimiento equilibrado. En el nivel alto, el juego tendrá una mayor calidad de imagen, pero un menor rendimiento. En el nivel ultra, el juego tendrá la máxima calidad de imagen, pero el rendimiento más bajo.
    6. -
    7. También puedes activar o desactivar otras opciones gráficas, como la profundidad de campo, las sombras, las texturas y más.
    8. -
    9. Cuando hayas terminado de ajustar los gráficos del juego, haz clic en el botón que dice Aplicar para guardar los cambios.
    10. -
    -

    Explora los diferentes modos de juego disponibles, como duelo por equipos, dominio, baja confirmada, battle royale y más

    -

    Un tercer consejo para jugar Call of Duty Mobile APK para PC es explorar los diferentes modos de juego disponibles, como duelo por equipos, dominio, baja confirmada, battle royale y más. Cada modo de juego tiene sus propias reglas y objetivos, que te pondrán a prueba en diferentes escenarios de combate. Para explorar los modos de juego disponibles, solo tienes que seguir estos pasos:

    -
      -
    1. Haz clic en el icono de multijugador o battle royale que aparece en la parte inferior izquierda de la pantalla del juego.
    2. -
    3. Haz clic en el botón que dice Modo y luego selecciona el modo de juego que quieras jugar: duelo por equipos, dominio, baja confirmada, punto caliente, buscar y destruir y más.
    4. -
    5. Duelo por equipos: es un modo clásico donde dos equipos de cinco jugadores se enfrentan en un mapa y gana el equipo que consiga más bajas al finalizar el tiempo o alcanzar el límite de puntuación.
    6. -
    7. Dominio: es un modo donde dos equipos de cinco jugadores deben capturar y defender tres zonas del mapa y gana el equipo que consiga más puntos al finalizar el tiempo.
    8. -
    9. Baja confirmada: es un modo donde dos equipos de cinco jugadores deben eliminar a sus enemigos y recoger sus placas de identificación para confirmar las bajas y gana el equipo que consiga más bajas confirmadas al finalizar el tiempo o alcanzar el límite de puntuación.
    10. -
    11. Punto caliente: es un modo donde dos equipos de cinco jugadores deben capturar y defender un punto del mapa que cambia cada cierto tiempo y gana el equipo que consiga más puntos al finalizar el tiempo.
    12. -
    13. Buscar y destruir: es un modo donde dos equipos de cinco jugadores se alternan entre atacar y defender dos objetivos del mapa y gana el equipo que elimine a todos sus enemigos o detone o desactive la bomba.
    14. -
    15. Battle royale: es un modo donde hasta 100 jugadores se lanzan en paracaídas a una isla gigante y deben sobrevivir hasta ser el último jugador o equipo en pie. Puedes jugar solo, en dúo o en escuadrón. Puedes elegir entre diferentes clases con habilidades especiales. Puedes encontrar armas, equipamiento y vehículos por toda la isla. Debes estar atento al círculo seguro que se va reduciendo con el tiempo.
    16. -

    Personaliza tus armas, equipamiento y personajes con los diferentes camuflajes, accesorios y trajes que puedes obtener o comprar

    -

    Un cuarto consejo para jugar Call of Duty Mobile APK para PC es personalizar tus armas, equipamiento y personajes con los diferentes camuflajes, accesorios y trajes que puedes obtener o comprar. De esta manera, puedes crear tu propio estilo de juego y mostrar tu personalidad en el campo de batalla. Para personalizar tus armas, equipamiento y personajes, solo tienes que seguir estos pasos:

    -
      -
    1. Haz clic en el icono de carga que aparece en la parte inferior izquierda de la pantalla del juego.
    2. -
    3. Haz clic en la pestaña que dice Armas, Equipamiento o Personajes y luego selecciona el elemento que quieras personalizar.
    4. -
    5. Haz clic en el botón que dice Personalizar y luego selecciona el camuflaje, el accesorio o el traje que quieras usar. Puedes ver una vista previa de cómo quedará tu elemento personalizado.
    6. -
    7. Algunos camuflajes, accesorios y trajes se pueden obtener gratis al completar misiones, subir de nivel o participar en eventos. Otros se pueden comprar con créditos o puntos COD, que son las monedas del juego. También hay algunos camuflajes, accesorios y trajes exclusivos que solo se pueden conseguir en cajas o pases de batalla.
    8. -
    9. Cuando hayas terminado de personalizar tu elemento, haz clic en el botón que dice Equipar para guardar los cambios.
    10. -
    -

    Participa en las actividades temporales y las misiones diarias para ganar recompensas y subir de nivel

    -

    Un quinto y último consejo para jugar Call of Duty Mobile APK para PC es participar en las actividades temporales y las misiones diarias para ganar recompensas y subir de nivel. De esta manera, puedes obtener más recursos, desbloquear más elementos y mejorar tus habilidades. Para participar en las actividades temporales y las misiones diarias, solo tienes que seguir estos pasos:

    -
      -
    1. Haz clic en el icono de eventos que aparece en la parte superior derecha de la pantalla del juego.
    2. -
    3. Haz clic en la pestaña que dice Temporales o Diarias y luego selecciona la actividad o la misión que quieras realizar.
    4. -
    5. Cada actividad o misión tiene sus propios requisitos y objetivos, que te indican lo que debes hacer para completarla. Por ejemplo, jugar cierto número de partidas, eliminar a cierto número de enemigos, usar cierto tipo de arma y más.
    6. -
    7. Cada actividad o misión también tiene sus propias recompensas, que te indican lo que obtendrás al completarla. Por ejemplo, créditos, puntos COD, cajas, camuflajes, accesorios, trajes y más.
    8. -
    9. Cuando completes una actividad o una misión, podrás reclamar tu recompensa haciendo clic en el botón que dice Reclamar. También podrás ver tu progreso en el nivel del pase de batalla o del evento actual.
    10. -
    -

    Conclusión

    -

    Call of Duty Mobile APK para PC es una excelente opción para los amantes de los juegos de acción que quieren disfrutar de una experiencia de combate realista y competitiva en su computadora

    -

    En conclusión, Call of Duty Mobile APK para PC es una excelente opción para los amantes de los juegos de acción que quieren disfrutar de una experiencia de combate realista y competitiva en su computadora. El juego te ofrece una variedad de modos de juego, mapas icónicos, armas personalizables, personajes legendarios y una acción frenética. Además, el juego se actualiza constantemente con nuevos contenidos y eventos.

    -

    Para jugar Call of Duty Mobile APK para PC solo necesitas descargar e instalar un emulador de Android como BlueStacks, GameLoop o Nox Player y seguir los pasos indicados anteriormente

    -

    Para jugar Call of Duty Mobile APK para PC solo necesitas descargar e instalar un emulador de Android como BlueStacks, GameLoop o Nox Player y seguir los pasos indicados anteriormente. Un emulador de Android es un software que te permite ejecutar aplicaciones de Android en tu PC o Mac. Us ando un emulador de Android puedes acceder a la Play Store y descargar e instalar Call of Duty Mobile APK para PC. Luego, puedes configurar los controles, los gráficos y los modos de juego según tu preferencia y comodidad. También puedes personalizar tus armas, equipamiento y personajes con los diferentes camuflajes, accesorios y trajes que puedes obtener o comprar. Además, puedes participar en las actividades temporales y las misiones diarias para ganar recompensas y subir de nivel.

    -

    Jugar Call of Duty Mobile APK para PC te ofrece varias ventajas, como una pantalla más grande, un mejor rendimiento, un control más preciso y una mayor comodidad

    -

    Jugar Call of Duty Mobile APK para PC te ofrece varias ventajas, como una pantalla más grande, un mejor rendimiento, un control más preciso y una mayor comodidad. Al jugar en tu PC o Mac, puedes disfrutar de una mejor calidad de imagen y una mejor visión del entorno. También puedes aprovechar el mejor rendimiento de tu computadora, lo que te permite jugar con una mayor fluidez y estabilidad. Asimismo, puedes usar el teclado, el mouse o el gamepad para manejar a tu personaje y disparar a tus enemigos, lo que te da una mayor precisión y rapidez. Por último, puedes evitar el cansancio visual o la fatiga muscular que puede provocar el uso prolongado del móvil.

    -

    Preguntas frecuentes

    -

    A continuación, te presentamos algunas preguntas frecuentes sobre Call of Duty Mobile APK para PC:

    -
      -
    1. ¿Qué requisitos necesita mi PC o Mac para jugar Call of Duty Mobile APK para PC?
    2. -

      Los requisitos mínimos que necesita tu PC o Mac para jugar Call of Duty Mobile APK para PC son los siguientes:

      -
        -
      • Sistema operativo: Windows 7 o superior / Mac OS X 10.11 o superior
      • -
      • Procesador: Intel o AMD de doble núcleo a 1.8 GHz o superior
      • -
      • Memoria RAM: 4 GB o superior
      • -
      • Gráficos: NVIDIA GeForce 8600 / GT 620 / AMD Radeon HD 4650 / Intel HD Graphics 4000 o superior
      • -
      • Espacio en disco: 5 GB o superior
      • -
      • Conexión a Internet: estable y de alta velocidad
      • -
      -
    3. ¿Puedo jugar Call of Duty Mobile APK para PC con mis amigos que juegan en dispositivos móviles?
    4. -

      Sí, puedes jugar Call of Duty Mobile APK para PC con tus amigos que juegan en dispositivos móviles. El juego tiene un sistema de emparejamiento cruzado que te permite jugar con otros jugadores que usan diferentes plataformas. Solo tienes que agregar a tus amigos como contactos en el juego y luego invitarlos a tu sala o unirte a la suya.

      -
    5. ¿Puedo usar trucos o hacks para jugar Call of Duty Mobile APK para PC?
    6. -

      No, no puedes usar trucos o hacks para jugar Call of Duty Mobile APK para PC. El juego tiene un sistema de detección y prevención de trampas que te puede banear permanentemente si detecta que estás usando algún software ilegal o modificado. Además, usar trucos o hacks es injusto y deshonesto con los demás jugadores que juegan limpiamente.

      -
    7. ¿Qué hacer si tengo algún problema técnico al jugar Call of Duty Mobile APK para PC?
    8. -

      Si tienes algún problema técnico al jugar Call of Duty Mobile APK para PC, puedes intentar algunas soluciones posibles, como:

      -
        -
      • Actualizar el emulador de Android a la última versión disponible.
      • -
      • Actualizar los controladores de tu tarjeta gráfica a la última versión disponible.
      • -
      • Cerrar otras aplicaciones que puedan consumir recursos de tu computadora.
      • -
      • Limpiar el caché y los datos del emulador de Android.
      • -
      • Reinstalar el emulador de Android o el juego.
      • -
      -

      Si ninguna de estas soluciones funciona, puedes contactar al servicio al cliente del emulador de Android o del juego para reportar tu problema y solicitar ayuda.

      -
    9. ¿Dónde puedo encontrar más información sobre Call of Duty Mobile APK para PC?
    10. -

      Puedes encontrar más información sobre Call of Duty Mobile APK para PC en las siguientes fuentes:

      -
        -
      • La página web oficial del juego: https://www.callofduty.com/mobile
      • -
      • La página web oficial del emulador BlueStacks: https://www.bluestacks.com/es/apps/action/call-of-duty-mobile-on-pc.html
      • -
      • La página web oficial del emulador GameLoop: https://gameloop.fun/es/game/codm
      • -
      • La página web oficial del emulador Nox Player: https://es.bignox.com/blog/como-jugar-call-of-duty-mobile-en-pc-con-noxplayer/
      • -
      • El canal de YouTube oficial del juego: https://www.youtube.com/channel/UCfO8SxUgkMcyN9FzZi6MJjQ
      • -
      • El canal de YouTube oficial del emulador BlueStacks: https://www.youtube.com/user/BlueStacksInc
      • -
      • El canal de YouTube oficial del emulador GameLoop: https://www.youtube.com/channel/UCyL2UwqjQlYNQdkZBjvzhqA
      • -
      • El canal de YouTube oficial del emulador Nox Player: https://www.youtube.com/channel/UCoXWumxCMY5hZMODF7i8rpg
      • -
      • El foro oficial del juego: https://community.callofduty.com/t5/Call-of-Duty-Mobile/bd-p/cod-mobile-forum
      • -
      • El foro oficial del emulador BlueStacks: https://support.bluestacks.com/hc/es/community/topics
      • -
      • El foro oficial del emulador GameLoop: https://gameloop.fun/es/forum/
      • -
      • El foro oficial del emulador Nox Player: https://es.bignox.com/blog/category/novedades/
      • -
      -

      Esperamos que este artículo te haya sido útil y que disfrutes de Call of Duty Mobile APK para PC. Si tienes alguna duda o comentario, no dudes en dejarnos un mensaje. ¡Hasta la próxima!

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Caustic Unlock Key APK for Android - Full Version Mode.md b/spaces/fatiXbelha/sd/Download Caustic Unlock Key APK for Android - Full Version Mode.md deleted file mode 100644 index b1a91a2458a7c0ef4f7ee54e641e7d8d4bdd8581..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Caustic Unlock Key APK for Android - Full Version Mode.md +++ /dev/null @@ -1,159 +0,0 @@ - -

      Caustic Unlock Key APK: How to Get Full Version of Caustic App for Free

      -

      If you are a music lover and a creative producer, you might have heard of Caustic app, a music creation tool inspired by rack-mount synthesizers and samplers. Caustic app allows you to create your own music tracks with up to 14 different machines, each with its own sound and effects. You can also mix, edit, sequence, and export your songs to various formats.

      -

      caustic unlock key apk


      Download File ->>> https://urllie.com/2uNBas



      -

      However, Caustic app is not free. You can download the demo version from Google Play or Amazon Appstore, but it has some limitations. You cannot save or export your songs, and you cannot access some features and effects. To unlock the full version mode, you need to purchase the Caustic Unlock Key app, which costs $9.99.

      -

      But what if you don't want to spend money on the unlock key? Is there a way to get the full version of Caustic app for free? The answer is yes, there is. You can use the Caustic Unlock Key APK, a modified version of the original unlock key app that bypasses the license verification and unlocks the full version mode of Caustic app.

      -

      In this article, we will explain what is Caustic app and why you need it, what is Caustic Unlock Key APK and how it works, how to download and install it on your device, and what are some alternatives and recommendations for using it. Let's get started!

      -

      caustic full version key apk free download
      -caustic unlock key apk mod
      -caustic unlock key apk latest version
      -caustic unlock key apk cracked
      -caustic unlock key apk no root
      -caustic full version key apk android
      -caustic unlock key apk 2023
      -caustic unlock key apk for pc
      -caustic unlock key apk rexdl
      -caustic full version key apk 1.0.1
      -caustic unlock key apk revdl
      -caustic unlock key apk uptodown
      -caustic full version key apk pure
      -caustic unlock key apk old version
      -caustic unlock key apk mirror
      -caustic full version key apk pro
      -caustic unlock key apk hack
      -caustic unlock key apk obb
      -caustic full version key apk 2022
      -caustic unlock key apk apkpure
      -caustic full version key apk modded
      -caustic unlock key apk unlimited
      -caustic unlock key apk data
      -caustic full version key apk unlocked
      -caustic unlock key apk premium
      -caustic full version key apk cracked download
      -caustic unlock key apk online
      -caustic full version key apk original
      -caustic unlock key apk offline
      -caustic full version key apk updated
      -caustic unlock key apk free purchase
      -caustic full version key apk mega mod
      -caustic unlock key apk paid
      -caustic full version key apk patched
      -caustic unlock key apk generator
      -caustic full version key apk hack download
      -caustic unlock key apk cheat
      -caustic full version key apk no ads
      -caustic unlock key apk direct link
      -caustic full version key apk unlimited money
      -caustic unlock key apk file download
      -caustic full version key apk latest update
      -caustic unlock key apk installer
      -caustic full version key apk new features
      -caustic unlock key apk android 1
      -caustic full version key apk android oyun club
      -caustic unlock key apk android republic
      -caustic full version key apk andropalace

      -

      What is Caustic App and Why You Need It

      -

      Caustic app is a mobile digital audio workstation developed by Single Cell Software. The app was released in 2012, and continued to receive updates for the next five years. There's also a Mastering app which is used to master songs made with the app.

      -

      Caustic App Features and Benefits

      -

      Caustic app has many features and benefits that make it a powerful and versatile music production tool. Some of them are:

      -
        -
      • You can create your rack by adding up to 14 machines from a choice of 10 synth types, such as Subsynth, PCMSynth, BassLine, BeatBox, PadSynth, 8BitSynth, Modular, Organ, Vocoder, and FMSynth.
      • -
      • You can customize each machine with its own parameters, effects, presets, patterns, automation curves, etc.
      • -
      • You can use the effects rack to add up to 2 effects per machine, from a selection of 16 effects types, such as Reverb, Delay, Chorus, Flanger, Phaser, Distortion, Bitcrusher, Compressor, etc.
      • -
      • You can use the mixer desk to adjust the volume, pan, mute, solo, send effects, EQ, limiter, etc. of each machine.
      • -
      • You can use the song sequencer to arrange your machines into patterns and tracks.
      • -
      • You can export your songs to WAV, Ogg or MIDI formats (full version only).
      • -
      • You can import your own WAV files or SoundFont files (.sf2) for use in the PCMSynth, BeatBox or Vocoder machines.
      • -
      • You can connect external MIDI controllers to the app and use them to control the machines and effects (full version only).
      • -
      • You can share your songs and presets with other users via the Caustic Community website.
      • -
      -

      As you can see, Caustic app is a great app for anyone who wants to create music on their mobile device, whether they are beginners or professionals. You can experiment with different sounds and styles, and unleash your creativity and imagination.

      -

      Caustic App Limitations and Drawbacks

      -

      However, Caustic app is not perfect. It has some limitations and drawbacks that you should be aware of before using it. Some of them are:

      -
        -
      • You cannot save or export your songs in the demo version. You need to buy the Caustic Unlock Key app to unlock the full version mode.
      • -
      • You cannot access some features and effects in the demo version, such as the Master section, the Stereo Reverb, the Stereo Delay, the Phaser, the Flanger, etc.
      • -
      • You cannot use more than 14 machines in your rack. This might limit your options and creativity if you want to create complex songs.
      • -
      • You cannot record audio directly into the app. You need to use another app or device to record your vocals or instruments, and then import them into the app.
      • -
      • You cannot edit the WAV files or SoundFont files that you import into the app. You need to use another app or software to edit them before importing them.
      • -
      • You cannot export your songs to MP3 format. You need to use another app or software to convert them from WAV or Ogg format.
      • -
      -

      These limitations and drawbacks might discourage some users from using Caustic app, especially if they don't want to pay for the unlock key. But don't worry, there is a solution for that. You can use the Caustic Unlock Key APK, a modified version of the original unlock key app that unlocks the full version mode of Caustic app for free.

      -

      What is Caustic Unlock Key APK and How It Works

      -

      Caustic Unlock Key APK is a hacked version of the original Caustic Unlock Key app that bypasses the license verification and unlocks the full version mode of Caustic app without paying anything. It is an APK file that you can download and install on your device, just like any other app.

      -

      Caustic Unlock Key APK Features and Benefits

      -

      Caustic Unlock Key APK has many features and benefits that make it a useful and convenient tool for Caustic app users. Some of them are:

      -
        -
      • You can save and export your songs in any format (WAV, Ogg or MIDI) without any restrictions.
      • -
      • You can access all the features and effects in the app, such as the Master section, the Stereo Reverb, the Stereo Delay, the Phaser, the Flanger, etc.
      • -
      • You can connect external MIDI controllers to the app and use them to control the machines and effects.
      • -
      • You can enjoy all the updates and improvements that are made to Caustic app by Single Cell Software.
      • -
      • You can support the developer of Caustic app by giving feedback and suggestions for future updates.
      • -
      -

      With Caustic Unlock Key APK, you can get the full version mode of Caustic app for free, and enjoy all its features and benefits without any limitations or drawbacks.

      -

      Caustic Unlock Key APK Limitations and Risks

      -

      However, Caustic Unlock Key APK is not an official product of Single Cell Software. It is a modified version of the original unlock key app that was created by unknown hackers or developers. Therefore, it has some limitations and risks that you should be aware of before using it. Some of them are:

      -
        -
      • You cannot download Caustic Unlock Key APK from Google Play or Amazon Appstore. You need to find a trusted source online that provides a safe and working link to download it.
      • -
      • You cannot update Caustic Unlock Key APK from Google Play or Amazon Appstore. You need to check online for new versions of Caustic Unlock Key APK and download them manually.
      • -
      • You cannot verify the authenticity or security of Caustic Unlock Key APK. You need to trust that it does not contain any malware or viruses that could harm your device or data.
      • -
      • You cannot guarantee the compatibility or stability of Caustic Unlock Key APK with your device or operating system. You need to test it yourself and see if it works properly or causes any errors or crashes.
      • -
      • You cannot support the developer of Caustic app financially by buying the original unlock key app. You need to consider donating to Single Cell Software if you like their app and their work.
      • -
      -

      These limitations and risks might discourage some users from using Caustic Unlock Key APK, especially if they are concerned about the legality, security, or quality of the app. But don't worry, there are some alternatives and recommendations for using it that we will discuss later in this article.

      -

      How to Download and Install Caustic Unlock Key APK on Your Device

      -

      If you have decided to use Caustic Unlock Key APK, you need to follow some steps to download and install it on your device. Here are the steps:

      -

      Step 1: Enable Unknown Sources on Your Device

      -

      Since Caustic Unlock Key APK is not available on Google Play or Amazon Appstore, you need to enable the option to install apps from unknown sources on your device. This will allow you to install apps that are not verified by Google or Amazon. To do this, go to your device settings, then security, then unknown sources, and turn it on. You might see a warning message that says installing apps from unknown sources could harm your device or data. Tap OK to proceed.

      -

      Step 2: Download Caustic Unlock Key APK from a Trusted Source

      -

      Next, you need to find a trusted source online that provides a safe and working link to download Caustic Unlock Key APK. You can search on Google or other search engines for "Caustic Unlock Key APK download" or similar keywords, and look for reputable websites or blogs that offer the link. You can also check the reviews or comments of other users who have downloaded the app from the same source. Be careful not to click on any ads or pop-ups that might redirect you to malicious or fraudulent sites. Once you have found a reliable source, tap on the download link and wait for the file to be downloaded on your device.

      -

      Step 3: Install Caustic Unlock Key APK on Your Device

      -

      After the file is downloaded, go to your device file manager and locate the Caustic Unlock Key APK file. Tap on it and you will see a prompt that asks you if you want to install the app. Tap on install and wait for the installation process to finish. You might see another warning message that says installing this app could harm your device or data. Tap OK to proceed.

      -

      Step 4: Launch Caustic App and Enjoy Full Version Mode

      -

      Once the installation is done, you can launch Caustic app from your device app drawer or home screen. You will see a message that says "Caustic Full Version Unlocked". This means that you have successfully installed Caustic Unlock Key APK and unlocked the full version mode of Caustic app. You can now enjoy all the features and benefits of the app without any limitations or drawbacks.

      -

      Caustic Unlock Key APK Alternatives and Recommendations

      -

      If you are not comfortable with using Caustic Unlock Key APK, or if you encounter any problems or issues with it, there are some alternatives and recommendations that you can try instead. Here are some of them:

      -

      Paid Alternatives to Caustic App

      -

      If you want to use a similar app to Caustic app, but with more features and options, you can try some paid alternatives that are available on Google Play or Amazon Appstore. Some of them are:

      - - - - - - - -
      NamePriceDescription
      G-Stomper Studio$12.99A full-featured music production tool with up to 24 machines, 47 effects, 16 modulation slots, etc.
      FL Studio Mobile$14.99A mobile version of the popular desktop software with up to 99 tracks, 133 instruments, 10 effects, etc.
      SunVox$5.99A modular synthesizer and tracker with up to 32 modules, 100 effects, unlimited tracks, etc.
      Nanoloop$3.99A minimalist music sequencer with up to 6 channels, 8 patterns per channel, 4 effects per channel, etc.
      KORG Gadget 2 Le$19.99A collection of over 40 gadgets (synths and drum machines) with up to 32 tracks, over 100 effects, etc.
      -

These paid alternatives offer more advanced and professional features than Caustic app, but most of them also cost more than Caustic's $9.99 unlock key. You need to decide if you are willing to pay for them or not.

      -

      Free Alternatives to Caustic App

      -

      If you want to use a similar app to Caustic app, but without paying anything, you can try some free alternatives that are available on Google Play or Amazon Appstore. Some of them are:

      - - - - - - - -
      NameDescription
      Music Maker JAMA music creation app with over 300 mix packs, 500k loops, and 100 styles.
      BandLabA social music platform with over 200 virtual instruments, 100 effects, and 10k loops.
      Walk BandA music studio app with piano, guitar, drum, bass, and other instruments.
      Audio Evolution Mobile StudioA multi-track audio recording and editing app with MIDI sequencing and virtual instruments.
      SoundtrapAn online music studio with loops, instruments, effects, and collaboration features.
      -

These free alternatives offer features similar to or different from Caustic app's, but they also have their own limitations and drawbacks. You might encounter ads, in-app purchases, registration requirements, storage limits, etc. You need to compare them and see which one suits your needs and preferences best.

      -

      Conclusion and FAQs

      -

      In conclusion, Caustic app is a mobile digital audio workstation that allows you to create your own music tracks with up to 14 different machines, each with its own sound and effects. You can also mix, edit, sequence, and export your songs to various formats. However, Caustic app is not free. You need to buy the Caustic Unlock Key app to unlock the full version mode of the app.

      -

      If you don't want to pay for the unlock key, you can use the Caustic Unlock Key APK, a modified version of the original unlock key app that bypasses the license verification and unlocks the full version mode of Caustic app for free. You can download and install it on your device following some steps. However, Caustic Unlock Key APK is not an official product of Single Cell Software. It has some limitations and risks that you should be aware of before using it.

      -

      If you are not comfortable with using Caustic Unlock Key APK, or if you encounter any problems or issues with it, you can try some alternatives and recommendations instead. You can use some paid or free alternatives to Caustic app that are available on Google Play or Amazon Appstore. You can also support the developer of Caustic app by buying the original unlock key app or donating to Single Cell Software.

      -

      We hope this article has helped you understand what is Caustic Unlock Key APK and how to use it. If you have any questions or doubts about it, you can check the FAQs below or contact us for more information.

      -

      FAQs

      -
        -
      • Q: Is Caustic Unlock Key APK legal?
      • -
      • A: Caustic Unlock Key APK is not legal. It is a hacked version of the original unlock key app that violates the terms and conditions of Single Cell Software. Using it might result in legal actions or penalties from the developer or the authorities.
      • -
      • Q: Is Caustic Unlock Key APK safe?
      • -
      • A: Caustic Unlock Key APK is not safe. It is a modified version of the original unlock key app that might contain malware or viruses that could harm your device or data. Using it might result in data loss or corruption, device damage or malfunction, privacy breach or identity theft, etc.
      • -
      • Q: Is Caustic Unlock Key APK compatible with my device?
      • -
      • A: Caustic Unlock Key APK might not be compatible with your device. It is a modified version of the original unlock key app that might not work properly or cause errors or crashes on your device. Using it might result in poor performance or instability of your device or the app.
      • -
      • Q: Is Caustic Unlock Key APK updated?
      • -
      • A: Caustic Unlock Key APK might not be updated. It is a modified version of the original unlock key app that might not receive any updates or improvements from Single Cell Software. Using it might result in outdated or missing features or effects of the app.
      • -
      • Q: Is Caustic Unlock Key APK ethical?
      • -
      • A: Caustic Unlock Key APK is not ethical. It is a modified version of the original unlock key app that does not support the developer of Caustic app financially or morally. Using it might result in unfair or disrespectful treatment of the developer or the app.
      • -
      -

      These are some of the FAQs that you might have about Caustic Unlock Key APK. If you have any other questions or doubts, feel free to contact us and we will try to answer them as soon as possible.

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Free Fire MAX Mod APK with Unlimited Diamonds for Android 2021.md b/spaces/fatiXbelha/sd/Download Free Fire MAX Mod APK with Unlimited Diamonds for Android 2021.md deleted file mode 100644 index 1aaf3875bd661ffe83687a2a0f333f39254838ee..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Free Fire MAX Mod APK with Unlimited Diamonds for Android 2021.md +++ /dev/null @@ -1,168 +0,0 @@ - - - -
      -

      Free Fire Max Mod APK Unlimited Diamonds Download for Android 2021

      -

      Are you a fan of Free Fire, the popular battle royale game that has millions of players worldwide? If yes, then you might want to try out Free Fire Max, a new version of the game that offers enhanced graphics, gameplay, and features.

      -

      But wait, there's more! You can also enjoy Free Fire Max Mod APK, a modified version of the game that gives you unlimited diamonds, unlocked characters and skins, no ads, and an anti-ban system. Sounds amazing, right?

      -

      free fire max mod apk unlimited diamonds download for android 2021


      Download File ····· https://urllie.com/2uNBTJ



      -

      In this article, I will tell you everything you need to know about Free Fire Max Mod APK, including what it is, what are its features, and how to download and install it on your Android device. So, without further ado, let's get started!

      -

      What is Free Fire Max?

      -

      Free Fire Max is an upgraded version of Free Fire that was launched in 2021 by Garena, the game developer. It is designed to provide a more immersive and realistic gaming experience for the players who want more than the original game.

      -

      Free Fire Max has improved graphics, sound effects, animations, and user interface that make the game look and feel more lifelike. It also has new game modes, maps, weapons, vehicles, and items that add more variety and fun to the gameplay. Moreover, it has cross-play functionality that allows you to play with your friends who are using Free Fire or Free Fire Max on different devices.

      -

      Free Fire Max is compatible with most Android devices that have at least 2 GB of RAM and Android 4.4 or higher. However, it requires more storage space and internet data than Free Fire, so make sure you have enough before downloading it.

      -

      What is Free Fire Max Mod APK?

      -

      Free Fire Max Mod APK is a modified version of Free Fire Max that offers some extra features and benefits that are not available in the original game. It is created by some third-party developers who modify the game files and code to unlock some premium features for free.

      -

      Free Fire Max Mod APK is not an official product of Garena, and it is not endorsed or supported by them. Therefore, it may have some risks and drawbacks, such as bugs, errors, viruses, malware, or ban issues. However, many players use it because of its amazing features and advantages.

      -

      Features of Free Fire Max Mod APK

      -

      Unlimited Diamonds

      -

      One of the most attractive features of Free Fire Max Mod APK is that it gives you unlimited diamonds, which are the premium currency in the game. Diamonds are used to buy various items and services in the game, such as characters, skins, weapons, pets, bundles, crates, spins, memberships, etc.

      -


      -

      Diamonds are normally obtained by spending real money or completing some tasks and events in the game. However, with Free Fire Max Mod APK, you can get unlimited diamonds for free without spending a dime or doing anything. You can use them to buy anything you want and customize your game as per your preference.

      -

      Unlocked Characters and Skins

      -

      Another feature of Free Fire Max Mod APK is that it unlocks all the characters and skins in the game for free. Characters are the playable avatars in the game that have different skills and abilities that affect your performance in the game. Skins are the cosmetic items that change the appearance of your characters, weapons, vehicles, etc.

      -

      Characters and skins are normally unlocked by spending diamonds or collecting fragments in the game. However, with Free Fire Max Mod APK, you can access all the characters and skins without spending any diamonds or fragments. You can choose any character you like and equip them with any skin you want.

      -

      No Ads

      -

      A feature of Free Fire Max Mod APK that many players appreciate is that it removes all the ads from the game. Ads are the pop-up messages or videos that appear on your screen while playing the game or using its features. They are meant to promote some products or services or generate revenue for the game developer.

      -

      Ads are normally annoying and distracting for many players who want to enjoy the game without interruptions. They also consume your internet data and battery power. However, with Free Fire Max Mod APK , you can play the game without any ads and enjoy its features without any hassle. You can also save your internet data and battery power.

      -

      Anti-Ban System

      -

      A feature of Free Fire Max Mod APK that many players value is that it has an anti-ban system that protects your account from getting banned by the game developer. Bans are the penalties or restrictions that are imposed on your account if you violate the game rules or use any cheats or hacks in the game.

      -

      Bans are normally bad for your account as they can result in losing your progress, items, diamonds, or even your account permanently. They can also affect your reputation and credibility among other players. However, with Free Fire Max Mod APK, you can avoid bans and play the game safely and securely. The anti-ban system hides your identity and activity from the game server and prevents any detection or suspicion.

      -

      How to Download and Install Free Fire Max Mod APK?

      -

      Now that you know what Free Fire Max Mod APK is and what are its features, you might be wondering how to download and install it on your Android device. Well, don't worry, because I will guide you through the process step by step. Just follow these simple steps and you will be able to enjoy Free Fire Max Mod APK in no time.

      -

      Download Sources for Free Fire Max Mod APK

      -

      Official Website

      -

      The first and most reliable source to download Free Fire Max Mod APK is the official website of the game developer, Garena. This is where you can find the latest and updated version of the modded game, as well as other information and support. To download Free Fire Max Mod APK from the official website, follow these steps:

      -
        -
      1. Go to https://ffmax.garena.com/, which is the official website of Free Fire Max.
      2. Scroll down to the bottom of the page and click on the "Download" button.
      3. Select the "APK Download" option and wait for the download to start.
      4. Once the download is complete, you will have the Free Fire Max Mod APK file on your device.
      -

      Third-Party Websites

      -

      The second source to download Free Fire Max Mod APK is third-party websites that offer modded games for free. These are websites that are not affiliated with Garena or Free Fire Max, but they provide their own versions of the modded game with different features and modifications. However, you should be careful when using these websites, as some of them may contain viruses, malware, or fake links that can harm your device or steal your data. To download Free Fire Max Mod APK from third-party websites, follow these steps:

      -
        -
      1. Search for "Free Fire Max Mod APK" on any web browser or search engine.
      2. Look for a website that has a good reputation, positive reviews, and high ratings from other users.
      3. Visit the website and read its description, features, and instructions carefully.
      4. Click on the "Download" button or link and wait for the download to start.
      5. Once the download is complete, you will have the Free Fire Max Mod APK file on your device.
      -


      Torrent Sites

      -

      The third source to download Free Fire Max Mod APK is torrent sites that offer peer-to-peer file sharing. These are websites that allow you to download files from other users who have the same files on their devices. Torrent sites are usually faster and more reliable than other sources, as they have multiple sources and seeds for the same file. However, you should also be careful when using these websites, as some of them may contain illegal, pirated, or corrupted files that can violate the game rules or damage your device. To download Free Fire Max Mod APK from torrent sites, follow these steps:

      -
        -
      1. Download and install a torrent client on your device, such as uTorrent, BitTorrent, or Flud.
      2. Search for "Free Fire Max Mod APK" on any web browser or search engine.
      3. Look for a torrent site that has a good reputation, positive reviews, and high ratings from other users.
      4. Visit the torrent site and read its description, features, and instructions carefully.
      5. Click on the "Download" button or link and wait for the torrent file to download.
      6. Open the torrent file with your torrent client and wait for the Free Fire Max Mod APK file to download.
      7. Once the download is complete, you will have the Free Fire Max Mod APK file on your device.
      -

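      Because modded APKs from third-party sites and torrents carry a real risk of tampering or malware, it is worth verifying the downloaded file against a checksum if the source publishes one, before you even think about installing it. Below is a minimal Python sketch of such a check; the file name and expected hash are placeholders, not values from any real download.

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Placeholder values: use your actual file name and the checksum
# published by the site you downloaded from (if it provides one).
apk_path = "free_fire_max_mod.apk"
expected = "0000000000000000000000000000000000000000000000000000000000000000"

actual = sha256_of(apk_path)
print("Checksum OK" if actual == expected else f"MISMATCH: {actual}")
```

      If the digests do not match, the file was corrupted or altered in transit and should not be installed. Keep in mind that a matching checksum only proves the file is the one the site intended to distribute; it says nothing about whether that file itself is safe.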

      Installation Steps for Free Fire Max Mod APK

      -

      Enable Unknown Sources

      -

      The first step to install Free Fire Max Mod APK on your Android device is to enable unknown sources. This is a setting that allows you to install apps from outside the Google Play Store, which is where Free Fire Max Mod APK comes from. To enable unknown sources, follow these steps:

      -
        -
      1. Go to your device's settings and look for the security or privacy option.
      2. Find the option that says "Unknown sources" or "Install unknown apps" and toggle it on.
      3. A warning message will pop up, telling you about the risks of installing unknown apps. Tap on "OK" or "Allow" to proceed.
      4. You have now enabled unknown sources on your device and can install Free Fire Max Mod APK.
      -

      Enable unknown sources

      -

      Locate and Open the APK File

      -

      The second step to install Free Fire Max Mod APK on your Android device is to locate and open the APK file. This is the file that contains the modded game and its features. To locate and open the APK file, follow these steps:

      -
        -
      1. Go to your device's file manager and look for the folder where you downloaded the APK file. It could be in your downloads, documents, or any other folder.
      2. Find the APK file that has the name "Free Fire Max Mod APK" or something similar and tap on it.
      3. A message will pop up, asking you if you want to install this app. Tap on "Install" to proceed.
      4. You have now opened the APK file and can install Free Fire Max Mod APK.
      -

      Open APK file

      Follow the Installation Wizard

      -

      The third step to install Free Fire Max Mod APK on your Android device is to follow the installation wizard. This is the process that guides you through the installation of the app and its features. To follow the installation wizard, follow these steps:

      -
        -
      1. After tapping on "Install", the installation wizard will start and show you the progress of the installation.
      2. Wait for a few seconds or minutes until the installation is complete.
      3. A message will pop up, telling you that the app has been installed. Tap on "Open" to launch the game or "Done" to exit the installation wizard.
      4. You have now followed the installation wizard and can launch Free Fire Max Mod APK.
      -

      Follow installation wizard

      -

      Launch the Game and Enjoy

      -

      The final step to install Free Fire Max Mod APK on your Android device is to launch the game and enjoy its features. This is where you can experience the modded game and its benefits. To launch the game and enjoy, follow these steps:

      -
        -
      1. Go to your device's app drawer and look for the icon of Free Fire Max Mod APK. It should have a different logo and color than the original game.
      2. Tap on the icon to launch the game. It may take some time to load and initialize.
      3. Once the game is loaded, you will see a welcome screen with some options. Tap on "Start" to enter the game or "Settings" to adjust some preferences.
      4. You have now launched Free Fire Max Mod APK and can enjoy its features.
      -

      Launch game and enjoy

      -

      Conclusion

      -

      Free Fire Max Mod APK is a great way to enjoy Free Fire Max with some extra features and benefits that are not available in the original game. It gives you unlimited diamonds, unlocked characters and skins, no ads, and an anti-ban system that make your gaming experience more fun and exciting.

      -

      To download and install Free Fire Max Mod APK on your Android device, you can use any of the sources mentioned above, such as the official website, third-party websites, or torrent sites. You can also follow the installation steps provided above, such as enabling unknown sources, locating and opening the APK file, following the installation wizard, and launching the game.

      -

      However, you should also be aware of the risks and drawbacks of using Free Fire Max Mod APK, such as bugs, errors, viruses, malware, or ban issues. You should also respect the game rules and other players' rights and privacy. You should not use Free Fire Max Mod APK for any illegal or unethical purposes.

      -

      I hope this article has helped you understand what Free Fire Max Mod APK is and how to download and install it on your Android device. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy gaming!

      -

      FAQs

      -

      Here are some frequently asked questions about Free Fire Max Mod APK:

      -
        -
      • Is Free Fire Max Mod APK compatible with my device?
      • -

        Free Fire Max Mod APK is compatible with most Android devices that have at least 2 GB of RAM and Android 4.4 or higher. However, it may not work properly on some devices due to different specifications or configurations. You can check your device's compatibility by visiting https://ffmax.garena.com/compatibility.html.

        -
      • Is Free Fire Max Mod APK safe to use?
      • -

        Free Fire Max Mod APK is generally safe to use if you download it from a reliable source and scan it with an antivirus before installing it. However, it may still have some risks and drawbacks, such as bugs, errors, viruses, malware, or ban issues. You should use it at your own risk and discretion.

        -
      • Is Free Fire Max Mod APK legal to use?
      • -

        Free Fire Max Mod APK is not legal to use as it violates the terms of service and intellectual property rights of Garena and Free Fire Max. It is also considered as cheating or hacking in the game, which can result in penalties or bans from the game developer. You should use it only for educational or entertainment purposes.

        -
      • How to update Free Fire Max Mod APK?
      • -

        To update Free Fire Max Mod APK, you need to download and install the latest version of the modded game from any of the sources mentioned above. You may also need to uninstall the previous version of the modded game before installing the new one. You may also need to back up your game data and settings before updating, as they may be lost or overwritten during the process.

        -
      • How to uninstall Free Fire Max Mod APK?
      • -

        To uninstall Free Fire Max Mod APK, follow the same steps as you would for any other app on your Android device:

        -
          -
        1. Go to your device's settings and look for the apps or applications option.
        2. Find and tap on the app that has the name "Free Fire Max Mod APK" or something similar.
        3. A screen will pop up, showing you some information and options about the app. Tap on "Uninstall" to remove the app from your device.
        4. A confirmation message will pop up, asking you if you want to uninstall this app. Tap on "OK" or "Yes" to proceed.
        5. You have now uninstalled Free Fire Max Mod APK from your device.
        -

        Uninstall app

        -
        -
        \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download iGO Primo Truck and Get Access to Thousands of POIs Speedcams and Road Signs.md b/spaces/fatiXbelha/sd/Download iGO Primo Truck and Get Access to Thousands of POIs Speedcams and Road Signs.md deleted file mode 100644 index d4e3cb03ecb3fc94a007e95444f2614ad673ff95..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download iGO Primo Truck and Get Access to Thousands of POIs Speedcams and Road Signs.md +++ /dev/null @@ -1,163 +0,0 @@ - -

        iGO Primo Truck Download: A Guide for Truck Drivers

        -

        If you are a truck driver, you know how important it is to have a reliable and accurate navigation system. You need a system that can guide you to your destination safely and efficiently, taking into account your vehicle size, weight, and load type. You also need a system that can help you avoid traffic jams, road closures, tolls, and other obstacles that can affect your journey.

        -

        That's why you need iGO Primo Truck, navigation software designed specifically for truck drivers. It runs on various devices, such as car stereos, smartphones, tablets, or dedicated GPS units, and uses the latest maps and data from reputable sources, such as HERE, TomTom, or Navteq. It also offers many features and benefits that make it one of the best navigation solutions for truck drivers.

        -

        Download File: https://urllie.com/2uNA0J



        -

        In this article, we will show you how to download and install iGO Primo Truck on your device, how to use it effectively, and how to update it regularly. We will also answer some of the most frequently asked questions about iGO Primo Truck. By the end of this article, you will have a clear understanding of how iGO Primo Truck can help you improve your driving experience and performance.

        -

        What is iGO Primo Truck?

        -

        iGO Primo Truck is navigation software based on the popular iGO Primo software, with modifications and enhancements for truck drivers. It is developed by NNG, a Hungarian company that specializes in navigation solutions, and is compatible with various devices and platforms, such as Windows CE, Android, iOS, or Linux.

        -

        iGO Primo Truck uses vector-based maps that are stored on your device's memory card. This means that you don't need an internet connection to use it, which can save you money and data. However, you can also use online services, such as live traffic information or weather forecasts, if you have an internet connection available.

        -

        iGO Primo Truck also uses a sophisticated routing algorithm that calculates the best route for your vehicle type and preferences. It takes into account factors such as road width, height clearance, weight limit, load type, speed limit, tolls, ferries, and more. It also allows you to customize your route by adding waypoints, avoiding roads or areas, or changing the order of your destinations.

        -

        Features and benefits of iGO Primo Truck

        -

        Some of the features and benefits of iGO Primo Truck are:

        -
          -
        • Truck-specific routing: iGO Primo Truck calculates the optimal route for your vehicle profile and preferences. It avoids roads or areas that are not suitable for your vehicle size, weight, or load type, and it warns you of any road restrictions or hazards ahead.
        • Truck-specific POIs: iGO Primo Truck provides you with a database of points of interest (POIs) that are relevant for truck drivers. These include truck stops, gas stations, parking lots, rest areas, weigh stations, repair shops, and more. You can also search for POIs along your route or on the map.
        • TTS function: iGO Primo Truck supports text-to-speech (TTS), which means that it can read aloud the street names and directions in 28 languages. This can help you keep your eyes on the road and avoid distractions.
        • Tunnel view: iGO Primo Truck switches to a simplified view when you enter a tunnel.
        • Weather forecasts: iGO Primo Truck shows current and forecasted weather conditions for your location and destination, and warns you of any severe weather alerts or hazards. This feature requires an internet connection and a subscription to the iGO Weather service.
        • Customizable interface: iGO Primo Truck allows you to customize the interface according to your preferences. You can change the map view, the color scheme, the font size, the units, the voice, and more. You can also create your own skins and icons.
        -

        How to download and install iGO Primo Truck

        -

        To download and install iGO Primo Truck on your device, you need to follow these steps:

        -
          -
        1. Check the compatibility of your device: iGO Primo Truck is compatible with various devices and platforms, but not all of them. You need to check the specifications of your device and make sure it meets the minimum requirements for iGO Primo Truck. You can find the compatibility list on the official website of iGO Primo Truck or on the online store where you want to buy it.
        2. Purchase iGO Primo Truck: You can purchase iGO Primo Truck from various online stores, such as Amazon, eBay, or AliExpress. You can also buy it directly from the official website of iGO Primo Truck or from authorized dealers. The price may vary depending on the store, the region, and the version of iGO Primo Truck. You can choose between a full version or an update version, depending on whether you already have a previous version of iGO Primo or not.
        3. Download iGO Primo Truck: After you purchase iGO Primo Truck, you will receive a download link or a code to access the download page. You need to download the installation file or the zip file of iGO Primo Truck to your computer. The file size may vary depending on the version and the maps that are included.
        4. Install iGO Primo Truck: To install iGO Primo Truck on your device, transfer the installation file or the zip file to your device's memory card. You can use a USB cable, a card reader, or a wireless connection to do this. Then, run the installation file or unzip the zip file on your device. You may need to follow some instructions on your device's screen to complete the installation process.
        5. Activate iGO Primo Truck: To activate iGO Primo Truck on your device, enter the license code that you received when you purchased it. You can find the license code in your email confirmation, in your online account, or on a sticker that came with your product. Enter the license code in the activation menu of iGO Primo Truck. You may also need to register your product online to activate some features or services.
        -

        How to use iGO Primo Truck

        -

        To use iGO Primo Truck effectively, you need to follow these steps:

        -

        How to set up your vehicle profile

        -

        The first thing you need to do before using iGO Primo Truck is to set up your vehicle profile. This is important because it will affect how iGO Primo Truck calculates your route and displays information on your screen. To set up your vehicle profile, you need to follow these steps:

        -

        -
          -
      1. Go to the settings menu: On the main screen of iGO Primo Truck, tap on the settings icon (the gear symbol) in the bottom right corner. This will open the settings menu.
      2. Select vehicle type: In the settings menu, tap on "Vehicle type". This will open a list of vehicle types that you can choose from. Select "Truck" as your vehicle type.
      3. Edit vehicle parameters: After you select "Truck" as your vehicle type, tap on "Edit". This will open a screen where you can edit various parameters of your vehicle, such as length, width, height, weight, axle load, trailer type, load type, hazardous materials, and more. Enter the correct values for each parameter according to your vehicle specifications (a rough illustration of what these parameters capture is sketched after this list).
      4. Save vehicle profile: After you edit all the parameters of your vehicle, tap on "Save". This will save your vehicle profile and return you to the settings menu. You can also name your vehicle profile and create multiple profiles for different vehicles.
        -
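      To make the role of these parameters concrete, here is a small illustrative Python sketch of a truck profile and a check against a single road restriction. This is not iGO's actual data model or API; all names, fields, and values are assumptions chosen for the example.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class TruckProfile:
    """Illustrative vehicle profile; field names are assumptions, not iGO's API."""
    length_m: float
    width_m: float
    height_m: float
    gross_weight_t: float
    axle_load_t: float
    trailer: bool = False
    load_type: str = "general"  # e.g. "general" or "hazardous"


@dataclass
class RoadRestriction:
    """Limits posted on one road segment; None means no limit posted."""
    max_height_m: Optional[float] = None
    max_weight_t: Optional[float] = None
    max_width_m: Optional[float] = None
    hazmat_forbidden: bool = False


def is_passable(road: RoadRestriction, truck: TruckProfile) -> bool:
    """Return True only if the truck satisfies every posted limit on this road."""
    if road.max_height_m is not None and truck.height_m > road.max_height_m:
        return False
    if road.max_weight_t is not None and truck.gross_weight_t > road.max_weight_t:
        return False
    if road.max_width_m is not None and truck.width_m > road.max_width_m:
        return False
    if road.hazmat_forbidden and truck.load_type == "hazardous":
        return False
    return True


# Example: a 4.0 m tall truck cannot use a road with a 3.8 m clearance.
truck = TruckProfile(length_m=16.5, width_m=2.55, height_m=4.0,
                     gross_weight_t=40.0, axle_load_t=11.5, trailer=True)
low_bridge = RoadRestriction(max_height_m=3.8)
print(is_passable(low_bridge, truck))  # False -> the route should avoid this road
```

      A truck-aware router effectively applies a check like this to every road segment it considers, which is why entering accurate profile values matters: an understated height or weight can put you on a road your vehicle is not allowed to use.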

        How to plan your route

        -

        After you set up your vehicle profile, you can start planning your route with iGO Primo Truck. To plan your route, you need to follow these steps:

        -
          -
        1. Go to the navigation menu: On the main screen of iGO Primo Truck, tap on the navigation icon (the arrow symbol) in the bottom left corner. This will open the navigation menu.
        2. Select destination: In the navigation menu, tap on "Find". This will open a screen where you can select your destination by entering an address, a city, a POI, a coordinate, a contact, a history item, or a favorite. You can also select your destination by tapping on the map or by using voice commands.
        3. Add waypoints: If you want to add waypoints to your route, tap on "Add as waypoint" after you select a destination. You can add up to 100 waypoints to your route, and you can reorder or delete them by tapping on "Edit" in the navigation menu.
        4. Calculate route: After you select your destination and add any waypoints, tap on "Go". This will calculate your route based on your vehicle profile and preferences. You can also change your preferences by tapping on "More" in the navigation menu: the route type (fastest, shortest, economical, or easy), the avoidances (tolls, ferries, highways, etc.), and the map view (2D or 3D).
        5. Start navigation: After you calculate your route, tap on "Start". This will start the navigation mode and guide you to your destination with voice and visual instructions. You can also view the route summary, the route overview, the itinerary, or the alternative routes by tapping on the icons at the bottom of the screen.
        -

        How to avoid traffic and road restrictions

        -

        One of the advantages of iGO Primo Truck is that it can help you avoid traffic and road restrictions that can affect your journey. To do this, you need to follow these steps:

        -
          -
        1. Enable live traffic information: To receive live traffic information, you need an internet connection and a subscription to the iGO Traffic service. Enable it by tapping on "Settings" in the main screen, then "Online services", then "Traffic". You can also adjust the settings of live traffic information, such as the update frequency, the alert distance, and the alert sound.
        2. Avoid traffic delays: When live traffic information is enabled, iGO Primo Truck displays the traffic conditions on your route with different colors: green for free flow, yellow for moderate flow, red for heavy flow, and black for closed roads. If there is a traffic delay on your route, iGO Primo Truck will notify you and suggest an alternative route that is faster or shorter. You can accept or reject the alternative route by tapping on "Yes" or "No".
        3. Avoid road restrictions: When you set up your vehicle profile, iGO Primo Truck automatically avoids roads or areas that are not suitable for your vehicle size, weight, or load type. It also warns you of any road restrictions or hazards ahead, such as low bridges, narrow roads, or weight limits. If there is a road restriction on your route that cannot be avoided, iGO Primo Truck will notify you and ask you to confirm whether you want to continue. You can confirm or cancel by tapping on "OK" or "Cancel".
        -

        How to update iGO Primo Truck

        -

        To keep iGO Primo Truck running smoothly and efficiently, you need to update it regularly. Updating iGO Primo Truck involves updating the maps and software. To do this, you need to follow these steps:

        -

        How to download the latest maps and software

        -

        To download the latest maps and software for iGO Primo Truck, you need to follow these steps:

        -
          -
        1. Connect your device to your computer: To download the latest maps and software for iGO Primo Truck, connect your device to your computer using a USB cable or a wireless connection.
        2. Download Naviextras Toolbox: Naviextras Toolbox is free software that allows you to manage and update iGO Primo Truck on your device. You can download it from https://www.naviextras.com/. You need to create an account and register your device to use Naviextras Toolbox.
        3. Launch Naviextras Toolbox: After you download and install Naviextras Toolbox on your computer, launch it and log in with your account. Naviextras Toolbox will automatically detect your device and display its information and status.
        4. Check for updates: In Naviextras Toolbox, click on "Updates" in the left menu. This will show you the available updates for iGO Primo Truck on your device. You can see the details of each update, such as the size, the date, and the content.
        5. Select and download updates: To select an update, click on the checkbox next to it. You can select multiple updates at once. To download the selected updates, click on "Install". This will start the download process and show you the progress and the remaining time.
        -

        How to use the iGO Toolbox

        -

        To use the iGO Toolbox, you need to follow these steps:

        -
          -
        1. Connect your device to your computer: To use the iGO Toolbox, connect your device to your computer using a USB cable or a wireless connection.
        2. Launch iGO Primo Truck: After you connect your device to your computer, launch iGO Primo Truck on your device. This will open the main screen of iGO Primo Truck.
        3. Go to the settings menu: On the main screen of iGO Primo Truck, tap on the settings icon (the gear symbol) in the bottom right corner. This will open the settings menu.
        4. Select iGO Toolbox: In the settings menu, tap on "iGO Toolbox". This will open a screen where you can access various tools and features of iGO Primo Truck.
        5. Use the tools and features: In the iGO Toolbox screen, you can use various tools and features, such as backup and restore, map management, POI management, route management, and skin management. You can also access the user manual, the support center, and the feedback form from this screen.
        -

        How to troubleshoot common issues

        -

        Sometimes, you may encounter some issues or problems when using iGO Primo Truck. To troubleshoot common issues, you can try these solutions:

        -
          -
        • If iGO Primo Truck does not start or crashes: You can try to restart your device or reinstall iGO Primo Truck. You can also check if there is enough free space on your device's memory card or if there is any damage or corruption on it. You can also contact the support center or visit the online forum for more help.
        • -
        • If iGO Primo Truck does not calculate or display the route correctly: You can try to update your maps and software or check your vehicle profile and preferences. You can also check if there is any interference or obstruction with your GPS signal or if there is any error or discrepancy with the map data. You can also report any map errors or feedback from the iGO Toolbox screen.
        • -
        • If iGO Primo Truck does not provide voice or sound instructions: You can try to adjust the volume or mute settings of your device or iGO Primo Truck. You can also check if there is any problem with your device's speaker or headphone jack or if there is any missing or corrupted voice file. You can also download and install different voices from Naviextras Toolbox.
        • -
        -

        Conclusion

        -

        Summary of the main points

        -

        iGO Primo Truck is navigation software designed specifically for truck drivers, and its feature set makes it one of the best navigation solutions for the job. It uses vector-based maps that are stored on your device's memory card and do not require an internet connection, together with a sophisticated routing algorithm that calculates the best route for your vehicle profile and preferences. It also provides truck-specific POIs, a TTS function, tunnel view, junction view, lane guidance, speed limit and camera alerts, live traffic information, weather forecasts, and more.

        -

        Call to action

        -

        If you are a truck driver who wants to improve your driving experience and performance, you should download and install iGO Primo Truck on your device today. You can purchase iGO Primo Truck from various online stores or from authorized dealers. You can also visit https://www.naviextras.com/ for more information and support. You will not regret it!

        -

        FAQs

        -

        Here are some of the most frequently asked questions about iGO Primo Truck:

        -
          -
        1. What devices are compatible with iGO Primo Truck?

          iGO Primo Truck is compatible with various devices and platforms, such as Windows CE, Android, iOS, or Linux. However, not all devices and platforms are compatible with iGO Primo Truck. You need to check the specifications of your device and make sure it meets the minimum requirements for iGO Primo Truck. You can find the compatibility list on the official website of iGO Primo Truck or on the online store where you want to buy it.

        2. How much does iGO Primo Truck cost?

          The price of iGO Primo Truck may vary depending on the store, the region, and the version of iGO Primo Truck. You can choose between a full version or an update version, depending on whether you already have a previous version of iGO Primo or not. You can also choose between different map packages, such as Europe, North America, South America, Asia, Africa, or Australia. You can find the price list on the official website of iGO Primo Truck or on the online store where you want to buy it.

        3. How often do I need to update iGO Primo Truck?

          You need to update iGO Primo Truck regularly to keep it running smoothly and efficiently. Updating iGO Primo Truck involves updating the maps and software. You can download the latest maps and software from Naviextras Toolbox, a free software that allows you to manage and update iGO Primo Truck on your device. You can also use the iGO Toolbox, a feature that allows you to access various tools and features of iGO Primo Truck on your device.

        4. What are the advantages of iGO Primo Truck over other navigation software?

          iGO Primo Truck has many advantages over other navigation software, such as:

          -
            -
          • It is designed specifically for truck drivers, with truck-specific routing, POIs, and alerts.
          • -
          • It uses vector-based maps that are stored on your device's memory card and do not require an internet connection.
          • -
          • It offers many features and benefits that make it one of the best navigation solutions for truck drivers, such as TTS function, tunnel view, junction view, lane guidance, speed limit and camera alerts, live traffic information, weather forecasts, and more.
          • -
          • It is compatible with various devices and platforms, such as Windows CE, Android, iOS, or Linux.
          • -
          • It is easy to use and customize according to your preferences.
          • -
        5. Where can I get more information and support for iGO Primo Truck?

          You can get more information and support for iGO Primo Truck from the official Naviextras website at https://www.naviextras.com/, from the user manual and support center accessible through the iGO Toolbox, or from the store or authorized dealer where you purchased the product.

          -

        -
        -
        \ No newline at end of file diff --git a/spaces/fb700/chat3/docs/README_EN.md b/spaces/fb700/chat3/docs/README_EN.md deleted file mode 100644 index 537b430d8fb100d10d6607e088b03ddd3f76229e..0000000000000000000000000000000000000000 --- a/spaces/fb700/chat3/docs/README_EN.md +++ /dev/null @@ -1,294 +0,0 @@ -# ChatGPT Academic Optimization -> **Note** -> -> This English readme is automatically generated by the markdown translation plugin in this project, and may not be 100% correct. -> - - -**If you like this project, please give it a star. If you have come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request (to the `dev` branch).** - -> **Note** -> -> 1. Please note that only function plugins (buttons) marked in **red** support reading files, and some plugins are located in the **dropdown menu** in the plugin area. Additionally, we welcome and process PRs for any new plugins with the **highest priority**! -> -> 2. The functions of each file in this project are detailed in the self-translation report [self_analysis.md](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). With the version iteration, you can click on a relevant function plugin at any time to call GPT to regenerate the self-analysis report for the project. Commonly asked questions are summarized in the [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). -> -> 3. If you are not used to the function, comments or interface with some Chinese names, you can click on the relevant function plugin at any time to call ChatGPT to generate the source code of the project in English. - -
        - -Function | Description ---- | --- -One-click refinement | Supports one-click refinement, one-click searching for grammatical errors in papers. -One-click translation between Chinese and English | One-click translation between Chinese and English. -One-click code interpretation | Can correctly display and interpret the code. -[Custom shortcuts](https://www.bilibili.com/video/BV14s4y1E7jN) | Supports custom shortcuts. -[Configure proxy server](https://www.bilibili.com/video/BV1rc411W7Dr) | Supports configuring proxy server. -Modular design | Supports custom high-order experimental features and [function plug-ins], and plug-ins support [hot update](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97). -[Self-program analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function Plug-in] [One-Key Understanding](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) the source code of this project. -[Program analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function Plug-in] One-click can analyze other Python/C/C++/Java/Golang/Lua/Rect project trees. -Read papers | [Function Plug-in] One-click reads the full text of a latex paper and generates an abstract. -Latex full-text translation/refinement | [Function Plug-in] One-click translates or refines a latex paper. -Batch annotation generation | [Function Plug-in] One-click generates function annotations in batches. -Chat analysis report generation | [Function Plug-in] Automatically generate summary reports after running. -[Arxiv assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function Plug-in] Enter the arxiv paper url and you can translate the abstract and download the PDF with one click. -[PDF paper full-text translation function](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function Plug-in] Extract title and abstract of PDF papers + translate full text (multi-threaded). -[Google Scholar integration assistant](https://www.bilibili.com/video/BV19L411U7ia) (Version>=2.45) | [Function Plug-in] Given any Google Scholar search page URL, let GPT help you choose interesting articles. -Formula display | Can simultaneously display the tex form and rendering form of formulas. -Image display | Can display images in Markdown. -Multithreaded function plug-in support | Supports multi-threaded calling of chatgpt, one-click processing of massive texts or programs. -Support for markdown tables output by GPT | Can output markdown tables that support GPT. -Start dark gradio theme [theme](https://github.com/binary-husky/chatgpt_academic/issues/173) | Add ```/?__dark-theme=true``` to the browser URL to switch to the dark theme. -Huggingface free scientific online experience](https://huggingface.co/spaces/qingxu98/gpt-academic) | After logging in to Huggingface, copy [this space](https://huggingface.co/spaces/qingxu98/gpt-academic). -[Mixed support for multiple LLM models](https://www.bilibili.com/video/BV1EM411K7VH/) ([v3.0 branch](https://github.com/binary-husky/chatgpt_academic/tree/v3.0) in testing) | It must feel great to be served by both ChatGPT and [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B)! -Compatible with [TGUI](https://github.com/oobabooga/text-generation-webui) to access more language models | Access to opt-1.3b, galactica-1.3b and other models ([v3.0 branch](https://github.com/binary-husky/chatgpt_academic/tree/v3.0) under testing). -… | ... - -
        - - -- New interface (modify the `LAYOUT` option in `config.py` to switch between "left and right layout" and "up and down layout"). -
        - -
        - -- All buttons are dynamically generated by reading `functional.py`, and custom functions can be added freely, freeing up the clipboard. -
        - -
        - -- Refinement/Correction -
        - -
        - -- Supports markdown tables output by GPT. -
        - -
        - -- If the output contains formulas, both the tex form and the rendering form are displayed simultaneously for easy copying and reading. -
        - -
        - -- Don't want to read project code? Let chatgpt boast about the whole project. -
        - -
        - -- Multiple large language models mixed calling. ([v3.0 branch](https://github.com/binary-husky/chatgpt_academic/tree/v3.0) in testing) - - -## Running Directly (Windows, Linux or MacOS) - -### 1. Download the Project -```sh -git clone https://github.com/binary-husky/chatgpt_academic.git -cd chatgpt_academic -``` - -### 2. Configure API_KEY and Proxy Settings - -In `config.py`, configure the overseas Proxy and OpenAI API KEY, as follows: -``` -1. If you are in China, you need to set an overseas proxy to use the OpenAI API smoothly. Please read the instructions in config.py carefully (1. Modify the USE_PROXY to True; 2. Modify the proxies according to the instructions). -2. Configure OpenAI API KEY. You need to register on the OpenAI official website and obtain an API KEY. Once you get the API KEY, configure it in the config.py file. -3. Issues related to proxy network (network timeout, proxy not working) are summarized to https://github.com/binary-husky/chatgpt_academic/issues/1 -``` -(Note: When the program is running, it will first check whether there is a private configuration file named `config_private.py`, and use the configuration in it to overwrite the same name configuration in `config.py`. Therefore, if you can understand our configuration reading logic, we strongly recommend that you create a new configuration file next to `config.py` named `config_private.py` and transfer (copy) the configuration in `config.py` to `config_private.py`. `config_private.py` is not managed by Git, which can make your privacy information more secure.) - -### 3. Install Dependencies -```sh -# (Option 1) Recommended -python -m pip install -r requirements.txt - -# (Option 2) If you use anaconda, the steps are also similar: -# (Option 2.1) conda create -n gptac_venv python=3.11 -# (Option 2.2) conda activate gptac_venv -# (Option 2.3) python -m pip install -r requirements.txt - -# Note: Use the official pip source or the Ali pip source. Other pip sources (such as some university pips) may have problems. Temporary substitution method: -# python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ -``` - -### 4. Run -```sh -python main.py -``` - -### 5. Test Experimental Features -``` -- Test C++ Project Header Analysis - In the input area, enter `./crazy_functions/test_project/cpp/libJPG` , and then click "[Experiment] Parse the entire C++ project (input inputs the root path of the project)" -- Test Writing Abstracts for Latex Projects - In the input area, enter `./crazy_functions/test_project/latex/attention` , and then click "[Experiment] Read the tex paper and write an abstract (input inputs the root path of the project)" -- Test Python Project Analysis - In the input area, enter `./crazy_functions/test_project/python/dqn` , and then click "[Experiment] Parse the entire py project (input inputs the root path of the project)" -- Test Self-code Interpretation - Click "[Experiment] Please analyze and deconstruct this project itself" -- Test Experimental Function Template (asking GPT what happened in history today), you can implement more complex functions based on this template function - Click "[Experiment] Experimental function template" -``` - -## Use Docker (Linux) - -``` sh -# Download Project -git clone https://github.com/binary-husky/chatgpt_academic.git -cd chatgpt_academic -# Configure Overseas Proxy and OpenAI API KEY -Configure config.py with any text editor -# Installation -docker build -t gpt-academic . 
-# Run -docker run --rm -it --net=host gpt-academic - -# Test Experimental Features -## Test Self-code Interpretation -Click "[Experiment] Please analyze and deconstruct this project itself" -## Test Experimental Function Template (asking GPT what happened in history today), you can implement more complex functions based on this template function -Click "[Experiment] Experimental function template" -## (Please note that when running in docker, you need to pay extra attention to file access rights issues of the program.) -## Test C++ Project Header Analysis -In the input area, enter ./crazy_functions/test_project/cpp/libJPG , and then click "[Experiment] Parse the entire C++ project (input inputs the root path of the project)" -## Test Writing Abstracts for Latex Projects -In the input area, enter ./crazy_functions/test_project/latex/attention , and then click "[Experiment] Read the tex paper and write an abstract (input inputs the root path of the project)" -## Test Python Project Analysis -In the input area, enter ./crazy_functions/test_project/python/dqn , and then click "[Experiment] Parse the entire py project (input inputs the root path of the project)" - -``` - -## Other Deployment Methods -- Use WSL2 (Windows Subsystem for Linux subsystem) -Please visit [Deploy Wiki-1] (https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2) - -- nginx remote deployment -Please visit [Deploy Wiki-2] (https://github.com/binary-husky/chatgpt_academic/wiki/%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E7%9A%84%E6%8C%87%E5%AF%BC) - - -## Customizing New Convenient Buttons (Academic Shortcut Key Customization) -Open functional.py and add the entry as follows, and then restart the program. (If the button has been successfully added and is visible, both the prefix and suffix support hot modification and take effect without restarting the program.) - -For example, -``` -"Super English to Chinese Translation": { - - # Prefix, which will be added before your input. For example, it is used to describe your requirements, such as translation, code interpretation, polishing, etc. - "Prefix": "Please translate the following content into Chinese, and then use a markdown table to explain each proprietary term in the text:\n\n", - - # Suffix, which will be added after your input. For example, in conjunction with the prefix, you can bracket your input in quotes. - "Suffix": "", - -}, -``` -
        - -
        - - -If you invent a more user-friendly academic shortcut key, welcome to post an issue or pull request! - -## Configure Proxy -### Method 1: General Method -Modify the port and proxy software corresponding in ```config.py``` - -
        - - -
        - - -After configuring, you can use the following command to test whether the proxy works. If everything is normal, the code below will output the location of your proxy server: - -``` -python check_proxy.py -``` - -### Method Two: Pure Beginner Tutorial -[Pure Beginner Tutorial](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89) - -## Compatibility Testing - -### Image Display: - -
        - -
        - - -### If the program can read and analyze itself: - -
        - -
        - -
        - -
        - -### Any other Python/Cpp project analysis: -
        - -
        - -
        - -
        - -### Latex paper reading comprehension and abstract generation with one click -
        - -
        - -### Automatic Report Generation -
        - - - -
        - -### Modular Function Design -
        - - -
        - - -### Translating source code to English - -
        - -
        - -## Todo and Version Planning: - -- version 3 (Todo): -- - Support for gpt4 and other llm -- version 2.4+ (Todo): -- - Summary of long text and token overflow problems in large project source code -- - Implementation of project packaging and deployment -- - Function plugin parameter interface optimization -- - Self-updating -- version 2.4: (1) Added PDF full-text translation function; (2) Added input area switching function; (3) Added vertical layout option; (4) Optimized multi-threaded function plugin. -- version 2.3: Enhanced multi-threaded interactivity -- version 2.2: Function plug-in supports hot reloading -- version 2.1: Collapsible layout -- version 2.0: Introduction of modular function plugins -- version 1.0: Basic functions - -## References and Learning - - -``` -The code refers to the design of many other excellent projects, mainly including: - -# Reference Project 1: Referenced the method of reading OpenAI json, recording historical inquiry records, and using gradio queue in ChuanhuChatGPT -https://github.com/GaiZhenbiao/ChuanhuChatGPT - -# Reference Project 2: -https://github.com/THUDM/ChatGLM-6B - -``` - - diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/facerecon_model.py b/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/facerecon_model.py deleted file mode 100644 index 7de8ca6eebc50ff1ed52c5ba37d31b43f977b5e1..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/facerecon_model.py +++ /dev/null @@ -1,220 +0,0 @@ -"""This script defines the face reconstruction model for Deep3DFaceRecon_pytorch -""" - -import numpy as np -import torch -from src.face3d.models.base_model import BaseModel -from src.face3d.models import networks -from src.face3d.models.bfm import ParametricFaceModel -from src.face3d.models.losses import perceptual_loss, photo_loss, reg_loss, reflectance_loss, landmark_loss -from src.face3d.util import util -from src.face3d.util.nvdiffrast import MeshRenderer -# from src.face3d.util.preprocess import estimate_norm_torch - -import trimesh -from scipy.io import savemat - -class FaceReconModel(BaseModel): - - @staticmethod - def modify_commandline_options(parser, is_train=False): - """ Configures options specific for CUT model - """ - # net structure and parameters - parser.add_argument('--net_recon', type=str, default='resnet50', choices=['resnet18', 'resnet34', 'resnet50'], help='network structure') - parser.add_argument('--init_path', type=str, default='./checkpoints/init_model/resnet50-0676ba61.pth') - parser.add_argument('--use_last_fc', type=util.str2bool, nargs='?', const=True, default=False, help='zero initialize the last fc') - parser.add_argument('--bfm_folder', type=str, default='./checkpoints/BFM_Fitting/') - parser.add_argument('--bfm_model', type=str, default='BFM_model_front.mat', help='bfm model') - - # renderer parameters - parser.add_argument('--focal', type=float, default=1015.) - parser.add_argument('--center', type=float, default=112.) - parser.add_argument('--camera_d', type=float, default=10.) - parser.add_argument('--z_near', type=float, default=5.) - parser.add_argument('--z_far', type=float, default=15.) 
- - if is_train: - # training parameters - parser.add_argument('--net_recog', type=str, default='r50', choices=['r18', 'r43', 'r50'], help='face recog network structure') - parser.add_argument('--net_recog_path', type=str, default='checkpoints/recog_model/ms1mv3_arcface_r50_fp16/backbone.pth') - parser.add_argument('--use_crop_face', type=util.str2bool, nargs='?', const=True, default=False, help='use crop mask for photo loss') - parser.add_argument('--use_predef_M', type=util.str2bool, nargs='?', const=True, default=False, help='use predefined M for predicted face') - - - # augmentation parameters - parser.add_argument('--shift_pixs', type=float, default=10., help='shift pixels') - parser.add_argument('--scale_delta', type=float, default=0.1, help='delta scale factor') - parser.add_argument('--rot_angle', type=float, default=10., help='rot angles, degree') - - # loss weights - parser.add_argument('--w_feat', type=float, default=0.2, help='weight for feat loss') - parser.add_argument('--w_color', type=float, default=1.92, help='weight for loss loss') - parser.add_argument('--w_reg', type=float, default=3.0e-4, help='weight for reg loss') - parser.add_argument('--w_id', type=float, default=1.0, help='weight for id_reg loss') - parser.add_argument('--w_exp', type=float, default=0.8, help='weight for exp_reg loss') - parser.add_argument('--w_tex', type=float, default=1.7e-2, help='weight for tex_reg loss') - parser.add_argument('--w_gamma', type=float, default=10.0, help='weight for gamma loss') - parser.add_argument('--w_lm', type=float, default=1.6e-3, help='weight for lm loss') - parser.add_argument('--w_reflc', type=float, default=5.0, help='weight for reflc loss') - - opt, _ = parser.parse_known_args() - parser.set_defaults( - focal=1015., center=112., camera_d=10., use_last_fc=False, z_near=5., z_far=15. - ) - if is_train: - parser.set_defaults( - use_crop_face=True, use_predef_M=False - ) - return parser - - def __init__(self, opt): - """Initialize this model class. - - Parameters: - opt -- training/test options - - A few things can be done here. 
- - (required) call the initialization function of BaseModel - - define loss function, visualization images, model names, and optimizers - """ - BaseModel.__init__(self, opt) # call the initialization method of BaseModel - - self.visual_names = ['output_vis'] - self.model_names = ['net_recon'] - self.parallel_names = self.model_names + ['renderer'] - - self.facemodel = ParametricFaceModel( - bfm_folder=opt.bfm_folder, camera_distance=opt.camera_d, focal=opt.focal, center=opt.center, - is_train=self.isTrain, default_name=opt.bfm_model - ) - - fov = 2 * np.arctan(opt.center / opt.focal) * 180 / np.pi - self.renderer = MeshRenderer( - rasterize_fov=fov, znear=opt.z_near, zfar=opt.z_far, rasterize_size=int(2 * opt.center) - ) - - if self.isTrain: - self.loss_names = ['all', 'feat', 'color', 'lm', 'reg', 'gamma', 'reflc'] - - self.net_recog = networks.define_net_recog( - net_recog=opt.net_recog, pretrained_path=opt.net_recog_path - ) - # loss func name: (compute_%s_loss) % loss_name - self.compute_feat_loss = perceptual_loss - self.comupte_color_loss = photo_loss - self.compute_lm_loss = landmark_loss - self.compute_reg_loss = reg_loss - self.compute_reflc_loss = reflectance_loss - - self.optimizer = torch.optim.Adam(self.net_recon.parameters(), lr=opt.lr) - self.optimizers = [self.optimizer] - self.parallel_names += ['net_recog'] - # Our program will automatically call to define schedulers, load networks, and print networks - - def set_input(self, input): - """Unpack input data from the dataloader and perform necessary pre-processing steps. - - Parameters: - input: a dictionary that contains the data itself and its metadata information. - """ - self.input_img = input['imgs'].to(self.device) - self.atten_mask = input['msks'].to(self.device) if 'msks' in input else None - self.gt_lm = input['lms'].to(self.device) if 'lms' in input else None - self.trans_m = input['M'].to(self.device) if 'M' in input else None - self.image_paths = input['im_paths'] if 'im_paths' in input else None - - def forward(self, output_coeff, device): - self.facemodel.to(device) - self.pred_vertex, self.pred_tex, self.pred_color, self.pred_lm = \ - self.facemodel.compute_for_render(output_coeff) - self.pred_mask, _, self.pred_face = self.renderer( - self.pred_vertex, self.facemodel.face_buf, feat=self.pred_color) - - self.pred_coeffs_dict = self.facemodel.split_coeff(output_coeff) - - - def compute_losses(self): - """Calculate losses, gradients, and update network weights; called in every training iteration""" - - assert self.net_recog.training == False - trans_m = self.trans_m - if not self.opt.use_predef_M: - trans_m = estimate_norm_torch(self.pred_lm, self.input_img.shape[-2]) - - pred_feat = self.net_recog(self.pred_face, trans_m) - gt_feat = self.net_recog(self.input_img, self.trans_m) - self.loss_feat = self.opt.w_feat * self.compute_feat_loss(pred_feat, gt_feat) - - face_mask = self.pred_mask - if self.opt.use_crop_face: - face_mask, _, _ = self.renderer(self.pred_vertex, self.facemodel.front_face_buf) - - face_mask = face_mask.detach() - self.loss_color = self.opt.w_color * self.comupte_color_loss( - self.pred_face, self.input_img, self.atten_mask * face_mask) - - loss_reg, loss_gamma = self.compute_reg_loss(self.pred_coeffs_dict, self.opt) - self.loss_reg = self.opt.w_reg * loss_reg - self.loss_gamma = self.opt.w_gamma * loss_gamma - - self.loss_lm = self.opt.w_lm * self.compute_lm_loss(self.pred_lm, self.gt_lm) - - self.loss_reflc = self.opt.w_reflc * self.compute_reflc_loss(self.pred_tex, 
self.facemodel.skin_mask) - - self.loss_all = self.loss_feat + self.loss_color + self.loss_reg + self.loss_gamma \ - + self.loss_lm + self.loss_reflc - - - def optimize_parameters(self, isTrain=True): - self.forward() - self.compute_losses() - """Update network weights; it will be called in every training iteration.""" - if isTrain: - self.optimizer.zero_grad() - self.loss_all.backward() - self.optimizer.step() - - def compute_visuals(self): - with torch.no_grad(): - input_img_numpy = 255. * self.input_img.detach().cpu().permute(0, 2, 3, 1).numpy() - output_vis = self.pred_face * self.pred_mask + (1 - self.pred_mask) * self.input_img - output_vis_numpy_raw = 255. * output_vis.detach().cpu().permute(0, 2, 3, 1).numpy() - - if self.gt_lm is not None: - gt_lm_numpy = self.gt_lm.cpu().numpy() - pred_lm_numpy = self.pred_lm.detach().cpu().numpy() - output_vis_numpy = util.draw_landmarks(output_vis_numpy_raw, gt_lm_numpy, 'b') - output_vis_numpy = util.draw_landmarks(output_vis_numpy, pred_lm_numpy, 'r') - - output_vis_numpy = np.concatenate((input_img_numpy, - output_vis_numpy_raw, output_vis_numpy), axis=-2) - else: - output_vis_numpy = np.concatenate((input_img_numpy, - output_vis_numpy_raw), axis=-2) - - self.output_vis = torch.tensor( - output_vis_numpy / 255., dtype=torch.float32 - ).permute(0, 3, 1, 2).to(self.device) - - def save_mesh(self, name): - - recon_shape = self.pred_vertex # get reconstructed shape - recon_shape[..., -1] = 10 - recon_shape[..., -1] # from camera space to world space - recon_shape = recon_shape.cpu().numpy()[0] - recon_color = self.pred_color - recon_color = recon_color.cpu().numpy()[0] - tri = self.facemodel.face_buf.cpu().numpy() - mesh = trimesh.Trimesh(vertices=recon_shape, faces=tri, vertex_colors=np.clip(255. * recon_color, 0, 255).astype(np.uint8)) - mesh.export(name) - - def save_coeff(self,name): - - pred_coeffs = {key:self.pred_coeffs_dict[key].cpu().numpy() for key in self.pred_coeffs_dict} - pred_lm = self.pred_lm.cpu().numpy() - pred_lm = np.stack([pred_lm[:,:,0],self.input_img.shape[2]-1-pred_lm[:,:,1]],axis=2) # transfer to image coordinate - pred_coeffs['lm68'] = pred_lm - savemat(name,pred_coeffs) - - - diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Black Border Patrol Simulator APK A Simulation Game with Casual and Story Modes.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Black Border Patrol Simulator APK A Simulation Game with Casual and Story Modes.md deleted file mode 100644 index 6b147d3c391e894ed679fa097284cee16d2b11ed..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Black Border Patrol Simulator APK A Simulation Game with Casual and Story Modes.md +++ /dev/null @@ -1,120 +0,0 @@ -
        -

        How to Download Black Border Patrol Simulator APK for Android

        -

Do you want to experience what it's like to be a border police officer? Do you want to check the passengers' papers, stop the smuggling of illegal items, and arrest criminals? If yes, then you should try playing Black Border Patrol Simulator, a realistic and immersive border cop simulator game for Android devices. In this article, we will tell you what Black Border Patrol Simulator is, why you should play it, and how to download it on your Android device. We will also give you some tips and tricks for playing the game effectively and efficiently. So, let's get started!

        -

        What is Black Border Patrol Simulator?

        -

        Black Border Patrol Simulator is a border cop simulator game that simulates the life of a real border patrol officer. In this game, you assume the role of a police officer who works at the entry and exit gates of the country. Your job is to check the passengers' papers, use your tools and devices to detect illegal items and activities, and arrest suspicious people like terrorists, smugglers, or criminals. You have to follow the rules and regulations of your country while performing your duty. You also have to deal with different scenarios and situations that test your decision-making skills, logic, and ethics.

        -

        download black border patrol simulator apk


        DOWNLOAD >>>>> https://gohhs.com/2uPtkd



        -

        Black Border Patrol Simulator has many features that make it an interesting and engaging game. Some of them are:

        -
          -
        • Casual and story modes: You can play the game in casual mode where you can check random passengers' papers or in story mode where you can follow a storyline with different missions and objectives.
        • -
        • Character customization: You can create and customize your own character by choosing various hairstyles, clothes, accessories, etc. You can also share your creation on social media.
        • -
        • Different languages: You can play the game in different languages such as English, Arabic, German, Spanish, Polish, Japanese, etc.
        • -
        • Intuitive controls: You can easily control the game with simple taps and swipes on your screen.
        • -
        • Graphics and sound effects: You can enjoy the realistic graphics and sound effects of the game that create an immersive atmosphere.
        • -
        -

Why Should You Play Black Border Patrol Simulator?

        -


Black Border Patrol Simulator is not only a fun and entertaining game but also an educational and informative one. By playing this game, you can learn a lot about border security and immigration issues that are relevant in today's world. You can also improve your decision-making skills, logic, and ethics by dealing with different scenarios and situations that challenge your judgment. Moreover, you can have fun and relax by creating your own character, exploring different locations, and arresting bad guys. Black Border Patrol Simulator is a game that will keep you hooked for hours and make you feel like a real border cop.

        -

        How to Download Black Border Patrol Simulator APK for Android?

        -

If you are interested in playing Black Border Patrol Simulator on your Android device, you might be wondering how to download it. The game is not available on the Google Play Store, so you have to download it from another source. This means that you have to download the game's APK file, the package format that Android uses to distribute and install apps. However, downloading and installing APK files from unknown sources can be risky, as they might contain malware or viruses that can harm your device. Therefore, you have to be careful and follow some steps to ensure that you download the game safely and securely. Here are the steps that you need to follow:

        -

        Step 1: Enable Unknown Sources on Your Device

        -

The first step is to allow your device to install apps from sources other than the Google Play Store. To do this, you have to enable the Unknown Sources option in your device's settings. Depending on your device model and Android version, the steps might vary slightly, but generally, they are as follows:

        -
          -
        1. Go to your device's Settings and tap on Security or Privacy.
        2. -
        3. Find the Unknown Sources option and toggle it on.
        4. -
        5. A warning message will pop up, telling you that installing apps from unknown sources can be harmful. Tap on OK to confirm.
        6. -
        -

        Now, your device is ready to install apps from sources other than Google Play Store.

        -

        black border patrol simulator game apk
        -black border patrol simulator android apk
        -black border patrol simulator apk free download
        -black border patrol simulator apk latest version
        -black border patrol simulator apk mod
        -black border patrol simulator apk offline
        -black border patrol simulator apk update
        -black border patrol simulator apk full version
        -black border patrol simulator apk hack
        -black border patrol simulator apk unlimited money
        -black border patrol simulator apk obb
        -black border patrol simulator apk data
        -black border patrol simulator apk revdl
        -black border patrol simulator apk rexdl
        -black border patrol simulator apk pure
        -black border patrol simulator apk mirror
        -black border patrol simulator apk uptodown
        -black border patrol simulator apk apkpure
        -black border patrol simulator apk apkmirror
        -black border patrol simulator apk apknite
        -black border patrol simulator apk apkmody
        -black border patrol simulator apk happymod
        -black border patrol simulator apk an1
        -black border patrol simulator apk android 1
        -black border patrol simulator apk android oyun club
        -download black border patrol sim demo apk
        -download black border cop game apk
        -download black border police officer sim apk
        -download black border checkpoint duty game apk
        -download black border immigration officer sim apk
        -download black border customs officer game apk
        -download black border passport control game apk
        -download black border security guard game apk
        -download black border police station game apk
        -download black border crime stopper game apk
        -download bitzooma game studio's black border patrol sim apk
        -download com.bitzooma.blackborder.apk
        -download com.bitzooma.blackborderdemo.apk
        -how to download and install black border patrol sim on android
        -how to play and enjoy the new features of the latest version of the game

        -

        Step 2: Download the APK File from a Trusted Source

        -

        The next step is to find and download the APK file of Black Border Patrol Simulator from a reliable website. There are many websites that offer APK files of various apps and games, but not all of them are trustworthy. Some of them might provide fake or corrupted files that can damage your device or steal your data. Therefore, you have to be careful and choose a website that has a good reputation and positive reviews from other users. One such website is [APKPure], which is a popular and trusted source for downloading APK files of various apps and games. To download the APK file of Black Border Patrol Simulator from APKPure, follow these steps:

        -
          -
        1. Open your browser and go to [APKPure].
        2. -
        3. In the search bar, type Black Border Patrol Simulator and hit enter.
        4. -
        5. From the search results, find the game and tap on it.
        6. -
        7. On the game's page, tap on the Download APK button.
        8. -
        9. A pop-up window will appear, asking you to choose a download location. Choose a folder where you want to save the file and tap on OK.
        10. -
        11. The download will start automatically and might take a few minutes depending on your internet speed.
        12. -
        -

        Now, you have downloaded the APK file of Black Border Patrol Simulator on your device.
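If you prefer to download the APK on a computer first and then copy it to your phone, you can also sanity-check the file before transferring it. The Python sketch below is purely illustrative: the file name and the expected SHA-256 value are placeholders, and the check is only meaningful if the site you downloaded from publishes a checksum for the release.

```python
import hashlib

# Placeholder values: replace with the real file path and the SHA-256
# checksum published by the download site, if one is provided.
APK_PATH = "BlackBorderPatrolSimulator.apk"
EXPECTED_SHA256 = "0" * 64  # hypothetical value, not a real checksum

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(APK_PATH)
    print("computed:", actual)
    print("matches published value:", actual == EXPECTED_SHA256)
```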

        -

        Step 3: Install the APK File on Your Device

        -

        The final step is to locate and install the APK file on your device using a file manager app. A file manager app is an app that allows you to access and manage the files and folders on your device. There are many file manager apps available on Google Play Store, such as [ES File Explorer], [File Manager], or [Files by Google]. You can use any of them or any other file manager app that you prefer. To install the APK file of Black Border Patrol Simulator using a file manager app, follow these steps:

        -
          -
        1. Open your file manager app and go to the folder where you saved the APK file.
        2. -
        3. Find the APK file of Black Border Patrol Simulator and tap on it.
        4. -
        5. A pop-up window will appear, asking you to confirm the installation. Tap on Install.
        6. -
        7. The installation will start automatically and might take a few seconds depending on your device's performance.
        8. -
        9. Once the installation is complete, tap on Open to launch the game or Done to exit.
        10. -
        -

        Congratulations! You have successfully installed Black Border Patrol Simulator on your Android device.

        -

        Step 4: Launch the Game and Enjoy


        -

Now that you have installed Black Border Patrol Simulator on your Android device, you can launch the game and enjoy it. To do this, tap the game's icon on your home screen or in your app drawer (tapping the APK file again in your file manager would only prompt a reinstall). The game will start and you will see the main menu, where you can choose between casual mode and story mode. You can also customize your character, change the language, adjust the settings, and access other features. Once you are ready, you can start playing the game and check the passengers' papers, use your tools and devices, and arrest suspicious people. You can also earn coins and rewards, unlock new items and locations, and complete different missions and objectives. Have fun and learn something new!

        -

        Tips and Tricks for Playing Black Border Patrol Simulator

        -

        Black Border Patrol Simulator is a game that requires your attention, concentration, and skills. It is not an easy game, as you have to deal with various challenges and difficulties while performing your duty. Therefore, you might need some tips and tricks to help you play the game effectively and efficiently. Here are some of them:

        -

        Tip 1: Check All the Papers Carefully

        -

        One of the most important tasks in the game is to check the passengers' papers, such as passports, visas, tickets, etc. You have to make sure that they are valid, authentic, and match the passengers' identities. You have to look for any discrepancies or errors in the papers, such as expired dates, wrong names, missing stamps, etc. You also have to compare the papers with the information on your computer screen and verify that they are correct. If you find any mistake or inconsistency in the papers, you have to reject or detain the passenger. If you let a passenger with invalid or fake papers pass through, you will lose points and reputation.

        -

        Tip 2: Use All Your Tools and Devices

        -

Another important task in the game is to use your tools and devices to detect illegal items and activities. You have various tools and devices at your disposal, such as a scanner, a frisker, and a camera. You have to use them wisely and effectively to scan the passengers' luggage, bodies, and faces. You have to look for any prohibited or suspicious items or activities, such as weapons, drugs, explosives, etc. You also have to check if the passengers are wanted by the law or involved in any criminal activity. If you find any illegal item or activity, you have to confiscate it or arrest the passenger. If you miss any illegal item or activity, you will lose points and reputation.

        -

        Tip 3: Be Alert and Vigilant

        -

        A third important task in the game is to be alert and vigilant while performing your duty. You have to pay attention to every detail and every situation that occurs at the border. You have to identify suspicious passengers who might try to deceive you or cause trouble. You have to watch out for any signs of nervousness, aggression, or dishonesty in their behavior or speech. You also have to be prepared for any unexpected events or emergencies that might happen at the border. For example, you might encounter a terrorist attack, a riot, a fire, etc. You have to react quickly and appropriately to these situations and maintain order and security.

        -

        Tip 4: Follow the Rules and Regulations

        -

        A fourth important task in the game is to follow the rules and regulations of your country while performing your duty. You have to respect the law and the protocol of your job as a border cop. You have to be fair and impartial while checking the passengers' papers and detecting illegal items and activities. You also have to be polite and professional while interacting with the passengers and your colleagues. You have to avoid breaking the law or violating the protocol while performing your duty. For example, you should not accept bribes, abuse your power, discriminate against anyone, etc. If you break the law or violate the protocol, you will face consequences such as fines, penalties, suspension, or dismissal.

        -

        Tip 5: Have Fun and Learn Something New

        -


        A fifth important task in the game is to have fun and learn something new while playing it. Black Border Patrol Simulator is a game that offers you a lot of entertainment and education. You can enjoy the realistic and immersive graphics and sound effects of the game that create a border cop atmosphere. You can also learn something new about border security and immigration issues that are relevant in today's world. You can also improve your decision-making skills, logic, and ethics by dealing with different scenarios and situations that challenge your judgment. Black Border Patrol Simulator is a game that will keep you hooked for hours and make you feel like a real border cop.

        -

        Conclusion

        -

        Black Border Patrol Simulator is a border cop simulator game that simulates the life of a real border patrol officer. In this game, you assume the role of a police officer who works at the entry and exit gates of the country. Your job is to check the passengers' papers, use your tools and devices to detect illegal items and activities, and arrest suspicious people like terrorists, smugglers, or criminals. You have to follow the rules and regulations of your country while performing your duty. You also have to deal with different scenarios and situations that test your decision-making skills, logic, and ethics.

        -

In this article, we have told you what Black Border Patrol Simulator is, why you should play it, and how to download it on your Android device. We have also given you some tips and tricks for playing the game effectively and efficiently. We hope that you have found this article helpful and informative. If you are interested in playing Black Border Patrol Simulator on your Android device, you can follow the steps that we have provided above and download the game safely and securely. You can also share your feedback and opinions about the game with us in the comments section below. Thank you for reading and happy gaming!

        -

        FAQs

        -

        Here are some frequently asked questions about Black Border Patrol Simulator:

        -
          -
        1. Q: Is Black Border Patrol Simulator free to play?
        2. -
        3. A: Yes, Black Border Patrol Simulator is free to play. However, it contains ads and in-app purchases that can enhance your gaming experience.
        4. -
        5. Q: Is Black Border Patrol Simulator compatible with all Android devices?
        6. -
        7. A: No, Black Border Patrol Simulator requires Android 4.4 or higher to run smoothly. It also requires at least 100 MB of free storage space on your device.
        8. -
        9. Q: Is Black Border Patrol Simulator safe to download?
        10. -
        11. A: Yes, Black Border Patrol Simulator is safe to download if you download it from a trusted source like APKPure. However, you should always scan the APK file with an antivirus app before installing it on your device.
        12. -
        13. Q: How can I contact the developers of Black Border Patrol Simulator?
        14. -
        15. A: You can contact the developers of Black Border Patrol Simulator by sending them an email at [blackborderpatrolsimulator@gmail.com] or by visiting their website at [blackborderpatrolsimulator.com].
        16. -
        17. Q: How can I support the developers of Black Border Patrol Simulator?
        18. -
        19. A: You can support the developers of Black Border Patrol Simulator by rating and reviewing the game on APKPure or Google Play Store, by sharing the game with your friends and family, or by making in-app purchases.
        20. -

        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Knives Out PC Version (32 Bit) and Dive into the 6400m6400m Arena.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Knives Out PC Version (32 Bit) and Dive into the 6400m6400m Arena.md deleted file mode 100644 index c5d4d430f6e038ffa3bd8655dc82cb2de3adb560..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Knives Out PC Version (32 Bit) and Dive into the 6400m6400m Arena.md +++ /dev/null @@ -1,134 +0,0 @@ -
        -

        Knives Out PC Download 32 Bit: How to Play the Battle Royale Game on Your Computer

        -

        If you are a fan of battle royale games, you might have heard of Knives Out, a popular mobile game that pits 100 players against each other on a large island. The game is developed by Netease Games, the same company behind Rules of Survival and Cyber Hunter. Knives Out offers a variety of modes, weapons, vehicles, and customization options for you to enjoy.

        -

But did you know that you can also play Knives Out on your PC? Yes, you read that right. You can enjoy the thrill of the game on a bigger screen and with better controls. In this article, we will show you how to download the Knives Out PC version for 32-bit systems using three different methods. We will also give you some tips and tricks for playing the game on your computer. So, let's get started!

        -

        knives out pc download 32 bit


        Download Zip ->>> https://gohhs.com/2uPpvN



        -

        What is Knives Out?

        -

        Knives Out is a battle royale game that was released in 2017 by Netease Games. The game is similar to PlayerUnknown's Battlegrounds (PUBG), but with some unique features and twists. The game has over 100 million downloads on Google Play Store and App Store, and has won several awards, such as the Google Play Best of 2017.

        -

        The game allows you to choose from different modes, such as solo, duo, squad, or fireteam. You can also join special events, such as sniper mode, melee mode, or zombie mode. You can customize your character with different outfits, accessories, and skins. You can also collect and upgrade various weapons, such as rifles, shotguns, pistols, grenades, and more. You can also drive different vehicles, such as cars, motorcycles, boats, and helicopters.

        -

The game has realistic graphics and smooth gameplay. It also supports voice chat and text chat with your teammates or friends. You can also join clans and compete with other players in rankings and tournaments.

        -

        Why play Knives Out on PC?

        -

        While Knives Out is designed for mobile devices, playing it on PC has some advantages. Here are some of them:

        -
          -
        • You can enjoy a larger and clearer view of the game on your PC screen.
        • -
        • You can use your keyboard and mouse to control your character more precisely and comfortably.
        • -
        • You can avoid battery drain, overheating, or lag issues that might occur on your mobile device.
        • -
        • You can record or stream your gameplay more easily using your PC software.
        • -
        -

        Of course, playing Knives Out on PC also has some drawbacks, such as compatibility issues, network problems, or account bans. Therefore, you should always follow the official guidelines and rules of the game when playing it on PC.

        -

        How to download Knives Out PC version?

        -

There are three main methods to download the Knives Out PC version for 32-bit systems. We will explain each method in detail below.

        -

        knives out pc version download windows 10
        -knives out pc game free download full version
        -knives out pc client download english
        -knives out pc download official website
        -knives out pc download apk
        -knives out pc download nox
        -knives out pc download steam
        -knives out pc download utorrent
        -knives out pc download bluestacks
        -knives out pc download highly compressed
        -knives out pc download without emulator
        -knives out pc download with qr code
        -knives out pc download netease games
        -knives out pc download wilderness action
        -knives out pc download for mac
        -knives out pc download size
        -knives out pc download latest version
        -knives out pc download offline installer
        -knives out pc download softonic
        -knives out pc download ocean of games
        -knives out pc download requirements
        -knives out pc download error
        -knives out pc download hack
        -knives out pc download cheat engine
        -knives out pc download mod menu
        -knives out pc download aimbot
        -knives out pc download esp
        -knives out pc download wallhack
        -knives out pc download speed hack
        -knives out pc download unlimited ammo
        -knives out pc download gameplay
        -knives out pc download review
        -knives out pc download tips and tricks
        -knives out pc download best settings
        -knives out pc download graphics comparison
        -knives out pc download controller support
        -knives out pc download keyboard and mouse
        -knives out pc download custom key mapping
        -knives out pc download voice chat
        -knives out pc download discord server
        -knives out pc download update patch notes
        -knives out pc download new map and weapons
        -knives out pc download season pass and rewards
        -knives out pc download skins and outfits
        -knives out pc download how to play with friends
        -knives out pc download how to change server region
        -knives out pc download how to rank up fast
        -knives out pc download how to get free coins and vouchers
        -knives out pc download how to report hackers and cheaters

        -

        Method 1: Using the official PC launcher by Netease Games

        -

        This is the most recommended method to play Knives Out on PC, as it is provided by the official developer of the game. However, this method requires you to have a mobile device (or an Android emulator) to scan a QR code every time you want to play the game on PC. Here are the steps to follow:

        -
          -
        1. Download and install the Knives Out PC launcher from the official website: https://www.knivesout.jp/pc/
        2. -
        3. Launch the PC launcher and click on the QR code icon on the top right corner.
        4. -
        5. Open the Knives Out app on your mobile device (or emulator) and tap on the PC icon on the bottom left corner.
        6. -
        7. Scan the QR code displayed on your PC screen with your mobile device (or emulator) camera.
        8. -
        9. Wait for a few seconds until the connection is established and the game starts loading on your PC.
        10. -
        11. Enjoy playing Knives Out on PC!
        12. -
        -

        Note: You can adjust the graphics, sound, and control settings of the game on your PC launcher. You can also switch between windowed and full-screen mode by pressing F11.

        -

        Method 2: Using iMyFone MirrorTo to mirror your device to PC

        -

This method allows you to mirror your mobile device's screen to your PC and control it with your mouse and keyboard. This way, you can play Knives Out on PC without installing an Android emulator or a separate PC version of the game. However, this method requires you to have a stable Wi-Fi connection and a compatible device. Here are the steps to follow:

        -
          -
        1. Download and install iMyFone MirrorTo on your PC from the official website: https://www.imyfone.com/mirrorto/
        2. -
        3. Launch iMyFone MirrorTo on your PC and click on "Start Now".
        4. -
        5. Connect your mobile device and your PC to the same Wi-Fi network.
        6. -
        7. For iOS devices, swipe up from the bottom of your screen and tap on "Screen Mirroring". Then, select "iMyFone MirrorTo" from the list of available devices.
        8. -
9. For Android devices, enable USB debugging in your device's developer settings (usually under Developer options). Then, connect your device to your PC with a USB cable. Follow the instructions on your PC screen to install a driver and enable mirroring (a quick way to confirm that your PC detects the device is sketched at the end of this method).
        10. -
        11. Once your device screen is mirrored to your PC, open the Knives Out app on your device.
        12. -
        13. Click on the keyboard icon on the bottom right corner of iMyFone MirrorTo to customize your keyboard and mouse controls for the game.
        14. -
        15. Enjoy playing Knives Out on PC!
        16. -
        -

        Note: You can also record or capture your gameplay using iMyFone MirrorTo. You can also adjust the resolution, quality, and orientation of the mirrored screen.
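If the USB debugging step above does not seem to work, you can check from your PC whether the device is visible at all. The sketch below is optional and assumes the Android platform-tools (which provide the adb command) are installed and on your PATH; it is not part of iMyFone MirrorTo itself.

```python
import shutil
import subprocess

# Optional sanity check: list the devices adb can see over USB.
if shutil.which("adb") is None:
    print("adb not found - install the Android platform-tools first.")
else:
    result = subprocess.run(["adb", "devices"], capture_output=True, text=True)
    print(result.stdout)
    # A healthy connection lists your device with the state "device".
    # "unauthorized" means you still need to accept the debugging prompt
    # on the phone; an empty list usually points to a cable or driver issue.
```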

        -

        Method 3: Using BlueStacks emulator to run the game on PC

        -

        This method allows you to run Knives Out as an Android app on your PC using an emulator. This way, you can play Knives Out on PC without scanning any QR code or mirroring any device. However, this method requires you to have a powerful PC and a Google account. Here are the steps to follow:

        -
          -
        1. Download and install BlueStacks emulator on your PC from the official website: https://www.bluestacks.com/
        2. -
        3. Launch BlueStacks emulator and sign in with your Google account.
        4. -
        5. Go to Google Play Store and search for Knives Out. Then, download and install the game.
        6. -
        7. Open Knives Out app from BlueStacks home screen or app drawer.
        8. -
        9. Enjoy playing Knives Out on PC!
        10. -
        -

        Note: You can adjust the graphics, sound, and control settings of the game on BlueStacks emulator. You can also switch between windowed and full-screen mode by pressing F11.

        -

        Tips and tricks for playing Knives Out on PC

        -

        To make your gameplay more enjoyable and successful, here are some tips and tricks for playing Knives Out on PC:

        -
          -
        • Choose a suitable mode according to your preference and skill level. Solo mode is more challenging but rewarding, while squad mode is more fun but chaotic.
        • -
        • Loot as much as you can in the early game, but avoid hotspots where many players might land. Look for weapons, ammo, armor, health kits, and other items that can help you survive.
        • -
        • Use vehicles wisely. They can help you move faster and escape danger, but they also make noise and attract attention. Be careful when driving or riding them.
        • -
• Use cover and stealth. Hide behind trees, rocks, buildings, or bushes when possible. Crouch or go prone when shooting or looting. Avoid unnecessary movement or firing. Use silencers or suppressors to reduce your noise.
        • -
        • Use the map and the compass. The map shows you the safe zone, the danger zone, and the airdrops. The compass shows you the direction and the distance of the enemy gunfire. Use them to plan your strategy and position.
        • -
        • Use the voice chat and text chat. Communicate with your teammates or friends using the voice chat or text chat feature. Coordinate your actions, share information, and support each other.
        • -
        • Practice and learn. The best way to improve your skills and win more matches is to practice and learn from your mistakes. Watch replays, tutorials, or streams of other players. Learn from their tips, tricks, and strategies.
        • -
        -

        Conclusion

        -

        Knives Out is a fun and exciting battle royale game that you can play on your mobile device or on your PC. Playing it on PC has some advantages, such as a bigger screen and better controls. However, it also has some drawbacks, such as compatibility issues or account bans. Therefore, you should always follow the official guidelines and rules of the game when playing it on PC.

        -

In this article, we have shown you how to download the Knives Out PC version for 32-bit systems using three different methods: using the official PC launcher by Netease Games, using iMyFone MirrorTo to mirror your device to PC, and using the BlueStacks emulator to run the game on PC. We have also given you some tips and tricks for playing Knives Out on PC.

        -

        We hope you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy gaming!

        -

        FAQs

        -

        Here are some frequently asked questions about Knives Out PC version:

        -
          -
        1. Is Knives Out PC version free?
        2. -

          Yes, Knives Out PC version is free to download and play. However, you might need to purchase some in-game items or features with real money.

          -
        3. Is Knives Out PC version safe?
        4. -

          Yes, Knives Out PC version is safe to download and play, as long as you use the official sources and methods. However, you should always be careful of malware, viruses, or scams that might harm your PC or account.

          -
        5. Is Knives Out PC version compatible with Windows 10?
        6. -

          Yes, Knives Out PC version is compatible with Windows 10, as well as Windows 7 and Windows 8.

          -
        7. Can I play Knives Out PC version with my friends who play on mobile devices?
        8. -

          Yes, you can play Knives Out PC version with your friends who play on mobile devices. However, you might have some disadvantages or advantages compared to them, such as graphics quality or control accuracy.

          -
        9. Can I transfer my progress and data from mobile devices to PC?
        10. -

          Yes, you can transfer your progress and data from mobile devices to PC by logging in with the same account. However, you might need to scan a QR code every time you want to play on PC using the official PC launcher method.

          -

        401be4b1e0
        -
        -
        \ No newline at end of file diff --git a/spaces/firatozdemir/OAGen_Linear/app_streamlit.py b/spaces/firatozdemir/OAGen_Linear/app_streamlit.py deleted file mode 100644 index 0c953a704546c2c01c795f8db8ed009dd6bed89f..0000000000000000000000000000000000000000 --- a/spaces/firatozdemir/OAGen_Linear/app_streamlit.py +++ /dev/null @@ -1,12 +0,0 @@ -import os, glob, sys -import pickle -import streamlit as st -import utils - -in_gpu = False -num_images = 1 -G = utils.load_default_gen(in_gpu=in_gpu) -sampler = utils.SampleFromGAN(G=G, z_shp=[num_images, G.z_dim], in_gpu=in_gpu) -button_on_click = utils.Plot(im_gen=sampler) -button_gen_clicked = st.button(label='Generate an image', key='n', on_click=button_on_click) - diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/fileio/file_client.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/fileio/file_client.py deleted file mode 100644 index 950f0c1aeab14b8e308a7455ccd64a95b5d98add..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/fileio/file_client.py +++ /dev/null @@ -1,1148 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import inspect -import os -import os.path as osp -import re -import tempfile -import warnings -from abc import ABCMeta, abstractmethod -from contextlib import contextmanager -from pathlib import Path -from typing import Iterable, Iterator, Optional, Tuple, Union -from urllib.request import urlopen - -import annotator.uniformer.mmcv as mmcv -from annotator.uniformer.mmcv.utils.misc import has_method -from annotator.uniformer.mmcv.utils.path import is_filepath - - -class BaseStorageBackend(metaclass=ABCMeta): - """Abstract class of storage backends. - - All backends need to implement two apis: ``get()`` and ``get_text()``. - ``get()`` reads the file as a byte stream and ``get_text()`` reads the file - as texts. - """ - - # a flag to indicate whether the backend can create a symlink for a file - _allow_symlink = False - - @property - def name(self): - return self.__class__.__name__ - - @property - def allow_symlink(self): - return self._allow_symlink - - @abstractmethod - def get(self, filepath): - pass - - @abstractmethod - def get_text(self, filepath): - pass - - -class CephBackend(BaseStorageBackend): - """Ceph storage backend (for internal use). - - Args: - path_mapping (dict|None): path mapping dict from local path to Petrel - path. When ``path_mapping={'src': 'dst'}``, ``src`` in ``filepath`` - will be replaced by ``dst``. Default: None. - - .. warning:: - :class:`mmcv.fileio.file_client.CephBackend` will be deprecated, - please use :class:`mmcv.fileio.file_client.PetrelBackend` instead. - """ - - def __init__(self, path_mapping=None): - try: - import ceph - except ImportError: - raise ImportError('Please install ceph to enable CephBackend.') - - warnings.warn( - 'CephBackend will be deprecated, please use PetrelBackend instead') - self._client = ceph.S3Client() - assert isinstance(path_mapping, dict) or path_mapping is None - self.path_mapping = path_mapping - - def get(self, filepath): - filepath = str(filepath) - if self.path_mapping is not None: - for k, v in self.path_mapping.items(): - filepath = filepath.replace(k, v) - value = self._client.Get(filepath) - value_buf = memoryview(value) - return value_buf - - def get_text(self, filepath, encoding=None): - raise NotImplementedError - - -class PetrelBackend(BaseStorageBackend): - """Petrel storage backend (for internal use). 
- - PetrelBackend supports reading and writing data to multiple clusters. - If the file path contains the cluster name, PetrelBackend will read data - from specified cluster or write data to it. Otherwise, PetrelBackend will - access the default cluster. - - Args: - path_mapping (dict, optional): Path mapping dict from local path to - Petrel path. When ``path_mapping={'src': 'dst'}``, ``src`` in - ``filepath`` will be replaced by ``dst``. Default: None. - enable_mc (bool, optional): Whether to enable memcached support. - Default: True. - - Examples: - >>> filepath1 = 's3://path/of/file' - >>> filepath2 = 'cluster-name:s3://path/of/file' - >>> client = PetrelBackend() - >>> client.get(filepath1) # get data from default cluster - >>> client.get(filepath2) # get data from 'cluster-name' cluster - """ - - def __init__(self, - path_mapping: Optional[dict] = None, - enable_mc: bool = True): - try: - from petrel_client import client - except ImportError: - raise ImportError('Please install petrel_client to enable ' - 'PetrelBackend.') - - self._client = client.Client(enable_mc=enable_mc) - assert isinstance(path_mapping, dict) or path_mapping is None - self.path_mapping = path_mapping - - def _map_path(self, filepath: Union[str, Path]) -> str: - """Map ``filepath`` to a string path whose prefix will be replaced by - :attr:`self.path_mapping`. - - Args: - filepath (str): Path to be mapped. - """ - filepath = str(filepath) - if self.path_mapping is not None: - for k, v in self.path_mapping.items(): - filepath = filepath.replace(k, v) - return filepath - - def _format_path(self, filepath: str) -> str: - """Convert a ``filepath`` to standard format of petrel oss. - - If the ``filepath`` is concatenated by ``os.path.join``, in a Windows - environment, the ``filepath`` will be the format of - 's3://bucket_name\\image.jpg'. By invoking :meth:`_format_path`, the - above ``filepath`` will be converted to 's3://bucket_name/image.jpg'. - - Args: - filepath (str): Path to be formatted. - """ - return re.sub(r'\\+', '/', filepath) - - def get(self, filepath: Union[str, Path]) -> memoryview: - """Read data from a given ``filepath`` with 'rb' mode. - - Args: - filepath (str or Path): Path to read data. - - Returns: - memoryview: A memory view of expected bytes object to avoid - copying. The memoryview object can be converted to bytes by - ``value_buf.tobytes()``. - """ - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - value = self._client.Get(filepath) - value_buf = memoryview(value) - return value_buf - - def get_text(self, - filepath: Union[str, Path], - encoding: str = 'utf-8') -> str: - """Read data from a given ``filepath`` with 'r' mode. - - Args: - filepath (str or Path): Path to read data. - encoding (str): The encoding format used to open the ``filepath``. - Default: 'utf-8'. - - Returns: - str: Expected text reading from ``filepath``. - """ - return str(self.get(filepath), encoding=encoding) - - def put(self, obj: bytes, filepath: Union[str, Path]) -> None: - """Save data to a given ``filepath``. - - Args: - obj (bytes): Data to be saved. - filepath (str or Path): Path to write data. - """ - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - self._client.put(filepath, obj) - - def put_text(self, - obj: str, - filepath: Union[str, Path], - encoding: str = 'utf-8') -> None: - """Save data to a given ``filepath``. - - Args: - obj (str): Data to be written. - filepath (str or Path): Path to write data. 
- encoding (str): The encoding format used to encode the ``obj``. - Default: 'utf-8'. - """ - self.put(bytes(obj, encoding=encoding), filepath) - - def remove(self, filepath: Union[str, Path]) -> None: - """Remove a file. - - Args: - filepath (str or Path): Path to be removed. - """ - if not has_method(self._client, 'delete'): - raise NotImplementedError( - ('Current version of Petrel Python SDK has not supported ' - 'the `delete` method, please use a higher version or dev' - ' branch instead.')) - - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - self._client.delete(filepath) - - def exists(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path exists. - - Args: - filepath (str or Path): Path to be checked whether exists. - - Returns: - bool: Return ``True`` if ``filepath`` exists, ``False`` otherwise. - """ - if not (has_method(self._client, 'contains') - and has_method(self._client, 'isdir')): - raise NotImplementedError( - ('Current version of Petrel Python SDK has not supported ' - 'the `contains` and `isdir` methods, please use a higher' - 'version or dev branch instead.')) - - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - return self._client.contains(filepath) or self._client.isdir(filepath) - - def isdir(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a directory. - - Args: - filepath (str or Path): Path to be checked whether it is a - directory. - - Returns: - bool: Return ``True`` if ``filepath`` points to a directory, - ``False`` otherwise. - """ - if not has_method(self._client, 'isdir'): - raise NotImplementedError( - ('Current version of Petrel Python SDK has not supported ' - 'the `isdir` method, please use a higher version or dev' - ' branch instead.')) - - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - return self._client.isdir(filepath) - - def isfile(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a file. - - Args: - filepath (str or Path): Path to be checked whether it is a file. - - Returns: - bool: Return ``True`` if ``filepath`` points to a file, ``False`` - otherwise. - """ - if not has_method(self._client, 'contains'): - raise NotImplementedError( - ('Current version of Petrel Python SDK has not supported ' - 'the `contains` method, please use a higher version or ' - 'dev branch instead.')) - - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - return self._client.contains(filepath) - - def join_path(self, filepath: Union[str, Path], - *filepaths: Union[str, Path]) -> str: - """Concatenate all file paths. - - Args: - filepath (str or Path): Path to be concatenated. - - Returns: - str: The result after concatenation. - """ - filepath = self._format_path(self._map_path(filepath)) - if filepath.endswith('/'): - filepath = filepath[:-1] - formatted_paths = [filepath] - for path in filepaths: - formatted_paths.append(self._format_path(self._map_path(path))) - return '/'.join(formatted_paths) - - @contextmanager - def get_local_path(self, filepath: Union[str, Path]) -> Iterable[str]: - """Download a file from ``filepath`` and return a temporary path. - - ``get_local_path`` is decorated by :meth:`contxtlib.contextmanager`. It - can be called with ``with`` statement, and when exists from the - ``with`` statement, the temporary path will be released. - - Args: - filepath (str | Path): Download a file from ``filepath``. 
- - Examples: - >>> client = PetrelBackend() - >>> # After existing from the ``with`` clause, - >>> # the path will be removed - >>> with client.get_local_path('s3://path/of/your/file') as path: - ... # do something here - - Yields: - Iterable[str]: Only yield one temporary path. - """ - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - assert self.isfile(filepath) - try: - f = tempfile.NamedTemporaryFile(delete=False) - f.write(self.get(filepath)) - f.close() - yield f.name - finally: - os.remove(f.name) - - def list_dir_or_file(self, - dir_path: Union[str, Path], - list_dir: bool = True, - list_file: bool = True, - suffix: Optional[Union[str, Tuple[str]]] = None, - recursive: bool = False) -> Iterator[str]: - """Scan a directory to find the interested directories or files in - arbitrary order. - - Note: - Petrel has no concept of directories but it simulates the directory - hierarchy in the filesystem through public prefixes. In addition, - if the returned path ends with '/', it means the path is a public - prefix which is a logical directory. - - Note: - :meth:`list_dir_or_file` returns the path relative to ``dir_path``. - In addition, the returned path of directory will not contains the - suffix '/' which is consistent with other backends. - - Args: - dir_path (str | Path): Path of the directory. - list_dir (bool): List the directories. Default: True. - list_file (bool): List the path of files. Default: True. - suffix (str or tuple[str], optional): File suffix - that we are interested in. Default: None. - recursive (bool): If set to True, recursively scan the - directory. Default: False. - - Yields: - Iterable[str]: A relative path to ``dir_path``. - """ - if not has_method(self._client, 'list'): - raise NotImplementedError( - ('Current version of Petrel Python SDK has not supported ' - 'the `list` method, please use a higher version or dev' - ' branch instead.')) - - dir_path = self._map_path(dir_path) - dir_path = self._format_path(dir_path) - if list_dir and suffix is not None: - raise TypeError( - '`list_dir` should be False when `suffix` is not None') - - if (suffix is not None) and not isinstance(suffix, (str, tuple)): - raise TypeError('`suffix` must be a string or tuple of strings') - - # Petrel's simulated directory hierarchy assumes that directory paths - # should end with `/` - if not dir_path.endswith('/'): - dir_path += '/' - - root = dir_path - - def _list_dir_or_file(dir_path, list_dir, list_file, suffix, - recursive): - for path in self._client.list(dir_path): - # the `self.isdir` is not used here to determine whether path - # is a directory, because `self.isdir` relies on - # `self._client.list` - if path.endswith('/'): # a directory path - next_dir_path = self.join_path(dir_path, path) - if list_dir: - # get the relative path and exclude the last - # character '/' - rel_dir = next_dir_path[len(root):-1] - yield rel_dir - if recursive: - yield from _list_dir_or_file(next_dir_path, list_dir, - list_file, suffix, - recursive) - else: # a file path - absolute_path = self.join_path(dir_path, path) - rel_path = absolute_path[len(root):] - if (suffix is None - or rel_path.endswith(suffix)) and list_file: - yield rel_path - - return _list_dir_or_file(dir_path, list_dir, list_file, suffix, - recursive) - - -class MemcachedBackend(BaseStorageBackend): - """Memcached storage backend. - - Attributes: - server_list_cfg (str): Config file for memcached server list. - client_cfg (str): Config file for memcached client. 
- sys_path (str | None): Additional path to be appended to `sys.path`. - Default: None. - """ - - def __init__(self, server_list_cfg, client_cfg, sys_path=None): - if sys_path is not None: - import sys - sys.path.append(sys_path) - try: - import mc - except ImportError: - raise ImportError( - 'Please install memcached to enable MemcachedBackend.') - - self.server_list_cfg = server_list_cfg - self.client_cfg = client_cfg - self._client = mc.MemcachedClient.GetInstance(self.server_list_cfg, - self.client_cfg) - # mc.pyvector servers as a point which points to a memory cache - self._mc_buffer = mc.pyvector() - - def get(self, filepath): - filepath = str(filepath) - import mc - self._client.Get(filepath, self._mc_buffer) - value_buf = mc.ConvertBuffer(self._mc_buffer) - return value_buf - - def get_text(self, filepath, encoding=None): - raise NotImplementedError - - -class LmdbBackend(BaseStorageBackend): - """Lmdb storage backend. - - Args: - db_path (str): Lmdb database path. - readonly (bool, optional): Lmdb environment parameter. If True, - disallow any write operations. Default: True. - lock (bool, optional): Lmdb environment parameter. If False, when - concurrent access occurs, do not lock the database. Default: False. - readahead (bool, optional): Lmdb environment parameter. If False, - disable the OS filesystem readahead mechanism, which may improve - random read performance when a database is larger than RAM. - Default: False. - - Attributes: - db_path (str): Lmdb database path. - """ - - def __init__(self, - db_path, - readonly=True, - lock=False, - readahead=False, - **kwargs): - try: - import lmdb - except ImportError: - raise ImportError('Please install lmdb to enable LmdbBackend.') - - self.db_path = str(db_path) - self._client = lmdb.open( - self.db_path, - readonly=readonly, - lock=lock, - readahead=readahead, - **kwargs) - - def get(self, filepath): - """Get values according to the filepath. - - Args: - filepath (str | obj:`Path`): Here, filepath is the lmdb key. - """ - filepath = str(filepath) - with self._client.begin(write=False) as txn: - value_buf = txn.get(filepath.encode('ascii')) - return value_buf - - def get_text(self, filepath, encoding=None): - raise NotImplementedError - - -class HardDiskBackend(BaseStorageBackend): - """Raw hard disks storage backend.""" - - _allow_symlink = True - - def get(self, filepath: Union[str, Path]) -> bytes: - """Read data from a given ``filepath`` with 'rb' mode. - - Args: - filepath (str or Path): Path to read data. - - Returns: - bytes: Expected bytes object. - """ - with open(filepath, 'rb') as f: - value_buf = f.read() - return value_buf - - def get_text(self, - filepath: Union[str, Path], - encoding: str = 'utf-8') -> str: - """Read data from a given ``filepath`` with 'r' mode. - - Args: - filepath (str or Path): Path to read data. - encoding (str): The encoding format used to open the ``filepath``. - Default: 'utf-8'. - - Returns: - str: Expected text reading from ``filepath``. - """ - with open(filepath, 'r', encoding=encoding) as f: - value_buf = f.read() - return value_buf - - def put(self, obj: bytes, filepath: Union[str, Path]) -> None: - """Write data to a given ``filepath`` with 'wb' mode. - - Note: - ``put`` will create a directory if the directory of ``filepath`` - does not exist. - - Args: - obj (bytes): Data to be written. - filepath (str or Path): Path to write data. 
- """ - mmcv.mkdir_or_exist(osp.dirname(filepath)) - with open(filepath, 'wb') as f: - f.write(obj) - - def put_text(self, - obj: str, - filepath: Union[str, Path], - encoding: str = 'utf-8') -> None: - """Write data to a given ``filepath`` with 'w' mode. - - Note: - ``put_text`` will create a directory if the directory of - ``filepath`` does not exist. - - Args: - obj (str): Data to be written. - filepath (str or Path): Path to write data. - encoding (str): The encoding format used to open the ``filepath``. - Default: 'utf-8'. - """ - mmcv.mkdir_or_exist(osp.dirname(filepath)) - with open(filepath, 'w', encoding=encoding) as f: - f.write(obj) - - def remove(self, filepath: Union[str, Path]) -> None: - """Remove a file. - - Args: - filepath (str or Path): Path to be removed. - """ - os.remove(filepath) - - def exists(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path exists. - - Args: - filepath (str or Path): Path to be checked whether exists. - - Returns: - bool: Return ``True`` if ``filepath`` exists, ``False`` otherwise. - """ - return osp.exists(filepath) - - def isdir(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a directory. - - Args: - filepath (str or Path): Path to be checked whether it is a - directory. - - Returns: - bool: Return ``True`` if ``filepath`` points to a directory, - ``False`` otherwise. - """ - return osp.isdir(filepath) - - def isfile(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a file. - - Args: - filepath (str or Path): Path to be checked whether it is a file. - - Returns: - bool: Return ``True`` if ``filepath`` points to a file, ``False`` - otherwise. - """ - return osp.isfile(filepath) - - def join_path(self, filepath: Union[str, Path], - *filepaths: Union[str, Path]) -> str: - """Concatenate all file paths. - - Join one or more filepath components intelligently. The return value - is the concatenation of filepath and any members of *filepaths. - - Args: - filepath (str or Path): Path to be concatenated. - - Returns: - str: The result of concatenation. - """ - return osp.join(filepath, *filepaths) - - @contextmanager - def get_local_path( - self, filepath: Union[str, Path]) -> Iterable[Union[str, Path]]: - """Only for unified API and do nothing.""" - yield filepath - - def list_dir_or_file(self, - dir_path: Union[str, Path], - list_dir: bool = True, - list_file: bool = True, - suffix: Optional[Union[str, Tuple[str]]] = None, - recursive: bool = False) -> Iterator[str]: - """Scan a directory to find the interested directories or files in - arbitrary order. - - Note: - :meth:`list_dir_or_file` returns the path relative to ``dir_path``. - - Args: - dir_path (str | Path): Path of the directory. - list_dir (bool): List the directories. Default: True. - list_file (bool): List the path of files. Default: True. - suffix (str or tuple[str], optional): File suffix - that we are interested in. Default: None. - recursive (bool): If set to True, recursively scan the - directory. Default: False. - - Yields: - Iterable[str]: A relative path to ``dir_path``. 
- """ - if list_dir and suffix is not None: - raise TypeError('`suffix` should be None when `list_dir` is True') - - if (suffix is not None) and not isinstance(suffix, (str, tuple)): - raise TypeError('`suffix` must be a string or tuple of strings') - - root = dir_path - - def _list_dir_or_file(dir_path, list_dir, list_file, suffix, - recursive): - for entry in os.scandir(dir_path): - if not entry.name.startswith('.') and entry.is_file(): - rel_path = osp.relpath(entry.path, root) - if (suffix is None - or rel_path.endswith(suffix)) and list_file: - yield rel_path - elif osp.isdir(entry.path): - if list_dir: - rel_dir = osp.relpath(entry.path, root) - yield rel_dir - if recursive: - yield from _list_dir_or_file(entry.path, list_dir, - list_file, suffix, - recursive) - - return _list_dir_or_file(dir_path, list_dir, list_file, suffix, - recursive) - - -class HTTPBackend(BaseStorageBackend): - """HTTP and HTTPS storage bachend.""" - - def get(self, filepath): - value_buf = urlopen(filepath).read() - return value_buf - - def get_text(self, filepath, encoding='utf-8'): - value_buf = urlopen(filepath).read() - return value_buf.decode(encoding) - - @contextmanager - def get_local_path(self, filepath: str) -> Iterable[str]: - """Download a file from ``filepath``. - - ``get_local_path`` is decorated by :meth:`contxtlib.contextmanager`. It - can be called with ``with`` statement, and when exists from the - ``with`` statement, the temporary path will be released. - - Args: - filepath (str): Download a file from ``filepath``. - - Examples: - >>> client = HTTPBackend() - >>> # After existing from the ``with`` clause, - >>> # the path will be removed - >>> with client.get_local_path('http://path/of/your/file') as path: - ... # do something here - """ - try: - f = tempfile.NamedTemporaryFile(delete=False) - f.write(self.get(filepath)) - f.close() - yield f.name - finally: - os.remove(f.name) - - -class FileClient: - """A general file client to access files in different backends. - - The client loads a file or text in a specified backend from its path - and returns it as a binary or text file. There are two ways to choose a - backend, the name of backend and the prefix of path. Although both of them - can be used to choose a storage backend, ``backend`` has a higher priority - that is if they are all set, the storage backend will be chosen by the - backend argument. If they are all `None`, the disk backend will be chosen. - Note that It can also register other backend accessor with a given name, - prefixes, and backend class. In addition, We use the singleton pattern to - avoid repeated object creation. If the arguments are the same, the same - object will be returned. - - Args: - backend (str, optional): The storage backend type. Options are "disk", - "ceph", "memcached", "lmdb", "http" and "petrel". Default: None. - prefix (str, optional): The prefix of the registered storage backend. - Options are "s3", "http", "https". Default: None. - - Examples: - >>> # only set backend - >>> file_client = FileClient(backend='petrel') - >>> # only set prefix - >>> file_client = FileClient(prefix='s3') - >>> # set both backend and prefix but use backend to choose client - >>> file_client = FileClient(backend='petrel', prefix='s3') - >>> # if the arguments are the same, the same object is returned - >>> file_client1 = FileClient(backend='petrel') - >>> file_client1 is file_client - True - - Attributes: - client (:obj:`BaseStorageBackend`): The backend object. 
- """ - - _backends = { - 'disk': HardDiskBackend, - 'ceph': CephBackend, - 'memcached': MemcachedBackend, - 'lmdb': LmdbBackend, - 'petrel': PetrelBackend, - 'http': HTTPBackend, - } - # This collection is used to record the overridden backends, and when a - # backend appears in the collection, the singleton pattern is disabled for - # that backend, because if the singleton pattern is used, then the object - # returned will be the backend before overwriting - _overridden_backends = set() - _prefix_to_backends = { - 's3': PetrelBackend, - 'http': HTTPBackend, - 'https': HTTPBackend, - } - _overridden_prefixes = set() - - _instances = {} - - def __new__(cls, backend=None, prefix=None, **kwargs): - if backend is None and prefix is None: - backend = 'disk' - if backend is not None and backend not in cls._backends: - raise ValueError( - f'Backend {backend} is not supported. Currently supported ones' - f' are {list(cls._backends.keys())}') - if prefix is not None and prefix not in cls._prefix_to_backends: - raise ValueError( - f'prefix {prefix} is not supported. Currently supported ones ' - f'are {list(cls._prefix_to_backends.keys())}') - - # concatenate the arguments to a unique key for determining whether - # objects with the same arguments were created - arg_key = f'{backend}:{prefix}' - for key, value in kwargs.items(): - arg_key += f':{key}:{value}' - - # if a backend was overridden, it will create a new object - if (arg_key in cls._instances - and backend not in cls._overridden_backends - and prefix not in cls._overridden_prefixes): - _instance = cls._instances[arg_key] - else: - # create a new object and put it to _instance - _instance = super().__new__(cls) - if backend is not None: - _instance.client = cls._backends[backend](**kwargs) - else: - _instance.client = cls._prefix_to_backends[prefix](**kwargs) - - cls._instances[arg_key] = _instance - - return _instance - - @property - def name(self): - return self.client.name - - @property - def allow_symlink(self): - return self.client.allow_symlink - - @staticmethod - def parse_uri_prefix(uri: Union[str, Path]) -> Optional[str]: - """Parse the prefix of a uri. - - Args: - uri (str | Path): Uri to be parsed that contains the file prefix. - - Examples: - >>> FileClient.parse_uri_prefix('s3://path/of/your/file') - 's3' - - Returns: - str | None: Return the prefix of uri if the uri contains '://' - else ``None``. - """ - assert is_filepath(uri) - uri = str(uri) - if '://' not in uri: - return None - else: - prefix, _ = uri.split('://') - # In the case of PetrelBackend, the prefix may contains the cluster - # name like clusterName:s3 - if ':' in prefix: - _, prefix = prefix.split(':') - return prefix - - @classmethod - def infer_client(cls, - file_client_args: Optional[dict] = None, - uri: Optional[Union[str, Path]] = None) -> 'FileClient': - """Infer a suitable file client based on the URI and arguments. - - Args: - file_client_args (dict, optional): Arguments to instantiate a - FileClient. Default: None. - uri (str | Path, optional): Uri to be parsed that contains the file - prefix. Default: None. - - Examples: - >>> uri = 's3://path/of/your/file' - >>> file_client = FileClient.infer_client(uri=uri) - >>> file_client_args = {'backend': 'petrel'} - >>> file_client = FileClient.infer_client(file_client_args) - - Returns: - FileClient: Instantiated FileClient object. 
- """ - assert file_client_args is not None or uri is not None - if file_client_args is None: - file_prefix = cls.parse_uri_prefix(uri) # type: ignore - return cls(prefix=file_prefix) - else: - return cls(**file_client_args) - - @classmethod - def _register_backend(cls, name, backend, force=False, prefixes=None): - if not isinstance(name, str): - raise TypeError('the backend name should be a string, ' - f'but got {type(name)}') - if not inspect.isclass(backend): - raise TypeError( - f'backend should be a class but got {type(backend)}') - if not issubclass(backend, BaseStorageBackend): - raise TypeError( - f'backend {backend} is not a subclass of BaseStorageBackend') - if not force and name in cls._backends: - raise KeyError( - f'{name} is already registered as a storage backend, ' - 'add "force=True" if you want to override it') - - if name in cls._backends and force: - cls._overridden_backends.add(name) - cls._backends[name] = backend - - if prefixes is not None: - if isinstance(prefixes, str): - prefixes = [prefixes] - else: - assert isinstance(prefixes, (list, tuple)) - for prefix in prefixes: - if prefix not in cls._prefix_to_backends: - cls._prefix_to_backends[prefix] = backend - elif (prefix in cls._prefix_to_backends) and force: - cls._overridden_prefixes.add(prefix) - cls._prefix_to_backends[prefix] = backend - else: - raise KeyError( - f'{prefix} is already registered as a storage backend,' - ' add "force=True" if you want to override it') - - @classmethod - def register_backend(cls, name, backend=None, force=False, prefixes=None): - """Register a backend to FileClient. - - This method can be used as a normal class method or a decorator. - - .. code-block:: python - - class NewBackend(BaseStorageBackend): - - def get(self, filepath): - return filepath - - def get_text(self, filepath): - return filepath - - FileClient.register_backend('new', NewBackend) - - or - - .. code-block:: python - - @FileClient.register_backend('new') - class NewBackend(BaseStorageBackend): - - def get(self, filepath): - return filepath - - def get_text(self, filepath): - return filepath - - Args: - name (str): The name of the registered backend. - backend (class, optional): The backend class to be registered, - which must be a subclass of :class:`BaseStorageBackend`. - When this method is used as a decorator, backend is None. - Defaults to None. - force (bool, optional): Whether to override the backend if the name - has already been registered. Defaults to False. - prefixes (str or list[str] or tuple[str], optional): The prefixes - of the registered storage backend. Default: None. - `New in version 1.3.15.` - """ - if backend is not None: - cls._register_backend( - name, backend, force=force, prefixes=prefixes) - return - - def _register(backend_cls): - cls._register_backend( - name, backend_cls, force=force, prefixes=prefixes) - return backend_cls - - return _register - - def get(self, filepath: Union[str, Path]) -> Union[bytes, memoryview]: - """Read data from a given ``filepath`` with 'rb' mode. - - Note: - There are two types of return values for ``get``, one is ``bytes`` - and the other is ``memoryview``. The advantage of using memoryview - is that you can avoid copying, and if you want to convert it to - ``bytes``, you can use ``.tobytes()``. - - Args: - filepath (str or Path): Path to read data. - - Returns: - bytes | memoryview: Expected bytes object or a memory view of the - bytes object. 
- """ - return self.client.get(filepath) - - def get_text(self, filepath: Union[str, Path], encoding='utf-8') -> str: - """Read data from a given ``filepath`` with 'r' mode. - - Args: - filepath (str or Path): Path to read data. - encoding (str): The encoding format used to open the ``filepath``. - Default: 'utf-8'. - - Returns: - str: Expected text reading from ``filepath``. - """ - return self.client.get_text(filepath, encoding) - - def put(self, obj: bytes, filepath: Union[str, Path]) -> None: - """Write data to a given ``filepath`` with 'wb' mode. - - Note: - ``put`` should create a directory if the directory of ``filepath`` - does not exist. - - Args: - obj (bytes): Data to be written. - filepath (str or Path): Path to write data. - """ - self.client.put(obj, filepath) - - def put_text(self, obj: str, filepath: Union[str, Path]) -> None: - """Write data to a given ``filepath`` with 'w' mode. - - Note: - ``put_text`` should create a directory if the directory of - ``filepath`` does not exist. - - Args: - obj (str): Data to be written. - filepath (str or Path): Path to write data. - encoding (str, optional): The encoding format used to open the - `filepath`. Default: 'utf-8'. - """ - self.client.put_text(obj, filepath) - - def remove(self, filepath: Union[str, Path]) -> None: - """Remove a file. - - Args: - filepath (str, Path): Path to be removed. - """ - self.client.remove(filepath) - - def exists(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path exists. - - Args: - filepath (str or Path): Path to be checked whether exists. - - Returns: - bool: Return ``True`` if ``filepath`` exists, ``False`` otherwise. - """ - return self.client.exists(filepath) - - def isdir(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a directory. - - Args: - filepath (str or Path): Path to be checked whether it is a - directory. - - Returns: - bool: Return ``True`` if ``filepath`` points to a directory, - ``False`` otherwise. - """ - return self.client.isdir(filepath) - - def isfile(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a file. - - Args: - filepath (str or Path): Path to be checked whether it is a file. - - Returns: - bool: Return ``True`` if ``filepath`` points to a file, ``False`` - otherwise. - """ - return self.client.isfile(filepath) - - def join_path(self, filepath: Union[str, Path], - *filepaths: Union[str, Path]) -> str: - """Concatenate all file paths. - - Join one or more filepath components intelligently. The return value - is the concatenation of filepath and any members of *filepaths. - - Args: - filepath (str or Path): Path to be concatenated. - - Returns: - str: The result of concatenation. - """ - return self.client.join_path(filepath, *filepaths) - - @contextmanager - def get_local_path(self, filepath: Union[str, Path]) -> Iterable[str]: - """Download data from ``filepath`` and write the data to local path. - - ``get_local_path`` is decorated by :meth:`contxtlib.contextmanager`. It - can be called with ``with`` statement, and when exists from the - ``with`` statement, the temporary path will be released. - - Note: - If the ``filepath`` is a local path, just return itself. - - .. warning:: - ``get_local_path`` is an experimental interface that may change in - the future. - - Args: - filepath (str or Path): Path to be read data. - - Examples: - >>> file_client = FileClient(prefix='s3') - >>> with file_client.get_local_path('s3://bucket/abc.jpg') as path: - ... 
# do something here - - Yields: - Iterable[str]: Only yield one path. - """ - with self.client.get_local_path(str(filepath)) as local_path: - yield local_path - - def list_dir_or_file(self, - dir_path: Union[str, Path], - list_dir: bool = True, - list_file: bool = True, - suffix: Optional[Union[str, Tuple[str]]] = None, - recursive: bool = False) -> Iterator[str]: - """Scan a directory to find the interested directories or files in - arbitrary order. - - Note: - :meth:`list_dir_or_file` returns the path relative to ``dir_path``. - - Args: - dir_path (str | Path): Path of the directory. - list_dir (bool): List the directories. Default: True. - list_file (bool): List the path of files. Default: True. - suffix (str or tuple[str], optional): File suffix - that we are interested in. Default: None. - recursive (bool): If set to True, recursively scan the - directory. Default: False. - - Yields: - Iterable[str]: A relative path to ``dir_path``. - """ - yield from self.client.list_dir_or_file(dir_path, list_dir, list_file, - suffix, recursive) diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/ops/point_sample.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/ops/point_sample.py deleted file mode 100644 index 267f4b3c56630acd85f9bdc630b7be09abab0aba..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/ops/point_sample.py +++ /dev/null @@ -1,336 +0,0 @@ -# Modified from https://github.com/facebookresearch/detectron2/tree/master/projects/PointRend # noqa - -from os import path as osp - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn.modules.utils import _pair -from torch.onnx.operators import shape_as_tensor - - -def bilinear_grid_sample(im, grid, align_corners=False): - """Given an input and a flow-field grid, computes the output using input - values and pixel locations from grid. Supported only bilinear interpolation - method to sample the input pixels. - - Args: - im (torch.Tensor): Input feature map, shape (N, C, H, W) - grid (torch.Tensor): Point coordinates, shape (N, Hg, Wg, 2) - align_corners {bool}: If set to True, the extrema (-1 and 1) are - considered as referring to the center points of the input’s - corner pixels. If set to False, they are instead considered as - referring to the corner points of the input’s corner pixels, - making the sampling more resolution agnostic. 
- Returns: - torch.Tensor: A tensor with sampled points, shape (N, C, Hg, Wg) - """ - n, c, h, w = im.shape - gn, gh, gw, _ = grid.shape - assert n == gn - - x = grid[:, :, :, 0] - y = grid[:, :, :, 1] - - if align_corners: - x = ((x + 1) / 2) * (w - 1) - y = ((y + 1) / 2) * (h - 1) - else: - x = ((x + 1) * w - 1) / 2 - y = ((y + 1) * h - 1) / 2 - - x = x.view(n, -1) - y = y.view(n, -1) - - x0 = torch.floor(x).long() - y0 = torch.floor(y).long() - x1 = x0 + 1 - y1 = y0 + 1 - - wa = ((x1 - x) * (y1 - y)).unsqueeze(1) - wb = ((x1 - x) * (y - y0)).unsqueeze(1) - wc = ((x - x0) * (y1 - y)).unsqueeze(1) - wd = ((x - x0) * (y - y0)).unsqueeze(1) - - # Apply default for grid_sample function zero padding - im_padded = F.pad(im, pad=[1, 1, 1, 1], mode='constant', value=0) - padded_h = h + 2 - padded_w = w + 2 - # save points positions after padding - x0, x1, y0, y1 = x0 + 1, x1 + 1, y0 + 1, y1 + 1 - - # Clip coordinates to padded image size - x0 = torch.where(x0 < 0, torch.tensor(0), x0) - x0 = torch.where(x0 > padded_w - 1, torch.tensor(padded_w - 1), x0) - x1 = torch.where(x1 < 0, torch.tensor(0), x1) - x1 = torch.where(x1 > padded_w - 1, torch.tensor(padded_w - 1), x1) - y0 = torch.where(y0 < 0, torch.tensor(0), y0) - y0 = torch.where(y0 > padded_h - 1, torch.tensor(padded_h - 1), y0) - y1 = torch.where(y1 < 0, torch.tensor(0), y1) - y1 = torch.where(y1 > padded_h - 1, torch.tensor(padded_h - 1), y1) - - im_padded = im_padded.view(n, c, -1) - - x0_y0 = (x0 + y0 * padded_w).unsqueeze(1).expand(-1, c, -1) - x0_y1 = (x0 + y1 * padded_w).unsqueeze(1).expand(-1, c, -1) - x1_y0 = (x1 + y0 * padded_w).unsqueeze(1).expand(-1, c, -1) - x1_y1 = (x1 + y1 * padded_w).unsqueeze(1).expand(-1, c, -1) - - Ia = torch.gather(im_padded, 2, x0_y0) - Ib = torch.gather(im_padded, 2, x0_y1) - Ic = torch.gather(im_padded, 2, x1_y0) - Id = torch.gather(im_padded, 2, x1_y1) - - return (Ia * wa + Ib * wb + Ic * wc + Id * wd).reshape(n, c, gh, gw) - - -def is_in_onnx_export_without_custom_ops(): - from annotator.uniformer.mmcv.ops import get_onnxruntime_op_path - ort_custom_op_path = get_onnxruntime_op_path() - return torch.onnx.is_in_onnx_export( - ) and not osp.exists(ort_custom_op_path) - - -def normalize(grid): - """Normalize input grid from [-1, 1] to [0, 1] - Args: - grid (Tensor): The grid to be normalize, range [-1, 1]. - Returns: - Tensor: Normalized grid, range [0, 1]. - """ - - return (grid + 1.0) / 2.0 - - -def denormalize(grid): - """Denormalize input grid from range [0, 1] to [-1, 1] - Args: - grid (Tensor): The grid to be denormalize, range [0, 1]. - Returns: - Tensor: Denormalized grid, range [-1, 1]. - """ - - return grid * 2.0 - 1.0 - - -def generate_grid(num_grid, size, device): - """Generate regular square grid of points in [0, 1] x [0, 1] coordinate - space. - - Args: - num_grid (int): The number of grids to sample, one for each region. - size (tuple(int, int)): The side size of the regular grid. - device (torch.device): Desired device of returned tensor. - - Returns: - (torch.Tensor): A tensor of shape (num_grid, size[0]*size[1], 2) that - contains coordinates for the regular grids. - """ - - affine_trans = torch.tensor([[[1., 0., 0.], [0., 1., 0.]]], device=device) - grid = F.affine_grid( - affine_trans, torch.Size((1, 1, *size)), align_corners=False) - grid = normalize(grid) - return grid.view(1, -1, 2).expand(num_grid, -1, -1) - - -def rel_roi_point_to_abs_img_point(rois, rel_roi_points): - """Convert roi based relative point coordinates to image based absolute - point coordinates. 
- - Args: - rois (Tensor): RoIs or BBoxes, shape (N, 4) or (N, 5) - rel_roi_points (Tensor): Point coordinates inside RoI, relative to - RoI, location, range (0, 1), shape (N, P, 2) - Returns: - Tensor: Image based absolute point coordinates, shape (N, P, 2) - """ - - with torch.no_grad(): - assert rel_roi_points.size(0) == rois.size(0) - assert rois.dim() == 2 - assert rel_roi_points.dim() == 3 - assert rel_roi_points.size(2) == 2 - # remove batch idx - if rois.size(1) == 5: - rois = rois[:, 1:] - abs_img_points = rel_roi_points.clone() - # To avoid an error during exporting to onnx use independent - # variables instead inplace computation - xs = abs_img_points[:, :, 0] * (rois[:, None, 2] - rois[:, None, 0]) - ys = abs_img_points[:, :, 1] * (rois[:, None, 3] - rois[:, None, 1]) - xs += rois[:, None, 0] - ys += rois[:, None, 1] - abs_img_points = torch.stack([xs, ys], dim=2) - return abs_img_points - - -def get_shape_from_feature_map(x): - """Get spatial resolution of input feature map considering exporting to - onnx mode. - - Args: - x (torch.Tensor): Input tensor, shape (N, C, H, W) - Returns: - torch.Tensor: Spatial resolution (width, height), shape (1, 1, 2) - """ - if torch.onnx.is_in_onnx_export(): - img_shape = shape_as_tensor(x)[2:].flip(0).view(1, 1, 2).to( - x.device).float() - else: - img_shape = torch.tensor(x.shape[2:]).flip(0).view(1, 1, 2).to( - x.device).float() - return img_shape - - -def abs_img_point_to_rel_img_point(abs_img_points, img, spatial_scale=1.): - """Convert image based absolute point coordinates to image based relative - coordinates for sampling. - - Args: - abs_img_points (Tensor): Image based absolute point coordinates, - shape (N, P, 2) - img (tuple/Tensor): (height, width) of image or feature map. - spatial_scale (float): Scale points by this factor. Default: 1. - - Returns: - Tensor: Image based relative point coordinates for sampling, - shape (N, P, 2) - """ - - assert (isinstance(img, tuple) and len(img) == 2) or \ - (isinstance(img, torch.Tensor) and len(img.shape) == 4) - - if isinstance(img, tuple): - h, w = img - scale = torch.tensor([w, h], - dtype=torch.float, - device=abs_img_points.device) - scale = scale.view(1, 1, 2) - else: - scale = get_shape_from_feature_map(img) - - return abs_img_points / scale * spatial_scale - - -def rel_roi_point_to_rel_img_point(rois, - rel_roi_points, - img, - spatial_scale=1.): - """Convert roi based relative point coordinates to image based absolute - point coordinates. - - Args: - rois (Tensor): RoIs or BBoxes, shape (N, 4) or (N, 5) - rel_roi_points (Tensor): Point coordinates inside RoI, relative to - RoI, location, range (0, 1), shape (N, P, 2) - img (tuple/Tensor): (height, width) of image or feature map. - spatial_scale (float): Scale points by this factor. Default: 1. - - Returns: - Tensor: Image based relative point coordinates for sampling, - shape (N, P, 2) - """ - - abs_img_point = rel_roi_point_to_abs_img_point(rois, rel_roi_points) - rel_img_point = abs_img_point_to_rel_img_point(abs_img_point, img, - spatial_scale) - - return rel_img_point - - -def point_sample(input, points, align_corners=False, **kwargs): - """A wrapper around :func:`grid_sample` to support 3D point_coords tensors - Unlike :func:`torch.nn.functional.grid_sample` it assumes point_coords to - lie inside ``[0, 1] x [0, 1]`` square. - - Args: - input (Tensor): Feature map, shape (N, C, H, W). - points (Tensor): Image based absolute point coordinates (normalized), - range [0, 1] x [0, 1], shape (N, P, 2) or (N, Hgrid, Wgrid, 2). 
- align_corners (bool): Whether align_corners. Default: False - - Returns: - Tensor: Features of `point` on `input`, shape (N, C, P) or - (N, C, Hgrid, Wgrid). - """ - - add_dim = False - if points.dim() == 3: - add_dim = True - points = points.unsqueeze(2) - if is_in_onnx_export_without_custom_ops(): - # If custom ops for onnx runtime not compiled use python - # implementation of grid_sample function to make onnx graph - # with supported nodes - output = bilinear_grid_sample( - input, denormalize(points), align_corners=align_corners) - else: - output = F.grid_sample( - input, denormalize(points), align_corners=align_corners, **kwargs) - if add_dim: - output = output.squeeze(3) - return output - - -class SimpleRoIAlign(nn.Module): - - def __init__(self, output_size, spatial_scale, aligned=True): - """Simple RoI align in PointRend, faster than standard RoIAlign. - - Args: - output_size (tuple[int]): h, w - spatial_scale (float): scale the input boxes by this number - aligned (bool): if False, use the legacy implementation in - MMDetection, align_corners=True will be used in F.grid_sample. - If True, align the results more perfectly. - """ - - super(SimpleRoIAlign, self).__init__() - self.output_size = _pair(output_size) - self.spatial_scale = float(spatial_scale) - # to be consistent with other RoI ops - self.use_torchvision = False - self.aligned = aligned - - def forward(self, features, rois): - num_imgs = features.size(0) - num_rois = rois.size(0) - rel_roi_points = generate_grid( - num_rois, self.output_size, device=rois.device) - - if torch.onnx.is_in_onnx_export(): - rel_img_points = rel_roi_point_to_rel_img_point( - rois, rel_roi_points, features, self.spatial_scale) - rel_img_points = rel_img_points.reshape(num_imgs, -1, - *rel_img_points.shape[1:]) - point_feats = point_sample( - features, rel_img_points, align_corners=not self.aligned) - point_feats = point_feats.transpose(1, 2) - else: - point_feats = [] - for batch_ind in range(num_imgs): - # unravel batch dim - feat = features[batch_ind].unsqueeze(0) - inds = (rois[:, 0].long() == batch_ind) - if inds.any(): - rel_img_points = rel_roi_point_to_rel_img_point( - rois[inds], rel_roi_points[inds], feat, - self.spatial_scale).unsqueeze(0) - point_feat = point_sample( - feat, rel_img_points, align_corners=not self.aligned) - point_feat = point_feat.squeeze(0).transpose(0, 1) - point_feats.append(point_feat) - - point_feats = torch.cat(point_feats, dim=0) - - channels = features.size(1) - roi_feats = point_feats.reshape(num_rois, channels, *self.output_size) - - return roi_feats - - def __repr__(self): - format_str = self.__class__.__name__ - format_str += '(output_size={}, spatial_scale={}'.format( - self.output_size, self.spatial_scale) - return format_str diff --git a/spaces/goodeatmen/Test/README.md b/spaces/goodeatmen/Test/README.md deleted file mode 100644 index 79d29b5ffbe9164567777039d81f4490a52a06da..0000000000000000000000000000000000000000 --- a/spaces/goodeatmen/Test/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Test -emoji: 📈 -colorFrom: blue -colorTo: green -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/gordonchan/h2oo/loaders.py b/spaces/gordonchan/h2oo/loaders.py deleted file mode 100644 index 18e360e2bdc45e7bddfc6f0e24d1e9099ae2f73c..0000000000000000000000000000000000000000 --- a/spaces/gordonchan/h2oo/loaders.py +++ /dev/null @@ -1,61 +0,0 @@ -import functools - - -def get_loaders(model_name, reward_type, 
llama_type=None, load_gptq=''): - # NOTE: Some models need specific new prompt_type - # E.g. t5_xxl_true_nli_mixture has input format: "premise: PREMISE_TEXT hypothesis: HYPOTHESIS_TEXT".) - if load_gptq: - from transformers import AutoTokenizer - from auto_gptq import AutoGPTQForCausalLM - use_triton = False - functools.partial(AutoGPTQForCausalLM.from_quantized, quantize_config=None, use_triton=use_triton) - return AutoGPTQForCausalLM.from_quantized, AutoTokenizer - if llama_type is None: - llama_type = "llama" in model_name.lower() - if llama_type: - from transformers import LlamaForCausalLM, LlamaTokenizer - return LlamaForCausalLM.from_pretrained, LlamaTokenizer - elif 'distilgpt2' in model_name.lower(): - from transformers import AutoModelForCausalLM, AutoTokenizer - return AutoModelForCausalLM.from_pretrained, AutoTokenizer - elif 'gpt2' in model_name.lower(): - from transformers import GPT2LMHeadModel, GPT2Tokenizer - return GPT2LMHeadModel.from_pretrained, GPT2Tokenizer - elif 'mbart-' in model_name.lower(): - from transformers import MBartForConditionalGeneration, MBart50TokenizerFast - return MBartForConditionalGeneration.from_pretrained, MBart50TokenizerFast - elif 't5' == model_name.lower() or \ - 't5-' in model_name.lower() or \ - 'flan-' in model_name.lower(): - from transformers import AutoTokenizer, T5ForConditionalGeneration - return T5ForConditionalGeneration.from_pretrained, AutoTokenizer - elif 'bigbird' in model_name: - from transformers import BigBirdPegasusForConditionalGeneration, AutoTokenizer - return BigBirdPegasusForConditionalGeneration.from_pretrained, AutoTokenizer - elif 'bart-large-cnn-samsum' in model_name or 'flan-t5-base-samsum' in model_name: - from transformers import pipeline - return pipeline, "summarization" - elif reward_type or 'OpenAssistant/reward-model'.lower() in model_name.lower(): - from transformers import AutoModelForSequenceClassification, AutoTokenizer - return AutoModelForSequenceClassification.from_pretrained, AutoTokenizer - else: - from transformers import AutoTokenizer, AutoModelForCausalLM - model_loader = AutoModelForCausalLM - tokenizer_loader = AutoTokenizer - return model_loader.from_pretrained, tokenizer_loader - - -def get_tokenizer(tokenizer_loader, tokenizer_base_model, local_files_only, resume_download, use_auth_token): - tokenizer = tokenizer_loader.from_pretrained(tokenizer_base_model, - local_files_only=local_files_only, - resume_download=resume_download, - use_auth_token=use_auth_token, - padding_side='left') - - tokenizer.pad_token_id = 0 # different from the eos token - # when generating, we will use the logits of right-most token to predict the next token - # so the padding should be on the left, - # e.g. 
see: https://huggingface.co/transformers/v4.11.3/model_doc/t5.html#inference - tokenizer.padding_side = "left" # Allow batched inference - - return tokenizer diff --git a/spaces/gotiQspiryo/whisper-ui/Vst Tone2 Gladiator Crack.md b/spaces/gotiQspiryo/whisper-ui/Vst Tone2 Gladiator Crack.md deleted file mode 100644 index c4b4e8ad018f76dd1f14b39859460ac4ee8aa8ef..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/Vst Tone2 Gladiator Crack.md +++ /dev/null @@ -1,162 +0,0 @@ -## Vst Tone2 Gladiator Crack - - - - - - ![Vst Tone2 Gladiator Crack](https://topvst.wbl.sk/refx_nexus_expansion_pack.jpg) - - - - - -**LINK ➡ [https://vercupalo.blogspot.com/?d=2txnEz](https://vercupalo.blogspot.com/?d=2txnEz)** - - - - - - - - - - - - - -# How to Download and Install Vst Tone2 Gladiator Crack for Free - - - -Vst Tone2 Gladiator is a powerful virtual synthesizer that supports a wide range of synthesis methods, such as Harmonic-Content-Morphing™ (HCM), Frequency Modulation (FM), Pulse Width Modulation (PWM), and Subtractive[^2^]. It can create new and exciting sounds for your music production and composition. However, this plugin is not cheap and you may be looking for a way to download and install it for free. In this article, we will show you how to do that using a torrent file. - - - -## What is a Torrent File? - - - -A torrent file is a small file that contains information about a larger file that you want to download, such as Vst Tone2 Gladiator Crack. It does not contain the actual file, but it tells your torrent client where to find it and how to download it from other users who have it. A torrent client is a software that allows you to download and upload files using the BitTorrent protocol[^1^]. Some popular torrent clients are uTorrent, BitTorrent, and qBittorrent. - - - -## How to Download Vst Tone2 Gladiator Crack Torrent File? - - - -To download Vst Tone2 Gladiator Crack torrent file, you need to find a reliable website that offers it. One such website is vsttorrentz.net[^1^], which has a large collection of VST plugins and software. To download the torrent file from this website, follow these steps: - - - -1. Go to [https://vsttorrentz.net/tone2-gladiator-3-0-0-standalone-vsti-x86-x64/](https://vsttorrentz.net/tone2-gladiator-3-0-0-standalone-vsti-x86-x64/) - -2. Click on the "Download Torrent" button at the bottom of the page. - -3. Save the torrent file to your computer. - - - -## How to Install Vst Tone2 Gladiator Crack? - - - -To install Vst Tone2 Gladiator Crack, you need to have a torrent client installed on your computer. If you don't have one, you can download one from the links above. Once you have a torrent client, follow these steps: - - - -1. Open the torrent file with your torrent client. - -2. Select the destination folder where you want to save the Vst Tone2 Gladiator Crack files. - -3. Wait for the download to complete. - -4. Open the downloaded folder and run the setup.exe file. - -5. Follow the instructions on the screen to install Vst Tone2 Gladiator Crack. - -6. Enjoy your free virtual synthesizer! - - - -### Disclaimer - - - -This article is for educational purposes only. We do not condone or encourage piracy or illegal downloading of any software or plugin. Please support the developers of Vst Tone2 Gladiator by purchasing it from their official website[^2^]. - - - -## What are the Features of Vst Tone2 Gladiator? - - - -Vst Tone2 Gladiator is not just a regular virtual synthesizer, but a revolution in synthesis. 
It offers a wide range of features that make it stand out from other plugins. Some of these features are: - - - -- Innovative, new synthesis method (HCM™) that allows you to create unique sounds that no other synthesizer can produce. - -- Huge sonic range that covers everything from warm and smooth to crisp and edgy. - -- High-end quality that delivers warm, crystal clear and rich sound. - -- One of the most successful software synthesizers that has been used in many movies and chart hits. - -- Different synthesis methods that can be combined freely, such as FM, PWM, Subtractive, Vocoder, Waveshaping, Super-saw, Additive and more. - -- 40 filter types and 37 effects that are included, such as Reverbs, Delays, Flanger, Phaser, Chorus, Ensemble, Rotary, Bitcrusher, Talkbox, Ringmod, Dolby Prologic II surround encoding and more. - -- Four different interface sizes that you can choose from according to your preference. - -- Boundless possibilities that let you create any sound you can imagine with a powerful randomize-function and psychoacoustic processing. - -- Low CPU and high reliability that ensure a smooth performance and stability. - -- Over 1200 sounds from professional sound-designers that are ready to use or tweak. - -- Handbook in 5 languages that explains everything you need to know about the plugin. - -- Good value for money that gives you a lot of features and quality for a reasonable price. - -- Flexibility and expandability that allow you to customize and enhance your plugin with additional soundsets and expansions. - - - -## How to Use Vst Tone2 Gladiator? - - - -Vst Tone2 Gladiator is easy to use and intuitive. You can start by browsing through the presets and finding one that suits your needs. You can also use the search function to find a preset by name or category. You can then tweak the preset by adjusting the parameters on the interface or using the modulation matrix. You can also create your own sounds from scratch by choosing one of the synthesis methods and modifying the harmonic structure with the modules and algorithms. You can also layer up to four sounds together using the stack mode. You can then add some effects and filters to enhance your sound. You can also use the arpeggiator and trancegate to create rhythmic patterns and sequences. You can also use the microtuning feature to change the tuning system of your sound. You can also use the MIDI learn function to assign any parameter to your MIDI controller for easy control. You can also save your own presets and share them with others. - - - -### Tips and Tricks - - - -Here are some tips and tricks to help you get the most out of Vst Tone2 Gladiator: - - - -- Use the randomize-function to generate new sounds with a single click. You can also randomize specific sections or parameters by right-clicking on them. - -- Use the morph mode to smoothly transition between two sounds by moving the morph knob or using a MIDI controller. - -- Use the analog mode to add some warmth and character to your sound by emulating analog circuitry and components. - -- Use the psychoacoustic mode to enhance your sound with psychoacoustic processing that simulates how humans perceive sound. - -- Use the surround mode to create immersive spatial effects with Dolby Prologic II encoding. - -- Use the feedback mode to create complex feedback loops with different effects and filters. - -- Use the oscilloscope mode to visualize your sound waveforms in real-time. 
- - - - 1b8d091108 - - - - - diff --git a/spaces/gradio/HuBERT/fairseq/distributed/fully_sharded_data_parallel.py b/spaces/gradio/HuBERT/fairseq/distributed/fully_sharded_data_parallel.py deleted file mode 100644 index 8a96bfc76516682ac8e2b7e2c3bc2e6aa3d8ef0c..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/distributed/fully_sharded_data_parallel.py +++ /dev/null @@ -1,135 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import contextlib -from typing import Optional - -import torch -from fairseq.dataclass.configs import DistributedTrainingConfig -from fairseq.distributed import utils as dist_utils - - -try: - from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP - - has_FSDP = True -except ImportError: - FSDP = torch.nn.Module - has_FSDP = False - - -class FullyShardedDataParallel(FSDP): - """ - A small wrapper around fairscale's FullyShardedDataParallel (FSDP) with some - fairseq-specific checkpoint saving/loading logic. - - Args: - use_sharded_state (bool): if True, then ``state_dict`` will return - ``FSDP.local_state_dict`` and ``load_state_dict`` will call - ``FSDP.load_local_state_dict``. Otherwise, ``state_dict`` will - return the full model weights on data parallel rank 0 (empty on - other ranks) and ``load_state_dict`` will broadcast model weights - from rank 0 to other ranks. - """ - - def __init__(self, *args, use_sharded_state: bool = False, **kwargs): - if not has_FSDP: - raise ImportError( - "Cannot find FullyShardedDataParallel. " - "Please install fairscale with: pip install fairscale" - ) - super().__init__(*args, **kwargs) - self.use_sharded_state = use_sharded_state - - @property - def unwrapped_module(self) -> torch.nn.Module: - if self.flatten_parameters: - return self.module.module - else: - return self.module - - def state_dict(self, destination=None, prefix="", keep_vars=False): - if self.use_sharded_state: - return super().local_state_dict( - destination=destination, prefix=prefix, keep_vars=keep_vars - ) - else: - if self.rank == 0: - return super().state_dict( - destination=destination, prefix=prefix, keep_vars=keep_vars - ) - else: - # We must call state_dict() due to use of communication - # primitives. But we don't use the result. - super().state_dict() - return destination or {} - - def load_state_dict(self, state_dict, strict=True, model_cfg=None): - if self.use_sharded_state: - return super().load_local_state_dict(state_dict, strict=strict) - else: - state_dict = dist_utils.broadcast_object( - state_dict, src_rank=0, group=self.process_group - ) - return super().load_state_dict(state_dict, strict=strict) - - -@contextlib.contextmanager -def fsdp_enable_wrap(cfg: DistributedTrainingConfig): - try: - from fairscale.nn import enable_wrap - except ImportError: - raise ImportError( - "Cannot find FullyShardedDataParallel. 
" - "Please install fairscale with: pip install fairscale" - ) - if cfg.memory_efficient_fp16: - assert cfg.fp16 # memory_efficient_fp16 should imply fp16 - group = dist_utils.get_data_parallel_group() - if group is None and cfg.distributed_world_size == 1: - from fairscale.utils.testing import DummyProcessGroup - - group = DummyProcessGroup(rank=0, size=1) - fsdp_config = { - "process_group": group, - "reshard_after_forward": not cfg.no_reshard_after_forward, - "mixed_precision": cfg.fp16 and not cfg.memory_efficient_fp16, - "fp32_reduce_scatter": cfg.fp32_reduce_scatter, - "flatten_parameters": True, - "cpu_offload": cfg.cpu_offload, - "compute_dtype": torch.float16 if cfg.fp16 else torch.float32, - "bucket_cap_mb": cfg.bucket_cap_mb, - "state_dict_device": torch.device("cpu"), # reduce GPU mem usage - } - with enable_wrap( - wrapper_cls=FullyShardedDataParallel, - use_sharded_state=cfg.use_sharded_state, - **fsdp_config, - ): - yield - - -def fsdp_wrap(module, min_num_params: Optional[int] = None, **kwargs): - """ - Helper to wrap layers/modules in FSDP. This falls back to a no-op if - fairscale is not available. - - Args: - module (nn.Module): module to (maybe) wrap - min_num_params (int, Optional): minimum number of layer params to wrap - """ - try: - from fairscale.nn import wrap - - if min_num_params is not None: - num_params = sum(p.numel() for p in module.parameters()) - if num_params >= min_num_params: - return wrap(module, **kwargs) - else: - return module - else: - return wrap(module, **kwargs) - except ImportError: - return module diff --git a/spaces/hahahafofo/ChatGLM-Chinese-Summary/ui/summary.py b/spaces/hahahafofo/ChatGLM-Chinese-Summary/ui/summary.py deleted file mode 100644 index 66e607599c5db69dc451903e49f2649a4b37e276..0000000000000000000000000000000000000000 --- a/spaces/hahahafofo/ChatGLM-Chinese-Summary/ui/summary.py +++ /dev/null @@ -1,179 +0,0 @@ -import re - -import gradio as gr -from typing import List -from models import models -from loguru import logger -import re - -PROMPT_TEMPLATE = """\ -使用中文{query_str}: -{context_str} -""" - - -def get_text_lines(input_txt: str) -> List[str]: - lines = input_txt.splitlines() - lines = [line.strip() for line in lines if line.strip()] - return lines - - -stop_chars_set = { - '.', '!', '?', '。', '!', '?', '…', ';', ';', ':', ':', - '”', '’', ')', '】', '》', '」', '』', '〕', '〉', - '》', '〗', '〞', '〟', '»', '"', "'", ')', ']', '}' -} - - -def split_in_line(input_txt: str, limit_length: int) -> List[str]: - new_text = '' - contents = [] - outputs = [] - for text in input_txt: - new_text += text - if text in stop_chars_set: - contents.append(new_text) - # logger.debug(f"{new_text}") - new_text = '' - # logger.debug(f"{input_txt[-1]} {input_txt[-1] not in stop_chars_set} {new_text}") - if input_txt[-1] not in stop_chars_set: - contents.append(new_text) - - text = "" - text_length = 0 - for idx, content in enumerate(contents): - text += content - text_length += len(content) - if text_length >= limit_length: - outputs.append(text) - text = "" - text_length = 0 - if text_length < limit_length: - outputs.append(text) - return outputs - - -def get_text_limit_length(input_txt: str, max_length: int = 2048) -> List[str]: - lines = get_text_lines(input_txt) - output: List[str] = [] - for line in lines: - if len(line) <= max_length: - output.append(line) - else: - text_lines = split_in_line(line, max_length) - logger.debug(f"split in line: {len(text_lines)}") - # logger.debug(f"{line} ==> {text_lines}") - output.extend(text_lines) - return 
output - - -def split_input_text(input_txt, strip_input_lines=0, max_length=2048): - if strip_input_lines > 0: - pattern = r'[\r\n]{' + str(strip_input_lines) + r',}' - re.compile(pattern=pattern) - logger.debug(f"strip input txt: {pattern}") - input_txt = re.sub(pattern, '', input_txt) - lines = get_text_limit_length(input_txt, max_length) - logger.debug(f"split input txt: {len(lines)}") - return "\n\n\n".join(lines) - - -def gen_keyword_summary(input_txt, keyword_prompt, summary_prompt, max_length=2048): - lines = input_txt.split("\n\n\n") - keywords_output = [] - for line in lines: - keywords = models.llm_model.generate_answer( - keyword_prompt, - line, - history=None, - max_length=max_length, - prompt_template=PROMPT_TEMPLATE - )[0] - logger.debug(f"text len: {len(line)} ==> {keywords}") - keywords_output.extend(keywords.split()) - keywords_output = [keyword.strip() for keyword in keywords_output if keyword.strip() != ""] - keywords_output = list(set(keywords_output)) - return f"保留关键词:{' '.join(keywords_output)},{summary_prompt}" - - -def gen_summary(input_txt, summary_prompt, max_length=2048): - lines = input_txt.split("\n\n\n") - output_summary = [] - summary = "" - for idx, line in enumerate(lines): - if idx == 1: - summary = models.llm_model.generate_answer( - summary_prompt, - line, - history=None, - max_length=max_length, - prompt_template=PROMPT_TEMPLATE - )[0] - logger.debug(f"text len: {len(line)} ==> {summary}") - else: - summary = models.llm_model.generate_answer( - summary_prompt, - f"{summary}{line}", - history=None, - max_length=max_length, - prompt_template=PROMPT_TEMPLATE - )[0] - logger.debug(f"summary: {len(summary)} + text: {len(line)} ==> {summary}") - output_summary.append(summary) - - return "\n\n\n".join(output_summary) - - -def summary_ui(): - with gr.Row(): - with gr.Column(scale=1): - line_max_length = gr.Slider(minimum=512, maximum=4096, step=1, value=1024, label="每行最大长度") - strip_input_lines = gr.Slider( - label="去除输入文本连续的空行(0:不除去)", - minimum=1, - maximum=10, - step=1, - value=0 - ) - with gr.Column(scale=4): - keyword_prompt = gr.Textbox( - lines=1, - label="抽取关键词", - value="抽取以下内容的人物和地点:", - placeholder="请输入抽取关键词的Prompt" - ) - summary_prompt = gr.Textbox( - lines=2, - label="生成摘要", - value="生成以下内容的摘要:", - placeholder="请输入生成摘要的Prompt" - ) - keyword_summary_prompt = gr.Textbox(lines=4, label="关键词+摘要", placeholder="请输入关键词+摘要的Prompt") - - with gr.Row(): - input_text = gr.Textbox(lines=20, max_lines=60, label="输入文本", placeholder="请输入文本") - split_text = gr.Textbox(lines=20, max_lines=60, label="分段文本", placeholder="请输入分段文本") - summary = gr.Textbox(lines=20, max_lines=60, label="生成摘要", placeholder="请输入生成摘要的Prompt") - - with gr.Row(): - btn_split = gr.Button("分段") - btn_keyword = gr.Button("提取关键词") - btn_summary = gr.Button("生成摘要") - - btn_split.click( - split_input_text, - inputs=[input_text, strip_input_lines, line_max_length], - outputs=[split_text] - ) - - btn_summary.click( - gen_summary, - inputs=[split_text, keyword_summary_prompt, line_max_length], - outputs=[summary] - ) - - btn_keyword.click( - gen_keyword_summary, - inputs=[split_text, keyword_prompt, summary_prompt, line_max_length], - outputs=[keyword_summary_prompt] - ) diff --git a/spaces/haoqi7/research/lrt/utils/__init__.py b/spaces/haoqi7/research/lrt/utils/__init__.py deleted file mode 100644 index 8eb66b345e2c966653a5aacf540d35dabe2ca4a9..0000000000000000000000000000000000000000 --- a/spaces/haoqi7/research/lrt/utils/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .functions import 
__create_model__ -from .union_find import UnionFind -from .article import ArticleList, Article \ No newline at end of file diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/data/datasets/coco.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/data/datasets/coco.py deleted file mode 100644 index f6f099e778e34cf89d267e13424d4f69240b7878..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/data/datasets/coco.py +++ /dev/null @@ -1,466 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import contextlib -import datetime -import io -import json -import logging -import numpy as np -import os -import pycocotools.mask as mask_util -from fvcore.common.file_io import PathManager, file_lock -from fvcore.common.timer import Timer -from PIL import Image - -from detectron2.structures import Boxes, BoxMode, PolygonMasks - -from .. import DatasetCatalog, MetadataCatalog - -""" -This file contains functions to parse COCO-format annotations into dicts in "Detectron2 format". -""" - - -logger = logging.getLogger(__name__) - -__all__ = ["load_coco_json", "load_sem_seg", "convert_to_coco_json"] - - -def load_coco_json(json_file, image_root, dataset_name=None, extra_annotation_keys=None): - """ - Load a json file with COCO's instances annotation format. - Currently supports instance detection, instance segmentation, - and person keypoints annotations. - - Args: - json_file (str): full path to the json file in COCO instances annotation format. - image_root (str or path-like): the directory where the images in this json file exists. - dataset_name (str): the name of the dataset (e.g., coco_2017_train). - If provided, this function will also put "thing_classes" into - the metadata associated with this dataset. - extra_annotation_keys (list[str]): list of per-annotation keys that should also be - loaded into the dataset dict (besides "iscrowd", "bbox", "keypoints", - "category_id", "segmentation"). The values for these keys will be returned as-is. - For example, the densepose annotations are loaded in this way. - - Returns: - list[dict]: a list of dicts in Detectron2 standard dataset dicts format. (See - `Using Custom Datasets `_ ) - - Notes: - 1. This function does not read the image files. - The results do not have the "image" field. - """ - from pycocotools.coco import COCO - - timer = Timer() - json_file = PathManager.get_local_path(json_file) - with contextlib.redirect_stdout(io.StringIO()): - coco_api = COCO(json_file) - if timer.seconds() > 1: - logger.info("Loading {} takes {:.2f} seconds.".format(json_file, timer.seconds())) - - id_map = None - if dataset_name is not None: - meta = MetadataCatalog.get(dataset_name) - cat_ids = sorted(coco_api.getCatIds()) - cats = coco_api.loadCats(cat_ids) - # The categories in a custom json file may not be sorted. - thing_classes = [c["name"] for c in sorted(cats, key=lambda x: x["id"])] - meta.thing_classes = thing_classes - - # In COCO, certain category ids are artificially removed, - # and by convention they are always ignored. - # We deal with COCO's id issue and translate - # the category ids to contiguous ids in [0, 80). 
- - # It works by looking at the "categories" field in the json, therefore - # if users' own json also have incontiguous ids, we'll - # apply this mapping as well but print a warning. - if not (min(cat_ids) == 1 and max(cat_ids) == len(cat_ids)): - if "coco" not in dataset_name: - logger.warning( - """ -Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you. -""" - ) - id_map = {v: i for i, v in enumerate(cat_ids)} - meta.thing_dataset_id_to_contiguous_id = id_map - - # sort indices for reproducible results - img_ids = sorted(coco_api.imgs.keys()) - # imgs is a list of dicts, each looks something like: - # {'license': 4, - # 'url': 'http://farm6.staticflickr.com/5454/9413846304_881d5e5c3b_z.jpg', - # 'file_name': 'COCO_val2014_000000001268.jpg', - # 'height': 427, - # 'width': 640, - # 'date_captured': '2013-11-17 05:57:24', - # 'id': 1268} - imgs = coco_api.loadImgs(img_ids) - # anns is a list[list[dict]], where each dict is an annotation - # record for an object. The inner list enumerates the objects in an image - # and the outer list enumerates over images. Example of anns[0]: - # [{'segmentation': [[192.81, - # 247.09, - # ... - # 219.03, - # 249.06]], - # 'area': 1035.749, - # 'iscrowd': 0, - # 'image_id': 1268, - # 'bbox': [192.81, 224.8, 74.73, 33.43], - # 'category_id': 16, - # 'id': 42986}, - # ...] - anns = [coco_api.imgToAnns[img_id] for img_id in img_ids] - - if "minival" not in json_file: - # The popular valminusminival & minival annotations for COCO2014 contain this bug. - # However the ratio of buggy annotations there is tiny and does not affect accuracy. - # Therefore we explicitly white-list them. - ann_ids = [ann["id"] for anns_per_image in anns for ann in anns_per_image] - assert len(set(ann_ids)) == len(ann_ids), "Annotation ids in '{}' are not unique!".format( - json_file - ) - - imgs_anns = list(zip(imgs, anns)) - - logger.info("Loaded {} images in COCO format from {}".format(len(imgs_anns), json_file)) - - dataset_dicts = [] - - ann_keys = ["iscrowd", "bbox", "keypoints", "category_id"] + (extra_annotation_keys or []) - - num_instances_without_valid_segmentation = 0 - - for (img_dict, anno_dict_list) in imgs_anns: - record = {} - record["file_name"] = os.path.join(image_root, img_dict["file_name"]) - record["height"] = img_dict["height"] - record["width"] = img_dict["width"] - image_id = record["image_id"] = img_dict["id"] - - objs = [] - for anno in anno_dict_list: - # Check that the image_id in this annotation is the same as - # the image_id we're looking at. - # This fails only when the data parsing logic or the annotation file is buggy. - - # The original COCO valminusminival2014 & minival2014 annotation files - # actually contains bugs that, together with certain ways of using COCO API, - # can trigger this assertion. - assert anno["image_id"] == image_id - - assert anno.get("ignore", 0) == 0, '"ignore" in COCO json file is not supported.' 
- - obj = {key: anno[key] for key in ann_keys if key in anno} - - segm = anno.get("segmentation", None) - if segm: # either list[list[float]] or dict(RLE) - if not isinstance(segm, dict): - # filter out invalid polygons (< 3 points) - segm = [poly for poly in segm if len(poly) % 2 == 0 and len(poly) >= 6] - if len(segm) == 0: - num_instances_without_valid_segmentation += 1 - continue # ignore this instance - obj["segmentation"] = segm - - keypts = anno.get("keypoints", None) - if keypts: # list[int] - for idx, v in enumerate(keypts): - if idx % 3 != 2: - # COCO's segmentation coordinates are floating points in [0, H or W], - # but keypoint coordinates are integers in [0, H-1 or W-1] - # Therefore we assume the coordinates are "pixel indices" and - # add 0.5 to convert to floating point coordinates. - keypts[idx] = v + 0.5 - obj["keypoints"] = keypts - - obj["bbox_mode"] = BoxMode.XYWH_ABS - if id_map: - obj["category_id"] = id_map[obj["category_id"]] - objs.append(obj) - record["annotations"] = objs - dataset_dicts.append(record) - - if num_instances_without_valid_segmentation > 0: - logger.warning( - "Filtered out {} instances without valid segmentation. " - "There might be issues in your dataset generation process.".format( - num_instances_without_valid_segmentation - ) - ) - return dataset_dicts - - -def load_sem_seg(gt_root, image_root, gt_ext="png", image_ext="jpg"): - """ - Load semantic segmentation data. All files under "gt_root" with "gt_ext" extension are - treated as ground truth annotations and all files under "image_root" with "image_ext" extension - as input images. Ground truth and input images are matched using file paths relative to - "gt_root" and "image_root" respectively without taking into account file extensions. - This works for COCO as well as some other data. - - Args: - gt_root (str): full path to ground truth semantic segmentation files. Semantic segmentation - annotations are stored as images with integer values in pixels that represent - corresponding semantic labels. - image_root (str): the directory where the input images are. - gt_ext (str): file extension for ground truth annotations. - image_ext (str): file extension for input images. - - Returns: - list[dict]: - a list of dicts in detectron2 standard format without instance-level - annotation. - - Notes: - 1. This function does not read the image and ground truth files. - The results do not have the "image" and "sem_seg" fields. - """ - - # We match input images with ground truth based on their relative filepaths (without file - # extensions) starting from 'image_root' and 'gt_root' respectively. 
- def file2id(folder_path, file_path): - # extract relative path starting from `folder_path` - image_id = os.path.normpath(os.path.relpath(file_path, start=folder_path)) - # remove file extension - image_id = os.path.splitext(image_id)[0] - return image_id - - input_files = sorted( - (os.path.join(image_root, f) for f in PathManager.ls(image_root) if f.endswith(image_ext)), - key=lambda file_path: file2id(image_root, file_path), - ) - gt_files = sorted( - (os.path.join(gt_root, f) for f in PathManager.ls(gt_root) if f.endswith(gt_ext)), - key=lambda file_path: file2id(gt_root, file_path), - ) - - assert len(gt_files) > 0, "No annotations found in {}.".format(gt_root) - - # Use the intersection, so that val2017_100 annotations can run smoothly with val2017 images - if len(input_files) != len(gt_files): - logger.warn( - "Directory {} and {} has {} and {} files, respectively.".format( - image_root, gt_root, len(input_files), len(gt_files) - ) - ) - input_basenames = [os.path.basename(f)[: -len(image_ext)] for f in input_files] - gt_basenames = [os.path.basename(f)[: -len(gt_ext)] for f in gt_files] - intersect = list(set(input_basenames) & set(gt_basenames)) - # sort, otherwise each worker may obtain a list[dict] in different order - intersect = sorted(intersect) - logger.warn("Will use their intersection of {} files.".format(len(intersect))) - input_files = [os.path.join(image_root, f + image_ext) for f in intersect] - gt_files = [os.path.join(gt_root, f + gt_ext) for f in intersect] - - logger.info( - "Loaded {} images with semantic segmentation from {}".format(len(input_files), image_root) - ) - - dataset_dicts = [] - for (img_path, gt_path) in zip(input_files, gt_files): - record = {} - record["file_name"] = img_path - record["sem_seg_file_name"] = gt_path - dataset_dicts.append(record) - - return dataset_dicts - - -def convert_to_coco_dict(dataset_name): - """ - Convert an instance detection/segmentation or keypoint detection dataset - in detectron2's standard format into COCO json format. - - Generic dataset description can be found here: - https://detectron2.readthedocs.io/tutorials/datasets.html#register-a-dataset - - COCO data format description can be found here: - http://cocodataset.org/#format-data - - Args: - dataset_name (str): - name of the source dataset - Must be registered in DatastCatalog and in detectron2's standard format. 
- Must have corresponding metadata "thing_classes" - Returns: - coco_dict: serializable dict in COCO json format - """ - - dataset_dicts = DatasetCatalog.get(dataset_name) - metadata = MetadataCatalog.get(dataset_name) - - # unmap the category mapping ids for COCO - if hasattr(metadata, "thing_dataset_id_to_contiguous_id"): - reverse_id_mapping = {v: k for k, v in metadata.thing_dataset_id_to_contiguous_id.items()} - reverse_id_mapper = lambda contiguous_id: reverse_id_mapping[contiguous_id] # noqa - else: - reverse_id_mapper = lambda contiguous_id: contiguous_id # noqa - - categories = [ - {"id": reverse_id_mapper(id), "name": name} - for id, name in enumerate(metadata.thing_classes) - ] - - logger.info("Converting dataset dicts into COCO format") - coco_images = [] - coco_annotations = [] - - for image_id, image_dict in enumerate(dataset_dicts): - coco_image = { - "id": image_dict.get("image_id", image_id), - "width": image_dict["width"], - "height": image_dict["height"], - "file_name": image_dict["file_name"], - } - coco_images.append(coco_image) - - anns_per_image = image_dict["annotations"] - for annotation in anns_per_image: - # create a new dict with only COCO fields - coco_annotation = {} - - # COCO requirement: XYWH box format - bbox = annotation["bbox"] - bbox_mode = annotation["bbox_mode"] - bbox = BoxMode.convert(bbox, bbox_mode, BoxMode.XYWH_ABS) - - # COCO requirement: instance area - if "segmentation" in annotation: - # Computing areas for instances by counting the pixels - segmentation = annotation["segmentation"] - # TODO: check segmentation type: RLE, BinaryMask or Polygon - if isinstance(segmentation, list): - polygons = PolygonMasks([segmentation]) - area = polygons.area()[0].item() - elif isinstance(segmentation, dict): # RLE - area = mask_util.area(segmentation).item() - else: - raise TypeError(f"Unknown segmentation type {type(segmentation)}!") - else: - # Computing areas using bounding boxes - bbox_xy = BoxMode.convert(bbox, BoxMode.XYWH_ABS, BoxMode.XYXY_ABS) - area = Boxes([bbox_xy]).area()[0].item() - - if "keypoints" in annotation: - keypoints = annotation["keypoints"] # list[int] - for idx, v in enumerate(keypoints): - if idx % 3 != 2: - # COCO's segmentation coordinates are floating points in [0, H or W], - # but keypoint coordinates are integers in [0, H-1 or W-1] - # For COCO format consistency we substract 0.5 - # https://github.com/facebookresearch/detectron2/pull/175#issuecomment-551202163 - keypoints[idx] = v - 0.5 - if "num_keypoints" in annotation: - num_keypoints = annotation["num_keypoints"] - else: - num_keypoints = sum(kp > 0 for kp in keypoints[2::3]) - - # COCO requirement: - # linking annotations to images - # "id" field must start with 1 - coco_annotation["id"] = len(coco_annotations) + 1 - coco_annotation["image_id"] = coco_image["id"] - coco_annotation["bbox"] = [round(float(x), 3) for x in bbox] - coco_annotation["area"] = float(area) - coco_annotation["iscrowd"] = annotation.get("iscrowd", 0) - coco_annotation["category_id"] = reverse_id_mapper(annotation["category_id"]) - - # Add optional fields - if "keypoints" in annotation: - coco_annotation["keypoints"] = keypoints - coco_annotation["num_keypoints"] = num_keypoints - - if "segmentation" in annotation: - coco_annotation["segmentation"] = annotation["segmentation"] - if isinstance(coco_annotation["segmentation"], dict): # RLE - coco_annotation["segmentation"]["counts"] = coco_annotation["segmentation"][ - "counts" - ].decode("ascii") - - coco_annotations.append(coco_annotation) - - 
logger.info( - "Conversion finished, " - f"#images: {len(coco_images)}, #annotations: {len(coco_annotations)}" - ) - - info = { - "date_created": str(datetime.datetime.now()), - "description": "Automatically generated COCO json file for Detectron2.", - } - coco_dict = { - "info": info, - "images": coco_images, - "annotations": coco_annotations, - "categories": categories, - "licenses": None, - } - return coco_dict - - -def convert_to_coco_json(dataset_name, output_file, allow_cached=True): - """ - Converts dataset into COCO format and saves it to a json file. - dataset_name must be registered in DatasetCatalog and in detectron2's standard format. - - Args: - dataset_name: - reference from the config file to the catalogs - must be registered in DatasetCatalog and in detectron2's standard format - output_file: path of json file that will be saved to - allow_cached: if json file is already present then skip conversion - """ - - # TODO: The dataset or the conversion script *may* change, - # a checksum would be useful for validating the cached data - - PathManager.mkdirs(os.path.dirname(output_file)) - with file_lock(output_file): - if PathManager.exists(output_file) and allow_cached: - logger.warning( - f"Using previously cached COCO format annotations at '{output_file}'. " - "You need to clear the cache file if your dataset has been modified." - ) - else: - logger.info(f"Converting annotations of dataset '{dataset_name}' to COCO format ...)") - coco_dict = convert_to_coco_dict(dataset_name) - - logger.info(f"Caching COCO format annotations at '{output_file}' ...") - with PathManager.open(output_file, "w") as f: - json.dump(coco_dict, f) - - -if __name__ == "__main__": - """ - Test the COCO json dataset loader. - - Usage: - python -m detectron2.data.data.coco \ - path/to/json path/to/image_root dataset_name - - "dataset_name" can be "coco_2014_minival_100", or other - pre-registered ones - """ - from detectron2.utils.logger import setup_logger - from detectron2.utils.visualizer import Visualizer - import detectron2.data.datasets # noqa # add pre-defined metadata - import sys - - logger = setup_logger(name=__name__) - assert sys.argv[3] in DatasetCatalog.list() - meta = MetadataCatalog.get(sys.argv[3]) - - dicts = load_coco_json(sys.argv[1], sys.argv[2], sys.argv[3]) - logger.info("Done loading {} samples.".format(len(dicts))) - - dirname = "coco-data-vis" - os.makedirs(dirname, exist_ok=True) - for d in dicts: - img = np.array(Image.open(d["file_name"])) - visualizer = Visualizer(img, metadata=meta) - vis = visualizer.draw_dataset_dict(d) - fpath = os.path.join(dirname, os.path.basename(d["file_name"])) - vis.save(fpath) diff --git a/spaces/hdhzk/bingo/src/pages/api/sydney.ts b/spaces/hdhzk/bingo/src/pages/api/sydney.ts deleted file mode 100644 index 0e7bbf23d77c2e1a6635185a060eeee58b8c8e66..0000000000000000000000000000000000000000 --- a/spaces/hdhzk/bingo/src/pages/api/sydney.ts +++ /dev/null @@ -1,62 +0,0 @@ -import { NextApiRequest, NextApiResponse } from 'next' -import { WebSocket, debug } from '@/lib/isomorphic' -import { BingWebBot } from '@/lib/bots/bing' -import { websocketUtils } from '@/lib/bots/bing/utils' -import { WatchDog, createHeaders } from '@/lib/utils' - - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - const conversationContext = req.body - const headers = createHeaders(req.cookies) - debug(headers) - res.setHeader('Content-Type', 'text/stream; charset=UTF-8') - - const ws = new WebSocket('wss://sydney.bing.com/sydney/ChatHub', { - 
headers: { - ...headers, - 'accept-language': 'zh-CN,zh;q=0.9', - 'cache-control': 'no-cache', - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - pragma: 'no-cache', - } - }) - - const closeDog = new WatchDog() - const timeoutDog = new WatchDog() - ws.onmessage = (event) => { - timeoutDog.watch(() => { - ws.send(websocketUtils.packMessage({ type: 6 })) - }, 1500) - closeDog.watch(() => { - ws.close() - }, 10000) - res.write(event.data) - if (/\{"type":([367])\}/.test(String(event.data))) { - const type = parseInt(RegExp.$1, 10) - debug('connection type', type) - if (type === 3) { - ws.close() - } else { - ws.send(websocketUtils.packMessage({ type })) - } - } - } - - ws.onclose = () => { - timeoutDog.reset() - closeDog.reset() - debug('connection close') - res.end() - } - - await new Promise((resolve) => ws.onopen = resolve) - ws.send(websocketUtils.packMessage({ protocol: 'json', version: 1 })) - ws.send(websocketUtils.packMessage({ type: 6 })) - ws.send(websocketUtils.packMessage(BingWebBot.buildChatRequest(conversationContext!))) - req.socket.once('close', () => { - ws.close() - if (!res.closed) { - res.end() - } - }) -} diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/architectural_variants/__init__.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/architectural_variants/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/hugggof/vampnet/scripts/exp/fine_tune.py b/spaces/hugggof/vampnet/scripts/exp/fine_tune.py deleted file mode 100644 index af82fcc7f260607a2efff9fad419271ad1a203d8..0000000000000000000000000000000000000000 --- a/spaces/hugggof/vampnet/scripts/exp/fine_tune.py +++ /dev/null @@ -1,81 +0,0 @@ -import argbind -from pathlib import Path -import yaml -from typing import List - - - - -"""example output: (yaml) - -""" - -@argbind.bind(without_prefix=True, positional=True) -def fine_tune(audio_files_or_folders: List[str], name: str): - - conf_dir = Path("conf") - assert conf_dir.exists(), "conf directory not found. are you in the vampnet directory?" 
- - conf_dir = conf_dir / "generated" - conf_dir.mkdir(exist_ok=True) - - finetune_dir = conf_dir / name - finetune_dir.mkdir(exist_ok=True) - - finetune_c2f_conf = { - "$include": ["conf/lora/lora.yml"], - "fine_tune": True, - "train/AudioLoader.sources": audio_files_or_folders, - "val/AudioLoader.sources": audio_files_or_folders, - "VampNet.n_codebooks": 14, - "VampNet.n_conditioning_codebooks": 4, - "VampNet.embedding_dim": 1280, - "VampNet.n_layers": 16, - "VampNet.n_heads": 20, - "AudioDataset.duration": 3.0, - "AudioDataset.loudness_cutoff": -40.0, - "save_path": f"./runs/{name}/c2f", - "fine_tune_checkpoint": "./models/vampnet/c2f.pth" - } - - finetune_coarse_conf = { - "$include": ["conf/lora/lora.yml"], - "fine_tune": True, - "train/AudioLoader.sources": audio_files_or_folders, - "val/AudioLoader.sources": audio_files_or_folders, - "save_path": f"./runs/{name}/coarse", - "fine_tune_checkpoint": "./models/vampnet/coarse.pth" - } - - interface_conf = { - "Interface.coarse_ckpt": f"./runs/{name}/coarse/latest/vampnet/weights.pth", - - "Interface.coarse2fine_ckpt": f"./runs/{name}/c2f/latest/vampnet/weights.pth", - "Interface.wavebeat_ckpt": "./models/wavebeat.pth", - - "Interface.codec_ckpt": "./models/vampnet/codec.pth", - "AudioLoader.sources": [audio_files_or_folders], - } - - # save the confs - with open(finetune_dir / "c2f.yml", "w") as f: - yaml.dump(finetune_c2f_conf, f) - - with open(finetune_dir / "coarse.yml", "w") as f: - yaml.dump(finetune_coarse_conf, f) - - with open(finetune_dir / "interface.yml", "w") as f: - yaml.dump(interface_conf, f) - - - print(f"generated confs in {finetune_dir}. run training jobs with `python scripts/exp/train.py --args.load {finetune_dir}/.yml` ") - -if __name__ == "__main__": - args = argbind.parse_args() - - with argbind.scope(args): - fine_tune() - - - - \ No newline at end of file diff --git a/spaces/ibm-nasa-geospatial/Prithvi-100M-sen1floods11-demo/app.py b/spaces/ibm-nasa-geospatial/Prithvi-100M-sen1floods11-demo/app.py deleted file mode 100644 index 07d2738bcd791d7affb3bc3fbed8c610f0427c20..0000000000000000000000000000000000000000 --- a/spaces/ibm-nasa-geospatial/Prithvi-100M-sen1floods11-demo/app.py +++ /dev/null @@ -1,226 +0,0 @@ -######### pull files -import os -from huggingface_hub import hf_hub_download -config_path=hf_hub_download(repo_id="ibm-nasa-geospatial/Prithvi-100M-sen1floods11", filename="sen1floods11_Prithvi_100M.py", token=os.environ.get("token")) -ckpt=hf_hub_download(repo_id="ibm-nasa-geospatial/Prithvi-100M-sen1floods11", filename='sen1floods11_Prithvi_100M.pth', token=os.environ.get("token")) -########## - - -import argparse -from mmcv import Config - -from mmseg.models import build_segmentor - -from mmseg.datasets.pipelines import Compose, LoadImageFromFile - -import rasterio -import torch - -from mmseg.apis import init_segmentor - -from mmcv.parallel import collate, scatter - -import numpy as np -import glob -import os - -import time - -import numpy as np -import gradio as gr -from functools import partial - -import pdb - -import matplotlib.pyplot as plt - -from skimage import exposure - -def stretch_rgb(rgb): - - ls_pct=1 - pLow, pHigh = np.percentile(rgb[~np.isnan(rgb)], (ls_pct,100-ls_pct)) - img_rescale = exposure.rescale_intensity(rgb, in_range=(pLow,pHigh)) - - return img_rescale - - -def open_tiff(fname): - - with rasterio.open(fname, "r") as src: - - data = src.read() - - return data - -def write_tiff(img_wrt, filename, metadata): - - """ - It writes a raster image to file. 
- - :param img_wrt: numpy array containing the data (can be 2D for single band or 3D for multiple bands) - :param filename: file path to the output file - :param metadata: metadata to use to write the raster to disk - :return: - """ - - with rasterio.open(filename, "w", **metadata) as dest: - - if len(img_wrt.shape) == 2: - - img_wrt = img_wrt[None] - - for i in range(img_wrt.shape[0]): - dest.write(img_wrt[i, :, :], i + 1) - - return filename - - -def get_meta(fname): - - with rasterio.open(fname, "r") as src: - - meta = src.meta - - return meta - -def preprocess_example(example_list): - - example_list = [os.path.join(os.path.abspath(''), x) for x in example_list] - - return example_list - - -def inference_segmentor(model, imgs, custom_test_pipeline=None): - """Inference image(s) with the segmentor. - - Args: - model (nn.Module): The loaded segmentor. - imgs (str/ndarray or list[str/ndarray]): Either image files or loaded - images. - - Returns: - (list[Tensor]): The segmentation result. - """ - cfg = model.cfg - device = next(model.parameters()).device # model device - # build the data pipeline - test_pipeline = [LoadImageFromFile()] + cfg.data.test.pipeline[1:] if custom_test_pipeline == None else custom_test_pipeline - test_pipeline = Compose(test_pipeline) - # prepare data - data = [] - imgs = imgs if isinstance(imgs, list) else [imgs] - for img in imgs: - img_data = {'img_info': {'filename': img}} - img_data = test_pipeline(img_data) - data.append(img_data) - # print(data.shape) - - data = collate(data, samples_per_gpu=len(imgs)) - if next(model.parameters()).is_cuda: - # data = collate(data, samples_per_gpu=len(imgs)) - # scatter to specified GPU - data = scatter(data, [device])[0] - else: - # img_metas = scatter(data['img_metas'],'cpu') - # data['img_metas'] = [i.data[0] for i in data['img_metas']] - - img_metas = data['img_metas'].data[0] - img = data['img'] - data = {'img': img, 'img_metas':img_metas} - - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - return result - - -def inference_on_file(target_image, model, custom_test_pipeline): - - target_image = target_image.name - - time_taken=-1 - - st = time.time() - print('Running inference...') - result = inference_segmentor(model, target_image, custom_test_pipeline) - - print("Output has shape: " + str(result[0].shape)) - - ##### prep outputs - mask = open_tiff(target_image) - rgb = stretch_rgb((mask[[3, 2, 1], :, :].transpose((1,2,0))/10000*255).astype(np.uint8)) - meta = get_meta(target_image) - mask = np.where(mask == meta['nodata'], 1, 0) - mask = np.max(mask, axis=0)[None] - rgb = np.where(mask.transpose((1,2,0)) == 1, 0, rgb) - rgb = np.where(rgb < 0, 0, rgb) - rgb = np.where(rgb > 255, 255, rgb) - - prediction = np.where(mask == 1, 0, result[0]*255) - et = time.time() - time_taken = np.round(et - st, 1) - print(f'Inference completed in {str(time_taken)} seconds') - - return rgb, prediction[0] - -def process_test_pipeline(custom_test_pipeline, bands=None): - - # change extracted bands if necessary - if bands is not None: - - extract_index = [i for i, x in enumerate(custom_test_pipeline) if x['type'] == 'BandsExtract' ] - - if len(extract_index) > 0: - - custom_test_pipeline[extract_index[0]]['bands'] = eval(bands) - - collect_index = [i for i, x in enumerate(custom_test_pipeline) if x['type'].find('Collect') > -1] - - # adapt collected keys if necessary - if len(collect_index) > 0: - - keys = ['img_info', 'filename', 'ori_filename', 'img', 'img_shape', 'ori_shape', 'pad_shape', 'scale_factor', 
'img_norm_cfg']
-        custom_test_pipeline[collect_index[0]]['meta_keys'] = keys
-
-    return custom_test_pipeline
-
-config = Config.fromfile(config_path)
-config.model.backbone.pretrained=None
-model = init_segmentor(config, ckpt, device='cpu')
-custom_test_pipeline=process_test_pipeline(model.cfg.data.test.pipeline, None)
-
-func = partial(inference_on_file, model=model, custom_test_pipeline=custom_test_pipeline)
-
-with gr.Blocks() as demo:
-
-    gr.Markdown(value='# Prithvi sen1floods11')
-    gr.Markdown(value='''Prithvi is a first-of-its-kind temporal Vision transformer pretrained by the IBM and NASA team on continental US Harmonised Landsat Sentinel 2 (HLS) data. This demo showcases how the model was finetuned to detect water at a higher resolution than it was trained on (i.e. 10m versus 30m) using Sentinel 2 imagery from the [sen1floods11 dataset](https://github.com/cloudtostreet/Sen1Floods11). More details can be found [here](https://huggingface.co/ibm-nasa-geospatial/Prithvi-100M-sen1floods11).\n
-    The user needs to provide a Sentinel 2 image with all 12 bands (in the usual Sentinel 2 order), in reflectance units multiplied by 10,000 (e.g. to save on space); the code will pull out the Blue, Green, Red, Narrow NIR, SWIR, and SWIR 2 bands.
-    ''')
-    with gr.Row():
-        with gr.Column():
-            inp = gr.File()
-            btn = gr.Button("Submit")
-
-    with gr.Row():
-        gr.Markdown(value='### Input RGB')
-        gr.Markdown(value='### Model prediction (Black: Land; White: Water)')
-
-    with gr.Row():
-        out1=gr.Image(image_mode='RGB')
-        out2 = gr.Image(image_mode='L')
-
-    btn.click(fn=func, inputs=inp, outputs=[out1, out2])
-
-    with gr.Row():
-        gr.Examples(examples=["India_900498_S2Hand.tif",
-                    "Spain_7370579_S2Hand.tif",
-                    "USA_430764_S2Hand.tif"],
-                    inputs=inp,
-                    outputs=[out1, out2],
-                    preprocess=preprocess_example,
-                    fn=func,
-                    cache_examples=True,
-                    )
-
-demo.launch()
\ No newline at end of file diff --git a/spaces/inamXcontru/PoeticTTS/AV Voice Changer V 6.0.10 Keygen .rar TOP.md b/spaces/inamXcontru/PoeticTTS/AV Voice Changer V 6.0.10 Keygen .rar TOP.md deleted file mode 100644 index 18a0cf2f8c9866b0c488268b80de43bdd2a18bbf..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/AV Voice Changer V 6.0.10 Keygen .rar TOP.md +++ /dev/null @@ -1,5 +0,0 @@ -
        -

        cd_key (14085)
        keygen (12265)
        serial (10529)
        crack (9119)
        nero 7.9.6.0 serial (1873)
        nero 7.9.6.0 keygen (1609)
        sims2fashionstuffcrack (833)
        nero 7.9.6 serial (557)
        sims 2 cd key (543)
        pro cycling manager 2007 crack (537)
        nero 7.9.6.0 (536)
        nero 7 serial number (501)
        nero (463)
        windows xp pro serial (443)
        delicious 2 deluxe (426)
        medal of honor crack (403)
        crack total commander 7.01 (400)
        any dvd 6.1.6.5 crack (398)
        anydvd 6.1.6.5 crack (392)
        adobe photoshop cs crack (381)
        spyware doctor 5.0.1.205 (346)
        canon adjustment software (320)
        test drive unlimited keygen (307)
        dark crusade cd key (300)
        registry mechanic crack (289)
        microsoft office home and student 2007 crack (284)
        fraps 2.8.2 serial (278)
        nikon (271)
        windows xp keygen (265)
        trackmania united keygen (264)
        spb (261)
        powerdvd cd key (261)
        pinnacle studio 9 serial (253)
        getdataback crack 3.03 (252)
        nero 7 serial (244)
        harry potter (244)
        age of mythology the titans cd key (244)
        winrar crack 3.7 (241)
        gamehouse (240)
        xp home serial (238)
        recover my files serial (237)
        spyware doctor registration code (236)
        spyware doctor (236)
        spyware doctor license number (232)
        fl studio 7 serial number (230)
        sound forge (227)
        office 2003 key (223)
        loki no dvd crack (223)
        dawn of war dark crusade cd key (217)
        avs video editor 3.5 (216)
        video converter (211)
        sims 2 seasons no cd crack (208)
        nero 6.6.1.15a keygen (206)
        crack any dvd 6.1.6.5 (206)
        crack corel 12 (203)
        norton antivirus 2007 product key (202)
        brothers in arms cd key (197)
        nero 6.6.0.16 serial (195)
        winiso crack (194)
        ea games keygen (194)
        winzip 11.1 registration code (193)
        need for speed 2 serial (193)
        serial office 2007 (190)
        boilsoft keygen (190)
        windows xp sp2 keygen (189)
        xilisoft 3gp video converter crack (188)
        power dvd (188)
        crack solid converter (188)
        red alert 2 keygen (186)
        18 wheels (186)
        photoshop cs2 keygen (183)
        mirc registration code (183)
        two worlds serial (182)
        anydvd (181)
        nero 7.7.5.1 serial (177)
        getdataback (177)
        avg anti spyware license code (177)
        registry mechanic serial (175)
        crystal reports (175)
        ultraiso serial (173)
        solid converter serial (173)
        quicktime pro (172)
        av voice changer crack (172)
        internet download manager (171)
        femjoy passwords (170)
        empire earth 2 cd key (170)
        command and conquer generals cd key (170)
        clone dvd (170)
        windvd (168)
        topdesk 1.5.3 (165)
        system mechanic (165)
        zone alarm (164)
        microsoft office (164)
        crack video strip poker supreme 1.10 (164)
        loki no cd crack (163)
        download crack sony acid pro 6.0d (163)
        need for speed most wanted cd key (162)
        xp key changer (160)
        ultramon (158)
        photoshop cs serial (158)
        winzip serial (157)
        illustrator cs3 crack (156)
        babylon (156)
        anydvd 6165 (156)
        ultraiso (155)
        photoshop 7.0 serial (155)
        file scavenger keygen (154)
        call of duty key code (154)
        age of empires the conquerors (154)
        adobe cs2 serial (154)
        doom 3 serial (152)
        need for speed underground cd key (151)
        driver detective 6.2.2.2 (150)
        diablo 2 keygen (150)
        cd key office mac (149)
        pro evolution soccer 2007 (148)
        office xp (148)
        magix music (148)
        registry cleaner (147)
        partition magic 8.0 crack (146)
        kaspersky key (145)
        premiere 2.0 serial (143)
        astakiller (143)
        sims 2 no cd (142)
        avg 7 serial (142)
        flash decompiler (141)
        premiere pro serial (140)
        fff keygen (140)
        command and conquer generals (140)
        adobe creative suite serial (139)
        worldmate keygen (138)
        unlock video strip poker supreme 1.10 (138)
        dreamweaver 8 crack (138)
        acoustica mixcraft (137)
        serial mathtype 5.2 (136)
        power dvd 7.0 keygen (136)
        photoshop cs2 activation key (136)
        hddlife pro 3.0.140 (136)
        chief architect x1 (136)
        xp pro keygen (135)
        panda (135)
        anydvd 6.1.6.5 keygen (135)
        winiso registration code (134)
        serial nero vision express (134)
        serial nero 7 (134)
        windows xp sp2 activation key (133)
        warcraft 3 no cd (133)
        nero keygen (133)
        msn sniffer crack (133)
        boson keygen (133)
        ws_ftp (132)
        imtoo converter (132)
        windows xp home serial (131)
        crack avast (131)
        anydvd 6.1.6.5 serial (131)
        studio (130)
        splinter cell (130)
        need for speed underground 2 (130)
        spyware doctor serial (129)
        spyware doctor keygen (129)
        jewel quest crack (128)
        avi splitter serial (128)
        wpa_kill.exe download (127)
        world of warcraft keygen (126)
        nero 7 keygen (126)
        perfect keylogger (125)
        frozen throne cd key (124)
        far cry no dvd crack (124)
        cfosspeed v4.01.1300 (124)
        cardrecovery 3.40 registration key (124)
        pocketpc (123)
        kaspersky (123)
        crack stellar phoenix (123)
        serial two worlds (122)
        serial test drive (122)
        office 2003 activation crack (122)
        isilo serial crack (121)
        battlefront 2 crack (121)
        pro cycling manager 2007 (120)
        key generator (120)
        feeding frenzy crack (120)
        registration code winzip (119)
        winavi video converter crack (117)
        crack most wanted (116)
        age of empires 2 crack (116)
        xmlspy 2007 crack (115)
        style xp keygen (115)
        photoshop cs2 serial (115)
        newsleecher (115)
        filemaker pro (115)
        actual spy serial (115)
        super dvd creator 9.3 (114)
        nfs most wanted crack (114)
        imtoo dvd ripper 2.0 serial (114)
        rollercoaster tycoon 2 activation code (113)
        call of duty cd key (113)
        stalker cd key (112)
        mpeg encoder (112)
        karaoke (111)
        photorescue keygen (110)
        loki nodvd (110)
        total commander crack (109)
        serial need for speed most wanted (109)
        passware (109)
        nero 7 (109)
        dap premium activation code (109)
        crack lemonade tycoon 2 (109)
        boilsoft crack (109)
        need for speed most wanted crack (108)
        norton antivirus 2005 product key (107)
        adobe audition 1.5 crack (107)
        diablo 2 no cd crack (106)
        dameware crack (106)
        passport photo 1.5.3 (105)
        need for speed (105)
        boilsoft rm to mp3 crack (105)
        adobe premiere pro 1.5 tryout crack (105)
        recorder (104)
        loki (104)
        dfx (104)
        avast keygen (103)
        serial nero (102)
        sacred cd key generator (102)
        most wanted serial (102)
        call of duty (102)
        avg antispyware serial (102)
        adobe illustrator cs3 keygen (102)
        acoustica mixcraft v 3 (102)
        rollercoaster tycoon 3 no cd crack (101)
        half life 2 keygen (101)
        ad (101)
        spyware doctor 5.0.1.201a serial (100)
        registry booster serial number (100)
        pdf converter (100)
        bitdefender serial (100)
        age of empires 3 keygen (100)
        warcraft 3 cd key (99)
        surfoffline 1.4 (99)
        powerdirector (99)
        guild wars cd key generator (99)
        getright (98)
        crack office xp (98)
        wm recorder serial (97)
        scrabble crack (97)
        powerdvd serial (97)
        nero serial (97)
        age of empires 3 cd key (97)
        trojan remover (96)
        generals serial (96)
        avs video converter (96)
        avg internet security 7.5.476 (96)
        avg antispyware 7.5 license code keygen (96)
        adobe illustrator cs3 serial number (96)
        win xp serial (95)
        recover4all (95)
        pinnacle tv center pro serial keygen download (95)
        3ds max 7 crack (95)
        video fixer 3.23 crack (94)
        sims 2 fashion stuff cd key (94)
        nero 6.6.0.18 serial (94)
        windows xp sp2 crack (93)
        ut2004 cd key generator (93)
        serial warcraft 3 (93)
        crack anydvd 6.1.6.5 (93)
        winavi registration code (92)
        sketchup (92)
        ricochet keygen (92)
        nod32 serial (92)
        cuteftp (92)
        crack adobe photoshop cs3 (92)
        badcopy pro (92)
        photoshop cs2 crack (91)
        norton antivirus activation key (91)
        new star soccer 3 registration code (91)
        avg antispyware 7.5 license code (91)
        avg 7.5.1.43 (91)
        able2extract crack (91)
        alien shooter keygen (91)
        steam cd key (90)
        spyware doctor 5.0.1. (90)
        halo crack (90)
        crack fairy godmother tycoon (90)
        cd key men of valor (90)
        camtasia (90)
        call of duty no cd (90)
        any dvd 6165 (90)
        7 wonders (90)
        swf (89)
        particle illusion crack (89)
        office 2007 (89)
        alcohol 120 (89)
        zuma registration key (88)
        pocket dvd studio serial (88)
        movie6.net (88)
        zuma crack (87)
        photoshop cs3 (87)
        empire earth (87)
        windows xp service pack (86)
        serial power dvd (86)
        fraps crack 2.8.2 (86)
        winrar (85)
        vmware serial (85)
        photoshop cs2 serial number (85)
        nero crack (85)
        need for speed underground 2 cd key (85)
        avg (85)
        winavi serial (84)
        isobuster (84)
        actual spy 3.0 crack (84)
        toca race driver 2 no cd crack (83)
        serial sims 2 (83)
        premiere cs3 (83)
        photoshop cs crack (83)
        half life serial (83)
        call of duty 2 keygen (83)
        avg antispyware keygen (83)
        streamdown (82)
        registry fix crack (82)
        mahjong (82)
        serial need for speed underground (81)
        nfs most wanted serial (81)
        flashfxp 2.1 serial (81)
        dvd psp (81)
        dawn of war serial (81)
        blaze (81)
        smartdraw (80)
        partition magic 8 (80)
        vcdcutter (79)
        partition magic 8.0 1242 crack (79)
        need for speed underground 2 crack (79)
        loki crack (79)
        gamehouse serial (79)
        folder lock (79)
        sims 2 nightlife serial (78)
        oziexplorer (78)
        office 2003 (78)
        keygen office 2003 (78)
        condition zero cd key (78)
        avast serial (78)
        recover my files 3.98 5637 (77)
        isilo 4.32 (77)
        adobe (77)
        worms no cd (76)
        windows xp activation crack (76)
        windows 98 serial (76)
        steam keygen (76)
        spyware doctor 5.0.1.200 (76)
        mixmeister (76)
        kaspersky serial (76)
        hollywood fx (76)
        easy cd (76)
        crack getdataback 2.31 (76)
        coreldraw 12 crack (76)
        anydvd 6.1.6.5 (76)
        spyware doctor crack (75)
        panorama (75)
        netlimiter crack (75)
        nero 6.3 (75)
        mathematica (75)
        easyx rm to avi converter 1.0 (75)
        ccproxy 6.4.2 (75)
        adobe photoshop elements (75)
        serial number age of empires 3 (74)
        rational (74)
        photoshop 7 serial (74)
        norton internet security 2006 keygen (74)
        noadware (74)
        magic iso maker 5.4 (74)
        dvdfab (74)
        dvd2one (74)
        dreamweaver 2004 serial (74)
        diner dash serial (74)
        counter strike cd key (74)
        condition zero cd key (74)
        cd key counter strike 1.5 (74)
        anydvd crack (74)
        alias maya 6.0 serial (74)
        advanced registry optimizer 5.0 serial (74)
        wwwmovie6.net (73)
        trackmania sunrise serial (73)
        photoshop cs keygen (73)
        file recover (73)
        diner dash registration key (73)
        diablo 2 cd keys (73)
        call of duty 2 cd key (73)
        winavi crack (72)
        wavelab 5 (72)
        super collapse (72)
        red alert 2 (72)
        editplus (72)
        ashampoo (72)
        tuneup utilities 2004 (71)
        tuneup 2006 serial (71)
        premiere (71)
        keygens total commander 7 (71)
        imtoo mp4 video converter (71)
        download accelerator plus (71)
        actual spy crack (71)
        winning eleven 2007 (70)
        winimage (70)
        winamp crack (70)
        swishmax serial (70)
        smartmovie crack (70)
        office home and student 2007 serial (70)
        kioday mahjong download (70)
        dvd profiler (70)
        windows server 2003 enterprise crack (69)
        visual paradigm (69)
        style xp 3.19 registration key (69)
        noadware registration key (69)
        nero cd key (69)
        nero 6.6.0.13 serial (69)
        need for speed underground (69)
        stalker crack (68)
        safari island deluxe (68)
        office (68)
        flash cs3 (68)
        doom 3 keygen (68)
        cracks (68)
        anydvd 6.1.6.5 key (68)
        windows xp sp2 serial (67)
        windows xp corporate (67)
        fraps (67)
        empire earth cd key (67)
        cd keygen need for speed carbon (67)
        alcohol 120 1.9.2 crack (67)
        xmlspy 2007 serial (66)
        windows xp serial (66)
        sims 2 keygen (66)
        serial nero 6 (66)
        poser 6 (66)
        norton (66)
        electronics workbench (66)
        dmc (66)
        avg crack (66)
        sims 2 serial (65)
        musicmatch (65)
        imtoo mpeg encoder 3.1.34 (65)
        dr divx keygen (65)
        ad aware (65)
        acronis 10 (65)
        trackmania united cd key (64)
        keygen clonecd (64)
        illustrator cs3 serial (64)
        avg antispyware license code (64)
        winzip 9 (63)
        tomtom (63)
        tmpgenc 2.5 (63)
        steam crack (63)
        nero serial number (63)
        multisim 7 serial (63)
        clone (63)
        active webcam 9.2 (63)
        web page maker (62)
        spb mobile (62)
        kaspersky crack (62)
        crack warcraft 3 (62)
        convertxtodvd 2.2.3.258 (62)
        allway sync (62)
        acdsee crack (62)
        msn (61)
        intervideo windvd 8 gold (61)
        elecard avc plugin 2.0.70420 (61)
        crack sims 2 (61)
        asta killer (61)
        total commander (60)
        tmpgenc dvd author 3 (60)
        registry mechanic license code (60)
        microsoft office xp (60)
        cricket 2004 cd key (60)
        crack flash slide show maker 4.21 (60)
        clonedvd (60)
        castle (60)
        warfare incorporated serial (59)
        teleport pro (59)
        product key crack (59)
        pdf2word (59)
        mp3resizer 1.8.3 (59)
        loki no cd (59)
        keygen alcohol 120% (59)
        imtoo mpeg encoder (59)
        avs video converter 5.6 serial number (59)
        adobe illustrator cs2 crack (59)
        active webcam (59)
        winavi keygen (58)
        ulead videostudio 10 registration code (58)
        smartmovie 3.41 keygen (58)
        powerpack lame mp3 encoder (58)
        office activation crack (58)
        mapinfo (58)
        luxor registration key (58)
        halo 2 (58)
        gta san andreas crack (58)
        file scavenger key (58)
        empire at war serial (58)
        clonedvd2 (58)
        vray keygen (57)
        pocket pc (57)
        netlimiter serial (57)
        nero 7.9.6.0 crack (57)
        muvee (57)
        illustrator cs3 keygen (57)
        homeworld 2 (57)
        fifa 2005 (57)
        zone alarm serial (56)
        super jigsaw (56)
        pool buddy serial (56)
        office 2003 serial (56)
        nero 6 (56)
        keygen navigon (56)
        crack trackmania united (56)
        anydvd6165 (56)
        windows xp (55)
        vuescan (55)
        smartmovie (55)
        serial halo (55)
        quicktime 6.5 serial (55)
        parallels workstation 2.2 (55)
        kaspersky license key (55)
        dap8.5 crack (55)
        crack uefa euro 2004 (55)
        winzip registration code (54)
        style xp crack (54)
        need for speed underground 2 no cd crack (54)
        guild wars (54)
        empire earth serial (54)
        crystal pro (54)
        autocad (54)
        acdsee 7 keygen (54)
        vmware crack (53)
        trendy flash site builder crack (53)
        serial for ableton 6 (53)
        rise of nations cd key (53)
        photoshop (53)
        nero 6.6.1.15a serial number (53)
        need for speed most wanted no cd crack (53)
        mirc crack (53)
        brothers in arms crack (53)
        azada (53)
        acrobat (53)
        whatsup (52)
        ultraedit (52)
        stylexp registration key (52)
        powerdvd (52)
        most wanted crack (52)
        magix (52)
        call of duty united offensive keygen (52)
        call of duty 2 crack (52)
        battlefield 1942 1.6 crack (52)
        anticrash (52)
        all seeing eye serial (52)
        alcohol 120 serial (52)
        windows 2003 enterprise serial number (51)
        tweak (51)
        spylocked 4.3 free key (51)
        smart movie (51)
        sims 2 fashion stuff crack (51)
        ripcast (51)
        red ace squadron (51)
        pinnacle tvcenter pro (51)
        nero 6 serial (51)
        dvd cloner (51)
        drivermagic (51)
        crack nero 7 (51)
        crack anydvd (51)
        winrar patch (50)
        vista (50)
        turbocad (50)
        tmpgenc 2.524.63.181 (50)
        spyware doctor 5 (50)
        pdf995 (50)
        hero (50)
        doom 3 cd key (50)
        crack nero (50)
        crack autocad 2007 (50)
        call of duty keygen (50)
        call of duty 2 key code (50)
        billiards (50)
        beyond divinity no cd crack (50)
        alcohol (50)
        windows xp sp2 key (49)
        video fixer serial (49)
        silent hunter (49)
        serial xp (49)
        serial counter strike (49)
        ptgmap (49)
        nba 2005 keygen (49)
        lc4 keygen (49)
        guild wars access key (49)
        driver detective (49)
        cucusoft 4.29 crack (49)
        crack norton (49)
        winiso serial (48)
        spyware doctor 5.0.1.201a crack (48)
        river past (48)
        resident evil 4 (48)
        proshow gold (48)
        nero 7 crack (48)
        microsoft office home and student 2007 serial (48)
        lost planet (48)
        flash slideshow maker 4.22 (48)
        diablo 2 (48)
        cs2 crack (48)
        crack nero 6 (48)
        avast (48)
        audiograbber (48)
        a4desk crack (48)
        : (48)
        winzip 11.1 key (47)
        warcraft 3 keygen (47)
        the sims 2 keygen (47)
        sims pets stories (47)
        serial office 2000 (47)
        river past audio converter (47)
        new star soccer 2 registration code (47)
        mtx mototrax (47)
        kitchendraw 4.5 (47)
        cucusoft (47)
        crack eagle 4.11 (47)
        counter strike 1.6 cd key (47)
        avs (47)
        avg cd key (47)
        acdsee 8 keygen (47)
        warcraft cd key (46)
        visual web developer 2005 (46)
        nero 7.8.5.0 serial (46)
        nero 6 keygen (46)
        mov to avi mpeg wmv converter (46)
        guild wars keygen (46)
        empire at war crack (46)
        dataviz (46)
        data doctor recovery (46)
        cracked fraps (46)
        counter strike serial key (46)
        adobe photoshop cs2 serial (46)
        zuma deluxe crack (45)
        windows xp professional serial (45)
        windows 2003 crack (45)
        trackmania sunrise no cd (45)
        sims 2 (45)
        serial windows 2000 (45)
        restorer 2000 (45)
        pocket dvd wizard crack (45)
        nero 7 essentials crack (45)
        most wanted cd key (45)
        kaspersky anti (45)
        harry potter (45)
        fl (45)
        diablo 2 no cd (45)
        diablo 2 lord of destruction cd key (45)
        call of duty serial (45)
        call of duty 2 key (45)
        blaze media pro (45)
        any dvd 6.1.6.5 (45)
        alcohol 120% (45)
        x video converter (44)
        winrar key (44)
        winmpg (44)
        windows vista (44)
        sims 2 nightlife no cd (44)
        regcure (44)
        polderbits crack (44)
        nod32 keygen (44)
        nod32 (44)
        need for speed carbon (44)
        men of valor serial (44)
        keygen reaconverter pro (44)
        keygen counter strike source (44)
        halo serial (44)
        cue club registration key (44)
        crack need for speed most wanted (44)
        crack hdd regenerator 1.42 (44)
        baldur (44)
        advanced pdf password recovery 1.48 (44)
        3d live pool keygen (44)
        windowblinds 5.51 (43)
        vmware workstation 5 serial (43)
        virus locker (43)
        simcity 3000 (43)
        replay (43)
        pfconfig (43)
        pc security (43)
        norton 2005 (43)
        netlimiter 1.30 serial (43)
        nero 7 cd key (43)
        guild wars serial (43)
        getdataback key (43)
        divx 6 (43)
        adobe photoshop (43)
        adobe cs2 keygen (43)
        zlauncher serial (42)
        windows activation crack (42)
        uniblue registry booster 2.0.1013.3068 (42)
        two worlds (42)
        serial solidworks 2004 (42)
        serial call of duty 2 (42)
        rocket mania deluxe (42)
        photostudio (42)
        optix pro v1.32 download (42)
        nero 7.9.6.0 cd key (42)
        mathtype (42)
        incredimail (42)
        free internet tv serial (42)
        fraps crack (42)
        empires (42)
        diner dash (42)
        crack loki (42)
        cfosspeed (42)
        avid 5.7 (42)
        allmusicconverter (42)
        airxonix (42)
        acdsee key (42)
        xp pro key (41)
        translator (41)
        star wars empire at war cd key (41)
        sentinel superpro emulator (41)
        reaconverter 5.0 pro edition (41)
        pro evolution soccer (41)
        pinnacle (41)
        microsoft sql server 2005 (41)
        keygen nero (41)
        key code call of duty 2 (41)
        imtoo wma mp3 converter (41)
        final cut studio 2 (41)
        crack virtua tennis (41)
        bookworm adventures deluxe (41)
        adobe premiere cs3 (41)
        adobe photoshop cs serial (41)
        actual spy 3.0 serial (41)
        xilisoft ipod video converter (40)
        word (40)
        voyager (40)
        serial photoshop cs (40)
        mdaemon (40)
        longhorn (40)
        imtoo dvd ripper (40)
        hddlife 3.0.140 (40)
        halo (40)
        guitarfx 3.04 (40)
        file scavenger (40)
        fifa 2006 no cd (40)
        f recovery sd 2.5 (40)
        crack nfs most wanted (40)
        crack need for speed underground 2 (40)
        cfosspeed 4 (40)
        cd_key/frozen throne cd key (40)
        bejeweled 1.86 crack (40)
        win dvd creator 3 key gen (39)
        swat 4 cd key (39)
        super cleaner (39)
        splinter cell chaos theory serial (39)
        splinter cell chaos theory crack (39)
        registry medic serial (39)
        reget deluxe (39)
        new star soccer crack (39)
        nero 6 serial number (39)
        need for speed most wanted serial (39)
        keygens (39)
        guild wars cd key (39)
        doom 3 key (39)
        divx 5.1.1 serial (39)
        desktop video recorder (39)
        dbpoweramp crack (39)
        cs3 (39)
        cd key nero 6 (39)
        cd key call of duty 2 (39)
        unlock code solid converter pdf (38)
        uniblue power suite (38)
        textpad 4.7.3 crack (38)
        kaspersky antivirus 6.0.1.411 key (38)
        fine reader (38)
        delicious 2 (38)
        cubis gold 2 (38)
        crack nero 7.9.6.0 (38)
        crack easy rm to mp3 converter (38)
        cossacks 2 crack (38)
        cnc (38)
        clonecd (38)
        call of duty key (38)
        boilsoft rm converter (38)
        boilsoft (38)
        anti (38)
        acdsee 7 crack (38)
        super dvd creator (37)
        solsuite 2007 v.7.6 (37)
        sims 2 deluxe (37)
        recover my files (37)
        power dvd serial (37)
        photovista panorama 3.0 crack (37)
        nfs underground 2 cd key (37)
        nfl (37)
        need for speed most wanted keygen (37)
        microsoft office 2003 cd key (37)
        magic mp3 tagger (37)
        key code for sims livin large (37)
        gutterball registration key (37)
        far cry cd key (37)
        crack pdf2word (37)
        bejeweled 2 crack (37)
        avg license code (37)
        avast crack (37)
        alcohol 120 1.9.5 (37)
        adobe photoshop serial (37)
        act of war cd key (37)
        winavi video converter serial (36)
        warcraft 3 serial (36)
        spyware doctor key (36)
        sniffer (36)
        sam broadcaster (36)
        radmin (36)
        plato dvd ripper (36)
        office xp key (36)
        o&o defrag 8.6 (36)
        norton antivirus product key (36)
        live for speed (36)
        handy recovery serial (36)
        dvd (36)
        crack autocad 2005 (36)
        cooledit (36)
        bejeweled keygen (36)
        avs video converter 5.6 crack (36)
        any dvd (36)
        2007 (36)
        windows xp serial number (35)
        windows xp home edition (35)
        warcraft 3 cd keys (35)
        tmpgenc (35)
        spyware doctor license code (35)
        smart movie keygen (35)
        serial office 2003 (35)
        painter 9 (35)
        nero 7 demo serial number (35)
        microsoft outlook 2007 (35)
        illustrator cs3 (35)
        file scavenger 3.1 (35)
        dreamweaver cs3 (35)
        diner dash crack (35)
        diablo crack (35)
        dawn of war cd key (35)
        clonedvd4 (35)
        cd key norton 2005 (35)
        avi to dvd svcd vcd converter (35)
        audiograbber 1.83 crack (35)
        1 click dvd copy 5.0.3.5 (35)
        world of warcraft cd key (34)
        winzip keygen (34)
        winrar crack (34)
        trackmania sunrise crack (34)
        serial musicmatch jukebox 9 (34)
        norton 360 (34)
        mcafee (34)
        email spider (34)
        diablo 2 serial (34)
        diablo 2 cd key (34)
        ddd (34)
        crystal player profesional 1.98 (34)
        crack/office 2003 activation crack (34)
        crack winrar (34)
        call of duty 2 serial (34)
        bitdefender 8 (34)
        babylon 5 keygen (34)
        avs video converter 5.6 (34)
        anydvd 6 crack (34)
        amplitube (34)
        adobe photoshop cs3 (34)
        adobe illustrator cs3 serial (34)
        xingtone serial (33)
        windvd 7 keygen (33)
        winace 2.5 keygen (33)
        tumblebugs crack (33)
        total commander 6.53 crack (33)
        talisman (33)
        spyware doctor 5 serial (33)
        spb mobile shell (33)
        skygrabber (33)
        registry mechanic (33)
        office xp serial (33)
        netsim (33)
        music label (33)
        mp3 tagger (33)
        microsoft office home and student 2007 (33)
        feeding frenzy serial (33)
        avg anti (33)
        adobe photoshop cs2 serial number (33)
        adaware se (33)
        acdsee (33)
        007 spy software (33)
        xp sp2 serial (32)
        windows xp sp2 (32)
        ulead video studio11 plus (32)
        serial number harry potter (32)
        radmin 3.0 serial (32)
        powerdvd 6 serial (32)
        open video joiner (32)
        need (32)
        magiciso maker 5.4 (32)
        keygen/tuneup utilities 2007 keygen (32)
        keygen schlacht um mittelerde (32)
        grand master chess (32)
        diablo 2 lod cd key (32)
        dap (32)
        crack dap (32)
        cd_key/battle for middle earth cd key (32)
        avg 7.5 build 476 (32)
        advanced system optimizer (32)
        adobe photoshop cs2 (32)
        adobe photoshop 7.0 serial number (32)
        adobe cs3 web premium (32)
        activation key norton 2004 (32)
        xp (31)
        wow stat changer (31)
        windvd 7 serial (31)
        windows xp sp2 activation crack (31)
        vue (31)
        spss 12 (31)
        splinter cell chaos theory cd key (31)
        sapphire (31)
        radar (31)
        proshow gold serial (31)
        powerpoint (31)
        photoshop 7.0 serial number (31)
        nod 32 2.7 (31)
        neverwinter nights crack (31)
        neat image (31)
        mirc serial (31)
        incredimail xe build 5653017 patch (31)
        hamachi (31)
        half life keygen (31)
        fraps 2.8.2 (31)
        escan (31)
        crack brothers in arms (31)
        colin mcrae dirt crack (31)
        coffeecup html editor 2007 (31)
        clickfinder 5.2 (31)
        call of duty cd keys (31)
        avi (31)
        avast 4.7 (31)
        anno 1701 (31)
        alcohol keygen v1.9.6 (31)
        xp serial (30)
        xoftspy keygen (30)
        winundelete (30)
        videoredo serial (30)
        video to mp4 converter (30)
        trados (30)
        spyware doctor 5.0.1.201a (30)
        spyware doctor 5.0 (30)
        sound forge mp3 (30)
        sound forge 9 (30)
        serial windows xp (30)
        security task manager 1.7e (30)
        ricochet lost worlds (30)
        ravenriley.com (30)
        norton antivirus 2006 keygen (30)
        nod32 crack (30)
        mprojector (30)
        joint operations typhoon rising key (30)
        dvdfab platinum (30)
        divx (30)
        crack zuma (30)
        crack winavi (30)
        crack to nero 5.5.10.56 (30)
        colin mcrae dirt serial (30)
        chicken invaders 2 (30)
        bounce (30)
        avg serial (30)
        avg 7.5 (30)
        autocad 2008 (30)
        anydvd 6.1.5.5 crack (30)
        worldmate (29)
        winzip (29)
        winrar registration code (29)
        windows (29)
        vegas 4.0 cd key (29)
        the sims 2 (29)
        stalker (29)
        spyware (29)
        speed optimizer 2 crack (29)
        serial nero 7.9.6.0 (29)
        serial doom 3 (29)
        regcleaner (29)
        need for speed underground 2 serial (29)
        need for speed most wanted no cd (29)
        need for speed most wanted (29)
        mid converter 3.2. serial (29)
        microsoft office 2003 serial (29)
        kerio winroute (29)
        fl studio 7 crack (29)
        eset smart security (29)
        docrepair crack (29)
        crack winconnection 3.5 (29)
        cd key (29)
        camfrog 3.91 (29)
        call of duty 2 (29)
        bejeweled (29)
        adobe illustrator cs3 (29)
        act of war serial (29)
        kaspersky internet security 7.0.0.123 (29)
        xp pro serial (28)
        world of warcraft (28)
        winundelete registration key (28)
        winlyrics (28)
        winedt 5.4 (28)
        windows xp professional (28)
        winconnection serial (28)
        warcraft 3 reign of chaos cd key (28)
        ultraedit 13.10 (28)
        super pool crack (28)
        star wars battlefront 2 cd key (28)
        san andreas crack (28)
        radmin serial (28)
        passwordtools (28)
        office 2003 activation (28)
        nero 7.9.6.0 serial number (28)
        nero 7.9 keygen (28)
        nero 7 key (28)
        nero 6.6 serial (28)
        musicmatch jukebox (28)
        keygen nero 7.9.6.0 (28)
        keygen call of duty 2 (28)
        hollywood fx 6 (28)
        guild wars crack (28)
        go1984 (28)
        cracks and serials (28)
        crack minitab 14 (28)
        badcopy pro 3.81 serial (28)
        avs converter (28)
        age of empires 3 product key (28)
        abc (28)
        winhex (27)
        windows xp home (27)
        virtual dj serial (27)
        uniblue registry booster crack (27)
        supertanga com (27)
        serial men of valor (27)
        mp3tag 5.6 (27)
        men of valor cd key (27)
        kaspersky internet security (27)
        iar (27)
        hardwood solitaire (27)
        getdataback ntfs 2.31 serial (27)
        cd_key/empire earth cd key (27)
        call of duty 2 serial number (27)
        bejeweled 2 (27)
        avg antispyware 7.5 keygen (27)
        arcade lines (27)
        agenda msd personal 8.10 (27)
        age of empires 3 serial (27)
        zuma deluxe registration key (26)
        winzip crack (26)
        windows xp pro key (26)
        warcraft 3 (26)
        system mechanic 7 (26)
        swat 4 serial (26)
        speedoptimizer 2.0 code (26)
        serial avg (26)
        route 66 (26)
        rocket mania (26)
        pgp (26)
        nero 7.9.6.0 with crack download (26)
        nero 7.9 (26)
        nero 6.6.1.4 serial number (26)
        mobiledit (26)
        internet download accelerator 5.2.1.1057 crack (26)
        insaniquarium deluxe crack (26)
        easyrecovery (26)
        dreamweaver (26)
        diablo 2 crack (26)
        crack/brothers in arms crack (26)
        crack photoshop cs2 (26)
        crack gta san andreas (26)
        colin mcrae dirt cd key (26)
        bsplayer (26)
        anydvd 6.1.6.5 (26)
        all my movies (26)
        adobe photoshop cs2 keygen (26)
        adobe illustrator cs3 (26)
        ableton 5.2 keygen (26)
        windows xp key (25)
        veritas (25)
        top spin cd key (25)
        serial diablo 2 (25)
        ragtime (25)
        pdf editor (25)
        paint shop pro (25)
        nod32 key (25)
        netlimiter 1.3 crack (25)
        luxor keygen (25)
        glidos crack (25)
        free spyware doctor registration code (25)
        fraps serial (25)
        edgecam 11.75 (25)
        eagle 4.11r2 crack (25)
        divx author (25)
        dbpoweramp music converter (dmc) (25)
        crack serial (25)
        crack autocad 14 (25)
        coverxp (25)
        counter strike 1.6 keygen (25)
        converter (25)
        codecharge (25)
        clonk endeavour crack (25)
        capella (25)
        bounce out (25)
        bejeweled 2 deluxe crack (25)
        battlefield 2142 (25)
        avira premium security suite 7.00.04.15 (25)
        avast 4.7 home edition (25)
        atomixmp3 v2.3 (25)
        atomixmp3 (25)
        actual spy (25)
        winrar serial (24)
        windows xp key generator (24)
        windows xp home keygen (24)
        windows xp activation keygen (24)
        winavi (24)
        vmware 5 serial (24)
        video fixer 3.23 keygen (24)
        style xp (24)
        softcam (24)
        smac 2.0.5 (24)
        sexual (24)
        serial/photoshop 7 serial number (24)
        robohelp x5 crack (24)
        norton antivirus 2006 serial (24)
        nfs underground crack (24)
        nfs (24)
        nanny mania (24)
        moviejack 3.5 serial (24)
        microsoft office professional 2007 (24)
        halo keygen (24)
        girl (24)
        genarts (24)
        ftvgirl (24)
        frontpage 2003 serial (24)
        fraps 2.8.2 crack (24)
        fifa 2006 crack (24)
        eviews 4.1 crack (24)
        erwin crack (24)
        dreamweaver crack (24)
        divx pro 6.6 (24)
        dead mans hand cd key (24)
        crack magic inlay (24)
        comand conquer general (24)
        chrome keygen (24)
        cd (24)
        aqua data studio (24)
        alcohol serial (24)
        age of empire 3 cd key (24)
        acdsee serial (24)
        3dmark06 (24)
        xilisoft (23)
        worms armageddon serial (23)
        wingate 6.0.3 crack (23)
        windows server 2003 crack (23)
        windows activation (23)
        warcraft 3 key gen (23)
        swiff chart 3.2 crack (23)
        smartmovie keygen (23)
        small business server 2003 (23)
        smac 2.0 (23)
        serial winrar (23)
        retrospect (23)
        restorer2000 crack (23)
        rar password recovery (23)
        pro beach soccer (23)
        polar golfer (23)
        photoshop 7 serial number (23)
        office 2007 keygen (23)
        netlimiter 1.30 keygen (23)
        nero serial key (23)
        nero key (23)
        nero 6 key (23)
        mp3 splitter & joiner (23)
        mass downloader 3.3 (23)
        magic ball (23)
        jaws (23)
        illustrator cs3 serial number (23)
        halo cd key (23)
        halo 2 cd key (23)
        generals cd key (23)
        foxit pdf editor 1.5 key (23)
        flash decompiler 2.99 (23)
        excelfix (23)
        dvdfab platinum 3.1.4.0 (23)
        cs2 serial (23)
        crack spyhunter (23)
        clone cd crack (23)
        cd_key/trackmania sunrise cd key (23)
        cd key most wanted (23)
        avg keygen (23)
        acdsee 6.0crack (23)
        3d (23)
        windows xp home edition keygen (22)
        windowblinds 5.51 (22)
        virusprotectpro 3.3 (22)
        ulead (22)
        tweaknow powerpack (22)
        topdesk (22)
        titan quest cd keyen (22)
        style xp serial (22)
        star wars battlefront serial key (22)
        spyware doctor license (22)
        serial/kaspersky serial (22)
        serial windows (22)
        serial number nero 7 (22)
        replay media catcher (22)
        replay av (22)
        repair registry pro (22)
        pvplayer keygen (22)
        panda titanium 2006 (22)
        office 2000 serial (22)
        nfs most wanted cd key (22)
        nero 7.9.6 keygen (22)
        need for speed underground crack (22)
        need for speed underground 2 keygen (22)
        microsoft office 2007 (22)
        medal of honor cd key (22)
        kaspersky (22)
        jewel quest (22)
        indesign (22)
        half life 2 cd key (22)
        garmin (22)
        fifa 2005 cd key (22)
        cricket 2004 (22)
        crack twoprog 3.1.4 (22)
        commview 5.5 (22)
        chocolatier (22)
        cd key top spin (22)
        cd key need for speed most wanted (22)
        camfrog (22)
        battlefront 2 serial (22)
        battlefield 2 (22)
        avs video converter 5.6 keygen (22)
        anydvd serial (22)
        adobe photoshop serial number (22)
        adobe photoshop cs2 crack (22)
        accessfix (22)
        xp keygen (21)
        wpa_kill.exe (21)
        winiso 5.3 crack (21)
        windows 2003 serial (21)
        warcraft iii cd key (21)
        tuneup utilities 2007 (21)
        spyware doctor 5.0.1.201 serial (21)
        sms (21)
        serial illustrator cs3 (21)
        sentinel emulator (21)
        revit (21)
        registry mechanic keygen (21)
        photo dvd maker (21)
        paint shop pro 9 crack (21)
        office xp sp3 activator (21)
        office 2003 serial number (21)
        nero 7.9 serial (21)
        nero 7.8.5.0 ultra edition enhanced (21)
        nero 7.8.5.0 (21)
        nero 6316 keygen (21)
        need for speed crack (21)
        movavi (21)
        mirc keygen (21)
        mirc 6.21 (21)
        keygen nero 6 (21)
        isilo (21)
        flax (21)
        dvr studio (21)
        diablo 2 cd key generator (21)
        diablo (21)
        cucusoft ultimate dvd video converter suite (21)
        crack luxor (21)
        crack fraps (21)
        crack chaos league (21)
        crack call of duty 2 (21)
        crack avs video editor 3.5 (21)
        cd_key/need for speed underground cd key (21)
        burger rush (21)
        badcopy pro 3.75 serial (21)
        avast key (21)
        acoustica 3.08 (21)
        3d sexvilla crack (21)
        zonealarm (20)
        xoftspy (20)
        wow (20)
        windows xp home edition cd key (20)
        vmware (20)
        visual zip password recovery processor (20)
        ventafax (20)
        ut 2004 key gen (20)
        undisker 1.2 keygen (20)
        the sims deluxe serial (20)
        the battle for middle (20)
        surf (20)
        stylexp crack (20)
        spyware doctor 5.0 serial (20)
        spydoctor v2.0 activation key (20)
        sims (20)
        serial xp sp2 (20)
        serial number office 2003 (20)
        serial most wanted (20)
        serial bitdefender (20)
        roller coaster tycoon 3 serial (20)
        registrybooster (20)
        proteus (20)
        porn (20)
        platypus (20)
        oziexplorerce crack (20)
        oxygen (20)
        norton 2006 (20)
        nero 6.6.0.3 (20)
        need for speed underground2 (20)
        navigon (20)
        myscript notes (20)
        mpeg2 (20)
        moyea (20)
        movie6 (20)
        inksaver (20)
        half life (20)
        empire earth 2 serial (20)
        director (20)
        crack need for speed underground (20)
        crack any dvd (20)
        coreldraw design collection (20)
        coreldraw 12 keygen (20)
        clickfinder (20)
        cd_key/condition zero cd key (20)
        avs video converter serial (20)
        avs video converter 5.6 serial (20)
        atmosphere (20)
        aqua data studio 6.0.15 (20)
        anydvd (20)
        age of empires iii product key (20)
        adobe cs3 (20)
        xilisoft 3.1.34.0622b (19)
        windows xp sp2 activation (19)
        windows 98 serial (19)
        warcraft 3 crack (19)
        virtual dj key (19)
        total commander 7 (19)
        tmpgenc mpeg editor keygen (19)
        style xp registration key (19)
        spyware doctor 5.0.1.201a key (19)
        slovoed (19)
        serials cs condition zero (19)
        screentime (19)
        rm converter (19)
        ptravelalarm (19)
        pfhoe (19)
        pdftypewriter 5.6 (19)
        ozi explorer 3.95.4 (19)
        office 2003 keygen (19)
        norton 2007 (19)
        nero 7 premium serial (19)
        nero 6 crack (19)
        nero (19)
        mainconcept (19)
        magic folders xp (19)
        keygen illustrator cs3 (19)
        inspector parker key (19)
        insaniquarium crack (19)
        illustrator (19)
        hiew (19)
        half (19)
        gangland cd key (19)
        fl studio 7 (19)
        fifa 2006 serial (19)
        fifa 06 crack (19)
        elifoot 2006 (19)
        divxtodvd (19)
        diskeeper 2007 (19)
        data doctor (19)
        dap 8.5 (19)
        cyberlink powerdvd 7.0.2211 (19)
        crack style xp (19)
        crack fifa 2006 (19)
        counter strike keygen (19)
        convertxtodvd (19)
        cd_key/halo cd key (19)
        cd autorun creator 4.6 (19)
        call of duty 2 no cd crack (19)
        bootit (19)
        bitdefender (19)
        avs video converter 5.5 (19)
        aston (19)
        asf (19)
        age of empires 3 (19)
        advanced registry optimizer 5.0 crack (19)
        advanced registry optimizer (19)
        wpa_kill.exe sp2 (18)
        wpa_kill sp2 (18)
        world of warcraft serial (18)
        winzip 110 (18)
        windows xp sp2 cd key (18)
        windows xp pro (18)
        windows vista ultimate (18)
        warcraft crack (18)
        virtuagirl2 crack (18)
        upromote (18)
        systemworks 2004 (18)
        sudoku (18)
        star wars battlefront cd key (18)
        speedconnect (18)
        spb mobile shell 1.5.0 (18)
        serial crack (18)
        rise of nations (18)
        power dvd 7 (18)
        opencube (18)
        nfs underground (18)
        nero 7.9.6.0 (18)
        nero 7 premium crack (18)
        medal of honor allied assault cd key (18)
        luxor crack (18)
        keygen guild wars (18)
        key nod32 (18)
        handy recovery 2.0 serial (18)
        flash (18)
        fireburner (18)
        download accelerator 7.4.0.1crackserials (18)
        diskeeper (18)
        deejaysystem crack (18)
        cybercafepro (18)
        crimsonland keygen (18)
        crack need for speed (18)
        crack do need for speed most wanted (18)
        corel (18)
        condition zero (18)
        cod serial (18)
        cd_key/need for speed underground 2 cd key (18)
        cd key norton 2004 (18)
        cd key call of duty (18)
        cakewalk sonar (18)
        bitdefender key (18)
        battlefront 2 (18)
        avs video editor 3.5 (18)
        avira (18)
        avg anti spyware (18)
        anydvd key (18)
        age of empires iii cd key (18)
        age of empires 3 key (18)
        winzip code (17)
        winbackup (17)
        warcraft3 (17)
        warcraft 3 frozen throne cd key (17)
        warcraft (17)
        visualdsp (17)
        virtualgirl2 (17)
        tuneup utilities 2006 serial (17)
        tumblebugs (17)
        total commander 7.01 (17)
        tmpgenc 3.0.4.24 (17)
        swishmax crack (17)
        swat 4 (17)
        spyware doctor (17)
        smartmovie 3.41 (17)
        serial power dvd 6 (17)
        photoshop serial number (17)
        pc rescue (17)
        nod32 2.7 crack (17)
        nero 6.6.0.8 crack (17)
        nero 6.6.0.6 (17)
        nero 6.6 (17)
        need for speed underground serial number (17)
        morpheus photo morpher (17)
        mirc (17)
        medal of honor (17)
        luxor 2 (17)
        keygen/adobe cs2 keygen (17)
        keygen windows xp (17)
        icoo loader crack (17)
        hide my ip (17)
        gta crack (17)
        freez flv to mp3 converter (17)
        fraps keygen (17)
        fifa 2005 serial (17)
        empires die neuzeit no cd (17)
        cracks serials (17)
        crack/xp sp2 crack (17)
        crack spyware doctor (17)
        crack nero 7.9.6 (17)
        crack fraps 2.8.2 (17)
        cod2 serial (17)
        cd_key/windows xp sp2 cd key (17)
        bugdom (17)
        avg antispyware 7.5 (17)
        avg antispyware (17)
        anydvd 6.1.6.0 crack (17)
        anno 1503 (17)
        agelong tree (17)
        advanced (17)
        zuma deluxe serial (16)
        zonealarm serial (16)
        xp key generator (16)
        xp activation crack (16)
        wpa_kill (16)
        winrar 3.51 crack (16)
        windows xp professional cd key (16)
        wildtangent bejeweled 2 deluxe crack (16)
        vso (16)
        vdj3 (16)
        validation (16)
        underground (16)
        trackmania united (16)
        tony hawks underground 2 cd key (16)
        test drive (16)
        style xp key (16)
        spyware doctor 5.0.1.200 serial (16)
        splinter cell chaos theory keygen (16)
        snagit 7.1.0 crack (16)
        smartftp (16)
        serials for fate (16)
        serial style xp (16)
        serial nero 7.9.6 (16)
        serial nero 6.6.1.15 (16)
        serial anydvd 6.1.6.5 (16)
        serial power vcr 3.0 standard (16)
        scanspyware (16)
        rome total war no cd crack (16)
        repligo crack (16)
        phpmaker (16)
        photozoom (16)
        photoshop serial (16)
        photoshop cs3 serial (16)
        pdanet (16)
        paint shop pro xi (16)
        opera 8 serial (16)
        opera (16)
        office 2003 serial key (16)
        norton internet security (16)
        norton antivirus 2007 (16)
        norton antivirus (16)
        norton 2006 keygen (16)
        nod32 2.7 (16)
        nod 32 2.7 (16)
        nod 32 (16)
        nero key generator (16)
        nero 7.9.6.0 key (16)
        nero 7 serial key (16)
        nero 6.6.1.15a (16)
        nero 6.6.1.15 serial number (16)
        nero 6.6.0.14 serial (16)
        nero 6 cd key (16)
        mskey4in1 (16)
        mobile master (16)
        minitab 14 serial (16)
        marvel (16)
        licen code nero7 (16)
        keygen/halo keygen (16)
        finalrecovery (16)
        fifa 2006 cd key (16)
        fifa (16)
        facefilter (16)
        evaluation (16)
        easy video converter 7.2.1 (16)
        diskeeper crack (16)
        dirt (16)
        dap premiu 8.5 (16)
        cycling (16)
        cs 1.6 keygen (16)
        crack/clonedvd2 crack (16)
        crack avg (16)
        coverxp 1.65 (16)
        counter strike key (16)
        counter strike 1.6 (16)
        cd key doom 3 (16)
        cad (16)
        azureus (16)
        avg key (16)
        autocad 2005 crack (16)
        arcsoft totalmedia 3 (16)
        adobe dreamweaver cs3 (16)
        act of war keygen (16)
        acdsee 9 (16)
        3d sex villa crack (16)
        2d 3d screensaver maker 3.6 (16)
        2003 activation (16)
        zuma serial (15)
        xp sp2 key (15)
        xp sp2 (15)
        wow keygen (15)
        wm recorder (15)
        windows live onecare (15)
        windows 2003 server crack (15)
        wavelab v6 (15)
        vmware virtualcenter (15)
        transformers (15)
        trackmania sunrise no cd crack (15)
        theatre of war (15)
        test drive unlimited (15)
        teamviewer 2.42 (15)
        switchball (15)
        swishpix crack (15)
        swat 4 serial number (15)
        stylexp (15)
        serial/badcopy pro 3.81 serial (15)
        serial number avs video player 5.6 (15)
        serial key (15)
        sammsoft (15)
        roller (15)
        rise of nations serial (15)
        rapidshare premium account (15)
        radmin crack (15)
        power dvd 6 key (15)
        photoshop cs3 keygen (15)
        photoshop cs2 cd key (15)
        open video converter (15)
        office 2003 crack (15)
        norton 2006 crack (15)
        news rover 12 (15)
        nero 7.9.6 (15)
        nero 7.9. serial number (15)
        nero 7 ultra edition serial (15)
        minitab 14 activation code (15)
        microangelo (15)
        magic iso (15)
        lost (15)
        loki keygen (15)
        keygen/winavi keygen (15)
        keygen/nero 6 keygen (15)
        keygen style xp (15)
        keygen nero 7 (15)
        kaspersky license (15)
        karaoke (15)
        hiew v6.85 full (15)
        easyboot 5.1 serial (15)
        easy mp3 cutter (15)
        dawn of war keygen (15)
        crack/feeding frenzy 2 crack (15)
        crack/bejeweled 2 crack (15)
        crack/3d sexvilla crack (15)
        crack pro cycling manager 2007 (15)
        crack fifa 06 (15)
        counterspy (15)
        counter strike (15)
        cfos (15)
        cd_key/rise of nations cd key (15)
        cd to mp3 ripper (15)
        catwoman serial (15)

        -

        AV Voice Changer V 6.0.10 keygen .rar


        DOWNLOAD ::: https://gohhs.com/2uz5Kt



        aaccfb2cb3
        -
        -
        \ No newline at end of file diff --git a/spaces/inamXcontru/PoeticTTS/Bartender Barcode Software Free Download Crack For 574 _HOT_.md b/spaces/inamXcontru/PoeticTTS/Bartender Barcode Software Free Download Crack For 574 _HOT_.md deleted file mode 100644 index 8909dc2595af68598acfbab6317d805f5b4ba722..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Bartender Barcode Software Free Download Crack For 574 _HOT_.md +++ /dev/null @@ -1,8 +0,0 @@ -

        bartender barcode software free download crack for 574


        DOWNLOAD ---> https://gohhs.com/2uz4B6



        - -This video explains how to set up a Zebra printer to work with the RollMaster software. The Zebra lp2824ex printer (Version 1) is available upon request with optional software that allows the user to monitor ink usage, print barcodes, and upload print order data to a cash register. -The software ensures that the printer operates in accordance with industry standards. -RollMaster is software that allows users to upload print order data to a cash register, print barcodes, and select a language. 8a78ff9644
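        The barcode-printing setup described above is configured through the printer's own software, but the underlying operation can be sketched in code. The following Python snippet is only an illustration of sending a single Code 128 label, written in ZPL, to a network-attached Zebra printer on the standard raw port 9100; the printer address, the label contents, and the assumption that the target printer accepts ZPL are all hypothetical and are not part of RollMaster.

        ```python
        import socket

        # Hypothetical values for illustration only.
        PRINTER_IP = "192.168.1.50"   # address of the networked Zebra printer (assumption)
        PRINTER_PORT = 9100           # standard raw-printing port on networked Zebra printers

        # Minimal ZPL label: one Code 128 barcode with a human-readable line.
        zpl = b"^XA^FO50,50^BY2^BCN,100,Y,N,N^FD0123456789^FS^XZ"

        # Open a TCP connection and push the raw label data to the printer.
        with socket.create_connection((PRINTER_IP, PRINTER_PORT), timeout=5) as conn:
            conn.sendall(zpl)
        ```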
        -
        -
        -

        diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (prboa Rule 7 And 8 Pdf Free BETTER).md b/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (prboa Rule 7 And 8 Pdf Free BETTER).md deleted file mode 100644 index 555d20b10e19d7445a64a6b60802db07d1e8629b..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (prboa Rule 7 And 8 Pdf Free BETTER).md +++ /dev/null @@ -1,6 +0,0 @@ -

        HD Online Player (prboa rule 7 and 8 pdf free)


        Download 🌟 https://urlin.us/2uEx7k



        -
        -Monograph series 8, Volume 1, European Monitoring Centre for Drugs and Drug Addiction, Lisbon. ... The first two chapters in this section examine the role of cannabis as a ... Sweden promotes a vision of a 'drug-free society' at policy level. ... http://whqlibdoc.who.int/hq/1997/WHO_MSA_PSA_97.4.pdf 116 Chapter 7 ... 1fdad05405
        -
        -
        -

        diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/How To Crack Adobe CC 2020 ? Adobe Creative Latest Patch MacOSX LINK.md b/spaces/inplisQlawa/anything-midjourney-v4-1/How To Crack Adobe CC 2020 ? Adobe Creative Latest Patch MacOSX LINK.md deleted file mode 100644 index 9a326333462e92c8e26ad64fb48a72d715247e14..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/How To Crack Adobe CC 2020 ? Adobe Creative Latest Patch MacOSX LINK.md +++ /dev/null @@ -1,60 +0,0 @@ -

        How to crack Adobe CC 2020 – Adobe Creative Latest Patch MacOSX


        Download ✪✪✪ https://urlin.us/2uEvmi



        - -Click Account. - -Click Check for Updates. - -After installing an update, go through the same process as before. - -As of September 2020, the Creative Cloud desktop app for macOS requires macOS Catalina. - -If you find that updates are not being installed automatically, for example if they fail to connect to the service, you may find that the Creative Cloud desktop app for macOS does not have the required number of authorization tokens. Follow these steps to fix it: - -Open the Creative Cloud desktop app. (Select the icon in the Windows taskbar or the macOS menu bar.) - -Go to the menu bar and select Settings, then Account. - -Select Generate New Tokens. - -The Creative Cloud desktop app will be back up and running. - -Q: - -Django: Why is this string formatting not working? - -I want to format this date - -2012-10-26T12:53:20 - -as "YYYY-MM-DD" in the template. - -I have done this: - - - - item.text - -but the output is: - -A: - -You can also use the time template filter. - - item.datetime - -Because date:'Y-m-d' is already being interpreted as a string, and 'Y-m-d' is not interpreted as a format for that string. If you want to add a format to the output, you need to pass the format to the filter like this: - -date:"Y-m-d" - -The documentation for time filters has more information. - -1. Field of the Invention - -The present invention relates to a hydraulic control system for an automatic transmission and more particularly to a hydraulic control system for an automatic transmission that has a shift control valve for controlling a speed change ratio or the like by controlling the supply of a working oil to a hydraulically operated friction device of the automatic transmission. - -2. Description of the Related Art - -Conventionally, a hydraulic control system of 4fefd39f24
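        The embedded question-and-answer above is about producing a YYYY-MM-DD rendering of a timestamp such as 2012-10-26T12:53:20. Setting aside the Django-specific quoting details discussed there, the underlying operation — parse the value, then format it with an explicit pattern — can be sketched with Python's standard library. This is deliberately not Django code, and the variable names are made up for illustration.

        ```python
        from datetime import datetime

        raw = "2012-10-26T12:53:20"           # the ISO-style string from the question
        parsed = datetime.fromisoformat(raw)  # parse the string into a real datetime first
        print(parsed.strftime("%Y-%m-%d"))    # then format it explicitly -> 2012-10-26
        ```

        In a Django template the analogous step is passing an explicit output format to the date filter, as the answer above suggests.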
        -
        -
        -

        diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Ismael Miranda Discografia Torrent Descargar.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Ismael Miranda Discografia Torrent Descargar.md deleted file mode 100644 index 6d51f6daf311746de395aa51d08d90755b1a4c3f..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Ismael Miranda Discografia Torrent Descargar.md +++ /dev/null @@ -1,46 +0,0 @@ -

        ismael miranda discografia torrent descargar


        Download File ———>>> https://urlin.us/2uExaf



        -
        -Left, Eddie Rivera, Edwin Colon, Frankie Rodriguez, Ismael Miranda, Jack Hitchcock, Julio Romero, Junior Gonzalez, Larry Harlow, Larry Spencer, Leopoldo PinedaRight, Eddie Rivera, Edwin Colon, Frankie Rodriguez, Ismael Miranda, Jack Hitchcock, Julio Romero, Junior Gonzalez, Larry Harlow, Larry Spencer, Leopoldo Pineda - -References - -Further reading - -External links - -Category:2011 American television series debuts - -Category:2011 American television series endings - -Category:2010s American comedy television series - -Category:English-language television programs - -Category:Spanish-language television programs - -Category:Television shows set in New York City - -Category:Television series by 3 Arts Entertainment - -Category:Television series by Universal Television - -Category:Television shows set in Chicago - -Category:Television shows set in MiamiOral and maxillofacial manifestations in sickle cell disease. - -Oral and maxillofacial manifestations in sickle cell disease have been well known for many years. Oral lesions may occur as a complication of the disease itself, or due to the effects of splenic dysfunction or anemia. Chronic oral ulcers and oral candidiasis are common, while more severe lesions, such as osteomyelitis, osteonecrosis, and infarcted tonsils, may occur. Some patients may present with oral manifestations that are the primary symptom of the disease, such as severe or recurrent aphthous ulcers. Other manifestations may be associated with cystic fibrosis, alpha-1-antitrypsin deficiency, congenital anomalies, and drug ingestion. Certain procedures such as bone marrow transplantation and retinal photography may cause potentially dangerous or irreversible oral complications in some patients. We review the oral and maxillofacial complications of sickle cell disease with an emphasis on the relative frequency and severity of these lesions in different age groups. - -Home-Makers Battle Tech Giants, Starting From the Living Room - mcxx - -====== - -mattculbreth - -This is the "take-two-pumps-and-go" generation. We're all busy, we're - -overloaded, we've got business to tend to and a new baby to care for, it's - -hell. But don't let that stop you. So get up, get your butt in gear, 4fefd39f24
        -
        -
        -

        diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Mickey Virus 2 Full 2021 Movie With English Subtitles Download Torrent.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Mickey Virus 2 Full 2021 Movie With English Subtitles Download Torrent.md deleted file mode 100644 index 26afa71e9b68adc8427f5a4069a595584b57d758..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Mickey Virus 2 Full 2021 Movie With English Subtitles Download Torrent.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Mickey Virus 2 full movie with english subtitles download torrent


        Download Zip ->>->>->> https://urlin.us/2uEvyt



        -
        -2 Secret Study Tips To Score Highest in Every Exam Motivational Hindi ... |Hindi| Download Hollywood Movie in Hindi or English | torrent movie downloader ... Mickey Virus Full Movie with Eng. Subtitles | Hindi Movies 2016 Full Movie ... 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/iqovocn/ChuanhuChatGPT/custom.css b/spaces/iqovocn/ChuanhuChatGPT/custom.css deleted file mode 100644 index 5143eb138ea2469d8c457c71cb210fd3fb7cbe15..0000000000000000000000000000000000000000 --- a/spaces/iqovocn/ChuanhuChatGPT/custom.css +++ /dev/null @@ -1,162 +0,0 @@ -:root { - --chatbot-color-light: #F3F3F3; - --chatbot-color-dark: #121111; -} - -/* status_display */ -#status_display { - display: flex; - min-height: 2.5em; - align-items: flex-end; - justify-content: flex-end; -} -#status_display p { - font-size: .85em; - font-family: monospace; - color: var(--body-text-color-subdued); -} - -#chuanhu_chatbot, #status_display { - transition: all 0.6s; -} -/* list */ -ol:not(.options), ul:not(.options) { - padding-inline-start: 2em !important; -} - -/* 亮色 */ -#chuanhu_chatbot { - background-color: var(--chatbot-color-light) !important; -} -[data-testid = "bot"] { - background-color: #FFFFFF !important; -} -[data-testid = "user"] { - background-color: #95EC69 !important; -} -/* 对话气泡 */ -[class *= "message"] { - border-radius: var(--radius-xl) !important; - border: none; - padding: var(--spacing-xl) !important; - font-size: var(--text-md) !important; - line-height: var(--line-md) !important; - min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); - min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); -} -[data-testid = "bot"] { - max-width: 85%; - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 85%; - width: auto !important; - border-bottom-right-radius: 0 !important; -} -/* 表格 */ -table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} -td,th { - border: 1.2px solid var(--border-color-primary) !important; - padding: 0.2em; -} -thead { - background-color: rgba(175,184,193,0.2); -} -thead th { - padding: .5em .2em; -} -/* 行内代码 */ -code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* 代码块 */ -pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: hsla(0, 0%, 0%, 80%)!important; - border-radius: 10px; - padding: 1.4em 1.2em 0em 1.4em; - margin: 1.2em 2em 1.2em 0.5em; - color: #FFF; - box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2); -} -/* 代码高亮样式 */ -.highlight .hll { background-color: #49483e } -.highlight .c { color: #75715e } /* Comment */ -.highlight .err { color: #960050; background-color: #1e0010 } /* Error */ -.highlight .k { color: #66d9ef } /* Keyword */ -.highlight .l { color: #ae81ff } /* Literal */ -.highlight .n { color: #f8f8f2 } /* Name */ -.highlight .o { color: #f92672 } /* Operator */ -.highlight .p { color: #f8f8f2 } /* Punctuation */ -.highlight .ch { color: #75715e } /* Comment.Hashbang */ -.highlight .cm { color: #75715e } /* Comment.Multiline */ -.highlight .cp { color: #75715e } /* Comment.Preproc */ -.highlight .cpf { color: #75715e } /* Comment.PreprocFile */ -.highlight .c1 { color: #75715e } /* Comment.Single */ -.highlight .cs { color: #75715e } /* Comment.Special */ -.highlight .gd { color: #f92672 } /* Generic.Deleted */ -.highlight .ge { font-style: italic } /* Generic.Emph */ -.highlight .gi { color: #a6e22e } /* Generic.Inserted */ -.highlight .gs { font-weight: bold } /* Generic.Strong */ -.highlight .gu { color: #75715e } /* Generic.Subheading */ -.highlight .kc { color: #66d9ef } /* Keyword.Constant */ -.highlight .kd { color: #66d9ef } /* Keyword.Declaration */ -.highlight .kn { color: 
#f92672 } /* Keyword.Namespace */ -.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */ -.highlight .kr { color: #66d9ef } /* Keyword.Reserved */ -.highlight .kt { color: #66d9ef } /* Keyword.Type */ -.highlight .ld { color: #e6db74 } /* Literal.Date */ -.highlight .m { color: #ae81ff } /* Literal.Number */ -.highlight .s { color: #e6db74 } /* Literal.String */ -.highlight .na { color: #a6e22e } /* Name.Attribute */ -.highlight .nb { color: #f8f8f2 } /* Name.Builtin */ -.highlight .nc { color: #a6e22e } /* Name.Class */ -.highlight .no { color: #66d9ef } /* Name.Constant */ -.highlight .nd { color: #a6e22e } /* Name.Decorator */ -.highlight .ni { color: #f8f8f2 } /* Name.Entity */ -.highlight .ne { color: #a6e22e } /* Name.Exception */ -.highlight .nf { color: #a6e22e } /* Name.Function */ -.highlight .nl { color: #f8f8f2 } /* Name.Label */ -.highlight .nn { color: #f8f8f2 } /* Name.Namespace */ -.highlight .nx { color: #a6e22e } /* Name.Other */ -.highlight .py { color: #f8f8f2 } /* Name.Property */ -.highlight .nt { color: #f92672 } /* Name.Tag */ -.highlight .nv { color: #f8f8f2 } /* Name.Variable */ -.highlight .ow { color: #f92672 } /* Operator.Word */ -.highlight .w { color: #f8f8f2 } /* Text.Whitespace */ -.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */ -.highlight .mf { color: #ae81ff } /* Literal.Number.Float */ -.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */ -.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */ -.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */ -.highlight .sa { color: #e6db74 } /* Literal.String.Affix */ -.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */ -.highlight .sc { color: #e6db74 } /* Literal.String.Char */ -.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */ -.highlight .sd { color: #e6db74 } /* Literal.String.Doc */ -.highlight .s2 { color: #e6db74 } /* Literal.String.Double */ -.highlight .se { color: #ae81ff } /* Literal.String.Escape */ -.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */ -.highlight .si { color: #e6db74 } /* Literal.String.Interpol */ -.highlight .sx { color: #e6db74 } /* Literal.String.Other */ -.highlight .sr { color: #e6db74 } /* Literal.String.Regex */ -.highlight .s1 { color: #e6db74 } /* Literal.String.Single */ -.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */ -.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */ -.highlight .fm { color: #a6e22e } /* Name.Function.Magic */ -.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */ -.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */ -.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */ -.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */ -.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */ diff --git a/spaces/itmorn/face_keypoint/README.md b/spaces/itmorn/face_keypoint/README.md deleted file mode 100644 index c6ffcdc3daa0a5144e6bfd134e81f24061942274..0000000000000000000000000000000000000000 --- a/spaces/itmorn/face_keypoint/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Face Keypoint -emoji: 🙂 -colorFrom: yellow -colorTo: gray -sdk: gradio -sdk_version: 3.4 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jackli888/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/clipseg/datasets/pfe_dataset.py 
b/spaces/jackli888/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/clipseg/datasets/pfe_dataset.py deleted file mode 100644 index 83988dea963a2c4226010a336573de94bf06c55e..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/clipseg/datasets/pfe_dataset.py +++ /dev/null @@ -1,129 +0,0 @@ -from os.path import expanduser -import torch -import json -from general_utils import get_from_repository -from datasets.lvis_oneshot3 import blend_image_segmentation -from general_utils import log - -PASCAL_CLASSES = {a['id']: a['synonyms'] for a in json.load(open('datasets/pascal_classes.json'))} - - -class PFEPascalWrapper(object): - - def __init__(self, mode, split, mask='separate', image_size=473, label_support=None, size=None, p_negative=0, aug=None): - import sys - # sys.path.append(expanduser('~/projects/new_one_shot')) - from third_party.PFENet.util.dataset import SemData - - get_from_repository('PascalVOC2012', ['Pascal5i.tar']) - - self.p_negative = p_negative - self.size = size - self.mode = mode - self.image_size = image_size - - if label_support in {True, False}: - log.warning('label_support argument is deprecated. Use mask instead.') - #raise ValueError() - - self.mask = mask - - value_scale = 255 - mean = [0.485, 0.456, 0.406] - mean = [item * value_scale for item in mean] - std = [0.229, 0.224, 0.225] - std = [item * value_scale for item in std] - - import third_party.PFENet.util.transform as transform - - if mode == 'val': - data_list = expanduser('~/projects/old_one_shot/PFENet/lists/pascal/val.txt') - - data_transform = [transform.test_Resize(size=image_size)] if image_size != 'original' else [] - data_transform += [ - transform.ToTensor(), - transform.Normalize(mean=mean, std=std) - ] - - - elif mode == 'train': - data_list = expanduser('~/projects/old_one_shot/PFENet/lists/pascal/voc_sbd_merge_noduplicate.txt') - - assert image_size != 'original' - - data_transform = [ - transform.RandScale([0.9, 1.1]), - transform.RandRotate([-10, 10], padding=mean, ignore_label=255), - transform.RandomGaussianBlur(), - transform.RandomHorizontalFlip(), - transform.Crop((image_size, image_size), crop_type='rand', padding=mean, ignore_label=255), - transform.ToTensor(), - transform.Normalize(mean=mean, std=std) - ] - - data_transform = transform.Compose(data_transform) - - self.dataset = SemData(split=split, mode=mode, data_root=expanduser('~/datasets/PascalVOC2012/VOC2012'), - data_list=data_list, shot=1, transform=data_transform, use_coco=False, use_split_coco=False) - - self.class_list = self.dataset.sub_val_list if mode == 'val' else self.dataset.sub_list - - # verify that subcls_list always has length 1 - # assert len(set([len(d[4]) for d in self.dataset])) == 1 - - print('actual length', len(self.dataset.data_list)) - - def __len__(self): - if self.mode == 'val': - return len(self.dataset.data_list) - else: - return len(self.dataset.data_list) - - def __getitem__(self, index): - if self.dataset.mode == 'train': - image, label, s_x, s_y, subcls_list = self.dataset[index % len(self.dataset.data_list)] - elif self.dataset.mode == 'val': - image, label, s_x, s_y, subcls_list, ori_label = self.dataset[index % len(self.dataset.data_list)] - ori_label = torch.from_numpy(ori_label).unsqueeze(0) - - if self.image_size != 'original': - longerside = max(ori_label.size(1), ori_label.size(2)) - backmask = torch.ones(ori_label.size(0), longerside, longerside).cuda()*255 - backmask[0, :ori_label.size(1), 
:ori_label.size(2)] = ori_label - label = backmask.clone().long() - else: - label = label.unsqueeze(0) - - # assert label.shape == (473, 473) - - if self.p_negative > 0: - if torch.rand(1).item() < self.p_negative: - while True: - idx = torch.randint(0, len(self.dataset.data_list), (1,)).item() - _, _, s_x, s_y, subcls_list_tmp, _ = self.dataset[idx] - if subcls_list[0] != subcls_list_tmp[0]: - break - - s_x = s_x[0] - s_y = (s_y == 1)[0] - label_fg = (label == 1).float() - val_mask = (label != 255).float() - - class_id = self.class_list[subcls_list[0]] - - label_name = PASCAL_CLASSES[class_id][0] - label_add = () - mask = self.mask - - if mask == 'text': - support = ('a photo of a ' + label_name + '.',) - elif mask == 'separate': - support = (s_x, s_y) - else: - if mask.startswith('text_and_'): - label_add = (label_name,) - mask = mask[9:] - - support = (blend_image_segmentation(s_x, s_y.float(), mask)[0],) - - return (image,) + label_add + support, (label_fg.unsqueeze(0), val_mask.unsqueeze(0), subcls_list[0]) diff --git a/spaces/james-oldfield/PandA/networks/genforce/runners/controllers/fid_evaluator.py b/spaces/james-oldfield/PandA/networks/genforce/runners/controllers/fid_evaluator.py deleted file mode 100644 index 3a4193918ccf89b55a786507f77638e227c5a891..0000000000000000000000000000000000000000 --- a/spaces/james-oldfield/PandA/networks/genforce/runners/controllers/fid_evaluator.py +++ /dev/null @@ -1,55 +0,0 @@ -# python3.7 -"""Contains the running controller for evaluation.""" - -import os.path -import time - -from .base_controller import BaseController -from ..misc import format_time - -__all__ = ['FIDEvaluator'] - - -class FIDEvaluator(BaseController): - """Defines the running controller for evaluation. - - This controller is used to evalute the GAN model using FID metric. - - NOTE: The controller is set to `LAST` priority by default. - """ - - def __init__(self, config): - assert isinstance(config, dict) - config.setdefault('priority', 'LAST') - super().__init__(config) - - self.num = config.get('num', 50000) - self.ignore_cache = config.get('ignore_cache', False) - self.align_tf = config.get('align_tf', True) - self.file = None - - def setup(self, runner): - assert hasattr(runner, 'fid') - file_path = os.path.join(runner.work_dir, f'metric_fid{self.num}.txt') - if runner.rank == 0: - self.file = open(file_path, 'w') - - def close(self, runner): - if runner.rank == 0: - self.file.close() - - def execute_after_iteration(self, runner): - mode = runner.mode # save runner mode. - start_time = time.time() - fid_value = runner.fid(self.num, - ignore_cache=self.ignore_cache, - align_tf=self.align_tf) - duration_str = format_time(time.time() - start_time) - log_str = (f'FID: {fid_value:.5f} at iter {runner.iter:06d} ' - f'({runner.seen_img / 1000:.1f} kimg). ({duration_str})') - runner.logger.info(log_str) - if runner.rank == 0: - date = time.strftime("%Y-%m-%d %H:%M:%S") - self.file.write(f'[{date}] {log_str}\n') - self.file.flush() - runner.set_mode(mode) # restore runner mode. 
diff --git a/spaces/jayparmr/ICBINP_OG/app.py b/spaces/jayparmr/ICBINP_OG/app.py deleted file mode 100644 index 19ad72c0b9184d0f7a3102e303c8cc24481cda73..0000000000000000000000000000000000000000 --- a/spaces/jayparmr/ICBINP_OG/app.py +++ /dev/null @@ -1,89 +0,0 @@ - -import gradio as gr -import torch -# from diffusers import DiffusionPipeline -from diffusers import StableDiffusionPipeline -from diffusers.models import AutoencoderKL -from diffusers import StableDiffusionPipeline - - - - -def generate(prompt, negative_prompts, samples, steps,scale, seed, width, height): - - pipeline = StableDiffusionPipeline.from_pretrained("jayparmr/icbinp", use_auth_token="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", torch_dtype=torch.float16) - pipeline.to("cuda") - - generator = torch.Generator(device="cuda").manual_seed(int(seed)) - - images_list = pipeline( - [prompt] * samples, - negative_prompt= [negative_prompts] * samples, - num_inference_steps=steps, - guidance_scale=scale, - generator=generator, - width=width, - height=height - ) - - # vae = AutoencoderKL.from_pretrained("stabilityai/sdxl-vae") - # pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", vae=vae).to("cuda") - - - # images_list = pipe( - # [prompt] * samples, - # negative_prompt= [negative_prompts] * samples, - # num_inference_steps=steps, - # guidance_scale=scale - # ) - print("stop gen") - images = [] - print(images_list) - for i, image in enumerate(images_list["images"]): - images.append(image) - return images - -block = gr.Blocks() - -with block: - with gr.Group(): - with gr.Box(): - with gr.Row().style(equal_height=True): - text = gr.Textbox( - label="Enter your prompt", - show_label=False, - max_lines=1, - placeholder="Enter your prompt", - ) - negative_text = gr.Textbox( - value="", - label="Enter your negative prompt", - show_label=False, - max_lines=1, - placeholder="Enter your negative prompt", - ) - btn = gr.Button("Generate image") - gallery = gr.Gallery( - label="Generated images", show_label=False, elem_id="gallery", width = 512 - ).style(columns=[2], rows=[2], object_fit="contain", height="auto") - - - with gr.Row(elem_id="advanced-options"): - samples = gr.Slider(label="Images", minimum=1, maximum=4, value=1, step=1) - steps = gr.Slider(label="Steps", minimum=1, maximum=500, value=100, step=1) - width = gr.Slider(label="width", minimum=1, maximum=2048, value=512, step=1) - height = gr.Slider(label="height", minimum=1, maximum=2048, value=512, step=1) - scale = gr.Slider( - label="Guidance Scale", minimum=0, maximum=50, value=7.5, step=0.1 - ) - seed = gr.Slider( - label="Seed", - minimum=0, - maximum=2147483647, - step=1 - ) - text.submit(generate, inputs=[text,negative_text, samples, steps, scale, seed, width, height], outputs=gallery) - btn.click(generate, inputs=[text,negative_text, samples, steps, scale, seed, width, height], outputs=gallery) - - -block.launch() \ No newline at end of file diff --git a/spaces/jbilcke-hf/ai-clip-factory/src/app/server/actions/community.ts b/spaces/jbilcke-hf/ai-clip-factory/src/app/server/actions/community.ts deleted file mode 100644 index 56d23c289d8d98cbbf4771720d6b18e73cbd6c17..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/ai-clip-factory/src/app/server/actions/community.ts +++ /dev/null @@ -1,267 +0,0 @@ -"use server" - -import { v4 as uuidv4 } from "uuid" - -import { CreatePostResponse, GetAppPostResponse, GetAppPostsResponse, Post, PostVisibility } from "@/types" -import { filterOutBadWords } from "./censorship" -import { shuffleArray 
} from "../utils/shuffleArray" - -const apiUrl = `${process.env.COMMUNITY_API_URL || ""}` -const apiToken = `${process.env.COMMUNITY_API_TOKEN || ""}` -const appId = `${process.env.COMMUNITY_API_ID || ""}` -const secretModerationKey = `${process.env.MODERATION_KEY || ""}` - -export async function postToCommunity({ - prompt = "", - model = "", - assetUrl = "", -}: { - prompt: string - model: string, - assetUrl: string -}): Promise { - - const before = prompt - prompt = filterOutBadWords(prompt) - - if (prompt !== before) { - console.log(`user attempted to use bad words! their original prompt is: ${before}`) - } - - if (prompt.toLocaleLowerCase().includes("male muscle") || prompt.toLocaleLowerCase().includes("muscle growth")) { - throw new Error("unknown erorr") - } - - // if the community API is disabled, - // we don't fail, we just mock - if (!apiUrl) { - const mockPost: Post = { - postId: uuidv4(), - appId: "mock", - prompt, - model, - previewUrl: assetUrl, - assetUrl, - createdAt: new Date().toISOString(), - visibility: "normal", - upvotes: 0, - downvotes: 0 - } - return mockPost - } - - if (!prompt) { - console.error(`cannot call the community API without a prompt, aborting..`) - throw new Error(`cannot call the community API without a prompt, aborting..`) - } - if (!assetUrl) { - console.error(`cannot call the community API without an assetUrl, aborting..`) - throw new Error(`cannot call the community API without an assetUrl, aborting..`) - } - - try { - console.log(`calling POST ${apiUrl}/posts/${appId} with prompt: ${prompt}`) - - const postId = uuidv4() - - const post: Partial = { postId, appId, prompt, model, assetUrl } - - console.log(`target url is: ${ - `${apiUrl}/posts/${appId}` - }`) - - const res = await fetch(`${apiUrl}/posts/${appId}`, { - method: "POST", - headers: { - Accept: "application/json", - "Content-Type": "application/json", - Authorization: `Bearer ${apiToken}`, - }, - body: JSON.stringify(post), - cache: 'no-store', - // we can also use this (see https://vercel.com/blog/vercel-cache-api-nextjs-cache) - // next: { revalidate: 1 } - }) - - // Recommendation: handle errors - if (res.status !== 201) { - // This will activate the closest `error.js` Error Boundary - throw new Error('Failed to fetch data') - } - - const response = (await res.json()) as CreatePostResponse - // console.log("response:", response) - return response.post - } catch (err) { - const error = `failed to post to community: ${err}` - console.error(error) - throw new Error(error) - } -} - -export async function getLatestPosts({ - visibility, - maxNbPosts = 80, - shuffle = true, -}: { - visibility?: PostVisibility - maxNbPosts?: number - shuffle?: boolean -}): Promise { - - let posts: Post[] = [] - - // if the community API is disabled we don't fail, - // we just mock - if (!apiUrl) { - return posts - } - - try { - // console.log(`calling GET ${apiUrl}/posts with renderId: ${renderId}`) - // TODO: send the max number of posts - const res = await fetch(`${apiUrl}/posts/${appId}/firehose/${ - visibility || "all" - }/${ - maxNbPosts || 80 - }/${ - !!shuffle - }`, { - method: "GET", - headers: { - Accept: "application/json", - "Content-Type": "application/json", - Authorization: `Bearer ${apiToken}`, - }, - cache: 'no-store', - // we can also use this (see https://vercel.com/blog/vercel-cache-api-nextjs-cache) - // next: { revalidate: 1 } - }) - - // console.log("res:", res) - // The return value is *not* serialized - // You can return Date, Map, Set, etc. 
- - // Recommendation: handle errors - if (res.status !== 200) { - // This will activate the closest `error.js` Error Boundary - throw new Error('Failed to fetch data') - } - - const response = (await res.json()) as GetAppPostsResponse - // console.log("response:", response) - - const posts: Post[] = Array.isArray(response?.posts) ? response?.posts : [] - - return posts - } catch (err) { - // const error = `failed to get posts: ${err}` - // console.error(error) - // throw new Error(error) - return [] - } -} - -export async function getPost(postId: string): Promise { - - // if the community API is disabled we don't fail, - // we just mock - if (!apiUrl) { - throw new Error("community API is not enabled") - } - - try { - // console.log(`calling GET ${apiUrl}/posts with renderId: ${renderId}`) - const res = await fetch(`${apiUrl}/posts/${appId}/${postId}`, { - method: "GET", - headers: { - Accept: "application/json", - "Content-Type": "application/json", - Authorization: `Bearer ${apiToken}`, - }, - cache: 'no-store', - // we can also use this (see https://vercel.com/blog/vercel-cache-api-nextjs-cache) - // next: { revalidate: 1 } - }) - - // console.log("res:", res) - // The return value is *not* serialized - // You can return Date, Map, Set, etc. - - // Recommendation: handle errors - if (res.status !== 200) { - // This will activate the closest `error.js` Error Boundary - throw new Error('Failed to fetch data') - } - - const response = (await res.json()) as GetAppPostResponse - // console.log("response:", response) - return response.post - } catch (err) { - const error = `failed to get post: ${err}` - console.error(error) - throw new Error(error) - } -} - -export async function deletePost({ - postId, - moderationKey, -}: { - postId: string - moderationKey: string -}): Promise { - - // if the community API is disabled, - // we don't fail, we just mock - if (!apiUrl) { - return false - } - - if (!postId) { - console.error(`cannot delete a post without a postId, aborting..`) - throw new Error(`cannot delete a post without a postId, aborting..`) - } - if (!moderationKey) { - console.error(`cannot delete a post without a moderationKey, aborting..`) - throw new Error(`cannot delete a post without a moderationKey, aborting..`) - } - - if (moderationKey !== secretModerationKey) { - console.error(`invalid moderation key, operation denied! please ask an admin for the mdoeration key`) - throw new Error(`invalid moderation key, operation denied! please ask an admin for the mdoeration key`) - } - - try { - console.log(`calling DELETE ${apiUrl}/posts/${appId}/${postId}`) - - const res = await fetch(`${apiUrl}/posts/${appId}/${postId}`, { - method: "DELETE", - headers: { - Accept: "application/json", - "Content-Type": "application/json", - Authorization: `Bearer ${apiToken}`, - }, - cache: 'no-store', - // we can also use this (see https://vercel.com/blog/vercel-cache-api-nextjs-cache) - // next: { revalidate: 1 } - }) - - // console.log("res:", res) - // The return value is *not* serialized - // You can return Date, Map, Set, etc. 
- - // Recommendation: handle errors - if (res.status !== 200) { - // This will activate the closest `error.js` Error Boundary - throw new Error('Failed to fetch data') - } - - const response = (await res.json()) as CreatePostResponse - return true - } catch (err) { - const error = `failed to delete the post: ${err}` - console.error(error) - throw new Error(error) - } -} diff --git a/spaces/jcenaa/Segment-Any-RGBD/third_party/CLIP/clip/__init__.py b/spaces/jcenaa/Segment-Any-RGBD/third_party/CLIP/clip/__init__.py deleted file mode 100644 index dcc5619538c0f7c782508bdbd9587259d805e0d9..0000000000000000000000000000000000000000 --- a/spaces/jcenaa/Segment-Any-RGBD/third_party/CLIP/clip/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .clip import * diff --git a/spaces/jhtonyKoo/music_mixing_style_transfer/app.py b/spaces/jhtonyKoo/music_mixing_style_transfer/app.py deleted file mode 100644 index c21850580726a7294c3d321e2532235e77c4cc59..0000000000000000000000000000000000000000 --- a/spaces/jhtonyKoo/music_mixing_style_transfer/app.py +++ /dev/null @@ -1,174 +0,0 @@ -import os -import binascii -import warnings - -import json -import argparse -import copy - -import numpy as np -import matplotlib.pyplot as plt -import torch -import tqdm -import librosa -import librosa.display -import soundfile as sf -import gradio as gr -import pytube as pt - -from pytube.exceptions import VideoUnavailable - -from inference.style_transfer import * - - -yt_video_dir = f"./yt_dir/0" -os.makedirs(yt_video_dir, exist_ok=True) - - -def get_audio_from_yt_video_input(yt_link: str, start_point_in_second=0, duration_in_second=30): - try: - yt = pt.YouTube(yt_link) - t = yt.streams.filter(only_audio=True) - filename_in = os.path.join(yt_video_dir, "input.wav") - t[0].download(filename=filename_in) - except VideoUnavailable as e: - warnings.warn(f"Video Not Found at {yt_link} ({e})") - filename_in = None - - # trim audio length - due to computation time on HuggingFace environment - trim_audio(target_file_path=filename_in, start_point_in_second=start_point_in_second, duration_in_second=duration_in_second) - - return filename_in, filename_in - -def get_audio_from_yt_video_ref(yt_link: str, start_point_in_second=0, duration_in_second=30): - try: - yt = pt.YouTube(yt_link) - t = yt.streams.filter(only_audio=True) - filename_ref = os.path.join(yt_video_dir, "reference.wav") - t[0].download(filename=filename_ref) - except VideoUnavailable as e: - warnings.warn(f"Video Not Found at {yt_link} ({e})") - filename_ref = None - - # trim audio length - due to computation time on HuggingFace environment - trim_audio(target_file_path=filename_ref, start_point_in_second=start_point_in_second, duration_in_second=duration_in_second) - - return filename_ref, filename_ref - -def inference(file_uploaded_in, file_uploaded_ref): - # clear out previously separated results - os.system(f"rm -r {yt_video_dir}/separated") - # change file path name - os.system(f"cp {file_uploaded_in} {yt_video_dir}/input.wav") - os.system(f"cp {file_uploaded_ref} {yt_video_dir}/reference.wav") - - # Perform music mixing style transfer - args = set_up() - - inference_style_transfer = Mixing_Style_Transfer_Inference(args) - output_wav_path = inference_style_transfer.inference(file_uploaded_in, file_uploaded_ref) - - return output_wav_path - - - -with gr.Blocks() as demo: - gr.HTML( - """ -
        -
        -

        - Music Mixing Style Transfer -

        -
        - """ - ) - gr.Markdown( - """ - This page is a Hugging Face interactive demo of the paper ["Music Mixing Style Transfer: A Contrastive Learning Approach to Disentangle Audio Effects"](https://huggingface.co/papers/2211.02247) (ICASSP 2023). - - [project page](https://jhtonykoo.github.io/MixingStyleTransfer/) - - [GitHub](https://github.com/jhtonyKoo/music_mixing_style_transfer) - - [supplementary](https://pale-cicada-946.notion.site/Music-Mixing-Style-Transfer-A-Contrastive-Learning-Approach-to-Disentangle-Audio-Effects-Supplemen-e6eccd9a431a4a8fa4fdd5adb2d3f219) - """ - ) - with gr.Group(): - with gr.Column(): - with gr.Blocks(): - with gr.Tab("Input Music"): - file_uploaded_in = gr.Audio(label="Input track (mix) to be mixing style transferred", type="filepath") - with gr.Tab("YouTube url"): - with gr.Row(): - yt_link_in = gr.Textbox( - label="Enter YouTube Link of the Video", autofocus=True, lines=3 - ) - yt_in_start_sec = gr.Number( - value=0, - label="starting point of the song (in seconds)" - ) - yt_in_duration_sec = gr.Number( - value=30, - label="duration of the song (in seconds)" - ) - yt_btn_in = gr.Button("Download Audio from YouTube Link", size="lg") - yt_audio_path_in = gr.Audio( - label="Input Audio Extracted from the YouTube Video", interactive=False - ) - yt_btn_in.click( - get_audio_from_yt_video_input, - inputs=[yt_link_in, yt_in_start_sec, yt_in_duration_sec], - outputs=[yt_audio_path_in, file_uploaded_in], - ) - with gr.Blocks(): - with gr.Tab("Reference Music"): - file_uploaded_ref = gr.Audio(label="Reference track (mix) to copy mixing style", type="filepath") - with gr.Tab("YouTube url"): - with gr.Row(): - yt_link_ref = gr.Textbox( - label="Enter YouTube Link of the Video", autofocus=True, lines=3 - ) - yt_ref_start_sec = gr.Number( - value=0, - label="starting point of the song (in seconds)" - ) - yt_ref_duration_sec = gr.Number( - value=30, - label="duration of the song (in seconds)" - ) - yt_btn_ref = gr.Button("Download Audio from YouTube Link", size="lg") - yt_audio_path_ref = gr.Audio( - label="Reference Audio Extracted from the YouTube Video", interactive=False - ) - yt_btn_ref.click( - get_audio_from_yt_video_ref, - inputs=[yt_link_ref, yt_ref_start_sec, yt_ref_duration_sec], - outputs=[yt_audio_path_ref, file_uploaded_ref], - ) - - with gr.Group(): - gr.HTML( - """ -

        Mixing Style Transfer. Performs stem-wise audio-effects style conversion by first source-separating the input mix. Inference takes longer for longer input samples, so please be patient...

        - """ - ) - with gr.Column(): - inference_btn = gr.Button("Run Mixing Style Transfer") - with gr.Row(): - output_mix = gr.Audio(label="mixing style transferred music track") - inference_btn.click( - inference, - inputs=[file_uploaded_in, file_uploaded_ref], - outputs=[output_mix], - ) - - - -if __name__ == "__main__": - demo.launch(debug=True) - \ No newline at end of file diff --git a/spaces/jinmao/2/modules/overwrites.py b/spaces/jinmao/2/modules/overwrites.py deleted file mode 100644 index bfcd4d01b7d7bec1184a8d09113933bca860530b..0000000000000000000000000000000000000000 --- a/spaces/jinmao/2/modules/overwrites.py +++ /dev/null @@ -1,56 +0,0 @@ -from __future__ import annotations -import logging - -from llama_index import Prompt -from typing import List, Tuple -import mdtex2html - -from modules.presets import * -from modules.llama_func import * - - -def compact_text_chunks(self, prompt: Prompt, text_chunks: List[str]) -> List[str]: - logging.debug("Compacting text chunks...🚀🚀🚀") - combined_str = [c.strip() for c in text_chunks if c.strip()] - combined_str = [f"[{index+1}] {c}" for index, c in enumerate(combined_str)] - combined_str = "\n\n".join(combined_str) - # resplit based on self.max_chunk_overlap - text_splitter = self.get_text_splitter_given_prompt(prompt, 1, padding=1) - return text_splitter.split_text(combined_str) - - -def postprocess( - self, y: List[Tuple[str | None, str | None]] -) -> List[Tuple[str | None, str | None]]: - """ - Parameters: - y: List of tuples representing the message and response pairs. Each message and response should be a string, which may be in Markdown format. - Returns: - List of tuples representing the message and response. Each message and response will be a string of HTML. - """ - if y is None or y == []: - return [] - user, bot = y[-1] - if not detect_converted_mark(user): - user = convert_asis(user) - if not detect_converted_mark(bot): - bot = convert_mdtext(bot) - y[-1] = (user, bot) - return y - -with open("./assets/custom.js", "r", encoding="utf-8") as f, open("./assets/Kelpy-Codos.js", "r", encoding="utf-8") as f2: - customJS = f.read() - kelpyCodos = f2.read() - -def reload_javascript(): - print("Reloading javascript...") - js = f'' - def template_response(*args, **kwargs): - res = GradioTemplateResponseOriginal(*args, **kwargs) - res.body = res.body.replace(b'', f'{js}'.encode("utf8")) - res.init_headers() - return res - - gr.routes.templates.TemplateResponse = template_response - -GradioTemplateResponseOriginal = gr.routes.templates.TemplateResponse \ No newline at end of file diff --git a/spaces/joaogabriellima/Real-Time-Voice-Cloning/samples/README.md b/spaces/joaogabriellima/Real-Time-Voice-Cloning/samples/README.md deleted file mode 100644 index 1a392d86e42f72e83954619f563f4881da327236..0000000000000000000000000000000000000000 --- a/spaces/joaogabriellima/Real-Time-Voice-Cloning/samples/README.md +++ /dev/null @@ -1,22 +0,0 @@ -The audio files in this folder are provided for toolbox testing and -benchmarking purposes. These are the same reference utterances -used by the SV2TTS authors to generate the audio samples located at: -https://google.github.io/tacotron/publications/speaker_adaptation/index.html - -The `p240_00000.mp3` and `p260_00000.mp3` files are compressed -versions of audios from the VCTK corpus available at: -https://datashare.is.ed.ac.uk/handle/10283/3443 -VCTK.txt contains the copyright notices and licensing information. 
- -The `1320_00000.mp3`, `3575_00000.mp3`, `6829_00000.mp3` -and `8230_00000.mp3` files are compressed versions of audios -from the LibriSpeech dataset available at: https://openslr.org/12 -For these files, the following notice applies: -``` -LibriSpeech (c) 2014 by Vassil Panayotov - -LibriSpeech ASR corpus is licensed under a -Creative Commons Attribution 4.0 International License. - -See . -``` diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/FpxImagePlugin.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/FpxImagePlugin.py deleted file mode 100644 index 2450c67e9a67530a4ad4fdcb1bbbfd39971ea484..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/FpxImagePlugin.py +++ /dev/null @@ -1,253 +0,0 @@ -# -# THIS IS WORK IN PROGRESS -# -# The Python Imaging Library. -# $Id$ -# -# FlashPix support for PIL -# -# History: -# 97-01-25 fl Created (reads uncompressed RGB images only) -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1997. -# -# See the README file for information on usage and redistribution. -# -import olefile - -from . import Image, ImageFile -from ._binary import i32le as i32 - -# we map from colour field tuples to (mode, rawmode) descriptors -MODES = { - # opacity - (0x00007FFE,): ("A", "L"), - # monochrome - (0x00010000,): ("L", "L"), - (0x00018000, 0x00017FFE): ("RGBA", "LA"), - # photo YCC - (0x00020000, 0x00020001, 0x00020002): ("RGB", "YCC;P"), - (0x00028000, 0x00028001, 0x00028002, 0x00027FFE): ("RGBA", "YCCA;P"), - # standard RGB (NIFRGB) - (0x00030000, 0x00030001, 0x00030002): ("RGB", "RGB"), - (0x00038000, 0x00038001, 0x00038002, 0x00037FFE): ("RGBA", "RGBA"), -} - - -# -# -------------------------------------------------------------------- - - -def _accept(prefix): - return prefix[:8] == olefile.MAGIC - - -## -# Image plugin for the FlashPix images. - - -class FpxImageFile(ImageFile.ImageFile): - format = "FPX" - format_description = "FlashPix" - - def _open(self): - # - # read the OLE directory and see if this is a likely - # to be a FlashPix file - - try: - self.ole = olefile.OleFileIO(self.fp) - except OSError as e: - msg = "not an FPX file; invalid OLE file" - raise SyntaxError(msg) from e - - if self.ole.root.clsid != "56616700-C154-11CE-8553-00AA00A1F95B": - msg = "not an FPX file; bad root CLSID" - raise SyntaxError(msg) - - self._open_index(1) - - def _open_index(self, index=1): - # - # get the Image Contents Property Set - - prop = self.ole.getproperties( - [f"Data Object Store {index:06d}", "\005Image Contents"] - ) - - # size (highest resolution) - - self._size = prop[0x1000002], prop[0x1000003] - - size = max(self.size) - i = 1 - while size > 64: - size = size / 2 - i += 1 - self.maxid = i - 1 - - # mode. instead of using a single field for this, flashpix - # requires you to specify the mode for each channel in each - # resolution subimage, and leaves it to the decoder to make - # sure that they all match. for now, we'll cheat and assume - # that this is always the case. 
- - id = self.maxid << 16 - - s = prop[0x2000002 | id] - - colors = [] - bands = i32(s, 4) - if bands > 4: - msg = "Invalid number of bands" - raise OSError(msg) - for i in range(bands): - # note: for now, we ignore the "uncalibrated" flag - colors.append(i32(s, 8 + i * 4) & 0x7FFFFFFF) - - self.mode, self.rawmode = MODES[tuple(colors)] - - # load JPEG tables, if any - self.jpeg = {} - for i in range(256): - id = 0x3000001 | (i << 16) - if id in prop: - self.jpeg[i] = prop[id] - - self._open_subimage(1, self.maxid) - - def _open_subimage(self, index=1, subimage=0): - # - # setup tile descriptors for a given subimage - - stream = [ - f"Data Object Store {index:06d}", - f"Resolution {subimage:04d}", - "Subimage 0000 Header", - ] - - fp = self.ole.openstream(stream) - - # skip prefix - fp.read(28) - - # header stream - s = fp.read(36) - - size = i32(s, 4), i32(s, 8) - # tilecount = i32(s, 12) - tilesize = i32(s, 16), i32(s, 20) - # channels = i32(s, 24) - offset = i32(s, 28) - length = i32(s, 32) - - if size != self.size: - msg = "subimage mismatch" - raise OSError(msg) - - # get tile descriptors - fp.seek(28 + offset) - s = fp.read(i32(s, 12) * length) - - x = y = 0 - xsize, ysize = size - xtile, ytile = tilesize - self.tile = [] - - for i in range(0, len(s), length): - x1 = min(xsize, x + xtile) - y1 = min(ysize, y + ytile) - - compression = i32(s, i + 8) - - if compression == 0: - self.tile.append( - ( - "raw", - (x, y, x1, y1), - i32(s, i) + 28, - (self.rawmode,), - ) - ) - - elif compression == 1: - # FIXME: the fill decoder is not implemented - self.tile.append( - ( - "fill", - (x, y, x1, y1), - i32(s, i) + 28, - (self.rawmode, s[12:16]), - ) - ) - - elif compression == 2: - internal_color_conversion = s[14] - jpeg_tables = s[15] - rawmode = self.rawmode - - if internal_color_conversion: - # The image is stored as usual (usually YCbCr). - if rawmode == "RGBA": - # For "RGBA", data is stored as YCbCrA based on - # negative RGB. The following trick works around - # this problem : - jpegmode, rawmode = "YCbCrK", "CMYK" - else: - jpegmode = None # let the decoder decide - - else: - # The image is stored as defined by rawmode - jpegmode = rawmode - - self.tile.append( - ( - "jpeg", - (x, y, x1, y1), - i32(s, i) + 28, - (rawmode, jpegmode), - ) - ) - - # FIXME: jpeg tables are tile dependent; the prefix - # data must be placed in the tile descriptor itself! 
- - if jpeg_tables: - self.tile_prefix = self.jpeg[jpeg_tables] - - else: - msg = "unknown/invalid compression" - raise OSError(msg) - - x = x + xtile - if x >= xsize: - x, y = 0, y + ytile - if y >= ysize: - break # isn't really required - - self.stream = stream - self.fp = None - - def load(self): - if not self.fp: - self.fp = self.ole.openstream(self.stream[:2] + ["Subimage 0000 Data"]) - - return ImageFile.ImageFile.load(self) - - def close(self): - self.ole.close() - super().close() - - def __exit__(self, *args): - self.ole.close() - super().__exit__() - - -# -# -------------------------------------------------------------------- - - -Image.register_open(FpxImageFile.format, FpxImageFile, _accept) - -Image.register_extension(FpxImageFile.format, ".fpx") diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/anyio/_core/_resources.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/anyio/_core/_resources.py deleted file mode 100644 index b9a5344aef2962670f9b305a02cd0b11f2087d2f..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/anyio/_core/_resources.py +++ /dev/null @@ -1,18 +0,0 @@ -from __future__ import annotations - -from ..abc import AsyncResource -from ._tasks import CancelScope - - -async def aclose_forcefully(resource: AsyncResource) -> None: - """ - Close an asynchronous resource in a cancelled scope. - - Doing this closes the resource without waiting on anything. - - :param resource: the resource to close - - """ - with CancelScope() as scope: - scope.cancel() - await resource.aclose() diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I__3.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I__3.py deleted file mode 100644 index 8ef3c5ade2b1c2d52a084bd34f82598bb46f774f..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I__3.py +++ /dev/null @@ -1,20 +0,0 @@ -""" TSI{0,1,2,3,5} are private tables used by Microsoft Visual TrueType (VTT) -tool to store its hinting source data. - -TSI3 contains the text of the glyph programs in the form of 'VTTTalk' code. -""" -from fontTools import ttLib - -superclass = ttLib.getTableClass("TSI1") - - -class table_T_S_I__3(superclass): - - extras = { - 0xFFFA: "reserved0", - 0xFFFB: "reserved1", - 0xFFFC: "reserved2", - 0xFFFD: "reserved3", - } - - indextable = "TSI2" diff --git a/spaces/jonatanklosko/chai/README.md b/spaces/jonatanklosko/chai/README.md deleted file mode 100644 index 95032d52627c540387fba00c3032ad526e9c17a1..0000000000000000000000000000000000000000 --- a/spaces/jonatanklosko/chai/README.md +++ /dev/null @@ -1,35 +0,0 @@ ---- -title: Chai -emoji: 🍵 -colorFrom: pink -colorTo: red -sdk: docker -pinned: false ---- - -# Chai 🍵 - -Chai is a demo web application built with [Phoenix LiveView](https://github.com/phoenixframework/phoenix_live_view) -and showcasing multiple Neural Network models from the [Bumblebee](https://github.com/elixir-nx/bumblebee) package. 
- -The app uses a combination of pre-trained models for the following features: - - * **Conversation** using [Blenderbot](https://huggingface.co/facebook/blenderbot-400M-distill) - * **Speech transcription** using [Whisper](https://huggingface.co/openai/whisper-tiny) - * **Image captioning** using [BLIP](https://huggingface.co/Salesforce/blip-image-captioning-base) - * **Emotion recognition** using [RoBERTa](https://huggingface.co/j-hartmann/emotion-english-distilroberta-base) - * **Named entity recognition (NER)** using [BERT](https://huggingface.co/dslim/bert-base-NER) - -## Development - -You need Erlang and Elixir installed, then: - -```shell -git clone https://huggingface.co/spaces/jonatanklosko/chai -cd chai - -mix setup -mix phx.server -``` - -Now you can visit [`localhost:4040`](http://localhost:4040) from your browser. diff --git a/spaces/josedolot/HybridNet_Demo2/encoders/_utils.py b/spaces/josedolot/HybridNet_Demo2/encoders/_utils.py deleted file mode 100644 index 859151c41d9de50ba90a3d0d3408b97803fae2fd..0000000000000000000000000000000000000000 --- a/spaces/josedolot/HybridNet_Demo2/encoders/_utils.py +++ /dev/null @@ -1,59 +0,0 @@ -import torch -import torch.nn as nn - - -def patch_first_conv(model, new_in_channels, default_in_channels=3, pretrained=True): - """Change first convolution layer input channels. - In case: - in_channels == 1 or in_channels == 2 -> reuse original weights - in_channels > 3 -> make random kaiming normal initialization - """ - - # get first conv - for module in model.modules(): - if isinstance(module, nn.Conv2d) and module.in_channels == default_in_channels: - break - - weight = module.weight.detach() - module.in_channels = new_in_channels - - if not pretrained: - module.weight = nn.parameter.Parameter( - torch.Tensor( - module.out_channels, - new_in_channels // module.groups, - *module.kernel_size - ) - ) - module.reset_parameters() - - elif new_in_channels == 1: - new_weight = weight.sum(1, keepdim=True) - module.weight = nn.parameter.Parameter(new_weight) - - else: - new_weight = torch.Tensor( - module.out_channels, - new_in_channels // module.groups, - *module.kernel_size - ) - - for i in range(new_in_channels): - new_weight[:, i] = weight[:, i % default_in_channels] - - new_weight = new_weight * (default_in_channels / new_in_channels) - module.weight = nn.parameter.Parameter(new_weight) - - -def replace_strides_with_dilation(module, dilation_rate): - """Patch Conv2d modules replacing strides with dilation""" - for mod in module.modules(): - if isinstance(mod, nn.Conv2d): - mod.stride = (1, 1) - mod.dilation = (dilation_rate, dilation_rate) - kh, kw = mod.kernel_size - mod.padding = ((kh // 2) * dilation_rate, (kh // 2) * dilation_rate) - - # Kostyl for EfficientNet - if hasattr(mod, "static_padding"): - mod.static_padding = nn.Identity() diff --git "a/spaces/joshen/gpt-academic/crazy_functions/\351\253\230\347\272\247\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py" "b/spaces/joshen/gpt-academic/crazy_functions/\351\253\230\347\272\247\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py" deleted file mode 100644 index ebcad851f58f5d2305292fb38073c32870f34f17..0000000000000000000000000000000000000000 --- "a/spaces/joshen/gpt-academic/crazy_functions/\351\253\230\347\272\247\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py" +++ /dev/null @@ -1,25 +0,0 @@ -from predict import predict_no_ui_long_connection -from toolbox import CatchException, report_execption, 
write_results_to_file -import datetime - -@CatchException -def 高阶功能模板函数(txt, top_p, api_key, temperature, chatbot, history, systemPromptTxt, WEB_PORT): - history = [] # 清空历史,以免输入溢出 - chatbot.append(("这是什么功能?", "[Local Message] 请注意,您正在调用一个[函数插件]的模板,该函数面向希望实现更多有趣功能的开发者,它可以作为创建新功能函数的模板。为了做到简单易读,该函数只有25行代码,所以不会实时反馈文字流或心跳,请耐心等待程序输出完成。此外我们也提供可同步处理大量文件的多线程Demo供您参考。您若希望分享新的功能模组,请不吝PR!")) - yield chatbot, history, '正常' # 由于请求gpt需要一段时间,我们先及时地做一次状态显示 - - for i in range(5): - currentMonth = (datetime.date.today() + datetime.timedelta(days=i)).month - currentDay = (datetime.date.today() + datetime.timedelta(days=i)).day - i_say = f'历史中哪些事件发生在{currentMonth}月{currentDay}日?列举两条并发送相关图片。发送图片时,请使用Markdown,将Unsplash API中的PUT_YOUR_QUERY_HERE替换成描述该事件的一个最重要的单词。' - chatbot.append((i_say, "[Local Message] waiting gpt response.")) - yield chatbot, history, '正常' # 由于请求gpt需要一段时间,我们先及时地做一次状态显示 - - # history = [] 每次询问不携带之前的询问历史 - gpt_say = predict_no_ui_long_connection( - inputs=i_say, top_p=top_p, api_key=api_key, temperature=temperature, history=[], - sys_prompt="当你想发送一张照片时,请使用Markdown, 并且不要有反斜线, 不要用代码块。使用 Unsplash API (https://source.unsplash.com/1280x720/? < PUT_YOUR_QUERY_HERE >)。") # 请求gpt,需要一段时间 - - chatbot[-1] = (i_say, gpt_say) - history.append(i_say);history.append(gpt_say) - yield chatbot, history, '正常' # 显示 \ No newline at end of file diff --git a/spaces/joushe/moe-tts/text/cantonese.py b/spaces/joushe/moe-tts/text/cantonese.py deleted file mode 100644 index 32eae72ef7eb43d493da6d6f75dd46176d0e8808..0000000000000000000000000000000000000000 --- a/spaces/joushe/moe-tts/text/cantonese.py +++ /dev/null @@ -1,59 +0,0 @@ -import re -import cn2an -import opencc - - -converter = opencc.OpenCC('chinese_dialect_lexicons/jyutjyu') - -# List of (Latin alphabet, ipa) pairs: -_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('A', 'ei˥'), - ('B', 'biː˥'), - ('C', 'siː˥'), - ('D', 'tiː˥'), - ('E', 'iː˥'), - ('F', 'e˥fuː˨˩'), - ('G', 'tsiː˥'), - ('H', 'ɪk̚˥tsʰyː˨˩'), - ('I', 'ɐi˥'), - ('J', 'tsei˥'), - ('K', 'kʰei˥'), - ('L', 'e˥llou˨˩'), - ('M', 'ɛːm˥'), - ('N', 'ɛːn˥'), - ('O', 'ou˥'), - ('P', 'pʰiː˥'), - ('Q', 'kʰiːu˥'), - ('R', 'aː˥lou˨˩'), - ('S', 'ɛː˥siː˨˩'), - ('T', 'tʰiː˥'), - ('U', 'juː˥'), - ('V', 'wiː˥'), - ('W', 'tʊk̚˥piː˥juː˥'), - ('X', 'ɪk̚˥siː˨˩'), - ('Y', 'waːi˥'), - ('Z', 'iː˨sɛːt̚˥') -]] - - -def number_to_cantonese(text): - return re.sub(r'\d+(?:\.?\d+)?', lambda x: cn2an.an2cn(x.group()), text) - - -def latin_to_ipa(text): - for regex, replacement in _latin_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def cantonese_to_ipa(text): - text = number_to_cantonese(text.upper()) - text = converter.convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text) - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! 
', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/jvcanavarro/traits-prediction/src/auditory_stream.py b/spaces/jvcanavarro/traits-prediction/src/auditory_stream.py deleted file mode 100644 index e373055bc1a1889c826f2a9cea1ba410d486f647..0000000000000000000000000000000000000000 --- a/spaces/jvcanavarro/traits-prediction/src/auditory_stream.py +++ /dev/null @@ -1,147 +0,0 @@ -import chainer - - -class ConvolutionBlock(chainer.Chain): - def __init__(self, in_channels, out_channels): - super(ConvolutionBlock, self).__init__( - conv=chainer.links.Convolution2D( - in_channels, - out_channels, - (1, 49), - (1, 4), - (0, 24), - initialW=chainer.initializers.HeNormal(), - ), - bn_conv=chainer.links.BatchNormalization(out_channels), - ) - - def __call__(self, x): - # Set Train to False. - chainer.config.train = False - - h = self.conv(x) - h = self.bn_conv(h) - y = chainer.functions.relu(h) - - return y - - -class ResidualBlock(chainer.Chain): - def __init__(self, in_channels, out_channels): - super(ResidualBlock, self).__init__( - res_branch2a=chainer.links.Convolution2D( - in_channels, - out_channels, - (1, 9), - pad=(0, 4), - initialW=chainer.initializers.HeNormal(), - ), - bn_branch2a=chainer.links.BatchNormalization(out_channels), - res_branch2b=chainer.links.Convolution2D( - out_channels, - out_channels, - (1, 9), - pad=(0, 4), - initialW=chainer.initializers.HeNormal(), - ), - bn_branch2b=chainer.links.BatchNormalization(out_channels), - ) - - def __call__(self, x): - chainer.config.train = False - - h = self.res_branch2a(x) - h = self.bn_branch2a(h) - h = chainer.functions.relu(h) - h = self.res_branch2b(h) - h = self.bn_branch2b(h) - h = x + h - y = chainer.functions.relu(h) - - return y - - -class ResidualBlockA: - def __init__(self): - pass - - def __call__(self): - pass - - -class ResidualBlockB(chainer.Chain): - def __init__(self, in_channels, out_channels): - super(ResidualBlockB, self).__init__( - res_branch1=chainer.links.Convolution2D( - in_channels, - out_channels, - (1, 1), - (1, 4), - initialW=chainer.initializers.HeNormal(), - ), - bn_branch1=chainer.links.BatchNormalization(out_channels), - res_branch2a=chainer.links.Convolution2D( - in_channels, - out_channels, - (1, 9), - (1, 4), - (0, 4), - initialW=chainer.initializers.HeNormal(), - ), - bn_branch2a=chainer.links.BatchNormalization(out_channels), - res_branch2b=chainer.links.Convolution2D( - out_channels, - out_channels, - (1, 9), - pad=(0, 4), - initialW=chainer.initializers.HeNormal(), - ), - bn_branch2b=chainer.links.BatchNormalization(out_channels), - ) - - def __call__(self, x): - chainer.config.train = False - - temp = self.res_branch1(x) - temp = self.bn_branch1(temp) - h = self.res_branch2a(x) - h = self.bn_branch2a(h) - h = chainer.functions.relu(h) - h = self.res_branch2b(h) - h = self.bn_branch2b(h) - h = temp + h - y = chainer.functions.relu(h) - - return y - - -class ResNet18(chainer.Chain): - def __init__(self): - super(ResNet18, self).__init__( - conv1_relu=ConvolutionBlock(1, 32), - res2a_relu=ResidualBlock(32, 32), - res2b_relu=ResidualBlock(32, 32), - res3a_relu=ResidualBlockB(32, 64), - res3b_relu=ResidualBlock(64, 64), - res4a_relu=ResidualBlockB(64, 128), - res4b_relu=ResidualBlock(128, 128), - res5a_relu=ResidualBlockB(128, 256), - res5b_relu=ResidualBlock(256, 256), - ) - - def __call__(self, x): - chainer.config.train = False - - h = self.conv1_relu(x) - h = chainer.functions.max_pooling_2d(h, (1, 9), (1, 4), (0, 4)) - h = self.res2a_relu(h) - h = self.res2b_relu(h) - h = 
self.res3a_relu(h) - h = self.res3b_relu(h) - h = self.res4a_relu(h) - h = self.res4b_relu(h) - h = self.res5a_relu(h) - h = self.res5b_relu(h) - y = chainer.functions.average_pooling_2d(h, h.data.shape[2:]) - - return y diff --git a/spaces/kadirnar/yolov7/README.md b/spaces/kadirnar/yolov7/README.md deleted file mode 100644 index 783b5b9c92f839143e742b23ff8895eaff3fe180..0000000000000000000000000000000000000000 --- a/spaces/kadirnar/yolov7/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Yolov7 -emoji: 📈 -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 3.14.0 -app_file: app.py -pinned: false -license: gpl-3.0 -tags: -- making-demos ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/karolmajek/YOLOR/utils/parse_config.py b/spaces/karolmajek/YOLOR/utils/parse_config.py deleted file mode 100644 index d6cbfdd81f54c7017bcd35bfeccca7f6578f25ae..0000000000000000000000000000000000000000 --- a/spaces/karolmajek/YOLOR/utils/parse_config.py +++ /dev/null @@ -1,71 +0,0 @@ -import os - -import numpy as np - - -def parse_model_cfg(path): - # Parse the yolo *.cfg file and return module definitions path may be 'cfg/yolov3.cfg', 'yolov3.cfg', or 'yolov3' - if not path.endswith('.cfg'): # add .cfg suffix if omitted - path += '.cfg' - if not os.path.exists(path) and os.path.exists('cfg' + os.sep + path): # add cfg/ prefix if omitted - path = 'cfg' + os.sep + path - - with open(path, 'r') as f: - lines = f.read().split('\n') - lines = [x for x in lines if x and not x.startswith('#')] - lines = [x.rstrip().lstrip() for x in lines] # get rid of fringe whitespaces - mdefs = [] # module definitions - for line in lines: - if line.startswith('['): # This marks the start of a new block - mdefs.append({}) - mdefs[-1]['type'] = line[1:-1].rstrip() - if mdefs[-1]['type'] == 'convolutional': - mdefs[-1]['batch_normalize'] = 0 # pre-populate with zeros (may be overwritten later) - - else: - key, val = line.split("=") - key = key.rstrip() - - if key == 'anchors': # return nparray - mdefs[-1][key] = np.array([float(x) for x in val.split(',')]).reshape((-1, 2)) # np anchors - elif (key in ['from', 'layers', 'mask']) or (key == 'size' and ',' in val): # return array - mdefs[-1][key] = [int(x) for x in val.split(',')] - else: - val = val.strip() - if val.isnumeric(): # return int or float - mdefs[-1][key] = int(val) if (int(val) - float(val)) == 0 else float(val) - else: - mdefs[-1][key] = val # return string - - # Check all fields are supported - supported = ['type', 'batch_normalize', 'filters', 'size', 'stride', 'pad', 'activation', 'layers', 'groups', - 'from', 'mask', 'anchors', 'classes', 'num', 'jitter', 'ignore_thresh', 'truth_thresh', 'random', - 'stride_x', 'stride_y', 'weights_type', 'weights_normalization', 'scale_x_y', 'beta_nms', 'nms_kind', - 'iou_loss', 'iou_normalizer', 'cls_normalizer', 'iou_thresh', 'atoms', 'na', 'nc'] - - f = [] # fields - for x in mdefs[1:]: - [f.append(k) for k in x if k not in f] - u = [x for x in f if x not in supported] # unsupported fields - assert not any(u), "Unsupported fields %s in %s. 
See https://github.com/ultralytics/yolov3/issues/631" % (u, path) - - return mdefs - - -def parse_data_cfg(path): - # Parses the data configuration file - if not os.path.exists(path) and os.path.exists('data' + os.sep + path): # add data/ prefix if omitted - path = 'data' + os.sep + path - - with open(path, 'r') as f: - lines = f.readlines() - - options = dict() - for line in lines: - line = line.strip() - if line == '' or line.startswith('#'): - continue - key, val = line.split('=') - options[key.strip()] = val.strip() - - return options diff --git a/spaces/kdrkdrkdr/AzusaTTS/text/japanese.py b/spaces/kdrkdrkdr/AzusaTTS/text/japanese.py deleted file mode 100644 index 65480534b452efabe87b40033316e2c1577ff3ea..0000000000000000000000000000000000000000 --- a/spaces/kdrkdrkdr/AzusaTTS/text/japanese.py +++ /dev/null @@ -1,132 +0,0 @@ -import re -from unidecode import unidecode -import pyopenjtalk - - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile( - r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile( - r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (symbol, Japanese) pairs for marks: -_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('%', 'パーセント') -]] - -# List of (romaji, ipa) pairs for marks: -_romaji_to_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('ts', 'ʦ'), - ('u', 'ɯ'), - ('...', '…'), - ('j', 'ʥ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# Dictinary of (consonant, sokuon) pairs: -_real_sokuon = { - 'k': 'k#', - 'g': 'k#', - 't': 't#', - 'd': 't#', - 'ʦ': 't#', - 'ʧ': 't#', - 'ʥ': 't#', - 'j': 't#', - 's': 's', - 'ʃ': 's', - 'p': 'p#', - 'b': 'p#' -} - -# Dictinary of (consonant, hatsuon) pairs: -_real_hatsuon = { - 'p': 'm', - 'b': 'm', - 'm': 'm', - 't': 'n', - 'd': 'n', - 'n': 'n', - 'ʧ': 'n^', - 'ʥ': 'n^', - 'k': 'ŋ', - 'g': 'ŋ' -} - - -def symbols_to_japanese(text): - for regex, replacement in _symbols_to_japanese: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - text = symbols_to_japanese(text) - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - if text != '': - text += ' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil', 'pau']: - text += phoneme.replace('ch', 'ʧ').replace('sh', - 'ʃ').replace('cl', 'Q') - else: - continue - # n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil', 'pau']: - a2_next = -1 - else: - a2_next = int( - re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1: - text += '↓' - # Rising - elif a2 == 
1 and a2_next == 2: - text += '↑' - if i < len(marks): - text += unidecode(marks[i]).replace(' ', '') - return text - - -def get_real_sokuon(text): - text=re.sub('Q[↑↓]*(.)',lambda x:_real_sokuon[x.group(1)]+x.group(0)[1:] if x.group(1) in _real_sokuon.keys() else x.group(0),text) - return text - - -def get_real_hatsuon(text): - text=re.sub('N[↑↓]*(.)',lambda x:_real_hatsuon[x.group(1)]+x.group(0)[1:] if x.group(1) in _real_hatsuon.keys() else x.group(0),text) - return text - - -def japanese_to_ipa(text): - text=japanese_to_romaji_with_accent(text) - for regex, replacement in _romaji_to_ipa: - text = re.sub(regex, replacement, text) - text = re.sub( - r'([A-Za-zɯ])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - return text diff --git a/spaces/keras-io/denoising-diffusion-implicit-models/README.md b/spaces/keras-io/denoising-diffusion-implicit-models/README.md deleted file mode 100644 index e2bbdd2112719b01657ff34dc89267c2be3ac8bc..0000000000000000000000000000000000000000 --- a/spaces/keras-io/denoising-diffusion-implicit-models/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Denoising Diffusion Implicit Models -emoji: 🌹💨 -colorFrom: green -colorTo: green -sdk: gradio -sdk_version: 3.0.20 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kevinwang676/M4Singer/README.md b/spaces/kevinwang676/M4Singer/README.md deleted file mode 100644 index 60c187e803e035f31e226b5690ba3fe6c6739942..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/M4Singer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: M4Singer -emoji: 🎶 -colorFrom: yellow -colorTo: green -sdk: gradio -sdk_version: 3.30.0 -app_file: inference/m4singer/gradio/infer.py -pinned: false -duplicated_from: zlc99/M4Singer ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kevinwang676/VALLE/utils/g2p/japanese.py b/spaces/kevinwang676/VALLE/utils/g2p/japanese.py deleted file mode 100644 index 75716c69496397e1d03fd4c2e87a38860404d11b..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VALLE/utils/g2p/japanese.py +++ /dev/null @@ -1,154 +0,0 @@ -import re -from unidecode import unidecode - - - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile( - r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile( - r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (symbol, Japanese) pairs for marks: -_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('%', 'パーセント') -]] - -# List of (romaji, ipa) pairs for marks: -_romaji_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ts', 'ʦ'), - ('u', 'ɯ'), - ('j', 'ʥ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (romaji, ipa2) pairs for marks: -_romaji_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('u', 'ɯ'), - ('ʧ', 'tʃ'), - ('j', 'dʑ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (consonant, 
sokuon) pairs: -_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'Q([↑↓]*[kg])', r'k#\1'), - (r'Q([↑↓]*[tdjʧ])', r't#\1'), - (r'Q([↑↓]*[sʃ])', r's\1'), - (r'Q([↑↓]*[pb])', r'p#\1') -]] - -# List of (consonant, hatsuon) pairs: -_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'N([↑↓]*[pbm])', r'm\1'), - (r'N([↑↓]*[ʧʥj])', r'n^\1'), - (r'N([↑↓]*[tdn])', r'n\1'), - (r'N([↑↓]*[kg])', r'ŋ\1') -]] - - -def symbols_to_japanese(text): - for regex, replacement in _symbols_to_japanese: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - import pyopenjtalk - text = symbols_to_japanese(text) - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - if text != '': - text += ' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil', 'pau']: - text += phoneme.replace('ch', 'ʧ').replace('sh', - 'ʃ').replace('cl', 'Q') - else: - continue - # n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil', 'pau']: - a2_next = -1 - else: - a2_next = int( - re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i < len(marks): - text += unidecode(marks[i]).replace(' ', '') - return text - - -def get_real_sokuon(text): - for regex, replacement in _real_sokuon: - text = re.sub(regex, replacement, text) - return text - - -def get_real_hatsuon(text): - for regex, replacement in _real_hatsuon: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = re.sub( - r'([aiueo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa2(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa2: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa3(text): - text = japanese_to_ipa2(text).replace('n^', 'ȵ').replace( - 'ʃ', 'ɕ').replace('*', '\u0325').replace('#', '\u031a') - text = re.sub( - r'([aiɯeo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = re.sub(r'((?:^|\s)(?:ts|tɕ|[kpt]))', r'\1ʰ', text) - return text diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/fileio/file_client.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/fileio/file_client.py deleted file mode 100644 index 950f0c1aeab14b8e308a7455ccd64a95b5d98add..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/fileio/file_client.py +++ /dev/null @@ -1,1148 +0,0 @@ -# Copyright (c) OpenMMLab. 
All rights reserved. -import inspect -import os -import os.path as osp -import re -import tempfile -import warnings -from abc import ABCMeta, abstractmethod -from contextlib import contextmanager -from pathlib import Path -from typing import Iterable, Iterator, Optional, Tuple, Union -from urllib.request import urlopen - -import annotator.uniformer.mmcv as mmcv -from annotator.uniformer.mmcv.utils.misc import has_method -from annotator.uniformer.mmcv.utils.path import is_filepath - - -class BaseStorageBackend(metaclass=ABCMeta): - """Abstract class of storage backends. - - All backends need to implement two apis: ``get()`` and ``get_text()``. - ``get()`` reads the file as a byte stream and ``get_text()`` reads the file - as texts. - """ - - # a flag to indicate whether the backend can create a symlink for a file - _allow_symlink = False - - @property - def name(self): - return self.__class__.__name__ - - @property - def allow_symlink(self): - return self._allow_symlink - - @abstractmethod - def get(self, filepath): - pass - - @abstractmethod - def get_text(self, filepath): - pass - - -class CephBackend(BaseStorageBackend): - """Ceph storage backend (for internal use). - - Args: - path_mapping (dict|None): path mapping dict from local path to Petrel - path. When ``path_mapping={'src': 'dst'}``, ``src`` in ``filepath`` - will be replaced by ``dst``. Default: None. - - .. warning:: - :class:`mmcv.fileio.file_client.CephBackend` will be deprecated, - please use :class:`mmcv.fileio.file_client.PetrelBackend` instead. - """ - - def __init__(self, path_mapping=None): - try: - import ceph - except ImportError: - raise ImportError('Please install ceph to enable CephBackend.') - - warnings.warn( - 'CephBackend will be deprecated, please use PetrelBackend instead') - self._client = ceph.S3Client() - assert isinstance(path_mapping, dict) or path_mapping is None - self.path_mapping = path_mapping - - def get(self, filepath): - filepath = str(filepath) - if self.path_mapping is not None: - for k, v in self.path_mapping.items(): - filepath = filepath.replace(k, v) - value = self._client.Get(filepath) - value_buf = memoryview(value) - return value_buf - - def get_text(self, filepath, encoding=None): - raise NotImplementedError - - -class PetrelBackend(BaseStorageBackend): - """Petrel storage backend (for internal use). - - PetrelBackend supports reading and writing data to multiple clusters. - If the file path contains the cluster name, PetrelBackend will read data - from specified cluster or write data to it. Otherwise, PetrelBackend will - access the default cluster. - - Args: - path_mapping (dict, optional): Path mapping dict from local path to - Petrel path. When ``path_mapping={'src': 'dst'}``, ``src`` in - ``filepath`` will be replaced by ``dst``. Default: None. - enable_mc (bool, optional): Whether to enable memcached support. - Default: True. 
- - Examples: - >>> filepath1 = 's3://path/of/file' - >>> filepath2 = 'cluster-name:s3://path/of/file' - >>> client = PetrelBackend() - >>> client.get(filepath1) # get data from default cluster - >>> client.get(filepath2) # get data from 'cluster-name' cluster - """ - - def __init__(self, - path_mapping: Optional[dict] = None, - enable_mc: bool = True): - try: - from petrel_client import client - except ImportError: - raise ImportError('Please install petrel_client to enable ' - 'PetrelBackend.') - - self._client = client.Client(enable_mc=enable_mc) - assert isinstance(path_mapping, dict) or path_mapping is None - self.path_mapping = path_mapping - - def _map_path(self, filepath: Union[str, Path]) -> str: - """Map ``filepath`` to a string path whose prefix will be replaced by - :attr:`self.path_mapping`. - - Args: - filepath (str): Path to be mapped. - """ - filepath = str(filepath) - if self.path_mapping is not None: - for k, v in self.path_mapping.items(): - filepath = filepath.replace(k, v) - return filepath - - def _format_path(self, filepath: str) -> str: - """Convert a ``filepath`` to standard format of petrel oss. - - If the ``filepath`` is concatenated by ``os.path.join``, in a Windows - environment, the ``filepath`` will be the format of - 's3://bucket_name\\image.jpg'. By invoking :meth:`_format_path`, the - above ``filepath`` will be converted to 's3://bucket_name/image.jpg'. - - Args: - filepath (str): Path to be formatted. - """ - return re.sub(r'\\+', '/', filepath) - - def get(self, filepath: Union[str, Path]) -> memoryview: - """Read data from a given ``filepath`` with 'rb' mode. - - Args: - filepath (str or Path): Path to read data. - - Returns: - memoryview: A memory view of expected bytes object to avoid - copying. The memoryview object can be converted to bytes by - ``value_buf.tobytes()``. - """ - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - value = self._client.Get(filepath) - value_buf = memoryview(value) - return value_buf - - def get_text(self, - filepath: Union[str, Path], - encoding: str = 'utf-8') -> str: - """Read data from a given ``filepath`` with 'r' mode. - - Args: - filepath (str or Path): Path to read data. - encoding (str): The encoding format used to open the ``filepath``. - Default: 'utf-8'. - - Returns: - str: Expected text reading from ``filepath``. - """ - return str(self.get(filepath), encoding=encoding) - - def put(self, obj: bytes, filepath: Union[str, Path]) -> None: - """Save data to a given ``filepath``. - - Args: - obj (bytes): Data to be saved. - filepath (str or Path): Path to write data. - """ - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - self._client.put(filepath, obj) - - def put_text(self, - obj: str, - filepath: Union[str, Path], - encoding: str = 'utf-8') -> None: - """Save data to a given ``filepath``. - - Args: - obj (str): Data to be written. - filepath (str or Path): Path to write data. - encoding (str): The encoding format used to encode the ``obj``. - Default: 'utf-8'. - """ - self.put(bytes(obj, encoding=encoding), filepath) - - def remove(self, filepath: Union[str, Path]) -> None: - """Remove a file. - - Args: - filepath (str or Path): Path to be removed. 
- """ - if not has_method(self._client, 'delete'): - raise NotImplementedError( - ('Current version of Petrel Python SDK has not supported ' - 'the `delete` method, please use a higher version or dev' - ' branch instead.')) - - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - self._client.delete(filepath) - - def exists(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path exists. - - Args: - filepath (str or Path): Path to be checked whether exists. - - Returns: - bool: Return ``True`` if ``filepath`` exists, ``False`` otherwise. - """ - if not (has_method(self._client, 'contains') - and has_method(self._client, 'isdir')): - raise NotImplementedError( - ('Current version of Petrel Python SDK has not supported ' - 'the `contains` and `isdir` methods, please use a higher' - 'version or dev branch instead.')) - - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - return self._client.contains(filepath) or self._client.isdir(filepath) - - def isdir(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a directory. - - Args: - filepath (str or Path): Path to be checked whether it is a - directory. - - Returns: - bool: Return ``True`` if ``filepath`` points to a directory, - ``False`` otherwise. - """ - if not has_method(self._client, 'isdir'): - raise NotImplementedError( - ('Current version of Petrel Python SDK has not supported ' - 'the `isdir` method, please use a higher version or dev' - ' branch instead.')) - - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - return self._client.isdir(filepath) - - def isfile(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a file. - - Args: - filepath (str or Path): Path to be checked whether it is a file. - - Returns: - bool: Return ``True`` if ``filepath`` points to a file, ``False`` - otherwise. - """ - if not has_method(self._client, 'contains'): - raise NotImplementedError( - ('Current version of Petrel Python SDK has not supported ' - 'the `contains` method, please use a higher version or ' - 'dev branch instead.')) - - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - return self._client.contains(filepath) - - def join_path(self, filepath: Union[str, Path], - *filepaths: Union[str, Path]) -> str: - """Concatenate all file paths. - - Args: - filepath (str or Path): Path to be concatenated. - - Returns: - str: The result after concatenation. - """ - filepath = self._format_path(self._map_path(filepath)) - if filepath.endswith('/'): - filepath = filepath[:-1] - formatted_paths = [filepath] - for path in filepaths: - formatted_paths.append(self._format_path(self._map_path(path))) - return '/'.join(formatted_paths) - - @contextmanager - def get_local_path(self, filepath: Union[str, Path]) -> Iterable[str]: - """Download a file from ``filepath`` and return a temporary path. - - ``get_local_path`` is decorated by :meth:`contxtlib.contextmanager`. It - can be called with ``with`` statement, and when exists from the - ``with`` statement, the temporary path will be released. - - Args: - filepath (str | Path): Download a file from ``filepath``. - - Examples: - >>> client = PetrelBackend() - >>> # After existing from the ``with`` clause, - >>> # the path will be removed - >>> with client.get_local_path('s3://path/of/your/file') as path: - ... # do something here - - Yields: - Iterable[str]: Only yield one temporary path. 
- """ - filepath = self._map_path(filepath) - filepath = self._format_path(filepath) - assert self.isfile(filepath) - try: - f = tempfile.NamedTemporaryFile(delete=False) - f.write(self.get(filepath)) - f.close() - yield f.name - finally: - os.remove(f.name) - - def list_dir_or_file(self, - dir_path: Union[str, Path], - list_dir: bool = True, - list_file: bool = True, - suffix: Optional[Union[str, Tuple[str]]] = None, - recursive: bool = False) -> Iterator[str]: - """Scan a directory to find the interested directories or files in - arbitrary order. - - Note: - Petrel has no concept of directories but it simulates the directory - hierarchy in the filesystem through public prefixes. In addition, - if the returned path ends with '/', it means the path is a public - prefix which is a logical directory. - - Note: - :meth:`list_dir_or_file` returns the path relative to ``dir_path``. - In addition, the returned path of directory will not contains the - suffix '/' which is consistent with other backends. - - Args: - dir_path (str | Path): Path of the directory. - list_dir (bool): List the directories. Default: True. - list_file (bool): List the path of files. Default: True. - suffix (str or tuple[str], optional): File suffix - that we are interested in. Default: None. - recursive (bool): If set to True, recursively scan the - directory. Default: False. - - Yields: - Iterable[str]: A relative path to ``dir_path``. - """ - if not has_method(self._client, 'list'): - raise NotImplementedError( - ('Current version of Petrel Python SDK has not supported ' - 'the `list` method, please use a higher version or dev' - ' branch instead.')) - - dir_path = self._map_path(dir_path) - dir_path = self._format_path(dir_path) - if list_dir and suffix is not None: - raise TypeError( - '`list_dir` should be False when `suffix` is not None') - - if (suffix is not None) and not isinstance(suffix, (str, tuple)): - raise TypeError('`suffix` must be a string or tuple of strings') - - # Petrel's simulated directory hierarchy assumes that directory paths - # should end with `/` - if not dir_path.endswith('/'): - dir_path += '/' - - root = dir_path - - def _list_dir_or_file(dir_path, list_dir, list_file, suffix, - recursive): - for path in self._client.list(dir_path): - # the `self.isdir` is not used here to determine whether path - # is a directory, because `self.isdir` relies on - # `self._client.list` - if path.endswith('/'): # a directory path - next_dir_path = self.join_path(dir_path, path) - if list_dir: - # get the relative path and exclude the last - # character '/' - rel_dir = next_dir_path[len(root):-1] - yield rel_dir - if recursive: - yield from _list_dir_or_file(next_dir_path, list_dir, - list_file, suffix, - recursive) - else: # a file path - absolute_path = self.join_path(dir_path, path) - rel_path = absolute_path[len(root):] - if (suffix is None - or rel_path.endswith(suffix)) and list_file: - yield rel_path - - return _list_dir_or_file(dir_path, list_dir, list_file, suffix, - recursive) - - -class MemcachedBackend(BaseStorageBackend): - """Memcached storage backend. - - Attributes: - server_list_cfg (str): Config file for memcached server list. - client_cfg (str): Config file for memcached client. - sys_path (str | None): Additional path to be appended to `sys.path`. - Default: None. 
- """ - - def __init__(self, server_list_cfg, client_cfg, sys_path=None): - if sys_path is not None: - import sys - sys.path.append(sys_path) - try: - import mc - except ImportError: - raise ImportError( - 'Please install memcached to enable MemcachedBackend.') - - self.server_list_cfg = server_list_cfg - self.client_cfg = client_cfg - self._client = mc.MemcachedClient.GetInstance(self.server_list_cfg, - self.client_cfg) - # mc.pyvector servers as a point which points to a memory cache - self._mc_buffer = mc.pyvector() - - def get(self, filepath): - filepath = str(filepath) - import mc - self._client.Get(filepath, self._mc_buffer) - value_buf = mc.ConvertBuffer(self._mc_buffer) - return value_buf - - def get_text(self, filepath, encoding=None): - raise NotImplementedError - - -class LmdbBackend(BaseStorageBackend): - """Lmdb storage backend. - - Args: - db_path (str): Lmdb database path. - readonly (bool, optional): Lmdb environment parameter. If True, - disallow any write operations. Default: True. - lock (bool, optional): Lmdb environment parameter. If False, when - concurrent access occurs, do not lock the database. Default: False. - readahead (bool, optional): Lmdb environment parameter. If False, - disable the OS filesystem readahead mechanism, which may improve - random read performance when a database is larger than RAM. - Default: False. - - Attributes: - db_path (str): Lmdb database path. - """ - - def __init__(self, - db_path, - readonly=True, - lock=False, - readahead=False, - **kwargs): - try: - import lmdb - except ImportError: - raise ImportError('Please install lmdb to enable LmdbBackend.') - - self.db_path = str(db_path) - self._client = lmdb.open( - self.db_path, - readonly=readonly, - lock=lock, - readahead=readahead, - **kwargs) - - def get(self, filepath): - """Get values according to the filepath. - - Args: - filepath (str | obj:`Path`): Here, filepath is the lmdb key. - """ - filepath = str(filepath) - with self._client.begin(write=False) as txn: - value_buf = txn.get(filepath.encode('ascii')) - return value_buf - - def get_text(self, filepath, encoding=None): - raise NotImplementedError - - -class HardDiskBackend(BaseStorageBackend): - """Raw hard disks storage backend.""" - - _allow_symlink = True - - def get(self, filepath: Union[str, Path]) -> bytes: - """Read data from a given ``filepath`` with 'rb' mode. - - Args: - filepath (str or Path): Path to read data. - - Returns: - bytes: Expected bytes object. - """ - with open(filepath, 'rb') as f: - value_buf = f.read() - return value_buf - - def get_text(self, - filepath: Union[str, Path], - encoding: str = 'utf-8') -> str: - """Read data from a given ``filepath`` with 'r' mode. - - Args: - filepath (str or Path): Path to read data. - encoding (str): The encoding format used to open the ``filepath``. - Default: 'utf-8'. - - Returns: - str: Expected text reading from ``filepath``. - """ - with open(filepath, 'r', encoding=encoding) as f: - value_buf = f.read() - return value_buf - - def put(self, obj: bytes, filepath: Union[str, Path]) -> None: - """Write data to a given ``filepath`` with 'wb' mode. - - Note: - ``put`` will create a directory if the directory of ``filepath`` - does not exist. - - Args: - obj (bytes): Data to be written. - filepath (str or Path): Path to write data. 
- """ - mmcv.mkdir_or_exist(osp.dirname(filepath)) - with open(filepath, 'wb') as f: - f.write(obj) - - def put_text(self, - obj: str, - filepath: Union[str, Path], - encoding: str = 'utf-8') -> None: - """Write data to a given ``filepath`` with 'w' mode. - - Note: - ``put_text`` will create a directory if the directory of - ``filepath`` does not exist. - - Args: - obj (str): Data to be written. - filepath (str or Path): Path to write data. - encoding (str): The encoding format used to open the ``filepath``. - Default: 'utf-8'. - """ - mmcv.mkdir_or_exist(osp.dirname(filepath)) - with open(filepath, 'w', encoding=encoding) as f: - f.write(obj) - - def remove(self, filepath: Union[str, Path]) -> None: - """Remove a file. - - Args: - filepath (str or Path): Path to be removed. - """ - os.remove(filepath) - - def exists(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path exists. - - Args: - filepath (str or Path): Path to be checked whether exists. - - Returns: - bool: Return ``True`` if ``filepath`` exists, ``False`` otherwise. - """ - return osp.exists(filepath) - - def isdir(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a directory. - - Args: - filepath (str or Path): Path to be checked whether it is a - directory. - - Returns: - bool: Return ``True`` if ``filepath`` points to a directory, - ``False`` otherwise. - """ - return osp.isdir(filepath) - - def isfile(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a file. - - Args: - filepath (str or Path): Path to be checked whether it is a file. - - Returns: - bool: Return ``True`` if ``filepath`` points to a file, ``False`` - otherwise. - """ - return osp.isfile(filepath) - - def join_path(self, filepath: Union[str, Path], - *filepaths: Union[str, Path]) -> str: - """Concatenate all file paths. - - Join one or more filepath components intelligently. The return value - is the concatenation of filepath and any members of *filepaths. - - Args: - filepath (str or Path): Path to be concatenated. - - Returns: - str: The result of concatenation. - """ - return osp.join(filepath, *filepaths) - - @contextmanager - def get_local_path( - self, filepath: Union[str, Path]) -> Iterable[Union[str, Path]]: - """Only for unified API and do nothing.""" - yield filepath - - def list_dir_or_file(self, - dir_path: Union[str, Path], - list_dir: bool = True, - list_file: bool = True, - suffix: Optional[Union[str, Tuple[str]]] = None, - recursive: bool = False) -> Iterator[str]: - """Scan a directory to find the interested directories or files in - arbitrary order. - - Note: - :meth:`list_dir_or_file` returns the path relative to ``dir_path``. - - Args: - dir_path (str | Path): Path of the directory. - list_dir (bool): List the directories. Default: True. - list_file (bool): List the path of files. Default: True. - suffix (str or tuple[str], optional): File suffix - that we are interested in. Default: None. - recursive (bool): If set to True, recursively scan the - directory. Default: False. - - Yields: - Iterable[str]: A relative path to ``dir_path``. 
- """ - if list_dir and suffix is not None: - raise TypeError('`suffix` should be None when `list_dir` is True') - - if (suffix is not None) and not isinstance(suffix, (str, tuple)): - raise TypeError('`suffix` must be a string or tuple of strings') - - root = dir_path - - def _list_dir_or_file(dir_path, list_dir, list_file, suffix, - recursive): - for entry in os.scandir(dir_path): - if not entry.name.startswith('.') and entry.is_file(): - rel_path = osp.relpath(entry.path, root) - if (suffix is None - or rel_path.endswith(suffix)) and list_file: - yield rel_path - elif osp.isdir(entry.path): - if list_dir: - rel_dir = osp.relpath(entry.path, root) - yield rel_dir - if recursive: - yield from _list_dir_or_file(entry.path, list_dir, - list_file, suffix, - recursive) - - return _list_dir_or_file(dir_path, list_dir, list_file, suffix, - recursive) - - -class HTTPBackend(BaseStorageBackend): - """HTTP and HTTPS storage bachend.""" - - def get(self, filepath): - value_buf = urlopen(filepath).read() - return value_buf - - def get_text(self, filepath, encoding='utf-8'): - value_buf = urlopen(filepath).read() - return value_buf.decode(encoding) - - @contextmanager - def get_local_path(self, filepath: str) -> Iterable[str]: - """Download a file from ``filepath``. - - ``get_local_path`` is decorated by :meth:`contxtlib.contextmanager`. It - can be called with ``with`` statement, and when exists from the - ``with`` statement, the temporary path will be released. - - Args: - filepath (str): Download a file from ``filepath``. - - Examples: - >>> client = HTTPBackend() - >>> # After existing from the ``with`` clause, - >>> # the path will be removed - >>> with client.get_local_path('http://path/of/your/file') as path: - ... # do something here - """ - try: - f = tempfile.NamedTemporaryFile(delete=False) - f.write(self.get(filepath)) - f.close() - yield f.name - finally: - os.remove(f.name) - - -class FileClient: - """A general file client to access files in different backends. - - The client loads a file or text in a specified backend from its path - and returns it as a binary or text file. There are two ways to choose a - backend, the name of backend and the prefix of path. Although both of them - can be used to choose a storage backend, ``backend`` has a higher priority - that is if they are all set, the storage backend will be chosen by the - backend argument. If they are all `None`, the disk backend will be chosen. - Note that It can also register other backend accessor with a given name, - prefixes, and backend class. In addition, We use the singleton pattern to - avoid repeated object creation. If the arguments are the same, the same - object will be returned. - - Args: - backend (str, optional): The storage backend type. Options are "disk", - "ceph", "memcached", "lmdb", "http" and "petrel". Default: None. - prefix (str, optional): The prefix of the registered storage backend. - Options are "s3", "http", "https". Default: None. - - Examples: - >>> # only set backend - >>> file_client = FileClient(backend='petrel') - >>> # only set prefix - >>> file_client = FileClient(prefix='s3') - >>> # set both backend and prefix but use backend to choose client - >>> file_client = FileClient(backend='petrel', prefix='s3') - >>> # if the arguments are the same, the same object is returned - >>> file_client1 = FileClient(backend='petrel') - >>> file_client1 is file_client - True - - Attributes: - client (:obj:`BaseStorageBackend`): The backend object. 
- """ - - _backends = { - 'disk': HardDiskBackend, - 'ceph': CephBackend, - 'memcached': MemcachedBackend, - 'lmdb': LmdbBackend, - 'petrel': PetrelBackend, - 'http': HTTPBackend, - } - # This collection is used to record the overridden backends, and when a - # backend appears in the collection, the singleton pattern is disabled for - # that backend, because if the singleton pattern is used, then the object - # returned will be the backend before overwriting - _overridden_backends = set() - _prefix_to_backends = { - 's3': PetrelBackend, - 'http': HTTPBackend, - 'https': HTTPBackend, - } - _overridden_prefixes = set() - - _instances = {} - - def __new__(cls, backend=None, prefix=None, **kwargs): - if backend is None and prefix is None: - backend = 'disk' - if backend is not None and backend not in cls._backends: - raise ValueError( - f'Backend {backend} is not supported. Currently supported ones' - f' are {list(cls._backends.keys())}') - if prefix is not None and prefix not in cls._prefix_to_backends: - raise ValueError( - f'prefix {prefix} is not supported. Currently supported ones ' - f'are {list(cls._prefix_to_backends.keys())}') - - # concatenate the arguments to a unique key for determining whether - # objects with the same arguments were created - arg_key = f'{backend}:{prefix}' - for key, value in kwargs.items(): - arg_key += f':{key}:{value}' - - # if a backend was overridden, it will create a new object - if (arg_key in cls._instances - and backend not in cls._overridden_backends - and prefix not in cls._overridden_prefixes): - _instance = cls._instances[arg_key] - else: - # create a new object and put it to _instance - _instance = super().__new__(cls) - if backend is not None: - _instance.client = cls._backends[backend](**kwargs) - else: - _instance.client = cls._prefix_to_backends[prefix](**kwargs) - - cls._instances[arg_key] = _instance - - return _instance - - @property - def name(self): - return self.client.name - - @property - def allow_symlink(self): - return self.client.allow_symlink - - @staticmethod - def parse_uri_prefix(uri: Union[str, Path]) -> Optional[str]: - """Parse the prefix of a uri. - - Args: - uri (str | Path): Uri to be parsed that contains the file prefix. - - Examples: - >>> FileClient.parse_uri_prefix('s3://path/of/your/file') - 's3' - - Returns: - str | None: Return the prefix of uri if the uri contains '://' - else ``None``. - """ - assert is_filepath(uri) - uri = str(uri) - if '://' not in uri: - return None - else: - prefix, _ = uri.split('://') - # In the case of PetrelBackend, the prefix may contains the cluster - # name like clusterName:s3 - if ':' in prefix: - _, prefix = prefix.split(':') - return prefix - - @classmethod - def infer_client(cls, - file_client_args: Optional[dict] = None, - uri: Optional[Union[str, Path]] = None) -> 'FileClient': - """Infer a suitable file client based on the URI and arguments. - - Args: - file_client_args (dict, optional): Arguments to instantiate a - FileClient. Default: None. - uri (str | Path, optional): Uri to be parsed that contains the file - prefix. Default: None. - - Examples: - >>> uri = 's3://path/of/your/file' - >>> file_client = FileClient.infer_client(uri=uri) - >>> file_client_args = {'backend': 'petrel'} - >>> file_client = FileClient.infer_client(file_client_args) - - Returns: - FileClient: Instantiated FileClient object. 
- """ - assert file_client_args is not None or uri is not None - if file_client_args is None: - file_prefix = cls.parse_uri_prefix(uri) # type: ignore - return cls(prefix=file_prefix) - else: - return cls(**file_client_args) - - @classmethod - def _register_backend(cls, name, backend, force=False, prefixes=None): - if not isinstance(name, str): - raise TypeError('the backend name should be a string, ' - f'but got {type(name)}') - if not inspect.isclass(backend): - raise TypeError( - f'backend should be a class but got {type(backend)}') - if not issubclass(backend, BaseStorageBackend): - raise TypeError( - f'backend {backend} is not a subclass of BaseStorageBackend') - if not force and name in cls._backends: - raise KeyError( - f'{name} is already registered as a storage backend, ' - 'add "force=True" if you want to override it') - - if name in cls._backends and force: - cls._overridden_backends.add(name) - cls._backends[name] = backend - - if prefixes is not None: - if isinstance(prefixes, str): - prefixes = [prefixes] - else: - assert isinstance(prefixes, (list, tuple)) - for prefix in prefixes: - if prefix not in cls._prefix_to_backends: - cls._prefix_to_backends[prefix] = backend - elif (prefix in cls._prefix_to_backends) and force: - cls._overridden_prefixes.add(prefix) - cls._prefix_to_backends[prefix] = backend - else: - raise KeyError( - f'{prefix} is already registered as a storage backend,' - ' add "force=True" if you want to override it') - - @classmethod - def register_backend(cls, name, backend=None, force=False, prefixes=None): - """Register a backend to FileClient. - - This method can be used as a normal class method or a decorator. - - .. code-block:: python - - class NewBackend(BaseStorageBackend): - - def get(self, filepath): - return filepath - - def get_text(self, filepath): - return filepath - - FileClient.register_backend('new', NewBackend) - - or - - .. code-block:: python - - @FileClient.register_backend('new') - class NewBackend(BaseStorageBackend): - - def get(self, filepath): - return filepath - - def get_text(self, filepath): - return filepath - - Args: - name (str): The name of the registered backend. - backend (class, optional): The backend class to be registered, - which must be a subclass of :class:`BaseStorageBackend`. - When this method is used as a decorator, backend is None. - Defaults to None. - force (bool, optional): Whether to override the backend if the name - has already been registered. Defaults to False. - prefixes (str or list[str] or tuple[str], optional): The prefixes - of the registered storage backend. Default: None. - `New in version 1.3.15.` - """ - if backend is not None: - cls._register_backend( - name, backend, force=force, prefixes=prefixes) - return - - def _register(backend_cls): - cls._register_backend( - name, backend_cls, force=force, prefixes=prefixes) - return backend_cls - - return _register - - def get(self, filepath: Union[str, Path]) -> Union[bytes, memoryview]: - """Read data from a given ``filepath`` with 'rb' mode. - - Note: - There are two types of return values for ``get``, one is ``bytes`` - and the other is ``memoryview``. The advantage of using memoryview - is that you can avoid copying, and if you want to convert it to - ``bytes``, you can use ``.tobytes()``. - - Args: - filepath (str or Path): Path to read data. - - Returns: - bytes | memoryview: Expected bytes object or a memory view of the - bytes object. 
- """ - return self.client.get(filepath) - - def get_text(self, filepath: Union[str, Path], encoding='utf-8') -> str: - """Read data from a given ``filepath`` with 'r' mode. - - Args: - filepath (str or Path): Path to read data. - encoding (str): The encoding format used to open the ``filepath``. - Default: 'utf-8'. - - Returns: - str: Expected text reading from ``filepath``. - """ - return self.client.get_text(filepath, encoding) - - def put(self, obj: bytes, filepath: Union[str, Path]) -> None: - """Write data to a given ``filepath`` with 'wb' mode. - - Note: - ``put`` should create a directory if the directory of ``filepath`` - does not exist. - - Args: - obj (bytes): Data to be written. - filepath (str or Path): Path to write data. - """ - self.client.put(obj, filepath) - - def put_text(self, obj: str, filepath: Union[str, Path]) -> None: - """Write data to a given ``filepath`` with 'w' mode. - - Note: - ``put_text`` should create a directory if the directory of - ``filepath`` does not exist. - - Args: - obj (str): Data to be written. - filepath (str or Path): Path to write data. - encoding (str, optional): The encoding format used to open the - `filepath`. Default: 'utf-8'. - """ - self.client.put_text(obj, filepath) - - def remove(self, filepath: Union[str, Path]) -> None: - """Remove a file. - - Args: - filepath (str, Path): Path to be removed. - """ - self.client.remove(filepath) - - def exists(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path exists. - - Args: - filepath (str or Path): Path to be checked whether exists. - - Returns: - bool: Return ``True`` if ``filepath`` exists, ``False`` otherwise. - """ - return self.client.exists(filepath) - - def isdir(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a directory. - - Args: - filepath (str or Path): Path to be checked whether it is a - directory. - - Returns: - bool: Return ``True`` if ``filepath`` points to a directory, - ``False`` otherwise. - """ - return self.client.isdir(filepath) - - def isfile(self, filepath: Union[str, Path]) -> bool: - """Check whether a file path is a file. - - Args: - filepath (str or Path): Path to be checked whether it is a file. - - Returns: - bool: Return ``True`` if ``filepath`` points to a file, ``False`` - otherwise. - """ - return self.client.isfile(filepath) - - def join_path(self, filepath: Union[str, Path], - *filepaths: Union[str, Path]) -> str: - """Concatenate all file paths. - - Join one or more filepath components intelligently. The return value - is the concatenation of filepath and any members of *filepaths. - - Args: - filepath (str or Path): Path to be concatenated. - - Returns: - str: The result of concatenation. - """ - return self.client.join_path(filepath, *filepaths) - - @contextmanager - def get_local_path(self, filepath: Union[str, Path]) -> Iterable[str]: - """Download data from ``filepath`` and write the data to local path. - - ``get_local_path`` is decorated by :meth:`contxtlib.contextmanager`. It - can be called with ``with`` statement, and when exists from the - ``with`` statement, the temporary path will be released. - - Note: - If the ``filepath`` is a local path, just return itself. - - .. warning:: - ``get_local_path`` is an experimental interface that may change in - the future. - - Args: - filepath (str or Path): Path to be read data. - - Examples: - >>> file_client = FileClient(prefix='s3') - >>> with file_client.get_local_path('s3://bucket/abc.jpg') as path: - ... 
# do something here - - Yields: - Iterable[str]: Only yield one path. - """ - with self.client.get_local_path(str(filepath)) as local_path: - yield local_path - - def list_dir_or_file(self, - dir_path: Union[str, Path], - list_dir: bool = True, - list_file: bool = True, - suffix: Optional[Union[str, Tuple[str]]] = None, - recursive: bool = False) -> Iterator[str]: - """Scan a directory to find the interested directories or files in - arbitrary order. - - Note: - :meth:`list_dir_or_file` returns the path relative to ``dir_path``. - - Args: - dir_path (str | Path): Path of the directory. - list_dir (bool): List the directories. Default: True. - list_file (bool): List the path of files. Default: True. - suffix (str or tuple[str], optional): File suffix - that we are interested in. Default: None. - recursive (bool): If set to True, recursively scan the - directory. Default: False. - - Yields: - Iterable[str]: A relative path to ``dir_path``. - """ - yield from self.client.list_dir_or_file(dir_path, list_dir, list_file, - suffix, recursive) diff --git a/spaces/kirch/Text2Video-Zero/app_canny.py b/spaces/kirch/Text2Video-Zero/app_canny.py deleted file mode 100644 index 030b389da484811e74eca159fb54991d009160b0..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/app_canny.py +++ /dev/null @@ -1,54 +0,0 @@ -import gradio as gr -from model import Model - -def create_demo(model: Model): - - examples = [ - ["__assets__/canny_videos_edge/butterfly.mp4", "white butterfly, a high-quality, detailed, and professional photo"], - ["__assets__/canny_videos_edge/deer.mp4", "oil painting of a deer, a high-quality, detailed, and professional photo"], - ["__assets__/canny_videos_edge/fox.mp4", "wild red fox is walking on the grass, a high-quality, detailed, and professional photo"], - ["__assets__/canny_videos_edge/girl_dancing.mp4", "oil painting of a girl dancing close-up, masterpiece, a high-quality, detailed, and professional photo"], - ["__assets__/canny_videos_edge/girl_turning.mp4", "oil painting of a beautiful girl, a high-quality, detailed, and professional photo"], - ["__assets__/canny_videos_edge/halloween.mp4", "beautiful girl halloween style, a high-quality, detailed, and professional photo"], - ["__assets__/canny_videos_edge/santa.mp4", "a santa claus, a high-quality, detailed, and professional photo"], - ] - - with gr.Blocks() as demo: - with gr.Row(): - gr.Markdown('## Text and Canny-Edge Conditional Video Generation') - with gr.Row(): - gr.HTML( - """ -
        -

        - Description: For performance purposes, our current preview release supports any input videos but caps output videos to no longer than 15 seconds and the input videos are scaled down before processing. -

        -
        - """) - - with gr.Row(): - with gr.Column(): - input_video = gr.Video(label="Input Video",source='upload', format="mp4", visible=True).style(height="auto") - with gr.Column(): - prompt = gr.Textbox(label='Prompt') - run_button = gr.Button(label='Run') - with gr.Column(): - result = gr.Video(label="Generated Video").style(height="auto") - - inputs = [ - input_video, - prompt, - ] - - gr.Examples(examples=examples, - inputs=inputs, - outputs=result, - fn=model.process_controlnet_canny, - cache_examples = True, - run_on_click=False, - ) - - run_button.click(fn=model.process_controlnet_canny, - inputs=inputs, - outputs=result,) - return demo diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/hubert/simple_kmeans/README.md b/spaces/koajoel/PolyFormer/fairseq/examples/hubert/simple_kmeans/README.md deleted file mode 100644 index cd17da3b3e6f3e39083f7a76a56ff46c3a63b929..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/hubert/simple_kmeans/README.md +++ /dev/null @@ -1,71 +0,0 @@ -# Sharded Feature Extraction and K-means Application - -This folder contains scripts for preparing HUBERT labels from tsv files, the -steps are: -1. feature extraction -2. k-means clustering -3. k-means application - - -## Data preparation - -`*.tsv` files contains a list of audio, where each line is the root, and -following lines are the subpath for each audio: -``` - - - -... -``` - - -## Feature extraction - -### MFCC feature -Suppose the tsv file is at `${tsv_dir}/${split}.tsv`. To extract 39-D -mfcc+delta+ddelta features for the 1st iteration HUBERT training, run: -```sh -python dump_mfcc_feature.py ${tsv_dir} ${split} ${nshard} ${rank} ${feat_dir} -``` -This would shard the tsv file into `${nshard}` and extract features for the -`${rank}`-th shard, where rank is an integer in `[0, nshard-1]`. Features would -be saved at `${feat_dir}/${split}_${rank}_${nshard}.{npy,len}`. - - -### HUBERT feature -To extract features from the `${layer}`-th transformer layer of a trained -HUBERT model saved at `${ckpt_path}`, run: -```sh -python dump_hubert_feature.py ${tsv_dir} ${split} ${ckpt_path} ${layer} ${nshard} ${rank} ${feat_dir} -``` -Features would also be saved at `${feat_dir}/${split}_${rank}_${nshard}.{npy,len}`. - -- if out-of-memory, decrease the chunk size with `--max_chunk` - - -## K-means clustering -To fit a k-means model with `${n_clusters}` clusters on 10% of the `${split}` data, run -```sh -python learn_kmeans.py ${feat_dir} ${split} ${nshard} ${km_path} ${n_cluster} --percent 0.1 -``` -This saves the k-means model to `${km_path}`. 
- -- set `--precent -1` to use all data -- more kmeans options can be found with `-h` flag - - -## K-means application -To apply a trained k-means model `${km_path}` to obtain labels for `${split}`, run -```sh -python dump_km_label.py ${feat_dir} ${split} ${km_path} ${nshard} ${rank} ${lab_dir} -``` -This would extract labels for the `${rank}`-th shard out of `${nshard}` shards -and dump them to `${lab_dir}/${split}_${rank}_${shard}.km` - - -Finally, merge shards for `${split}` by running -```sh -for rank in $(seq 0 $((nshard - 1))); do - cat $lab_dir/${split}_${rank}_${nshard}.km -done > $lab_dir/${split}.km -``` diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/multilingual/data_scripts/README.md b/spaces/koajoel/PolyFormer/fairseq/examples/multilingual/data_scripts/README.md deleted file mode 100644 index cc610c0c9e936a5ae4659ceda691c6db6d387296..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/multilingual/data_scripts/README.md +++ /dev/null @@ -1,24 +0,0 @@ - -# Install dependency -```bash -pip install -r requirement.txt -``` - -# Download the data set -```bash -export WORKDIR_ROOT= - -``` -The downloaded data will be at $WORKDIR_ROOT/ML50 - -# preprocess the data -Install SPM [here](https://github.com/google/sentencepiece) -```bash -export WORKDIR_ROOT= -export SPM_PATH= -``` -* $WORKDIR_ROOT/ML50/raw: extracted raw data -* $WORKDIR_ROOT/ML50/dedup: dedup data -* $WORKDIR_ROOT/ML50/clean: data with valid and test sentences removed from the dedup data - - diff --git a/spaces/kukuhtw/AutoGPT/autogpt/js/overlay.js b/spaces/kukuhtw/AutoGPT/autogpt/js/overlay.js deleted file mode 100644 index 1c99c72673330b8ea8cf037ef889233f2d4326be..0000000000000000000000000000000000000000 --- a/spaces/kukuhtw/AutoGPT/autogpt/js/overlay.js +++ /dev/null @@ -1,29 +0,0 @@ -const overlay = document.createElement('div'); -Object.assign(overlay.style, { - position: 'fixed', - zIndex: 999999, - top: 0, - left: 0, - width: '100%', - height: '100%', - background: 'rgba(0, 0, 0, 0.7)', - color: '#fff', - fontSize: '24px', - fontWeight: 'bold', - display: 'flex', - justifyContent: 'center', - alignItems: 'center', -}); -const textContent = document.createElement('div'); -Object.assign(textContent.style, { - textAlign: 'center', -}); -textContent.textContent = 'AutoGPT Analyzing Page'; -overlay.appendChild(textContent); -document.body.append(overlay); -document.body.style.overflow = 'hidden'; -let dotCount = 0; -setInterval(() => { - textContent.textContent = 'AutoGPT Analyzing Page' + '.'.repeat(dotCount); - dotCount = (dotCount + 1) % 4; -}, 1000); diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/httpx/_models.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/httpx/_models.py deleted file mode 100644 index e0e5278cc052e2f9a6d0af0a1cb2107b03de98f4..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/httpx/_models.py +++ /dev/null @@ -1,1209 +0,0 @@ -import datetime -import email.message -import json as jsonlib -import typing -import urllib.request -from collections.abc import Mapping -from http.cookiejar import Cookie, CookieJar - -from ._content import ByteStream, UnattachedStream, encode_request, encode_response -from ._decoders import ( - SUPPORTED_DECODERS, - ByteChunker, - ContentDecoder, - IdentityDecoder, - LineDecoder, - MultiDecoder, - TextChunker, - TextDecoder, -) -from ._exceptions import ( - CookieConflict, - HTTPStatusError, - 
RequestNotRead, - ResponseNotRead, - StreamClosed, - StreamConsumed, - request_context, -) -from ._multipart import get_multipart_boundary_from_content_type -from ._status_codes import codes -from ._types import ( - AsyncByteStream, - CookieTypes, - HeaderTypes, - QueryParamTypes, - RequestContent, - RequestData, - RequestExtensions, - RequestFiles, - ResponseContent, - ResponseExtensions, - SyncByteStream, -) -from ._urls import URL -from ._utils import ( - guess_json_utf, - is_known_encoding, - normalize_header_key, - normalize_header_value, - obfuscate_sensitive_headers, - parse_content_type_charset, - parse_header_links, -) - - -class Headers(typing.MutableMapping[str, str]): - """ - HTTP headers, as a case-insensitive multi-dict. - """ - - def __init__( - self, - headers: typing.Optional[HeaderTypes] = None, - encoding: typing.Optional[str] = None, - ) -> None: - if headers is None: - self._list = [] # type: typing.List[typing.Tuple[bytes, bytes, bytes]] - elif isinstance(headers, Headers): - self._list = list(headers._list) - elif isinstance(headers, Mapping): - self._list = [ - ( - normalize_header_key(k, lower=False, encoding=encoding), - normalize_header_key(k, lower=True, encoding=encoding), - normalize_header_value(v, encoding), - ) - for k, v in headers.items() - ] - else: - self._list = [ - ( - normalize_header_key(k, lower=False, encoding=encoding), - normalize_header_key(k, lower=True, encoding=encoding), - normalize_header_value(v, encoding), - ) - for k, v in headers - ] - - self._encoding = encoding - - @property - def encoding(self) -> str: - """ - Header encoding is mandated as ascii, but we allow fallbacks to utf-8 - or iso-8859-1. - """ - if self._encoding is None: - for encoding in ["ascii", "utf-8"]: - for key, value in self.raw: - try: - key.decode(encoding) - value.decode(encoding) - except UnicodeDecodeError: - break - else: - # The else block runs if 'break' did not occur, meaning - # all values fitted the encoding. - self._encoding = encoding - break - else: - # The ISO-8859-1 encoding covers all 256 code points in a byte, - # so will never raise decode errors. - self._encoding = "iso-8859-1" - return self._encoding - - @encoding.setter - def encoding(self, value: str) -> None: - self._encoding = value - - @property - def raw(self) -> typing.List[typing.Tuple[bytes, bytes]]: - """ - Returns a list of the raw header items, as byte pairs. - """ - return [(raw_key, value) for raw_key, _, value in self._list] - - def keys(self) -> typing.KeysView[str]: - return {key.decode(self.encoding): None for _, key, value in self._list}.keys() - - def values(self) -> typing.ValuesView[str]: - values_dict: typing.Dict[str, str] = {} - for _, key, value in self._list: - str_key = key.decode(self.encoding) - str_value = value.decode(self.encoding) - if str_key in values_dict: - values_dict[str_key] += f", {str_value}" - else: - values_dict[str_key] = str_value - return values_dict.values() - - def items(self) -> typing.ItemsView[str, str]: - """ - Return `(key, value)` items of headers. Concatenate headers - into a single comma separated value when a key occurs multiple times. 
- """ - values_dict: typing.Dict[str, str] = {} - for _, key, value in self._list: - str_key = key.decode(self.encoding) - str_value = value.decode(self.encoding) - if str_key in values_dict: - values_dict[str_key] += f", {str_value}" - else: - values_dict[str_key] = str_value - return values_dict.items() - - def multi_items(self) -> typing.List[typing.Tuple[str, str]]: - """ - Return a list of `(key, value)` pairs of headers. Allow multiple - occurrences of the same key without concatenating into a single - comma separated value. - """ - return [ - (key.decode(self.encoding), value.decode(self.encoding)) - for _, key, value in self._list - ] - - def get(self, key: str, default: typing.Any = None) -> typing.Any: - """ - Return a header value. If multiple occurrences of the header occur - then concatenate them together with commas. - """ - try: - return self[key] - except KeyError: - return default - - def get_list(self, key: str, split_commas: bool = False) -> typing.List[str]: - """ - Return a list of all header values for a given key. - If `split_commas=True` is passed, then any comma separated header - values are split into multiple return strings. - """ - get_header_key = key.lower().encode(self.encoding) - - values = [ - item_value.decode(self.encoding) - for _, item_key, item_value in self._list - if item_key.lower() == get_header_key - ] - - if not split_commas: - return values - - split_values = [] - for value in values: - split_values.extend([item.strip() for item in value.split(",")]) - return split_values - - def update(self, headers: typing.Optional[HeaderTypes] = None) -> None: # type: ignore - headers = Headers(headers) - for key in headers.keys(): - if key in self: - self.pop(key) - self._list.extend(headers._list) - - def copy(self) -> "Headers": - return Headers(self, encoding=self.encoding) - - def __getitem__(self, key: str) -> str: - """ - Return a single header value. - - If there are multiple headers with the same key, then we concatenate - them with commas. See: https://tools.ietf.org/html/rfc7230#section-3.2.2 - """ - normalized_key = key.lower().encode(self.encoding) - - items = [ - header_value.decode(self.encoding) - for _, header_key, header_value in self._list - if header_key == normalized_key - ] - - if items: - return ", ".join(items) - - raise KeyError(key) - - def __setitem__(self, key: str, value: str) -> None: - """ - Set the header `key` to `value`, removing any duplicate entries. - Retains insertion order. - """ - set_key = key.encode(self._encoding or "utf-8") - set_value = value.encode(self._encoding or "utf-8") - lookup_key = set_key.lower() - - found_indexes = [ - idx - for idx, (_, item_key, _) in enumerate(self._list) - if item_key == lookup_key - ] - - for idx in reversed(found_indexes[1:]): - del self._list[idx] - - if found_indexes: - idx = found_indexes[0] - self._list[idx] = (set_key, lookup_key, set_value) - else: - self._list.append((set_key, lookup_key, set_value)) - - def __delitem__(self, key: str) -> None: - """ - Remove the header `key`. 
- """ - del_key = key.lower().encode(self.encoding) - - pop_indexes = [ - idx - for idx, (_, item_key, _) in enumerate(self._list) - if item_key.lower() == del_key - ] - - if not pop_indexes: - raise KeyError(key) - - for idx in reversed(pop_indexes): - del self._list[idx] - - def __contains__(self, key: typing.Any) -> bool: - header_key = key.lower().encode(self.encoding) - return header_key in [key for _, key, _ in self._list] - - def __iter__(self) -> typing.Iterator[typing.Any]: - return iter(self.keys()) - - def __len__(self) -> int: - return len(self._list) - - def __eq__(self, other: typing.Any) -> bool: - try: - other_headers = Headers(other) - except ValueError: - return False - - self_list = [(key, value) for _, key, value in self._list] - other_list = [(key, value) for _, key, value in other_headers._list] - return sorted(self_list) == sorted(other_list) - - def __repr__(self) -> str: - class_name = self.__class__.__name__ - - encoding_str = "" - if self.encoding != "ascii": - encoding_str = f", encoding={self.encoding!r}" - - as_list = list(obfuscate_sensitive_headers(self.multi_items())) - as_dict = dict(as_list) - - no_duplicate_keys = len(as_dict) == len(as_list) - if no_duplicate_keys: - return f"{class_name}({as_dict!r}{encoding_str})" - return f"{class_name}({as_list!r}{encoding_str})" - - -class Request: - def __init__( - self, - method: typing.Union[str, bytes], - url: typing.Union["URL", str], - *, - params: typing.Optional[QueryParamTypes] = None, - headers: typing.Optional[HeaderTypes] = None, - cookies: typing.Optional[CookieTypes] = None, - content: typing.Optional[RequestContent] = None, - data: typing.Optional[RequestData] = None, - files: typing.Optional[RequestFiles] = None, - json: typing.Optional[typing.Any] = None, - stream: typing.Union[SyncByteStream, AsyncByteStream, None] = None, - extensions: typing.Optional[RequestExtensions] = None, - ): - self.method = ( - method.decode("ascii").upper() - if isinstance(method, bytes) - else method.upper() - ) - self.url = URL(url) - if params is not None: - self.url = self.url.copy_merge_params(params=params) - self.headers = Headers(headers) - self.extensions = {} if extensions is None else extensions - - if cookies: - Cookies(cookies).set_cookie_header(self) - - if stream is None: - content_type: typing.Optional[str] = self.headers.get("content-type") - headers, stream = encode_request( - content=content, - data=data, - files=files, - json=json, - boundary=get_multipart_boundary_from_content_type( - content_type=content_type.encode(self.headers.encoding) - if content_type - else None - ), - ) - self._prepare(headers) - self.stream = stream - # Load the request body, except for streaming content. - if isinstance(stream, ByteStream): - self.read() - else: - # There's an important distinction between `Request(content=...)`, - # and `Request(stream=...)`. - # - # Using `content=...` implies automatically populated `Host` and content - # headers, of either `Content-Length: ...` or `Transfer-Encoding: chunked`. - # - # Using `stream=...` will not automatically include *any* auto-populated headers. - # - # As an end-user you don't really need `stream=...`. It's only - # useful when: - # - # * Preserving the request stream when copying requests, eg for redirects. - # * Creating request instances on the *server-side* of the transport API. 
- self.stream = stream - - def _prepare(self, default_headers: typing.Dict[str, str]) -> None: - for key, value in default_headers.items(): - # Ignore Transfer-Encoding if the Content-Length has been set explicitly. - if key.lower() == "transfer-encoding" and "Content-Length" in self.headers: - continue - self.headers.setdefault(key, value) - - auto_headers: typing.List[typing.Tuple[bytes, bytes]] = [] - - has_host = "Host" in self.headers - has_content_length = ( - "Content-Length" in self.headers or "Transfer-Encoding" in self.headers - ) - - if not has_host and self.url.host: - auto_headers.append((b"Host", self.url.netloc)) - if not has_content_length and self.method in ("POST", "PUT", "PATCH"): - auto_headers.append((b"Content-Length", b"0")) - - self.headers = Headers(auto_headers + self.headers.raw) - - @property - def content(self) -> bytes: - if not hasattr(self, "_content"): - raise RequestNotRead() - return self._content - - def read(self) -> bytes: - """ - Read and return the request content. - """ - if not hasattr(self, "_content"): - assert isinstance(self.stream, typing.Iterable) - self._content = b"".join(self.stream) - if not isinstance(self.stream, ByteStream): - # If a streaming request has been read entirely into memory, then - # we can replace the stream with a raw bytes implementation, - # to ensure that any non-replayable streams can still be used. - self.stream = ByteStream(self._content) - return self._content - - async def aread(self) -> bytes: - """ - Read and return the request content. - """ - if not hasattr(self, "_content"): - assert isinstance(self.stream, typing.AsyncIterable) - self._content = b"".join([part async for part in self.stream]) - if not isinstance(self.stream, ByteStream): - # If a streaming request has been read entirely into memory, then - # we can replace the stream with a raw bytes implementation, - # to ensure that any non-replayable streams can still be used. - self.stream = ByteStream(self._content) - return self._content - - def __repr__(self) -> str: - class_name = self.__class__.__name__ - url = str(self.url) - return f"<{class_name}({self.method!r}, {url!r})>" - - def __getstate__(self) -> typing.Dict[str, typing.Any]: - return { - name: value - for name, value in self.__dict__.items() - if name not in ["extensions", "stream"] - } - - def __setstate__(self, state: typing.Dict[str, typing.Any]) -> None: - for name, value in state.items(): - setattr(self, name, value) - self.extensions = {} - self.stream = UnattachedStream() - - -class Response: - def __init__( - self, - status_code: int, - *, - headers: typing.Optional[HeaderTypes] = None, - content: typing.Optional[ResponseContent] = None, - text: typing.Optional[str] = None, - html: typing.Optional[str] = None, - json: typing.Any = None, - stream: typing.Union[SyncByteStream, AsyncByteStream, None] = None, - request: typing.Optional[Request] = None, - extensions: typing.Optional[ResponseExtensions] = None, - history: typing.Optional[typing.List["Response"]] = None, - default_encoding: typing.Union[str, typing.Callable[[bytes], str]] = "utf-8", - ): - self.status_code = status_code - self.headers = Headers(headers) - - self._request: typing.Optional[Request] = request - - # When follow_redirects=False and a redirect is received, - # the client will set `response.next_request`. 
- self.next_request: typing.Optional[Request] = None - - self.extensions = {} if extensions is None else extensions - self.history = [] if history is None else list(history) - - self.is_closed = False - self.is_stream_consumed = False - - self.default_encoding = default_encoding - - if stream is None: - headers, stream = encode_response(content, text, html, json) - self._prepare(headers) - self.stream = stream - if isinstance(stream, ByteStream): - # Load the response body, except for streaming content. - self.read() - else: - # There's an important distinction between `Response(content=...)`, - # and `Response(stream=...)`. - # - # Using `content=...` implies automatically populated content headers, - # of either `Content-Length: ...` or `Transfer-Encoding: chunked`. - # - # Using `stream=...` will not automatically include any content headers. - # - # As an end-user you don't really need `stream=...`. It's only - # useful when creating response instances having received a stream - # from the transport API. - self.stream = stream - - self._num_bytes_downloaded = 0 - - def _prepare(self, default_headers: typing.Dict[str, str]) -> None: - for key, value in default_headers.items(): - # Ignore Transfer-Encoding if the Content-Length has been set explicitly. - if key.lower() == "transfer-encoding" and "content-length" in self.headers: - continue - self.headers.setdefault(key, value) - - @property - def elapsed(self) -> datetime.timedelta: - """ - Returns the time taken for the complete request/response - cycle to complete. - """ - if not hasattr(self, "_elapsed"): - raise RuntimeError( - "'.elapsed' may only be accessed after the response " - "has been read or closed." - ) - return self._elapsed - - @elapsed.setter - def elapsed(self, elapsed: datetime.timedelta) -> None: - self._elapsed = elapsed - - @property - def request(self) -> Request: - """ - Returns the request instance associated to the current response. - """ - if self._request is None: - raise RuntimeError( - "The request instance has not been set on this response." - ) - return self._request - - @request.setter - def request(self, value: Request) -> None: - self._request = value - - @property - def http_version(self) -> str: - try: - http_version: bytes = self.extensions["http_version"] - except KeyError: - return "HTTP/1.1" - else: - return http_version.decode("ascii", errors="ignore") - - @property - def reason_phrase(self) -> str: - try: - reason_phrase: bytes = self.extensions["reason_phrase"] - except KeyError: - return codes.get_reason_phrase(self.status_code) - else: - return reason_phrase.decode("ascii", errors="ignore") - - @property - def url(self) -> URL: - """ - Returns the URL for which the request was made. - """ - return self.request.url - - @property - def content(self) -> bytes: - if not hasattr(self, "_content"): - raise ResponseNotRead() - return self._content - - @property - def text(self) -> str: - if not hasattr(self, "_text"): - content = self.content - if not content: - self._text = "" - else: - decoder = TextDecoder(encoding=self.encoding or "utf-8") - self._text = "".join([decoder.decode(self.content), decoder.flush()]) - return self._text - - @property - def encoding(self) -> typing.Optional[str]: - """ - Return an encoding to use for decoding the byte content into text. - The priority for determining this is given by... - - * `.encoding = <>` has been set explicitly. - * The encoding as specified by the charset parameter in the Content-Type header. 
- * The encoding as determined by `default_encoding`, which may either be - a string like "utf-8" indicating the encoding to use, or may be a callable - which enables charset autodetection. - """ - if not hasattr(self, "_encoding"): - encoding = self.charset_encoding - if encoding is None or not is_known_encoding(encoding): - if isinstance(self.default_encoding, str): - encoding = self.default_encoding - elif hasattr(self, "_content"): - encoding = self.default_encoding(self._content) - self._encoding = encoding or "utf-8" - return self._encoding - - @encoding.setter - def encoding(self, value: str) -> None: - self._encoding = value - - @property - def charset_encoding(self) -> typing.Optional[str]: - """ - Return the encoding, as specified by the Content-Type header. - """ - content_type = self.headers.get("Content-Type") - if content_type is None: - return None - - return parse_content_type_charset(content_type) - - def _get_content_decoder(self) -> ContentDecoder: - """ - Returns a decoder instance which can be used to decode the raw byte - content, depending on the Content-Encoding used in the response. - """ - if not hasattr(self, "_decoder"): - decoders: typing.List[ContentDecoder] = [] - values = self.headers.get_list("content-encoding", split_commas=True) - for value in values: - value = value.strip().lower() - try: - decoder_cls = SUPPORTED_DECODERS[value] - decoders.append(decoder_cls()) - except KeyError: - continue - - if len(decoders) == 1: - self._decoder = decoders[0] - elif len(decoders) > 1: - self._decoder = MultiDecoder(children=decoders) - else: - self._decoder = IdentityDecoder() - - return self._decoder - - @property - def is_informational(self) -> bool: - """ - A property which is `True` for 1xx status codes, `False` otherwise. - """ - return codes.is_informational(self.status_code) - - @property - def is_success(self) -> bool: - """ - A property which is `True` for 2xx status codes, `False` otherwise. - """ - return codes.is_success(self.status_code) - - @property - def is_redirect(self) -> bool: - """ - A property which is `True` for 3xx status codes, `False` otherwise. - - Note that not all responses with a 3xx status code indicate a URL redirect. - - Use `response.has_redirect_location` to determine responses with a properly - formed URL redirection. - """ - return codes.is_redirect(self.status_code) - - @property - def is_client_error(self) -> bool: - """ - A property which is `True` for 4xx status codes, `False` otherwise. - """ - return codes.is_client_error(self.status_code) - - @property - def is_server_error(self) -> bool: - """ - A property which is `True` for 5xx status codes, `False` otherwise. - """ - return codes.is_server_error(self.status_code) - - @property - def is_error(self) -> bool: - """ - A property which is `True` for 4xx and 5xx status codes, `False` otherwise. - """ - return codes.is_error(self.status_code) - - @property - def has_redirect_location(self) -> bool: - """ - Returns True for 3xx responses with a properly formed URL redirection, - `False` otherwise. - """ - return ( - self.status_code - in ( - # 301 (Cacheable redirect. Method may change to GET.) - codes.MOVED_PERMANENTLY, - # 302 (Uncacheable redirect. Method may change to GET.) - codes.FOUND, - # 303 (Client should make a GET or HEAD request.) - codes.SEE_OTHER, - # 307 (Equiv. 302, but retain method) - codes.TEMPORARY_REDIRECT, - # 308 (Equiv. 
301, but retain method) - codes.PERMANENT_REDIRECT, - ) - and "Location" in self.headers - ) - - def raise_for_status(self) -> None: - """ - Raise the `HTTPStatusError` if one occurred. - """ - request = self._request - if request is None: - raise RuntimeError( - "Cannot call `raise_for_status` as the request " - "instance has not been set on this response." - ) - - if self.is_success: - return - - if self.has_redirect_location: - message = ( - "{error_type} '{0.status_code} {0.reason_phrase}' for url '{0.url}'\n" - "Redirect location: '{0.headers[location]}'\n" - "For more information check: https://httpstatuses.com/{0.status_code}" - ) - else: - message = ( - "{error_type} '{0.status_code} {0.reason_phrase}' for url '{0.url}'\n" - "For more information check: https://httpstatuses.com/{0.status_code}" - ) - - status_class = self.status_code // 100 - error_types = { - 1: "Informational response", - 3: "Redirect response", - 4: "Client error", - 5: "Server error", - } - error_type = error_types.get(status_class, "Invalid status code") - message = message.format(self, error_type=error_type) - raise HTTPStatusError(message, request=request, response=self) - - def json(self, **kwargs: typing.Any) -> typing.Any: - if self.charset_encoding is None and self.content and len(self.content) > 3: - encoding = guess_json_utf(self.content) - if encoding is not None: - return jsonlib.loads(self.content.decode(encoding), **kwargs) - return jsonlib.loads(self.text, **kwargs) - - @property - def cookies(self) -> "Cookies": - if not hasattr(self, "_cookies"): - self._cookies = Cookies() - self._cookies.extract_cookies(self) - return self._cookies - - @property - def links(self) -> typing.Dict[typing.Optional[str], typing.Dict[str, str]]: - """ - Returns the parsed header links of the response, if any - """ - header = self.headers.get("link") - ldict = {} - if header: - links = parse_header_links(header) - for link in links: - key = link.get("rel") or link.get("url") - ldict[key] = link - return ldict - - @property - def num_bytes_downloaded(self) -> int: - return self._num_bytes_downloaded - - def __repr__(self) -> str: - return f"" - - def __getstate__(self) -> typing.Dict[str, typing.Any]: - return { - name: value - for name, value in self.__dict__.items() - if name not in ["extensions", "stream", "is_closed", "_decoder"] - } - - def __setstate__(self, state: typing.Dict[str, typing.Any]) -> None: - for name, value in state.items(): - setattr(self, name, value) - self.is_closed = True - self.extensions = {} - self.stream = UnattachedStream() - - def read(self) -> bytes: - """ - Read and return the response content. - """ - if not hasattr(self, "_content"): - self._content = b"".join(self.iter_bytes()) - return self._content - - def iter_bytes( - self, chunk_size: typing.Optional[int] = None - ) -> typing.Iterator[bytes]: - """ - A byte-iterator over the decoded response content. - This allows us to handle gzip, deflate, and brotli encoded responses. 
- """ - if hasattr(self, "_content"): - chunk_size = len(self._content) if chunk_size is None else chunk_size - for i in range(0, len(self._content), max(chunk_size, 1)): - yield self._content[i : i + chunk_size] - else: - decoder = self._get_content_decoder() - chunker = ByteChunker(chunk_size=chunk_size) - with request_context(request=self._request): - for raw_bytes in self.iter_raw(): - decoded = decoder.decode(raw_bytes) - for chunk in chunker.decode(decoded): - yield chunk - decoded = decoder.flush() - for chunk in chunker.decode(decoded): - yield chunk # pragma: no cover - for chunk in chunker.flush(): - yield chunk - - def iter_text( - self, chunk_size: typing.Optional[int] = None - ) -> typing.Iterator[str]: - """ - A str-iterator over the decoded response content - that handles both gzip, deflate, etc but also detects the content's - string encoding. - """ - decoder = TextDecoder(encoding=self.encoding or "utf-8") - chunker = TextChunker(chunk_size=chunk_size) - with request_context(request=self._request): - for byte_content in self.iter_bytes(): - text_content = decoder.decode(byte_content) - for chunk in chunker.decode(text_content): - yield chunk - text_content = decoder.flush() - for chunk in chunker.decode(text_content): - yield chunk - for chunk in chunker.flush(): - yield chunk - - def iter_lines(self) -> typing.Iterator[str]: - decoder = LineDecoder() - with request_context(request=self._request): - for text in self.iter_text(): - for line in decoder.decode(text): - yield line - for line in decoder.flush(): - yield line - - def iter_raw( - self, chunk_size: typing.Optional[int] = None - ) -> typing.Iterator[bytes]: - """ - A byte-iterator over the raw response content. - """ - if self.is_stream_consumed: - raise StreamConsumed() - if self.is_closed: - raise StreamClosed() - if not isinstance(self.stream, SyncByteStream): - raise RuntimeError("Attempted to call a sync iterator on an async stream.") - - self.is_stream_consumed = True - self._num_bytes_downloaded = 0 - chunker = ByteChunker(chunk_size=chunk_size) - - with request_context(request=self._request): - for raw_stream_bytes in self.stream: - self._num_bytes_downloaded += len(raw_stream_bytes) - for chunk in chunker.decode(raw_stream_bytes): - yield chunk - - for chunk in chunker.flush(): - yield chunk - - self.close() - - def close(self) -> None: - """ - Close the response and release the connection. - Automatically called if the response body is read to completion. - """ - if not isinstance(self.stream, SyncByteStream): - raise RuntimeError("Attempted to call an sync close on an async stream.") - - if not self.is_closed: - self.is_closed = True - with request_context(request=self._request): - self.stream.close() - - async def aread(self) -> bytes: - """ - Read and return the response content. - """ - if not hasattr(self, "_content"): - self._content = b"".join([part async for part in self.aiter_bytes()]) - return self._content - - async def aiter_bytes( - self, chunk_size: typing.Optional[int] = None - ) -> typing.AsyncIterator[bytes]: - """ - A byte-iterator over the decoded response content. - This allows us to handle gzip, deflate, and brotli encoded responses. 
- """ - if hasattr(self, "_content"): - chunk_size = len(self._content) if chunk_size is None else chunk_size - for i in range(0, len(self._content), max(chunk_size, 1)): - yield self._content[i : i + chunk_size] - else: - decoder = self._get_content_decoder() - chunker = ByteChunker(chunk_size=chunk_size) - with request_context(request=self._request): - async for raw_bytes in self.aiter_raw(): - decoded = decoder.decode(raw_bytes) - for chunk in chunker.decode(decoded): - yield chunk - decoded = decoder.flush() - for chunk in chunker.decode(decoded): - yield chunk # pragma: no cover - for chunk in chunker.flush(): - yield chunk - - async def aiter_text( - self, chunk_size: typing.Optional[int] = None - ) -> typing.AsyncIterator[str]: - """ - A str-iterator over the decoded response content - that handles both gzip, deflate, etc but also detects the content's - string encoding. - """ - decoder = TextDecoder(encoding=self.encoding or "utf-8") - chunker = TextChunker(chunk_size=chunk_size) - with request_context(request=self._request): - async for byte_content in self.aiter_bytes(): - text_content = decoder.decode(byte_content) - for chunk in chunker.decode(text_content): - yield chunk - text_content = decoder.flush() - for chunk in chunker.decode(text_content): - yield chunk - for chunk in chunker.flush(): - yield chunk - - async def aiter_lines(self) -> typing.AsyncIterator[str]: - decoder = LineDecoder() - with request_context(request=self._request): - async for text in self.aiter_text(): - for line in decoder.decode(text): - yield line - for line in decoder.flush(): - yield line - - async def aiter_raw( - self, chunk_size: typing.Optional[int] = None - ) -> typing.AsyncIterator[bytes]: - """ - A byte-iterator over the raw response content. - """ - if self.is_stream_consumed: - raise StreamConsumed() - if self.is_closed: - raise StreamClosed() - if not isinstance(self.stream, AsyncByteStream): - raise RuntimeError("Attempted to call an async iterator on an sync stream.") - - self.is_stream_consumed = True - self._num_bytes_downloaded = 0 - chunker = ByteChunker(chunk_size=chunk_size) - - with request_context(request=self._request): - async for raw_stream_bytes in self.stream: - self._num_bytes_downloaded += len(raw_stream_bytes) - for chunk in chunker.decode(raw_stream_bytes): - yield chunk - - for chunk in chunker.flush(): - yield chunk - - await self.aclose() - - async def aclose(self) -> None: - """ - Close the response and release the connection. - Automatically called if the response body is read to completion. - """ - if not isinstance(self.stream, AsyncByteStream): - raise RuntimeError("Attempted to call an async close on an sync stream.") - - if not self.is_closed: - self.is_closed = True - with request_context(request=self._request): - await self.stream.aclose() - - -class Cookies(typing.MutableMapping[str, str]): - """ - HTTP Cookies, as a mutable mapping. 
- """ - - def __init__(self, cookies: typing.Optional[CookieTypes] = None) -> None: - if cookies is None or isinstance(cookies, dict): - self.jar = CookieJar() - if isinstance(cookies, dict): - for key, value in cookies.items(): - self.set(key, value) - elif isinstance(cookies, list): - self.jar = CookieJar() - for key, value in cookies: - self.set(key, value) - elif isinstance(cookies, Cookies): - self.jar = CookieJar() - for cookie in cookies.jar: - self.jar.set_cookie(cookie) - else: - self.jar = cookies - - def extract_cookies(self, response: Response) -> None: - """ - Loads any cookies based on the response `Set-Cookie` headers. - """ - urllib_response = self._CookieCompatResponse(response) - urllib_request = self._CookieCompatRequest(response.request) - - self.jar.extract_cookies(urllib_response, urllib_request) # type: ignore - - def set_cookie_header(self, request: Request) -> None: - """ - Sets an appropriate 'Cookie:' HTTP header on the `Request`. - """ - urllib_request = self._CookieCompatRequest(request) - self.jar.add_cookie_header(urllib_request) - - def set(self, name: str, value: str, domain: str = "", path: str = "/") -> None: - """ - Set a cookie value by name. May optionally include domain and path. - """ - kwargs = { - "version": 0, - "name": name, - "value": value, - "port": None, - "port_specified": False, - "domain": domain, - "domain_specified": bool(domain), - "domain_initial_dot": domain.startswith("."), - "path": path, - "path_specified": bool(path), - "secure": False, - "expires": None, - "discard": True, - "comment": None, - "comment_url": None, - "rest": {"HttpOnly": None}, - "rfc2109": False, - } - cookie = Cookie(**kwargs) # type: ignore - self.jar.set_cookie(cookie) - - def get( # type: ignore - self, - name: str, - default: typing.Optional[str] = None, - domain: typing.Optional[str] = None, - path: typing.Optional[str] = None, - ) -> typing.Optional[str]: - """ - Get a cookie by name. May optionally include domain and path - in order to specify exactly which cookie to retrieve. - """ - value = None - for cookie in self.jar: - if cookie.name == name: - if domain is None or cookie.domain == domain: - if path is None or cookie.path == path: - if value is not None: - message = f"Multiple cookies exist with name={name}" - raise CookieConflict(message) - value = cookie.value - - if value is None: - return default - return value - - def delete( - self, - name: str, - domain: typing.Optional[str] = None, - path: typing.Optional[str] = None, - ) -> None: - """ - Delete a cookie by name. May optionally include domain and path - in order to specify exactly which cookie to delete. - """ - if domain is not None and path is not None: - return self.jar.clear(domain, path, name) - - remove = [ - cookie - for cookie in self.jar - if cookie.name == name - and (domain is None or cookie.domain == domain) - and (path is None or cookie.path == path) - ] - - for cookie in remove: - self.jar.clear(cookie.domain, cookie.path, cookie.name) - - def clear( - self, domain: typing.Optional[str] = None, path: typing.Optional[str] = None - ) -> None: - """ - Delete all cookies. Optionally include a domain and path in - order to only delete a subset of all the cookies. 
- """ - args = [] - if domain is not None: - args.append(domain) - if path is not None: - assert domain is not None - args.append(path) - self.jar.clear(*args) - - def update(self, cookies: typing.Optional[CookieTypes] = None) -> None: # type: ignore - cookies = Cookies(cookies) - for cookie in cookies.jar: - self.jar.set_cookie(cookie) - - def __setitem__(self, name: str, value: str) -> None: - return self.set(name, value) - - def __getitem__(self, name: str) -> str: - value = self.get(name) - if value is None: - raise KeyError(name) - return value - - def __delitem__(self, name: str) -> None: - return self.delete(name) - - def __len__(self) -> int: - return len(self.jar) - - def __iter__(self) -> typing.Iterator[str]: - return (cookie.name for cookie in self.jar) - - def __bool__(self) -> bool: - for _ in self.jar: - return True - return False - - def __repr__(self) -> str: - cookies_repr = ", ".join( - [ - f"" - for cookie in self.jar - ] - ) - - return f"" - - class _CookieCompatRequest(urllib.request.Request): - """ - Wraps a `Request` instance up in a compatibility interface suitable - for use with `CookieJar` operations. - """ - - def __init__(self, request: Request) -> None: - super().__init__( - url=str(request.url), - headers=dict(request.headers), - method=request.method, - ) - self.request = request - - def add_unredirected_header(self, key: str, value: str) -> None: - super().add_unredirected_header(key, value) - self.request.headers[key] = value - - class _CookieCompatResponse: - """ - Wraps a `Request` instance up in a compatibility interface suitable - for use with `CookieJar` operations. - """ - - def __init__(self, response: Response): - self.response = response - - def info(self) -> email.message.Message: - info = email.message.Message() - for key, value in self.response.headers.multi_items(): - # Note that setting `info[key]` here is an "append" operation, - # not a "replace" operation. - # https://docs.python.org/3/library/email.compat32-message.html#email.message.Message.__setitem__ - info[key] = value - return info diff --git a/spaces/leogabraneth/text-generation-webui-main/update_wsl.bat b/spaces/leogabraneth/text-generation-webui-main/update_wsl.bat deleted file mode 100644 index 36d019a86641bb69392e04822f9697c80b28dcf9..0000000000000000000000000000000000000000 --- a/spaces/leogabraneth/text-generation-webui-main/update_wsl.bat +++ /dev/null @@ -1,11 +0,0 @@ -@echo off - -cd /D "%~dp0" - -set PATH=%PATH%;%SystemRoot%\system32 - -@rem sed -i 's/\x0D$//' ./wsl.sh converts newlines to unix format in the wsl script calling wsl.sh with 'update' will run updater -call wsl -e bash -lic "sed -i 's/\x0D$//' ./wsl.sh; source ./wsl.sh update" - -:end -pause diff --git a/spaces/lewiswu1209/MockingBird/synthesizer/utils/__init__.py b/spaces/lewiswu1209/MockingBird/synthesizer/utils/__init__.py deleted file mode 100644 index 5ae3e48110e61231acf1e666e5fa76af5e4ebdcd..0000000000000000000000000000000000000000 --- a/spaces/lewiswu1209/MockingBird/synthesizer/utils/__init__.py +++ /dev/null @@ -1,45 +0,0 @@ -import torch - - -_output_ref = None -_replicas_ref = None - -def data_parallel_workaround(model, *input): - global _output_ref - global _replicas_ref - device_ids = list(range(torch.cuda.device_count())) - output_device = device_ids[0] - replicas = torch.nn.parallel.replicate(model, device_ids) - # input.shape = (num_args, batch, ...) - inputs = torch.nn.parallel.scatter(input, device_ids) - # inputs.shape = (num_gpus, num_args, batch/num_gpus, ...) 
- replicas = replicas[:len(inputs)] - outputs = torch.nn.parallel.parallel_apply(replicas, inputs) - y_hat = torch.nn.parallel.gather(outputs, output_device) - _output_ref = outputs - _replicas_ref = replicas - return y_hat - - -class ValueWindow(): - def __init__(self, window_size=100): - self._window_size = window_size - self._values = [] - - def append(self, x): - self._values = self._values[-(self._window_size - 1):] + [x] - - @property - def sum(self): - return sum(self._values) - - @property - def count(self): - return len(self._values) - - @property - def average(self): - return self.sum / max(1, self.count) - - def reset(self): - self._values = [] diff --git a/spaces/libhost/img.lite/README.md b/spaces/libhost/img.lite/README.md deleted file mode 100644 index 98b81a1d2c9bb4851d25215fa4585e1928aff6d8..0000000000000000000000000000000000000000 --- a/spaces/libhost/img.lite/README.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -title: 🖼 ImgLib.LITE -emoji: ✨️🖼📉 -colorFrom: blue -colorTo: green -sdk: streamlit -sdk_version: 1.2.0 -app_file: imglib.py -pinned: true -license: mit ---- - -![ImgLib.LITE](https://huggingface.co/spaces/TNR-5/Image-Semantic-Searchj/resolve/main/img/ImgLib.png) - -

        - -

Immerse yourself in a world of beautiful images of everything: here you have a whale, a house, and even a landscape, and all of it is generated by AI and completely unique!

        diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Cyberfoot 2014 Tam Indir 2021.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Cyberfoot 2014 Tam Indir 2021.md deleted file mode 100644 index e09b66fffda8edbec9f86e171bddd64c75db805c..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Cyberfoot 2014 Tam Indir 2021.md +++ /dev/null @@ -1,10 +0,0 @@ -

        cyberfoot 2014 tam indir


        Download ✑ ✑ ✑ https://bytlly.com/2uGwM1



        -
-Download Free Logo Sign Mockup. Save. Download free logo mockup · Cihan Büyükakkaş. 5 77. Annotated Turkish translation of Surah Al-i Imran. DOWNLOAD. Download free logo mockup · Öztürk. -Free Download Logo Sign Mockup. Save. Download free logo mockup · Öztürk. -Download free logo mockup · Download. -Download free logo mockup · Öztürk. -You can download or add 8a78ff9644
        -
        -
        -

        diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Desktop Reminder 2 Pro Activation Key.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Desktop Reminder 2 Pro Activation Key.md deleted file mode 100644 index 3073c0841f814e8294cb0ebc3b0917e8d4bf95ea..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Desktop Reminder 2 Pro Activation Key.md +++ /dev/null @@ -1,62 +0,0 @@ - -

        Desktop Reminder 2 Pro Activation Key: How to Get and Use the Best Task Reminder Software

        - -

If you are looking for a way to manage your tasks and never forget anything important, you should consider Desktop Reminder 2 Pro. It is a powerful, easy-to-use program that lets you create and organize tasks, set reminders, and customize your desktop. Whether you want to remember phone calls, appointments, birthdays, deadlines, or any other event, Desktop Reminder 2 Pro can help you achieve your goals.

        - -

        What are the features of Desktop Reminder 2 Pro?

        - -

        Desktop Reminder 2 Pro has many features that make it a versatile and user-friendly solution for task management. Some of these features are:

        -

        Desktop Reminder 2 Pro Activation Key


        Download Zip 🗹 https://bytlly.com/2uGvZ1



        - -
          -
        • Task creation: You can create unlimited tasks with different categories, priorities, statuses, and notes. You can also assign icons and colors to your tasks for easy identification.
        • -
        • Task organization: You can sort, filter, group, and search your tasks by various criteria. You can also drag and drop your tasks to rearrange them on the task list.
        • -
        • Task reminder: You can set or remove an alarm for each task, and choose the type and frequency of the reminder. You can also snooze or dismiss the reminder when it pops up.
        • -
        • Desktop customization: You can change the appearance of your desktop by choosing from various themes, wallpapers, fonts, and colors. You can also take a photo as a desktop wallpaper with your webcam.
        • -
        • Data backup: You can backup and restore your data with a single click. You can also export and import your data in various formats, such as CSV, XML, HTML, etc.
        • -
        • Data security: You can protect your data with encryption and password features. You can also lock or hide your task list from unauthorized access.
        • -
        - -

        How to download and install Desktop Reminder 2 Pro?

        - -

        If you want to try Desktop Reminder 2 Pro, you can download it from the official website of Desktop Reminder. There are two versions of Desktop Reminder 2 Pro: Normal version and Portable version. The Normal version is installed on your computer and requires administrator rights. The Portable version is stored on a removable device and does not require installation or administrator rights. You can choose the version that suits your needs and preferences. The download file is a ZIP archive that contains the executable file and the license key. To install Desktop Reminder 2 Pro, you need to follow these steps:

        - -
          -
        1. Unzip the archive and run the executable file.
        2. -
        3. Follow the instructions on the screen to complete the installation.
        4. -
        5. Double click the license key file (*.keyDR) in Windows Explorer to activate the software.
        6. -
        7. That's all! You can now enjoy Desktop Reminder 2 Pro with full functionality.
        8. -
        - -

        Download Link: Desktop Reminder 2 Pro

        -

        - -

        How to use Desktop Reminder 2 Pro?

        - -

        To use Desktop Reminder 2 Pro, you need to launch the software from your desktop or start menu. You will see a simple and clean interface that consists of three main sections: Task List, Calendar, and Options. To create a task, you can click on the New Task button on the top menu bar. You will see a window that allows you to enter the details of your task, such as name, category, priority, status, date, time, alarm, note, etc. To save your task, you can click on the OK button or press Enter on your keyboard. Your task will be added to the task list. - -To organize your tasks, you can use various tools and options on the top menu bar. For example, you can use the Sort button to sort your tasks by different criteria; you can use the Filter button to filter your tasks by different criteria; you can use the Group button to group your tasks by different criteria; you can use the Search button to search for a specific task; etc. - -To set or remove a reminder for a task, you can right-click on the task and select Set Alarm or Remove Alarm from the context menu. You can also double-click on the task to edit its details and change its alarm settings. - -To customize your desktop, you can click on the Options button on the top menu bar. You will see a window that allows you to change various settings of the software, such as theme, wallpaper, font, color, language, backup, security, etc. - -To backup or restore your data, you can click on the Backup button on the top menu bar. You will see a window that allows you to backup or restore your data with a single click. - -To exit the software, you can click on the Exit button on the top menu bar or press Alt+F4 on your keyboard.

        - -

        Why choose Desktop Reminder 2 Pro?

        - -

Desktop Reminder 2 Pro is a reliable and efficient program that can help you manage your tasks and never forget anything important. It has many advantages over similar software, such as:

        • Functionality: Desktop Reminder 2 Pro provides all the features you need to create and organize your tasks easily.
        • Simplicity: Desktop Reminder 2 Pro has a simple and intuitive interface that makes it easy to use for anyone.
        • Versatility: Desktop Reminder 2 Pro supports various languages and formats and works with any Windows operating system.
        • Security: Desktop Reminder 2 Pro protects your data with encryption and password features and allows you to back up and restore your data easily.
        • Affordability: Desktop Reminder 2 Pro offers a reasonable price for its quality and functionality.

        If you are looking for task reminder software that meets your needs and expectations, give Desktop Reminder 2 Pro a try!

        3cee63e6c2
        -
        -
        \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Disk Drill Pro 2020 Crack With Activation Code New Professional.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Disk Drill Pro 2020 Crack With Activation Code New Professional.md deleted file mode 100644 index 4ca7304a27e015e30c9a9eea63a356f815417781..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Disk Drill Pro 2020 Crack With Activation Code New Professional.md +++ /dev/null @@ -1,28 +0,0 @@ -
        -```html -

        Disk Drill Pro 2020 Crack With Activation Code New Professional: A Review

        -

        Disk Drill Pro 2020 is powerful data recovery software that can help you recover lost or deleted files from any storage device. Whether you accidentally formatted your hard drive, emptied your recycle bin, or suffered a virus attack, Disk Drill Pro 2020 can scan and restore your data in minutes.

        -

        Disk Drill Pro 2020 Crack With Activation Code New Professional


        Download ⚙⚙⚙ https://bytlly.com/2uGw7y



        -

        But what if you don't want to pay for the full version of Disk Drill Pro 2020? You might be tempted to download a cracked version of the software from the internet, hoping to get the same features and performance without spending a dime. However, this is a risky and illegal move that can expose you to serious consequences.

        -

        In this article, we will explain why you should avoid Disk Drill Pro 2020 crack with activation code new professional and what the best alternatives are for recovering your data safely and legally.

        -

        Why You Should Avoid Disk Drill Pro 2020 Crack With Activation Code New Professional

        -

        Disk Drill Pro 2020 crack with activation code new professional is a pirated version of the original software that has been modified by hackers to bypass the license verification process. By using this crack, you might think that you are getting the full functionality of Disk Drill Pro 2020 for free, but in reality, you are putting yourself and your data at risk.

        -

        -

        Here are some of the reasons why you should avoid Disk Drill Pro 2020 crack with activation code new professional:

        -
        • It is illegal. Downloading and using cracked software is a violation of copyright law and can result in legal action from the software developers or authorities. You could face fines, lawsuits, or even jail time for using pirated software.
        • It is unsafe. Cracked software often comes with malware, viruses, spyware, or ransomware that can infect your computer and compromise your security. These malicious programs can steal your personal information, damage your system, encrypt your files, or demand money to unlock them.
        • It is unreliable. Cracked software often has bugs, errors, or missing features that can affect its performance and functionality. You might experience crashes, freezes, or data loss while using Disk Drill Pro 2020 crack with activation code new professional. Moreover, you will not be able to receive any updates or technical support from the official developers.
        • It is unethical. Cracked software harms the software industry and discourages innovation and development. By using Disk Drill Pro 2020 crack with activation code new professional, you are depriving the original developers of their rightful income and recognition for their hard work and creativity.

        What Are The Best Alternatives To Disk Drill Pro 2020 Crack With Activation Code New Professional

        -

        If you want to recover your data without risking your security, legality, or morality, you should avoid Disk Drill Pro 2020 crack with activation code new professional and opt for one of these alternatives instead:

        -
        • Disk Drill Free Edition. Disk Drill offers a free edition of its software that lets you scan your drives, preview found files, and recover up to 500 MB of data for free. You can use this edition to see if Disk Drill can recover your files before upgrading to the pro version. The free edition also includes some useful tools such as a disk health monitor, data backup, data shredder, and duplicate finder.
        • Disk Drill Discount Coupon. If you want to unlock the full potential of Disk Drill Pro 2020, you can use a discount coupon to get a significant price reduction on the official website. You can find various coupons online or subscribe to Disk Drill's newsletter to get exclusive offers and deals.
        • Disk Drill Alternative Software. There are many other data recovery tools available on the market that can help you restore your lost or deleted files. Some of the popular ones are Recuva, EaseUS Data Recovery Wizard, Stellar Data Recovery, and Wondershare Recoverit. You can compare their features, prices, and reviews online and choose the one that suits your needs and budget.

        Conclusion

        -

        Disk Drill Pro 2020 is a great data recovery tool that can help you recover your data from any storage device in minutes. However, you should not use Disk Drill Pro 2020 crack with activation code new professional: it is illegal, unsafe, unreliable, and unethical. Instead, try the free edition, look for an official discount coupon, or choose an alternative data recovery tool to restore your files safely and legally.

        d5da3c52bf
        -
        -
        \ No newline at end of file diff --git a/spaces/lixq/bingo61/tests/kblob.ts b/spaces/lixq/bingo61/tests/kblob.ts deleted file mode 100644 index 9e15b41c1c94a690beb61b23cdb42fc78767ccd2..0000000000000000000000000000000000000000 --- a/spaces/lixq/bingo61/tests/kblob.ts +++ /dev/null @@ -1,27 +0,0 @@ -import FormData from 'form-data' - -import { fetch } from '@/lib/isomorphic' - -const formData = new FormData() - -const knowledgeRequest = {"imageInfo":{"url":"https://www.baidu.com/img/PCfb_5bf082d29588c07f842ccde3f97243ea.png"},"knowledgeRequest":{"invokedSkills":["ImageById"],"subscriptionId":"Bing.Chat.Multimodal","invokedSkillsRequestData":{"enableFaceBlur":true},"convoData":{"convoid":"51D|BingProdUnAuthenticatedUsers|E3DCA904FF236C67C3450163BCEC64CFF3F618CC8A4AFD75FD518F5ED0ADA080","convotone":"Creative"}}} - -formData.append('knowledgeRequest', JSON.stringify(knowledgeRequest)) - - -fetch('https://bing.vcanbb.top/images/kblob', - { - method: 'POST', - body: formData.getBuffer(), - headers: { - "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"", - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": "\"Windows\"", - "Referer": "https://bing.vcanbb.top/web/index.html", - "Referrer-Policy": "origin-when-cross-origin", - ...formData.getHeaders() - } - - } -).then(res => res.text()) -.then(res => console.log('res', res)) diff --git a/spaces/luost26/DiffAb/diffab/utils/transforms/patch.py b/spaces/luost26/DiffAb/diffab/utils/transforms/patch.py deleted file mode 100644 index abe678eb6fa3f64a0637ab8dc87e3ef6102347b8..0000000000000000000000000000000000000000 --- a/spaces/luost26/DiffAb/diffab/utils/transforms/patch.py +++ /dev/null @@ -1,73 +0,0 @@ -import torch - -from ._base import _mask_select_data, register_transform -from ..protein import constants - - -@register_transform('patch_around_anchor') -class PatchAroundAnchor(object): - - def __init__(self, initial_patch_size=128, antigen_size=128): - super().__init__() - self.initial_patch_size = initial_patch_size - self.antigen_size = antigen_size - - def _center(self, data, origin): - origin = origin.reshape(1, 1, 3) - data['pos_heavyatom'] -= origin # (L, A, 3) - data['pos_heavyatom'] = data['pos_heavyatom'] * data['mask_heavyatom'][:, :, None] - data['origin'] = origin.reshape(3) - return data - - def __call__(self, data): - anchor_flag = data['anchor_flag'] # (L,) - anchor_points = data['pos_heavyatom'][anchor_flag, constants.BBHeavyAtom.CA] # (n_anchors, 3) - antigen_mask = (data['fragment_type'] == constants.Fragment.Antigen) - antibody_mask = torch.logical_not(antigen_mask) - - if anchor_flag.sum().item() == 0: - # Generating full antibody-Fv, no antigen given - data_patch = _mask_select_data( - data = data, - mask = antibody_mask, - ) - data_patch = self._center( - data_patch, - origin = data_patch['pos_heavyatom'][:, constants.BBHeavyAtom.CA].mean(dim=0) - ) - return data_patch - - pos_alpha = data['pos_heavyatom'][:, constants.BBHeavyAtom.CA] # (L, 3) - dist_anchor = torch.cdist(pos_alpha, anchor_points).min(dim=1)[0] # (L, ) - initial_patch_idx = torch.topk( - dist_anchor, - k = min(self.initial_patch_size, dist_anchor.size(0)), - largest=False, - )[1] # (initial_patch_size, ) - - dist_anchor_antigen = dist_anchor.masked_fill( - mask = antibody_mask, # Fill antibody with +inf - value = float('+inf') - ) # (L, ) - antigen_patch_idx = torch.topk( - dist_anchor_antigen, - k = min(self.antigen_size, antigen_mask.sum().item()), - largest=False, sorted=True - )[1] # (ag_size, ) - - 
patch_mask = torch.logical_or( - data['generate_flag'], - data['anchor_flag'], - ) - patch_mask[initial_patch_idx] = True - patch_mask[antigen_patch_idx] = True - - patch_idx = torch.arange(0, patch_mask.shape[0])[patch_mask] - - data_patch = _mask_select_data(data, patch_mask) - data_patch = self._center( - data_patch, - origin = anchor_points.mean(dim=0) - ) - data_patch['patch_idx'] = patch_idx - return data_patch diff --git a/spaces/lwchen/CodeFormer/CodeFormer/facelib/detection/yolov5face/utils/__init__.py b/spaces/lwchen/CodeFormer/CodeFormer/facelib/detection/yolov5face/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ma-xu/LIVE/thrust/internal/benchmark/combine_benchmark_results.py b/spaces/ma-xu/LIVE/thrust/internal/benchmark/combine_benchmark_results.py deleted file mode 100644 index f82b21f80a1eadbb16e0e8c27cbbc9d64d268fa7..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/internal/benchmark/combine_benchmark_results.py +++ /dev/null @@ -1,817 +0,0 @@ -#! /usr/bin/env python -# -*- coding: utf-8 -*- - -############################################################################### -# Copyright (c) 2012-7 Bryce Adelstein Lelbach aka wash -# -# Distributed under the Boost Software License, Version 1.0. (See accompanying -# file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) -############################################################################### - -############################################################################### -# Copyright (c) 2018 NVIDIA Corporation -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -############################################################################### - -# XXX Put code shared with `compare_benchmark_results.py` in a common place. - -# XXX Relative uncertainty. - -from sys import exit, stdout - -from os.path import splitext - -from itertools import imap # Lazy map. 
- -from math import sqrt, log10, floor - -from collections import deque - -from argparse import ArgumentParser as argument_parser - -from csv import DictReader as csv_dict_reader -from csv import DictWriter as csv_dict_writer - -from re import compile as regex_compile - -############################################################################### - -def unpack_tuple(f): - """Return a unary function that calls `f` with its argument unpacked.""" - return lambda args: f(*iter(args)) - -def strip_dict(d): - """Strip leading and trailing whitespace from all keys and values in `d`.""" - d.update({key: value.strip() for (key, value) in d.items()}) - -def merge_dicts(d0, d1): - """Create a new `dict` that is the union of `dict`s `d0` and `d1`.""" - d = d0.copy() - d.update(d1) - return d - -def strip_list(l): - """Strip leading and trailing whitespace from all values in `l`.""" - for i, value in enumerate(l): l[i] = value.strip() - -############################################################################### - -def int_or_float(x): - """Convert `x` to either `int` or `float`, preferring `int`. - - Raises: - ValueError : If `x` is not convertible to either `int` or `float` - """ - try: - return int(x) - except ValueError: - return float(x) - -def try_int_or_float(x): - """Try to convert `x` to either `int` or `float`, preferring `int`. `x` is - returned unmodified if conversion fails. - """ - try: - return int_or_float(x) - except ValueError: - return x - -############################################################################### - -def find_significant_digit(x): - """Return the significant digit of the number x. The result is the number of - digits after the decimal place to round to (negative numbers indicate rounding - before the decimal place).""" - if x == 0: return 0 - return -int(floor(log10(abs(x)))) - -def round_with_int_conversion(x, ndigits = None): - """Rounds `x` to `ndigits` after the the decimal place. If `ndigits` is less - than 1, convert the result to `int`. If `ndigits` is `None`, the significant - digit of `x` is used.""" - if ndigits is None: ndigits = find_significant_digit(x) - x_rounded = round(x, ndigits) - return int(x_rounded) if ndigits < 1 else x_rounded - -############################################################################### - -class measured_variable(object): - """A meta-variable representing measured data. It is composed of three raw - variables plus units meta-data. - - Attributes: - quantity (`str`) : - Name of the quantity variable of this object. - uncertainty (`str`) : - Name of the uncertainty variable of this object. - sample_size (`str`) : - Name of the sample size variable of this object. - units (units class or `None`) : - The units the value is measured in. - """ - - def __init__(self, quantity, uncertainty, sample_size, units = None): - self.quantity = quantity - self.uncertainty = uncertainty - self.sample_size = sample_size - self.units = units - - def as_tuple(self): - return (self.quantity, self.uncertainty, self.sample_size, self.units) - - def __iter__(self): - return iter(self.as_tuple()) - - def __str__(self): - return str(self.as_tuple()) - - def __repr__(self): - return str(self) - -class measured_value(object): - """An object that represents a value determined by multiple measurements. - - Attributes: - quantity (scalar) : - The quantity of the value, e.g. the arithmetic mean. - uncertainty (scalar) : - The measurement uncertainty, e.g. the sample standard deviation. 
- sample_size (`int`) : - The number of observations contributing to the value. - units (units class or `None`) : - The units the value is measured in. - """ - - def __init__(self, quantity, uncertainty, sample_size = 1, units = None): - self.quantity = quantity - self.uncertainty = uncertainty - self.sample_size = sample_size - self.units = units - - def as_tuple(self): - return (self.quantity, self.uncertainty, self.sample_size, self.units) - - def __iter__(self): - return iter(self.as_tuple()) - - def __str__(self): - return str(self.as_tuple()) - - def __repr__(self): - return str(self) - -############################################################################### - -def arithmetic_mean(X): - """Computes the arithmetic mean of the sequence `X`. - - Let: - - * `n = len(X)`. - * `u` denote the arithmetic mean of `X`. - - .. math:: - - u = \frac{\sum_{i = 0}^{n - 1} X_i}{n} - """ - return sum(X) / len(X) - -def sample_variance(X, u = None): - """Computes the sample variance of the sequence `X`. - - Let: - - * `n = len(X)`. - * `u` denote the arithmetic mean of `X`. - * `s` denote the sample standard deviation of `X`. - - .. math:: - - v = \frac{\sum_{i = 0}^{n - 1} (X_i - u)^2}{n - 1} - - Args: - X (`Iterable`) : The sequence of values. - u (number) : The arithmetic mean of `X`. - """ - if u is None: u = arithmetic_mean(X) - return sum(imap(lambda X_i: (X_i - u) ** 2, X)) / (len(X) - 1) - -def sample_standard_deviation(X, u = None, v = None): - """Computes the sample standard deviation of the sequence `X`. - - Let: - - * `n = len(X)`. - * `u` denote the arithmetic mean of `X`. - * `v` denote the sample variance of `X`. - * `s` denote the sample standard deviation of `X`. - - .. math:: - - s &= \sqrt{v} - &= \sqrt{\frac{\sum_{i = 0}^{n - 1} (X_i - u)^2}{n - 1}} - - Args: - X (`Iterable`) : The sequence of values. - u (number) : The arithmetic mean of `X`. - v (number) : The sample variance of `X`. - """ - if u is None: u = arithmetic_mean(X) - if v is None: v = sample_variance(X, u) - return sqrt(v) - -def combine_sample_size(As): - """Computes the combined sample variance of a group of `measured_value`s. - - Let: - - * `g = len(As)`. - * `n_i = As[i].samples`. - * `n` denote the combined sample size of `As`. - - .. math:: - - n = \sum{i = 0}^{g - 1} n_i - """ - return sum(imap(unpack_tuple(lambda u_i, s_i, n_i, t_i: n_i), As)) - -def combine_arithmetic_mean(As, n = None): - """Computes the combined arithmetic mean of a group of `measured_value`s. - - Let: - - * `g = len(As)`. - * `u_i = As[i].quantity`. - * `n_i = As[i].samples`. - * `n` denote the combined sample size of `As`. - * `u` denote the arithmetic mean of the quantities of `As`. - - .. math:: - - u = \frac{\sum{i = 0}^{g - 1} n_i u_i}{n} - """ - if n is None: n = combine_sample_size(As) - return sum(imap(unpack_tuple(lambda u_i, s_i, n_i, t_i: n_i * u_i), As)) / n - -def combine_sample_variance(As, n = None, u = None): - """Computes the combined sample variance of a group of `measured_value`s. - - Let: - - * `g = len(As)`. - * `u_i = As[i].quantity`. - * `s_i = As[i].uncertainty`. - * `n_i = As[i].samples`. - * `n` denote the combined sample size of `As`. - * `u` denote the arithmetic mean of the quantities of `As`. - * `v` denote the sample variance of `X`. - - .. math:: - - v = \frac{(\sum_{i = 0}^{g - 1} n_i (u_i - u)^2 + s_i^2 (n_i - 1))}{n - 1} - - Args: - As (`Iterable` of `measured_value`s) : The sequence of values. - n (number) : The combined sample sizes of `As`. - u (number) : The combined arithmetic mean of `As`. 
- """ - if n <= 1: return 0 - if n is None: n = combine_sample_size(As) - if u is None: u = combine_arithmetic_mean(As, n) - return sum(imap(unpack_tuple( - lambda u_i, s_i, n_i, t_i: n_i * (u_i - u) ** 2 + (s_i ** 2) * (n_i - 1) - ), As)) / (n - 1) - -def combine_sample_standard_deviation(As, n = None, u = None, v = None): - """Computes the combined sample standard deviation of a group of - `measured_value`s. - - Let: - - * `g = len(As)`. - * `u_i = As[i].quantity`. - * `s_i = As[i].uncertainty`. - * `n_i = As[i].samples`. - * `n` denote the combined sample size of `As`. - * `u` denote the arithmetic mean of the quantities of `As`. - * `v` denote the sample variance of `X`. - * `s` denote the sample standard deviation of `X`. - - .. math:: - - s &= \sqrt{v} - &= \sqrt{\frac{(\sum_{i = 0}^{g - 1} n_i (u_i - u)^2 + s_i^2 (n_i - 1))}{n - 1}} - - Args: - As (`Iterable` of `measured_value`s) : The sequence of values. - n (number) : The combined sample sizes of `As`. - u (number) : The combined arithmetic mean of `As`. - v (number) : The combined sample variance of `As`. - """ - if n <= 1: return 0 - if n is None: n = combine_sample_size(As) - if u is None: u = combine_arithmetic_mean(As, n) - if v is None: v = combine_sample_variance(As, n, u) - return sqrt(v) - -############################################################################### - -def process_program_arguments(): - ap = argument_parser( - description = ( - "Aggregates the results of multiple runs of benchmark results stored in " - "CSV format." - ) - ) - - ap.add_argument( - "-d", "--dependent-variable", - help = ("Treat the specified three variables as a dependent variable. The " - "1st variable is the measured quantity, the 2nd is the uncertainty " - "of the measurement and the 3rd is the sample size. The defaults " - "are the dependent variables of Thrust's benchmark suite. May be " - "specified multiple times."), - action = "append", type = str, dest = "dependent_variables", - metavar = "QUANTITY,UNCERTAINTY,SAMPLES" - ) - - ap.add_argument( - "-p", "--preserve-whitespace", - help = ("Don't trim leading and trailing whitespace from each CSV cell."), - action = "store_true", default = False - ) - - ap.add_argument( - "-o", "--output-file", - help = ("The file that results are written to. If `-`, results are " - "written to stdout."), - action = "store", type = str, default = "-", - metavar = "OUTPUT" - ) - - ap.add_argument( - "input_files", - help = ("Input CSV files. The first two rows should be a header. The 1st " - "header row specifies the name of each variable, and the 2nd " - "header row specifies the units for that variable."), - type = str, nargs = "+", - metavar = "INPUTS" - ) - - return ap.parse_args() - -############################################################################### - -def filter_comments(f, s = "#"): - """Return an iterator to the file `f` which filters out all lines beginning - with `s`.""" - return filter(lambda line: not line.startswith(s), f) - -############################################################################### - -class io_manager(object): - """Manages I/O operations and represents the input data as an `Iterable` - sequence of `dict`s. - - It is `Iterable` and an `Iterator`. It can be used with `with`. - - Attributes: - preserve_whitespace (`bool`) : - If `False`, leading and trailing whitespace is stripped from each CSV cell. - writer (`csv_dict_writer`) : - CSV writer object that the output is written to. - output_file (`file` or `stdout`) : - The output `file` object. 
- readers (`list` of `csv_dict_reader`s) : - List of input files as CSV reader objects. - input_files (list of `file`s) : - List of input `file` objects. - variable_names (`list` of `str`s) : - Names of the variables, in order. - variable_units (`list` of `str`s) : - Units of the variables, in order. - """ - - def __init__(self, input_files, output_file, preserve_whitespace = True): - """Read input files and open the output file and construct a new `io_manager` - object. - - If `preserve_whitespace` is `False`, leading and trailing whitespace is - stripped from each CSV cell. - - Raises - AssertionError : - If `len(input_files) <= 0` or `type(preserve_whitespace) != bool`. - """ - assert len(input_files) > 0, "No input files provided." - - assert type(preserve_whitespace) == bool - - self.preserve_whitespace = preserve_whitespace - - self.readers = deque() - - self.variable_names = None - self.variable_units = None - - self.input_files = deque() - - for input_file in input_files: - input_file_object = open(input_file) - reader = csv_dict_reader(filter_comments(input_file_object)) - - if not self.preserve_whitespace: - strip_list(reader.fieldnames) - - if self.variable_names is None: - self.variable_names = reader.fieldnames - else: - # Make sure all inputs have the same schema. - assert self.variable_names == reader.fieldnames, \ - "Input file (`" + input_file + "`) variable schema `" + \ - str(reader.fieldnames) + "` does not match the variable schema `" + \ - str(self.variable_names) + "`." - - # Consume the next row, which should be the second line of the header. - variable_units = reader.next() - - if not self.preserve_whitespace: - strip_dict(variable_units) - - if self.variable_units is None: - self.variable_units = variable_units - else: - # Make sure all inputs have the same units schema. - assert self.variable_units == variable_units, \ - "Input file (`" + input_file + "`) units schema `" + \ - str(variable_units) + "` does not match the units schema `" + \ - str(self.variable_units) + "`." - - self.readers.append(reader) - self.input_files.append(input_file_object) - - if output_file == "-": # Output to stdout. - self.output_file = stdout - else: # Output to user-specified file. - self.output_file = open(output_file, "w") - - self.writer = csv_dict_writer( - self.output_file, fieldnames = self.variable_names - ) - - def __enter__(self): - """Called upon entering a `with` statement.""" - return self - - def __exit__(self, *args): - """Called upon exiting a `with` statement.""" - if self.output_file is stdout: - self.output_file = None - elif self.output_file is not None: - self.output_file.__exit__(*args) - - for input_file in self.input_files: - input_file.__exit__(*args) - - ############################################################################# - # Input Stream. - - def __iter__(self): - """Return an iterator to the input sequence. - - This is a requirement for the `Iterable` protocol. - """ - return self - - def next(self): - """Consume and return the next record (a `dict` representing a CSV row) in - the input. - - This is a requirement for the `Iterator` protocol. - - Raises: - StopIteration : If there is no more input. - """ - if len(self.readers) == 0: - raise StopIteration() - - try: - row = self.readers[0].next() - if not self.preserve_whitespace: strip_dict(row) - return row - except StopIteration: - # The current reader is empty, so pop it, pop it's input file, close the - # input file, and then call ourselves again. 
- self.readers.popleft() - self.input_files.popleft().close() - return self.next() - - ############################################################################# - # Output. - - def write_header(self): - """Write the header for the output CSV file.""" - # Write the first line of the header. - self.writer.writeheader() - - # Write the second line of the header. - self.writer.writerow(self.variable_units) - - def write(self, d): - """Write a record (a `dict`) to the output CSV file.""" - self.writer.writerow(d) - -############################################################################### - -class dependent_variable_parser(object): - """Parses a `--dependent-variable=AVG,STDEV,TRIALS` command line argument.""" - - ############################################################################# - # Grammar - - # Parse a variable_name. - variable_name_rule = r'[^,]+' - - # Parse a variable classification. - dependent_variable_rule = r'(' + variable_name_rule + r')' \ - + r',' \ - + r'(' + variable_name_rule + r')' \ - + r',' \ - + r'(' + variable_name_rule + r')' - - engine = regex_compile(dependent_variable_rule) - - ############################################################################# - - def __call__(self, s): - """Parses the string `s` with the form "AVG,STDEV,TRIALS". - - Returns: - A `measured_variable`. - - Raises: - AssertionError : If parsing fails. - """ - - match = self.engine.match(s) - - assert match is not None, \ - "Dependent variable (-d) `" +s+ "` is invalid, the format is " + \ - "`AVG,STDEV,TRIALS`." - - return measured_variable(match.group(1), match.group(2), match.group(3)) - -############################################################################### - -class record_aggregator(object): - """Consumes and combines records and represents the result as an `Iterable` - sequence of `dict`s. - - It is `Iterable` and an `Iterator`. - - Attributes: - dependent_variables (`list` of `measured_variable`s) : - A list of dependent variables provided on the command line. - dataset (`dict`) : - A mapping of distinguishing (e.g. control + independent) values (`tuple`s - of variable-quantity pairs) to `list`s of dependent values (`dict`s from - variables to lists of cells). - in_order_dataset_keys : - A list of unique dataset keys (e.g. distinguishing variables) in order of - appearance. - """ - - parse_dependent_variable = dependent_variable_parser() - - def __init__(self, raw_dependent_variables): - """Parse dependent variables and construct a new `record_aggregator` object. - - Raises: - AssertionError : If parsing of dependent variables fails. - """ - self.dependent_variables = [] - - if raw_dependent_variables is not None: - for variable in raw_dependent_variables: - self.dependent_variables.append(self.parse_dependent_variable(variable)) - - self.dataset = {} - - self.in_order_dataset_keys = deque() - - ############################################################################# - # Insertion. - - def append(self, record): - """Add `record` to the dataset. - - Raises: - ValueError : If any `str`-to-numeric conversions fail. - """ - # The distinguishing variables are the control and independent variables. - # They form the key for each record in the dataset. Records with the same - # distinguishing variables are treated as observations of the same data - # point. - dependent_values = {} - - # To allow the same sample size variable to be used for multiple dependent - # variables, we don't pop sample size variables until we're done processing - # all variables. 
- sample_size_variables = [] - - # Separate the dependent values from the distinguishing variables and - # perform `str`-to-numeric conversions. - for variable in self.dependent_variables: - quantity, uncertainty, sample_size, units = variable.as_tuple() - - dependent_values[quantity] = [int_or_float(record.pop(quantity))] - dependent_values[uncertainty] = [int_or_float(record.pop(uncertainty))] - dependent_values[sample_size] = [int(record[sample_size])] - - sample_size_variables.append(sample_size) - - # Pop sample size variables. - for sample_size_variable in sample_size_variables: - # Allowed to fail, as we may have duplicates. - record.pop(sample_size_variable, None) - - # `dict`s aren't hashable, so create a tuple of key-value pairs. - distinguishing_values = tuple(record.items()) - - if distinguishing_values in self.dataset: - # These distinguishing values already exist, so get the `dict` they're - # mapped to, look up each key in `dependent_values` in the `dict`, and - # add the corresponding quantity in `dependent_values` to the list in the - # the `dict`. - for variable, columns in dependent_values.iteritems(): - self.dataset[distinguishing_values][variable] += columns - else: - # These distinguishing values aren't in the dataset, so add them and - # record them in `in_order_dataset_keys`. - self.dataset[distinguishing_values] = dependent_values - self.in_order_dataset_keys.append(distinguishing_values) - - ############################################################################# - # Postprocessing. - - def combine_dependent_values(self, dependent_values): - """Takes a mapping of dependent variables to lists of cells and returns - a new mapping with the cells combined. - - Raises: - AssertionError : If class invariants were violated. - """ - combined_dependent_values = dependent_values.copy() - - for variable in self.dependent_variables: - quantity, uncertainty, sample_size, units = variable.as_tuple() - - quantities = dependent_values[quantity] - uncertainties = dependent_values[uncertainty] - sample_sizes = dependent_values[sample_size] - - if type(sample_size) is list: - # Sample size hasn't been combined yet. - assert len(quantities) == len(uncertainties) \ - and len(uncertainties) == len(sample_sizes), \ - "Length of quantities list `(" + str(len(quantities)) + ")`, " + \ - "length of uncertainties list `(" + str(len(uncertainties)) + \ - "),` and length of sample sizes list `(" + str(len(sample_sizes)) + \ - ")` are not the same." - else: - # Another dependent variable that uses our sample size has combined it - # already. - assert len(quantities) == len(uncertainties), \ - "Length of quantities list `(" + str(len(quantities)) + ")` and " + \ - "length of uncertainties list `(" + str(len(uncertainties)) + \ - ")` are not the same." - - # Convert the three separate `list`s into one list of `measured_value`s. - measured_values = [] - - for i in range(len(quantities)): - mv = measured_value( - quantities[i], uncertainties[i], sample_sizes[i], units - ) - - measured_values.append(mv) - - # Combine the `measured_value`s. - combined_sample_size = combine_sample_size( - measured_values - ) - - combined_arithmetic_mean = combine_arithmetic_mean( - measured_values, combined_sample_size - ) - - combined_sample_standard_deviation = combine_sample_standard_deviation( - measured_values, combined_sample_size, combined_arithmetic_mean - ) - - # Round the quantity and uncertainty to the significant digit of - # uncertainty and insert the combined values into the results. 
- sigdig = find_significant_digit(combined_sample_standard_deviation) - -# combined_arithmetic_mean = round_with_int_conversion( -# combined_arithmetic_mean, sigdig -# ) - -# combined_sample_standard_deviation = round_with_int_conversion( -# combined_sample_standard_deviation, sigdig -# ) - - combined_dependent_values[quantity] = combined_arithmetic_mean - combined_dependent_values[uncertainty] = combined_sample_standard_deviation - combined_dependent_values[sample_size] = combined_sample_size - - return combined_dependent_values - - ############################################################################# - # Output Stream. - - def __iter__(self): - """Return an iterator to the output sequence of separated distinguishing - variables and dependent variables (a tuple of two `dict`s). - - This is a requirement for the `Iterable` protocol. - """ - return self - - def records(self): - """Return an iterator to the output sequence of CSV rows (`dict`s of - variables to values). - """ - return imap(unpack_tuple(lambda dist, dep: merge_dicts(dist, dep)), self) - - def next(self): - """Produce the components of the next output record - a tuple of two - `dict`s. The first `dict` is a mapping of distinguishing variables to - distinguishing values, the second `dict` is a mapping of dependent - variables to combined dependent values. Combining the two dicts forms a - CSV row suitable for output. - - This is a requirement for the `Iterator` protocol. - - Raises: - StopIteration : If there is no more output. - AssertionError : If class invariants were violated. - """ - assert len(self.dataset.keys()) == len(self.in_order_dataset_keys), \ - "Number of dataset keys (`" + str(len(self.dataset.keys())) + \ - "`) is not equal to the number of keys in the ordering list (`" + \ - str(len(self.in_order_dataset_keys)) + "`)." - - if len(self.in_order_dataset_keys) == 0: - raise StopIteration() - - # Get the next set of distinguishing values and convert them to a `dict`. - raw_distinguishing_values = self.in_order_dataset_keys.popleft() - distinguishing_values = dict(raw_distinguishing_values) - - dependent_values = self.dataset.pop(raw_distinguishing_values) - - combined_dependent_values = self.combine_dependent_values(dependent_values) - - return (distinguishing_values, combined_dependent_values) - -############################################################################### - -args = process_program_arguments() - -if args.dependent_variables is None: - args.dependent_variables = [ - "STL Average Walltime,STL Walltime Uncertainty,STL Trials", - "STL Average Throughput,STL Throughput Uncertainty,STL Trials", - "Thrust Average Walltime,Thrust Walltime Uncertainty,Thrust Trials", - "Thrust Average Throughput,Thrust Throughput Uncertainty,Thrust Trials" - ] - -# Read input files and open the output file. -with io_manager(args.input_files, - args.output_file, - args.preserve_whitespace) as iom: - # Parse dependent variable options. - ra = record_aggregator(args.dependent_variables) - - # Add all input data to the `record_aggregator`. - for record in iom: - ra.append(record) - - iom.write_header() - - # Write combined results out. 
- for record in ra.records(): - iom.write(record) - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/sequential/binary_search.h b/spaces/ma-xu/LIVE/thrust/thrust/system/detail/sequential/binary_search.h deleted file mode 100644 index 54534143ecd7a4712094461b60c6e2b902a6781e..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/sequential/binary_search.h +++ /dev/null @@ -1,157 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file binary_search.h - * \brief Sequential implementation of binary search algorithms. - */ - -#pragma once - -#include -#include -#include -#include - -namespace thrust -{ -namespace system -{ -namespace detail -{ -namespace sequential -{ - - -__thrust_exec_check_disable__ -template -__host__ __device__ -ForwardIterator lower_bound(sequential::execution_policy &, - ForwardIterator first, - ForwardIterator last, - const T& val, - StrictWeakOrdering comp) -{ - // wrap comp - thrust::detail::wrapped_function< - StrictWeakOrdering, - bool - > wrapped_comp(comp); - - typedef typename thrust::iterator_difference::type difference_type; - - difference_type len = thrust::distance(first, last); - - while(len > 0) - { - difference_type half = len >> 1; - ForwardIterator middle = first; - - thrust::advance(middle, half); - - if(wrapped_comp(*middle, val)) - { - first = middle; - ++first; - len = len - half - 1; - } - else - { - len = half; - } - } - - return first; -} - - -__thrust_exec_check_disable__ -template -__host__ __device__ -ForwardIterator upper_bound(sequential::execution_policy &, - ForwardIterator first, - ForwardIterator last, - const T& val, - StrictWeakOrdering comp) -{ - // wrap comp - thrust::detail::wrapped_function< - StrictWeakOrdering, - bool - > wrapped_comp(comp); - - typedef typename thrust::iterator_difference::type difference_type; - - difference_type len = thrust::distance(first, last); - - while(len > 0) - { - difference_type half = len >> 1; - ForwardIterator middle = first; - - thrust::advance(middle, half); - - if(wrapped_comp(val, *middle)) - { - len = half; - } - else - { - first = middle; - ++first; - len = len - half - 1; - } - } - - return first; -} - - -__thrust_exec_check_disable__ -template -__host__ __device__ -bool binary_search(sequential::execution_policy &exec, - ForwardIterator first, - ForwardIterator last, - const T& val, - StrictWeakOrdering comp) -{ - ForwardIterator iter = sequential::lower_bound(exec, first, last, val, comp); - - // wrap comp - thrust::detail::wrapped_function< - StrictWeakOrdering, - bool - > wrapped_comp(comp); - - return iter != last && !wrapped_comp(val,*iter); -} - - -} // end namespace sequential -} // end namespace detail -} // end namespace system -} // end namespace thrust - diff --git a/spaces/mabrotha/ChatGPT-prompt-generator/app.py b/spaces/mabrotha/ChatGPT-prompt-generator/app.py deleted file mode 100644 index 
5da2e5088053267553b6f5af9760a0a7d58c2a1f..0000000000000000000000000000000000000000 --- a/spaces/mabrotha/ChatGPT-prompt-generator/app.py +++ /dev/null @@ -1,18 +0,0 @@ -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM -import gradio as gr - -tokenizer = AutoTokenizer.from_pretrained("merve/chatgpt-prompts-bart-long") -model = AutoModelForSeq2SeqLM.from_pretrained("merve/chatgpt-prompts-bart-long", from_tf=True) - -def generate(prompt): - - batch = tokenizer(prompt, return_tensors="pt") - generated_ids = model.generate(batch["input_ids"], max_new_tokens=150) - output = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) - return output[0] - -input_component = gr.Textbox(label = "Input a persona, e.g. photographer", value = "photographer") -output_component = gr.Textbox(label = "Prompt") -examples = [["photographer"], ["developer"]] -description = "This app generates ChatGPT prompts, it's based on a BART model trained on [this dataset](https://huggingface.co/datasets/fka/awesome-chatgpt-prompts). 📓 Simply enter a persona that you want the prompt to be generated based on. 🧙🏻🧑🏻‍🚀🧑🏻‍🎨🧑🏻‍🔬🧑🏻‍💻🧑🏼‍🏫🧑🏽‍🌾" -gr.Interface(generate, inputs = input_component, outputs=output_component, examples=examples, title = "👨🏻‍🎤 ChatGPT Prompt Generator 👨🏻‍🎤", description=description).launch() diff --git a/spaces/manh-linh/Linh-Gradio/README.md b/spaces/manh-linh/Linh-Gradio/README.md deleted file mode 100644 index 37d08ade5571b97cfc30189ba967671986680825..0000000000000000000000000000000000000000 --- a/spaces/manh-linh/Linh-Gradio/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Linh Gradio -emoji: 📊 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/manhkhanhUIT/BOPBTL/Global/util/__init__.py b/spaces/manhkhanhUIT/BOPBTL/Global/util/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/manu1612/spamdet/verdict/header_det.py b/spaces/manu1612/spamdet/verdict/header_det.py deleted file mode 100644 index 2673689aeff36243d348fd4dffd027c53e8a526b..0000000000000000000000000000000000000000 --- a/spaces/manu1612/spamdet/verdict/header_det.py +++ /dev/null @@ -1,43 +0,0 @@ -import email -from email.policy import default -import re - - -class HeaderAnalyzer: - def __init__(self): - pass - - def analyze_header(self, header): - # Extract relevant information from the header - sender = header.get("From") - subject = header.get("Subject") - to = header.get("To") - date = header.get("Date") - # Extract other relevant fields as needed - - # Apply rules or heuristics to analyze the header - spam_score = self.calculate_spam_score(header) # Calculate a spam score based on rules - - # Return the analyzed information - return { - "sender": sender, - "subject": subject, - "to": to, - "date": date, - "spam_score": spam_score - # Include other analyzed information as needed - } - - def calculate_spam_score(self, header): - # Apply rules or heuristics to calculate the spam score - spam_score = 0 - - # Example rules: - if header.get("X-Spam-Score"): - spam_score += float(header.get("X-Spam-Score")) - if header.get("X-Spam-Flag"): - spam_flag = header.get("X-Spam-Flag") - if spam_flag.lower() == "yes": - spam_score += 1 - - return spam_score \ No newline at end of file diff --git a/spaces/manymoon22173/RVC_MODELS/app.py 
b/spaces/manymoon22173/RVC_MODELS/app.py deleted file mode 100644 index 8afbafbeb874e6d0f440dd8a8020c6ad76155023..0000000000000000000000000000000000000000 --- a/spaces/manymoon22173/RVC_MODELS/app.py +++ /dev/null @@ -1,184 +0,0 @@ -import os -import json -import argparse -import traceback -import logging -import gradio as gr -import numpy as np -import librosa -import torch -import asyncio -import edge_tts -from datetime import datetime -from fairseq import checkpoint_utils -from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono -from vc_infer_pipeline import VC -from config import ( - is_half, - device -) -logging.getLogger("numba").setLevel(logging.WARNING) -limitation = os.getenv("SYSTEM") == "spaces" # limit audio length in huggingface spaces - -def create_vc_fn(tgt_sr, net_g, vc, if_f0, file_index, file_big_npy): - def vc_fn( - input_audio, - f0_up_key, - f0_method, - index_rate, - tts_mode, - tts_text, - tts_voice - ): - try: - if tts_mode: - if len(tts_text) > 100 and limitation: - return "Text is too long", None - if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3", sr=16000, mono=True) - else: - if args.files: - audio, sr = librosa.load(input_audio, sr=16000, mono=True) - else: - if input_audio is None: - return "You need to upload an audio", None - sampling_rate, audio = input_audio - duration = audio.shape[0] / sampling_rate - if limitation: - return "Please upload an audio file that is less than 20 seconds. If you need to generate a longer audio file, please use Colab.", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - times = [0, 0, 0] - f0_up_key = int(f0_up_key) - audio_opt = vc.pipeline( - hubert_model, - net_g, - 0, - audio, - times, - f0_up_key, - f0_method, - file_index, - file_big_npy, - index_rate, - if_f0, - ) - print( - f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s" - ) - return "Success", (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, (None, None) - return vc_fn - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(device) - if is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -def change_to_tts_mode(tts_mode): - if tts_mode: - return gr.Audio.update(visible=False), gr.Textbox.update(visible=True), gr.Dropdown.update(visible=True) - else: - return gr.Audio.update(visible=True), gr.Textbox.update(visible=False), gr.Dropdown.update(visible=False) - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--api', action="store_true", default=False) - parser.add_argument("--share", action="store_true", default=False, help="share gradio app") - parser.add_argument("--files", action="store_true", default=False, help="load audio from path") - args, unknown = parser.parse_known_args() - load_hubert() - models = [] - tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) - voices = 
[f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list] - with open("weights/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for name, info in models_info.items(): - if not info['enable']: - continue - title = info['title'] - author = info.get("author", None) - cover = f"weights/{name}/{info['cover']}" - index = f"weights/{name}/{info['feature_retrieval_library']}" - npy = f"weights/{name}/{info['feature_file']}" - cpt = torch.load(f"weights/{name}/{name}.pth", map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) # 不加这一行清不干净, 真奇葩 - net_g.eval().to(device) - if is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, device, is_half) - models.append((name, title, author, cover, create_vc_fn(tgt_sr, net_g, vc, if_f0, index, npy))) - with gr.Blocks() as app: - gr.Markdown( - "#
        RVC Models\n" - "##
        The input audio should be clean and pure voice without background music.\n" - "\n\n" - ) - with gr.Tabs(): - for (name, title, author, cover, vc_fn) in models: - with gr.TabItem(name): - with gr.Row(): - gr.Markdown( - '
        ' - f'
        {title}
        \n'+ - (f'
        Model author: {author}
        ' if author else "")+ - (f'' if cover else "")+ - '
        ' - ) - with gr.Row(): - with gr.Column(): - if args.files: - vc_input = gr.Textbox(label="Input audio path") - else: - vc_input = gr.Audio(label="Input audio"+' (less than 20 seconds)' if limitation else '') - vc_transpose = gr.Number(label="Transpose", value=0) - vc_f0method = gr.Radio( - label="Pitch extraction algorithm, PM is fast but Harvest is better for low frequencies", - choices=["pm", "harvest"], - value="pm", - interactive=True, - ) - vc_index_ratio = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - value=0.6, - interactive=True, - ) - tts_mode = gr.Checkbox(label="tts (use edge-tts as input)", value=False) - tts_text = gr.Textbox(visible=False,label="TTS text (100 words limitation)" if limitation else "TTS text") - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - vc_submit = gr.Button("Generate", variant="primary") - with gr.Column(): - vc_output1 = gr.Textbox(label="Output Message") - vc_output2 = gr.Audio(label="Output Audio") - vc_submit.click(vc_fn, [vc_input, vc_transpose, vc_f0method, vc_index_ratio, tts_mode, tts_text, tts_voice], [vc_output1, vc_output2]) - tts_mode.change(change_to_tts_mode, [tts_mode], [vc_input, tts_text, tts_voice]) - app.queue(concurrency_count=1, max_size=20, api_open=args.api).launch(share=args.share) \ No newline at end of file diff --git a/spaces/marioboy/neil-breen/synthesizer/preprocess.py b/spaces/marioboy/neil-breen/synthesizer/preprocess.py deleted file mode 100644 index cde325c4163d6800404de214202d773addfff296..0000000000000000000000000000000000000000 --- a/spaces/marioboy/neil-breen/synthesizer/preprocess.py +++ /dev/null @@ -1,259 +0,0 @@ -from multiprocessing.pool import Pool -from synthesizer import audio -from functools import partial -from itertools import chain -from encoder import inference as encoder -from pathlib import Path -from utils import logmmse -from tqdm import tqdm -import numpy as np -import librosa - - -def preprocess_dataset(datasets_root: Path, out_dir: Path, n_processes: int, - skip_existing: bool, hparams, no_alignments: bool, - datasets_name: str, subfolders: str): - # Gather the input directories - dataset_root = datasets_root.joinpath(datasets_name) - input_dirs = [dataset_root.joinpath(subfolder.strip()) for subfolder in subfolders.split(",")] - print("\n ".join(map(str, ["Using data from:"] + input_dirs))) - assert all(input_dir.exists() for input_dir in input_dirs) - - # Create the output directories for each output file type - out_dir.joinpath("mels").mkdir(exist_ok=True) - out_dir.joinpath("audio").mkdir(exist_ok=True) - - # Create a metadata file - metadata_fpath = out_dir.joinpath("train.txt") - metadata_file = metadata_fpath.open("a" if skip_existing else "w", encoding="utf-8") - - # Preprocess the dataset - speaker_dirs = list(chain.from_iterable(input_dir.glob("*") for input_dir in input_dirs)) - func = partial(preprocess_speaker, out_dir=out_dir, skip_existing=skip_existing, - hparams=hparams, no_alignments=no_alignments) - job = Pool(n_processes).imap(func, speaker_dirs) - for speaker_metadata in tqdm(job, datasets_name, len(speaker_dirs), unit="speakers"): - for metadatum in speaker_metadata: - metadata_file.write("|".join(str(x) for x in metadatum) + "\n") - metadata_file.close() - - # Verify the contents of the metadata file - with metadata_fpath.open("r", encoding="utf-8") as metadata_file: - metadata = [line.split("|") for line in metadata_file] - mel_frames = 
sum([int(m[4]) for m in metadata]) - timesteps = sum([int(m[3]) for m in metadata]) - sample_rate = hparams.sample_rate - hours = (timesteps / sample_rate) / 3600 - print("The dataset consists of %d utterances, %d mel frames, %d audio timesteps (%.2f hours)." % - (len(metadata), mel_frames, timesteps, hours)) - print("Max input length (text chars): %d" % max(len(m[5]) for m in metadata)) - print("Max mel frames length: %d" % max(int(m[4]) for m in metadata)) - print("Max audio timesteps length: %d" % max(int(m[3]) for m in metadata)) - - -def preprocess_speaker(speaker_dir, out_dir: Path, skip_existing: bool, hparams, no_alignments: bool): - metadata = [] - for book_dir in speaker_dir.glob("*"): - if no_alignments: - # Gather the utterance audios and texts - # LibriTTS uses .wav but we will include extensions for compatibility with other datasets - extensions = ["*.wav", "*.flac", "*.mp3"] - for extension in extensions: - wav_fpaths = book_dir.glob(extension) - - for wav_fpath in wav_fpaths: - # Load the audio waveform - wav, _ = librosa.load(str(wav_fpath), hparams.sample_rate) - if hparams.rescale: - wav = wav / np.abs(wav).max() * hparams.rescaling_max - - # Get the corresponding text - # Check for .txt (for compatibility with other datasets) - text_fpath = wav_fpath.with_suffix(".txt") - if not text_fpath.exists(): - # Check for .normalized.txt (LibriTTS) - text_fpath = wav_fpath.with_suffix(".normalized.txt") - assert text_fpath.exists() - with text_fpath.open("r") as text_file: - text = "".join([line for line in text_file]) - text = text.replace("\"", "") - text = text.strip() - - # Process the utterance - metadata.append(process_utterance(wav, text, out_dir, str(wav_fpath.with_suffix("").name), - skip_existing, hparams)) - else: - # Process alignment file (LibriSpeech support) - # Gather the utterance audios and texts - try: - alignments_fpath = next(book_dir.glob("*.alignment.txt")) - with alignments_fpath.open("r") as alignments_file: - alignments = [line.rstrip().split(" ") for line in alignments_file] - except StopIteration: - # A few alignment files will be missing - continue - - # Iterate over each entry in the alignments file - for wav_fname, words, end_times in alignments: - wav_fpath = book_dir.joinpath(wav_fname + ".flac") - assert wav_fpath.exists() - words = words.replace("\"", "").split(",") - end_times = list(map(float, end_times.replace("\"", "").split(","))) - - # Process each sub-utterance - wavs, texts = split_on_silences(wav_fpath, words, end_times, hparams) - for i, (wav, text) in enumerate(zip(wavs, texts)): - sub_basename = "%s_%02d" % (wav_fname, i) - metadata.append(process_utterance(wav, text, out_dir, sub_basename, - skip_existing, hparams)) - - return [m for m in metadata if m is not None] - - -def split_on_silences(wav_fpath, words, end_times, hparams): - # Load the audio waveform - wav, _ = librosa.load(str(wav_fpath), hparams.sample_rate) - if hparams.rescale: - wav = wav / np.abs(wav).max() * hparams.rescaling_max - - words = np.array(words) - start_times = np.array([0.0] + end_times[:-1]) - end_times = np.array(end_times) - assert len(words) == len(end_times) == len(start_times) - assert words[0] == "" and words[-1] == "" - - # Find pauses that are too long - mask = (words == "") & (end_times - start_times >= hparams.silence_min_duration_split) - mask[0] = mask[-1] = True - breaks = np.where(mask)[0] - - # Profile the noise from the silences and perform noise reduction on the waveform - silence_times = [[start_times[i], end_times[i]] for i in breaks] 
- silence_times = (np.array(silence_times) * hparams.sample_rate).astype(np.int) - noisy_wav = np.concatenate([wav[stime[0]:stime[1]] for stime in silence_times]) - if len(noisy_wav) > hparams.sample_rate * 0.02: - profile = logmmse.profile_noise(noisy_wav, hparams.sample_rate) - wav = logmmse.denoise(wav, profile, eta=0) - - # Re-attach segments that are too short - segments = list(zip(breaks[:-1], breaks[1:])) - segment_durations = [start_times[end] - end_times[start] for start, end in segments] - i = 0 - while i < len(segments) and len(segments) > 1: - if segment_durations[i] < hparams.utterance_min_duration: - # See if the segment can be re-attached with the right or the left segment - left_duration = float("inf") if i == 0 else segment_durations[i - 1] - right_duration = float("inf") if i == len(segments) - 1 else segment_durations[i + 1] - joined_duration = segment_durations[i] + min(left_duration, right_duration) - - # Do not re-attach if it causes the joined utterance to be too long - if joined_duration > hparams.hop_size * hparams.max_mel_frames / hparams.sample_rate: - i += 1 - continue - - # Re-attach the segment with the neighbour of shortest duration - j = i - 1 if left_duration <= right_duration else i - segments[j] = (segments[j][0], segments[j + 1][1]) - segment_durations[j] = joined_duration - del segments[j + 1], segment_durations[j + 1] - else: - i += 1 - - # Split the utterance - segment_times = [[end_times[start], start_times[end]] for start, end in segments] - segment_times = (np.array(segment_times) * hparams.sample_rate).astype(np.int) - wavs = [wav[segment_time[0]:segment_time[1]] for segment_time in segment_times] - texts = [" ".join(words[start + 1:end]).replace(" ", " ") for start, end in segments] - - # # DEBUG: play the audio segments (run with -n=1) - # import sounddevice as sd - # if len(wavs) > 1: - # print("This sentence was split in %d segments:" % len(wavs)) - # else: - # print("There are no silences long enough for this sentence to be split:") - # for wav, text in zip(wavs, texts): - # # Pad the waveform with 1 second of silence because sounddevice tends to cut them early - # # when playing them. You shouldn't need to do that in your parsers. - # wav = np.concatenate((wav, [0] * 16000)) - # print("\t%s" % text) - # sd.play(wav, 16000, blocking=True) - # print("") - - return wavs, texts - - -def process_utterance(wav: np.ndarray, text: str, out_dir: Path, basename: str, - skip_existing: bool, hparams): - ## FOR REFERENCE: - # For you not to lose your head if you ever wish to change things here or implement your own - # synthesizer. - # - Both the audios and the mel spectrograms are saved as numpy arrays - # - There is no processing done to the audios that will be saved to disk beyond volume - # normalization (in split_on_silences) - # - However, pre-emphasis is applied to the audios before computing the mel spectrogram. This - # is why we re-apply it on the audio on the side of the vocoder. - # - Librosa pads the waveform before computing the mel spectrogram. Here, the waveform is saved - # without extra padding. This means that you won't have an exact relation between the length - # of the wav and of the mel spectrogram. See the vocoder data loader. 
- - - # Skip existing utterances if needed - mel_fpath = out_dir.joinpath("mels", "mel-%s.npy" % basename) - wav_fpath = out_dir.joinpath("audio", "audio-%s.npy" % basename) - if skip_existing and mel_fpath.exists() and wav_fpath.exists(): - return None - - # Trim silence - if hparams.trim_silence: - wav = encoder.preprocess_wav(wav, normalize=False, trim_silence=True) - - # Skip utterances that are too short - if len(wav) < hparams.utterance_min_duration * hparams.sample_rate: - return None - - # Compute the mel spectrogram - mel_spectrogram = audio.melspectrogram(wav, hparams).astype(np.float32) - mel_frames = mel_spectrogram.shape[1] - - # Skip utterances that are too long - if mel_frames > hparams.max_mel_frames and hparams.clip_mels_length: - return None - - # Write the spectrogram, embed and audio to disk - np.save(mel_fpath, mel_spectrogram.T, allow_pickle=False) - np.save(wav_fpath, wav, allow_pickle=False) - - # Return a tuple describing this training example - return wav_fpath.name, mel_fpath.name, "embed-%s.npy" % basename, len(wav), mel_frames, text - - -def embed_utterance(fpaths, encoder_model_fpath): - if not encoder.is_loaded(): - encoder.load_model(encoder_model_fpath) - - # Compute the speaker embedding of the utterance - wav_fpath, embed_fpath = fpaths - wav = np.load(wav_fpath) - wav = encoder.preprocess_wav(wav) - embed = encoder.embed_utterance(wav) - np.save(embed_fpath, embed, allow_pickle=False) - - -def create_embeddings(synthesizer_root: Path, encoder_model_fpath: Path, n_processes: int): - wav_dir = synthesizer_root.joinpath("audio") - metadata_fpath = synthesizer_root.joinpath("train.txt") - assert wav_dir.exists() and metadata_fpath.exists() - embed_dir = synthesizer_root.joinpath("embeds") - embed_dir.mkdir(exist_ok=True) - - # Gather the input wave filepath and the target output embed filepath - with metadata_fpath.open("r") as metadata_file: - metadata = [line.split("|") for line in metadata_file] - fpaths = [(wav_dir.joinpath(m[0]), embed_dir.joinpath(m[2])) for m in metadata] - - # TODO: improve on the multiprocessing, it's terrible. Disk I/O is the bottleneck here. - # Embed the utterances in separate threads - func = partial(embed_utterance, encoder_model_fpath=encoder_model_fpath) - job = Pool(n_processes).imap(func, fpaths) - list(tqdm(job, "Embedding", len(fpaths), unit="utterances")) - diff --git a/spaces/matthoffner/starchat-ui/Makefile b/spaces/matthoffner/starchat-ui/Makefile deleted file mode 100644 index 8dc4e12dc227a0ffe26ac1769fd9da539e5b438c..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/starchat-ui/Makefile +++ /dev/null @@ -1,18 +0,0 @@ -include .env - -.PHONY: all - -build: - docker build -t chatbot-ui . 
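# Note: each line of the run: recipe below executes in its own shell, so the
# `export $(cat .env | xargs)` line does not carry over to the `docker run` line;
# ${OPENAI_API_KEY} is instead expanded by make from the variables read in by `include .env`.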
- -run: - export $(cat .env | xargs) - docker stop chatbot-ui || true && docker rm chatbot-ui || true - docker run --name chatbot-ui --rm -e OPENAI_API_KEY=${OPENAI_API_KEY} -p 3000:3000 chatbot-ui - -logs: - docker logs -f chatbot-ui - -push: - docker tag chatbot-ui:latest ${DOCKER_USER}/chatbot-ui:${DOCKER_TAG} - docker push ${DOCKER_USER}/chatbot-ui:${DOCKER_TAG} \ No newline at end of file diff --git a/spaces/maxmax20160403/sovits5.0/vits_decoder/alias/filter.py b/spaces/maxmax20160403/sovits5.0/vits_decoder/alias/filter.py deleted file mode 100644 index 7ad6ea87c1f10ddd94c544037791d7a4634d5ae1..0000000000000000000000000000000000000000 --- a/spaces/maxmax20160403/sovits5.0/vits_decoder/alias/filter.py +++ /dev/null @@ -1,95 +0,0 @@ -# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0 -# LICENSE is in incl_licenses directory. - -import torch -import torch.nn as nn -import torch.nn.functional as F -import math - -if 'sinc' in dir(torch): - sinc = torch.sinc -else: - # This code is adopted from adefossez's julius.core.sinc under the MIT License - # https://adefossez.github.io/julius/julius/core.html - # LICENSE is in incl_licenses directory. - def sinc(x: torch.Tensor): - """ - Implementation of sinc, i.e. sin(pi * x) / (pi * x) - __Warning__: Different to julius.sinc, the input is multiplied by `pi`! - """ - return torch.where(x == 0, - torch.tensor(1., device=x.device, dtype=x.dtype), - torch.sin(math.pi * x) / math.pi / x) - - -# This code is adopted from adefossez's julius.lowpass.LowPassFilters under the MIT License -# https://adefossez.github.io/julius/julius/lowpass.html -# LICENSE is in incl_licenses directory. -def kaiser_sinc_filter1d(cutoff, half_width, kernel_size): # return filter [1,1,kernel_size] - even = (kernel_size % 2 == 0) - half_size = kernel_size // 2 - - #For kaiser window - delta_f = 4 * half_width - A = 2.285 * (half_size - 1) * math.pi * delta_f + 7.95 - if A > 50.: - beta = 0.1102 * (A - 8.7) - elif A >= 21.: - beta = 0.5842 * (A - 21)**0.4 + 0.07886 * (A - 21.) - else: - beta = 0. - window = torch.kaiser_window(kernel_size, beta=beta, periodic=False) - - # ratio = 0.5/cutoff -> 2 * cutoff = 1 / ratio - if even: - time = (torch.arange(-half_size, half_size) + 0.5) - else: - time = torch.arange(kernel_size) - half_size - if cutoff == 0: - filter_ = torch.zeros_like(time) - else: - filter_ = 2 * cutoff * window * sinc(2 * cutoff * time) - # Normalize filter to have sum = 1, otherwise we will have a small leakage - # of the constant component in the input signal. - filter_ /= filter_.sum() - filter = filter_.view(1, 1, kernel_size) - - return filter - - -class LowPassFilter1d(nn.Module): - def __init__(self, - cutoff=0.5, - half_width=0.6, - stride: int = 1, - padding: bool = True, - padding_mode: str = 'replicate', - kernel_size: int = 12): - # kernel_size should be even number for stylegan3 setup, - # in this implementation, odd number is also possible. 
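# cutoff and half_width are given in normalized frequency (Nyquist = 0.5). For an
# even kernel the left pad is one sample shorter than the right, so with stride=1
# the filtered output has the same length as the input.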
- super().__init__() - if cutoff < -0.: - raise ValueError("Minimum cutoff must be larger than zero.") - if cutoff > 0.5: - raise ValueError("A cutoff above 0.5 does not make sense.") - self.kernel_size = kernel_size - self.even = (kernel_size % 2 == 0) - self.pad_left = kernel_size // 2 - int(self.even) - self.pad_right = kernel_size // 2 - self.stride = stride - self.padding = padding - self.padding_mode = padding_mode - filter = kaiser_sinc_filter1d(cutoff, half_width, kernel_size) - self.register_buffer("filter", filter) - - #input [B, C, T] - def forward(self, x): - _, C, _ = x.shape - - if self.padding: - x = F.pad(x, (self.pad_left, self.pad_right), - mode=self.padding_mode) - out = F.conv1d(x, self.filter.expand(C, -1, -1), - stride=self.stride, groups=C) - - return out \ No newline at end of file diff --git a/spaces/menghanxia/disco/models/network.py b/spaces/menghanxia/disco/models/network.py deleted file mode 100644 index bd702e6cf6b3cc9092dc685bd8a65e12508b9636..0000000000000000000000000000000000000000 --- a/spaces/menghanxia/disco/models/network.py +++ /dev/null @@ -1,352 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn import init -import torchvision -import torch.nn.utils.spectral_norm as spectral_norm -import math - - -class ConvBlock(nn.Module): - def __init__(self, inChannels, outChannels, convNum, normLayer=None): - super(ConvBlock, self).__init__() - self.inConv = nn.Sequential( - nn.Conv2d(inChannels, outChannels, kernel_size=3, padding=1), - nn.ReLU(inplace=True) - ) - layers = [] - for _ in range(convNum - 1): - layers.append(nn.Conv2d(outChannels, outChannels, kernel_size=3, padding=1)) - layers.append(nn.ReLU(inplace=True)) - if not (normLayer is None): - layers.append(normLayer(outChannels)) - self.conv = nn.Sequential(*layers) - - def forward(self, x): - x = self.inConv(x) - x = self.conv(x) - return x - - -class ResidualBlock(nn.Module): - def __init__(self, channels, normLayer=None): - super(ResidualBlock, self).__init__() - layers = [] - layers.append(nn.Conv2d(channels, channels, kernel_size=3, padding=1)) - layers.append(spectral_norm(nn.Conv2d(channels, channels, kernel_size=3, padding=1))) - if not (normLayer is None): - layers.append(normLayer(channels)) - layers.append(nn.ReLU(inplace=True)) - layers.append(nn.Conv2d(channels, channels, kernel_size=3, padding=1)) - if not (normLayer is None): - layers.append(normLayer(channels)) - self.conv = nn.Sequential(*layers) - - def forward(self, x): - residual = self.conv(x) - return F.relu(x + residual, inplace=True) - - -class ResidualBlockSN(nn.Module): - def __init__(self, channels, normLayer=None): - super(ResidualBlockSN, self).__init__() - layers = [] - layers.append(spectral_norm(nn.Conv2d(channels, channels, kernel_size=3, padding=1))) - layers.append(nn.LeakyReLU(0.2, True)) - layers.append(spectral_norm(nn.Conv2d(channels, channels, kernel_size=3, padding=1))) - if not (normLayer is None): - layers.append(normLayer(channels)) - self.conv = nn.Sequential(*layers) - - def forward(self, x): - residual = self.conv(x) - return F.leaky_relu(x + residual, 2e-1, inplace=True) - - -class DownsampleBlock(nn.Module): - def __init__(self, inChannels, outChannels, convNum=2, normLayer=None): - super(DownsampleBlock, self).__init__() - layers = [] - layers.append(nn.Conv2d(inChannels, outChannels, kernel_size=3, padding=1, stride=2)) - layers.append(nn.ReLU(inplace=True)) - for _ in range(convNum - 1): - layers.append(nn.Conv2d(outChannels, outChannels, kernel_size=3, 
padding=1)) - layers.append(nn.ReLU(inplace=True)) - if not (normLayer is None): - layers.append(normLayer(outChannels)) - self.conv = nn.Sequential(*layers) - - def forward(self, x): - return self.conv(x) - - -class UpsampleBlock(nn.Module): - def __init__(self, inChannels, outChannels, convNum=2, normLayer=None): - super(UpsampleBlock, self).__init__() - self.conv1 = nn.Conv2d(inChannels, outChannels, kernel_size=3, padding=1, stride=1) - self.combine = nn.Conv2d(2 * outChannels, outChannels, kernel_size=3, padding=1) - layers = [] - for _ in range(convNum - 1): - layers.append(nn.Conv2d(outChannels, outChannels, kernel_size=3, padding=1)) - layers.append(nn.ReLU(inplace=True)) - if not (normLayer is None): - layers.append(normLayer(outChannels)) - self.conv2 = nn.Sequential(*layers) - - def forward(self, x, x0): - x = self.conv1(x) - x = F.interpolate(x, scale_factor=2, mode='nearest') - x = self.combine(torch.cat((x, x0), 1)) - x = F.relu(x) - return self.conv2(x) - - -class UpsampleBlockSN(nn.Module): - def __init__(self, inChannels, outChannels, convNum=2, normLayer=None): - super(UpsampleBlockSN, self).__init__() - self.conv1 = spectral_norm(nn.Conv2d(inChannels, outChannels, kernel_size=3, stride=1, padding=1)) - self.shortcut = spectral_norm(nn.Conv2d(outChannels, outChannels, kernel_size=3, stride=1, padding=1)) - layers = [] - for _ in range(convNum - 1): - layers.append(spectral_norm(nn.Conv2d(outChannels, outChannels, kernel_size=3, padding=1))) - layers.append(nn.LeakyReLU(0.2, True)) - if not (normLayer is None): - layers.append(normLayer(outChannels)) - self.conv2 = nn.Sequential(*layers) - - def forward(self, x, x0): - x = self.conv1(x) - x = F.interpolate(x, scale_factor=2, mode='nearest') - x = x + self.shortcut(x0) - x = F.leaky_relu(x, 2e-1) - return self.conv2(x) - - -class HourGlass2(nn.Module): - def __init__(self, inChannel=3, outChannel=1, resNum=3, normLayer=None): - super(HourGlass2, self).__init__() - self.inConv = ConvBlock(inChannel, 64, convNum=2, normLayer=normLayer) - self.down1 = DownsampleBlock(64, 128, convNum=2, normLayer=normLayer) - self.down2 = DownsampleBlock(128, 256, convNum=2, normLayer=normLayer) - self.residual = nn.Sequential(*[ResidualBlock(256) for _ in range(resNum)]) - self.up2 = UpsampleBlock(256, 128, convNum=3, normLayer=normLayer) - self.up1 = UpsampleBlock(128, 64, convNum=3, normLayer=normLayer) - self.outConv = nn.Conv2d(64, outChannel, kernel_size=3, padding=1) - - def forward(self, x): - f1 = self.inConv(x) - f2 = self.down1(f1) - f3 = self.down2(f2) - r3 = self.residual(f3) - r2 = self.up2(r3, f2) - r1 = self.up1(r2, f1) - y = self.outConv(r1) - return y - - -class ColorProbNet(nn.Module): - def __init__(self, inChannel=1, outChannel=2, with_SA=False): - super(ColorProbNet, self).__init__() - BNFunc = nn.BatchNorm2d - # conv1: 256 - conv1_2 = [spectral_norm(nn.Conv2d(inChannel, 64, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv1_2 += [spectral_norm(nn.Conv2d(64, 64, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv1_2 += [BNFunc(64, affine=True)] - # conv2: 128 - conv2_3 = [spectral_norm(nn.Conv2d(64, 128, 3, stride=2, padding=1)), nn.LeakyReLU(0.2, True),] - conv2_3 += [spectral_norm(nn.Conv2d(128, 128, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv2_3 += [spectral_norm(nn.Conv2d(128, 128, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv2_3 += [BNFunc(128, affine=True)] - # conv3: 64 - conv3_3 = [spectral_norm(nn.Conv2d(128, 256, 3, stride=2, padding=1)), nn.LeakyReLU(0.2, True),] - 
conv3_3 += [spectral_norm(nn.Conv2d(256, 256, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv3_3 += [spectral_norm(nn.Conv2d(256, 256, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv3_3 += [BNFunc(256, affine=True)] - # conv4: 32 - conv4_3 = [spectral_norm(nn.Conv2d(256, 512, 3, stride=2, padding=1)), nn.LeakyReLU(0.2, True),] - conv4_3 += [spectral_norm(nn.Conv2d(512, 512, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv4_3 += [spectral_norm(nn.Conv2d(512, 512, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv4_3 += [BNFunc(512, affine=True)] - # conv5: 32 - conv5_3 = [spectral_norm(nn.Conv2d(512, 512, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv5_3 += [spectral_norm(nn.Conv2d(512, 512, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv5_3 += [spectral_norm(nn.Conv2d(512, 512, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv5_3 += [BNFunc(512, affine=True)] - # conv6: 32 - conv6_3 = [spectral_norm(nn.Conv2d(512, 512, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv6_3 += [spectral_norm(nn.Conv2d(512, 512, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv6_3 += [spectral_norm(nn.Conv2d(512, 512, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv6_3 += [BNFunc(512, affine=True),] - if with_SA: - conv6_3 += [Self_Attn(512)] - # conv7: 32 - conv7_3 = [spectral_norm(nn.Conv2d(512, 512, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv7_3 += [spectral_norm(nn.Conv2d(512, 512, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv7_3 += [spectral_norm(nn.Conv2d(512, 512, 3, stride=1, padding=1)), nn.LeakyReLU(0.2, True),] - conv7_3 += [BNFunc(512, affine=True)] - # conv8: 64 - conv8up = [nn.Upsample(scale_factor=2, mode='nearest'), nn.Conv2d(512, 256, 3, stride=1, padding=1),] - conv3short8 = [nn.Conv2d(256, 256, 3, stride=1, padding=1),] - conv8_3 = [nn.ReLU(True),] - conv8_3 += [nn.Conv2d(256, 256, 3, stride=1, padding=1), nn.ReLU(True),] - conv8_3 += [nn.Conv2d(256, 256, 3, stride=1, padding=1), nn.ReLU(True),] - conv8_3 += [BNFunc(256, affine=True),] - # conv9: 128 - conv9up = [nn.Upsample(scale_factor=2, mode='nearest'), nn.Conv2d(256, 128, 3, stride=1, padding=1),] - conv9_2 = [nn.Conv2d(128, 128, 3, stride=1, padding=1), nn.ReLU(True),] - conv9_2 += [BNFunc(128, affine=True)] - # conv10: 64 - conv10up = [nn.Upsample(scale_factor=2, mode='nearest'), nn.Conv2d(128, 64, 3, stride=1, padding=1),] - conv10_2 = [nn.ReLU(True),] - conv10_2 += [nn.Conv2d(64, outChannel, 3, stride=1, padding=1), nn.ReLU(True),] - - self.conv1_2 = nn.Sequential(*conv1_2) - self.conv2_3 = nn.Sequential(*conv2_3) - self.conv3_3 = nn.Sequential(*conv3_3) - self.conv4_3 = nn.Sequential(*conv4_3) - self.conv5_3 = nn.Sequential(*conv5_3) - self.conv6_3 = nn.Sequential(*conv6_3) - self.conv7_3 = nn.Sequential(*conv7_3) - self.conv8up = nn.Sequential(*conv8up) - self.conv3short8 = nn.Sequential(*conv3short8) - self.conv8_3 = nn.Sequential(*conv8_3) - self.conv9up = nn.Sequential(*conv9up) - self.conv9_2 = nn.Sequential(*conv9_2) - self.conv10up = nn.Sequential(*conv10up) - self.conv10_2 = nn.Sequential(*conv10_2) - # claffificaton output - #self.model_class = nn.Sequential(*[nn.Conv2d(256, 313, kernel_size=1, padding=0, stride=1),]) - - def forward(self, input_grays): - f1_2 = self.conv1_2(input_grays) - f2_3 = self.conv2_3(f1_2) - f3_3 = self.conv3_3(f2_3) - f4_3 = self.conv4_3(f3_3) - f5_3 = self.conv5_3(f4_3) - f6_3 = self.conv6_3(f5_3) - f7_3 = self.conv7_3(f6_3) - f8_up = self.conv8up(f7_3) + 
self.conv3short8(f3_3) - f8_3 = self.conv8_3(f8_up) - f9_up = self.conv9up(f8_3) - f9_2 = self.conv9_2(f9_up) - f10_up = self.conv10up(f9_2) - f10_2 = self.conv10_2(f10_up) - out_feats = f10_2 - #out_probs = self.model_class(f8_3) - return out_feats - - - -def conv(batchNorm, in_planes, out_planes, kernel_size=3, stride=1): - if batchNorm: - return nn.Sequential( - nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride, padding=(kernel_size-1)//2, bias=False), - nn.BatchNorm2d(out_planes), - nn.LeakyReLU(0.1) - ) - else: - return nn.Sequential( - nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride, padding=(kernel_size-1)//2, bias=True), - nn.LeakyReLU(0.1) - ) - - -def deconv(in_planes, out_planes): - return nn.Sequential( - nn.ConvTranspose2d(in_planes, out_planes, kernel_size=4, stride=2, padding=1, bias=True), - nn.LeakyReLU(0.1) - ) - -class SpixelNet(nn.Module): - def __init__(self, inChannel=3, outChannel=9, batchNorm=True): - super(SpixelNet,self).__init__() - self.batchNorm = batchNorm - self.conv0a = conv(self.batchNorm, inChannel, 16, kernel_size=3) - self.conv0b = conv(self.batchNorm, 16, 16, kernel_size=3) - self.conv1a = conv(self.batchNorm, 16, 32, kernel_size=3, stride=2) - self.conv1b = conv(self.batchNorm, 32, 32, kernel_size=3) - self.conv2a = conv(self.batchNorm, 32, 64, kernel_size=3, stride=2) - self.conv2b = conv(self.batchNorm, 64, 64, kernel_size=3) - self.conv3a = conv(self.batchNorm, 64, 128, kernel_size=3, stride=2) - self.conv3b = conv(self.batchNorm, 128, 128, kernel_size=3) - self.conv4a = conv(self.batchNorm, 128, 256, kernel_size=3, stride=2) - self.conv4b = conv(self.batchNorm, 256, 256, kernel_size=3) - self.deconv3 = deconv(256, 128) - self.conv3_1 = conv(self.batchNorm, 256, 128) - self.deconv2 = deconv(128, 64) - self.conv2_1 = conv(self.batchNorm, 128, 64) - self.deconv1 = deconv(64, 32) - self.conv1_1 = conv(self.batchNorm, 64, 32) - self.deconv0 = deconv(32, 16) - self.conv0_1 = conv(self.batchNorm, 32, 16) - self.pred_mask0 = nn.Conv2d(16, outChannel, kernel_size=3, stride=1, padding=1, bias=True) - self.softmax = nn.Softmax(1) - for m in self.modules(): - if isinstance(m, nn.Conv2d) or isinstance(m, nn.ConvTranspose2d): - init.kaiming_normal_(m.weight, 0.1) - if m.bias is not None: - init.constant_(m.bias, 0) - elif isinstance(m, nn.BatchNorm2d): - init.constant_(m.weight, 1) - init.constant_(m.bias, 0) - - def forward(self, x): - out1 = self.conv0b(self.conv0a(x)) #5*5 - out2 = self.conv1b(self.conv1a(out1)) #11*11 - out3 = self.conv2b(self.conv2a(out2)) #23*23 - out4 = self.conv3b(self.conv3a(out3)) #47*47 - out5 = self.conv4b(self.conv4a(out4)) #95*95 - out_deconv3 = self.deconv3(out5) - concat3 = torch.cat((out4, out_deconv3), 1) - out_conv3_1 = self.conv3_1(concat3) - out_deconv2 = self.deconv2(out_conv3_1) - concat2 = torch.cat((out3, out_deconv2), 1) - out_conv2_1 = self.conv2_1(concat2) - out_deconv1 = self.deconv1(out_conv2_1) - concat1 = torch.cat((out2, out_deconv1), 1) - out_conv1_1 = self.conv1_1(concat1) - out_deconv0 = self.deconv0(out_conv1_1) - concat0 = torch.cat((out1, out_deconv0), 1) - out_conv0_1 = self.conv0_1(concat0) - mask0 = self.pred_mask0(out_conv0_1) - prob0 = self.softmax(mask0) - return prob0 - - - -## VGG architecter, used for the perceptual loss using a pretrained VGG network -class VGG19(torch.nn.Module): - def __init__(self, requires_grad=False, local_pretrained_path='checkpoints/vgg19.pth'): - super().__init__() - #vgg_pretrained_features = 
torchvision.models.vgg19(pretrained=True).features - model = torchvision.models.vgg19() - model.load_state_dict(torch.load(local_pretrained_path)) - vgg_pretrained_features = model.features - - self.slice1 = torch.nn.Sequential() - self.slice2 = torch.nn.Sequential() - self.slice3 = torch.nn.Sequential() - self.slice4 = torch.nn.Sequential() - self.slice5 = torch.nn.Sequential() - for x in range(2): - self.slice1.add_module(str(x), vgg_pretrained_features[x]) - for x in range(2, 7): - self.slice2.add_module(str(x), vgg_pretrained_features[x]) - for x in range(7, 12): - self.slice3.add_module(str(x), vgg_pretrained_features[x]) - for x in range(12, 21): - self.slice4.add_module(str(x), vgg_pretrained_features[x]) - for x in range(21, 30): - self.slice5.add_module(str(x), vgg_pretrained_features[x]) - if not requires_grad: - for param in self.parameters(): - param.requires_grad = False - - def forward(self, X): - h_relu1 = self.slice1(X) - h_relu2 = self.slice2(h_relu1) - h_relu3 = self.slice3(h_relu2) - h_relu4 = self.slice4(h_relu3) - h_relu5 = self.slice5(h_relu4) - out = [h_relu1, h_relu2, h_relu3, h_relu4, h_relu5] - return out \ No newline at end of file diff --git a/spaces/merve/fill-in-the-blank/source/fill-in-the-blank/init.js b/spaces/merve/fill-in-the-blank/source/fill-in-the-blank/init.js deleted file mode 100644 index 2e61759b05c45666ac2013000d8c4da1bc367630..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/source/fill-in-the-blank/init.js +++ /dev/null @@ -1,426 +0,0 @@ -/* Copyright 2021 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-==============================================================================*/ - -window.ttSel = d3.select('body').selectAppend('div.tooltip.tooltip-hidden') - -window.palette = function palette(min, max){ - // https://blocks.roadtolarissa.com/1wheel/raw/94091c1f8a69d5966e48aef4ac19baf9/index.html?colors=00006e-006a78-00a963-8a8a8a-d5882a-a15142-7f0000&numTicks=255&space=lab&type=basis - var colors = ['#00006e', '#00006e', '#00006f', '#00006f', '#00006f', '#000070', '#000070', '#000170', '#000471', '#000871', '#000b71', '#000f72', '#001272', '#001572', '#001872', '#001b73', '#001e73', '#002173', '#002473', '#002674', '#002974', '#002c74', '#002e74', '#003174', '#003375', '#003675', '#003975', '#003b75', '#003e75', '#004075', '#004375', '#004575', '#004775', '#004a75', '#004c75', '#004f75', '#005175', '#005375', '#005675', '#005875', '#005a75', '#005c75', '#005e75', '#006175', '#006375', '#006574', '#006774', '#006974', '#006b74', '#006d74', '#006f73', '#007173', '#007373', '#007473', '#007672', '#007872', '#007a72', '#007b72', '#007d71', '#007f71', '#008071', '#008270', '#008370', '#008570', '#008670', '#00886f', '#00896f', '#008a6f', '#008c6f', '#008d6e', '#008e6e', '#008f6e', '#00906e', '#00916e', '#00926d', '#00936d', '#00946d', '#00956d', '#00966d', '#00976d', '#00976d', '#00986d', '#00996d', '#00996d', '#009a6d', '#009a6e', '#009b6e', '#009b6e', '#009b6e', '#079c6f', '#119c6f', '#189c6f', '#1e9c70', '#249c70', '#289c70', '#2d9c71', '#319c71', '#359c71', '#399c72', '#3c9c72', '#409c73', '#439c73', '#479b74', '#4a9b74', '#4d9b74', '#509b75', '#539a75', '#569a76', '#599976', '#5c9976', '#5f9976', '#629877', '#659877', '#679777', '#6a9777', '#6d9677', '#6f9678', '#729578', '#749578', '#779478', '#799477', '#7c9377', '#7e9377', '#819277', '#839277', '#859176', '#889176', '#8a9175', '#8c9075', '#8e9074', '#908f73', '#938f73', '#958e72', '#978e71', '#998e70', '#9b8d6f', '#9d8d6e', '#9f8d6d', '#a08c6c', '#a28c6b', '#a48c69', '#a68b68', '#a88b67', '#a98b65', '#ab8a64', '#ac8a63', '#ae8a61', '#af8960', '#b1895f', '#b2895d', '#b4885c', '#b5885a', '#b68859', '#b78757', '#b88756', '#b98755', '#ba8653', '#bb8652', '#bc8550', '#bd854f', '#be854d', '#bf844c', '#bf844b', '#c0834a', '#c08348', '#c18247', '#c18246', '#c28145', '#c28044', '#c28043', '#c27f42', '#c27e41', '#c37e40', '#c27d3f', '#c27c3f', '#c27b3e', '#c27a3d', '#c27a3d', '#c1793c', '#c1783c', '#c1773c', '#c0763b', '#c0753b', '#bf743a', '#bf733a', '#be713a', '#bd703a', '#bd6f39', '#bc6e39', '#bb6d39', '#bb6b38', '#ba6a38', '#b96938', '#b86737', '#b76637', '#b76537', '#b66336', '#b56236', '#b46035', '#b35e35', '#b25d34', '#b15b34', '#b05933', '#af5833', '#ae5632', '#ad5431', '#ad5230', '#ac502f', '#ab4e2f', '#aa4c2e', '#a94a2c', '#a8482b', '#a7462a', '#a64429', '#a54127', '#a43f26', '#a33d24', '#a33a23', '#a23721', '#a1351f', '#a0321e', '#9f2f1c', '#9e2c1a', '#9d2818', '#9c2516', '#9c2114', '#9b1d11', '#9a180f', '#99120d', '#980b0a', '#970207', '#960004', '#950001', '#940000', '#930000', '#920000', '#910000', '#900000', '#8f0000', '#8e0000', '#8e0000', '#8d0000', '#8c0000', '#8b0000', '#8a0000', '#890000', '#880000', '#870000', '#860000', '#850000', '#840000', '#830000', '#820000', '#810000', '#800000'] - - return v => { - var i = d3.clamp(0, (v - min)/(max - min), 1) - return colors[Math.round(i*(colors.length - 1))] - } - - // https://gka.github.io/palettes/#/99|d|00429d,96ffea,d1ea00|d1ea00,ff005e,93003a|1|1 - // https://gka.github.io/palettes/#/99|d|00429d,96ffea,f1f1d2|f1f1d2,ff005e,93003a|1|1 - 
//https://gka.github.io/palettes/#/99|d|00429d,76dfca,d1d1b3|d1d1b3,a787a8,93003a|1|1 - // https://gka.github.io/palettes/#/99|d|76dfca,00429d,000000|000000,93003a,ff005e|1|1 - - // https://gka.github.io/palettes/#/99|d|078977,91a5ff,555555|555555,e2bfe3,980000|0|1 - // https://gka.github.io/palettes/#/99|d|002854,a1ffe1,555555|555555,ffa361,980000|0|1 - // https://gka.github.io/palettes/#/99|d|002854,a1ffe1,616161|616161,f47e2a,9e005c|0|1 - // var nMid = 13 - // var midIndex = Math.floor(colors.length/2) - // var minIndex = midIndex - (nMid - 1)/2 - // var maxIndex = midIndex + (nMid - 1)/2 - // var interpolate = d3.interpolate(colors[minIndex], colors[maxIndex]) - - // d3.range(minIndex, maxIndex + 1).forEach(i => { - // colors[i] = interpolate((i - minIndex)/nMid) - // }) - - // return d => { - // var rv = d3.interpolateGreys(d/2 + 2/2) - // if (rv == 'rgb(255, 255, 255)') rv = 'rgb(254, 254, 254)' - // return rv - // } - -} -window.util = { - palette, - color: d3.interpolateSpectral, - color: palette(0, 1), -} -window.util.colors = [1 - .25, .25].map(util.color) -window.util.colors.push('#aaaa00') - -!(function(){ - var memo = {} - - util.color2array = d => { - if (memo[d]) return memo[d] - - var {r, g, b} = d3.color(d).rgb() - return memo[d] = [r, g, b].map(v => v/255) - } -})() - - -// add colors to inline elements -!(function(){ - d3.selectAll('c0').st({fontWeight: 600, color: util.colors[0]}) - d3.selectAll('c1').st({fontWeight: 600, color: util.colors[1]}) - d3.selectAll('c2').st({fontWeight: 600, color: util.colors[2]}) -})() - - - -window.pairs = [ - { - class: 'texas-ohio', - s0: 'In New York, they like to buy _.', - s1: 'In Texas, they like to buy _.', - count: 30, - annotations: [ - { - str: 'BERT associates these potential purchases more with Texas
        than New York...', - pos: [15, 15], - color: util.colors[1] - }, - { - str: '...and these purchases
        more with New York
        than Texas', - pos: [290, 305], - color: util.colors[0] - }, - ], - ariaLabel: 'Scatter plot of differences in purchases between New York and Texas. Oil, cotten and land are associated more with Texas; Pictures and perfume are more associated with New York', - alts: [ - { - str: 'Ireland v. Australia', - s1: 'We went to Ireland and bought a _.', - s0: 'We went to Australia and bought a _.', - }, - { - str: 'Arctic v. Equator', - s1: 'Near the Arctic, they like to buy _.', - s0: 'Near the equator, they like to buy _.', - }, - { - str: 'Coast v. Plains', - s1: 'On the coast, they like to buy _.', - s0: 'On the plains, they like to buy _.', - }, - { - str: 'Narnia v. Gotham', - s1: 'In Narnia, they bought a _.', - s0: 'In Gotham, they bought a _.', - }, - { - str: 'Supermarket v. Mall', - s1: 'At the supermarket, they like to buy _.', - s0: 'At the mall, they like to buy _.', - }, - // { - // str: 'Train v. Plane', - // s1: 'At the airport, they like to buy _.', - // s0: 'At the bus depot, they like to buy _.', - // }, - // { - // str: 'buy v. sell', - // s0: 'They like to buy _.', - // s1: 'We like to buy _.', - // }, - // { - // str: 'Paris v. London', - // s1: 'In Paris, they like to buy _.', - // s0: 'In London, they like to buy _.', - // }, - ] - // type: 'Differences', - }, - { - class: 'age-name', - s0: 'Elsie was born in the year of _.', - s1: 'Lauren was born in the year of _.', - count: 200, - ariaLabel: 'Scatter plot of differences in birth years between Elsie and Lauren.', - }, - { - class: 'jim-jane', - s0: 'Jim worked as a _.', - s1: 'Jane worked as a _.', - count: 30, - ariaLabel: 'Scatter plot of differences in occupations between Jim and Jane. Salesmen, carpenter and mechanic are more associated with Jim; Nurse, secretary and modal are more associated with Jane.', - }, - { - class: 'nurse-name', - s0: 'The new nurse was named _.', - s1: 'The new doctor was named _.', - count: 30, - ariaLabel: 'Scatter plot of differences in doctor and nurse names. David, Michael and himself are more associated with doctors; Jean, Sarah and Catherine are more associated with nurses.', - - }, - { - class: 'nurse-name-zari-cda', - s0: 'The new nurse was named _.', - s1: 'The new doctor was named _.', - s0: 'The doctor performed CPR even though _ knew it was too late.', - s1: 'The nurse performed CPR even though _ knew it was too late.', - s0model: '_zari_cda', - s1model: '_zari_cda', - showModel: true, - count: 30, - ariaLabel: 'Scatter plot of differences in doctor and nurse names in the Zari model. He and she are equally associated with both. But Jack, Logan and Andrew are more associated with doctors; Emily, Rachel and Amy are more associated with nurses.', - }, - { - class: 'interesting-pair', - s1: '_ flavored ice cream is tasty.', - s0: '_ flavored ice cream is revolting.', - count: 30, - alts: [ - { - str: 'Dangerous animals', - s1: '_ is a [friendly|dangerous] animal', - s0: '_ is a [friendly|dangerous] animal', - }, - ] - } -] - -pairs.forEach(d => { - d.count = d.count || 200 - d.s0model = d.s0model || '' - d.s1model = d.s1model || '' - d.annotations = d.annotations || [] - d.model = d.s0model ? 
'Zari' : 'BERT' - d.type = d.type || 'Likelihoods' - d.pairStr = JSON.stringify(d) -}) -// pairs = [window.pairs[1]] - - -var diffs = [ - { - s0: 'In [Texas|Paris], [Men|Women] like to buy _.', - s0: 'Born in [1940|2018], [his|her] name was _.', - s0: 'In [1908|2018], [he|she] was employed as a _.', - class: 'difference-difference', - count: 1000, - annotations: [], - model: 'BERT', - type: 'Likelihoods', - ariaLabel: 'Small multiple difference in difference plots.', - } -] - -diffs.forEach(d => { - d.pairStr = JSON.stringify(d) -}) - - -window.sents = [ - { - class: 'hamlet', - str: 'To be or not to be, that is the question;', - }, -] -sents.push({class: 'texas', str: pairs[0].s1.replace('_', 'things')}) -sents.push({class: 'new-york', str: pairs[0].s0.replace('_', 'things')}) - - -window.init = async function(){ - try { window.regltick.cancel() } catch (e) {} - - if (!window.tokenizer){ - window.tokenizer = new BertTokenizer() - await tokenizer.load() - } - - if (!window.bertLargeVocab){ - var text = await (await fetch('data/bert_large_vocab.txt')).text() - window.bertLargeVocab = text - .split('\n') - } - - sents.forEach(initSent) - sleep(10) - - pairs.forEach(initPair) - sleep(500) - window.initGenderOverTime() - - - // Skip rendering differene in difference until scrolled into view - var renderDiffDiff = false - var observer = new IntersectionObserver(entries => { - entries.forEach(d => { - if (renderDiffDiff || !d.isIntersecting) return - - initDiff(diffs[0]) - renderDiffDiff = true - }) - }, {}) - observer.observe(d3.select('.difference-difference').node()) - if (renderDiffDiff) initDiff(diffs[0]) - - - function sleep(ms) { - return new Promise(resolve => setTimeout(resolve, ms)) - } -} - -// Run init, rerun when width changes -!(function(){ - var lastInnerWidth = null - - function resize(){ - if (lastInnerWidth == window.innerWidth) return - lastInnerWidth = window.innerWidth - - window.init() - } - resize() - d3.select(window).on('resize', _.debounce(resize, 500)) -})() - -// Hamlet text entry -!(function(){ - var sel = d3.select('.hamlet-edit').html('') - .st({textAlign: 'center', marginTop: 17}) - .on('keydown', function(){ - sel.classed('changed', 1) - if (d3.event.keyCode != 13) return - d3.event.preventDefault() - - update() - }) - - var sent = sents[0] - - var inputSel = sel.append('textarea').at({cols: 30}) - inputSel.node().value = sent.str - - // sel.append('div') - sel.append('button.button.update').on('click', update).text('Update Sentence') - .st({width: 140, height: 47, marginLeft: 20, marginTop: 0, top: -19, marginRight: 0}) - - - function update(){ - sent.str = inputSel.node().value - - sel.classed('changed', 0) - initSent(sent) - } -})() - - -window.addLockedTooltip = function(sel){ - sel - .on('mouseover', function(d, i){ - ttSel - .html(d) - .select('.footend').remove() - - var x = this.offsetLeft, - y = this.offsetTop, - bb = ttSel.node().getBoundingClientRect(), - left = d3.clamp(20, (x-bb.width/2), window.innerWidth - bb.width - 20), - top = innerHeight + scrollY > y + 20 + bb.height ? 
y + 20 : y - bb.height - 10; - - ttSel.st({left, top}).classed('tooltip-hidden', false) - }) - - sel.on('mousemove',mouseover).on('mouseout', mouseout) - ttSel.on('mousemove', mouseover).on('mouseout', mouseout) - function mouseover(){ - if (window.__ttfade) window.__ttfade.stop() - } - function mouseout(){ - if (window.__ttfade) window.__ttfade.stop() - window.__ttfade = d3.timeout(() => { - ttSel.classed('tooltip-hidden', true) - }, 250) - } -} - -// Footnotes -!(function(){ - var footnums = '¹²³⁴⁵⁶⁷⁸⁹' - - var footendSel = d3.selectAll('.footend') - .each(function(d, i){ - var sel = d3.select(this) - var ogHTML = sel.parent().html() - sel - .at({href: '#footstart-' + i, id: 'footend-' + i}) - .text(footnums[i]) - .datum(ogHTML) - }) - - - var footstartSel = d3.selectAll('.footstart') - .each(function(d, i){ - d3.select(this) - .at({ - href: '#footend-' + i, - }) - .text(footnums[i]) - .datum(footendSel.data()[i]) - .parent().at({id: 'footstart-' + i}) - }) - .call(addLockedTooltip) - -})() - - - - - - - -// // Populate interesting alts -// !(() => { -// var listSel = d3.select('.interesting-list').st({display: 'none'}) - -// var listStr = listSel.text() - -// _.last(pairs).alts = listStr.split('-').map(d => d.trim()).filter(d => d).map(rawStr => { -// var start = rawStr.split('[')[0] -// var end = rawStr.split(']')[1] - -// var [t0, t1] = rawStr.split('[')[1].split(']')[0].split('|') -// var s0 = start + t0 + end -// var s1 = start + t1 + end - -// var str = `
        ${start} -// ${t1}|${t0} -// ${end}
        `.replace('_', '____') - -// return {str, s0, s1} -// }) -// })() - -// // Populate difference in difference -// !(() => { -// var listSel = d3.select('.difference-difference-list').st({display: 'none'}) - -// var listStr = listSel.text() - -// diffs[0].alts = listStr.split('-').map(d => d.trim()).filter(d => d).map(rawStr => { -// var start = rawStr.split('[')[0] -// var end = rawStr.split(']')[1] - -// var [t0, t1] = rawStr.split('[')[1].split(']')[0].split('|') -// var s0 = start + t0 + end -// var s1 = start + t1 + end - -// var str = `
        ${rawStr}
        `.replace('_', '____') - - -// return {str, s0, s1, rawStr} -// }) -// })() diff --git a/spaces/merve/measuring-fairness/source/third_party/params.js b/spaces/merve/measuring-fairness/source/third_party/params.js deleted file mode 100644 index 8b4b8b39bb932ef3d7784445c6e9e5fc04b12841..0000000000000000000000000000000000000000 --- a/spaces/merve/measuring-fairness/source/third_party/params.js +++ /dev/null @@ -1,22 +0,0 @@ -window.makeParams = function(){ - var url = new URL(window.location) - var searchParams = new URLSearchParams(url.search) - - var rv = {} - - rv.get = key => { - return searchParams.get(key) - } - - rv.set = (key, value) => { - searchParams.set(key, value) - - url.search = searchParams.toString() - history.replaceState(null, '', url) - } - - return rv -} - - -if (window.init) init() \ No newline at end of file diff --git a/spaces/mfrashad/CharacterGAN/models/stylegan2/stylegan2-pytorch/model.py b/spaces/mfrashad/CharacterGAN/models/stylegan2/stylegan2-pytorch/model.py deleted file mode 100644 index 4e3c9687a3f4f7301cf053bee95c1e288b1c939b..0000000000000000000000000000000000000000 --- a/spaces/mfrashad/CharacterGAN/models/stylegan2/stylegan2-pytorch/model.py +++ /dev/null @@ -1,703 +0,0 @@ -import math -import random -import functools -import operator - -import torch -from torch import nn -from torch.nn import functional as F -from torch.autograd import Function - -from op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d - - -class PixelNorm(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, input): - return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8) - - -def make_kernel(k): - k = torch.tensor(k, dtype=torch.float32) - - if k.ndim == 1: - k = k[None, :] * k[:, None] - - k /= k.sum() - - return k - - -class Upsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) * (factor ** 2) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad) - - return out - - -class Downsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad) - - return out - - -class Blur(nn.Module): - def __init__(self, kernel, pad, upsample_factor=1): - super().__init__() - - kernel = make_kernel(kernel) - - if upsample_factor > 1: - kernel = kernel * (upsample_factor ** 2) - - self.register_buffer('kernel', kernel) - - self.pad = pad - - def forward(self, input): - out = upfirdn2d(input, self.kernel, pad=self.pad) - - return out - - -class EqualConv2d(nn.Module): - def __init__( - self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True - ): - super().__init__() - - self.weight = nn.Parameter( - torch.randn(out_channel, in_channel, kernel_size, kernel_size) - ) - self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2) - - self.stride = stride - self.padding = padding - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel)) - - else: - self.bias = None - - def forward(self, input): - out = F.conv2d( - input, - 
self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},' - f' {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})' - ) - - -class EqualLinear(nn.Module): - def __init__( - self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None - ): - super().__init__() - - self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul)) - - if bias: - self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init)) - - else: - self.bias = None - - self.activation = activation - - self.scale = (1 / math.sqrt(in_dim)) * lr_mul - self.lr_mul = lr_mul - - def forward(self, input): - if self.activation: - out = F.linear(input, self.weight * self.scale) - out = fused_leaky_relu(out, self.bias * self.lr_mul) - - else: - out = F.linear( - input, self.weight * self.scale, bias=self.bias * self.lr_mul - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})' - ) - - -class ScaledLeakyReLU(nn.Module): - def __init__(self, negative_slope=0.2): - super().__init__() - - self.negative_slope = negative_slope - - def forward(self, input): - out = F.leaky_relu(input, negative_slope=self.negative_slope) - - return out * math.sqrt(2) - - -class ModulatedConv2d(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - demodulate=True, - upsample=False, - downsample=False, - blur_kernel=[1, 3, 3, 1], - ): - super().__init__() - - self.eps = 1e-8 - self.kernel_size = kernel_size - self.in_channel = in_channel - self.out_channel = out_channel - self.upsample = upsample - self.downsample = downsample - - if upsample: - factor = 2 - p = (len(blur_kernel) - factor) - (kernel_size - 1) - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 + 1 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor) - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1)) - - fan_in = in_channel * kernel_size ** 2 - self.scale = 1 / math.sqrt(fan_in) - self.padding = kernel_size // 2 - - self.weight = nn.Parameter( - torch.randn(1, out_channel, in_channel, kernel_size, kernel_size) - ) - - self.modulation = EqualLinear(style_dim, in_channel, bias_init=1) - - self.demodulate = demodulate - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, ' - f'upsample={self.upsample}, downsample={self.downsample})' - ) - - def forward(self, input, style): - batch, in_channel, height, width = input.shape - - style = self.modulation(style).view(batch, 1, in_channel, 1, 1) - weight = self.scale * self.weight * style - - if self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8) - weight = weight * demod.view(batch, self.out_channel, 1, 1, 1) - - weight = weight.view( - batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - - if self.upsample: - input = input.view(1, batch * in_channel, height, width) - weight = weight.view( - batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - weight = weight.transpose(1, 2).reshape( - batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size - ) - out = F.conv_transpose2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, 
width = out.shape - out = out.view(batch, self.out_channel, height, width) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - _, _, height, width = input.shape - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - else: - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=self.padding, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - return out - - -class NoiseInjection(nn.Module): - def __init__(self): - super().__init__() - - self.weight = nn.Parameter(torch.zeros(1)) - - def forward(self, image, noise=None): - if noise is None: - batch, _, height, width = image.shape - noise = image.new_empty(batch, 1, height, width).normal_() - - return image + self.weight * noise - - -class ConstantInput(nn.Module): - def __init__(self, channel, size=4): - super().__init__() - - self.input = nn.Parameter(torch.randn(1, channel, size, size)) - - def forward(self, input): - batch = input.shape[0] - out = self.input.repeat(batch, 1, 1, 1) - - return out - - -class StyledConv(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=False, - blur_kernel=[1, 3, 3, 1], - demodulate=True, - ): - super().__init__() - - self.conv = ModulatedConv2d( - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=upsample, - blur_kernel=blur_kernel, - demodulate=demodulate, - ) - - self.noise = NoiseInjection() - # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1)) - # self.activate = ScaledLeakyReLU(0.2) - self.activate = FusedLeakyReLU(out_channel) - - def forward(self, input, style, noise=None): - out = self.conv(input, style) - out = self.noise(out, noise=noise) - # out = out + self.bias - out = self.activate(out) - - return out - - -class ToRGB(nn.Module): - def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - if upsample: - self.upsample = Upsample(blur_kernel) - - self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - def forward(self, input, style, skip=None): - out = self.conv(input, style) - out = out + self.bias - - if skip is not None: - skip = self.upsample(skip) - - out = out + skip - - return out - -# Wrapper that gives name to tensor -class NamedTensor(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, x): - return x - -# Give each style a unique name -class StridedStyle(nn.ModuleList): - def __init__(self, n_latents): - super().__init__([NamedTensor() for _ in range(n_latents)]) - self.n_latents = n_latents - - def forward(self, x): - # x already strided - styles = [self[i](x[:, i, :]) for i in range(self.n_latents)] - return torch.stack(styles, dim=1) - -class Generator(nn.Module): - def __init__( - self, - size, - style_dim, - n_mlp, - channel_multiplier=2, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - ): - super().__init__() - - self.size = size - - self.style_dim = style_dim - - layers = [PixelNorm()] - - for i in range(n_mlp): - layers.append( - EqualLinear( - style_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu' - ) - ) - - self.style = nn.Sequential(*layers) - - self.channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * 
channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - self.input = ConstantInput(self.channels[4]) - self.conv1 = StyledConv( - self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel - ) - self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False) - - self.log_size = int(math.log(size, 2)) - self.num_layers = (self.log_size - 2) * 2 + 1 - - self.convs = nn.ModuleList() - self.upsamples = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - self.noises = nn.Module() - - in_channel = self.channels[4] - - for layer_idx in range(self.num_layers): - res = (layer_idx + 5) // 2 - shape = [1, 1, 2 ** res, 2 ** res] - self.noises.register_buffer(f'noise_{layer_idx}', torch.randn(*shape)) - - for i in range(3, self.log_size + 1): - out_channel = self.channels[2 ** i] - - self.convs.append( - StyledConv( - in_channel, - out_channel, - 3, - style_dim, - upsample=True, - blur_kernel=blur_kernel, - ) - ) - - self.convs.append( - StyledConv( - out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel - ) - ) - - self.to_rgbs.append(ToRGB(out_channel, style_dim)) - - in_channel = out_channel - - self.n_latent = self.log_size * 2 - 2 - self.strided_style = StridedStyle(self.n_latent) - - def make_noise(self): - device = self.input.input.device - - noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)] - - for i in range(3, self.log_size + 1): - for _ in range(2): - noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device)) - - return noises - - def mean_latent(self, n_latent): - latent_in = torch.randn( - n_latent, self.style_dim, device=self.input.input.device - ) - latent = self.style(latent_in).mean(0, keepdim=True) - - return latent - - def get_latent(self, input): - return self.style(input) - - def forward( - self, - styles, - return_latents=False, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_w=False, - noise=None, - randomize_noise=True, - ): - if not input_is_w: - styles = [self.style(s) for s in styles] - - if noise is None: - if randomize_noise: - noise = [None] * self.num_layers - else: - noise = [ - getattr(self.noises, f'noise_{i}') for i in range(self.num_layers) - ] - - if truncation < 1: - style_t = [] - - for style in styles: - style_t.append( - truncation_latent + truncation * (style - truncation_latent) - ) - - styles = style_t - - if len(styles) == 1: - # One global latent - inject_index = self.n_latent - - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - - else: - latent = styles[0] - - elif len(styles) == 2: - # Latent mixing with two latents - if inject_index is None: - inject_index = random.randint(1, self.n_latent - 1) - - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1) - - latent = self.strided_style(torch.cat([latent, latent2], 1)) - else: - # One latent per layer - assert len(styles) == self.n_latent, f'Expected {self.n_latents} latents, got {len(styles)}' - styles = torch.stack(styles, dim=1) # [N, 18, 512] - latent = self.strided_style(styles) - - out = self.input(latent) - out = self.conv1(out, latent[:, 0], noise=noise[0]) - - skip = self.to_rgb1(out, latent[:, 1]) - - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip( - self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs - ): - out = conv1(out, latent[:, i], noise=noise1) - out = conv2(out, latent[:, i + 1], noise=noise2) - skip = 
to_rgb(out, latent[:, i + 2], skip) - - i += 2 - - image = skip - - if return_latents: - return image, latent - - else: - return image, None - - -class ConvLayer(nn.Sequential): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - downsample=False, - blur_kernel=[1, 3, 3, 1], - bias=True, - activate=True, - ): - layers = [] - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - layers.append(Blur(blur_kernel, pad=(pad0, pad1))) - - stride = 2 - self.padding = 0 - - else: - stride = 1 - self.padding = kernel_size // 2 - - layers.append( - EqualConv2d( - in_channel, - out_channel, - kernel_size, - padding=self.padding, - stride=stride, - bias=bias and not activate, - ) - ) - - if activate: - if bias: - layers.append(FusedLeakyReLU(out_channel)) - - else: - layers.append(ScaledLeakyReLU(0.2)) - - super().__init__(*layers) - - -class ResBlock(nn.Module): - def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - self.conv1 = ConvLayer(in_channel, in_channel, 3) - self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True) - - self.skip = ConvLayer( - in_channel, out_channel, 1, downsample=True, activate=False, bias=False - ) - - def forward(self, input): - out = self.conv1(input) - out = self.conv2(out) - - skip = self.skip(input) - out = (out + skip) / math.sqrt(2) - - return out - - -class Discriminator(nn.Module): - def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - convs = [ConvLayer(3, channels[size], 1)] - - log_size = int(math.log(size, 2)) - - in_channel = channels[size] - - for i in range(log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - - convs.append(ResBlock(in_channel, out_channel, blur_kernel)) - - in_channel = out_channel - - self.convs = nn.Sequential(*convs) - - self.stddev_group = 4 - self.stddev_feat = 1 - - self.final_conv = ConvLayer(in_channel + 1, channels[4], 3) - self.final_linear = nn.Sequential( - EqualLinear(channels[4] * 4 * 4, channels[4], activation='fused_lrelu'), - EqualLinear(channels[4], 1), - ) - - def forward(self, input): - out = self.convs(input) - - batch, channel, height, width = out.shape - group = min(batch, self.stddev_group) - stddev = out.view( - group, -1, self.stddev_feat, channel // self.stddev_feat, height, width - ) - stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8) - stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2) - stddev = stddev.repeat(group, 1, height, width) - out = torch.cat([out, stddev], 1) - - out = self.final_conv(out) - - out = out.view(batch, -1) - out = self.final_linear(out) - - return out - diff --git a/spaces/mikeee/convbot/convbot/convbot.py b/spaces/mikeee/convbot/convbot/convbot.py deleted file mode 100644 index 8bb32be13e01d0148c6c60a54ea6ec69adb1748b..0000000000000000000000000000000000000000 --- a/spaces/mikeee/convbot/convbot/convbot.py +++ /dev/null @@ -1,145 +0,0 @@ -"""Generate a response.""" -# pylint:disable=line-too-long, too-many-argument -import torch -from logzero import logger -from transformers import AutoModelForCausalLM, AutoTokenizer - -from .force_async import force_async - -# model_name = "microsoft/DialoGPT-large" -# model_name = "microsoft/DialoGPT-small" -# pylint: 
disable=invalid-name -model_name = "microsoft/DialoGPT-medium" -tokenizer = AutoTokenizer.from_pretrained(model_name) -model = AutoModelForCausalLM.from_pretrained(model_name) - - -def _convbot( - text: str, - max_length: int = 1000, - do_sample: bool = True, - top_p: float = 0.95, - top_k: int = 0, - temperature: float = 0.75, -) -> str: - """Generate a reponse. - - Args - n_retires: retry if response is "" or the same as previouse resp. - - Returns - reply - """ - try: - chat_history_ids = _convbot.chat_history_ids - except AttributeError: - chat_history_ids = "" - - try: - chat_history_ids = _convbot.chat_history_ids - except AttributeError: - chat_history_ids = "" - - input_ids = tokenizer.encode(text + tokenizer.eos_token, return_tensors="pt") - if isinstance(chat_history_ids, torch.Tensor): - bot_input_ids = torch.cat([chat_history_ids, input_ids], dim=-1) - else: - bot_input_ids = input_ids - - # generate a bot response - chat_history_ids = model.generate( - bot_input_ids, - max_length=max_length, - do_sample=do_sample, - top_p=top_p, - top_k=top_k, - temperature=temperature, - pad_token_id=tokenizer.eos_token_id, - ) - - output = tokenizer.decode( - chat_history_ids[:, bot_input_ids.shape[-1] :][0], skip_special_tokens=True - ) - _convbot.chat_history_ids = chat_history_ids - - return output - - -def convbot( - text: str, - n_retries: int = 3, - max_length: int = 1000, - do_sample: bool = True, - top_p: float = 0.95, - top_k: int = 0, - temperature: float = 0.75, -) -> str: - """Generate a response.""" - try: - n_retries = int(n_retries) - except Exception as e: - logger.error(e) - raise - try: - prev_resp = convbot.prev_resp - except AttributeError: - prev_resp = "" - - resp = _convbot(text, max_length, do_sample, top_p, top_k, temperature) - - # retry n_retries if resp is empty - if not resp.strip(): - idx = 0 - while idx < n_retries: - idx += 1 - _convbot.chat_history_ids = "" - resp = _convbot(text, max_length, do_sample, top_p, top_k, temperature) - if resp.strip(): - break - else: - logger.warning("bot acting up (empty response), something has gone awry") - - # check repeated responses - if resp.strip() == prev_resp: - idx = 0 - while idx < n_retries: - idx += 1 - resp = _convbot(text, max_length, do_sample, top_p, top_k, temperature) - if resp.strip() != prev_resp: - break - else: - logger.warning("bot acting up (repeating), something has gone awry") - - convbot.prev_resp = resp - - return resp - - -@force_async -def aconvbot( - text: str, - n_retries: int = 3, - max_length: int = 1000, - do_sample: bool = True, - top_p: float = 0.95, - top_k: int = 0, - temperature: float = 0.75, -) -> str: - try: - _ = convbot(text, n_retries, max_length, do_sample, top_p, top_k, temperature) - except Exception as e: - logger.error(e) - raise - return _ - - -def main(): - print("Bot: Talk to me") - while 1: - text = input("You: ") - resp = _convbot(text) - print("Bot: ", resp) - - -if __name__ == "__main__": - main() diff --git a/spaces/mikeee/radiobee-aligner/radiobee/gen_pset.py b/spaces/mikeee/radiobee-aligner/radiobee/gen_pset.py deleted file mode 100644 index aac4eb9f77b45558f4b6e91cea341f6818204203..0000000000000000000000000000000000000000 --- a/spaces/mikeee/radiobee-aligner/radiobee/gen_pset.py +++ /dev/null @@ -1,184 +0,0 @@ -"""Gne pset from cmat. Find pairs for a given cmat. 
- -tinybee.find_pairs.py with fixed estimator='dbscan' eps=eps, min_samples=min_samples -""" -# pylint: disable=too-many-locals, unused-import, invalid-name - -from typing import List, Tuple, Union - -import numpy as np -import pandas as pd -from sklearn.cluster import DBSCAN -import logzero -from logzero import logger -from radiobee.cmat2tset import cmat2tset -from radiobee.interpolate_pset import interpolate_pset - - -def _gen_pset( - cmat1: Union[List[List[float]], np.ndarray, pd.DataFrame], - eps: float = 10, - min_samples: int = 6, - delta: float = 7, - verbose: Union[bool, int] = False, - # ) -> List[Tuple[int, int, Union[float, str]]]: -) -> List[Tuple[Union[float, str], Union[float, str], Union[float, str]]]: - """Gen pset from cmat. - - Find pairs for a given cmat. - - Args: - cmat: correlation/similarity matrix - eps: min epsilon for DBSCAN (10) - min_samples: minimum # of samples for DBSCAN (6) - delta: tolerance (7) - - Returns: - pairs + "" or metric (float) - - dbscan_pairs' setup - if eps is None: - eps = src_len * .01 - if eps < 3: - eps = 3 - if min_samples is None: - min_samples = tgt_len / 100 * 0.5 - if min_samples < 3: - min_samples = 3 - - def gen_eps_minsamples(src_len, tgt_len): - eps = src_len * .01 - if eps < 3: - eps = 3 - - min_samples = tgt_len / 100 * 0.5 - if min_samples < 3: - min_samples = 3 - return {"eps": eps, "min_samples": min_samples} - - """ - if isinstance(verbose, bool): - if verbose: - verbose = 10 - else: - verbose = 20 - logzero.loglevel(verbose) - - # if isinstance(cmat, list): - cmat = np.array(cmat1) - - src_len, tgt_len = cmat.shape - - # tset = cmat2tset(cmat) - tset = cmat2tset(cmat).tolist() - - logger.debug("tset: %s", tset) - - # iset = gen_iset(cmat, verbose=verbose, estimator=estimator) - labels = DBSCAN(eps=eps, min_samples=min_samples).fit(tset).labels_ - - df_tset = pd.DataFrame(tset, columns=["x", "y", "cos"]) - cset = df_tset[labels > -1].to_numpy() - - # sort cset - _ = sorted(cset.tolist(), key=lambda x: x[0]) - iset = interpolate_pset(_, tgt_len) - - # *_, ymax = zip(*tset) - # ymax = list(ymax) - # low_ = np.min(ymax) - 1 # reset to minimum_value - 1 - - buff = [(-1, -1, ""), (tgt_len, src_len, "")] - - # for idx, tset_elm in enumerate(tset): - for tset_elm in tset: - logger.debug("buff: %s", buff) - # postion max in ymax and insert in buff - # if with range given by iset+-delta and - # it's valid (do not exceed constraint - # by neighboring points - - # argmax = int(np.argmax(ymax)) - - # logger.debug("=== %s,%s === %s", _, argmax, tset[_]) - logger.debug("=== %s === %s", _, tset_elm) - - # ymax[_] = low_ - # elm = tset[argmax] - # elm0, *_ = elm - - elm0, *_ = tset_elm - - # position elm in buff - idx = -1 # for making pyright happy - for idx, loc in enumerate(buff): - if loc[0] > elm0: - break - else: - idx += 1 # last - - # insert elm in for valid elm - # (within range inside two neighboring points) - - # pos = int(tset[argmax][0]) - pos = int(tset_elm[0]) - logger.debug(" %s <=> %s ", tset_elm, iset[pos]) - - # if abs(tset[argmax][1] - iset[pos][1]) <= delta: - if abs(tset_elm[1] - iset[pos][1]) <= delta: - if tset_elm[1] > buff[idx - 1][1] and tset_elm[1] < buff[idx][1]: - buff.insert(idx, tset_elm) - logger.debug("idx: %s, tset_elm: %s", idx, tset_elm) - else: - logger.debug("\t***\t idx: %s, tset_elm: %s", idx, tset_elm) - _ = """ - if abs(tset[loc][1] - iset[loc][1]) <= delta: - if tset[loc][1] > buff[idx][1] and tset[loc][1] < buff[idx + 1][1]: - buff.insert(idx + 1, tset[loc]) - # """ - - # remove first and last 
entry in buff - buff.pop(0) - buff.pop() - - # return [(1, 1, "")] - return [(int(elm0), int(elm1), elm2) for elm0, elm1, elm2 in buff] - - -def gen_pset( - cmat1: Union[List[List[float]], np.ndarray, pd.DataFrame], - eps: float = 10, - min_samples: int = 6, - delta: float = 7, - verbose: Union[bool, int] = False, -) -> List[Tuple[Union[float, str], Union[float, str], Union[float, str]]]: - """Gen pset. - - Refer to _gen_pset. - """ - del verbose - gen_pset.min_samples = min_samples - for min_s in range(min_samples): - logger.debug(" min_samples, try %s", min_samples - min_s) - try: - pset = _gen_pset( - cmat1, - eps=eps, - min_samples=min_samples - min_s, - delta=delta, - ) - break - except ValueError: - logger.debug(" decrease min_samples by %s", min_s + 1) - continue - except Exception as e: - logger.error(e) - continue - else: - # break should happen above when min_samples = 2 - raise Exception("bummer, this shouldn't happen, probably another bug") - - # store new min_samples - gen_pset.min_samples = min_samples - min_s - - return pset diff --git a/spaces/miku-hutao/vits-uma-genshin-honkai/monotonic_align/__init__.py b/spaces/miku-hutao/vits-uma-genshin-honkai/monotonic_align/__init__.py deleted file mode 100644 index e97eecc595dd3bd97d0104ec62799e2e5efea57c..0000000000000000000000000000000000000000 --- a/spaces/miku-hutao/vits-uma-genshin-honkai/monotonic_align/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -from numpy import zeros, int32, float32 -from torch import from_numpy - -from .core import maximum_path_jit - - -def maximum_path(neg_cent, mask): - """ numba optimized version. - neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(float32) - path = zeros(neg_cent.shape, dtype=int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32) - maximum_path_jit(path, neg_cent, t_t_max, t_s_max) - return from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/mjdolan/Holiday-StyleGAN-NADA/e4e/options/train_options.py b/spaces/mjdolan/Holiday-StyleGAN-NADA/e4e/options/train_options.py deleted file mode 100644 index 583ea1423fdc9a649cd7044d74d554bf0ac2bf51..0000000000000000000000000000000000000000 --- a/spaces/mjdolan/Holiday-StyleGAN-NADA/e4e/options/train_options.py +++ /dev/null @@ -1,84 +0,0 @@ -from argparse import ArgumentParser -from configs.paths_config import model_paths - - -class TrainOptions: - - def __init__(self): - self.parser = ArgumentParser() - self.initialize() - - def initialize(self): - self.parser.add_argument('--exp_dir', type=str, help='Path to experiment output directory') - self.parser.add_argument('--dataset_type', default='ffhq_encode', type=str, - help='Type of dataset/experiment to run') - self.parser.add_argument('--encoder_type', default='Encoder4Editing', type=str, help='Which encoder to use') - - self.parser.add_argument('--batch_size', default=4, type=int, help='Batch size for training') - self.parser.add_argument('--test_batch_size', default=2, type=int, help='Batch size for testing and inference') - self.parser.add_argument('--workers', default=4, type=int, help='Number of train dataloader workers') - self.parser.add_argument('--test_workers', default=2, type=int, - help='Number of test/inference dataloader workers') - - self.parser.add_argument('--learning_rate', default=0.0001, type=float, help='Optimizer learning rate') - self.parser.add_argument('--optim_name', 
default='ranger', type=str, help='Which optimizer to use') - self.parser.add_argument('--train_decoder', default=False, type=bool, help='Whether to train the decoder model') - self.parser.add_argument('--start_from_latent_avg', action='store_true', - help='Whether to add average latent vector to generate codes from encoder.') - self.parser.add_argument('--lpips_type', default='alex', type=str, help='LPIPS backbone') - - self.parser.add_argument('--lpips_lambda', default=0.8, type=float, help='LPIPS loss multiplier factor') - self.parser.add_argument('--id_lambda', default=0.1, type=float, help='ID loss multiplier factor') - self.parser.add_argument('--l2_lambda', default=1.0, type=float, help='L2 loss multiplier factor') - - self.parser.add_argument('--stylegan_weights', default=model_paths['stylegan_ffhq'], type=str, - help='Path to StyleGAN model weights') - self.parser.add_argument('--stylegan_size', default=1024, type=int, - help='size of pretrained StyleGAN Generator') - self.parser.add_argument('--checkpoint_path', default=None, type=str, help='Path to pSp model checkpoint') - - self.parser.add_argument('--max_steps', default=500000, type=int, help='Maximum number of training steps') - self.parser.add_argument('--image_interval', default=100, type=int, - help='Interval for logging train images during training') - self.parser.add_argument('--board_interval', default=50, type=int, - help='Interval for logging metrics to tensorboard') - self.parser.add_argument('--val_interval', default=1000, type=int, help='Validation interval') - self.parser.add_argument('--save_interval', default=None, type=int, help='Model checkpoint interval') - - # Discriminator flags - self.parser.add_argument('--w_discriminator_lambda', default=0, type=float, help='Dw loss multiplier') - self.parser.add_argument('--w_discriminator_lr', default=2e-5, type=float, help='Dw learning rate') - self.parser.add_argument("--r1", type=float, default=10, help="weight of the r1 regularization") - self.parser.add_argument("--d_reg_every", type=int, default=16, - help="interval for applying r1 regularization") - self.parser.add_argument('--use_w_pool', action='store_true', - help='Whether to store a latnet codes pool for the discriminator\'s training') - self.parser.add_argument("--w_pool_size", type=int, default=50, - help="W\'s pool size, depends on --use_w_pool") - - # e4e specific - self.parser.add_argument('--delta_norm', type=int, default=2, help="norm type of the deltas") - self.parser.add_argument('--delta_norm_lambda', type=float, default=2e-4, help="lambda for delta norm loss") - - # Progressive training - self.parser.add_argument('--progressive_steps', nargs='+', type=int, default=None, - help="The training steps of training new deltas. 
steps[i] starts the delta_i training") - self.parser.add_argument('--progressive_start', type=int, default=None, - help="The training step to start training the deltas, overrides progressive_steps") - self.parser.add_argument('--progressive_step_every', type=int, default=2_000, - help="Amount of training steps for each progressive step") - - # Save additional training info to enable future training continuation from produced checkpoints - self.parser.add_argument('--save_training_data', action='store_true', - help='Save intermediate training data to resume training from the checkpoint') - self.parser.add_argument('--sub_exp_dir', default=None, type=str, help='Name of sub experiment directory') - self.parser.add_argument('--keep_optimizer', action='store_true', - help='Whether to continue from the checkpoint\'s optimizer') - self.parser.add_argument('--resume_training_from_ckpt', default=None, type=str, - help='Path to training checkpoint, works when --save_training_data was set to True') - self.parser.add_argument('--update_param_list', nargs='+', type=str, default=None, - help="Name of training parameters to update the loaded training checkpoint") - - def parse(self): - opts = self.parser.parse_args() - return opts diff --git a/spaces/mlnotes/borrador_constitucion_chile/README.md b/spaces/mlnotes/borrador_constitucion_chile/README.md deleted file mode 100644 index 305b95b17923f8b73894287976787dbe398204be..0000000000000000000000000000000000000000 --- a/spaces/mlnotes/borrador_constitucion_chile/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Borrador Constitucion Chile -emoji: 🌖 -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.0.15 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/moadams/rainbowRainClassificationAPP/rainbowrain_env/Scripts/deactivate.bat b/spaces/moadams/rainbowRainClassificationAPP/rainbowrain_env/Scripts/deactivate.bat deleted file mode 100644 index 62a39a7584f4d7c5fbc31758e3e9e7eff700276d..0000000000000000000000000000000000000000 --- a/spaces/moadams/rainbowRainClassificationAPP/rainbowrain_env/Scripts/deactivate.bat +++ /dev/null @@ -1,22 +0,0 @@ -@echo off - -if defined _OLD_VIRTUAL_PROMPT ( - set "PROMPT=%_OLD_VIRTUAL_PROMPT%" -) -set _OLD_VIRTUAL_PROMPT= - -if defined _OLD_VIRTUAL_PYTHONHOME ( - set "PYTHONHOME=%_OLD_VIRTUAL_PYTHONHOME%" - set _OLD_VIRTUAL_PYTHONHOME= -) - -if defined _OLD_VIRTUAL_PATH ( - set "PATH=%_OLD_VIRTUAL_PATH%" -) - -set _OLD_VIRTUAL_PATH= - -set VIRTUAL_ENV= -set VIRTUAL_ENV_PROMPT= - -:END diff --git a/spaces/motleykrug/README/README.md b/spaces/motleykrug/README/README.md deleted file mode 100644 index 4ec3bc0f7cfd2daea74f389f42b1e2a667376888..0000000000000000000000000000000000000000 --- a/spaces/motleykrug/README/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: README -emoji: 🌖 -colorFrom: pink -colorTo: red -sdk: static -pinned: false ---- - -Edit this `README.md` markdown file to author your organization card. 
diff --git a/spaces/mounikakadimi28/ml_salary_prediction/app.py b/spaces/mounikakadimi28/ml_salary_prediction/app.py deleted file mode 100644 index 8c3e7d835430b217be82204826344f4ae067e62e..0000000000000000000000000000000000000000 --- a/spaces/mounikakadimi28/ml_salary_prediction/app.py +++ /dev/null @@ -1,35 +0,0 @@ -# importing the gradio -import gradio as gr -import pandas as pd -import pickle -import sklearn - -# defining the function for interface -def ml_model(age,exp,gender,edu): - age = age - exp = exp - if gender=='Male': - value=1 - else: - value=0 - masters_value=0 - phd_value=0 - - if edu=="Master's": - masters_value=1 - elif edu=='PhD': - phd_value=1 - df=pd.DataFrame({'Age':[age],'Years of Experience':[exp],'Gender_Male':[value],"Education Level_Master's":[masters_value],'Education Level_PhD':[phd_value]}) - with open('model_creation/lr_sal_pred.pkl', 'rb') as f: - model = pickle.load(f) - pred=model.predict(df) - result = pred[0] - result = (int(pred/1000))*1000 - return f'Your expected Salary is : {result}' - -interface=gr.Interface(title="Let's Predict My Salary ",description='Interface developed by Mounika and Ganesh',fn=ml_model, - inputs=[gr.components.Number(label='Enter your Age.....'),gr.components.Number(label='Enter your Experience.....'),gr.components.Radio(label='Select your Gender....',choices=['Male','Female']),gr.components.Dropdown(label='Select your highest Qualification',choices=['PhD',"Master's",'Bachelors'])], - outputs='text', - theme='freddyaboulton/test-blue') -interface.launch() - diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/multilingual/data_scripts/check_iswlt_test_data.py b/spaces/mshukor/UnIVAL/fairseq/examples/multilingual/data_scripts/check_iswlt_test_data.py deleted file mode 100644 index f8e2eb0f15699f1b458a8445d0c1dd6229a21f77..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/multilingual/data_scripts/check_iswlt_test_data.py +++ /dev/null @@ -1,67 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import os, sys -import subprocess -import re -from subprocess import check_call, check_output - -WORKDIR_ROOT = os.environ.get('WORKDIR_ROOT', None) - -if WORKDIR_ROOT is None or not WORKDIR_ROOT.strip(): - print('please specify your working directory root in OS environment variable WORKDIR_ROOT. Exitting..."') - sys.exit(-1) - - -BLEU_REGEX = re.compile("^BLEU\\S* = (\\S+) ") -def run_eval_bleu(cmd): - output = check_output(cmd, shell=True, stderr=subprocess.STDOUT).decode("utf-8").strip() - print(output) - bleu = -1.0 - for line in output.strip().split('\n'): - m = BLEU_REGEX.search(line) - if m is not None: - bleu = m.groups()[0] - bleu = float(bleu) - break - return bleu - -def check_data_test_bleu(raw_folder, data_lang_pairs): - not_matchings = [] - for sacrebleu_set, src_tgts in data_lang_pairs: - for src_tgt in src_tgts: - print(f'checking test bleus for: {src_tgt} at {sacrebleu_set}') - src, tgt = src_tgt.split('-') - ssrc, stgt = src[:2], tgt[:2] - if os.path.exists(f'{raw_folder}/test.{tgt}-{src}.{src}'): - # reversed direction may have different test set - test_src = f'{raw_folder}/test.{tgt}-{src}.{src}' - else: - test_src = f'{raw_folder}/test.{src}-{tgt}.{src}' - cmd1 = f'cat {test_src} | sacrebleu -t "{sacrebleu_set}" -l {stgt}-{ssrc}; [ $? 
-eq 0 ] || echo ""' - test_tgt = f'{raw_folder}/test.{src}-{tgt}.{tgt}' - cmd2 = f'cat {test_tgt} | sacrebleu -t "{sacrebleu_set}" -l {ssrc}-{stgt}; [ $? -eq 0 ] || echo ""' - bleu1 = run_eval_bleu(cmd1) - if bleu1 != 100.0: - not_matchings.append(f'{sacrebleu_set}:{src_tgt} source side not matching: {test_src}') - bleu2 = run_eval_bleu(cmd2) - if bleu2 != 100.0: - not_matchings.append(f'{sacrebleu_set}:{src_tgt} target side not matching: {test_tgt}') - return not_matchings - -if __name__ == "__main__": - to_data_path = f'{WORKDIR_ROOT}/iwsltv2' - not_matching = check_data_test_bleu( - f'{to_data_path}/raw', - [ - ('iwslt17', ['en_XX-ar_AR', 'en_XX-ko_KR', 'ar_AR-en_XX', 'ko_KR-en_XX']), - ('iwslt17', ['en_XX-it_IT', 'en_XX-nl_XX', 'it_IT-en_XX', 'nl_XX-en_XX']), - ('iwslt17/tst2015', ['en_XX-vi_VN', "vi_VN-en_XX"]), - ] - ) - if len(not_matching) > 0: - print('the following datasets do not have matching test datasets:\n\t', '\n\t'.join(not_matching)) - diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/translation_moe/translation_moe_src/translation_moe.py b/spaces/mshukor/UnIVAL/fairseq/examples/translation_moe/translation_moe_src/translation_moe.py deleted file mode 100644 index 7f28c32dd6152f53d6922cdfccfa903e0bdc5829..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/translation_moe/translation_moe_src/translation_moe.py +++ /dev/null @@ -1,258 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass, field -import torch -from omegaconf import II - -from fairseq import metrics, utils -from fairseq.dataclass import ChoiceEnum -from fairseq.tasks import register_task -from fairseq.tasks.translation import TranslationConfig, TranslationTask - -from .logsumexp_moe import LogSumExpMoE -from .mean_pool_gating_network import MeanPoolGatingNetwork - - -METHOD_CHOICES = ChoiceEnum(["sMoElp", "sMoEup", "hMoElp", "hMoEup"]) - - -@dataclass -class TranslationMoEConfig(TranslationConfig): - method: METHOD_CHOICES = field( - default="hMoEup", - metadata={"help": "MoE method"}, - ) - num_experts: int = field( - default=3, - metadata={"help": "number of experts"}, - ) - mean_pool_gating_network: bool = field( - default=False, - metadata={"help": "use a simple mean-pooling gating network"}, - ) - mean_pool_gating_network_dropout: float = field( - default=0, - metadata={"help": "dropout for mean-pooling gating network"}, - ) - mean_pool_gating_network_encoder_dim: int = field( - default=0, - metadata={"help": "encoder output dim for mean-pooling gating network"}, - ) - gen_expert: int = field( - default=0, - metadata={"help": "which expert to use for generation"}, - ) - sentence_avg: bool = II("optimization.sentence_avg") - - -@register_task("translation_moe", dataclass=TranslationMoEConfig) -class TranslationMoETask(TranslationTask): - """ - Translation task for Mixture of Experts (MoE) models. - - See `"Mixture Models for Diverse Machine Translation: Tricks of the Trade" - (Shen et al., 2019) `_. - - Args: - src_dict (~fairseq.data.Dictionary): dictionary for the source language - tgt_dict (~fairseq.data.Dictionary): dictionary for the target language - - .. note:: - - The translation task is compatible with :mod:`fairseq-train`, - :mod:`fairseq-generate` and :mod:`fairseq-interactive`. - - The translation task provides the following additional command-line - arguments: - - .. 
argparse:: - :ref: fairseq.tasks.translation_parser - :prog: - """ - - cfg: TranslationMoEConfig - - def __init__(self, cfg: TranslationMoEConfig, src_dict, tgt_dict): - if cfg.method == "sMoElp": - # soft MoE with learned prior - self.uniform_prior = False - self.hard_selection = False - elif cfg.method == "sMoEup": - # soft MoE with uniform prior - self.uniform_prior = True - self.hard_selection = False - elif cfg.method == "hMoElp": - # hard MoE with learned prior - self.uniform_prior = False - self.hard_selection = True - elif cfg.method == "hMoEup": - # hard MoE with uniform prior - self.uniform_prior = True - self.hard_selection = True - - # add indicator tokens for each expert - for i in range(cfg.num_experts): - # add to both dictionaries in case we're sharing embeddings - src_dict.add_symbol("".format(i)) - tgt_dict.add_symbol("".format(i)) - - super().__init__(cfg, src_dict, tgt_dict) - - def build_model(self, cfg): - from fairseq import models - - model = models.build_model(cfg, self) - if not self.uniform_prior and not hasattr(model, "gating_network"): - if self.cfg.mean_pool_gating_network: - if self.cfg.mean_pool_gating_network_encoder_dim > 0: - encoder_dim = self.cfg.mean_pool_gating_network_encoder_dim - elif getattr(cfg, "encoder_embed_dim", None): - # assume that encoder_embed_dim is the encoder's output dimension - encoder_dim = cfg.encoder_embed_dim - else: - raise ValueError( - "Must specify --mean-pool-gating-network-encoder-dim" - ) - - if self.cfg.mean_pool_gating_network_dropout > 0: - dropout = self.cfg.mean_pool_gating_network_dropout - elif getattr(cfg, "dropout", None): - dropout = cfg.dropout - else: - raise ValueError("Must specify task.mean_pool_gating_network_dropout") - - model.gating_network = MeanPoolGatingNetwork( - encoder_dim, - self.cfg.num_experts, - dropout, - ) - else: - raise ValueError( - "translation_moe task with learned prior requires the model to " - "have a gating network; try using --mean-pool-gating-network" - ) - return model - - def expert_index(self, i): - return i + self.tgt_dict.index("") - - def _get_loss(self, sample, model, criterion): - assert hasattr( - criterion, "compute_loss" - ), "translation_moe task requires the criterion to implement the compute_loss() method" - - k = self.cfg.num_experts - bsz = sample["target"].size(0) - - def get_lprob_y(encoder_out, prev_output_tokens_k): - net_output = model.decoder( - prev_output_tokens=prev_output_tokens_k, - encoder_out=encoder_out, - ) - loss, _ = criterion.compute_loss(model, net_output, sample, reduce=False) - loss = loss.view(bsz, -1) - return -loss.sum(dim=1, keepdim=True) # -> B x 1 - - def get_lprob_yz(winners=None): - encoder_out = model.encoder( - src_tokens=sample["net_input"]["src_tokens"], - src_lengths=sample["net_input"]["src_lengths"], - ) - - if winners is None: - lprob_y = [] - for i in range(k): - prev_output_tokens_k = sample["net_input"][ - "prev_output_tokens" - ].clone() - assert not prev_output_tokens_k.requires_grad - prev_output_tokens_k[:, 0] = self.expert_index(i) - lprob_y.append(get_lprob_y(encoder_out, prev_output_tokens_k)) - lprob_y = torch.cat(lprob_y, dim=1) # -> B x K - else: - prev_output_tokens_k = sample["net_input"]["prev_output_tokens"].clone() - prev_output_tokens_k[:, 0] = self.expert_index(winners) - lprob_y = get_lprob_y(encoder_out, prev_output_tokens_k) # -> B - - if self.uniform_prior: - lprob_yz = lprob_y - else: - lprob_z = model.gating_network(encoder_out) # B x K - if winners is not None: - lprob_z = lprob_z.gather(dim=1, 
index=winners.unsqueeze(-1)) - lprob_yz = lprob_y + lprob_z.type_as(lprob_y) # B x K - - return lprob_yz - - # compute responsibilities without dropout - with utils.model_eval(model): # disable dropout - with torch.no_grad(): # disable autograd - lprob_yz = get_lprob_yz() # B x K - prob_z_xy = torch.nn.functional.softmax(lprob_yz, dim=1) - assert not prob_z_xy.requires_grad - - # compute loss with dropout - if self.hard_selection: - winners = prob_z_xy.max(dim=1)[1] - loss = -get_lprob_yz(winners) - else: - lprob_yz = get_lprob_yz() # B x K - loss = -LogSumExpMoE.apply(lprob_yz, prob_z_xy, 1) - - loss = loss.sum() - sample_size = ( - sample["target"].size(0) if self.cfg.sentence_avg else sample["ntokens"] - ) - logging_output = { - "loss": utils.item(loss.data), - "ntokens": sample["ntokens"], - "nsentences": bsz, - "sample_size": sample_size, - "posterior": prob_z_xy.float().sum(dim=0).cpu(), - } - return loss, sample_size, logging_output - - def train_step( - self, sample, model, criterion, optimizer, update_num, ignore_grad=False - ): - model.train() - loss, sample_size, logging_output = self._get_loss(sample, model, criterion) - if ignore_grad: - loss *= 0 - optimizer.backward(loss) - return loss, sample_size, logging_output - - def valid_step(self, sample, model, criterion): - model.eval() - with torch.no_grad(): - loss, sample_size, logging_output = self._get_loss(sample, model, criterion) - return loss, sample_size, logging_output - - def inference_step( - self, - generator, - models, - sample, - prefix_tokens=None, - expert=None, - constraints=None, - ): - expert = expert or self.cfg.gen_expert - with torch.no_grad(): - return generator.generate( - models, - sample, - prefix_tokens=prefix_tokens, - constraints=constraints, - bos_token=self.expert_index(expert), - ) - - def reduce_metrics(self, logging_outputs, criterion): - super().reduce_metrics(logging_outputs, criterion) - metrics.log_scalar( - "posterior", - sum(log["posterior"] for log in logging_outputs if "posterior" in log), - ) diff --git a/spaces/mshukor/UnIVAL/run_scripts/averaging/ratatouille/scaling_best/caption/video/unival_video_caption_stage_1_initvideoqa.sh b/spaces/mshukor/UnIVAL/run_scripts/averaging/ratatouille/scaling_best/caption/video/unival_video_caption_stage_1_initvideoqa.sh deleted file mode 100644 index 595cad57f0dbdd79a6d7d8b6eec4f2dd957fb9ce..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/run_scripts/averaging/ratatouille/scaling_best/caption/video/unival_video_caption_stage_1_initvideoqa.sh +++ /dev/null @@ -1,211 +0,0 @@ - - -# Number of GPUs per GPU worker -export GPUS_PER_NODE=8 -# Number of GPU workers, for single-worker training, please set to 1 -export NUM_NODES=$SLURM_NNODES -# The ip address of the rank-0 worker, for single-worker training, please set to localhost -master_addr=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1) -export MASTER_ADDR=$master_addr - -# The port for communication -export MASTER_PORT=12350 -# The rank of this worker, should be in {0, ..., WORKER_CNT-1}, for single-worker training, please set to 0 -export RANK=$SLURM_NODEID - -echo "MASTER_ADDR: $MASTER_ADDR" -echo "RANK :$RANK" -echo "NUM_NODES :$NUM_NODES" -echo "GPUS_PER_NODE :$GPUS_PER_NODE" - -export MIOPEN_USER_DB_PATH=/lus/home/NAT/gda2204/mshukor/.config/miopen_${MASTER_ADDR}_${SLURM_PROCID}/ - -echo "MIOPEN_USER_DB_PATH :$MIOPEN_USER_DB_PATH" - -num_workers=0 - -exp_name=unival_video_caption_stage_1_initvideoqa - - -ofa_dir=/lus/home/NAT/gda2204/mshukor/code/unival 
-base_data_dir=/lus/scratch/NAT/gda2204/SHARED/data -base_log_dir=/work/NAT/gda2204/mshukor/logs - -save_base_log_dir=/lus/scratch/NAT/gda2204/SHARED/logs -save_dir=${save_base_log_dir}/ofa/checkpoints/caption/${exp_name} - -log_dir=${save_dir} - -mkdir -p $log_dir $save_dir - -bpe_dir=${ofa_dir}/utils/BPE -user_dir=${ofa_dir}/ofa_module - - - -image_dir=${base_data_dir} - - -data_dir=${base_data_dir}/ofa/video_data/caption_data -data=${data_dir}/msrvtt_caption_train7k.tsv,${data_dir}/msrvtt_caption_test3k.tsv -eval_cider_cached=${data_dir}/cider_cached_tokens/msrvtt-test3k-words.p - -data=${data_dir}/msrvtt_caption_train7k_1.tsv,${data_dir}/msrvtt_caption_train7k_2.tsv,${data_dir}/msrvtt_caption_train7k_3.tsv,${data_dir}/msrvtt_caption_train7k_4.tsv,${data_dir}/msrvtt_caption_train7k_5.tsv,${data_dir}/msrvtt_caption_train7k_6.tsv,${data_dir}/msrvtt_caption_train7k_7.tsv,${data_dir}/msrvtt_caption_train7k_8.tsv,${data_dir}/msrvtt_caption_train7k_9.tsv,${data_dir}/msrvtt_caption_train7k_10.tsv,${data_dir}/msrvtt_caption_test3k.tsv - -restore_file=/lus/scratch/NAT/gda2204/SHARED/logs/ofa/checkpoints/vqa/unival_video_vqa/35_0.04_1e-4_480/checkpoint_best.pt - - -lr=1e-5 - - -# ${base_log_dir}/ofa/checkpoints/caption/${exp_name}/20_0.06_6000/checkpoint_last.pt - - -selected_cols=0,4,2 - -task=video_caption -arch=unival_base -pretrained_model= - - -criterion=adjust_label_smoothed_encouraging_loss -label_smoothing=0.1 - -max_epoch=15 -warmup_ratio=0.06 -batch_size=16 -update_freq=2 -resnet_drop_path_rate=0.0 -encoder_drop_path_rate=0.1 -decoder_drop_path_rate=0.1 -dropout=0.1 -attention_dropout=0.0 -max_src_length=80 -max_tgt_length=20 -num_bins=1000 -# patch_image_size=480 -drop_worst_ratio=0.2 - - - - -### -image_encoder_name=timm_resnet #vit_base_patch16_224 -patch_image_size=480 -resnet_type=resnet101 - -resnet_model_path=${base_log_dir}/pretrained_models/resnet101-5d3b4d8f.pth - -# video -video_encoder_name=all_resnext101 -patch_frame_size=384 -video_model_path=${base_log_dir}/pretrained_models/3dcnn/resnext-101-kinetics.pth #${base_log_dir}/pretrained_models/TimeSformer_divST_8x32_224_K600.pyth -num_frames=16 - - -save_interval=1 -validate_interval_updates=2000 -save_interval_updates=0 - - -sample_patch_num='--sample-patch-num=784' # '' - -eval_args='--eval-args={"beam":5,"unnormalized":true,"temperature":1.0,"stop_on_max_len":true}' - - - -drop_worst_ratio=0.05 # modified from 0.2 for el -log_end=0.75 # for el -drop_best_ratio=0.05 -drop_best_after=6000 -drop_worst_after=6000 - -use_dataaug='--use-dataaug' - -for max_epoch in {20,}; do - echo "max_epoch "${max_epoch} - for warmup_ratio in {0.06,}; do - echo "warmup_ratio "${warmup_ratio} - for drop_worst_after in {6000,}; do - echo "drop_worst_after "${drop_worst_after} - - log_file=${log_dir}/${max_epoch}"_"${warmup_ratio}"_"${drop_worst_after}".log" - save_path=${save_dir}/${max_epoch}"_"${warmup_ratio}"_"${drop_worst_after} - mkdir -p $save_path - - python3 -m torch.distributed.launch \ - --nnodes=${NUM_NODES} \ - --nproc_per_node=${GPUS_PER_NODE} \ - --master_port=${MASTER_PORT} \ - --node_rank=${RANK} \ - --master_addr=${MASTER_ADDR} \ - --use_env ${ofa_dir}/train.py \ - $data \ - --selected-cols=${selected_cols} \ - --bpe-dir=${bpe_dir} \ - --user-dir=${user_dir} \ - --restore-file=${restore_file} \ - --save-dir=${save_path} \ - --task=${task} \ - --arch=${arch} \ - --criterion=${criterion} \ - --label-smoothing=${label_smoothing} \ - --batch-size=${batch_size} \ - --update-freq=${update_freq} \ - --encoder-normalize-before \ - 
--decoder-normalize-before \ - --share-decoder-input-output-embed \ - --share-all-embeddings \ - --layernorm-embedding \ - --patch-layernorm-embedding \ - --code-layernorm-embedding \ - --resnet-drop-path-rate=${resnet_drop_path_rate} \ - --encoder-drop-path-rate=${encoder_drop_path_rate} \ - --decoder-drop-path-rate=${decoder_drop_path_rate} \ - --dropout=${dropout} \ - --attention-dropout=${attention_dropout} \ - --weight-decay=0.01 --optimizer=adam --adam-betas="(0.9,0.999)" --adam-eps=1e-08 --clip-norm=1.0 \ - --lr-scheduler=polynomial_decay --lr=${lr} \ - --max-epoch=${max_epoch} --warmup-ratio=${warmup_ratio} \ - --log-format=simple --log-interval=10 \ - --fixed-validation-seed=7 \ - --no-epoch-checkpoints --keep-best-checkpoints=1 \ - --save-interval=${save_interval} --validate-interval=1 \ - --save-interval-updates=${save_interval_updates} --validate-interval-updates=${validate_interval_updates} \ - --eval-cider \ - --eval-cider-cached-tokens=${eval_cider_cached} \ - --eval-args='{"beam":5,"max_len_b":16,"no_repeat_ngram_size":3}' \ - --best-checkpoint-metric=cider --maximize-best-checkpoint-metric \ - --max-src-length=${max_src_length} \ - --max-tgt-length=${max_tgt_length} \ - --find-unused-parameters \ - --freeze-encoder-embedding \ - --freeze-decoder-embedding \ - --add-type-embedding \ - --scale-attn \ - --scale-fc \ - --scale-heads \ - --disable-entangle \ - --num-bins=${num_bins} \ - --patch-image-size=${patch_image_size} \ - --drop-worst-ratio=${drop_worst_ratio} \ - --drop-worst-after=${drop_worst_after} \ - --fp16 \ - --fp16-scale-window=512 \ - --num-workers=0 \ - --image-encoder-name=${image_encoder_name} \ - --image-dir=${image_dir} \ - --video-encoder-name=${video_encoder_name} \ - --video-model-path=${video_model_path} \ - --patch-frame-size=${patch_frame_size} \ - ${sample_patch_num} \ - ${eval_args} \ - --num-frames=${num_frames} \ - --log-end ${log_end} --drop-best-ratio ${drop_best_ratio} --drop-best-after ${drop_best_after} \ - ${use_dataaug} \ - --reset-dataloader --reset-meters --reset-optimizer \ - --strict - - done - done -done \ No newline at end of file diff --git a/spaces/naotakigawa/qatool/pages/ChatbotWebRead.py b/spaces/naotakigawa/qatool/pages/ChatbotWebRead.py deleted file mode 100644 index f7296a1f28c15604fbc40e41989cf4e1b0e4052d..0000000000000000000000000000000000000000 --- a/spaces/naotakigawa/qatool/pages/ChatbotWebRead.py +++ /dev/null @@ -1,142 +0,0 @@ - -import streamlit as st -import faiss -import langchain -from llama_index.callbacks import CallbackManager -from llama_index import ServiceContext,VectorStoreIndex -from llama_index.chat_engine import CondenseQuestionChatEngine -from llama_index.node_parser import SimpleNodeParser -from llama_index.langchain_helpers.text_splitter import TokenTextSplitter -from llama_index.constants import DEFAULT_CHUNK_OVERLAP -from llama_index.response_synthesizers import get_response_synthesizer -from llama_index import SimpleWebPageReader -from llama_index.llms.base import ChatMessage, MessageRole -from llama_index.prompts.base import ChatPromptTemplate -from llama_index.prompts.base import PromptTemplate -# from llama_index.prompts import Prompt -from llama_index import Prompt - -from log import logger -import tiktoken -import common -langchain.verbose = True - -custom_prompt = PromptTemplate("""\ - 以下はこれまでの会話履歴と、ドキュメントを検索して回答する必要がある、ユーザーからの会話文です。 - 会話と新しい会話文に基づいて、検索クエリを作成します。 - - {chat_history} - - {question} - -""") - -TEXT_QA_SYSTEM_PROMPT = ChatMessage( - content=( - "あなたは世界中で信頼されているQAシステムです。\n" - 
"事前知識ではなく、常に提供されたコンテキスト情報を使用してクエリに回答してください。\n" - "従うべきいくつかのルール:\n" - "1. 回答内で指定されたコンテキストを直接参照しないでください。\n" - "2. 「コンテキストに基づいて、...」や「コンテキスト情報は...」、またはそれに類するような記述は避けてください。" - ), - role=MessageRole.SYSTEM, -) - -# QAプロンプトテンプレートメッセージ -TEXT_QA_PROMPT_TMPL_MSGS = [ - TEXT_QA_SYSTEM_PROMPT, - ChatMessage( - content=( - "コンテキスト情報は以下のとおりです。\n" - "---------------------\n" - "{context_str}\n" - "---------------------\n" - "事前知識ではなくコンテキスト情報を考慮して、クエリに答えます。\n" - "Query: {query_str}\n" - "Answer: " - ), - role=MessageRole.USER, - ), -] -CHAT_TEXT_QA_PROMPT = ChatPromptTemplate(message_templates=TEXT_QA_PROMPT_TMPL_MSGS) - -CHAT_REFINE_PROMPT_TMPL_MSGS = [ - ChatMessage( - content=( - "あなたは、既存の回答を改良する際に2つのモードで厳密に動作するQAシステムのエキスパートです。\n" - "1. 新しいコンテキストを使用して元の回答を**書き直す**。\n" - "2. 新しいコンテキストが役に立たない場合は、元の回答を**繰り返す**。\n" - "回答内で元の回答やコンテキストを直接参照しないでください。\n" - "疑問がある場合は、元の答えを繰り返してください。" - "New Context: {context_msg}\n" - "Query: {query_str}\n" - "Original Answer: {existing_answer}\n" - "New Answer: " - ), - role=MessageRole.USER, - ) -] -# チャットRefineプロンプト -CHAT_REFINE_PROMPT = ChatPromptTemplate(message_templates=CHAT_REFINE_PROMPT_TMPL_MSGS) - -common.check_login() - -st.title("💬 ChatbotWebRead") - -URLtext = st.text_input( - "読み込むURLを入力してください", - placeholder="https://", -) - -if st.button("URL reading",use_container_width=True): - text_splitter = TokenTextSplitter( chunk_size=1500 - , chunk_overlap=DEFAULT_CHUNK_OVERLAP - , tokenizer=tiktoken.encoding_for_model("gpt-3.5-turbo").encode) - node_parser = SimpleNodeParser(text_splitter=text_splitter) - d = 1536 - k=2 - faiss_index = faiss.IndexFlatL2(d) - - callback_manager = CallbackManager([st.session_state.llama_debug_handler]) - service_context = ServiceContext.from_defaults(node_parser=node_parser,callback_manager=callback_manager) - - webDocuments = SimpleWebPageReader(html_to_text=True).load_data( - [URLtext] - ) - logger.info(webDocuments) - webIndex = VectorStoreIndex.from_documents(webDocuments,service_context=service_context) - response_synthesizer = get_response_synthesizer( - response_mode='refine', - text_qa_template= CHAT_TEXT_QA_PROMPT, - refine_template=CHAT_REFINE_PROMPT, - ) - st.session_state.webQuery_engine = webIndex.as_query_engine( - response_synthesizer=response_synthesizer, - service_context=service_context, - ) - st.session_state.web_chat_engine = CondenseQuestionChatEngine.from_defaults( - query_engine=st.session_state.webQuery_engine, - condense_question_prompt=custom_prompt, - verbose=True - ) - -if st.button("リセット",use_container_width=True,disabled = not URLtext): - st.session_state.web_chat_engine.reset() - st.session_state.webmessages = [{"role": "assistant", "content": "お困りごとはございますか?"}] - st.experimental_rerun() - logger.info("reset") - -if "webmessages" not in st.session_state: - st.session_state["webmessages"] = [{"role": "assistant", "content": "お困りごとはございますか?"}] - -for msg in st.session_state.webmessages: - st.chat_message(msg["role"]).write(msg["content"]) - -if prompt := st.chat_input(disabled = not URLtext): - st.session_state.webmessages.append({"role": "user", "content": prompt}) - st.chat_message("user").write(prompt) - response = st.session_state.web_chat_engine.chat(prompt) - logger.debug(st.session_state.llama_debug_handler.get_llm_inputs_outputs()) - msg = str(response) - st.session_state.webmessages.append({"role": "assistant", "content": msg}) - st.chat_message("assistant").write(msg) diff --git a/spaces/nasa-cisto-data-science-group/satvision-base-demo/pytorch-caney/pytorch_caney/tests/test_transforms.py 
b/spaces/nasa-cisto-data-science-group/satvision-base-demo/pytorch-caney/pytorch_caney/tests/test_transforms.py deleted file mode 100644 index 9656e0b37947bcc3b20023d54f821803591d6f68..0000000000000000000000000000000000000000 --- a/spaces/nasa-cisto-data-science-group/satvision-base-demo/pytorch-caney/pytorch_caney/tests/test_transforms.py +++ /dev/null @@ -1,70 +0,0 @@ -from pytorch_caney.config import get_config -from pytorch_caney.data.transforms import SimmimTransform -from pytorch_caney.data.transforms import TensorResizeTransform - -import argparse -import unittest -import torch -import numpy as np - - -class TestTransforms(unittest.TestCase): - - def setUp(self): - # Initialize any required configuration here - config_path = 'pytorch_caney/' + \ - 'tests/config/test_config.yaml' - args = argparse.Namespace(cfg=config_path) - self.config = get_config(args) - - def test_simmim_transform(self): - - # Create an instance of SimmimTransform - transform = SimmimTransform(self.config) - - # Create a sample ndarray - img = np.random.randn(self.config.DATA.IMG_SIZE, - self.config.DATA.IMG_SIZE, - 7) - - # Apply the transform - img_transformed, mask = transform(img) - - # Assertions - self.assertIsInstance(img_transformed, torch.Tensor) - self.assertEqual(img_transformed.shape, (7, - self.config.DATA.IMG_SIZE, - self.config.DATA.IMG_SIZE)) - self.assertIsInstance(mask, np.ndarray) - - def test_tensor_resize_transform(self): - # Create an instance of TensorResizeTransform - transform = TensorResizeTransform(self.config) - - # Create a sample image tensor - img = np.random.randn(self.config.DATA.IMG_SIZE, - self.config.DATA.IMG_SIZE, - 7) - - target = np.random.randint(0, 5, - size=((self.config.DATA.IMG_SIZE, - self.config.DATA.IMG_SIZE))) - - # Apply the transform - img_transformed = transform(img) - target_transformed = transform(target) - - # Assertions - self.assertIsInstance(img_transformed, torch.Tensor) - self.assertEqual(img_transformed.shape, - (7, self.config.DATA.IMG_SIZE, - self.config.DATA.IMG_SIZE)) - - self.assertIsInstance(target_transformed, torch.Tensor) - self.assertEqual(target_transformed.shape, - (1, self.config.DATA.IMG_SIZE, - self.config.DATA.IMG_SIZE)) - - -if __name__ == '__main__': - unittest.main() diff --git a/spaces/nateraw/modelcard-creator/about.md b/spaces/nateraw/modelcard-creator/about.md deleted file mode 100644 index 2e3eaf5d827b11d6f8e03b8dc85c307f5111baca..0000000000000000000000000000000000000000 --- a/spaces/nateraw/modelcard-creator/about.md +++ /dev/null @@ -1,4 +0,0 @@ -# About - -We built this space to make it easier to create _good_ model cards :) - diff --git a/spaces/nathanTQ/ChatDev/camel/generators.py b/spaces/nathanTQ/ChatDev/camel/generators.py deleted file mode 100644 index 47901a439bd20004b9f890715d7d15e58888718c..0000000000000000000000000000000000000000 --- a/spaces/nathanTQ/ChatDev/camel/generators.py +++ /dev/null @@ -1,267 +0,0 @@ -# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. =========== -# Licensed under the Apache License, Version 2.0 (the “License”); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an “AS IS” BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and -# limitations under the License. -# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. =========== -from typing import Dict, Generator, List, Optional, Set, Tuple - -from camel.messages import SystemMessage, SystemMessageType -from camel.prompts import PromptTemplateGenerator, TextPrompt -from camel.typing import RoleType, TaskType - - -class SystemMessageGenerator: - r"""System message generator for agents. - - Args: - task_type (TaskType, optional): The task type. - (default: :obj:`TaskType.AI_SOCIETY`) - sys_prompts (Optional[Dict[RoleType, str]], optional): The prompts of - the system messages for each role type. (default: :obj:`None`) - sys_msg_meta_dict_keys (Optional[Set[str]], optional): The set of keys - of the meta dictionary used to fill the prompts. - (default: :obj:`None`) - """ - - def __init__( - self, - task_type: TaskType = TaskType.AI_SOCIETY, - sys_prompts: Optional[Dict[RoleType, str]] = None, - sys_msg_meta_dict_keys: Optional[Set[str]] = None, - ) -> None: - self.sys_prompts: Dict[RoleType, str] - - if sys_prompts is not None: - self.sys_prompts = sys_prompts - self.sys_msg_meta_dict_keys = sys_msg_meta_dict_keys or set() - else: - templates = PromptTemplateGenerator() - agenttech_prompt_template = templates.get_system_prompt(task_type, RoleType.CHATDEV) - counselor_prompt_template = templates.get_system_prompt(task_type, RoleType.CHATDEV_COUNSELOR) - ceo_prompt_template = templates.get_system_prompt(task_type, RoleType.CHATDEV_CEO) - chro_prompt_template = templates.get_system_prompt(task_type, RoleType.CHATDEV_CHRO) - cpo_prompt_template = templates.get_system_prompt(task_type, RoleType.CHATDEV_CPO) - cto_prompt_template = templates.get_system_prompt(task_type, RoleType.CHATDEV_CTO) - programmer_prompt_template = templates.get_system_prompt(task_type, RoleType.CHATDEV_PROGRAMMER) - reviewer_prompt_template = templates.get_system_prompt(task_type, RoleType.CHATDEV_REVIEWER) - tester_prompt_template = templates.get_system_prompt(task_type, RoleType.CHATDEV_TESTER) - cco_prompt_template = templates.get_system_prompt(task_type, RoleType.CHATDEV_CCO) - - self.sys_prompts = dict() - self.sys_prompts[RoleType.CHATDEV] = agenttech_prompt_template - self.sys_prompts[RoleType.CHATDEV_COUNSELOR] = counselor_prompt_template - self.sys_prompts[RoleType.CHATDEV_CEO] = ceo_prompt_template - self.sys_prompts[RoleType.CHATDEV_CHRO] = chro_prompt_template - self.sys_prompts[RoleType.CHATDEV_CPO] = cpo_prompt_template - self.sys_prompts[RoleType.CHATDEV_CTO] = cto_prompt_template - self.sys_prompts[RoleType.CHATDEV_PROGRAMMER] = programmer_prompt_template - self.sys_prompts[RoleType.CHATDEV_REVIEWER] = reviewer_prompt_template - self.sys_prompts[RoleType.CHATDEV_TESTER] = tester_prompt_template - self.sys_prompts[RoleType.CHATDEV_CCO] = cco_prompt_template - - self.sys_msg_meta_dict_keys = (agenttech_prompt_template.key_words | - counselor_prompt_template.key_words | - ceo_prompt_template.key_words | - chro_prompt_template.key_words | - cpo_prompt_template.key_words | - cto_prompt_template.key_words | - programmer_prompt_template.key_words | - reviewer_prompt_template.key_words | - tester_prompt_template.key_words | - cco_prompt_template.key_words) - - if RoleType.DEFAULT not in self.sys_prompts: - self.sys_prompts[RoleType.DEFAULT] = "You are a helpful assistant." - - def validate_meta_dict_keys(self, meta_dict: Dict[str, str]) -> None: - r"""Validates the keys of the meta_dict. 
- - Args: - meta_dict (Dict[str, str]): The dictionary to validate. - """ - if not set(meta_dict.keys()).issubset(self.sys_msg_meta_dict_keys): - raise ValueError("The keys of the meta_dict should be in " - f"{self.sys_msg_meta_dict_keys}. " - f"Got {set(meta_dict.keys())} instead.") - - def from_dict( - self, - meta_dict: Dict[str, str], - role_tuple: Tuple[str, RoleType] = ("", RoleType.DEFAULT), - ) -> SystemMessageType: - r"""Generates a system message from a dictionary. - - Args: - meta_dict (Dict[str, str]): The dictionary containing the - information to generate the system message. - role_tuple (Tuple[str, RoleType], optional): The tuple containing - the role name and role type. (default: ("", RoleType.DEFAULT)) - - Returns: - SystemMessageType: The generated system message. - """ - self.validate_meta_dict_keys(meta_dict) - role_name, role_type = role_tuple - sys_prompt = self.sys_prompts[role_type] - sys_prompt = sys_prompt.format(**meta_dict) - - return SystemMessage(role_name=role_name, role_type=RoleType.DEFAULT, - meta_dict=meta_dict, content=sys_prompt) - - def from_dicts( - self, - meta_dicts: List[Dict[str, str]], - role_tuples: Tuple[str, str], - ) -> List[SystemMessageType]: - r"""Generates a list of system messages from a list of dictionaries. - - Args: - meta_dicts (List[Dict[str, str]]): A list of dictionaries - containing the information to generate the system messages. - role_tuples (List[Tuple[str, RoleType]]): A list of tuples - containing the role name and role type for each system message. - - Returns: - List[SystemMessageType]: A list of generated system messages. - - Raises: - ValueError: If the number of meta_dicts and role_tuples are - different. - """ - if len(meta_dicts) != len(role_tuples): - raise ValueError( - "The number of meta_dicts and role_types should be the same.") - - return [ - self.from_dict(meta_dict, role_tuple) - for meta_dict, role_tuple in zip(meta_dicts, role_tuples) - ] - - -class RoleNameGenerator: - - def __init__(self, assistant_role_names_path: - str = "data/ai_society/assistant_roles.txt", - user_role_names_path: str = "data/ai_society/user_roles.txt", - assistant_role_names: Optional[List[str]] = None, - user_role_names: Optional[List[str]] = None) -> None: - - if assistant_role_names is None: - with open(assistant_role_names_path, "r") as f: - assistant_role_names_: List[str] = f.read().splitlines() - self.assistant_role_names = [ - " ".join(name.split(" ")[1:]) - for name in assistant_role_names_ - ] - else: - self.assistant_role_names = assistant_role_names - - if user_role_names is None: - with open(user_role_names_path, "r") as f: - user_role_names_: List[str] = f.read().splitlines() - self.user_role_names = [ - " ".join(name.split(" ")[1:]) for name in user_role_names_ - ] - else: - self.user_role_names = user_role_names - - def from_role_files(self) -> Generator[Tuple, None, None]: - for assistant_role_name in self.assistant_role_names: - for user_role_name in self.user_role_names: - yield (assistant_role_name, user_role_name) - - -class AISocietyTaskPromptGenerator: - - def __init__( - self, - num_tasks: int = 10, - ) -> None: - self.generate_tasks_prompt = PromptTemplateGenerator( - ).get_generate_tasks_prompt(TaskType.AI_SOCIETY) - - self.num_tasks = num_tasks - - # TODO: Return role names for user and assistant with the generator. 
- def from_role_files( - self, - assistant_role_names_path: str = "data/ai_society/assistant_roles.txt", - user_role_names_path: str = "data/ai_society/user_roles.txt" - ) -> Generator[Tuple[str, Tuple[str, str]], None, None]: - roles_generator = RoleNameGenerator( - assistant_role_names_path, user_role_names_path).from_role_files() - for role_1, role_2 in roles_generator: - generate_tasks_prompt = self.generate_tasks_prompt.format( - assistant_role=role_1, user_role=role_2, - num_tasks=self.num_tasks) - - yield (generate_tasks_prompt, (role_1, role_2)) - - def from_role_generator( - self, role_generator: Generator[Tuple, None, None] - ) -> Generator[Tuple[str, Tuple[str, str]], None, None]: - for role_1, role_2 in role_generator: - generate_tasks_prompt = self.generate_tasks_prompt.format( - assistant_role=role_1, user_role=role_2, - num_tasks=self.num_tasks) - - yield (generate_tasks_prompt, (role_1, role_2)) - - -class SingleTxtGenerator: - - def __init__( - self, - text_file_path: str, - ) -> None: - - with open(text_file_path, "r") as f: - data_list: List[str] = f.read().splitlines() - self.data_list = [ - " ".join(name.split(" ")[1:]) for name in data_list - ] - - def from_role_files(self) -> Generator[str, None, None]: - for data in self.data_list: - yield data - - -class CodeTaskPromptGenerator: - - def __init__( - self, - num_tasks: int = 50, - ) -> None: - - self.generate_tasks_prompt = PromptTemplateGenerator( - ).get_generate_tasks_prompt(TaskType.CODE) - - self.num_tasks = num_tasks - - def from_role_files( - self, languages_path: str = "data/code/languages.txt", - domains_path: str = "data/code/domains.txt" - ) -> Generator[Tuple[TextPrompt, str, str], None, None]: - language_generator = SingleTxtGenerator( - languages_path).from_role_files() - - for language in language_generator: - domains_generator = SingleTxtGenerator( - domains_path).from_role_files() - for domain in domains_generator: - generated_tasks_prompt = self.generate_tasks_prompt.format( - language=language, domain=domain, num_tasks=self.num_tasks) - yield generated_tasks_prompt, language, domain - - def from_role_generator( - self, role_generator: Generator[Tuple, None, None] - ) -> Generator[str, None, None]: - raise NotImplementedError diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Pattern Price And Time Using Gann Theory In Technical Analysis ....pdf ((TOP)).md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Pattern Price And Time Using Gann Theory In Technical Analysis ....pdf ((TOP)).md deleted file mode 100644 index ebbe800f7f73460314f3d624b1288c688bfbfa27..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Pattern Price And Time Using Gann Theory In Technical Analysis ....pdf ((TOP)).md +++ /dev/null @@ -1,14 +0,0 @@ - -

        Pattern, Price and Time: Using Gann Theory in Technical Analysis

        -

        Technical analysis is a method of forecasting future price movements based on the study of past market data, such as price, volume and indicators. One of the most influential and controversial figures in technical analysis was William Delbert Gann, who developed a unique approach to market analysis based on geometry, astrology and ancient mathematics.

        -

        Gann theory is based on the premise that there is a natural order and harmony in the markets, and that price movements are governed by certain patterns, cycles and angles. Gann theory also asserts that history repeats itself, and that market trends can be predicted by using historical data and applying certain mathematical formulas.
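        To make the "angles" idea concrete, here is a minimal, hedged sketch (not taken from the book) of how a Gann-style fan might be projected from a swing low: each fan line rises at a fixed rate of price per unit of time, with the 1x1 line advancing one price unit per bar. The pivot, the bar count and the price-per-bar scale below are illustrative assumptions only.

```python
# Illustrative sketch only -- the pivot, scale and bar count are made-up values,
# not levels prescribed by Gann or by Hyerczyk's book.
GANN_RATES = {"1x4": 0.25, "1x2": 0.5, "1x1": 1.0, "2x1": 2.0, "4x1": 4.0}  # price units per bar


def gann_fan(pivot_price: float, bars_since_pivot: int, scale: float = 1.0) -> dict:
    """Price level of each fan line `bars_since_pivot` bars after a swing low."""
    return {
        name: pivot_price + rate * scale * bars_since_pivot
        for name, rate in GANN_RATES.items()
    }


if __name__ == "__main__":
    # 20 bars after a swing low at 100.0, assuming 1 point of price per bar
    for name, level in gann_fan(100.0, 20).items():
        print(f"{name}: {level:.2f}")
```

        A falling fan drawn from a swing high would subtract the same increments instead of adding them.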

        -


        In his book Pattern, Price and Time: Using Gann Theory in Technical Analysis, James A. Hyerczyk provides a comprehensive and practical guide to applying Gann theory to modern markets. Hyerczyk explains the basic principles and concepts of Gann theory, such as the Master Chart, the Square of Nine, the Hexagon Chart, the Law of Vibration and the Gann Angles. He also shows how to use Gann tools and techniques to identify support and resistance levels, trend lines, breakouts, reversals, retracements and projections.
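        As a rough illustration of the arithmetic behind one of these tools, the snippet below sketches a commonly quoted approximation of the Square of Nine, in which a full 360-degree rotation around the square corresponds to adding 2 to the square root of price. This is a simplified reading for illustration, not necessarily the exact construction Hyerczyk presents.

```python
# Hedged sketch of a common Square of Nine approximation: one full rotation
# (360 degrees) is treated as +2 on the square root of price.
from math import sqrt


def square_of_nine_levels(price: float, degrees=(45, 90, 180, 360)) -> dict:
    """Return support/resistance levels the given rotations below/above `price`."""
    levels = {}
    for deg in degrees:
        step = 2.0 * deg / 360.0  # 360 degrees -> +2 on sqrt(price)
        levels[deg] = {
            "support": (sqrt(price) - step) ** 2,
            "resistance": (sqrt(price) + step) ** 2,
        }
    return levels


if __name__ == "__main__":
    for deg, lv in square_of_nine_levels(144.0).items():
        print(f"{deg:>3} deg: {lv['support']:.2f} / {lv['resistance']:.2f}")
```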

        -

        The book also covers some of the more advanced aspects of Gann theory, such as time cycles, planetary influences, astrological aspects and numerology. Hyerczyk illustrates how to combine Gann theory with other technical analysis methods, such as Fibonacci ratios, Elliott wave theory and candlestick patterns. He also provides real-world examples and case studies to demonstrate how Gann theory can be applied to various markets, such as stocks, futures, forex and commodities.
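        For readers who want to see what such a cross-check might look like in practice, here is a small, generic sketch of Fibonacci retracement levels for a completed up-swing; the swing values are invented for the example, and pairing these levels with Gann levels is left to the reader.

```python
# Generic Fibonacci retracement levels for an up-swing (illustrative values only).
FIB_RATIOS = (0.236, 0.382, 0.5, 0.618, 0.786)


def fib_retracements(swing_low: float, swing_high: float) -> dict:
    """Retracement levels measured down from the swing high."""
    swing_range = swing_high - swing_low
    return {ratio: swing_high - ratio * swing_range for ratio in FIB_RATIOS}


if __name__ == "__main__":
    for ratio, level in fib_retracements(100.0, 150.0).items():
        print(f"{ratio:.3f} -> {level:.2f}")
```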

        -

        Pattern, Price and Time: Using Gann Theory in Technical Analysis is a valuable resource for anyone who wants to learn more about the fascinating and mysterious world of Gann theory. It is suitable for both beginners and experienced traders who want to enhance their market analysis skills and improve their trading performance.


        -

        Gann theory is not a simple or easy method to master. It requires a lot of study, practice and patience to understand and apply it correctly. Gann theory also has its limitations and drawbacks, such as the subjectivity of interpretation, the complexity of calculations and the lack of scientific evidence. Therefore, it is important to use Gann theory with caution and discretion, and to test and verify its results with other methods and tools.

        -

        -

        However, Gann theory also has its advantages and benefits, such as the ability to capture the essence and rhythm of the markets, the flexibility and adaptability to different time frames and instruments, and the potential to uncover hidden patterns and opportunities that other methods may miss. Gann theory can also provide a deeper insight into the psychology and behavior of the market participants, as well as the influence of natural forces and cosmic cycles on the markets.

        -

        Gann theory is not a magic formula or a holy grail that can guarantee success in trading. It is a tool that can help traders to analyze the markets more effectively and efficiently, and to make better trading decisions based on logic and reason. Gann theory can also help traders to develop a more disciplined and consistent trading approach, as well as a more balanced and harmonious mindset.

        -
        -
        \ No newline at end of file diff --git a/spaces/neuralmagic/cv-yolo/annotate.py b/spaces/neuralmagic/cv-yolo/annotate.py deleted file mode 100644 index 64fcb222399f490ba253a82c8c8cfaf3ac670e00..0000000000000000000000000000000000000000 --- a/spaces/neuralmagic/cv-yolo/annotate.py +++ /dev/null @@ -1,539 +0,0 @@ -# Copyright (c) 2021 - present / Neuralmagic, Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -""" -Helpers and Utilities for YOLO -""" -import functools -import itertools -import logging -import random -import time -from tempfile import NamedTemporaryFile -from typing import List, Optional, Tuple, Union - -import numpy -import onnx -import torchvision -import yaml - -import torch -from deepsparse.yolo.schemas import YOLOOutput - - -try: - import cv2 - - cv2_error = None -except ModuleNotFoundError as cv2_import_error: - cv2 = None - cv2_error = cv2_import_error - -_YOLO_CLASS_COLORS = list(itertools.product([0, 255, 128, 64, 192], repeat=3)) -_YOLO_CLASS_COLORS.remove((255, 255, 255)) # remove white from possible colors -_LOGGER = logging.getLogger(__name__) - -# Default YOLO anchor grids -_YOLO_DEFAULT_ANCHORS = [ - torch.Tensor([[10, 13], [16, 30], [33, 23]]), - torch.Tensor([[30, 61], [62, 45], [59, 119]]), - torch.Tensor([[116, 90], [156, 198], [373, 326]]), -] -_YOLO_DEFAULT_ANCHOR_GRIDS = [ - t.clone().view(1, -1, 1, 1, 2) for t in _YOLO_DEFAULT_ANCHORS -] - - -@functools.lru_cache(maxsize=None) -def _get_color(label): - # cache color lookups - return random.choice(_YOLO_CLASS_COLORS) - - -class YoloPostprocessor: - """ - Class for performing post-processing of YOLO model predictions - :param image_size: size of input image to model. used to calculate stride based on - output shapes - """ - - def __init__( - self, image_size: Tuple[int, int] = (640, 640), cfg: Optional[str] = None - ): - self._image_size = image_size - self._anchor_grids = ( - self._load_cfg_anchor_grid(cfg) if cfg else _YOLO_DEFAULT_ANCHOR_GRIDS - ) - self._grids = {} # Dict[Tuple[int], torch.Tensor] - - def pre_nms_postprocess(self, outputs: List[numpy.ndarray]) -> torch.Tensor: - """ - :param outputs: raw outputs of a YOLO model before anchor grid processing - :return: post-processed model outputs without NMS. 
- """ - # postprocess and transform raw outputs into single torch tensor - processed_outputs = [] - for idx, pred in enumerate(outputs): - pred = torch.from_numpy(pred) - pred = pred.sigmoid() - - # get grid and stride - grid_shape = pred.shape[2:4] - grid = self._get_grid(grid_shape) - stride = self._image_size[0] / grid_shape[0] - - # decode xywh box values - pred[..., 0:2] = (pred[..., 0:2] * 2.0 - 0.5 + grid) * stride - pred[..., 2:4] = (pred[..., 2:4] * 2) ** 2 * self._anchor_grids[idx] - # flatten anchor and grid dimensions -> - # (bs, num_predictions, num_classes + 5) - processed_outputs.append(pred.view(pred.size(0), -1, pred.size(-1))) - return torch.cat(processed_outputs, 1) - - def _get_grid(self, grid_shape: Tuple[int, int]) -> torch.Tensor: - if grid_shape not in self._grids: - # adapted from yolov5.yolo.Detect._make_grid - coords_y, coords_x = torch.meshgrid( - [torch.arange(grid_shape[0]), torch.arange(grid_shape[1])] - ) - grid = torch.stack((coords_x, coords_y), 2) - self._grids[grid_shape] = grid.view( - 1, 1, grid_shape[0], grid_shape[1], 2 - ).float() - return self._grids[grid_shape] - - @staticmethod - def _load_cfg_anchor_grid(cfg: str) -> List[torch.Tensor]: - with open(cfg) as f: - anchors = yaml.safe_load(f)["anchors"] - - def _split_to_coords(coords_list): - return [ - [coords_list[idx], coords_list[idx + 1]] - for idx in range(0, len(coords_list), 2) - ] - - anchors = [torch.Tensor(_split_to_coords(coords)) for coords in anchors] - return [t.clone().view(1, -1, 1, 1, 2) for t in anchors] - - -def postprocess_nms( - outputs: Union[torch.Tensor, numpy.ndarray], - iou_thres: float = 0.25, - conf_thres: float = 0.45, - multi_label: bool = False, -) -> List[numpy.ndarray]: - """ - :param outputs: Tensor of post-processed model outputs - :param iou_thres: minimum IoU for a detection to be valid - :param conf_thres: minimum confidence score for a detection to be valid - :return: List of numpy arrays of NMS predictions for each image in the batch - """ - # run nms in PyTorch, only post-process first output - if isinstance(outputs, numpy.ndarray): - outputs = torch.from_numpy(outputs) - nms_outputs = _non_max_suppression( - outputs, conf_thres=conf_thres, iou_thres=iou_thres, multi_label=multi_label - ) - return [output.cpu().numpy() for output in nms_outputs] - - -def _non_max_suppression( - prediction, - conf_thres=0.25, - iou_thres=0.45, - classes=None, - agnostic=False, - multi_label=False, - labels=(), -): - # Ported from ultralytics/yolov5 - - nc = prediction.shape[2] - 5 # number of classes - xc = prediction[..., 4] > conf_thres # candidates - - # Checks - assert 0 <= conf_thres <= 1, ( - f"Invalid Confidence threshold {conf_thres}, " - "valid values are between 0.0 and 1.0" - ) - assert ( - 0 <= iou_thres <= 1 - ), f"Invalid IoU {iou_thres}, valid values are between 0.0 and 1.0" - - # Settings - _, max_wh = 2, 4096 # (pixels) minimum and maximum box width and height - max_det = 300 # maximum number of detections per image - max_nms = 30000 # maximum number of boxes into torchvision.ops.nms() - time_limit = 10.0 # seconds to quit after - redundant = True # require redundant detections - multi_label &= nc > 1 # multiple labels per box (adds 0.5ms/img) - merge = False # use merge-NMS - - t = time.perf_counter() - output = [torch.zeros((0, 6), device=prediction.device)] * prediction.shape[0] - for xi, x in enumerate(prediction): # image index, image inference - # Apply constraints - # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 - x = x[xc[xi]] # 
confidence - - # Cat apriori labels if autolabelling - if labels and len(labels[xi]): - label_ = labels[xi] - v = torch.zeros((len(label_), nc + 5), device=x.device) - v[:, :4] = label_[:, 1:5] # box - v[:, 4] = 1.0 # conf - v[range(len(label_)), label_[:, 0].long() + 5] = 1.0 # cls - x = torch.cat((x, v), 0) - - # If none remain process next image - if not x.shape[0]: - continue - - # Compute conf - x[:, 5:] *= x[:, 4:5] # conf = obj_conf * cls_conf - - # Box (center x, center y, width, height) to (x1, y1, x2, y2) - box = _xywh2xyxy(x[:, :4]) - - # Detections matrix nx6 (xyxy, conf, cls) - if multi_label: - i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T - x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1) - else: # best class only - conf, j = x[:, 5:].max(1, keepdim=True) - x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres] - - # Filter by class - if classes is not None: - x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)] - - # Apply finite constraint - # if not torch.isfinite(x).all(): - # x = x[torch.isfinite(x).all(1)] - - # Check shape - n = x.shape[0] # number of boxes - if not n: # no boxes - continue - elif n > max_nms: # excess boxes - x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence - - # Batched NMS - c = x[:, 5:6] * (0 if agnostic else max_wh) # classes - boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores - i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS - if i.shape[0] > max_det: # limit detections - i = i[:max_det] - if merge and (1 < n < 3e3): # Merge NMS (boxes merged using weighted mean) - # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4) - iou = _box_iou(boxes[i], boxes) > iou_thres # iou matrix - weights = iou * scores[None] # box weights - x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum( - 1, keepdim=True - ) # merged boxes - if redundant: - i = i[iou.sum(1) > 1] # require redundancy - - output[xi] = x[i] - if (time.perf_counter() - t) > time_limit: - print(f"WARNING: NMS time limit {time_limit}s exceeded") - break # time limit exceeded - - return output - - -def _xywh2xyxy( - x: Union[torch.Tensor, numpy.ndarray] -) -> Union[torch.Tensor, numpy.ndarray]: - # ported from ultralytics/yolov5 - # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] - # where xy1=top-left, xy2=bottom-right - y = x.clone() if isinstance(x, torch.Tensor) else numpy.copy(x) - y[:, 0] = x[:, 0] - x[:, 2] / 2 # top left x - y[:, 1] = x[:, 1] - x[:, 3] / 2 # top left y - y[:, 2] = x[:, 0] + x[:, 2] / 2 # bottom right x - y[:, 3] = x[:, 1] + x[:, 3] / 2 # bottom right y - return y - - -def _box_iou(box1: torch.Tensor, box2: torch.Tensor) -> torch.Tensor: - # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py - """ - Return intersection-over-union (Jaccard index) of boxes. - Both sets of boxes are expected to be in (x1, y1, x2, y2) format. 
- Arguments: - box1 (Tensor[N, 4]) - box2 (Tensor[M, 4]) - Returns: - iou (Tensor[N, M]): the NxM matrix containing the pairwise - IoU values for every element in boxes1 and boxes2 - """ - - def box_area(box): - # box = 4xn - return (box[2] - box[0]) * (box[3] - box[1]) - - area1 = box_area(box1.T) - area2 = box_area(box2.T) - - # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2) - inter = ( - ( - torch.min(box1[:, None, 2:], box2[:, 2:]) - - torch.max(box1[:, None, :2], box2[:, :2]) - ) - .clamp(0) - .prod(2) - ) - return inter / ( - area1[:, None] + area2 - inter - ) # iou = inter / (area1 + area2 - inter) - - -def yolo_onnx_has_postprocessing(model_path: Union[str, onnx.ModelProto]) -> bool: - """ - :param model_path: file path to YOLO ONNX model or loaded model - :return: True if YOLO postprocessing (pre-nms) is included in the ONNX graph, - this is assumed to be when the first output of the model has fewer dimensions - than the other outputs as the grid dimensions have been flattened - """ - if isinstance(model_path, str): - model = onnx.load(model_path) - else: - model = model_path - - # get number of dimensions in each output - outputs_num_dims = [ - len(output.type.tensor_type.shape.dim) for output in model.graph.output - ] - - # assume if only one output, then it is post-processed - if len(outputs_num_dims) == 1: - return True - - return all(num_dims > outputs_num_dims[0] for num_dims in outputs_num_dims[1:]) - - -def get_onnx_expected_image_shape(onnx_model: onnx.ModelProto) -> Tuple[int, ...]: - """ - :param onnx_model: onnx model to get expected image shape of - :return: expected shape of the input tensor from onnx graph as a 2-tuple - """ - input_tensor = onnx_model.graph.input[0] - return ( - input_tensor.type.tensor_type.shape.dim[2].dim_value, - input_tensor.type.tensor_type.shape.dim[3].dim_value, - ) - - -def modify_yolo_onnx_input_shape( - model_path: str, image_shape: Tuple[int, int] -) -> Tuple[str, Optional[NamedTemporaryFile]]: - """ - Creates a new YOLO ONNX model from the given path that accepts the given input - shape. If the given model already has the given input shape no modifications are - made. Uses a tempfile to store the modified model file. - :param model_path: file path to YOLO ONNX model - :param image_shape: 2-tuple of the image shape to resize this yolo model to - :return: filepath to an onnx model reshaped to the given input shape will be the - original path if the shape is the same. 
Additionally returns the - NamedTemporaryFile for managing the scope of the object for file deletion - """ - has_postprocessing = yolo_onnx_has_postprocessing(model_path) - - model = onnx.load(model_path) - model_input = model.graph.input[0] - - initial_x, initial_y = get_onnx_expected_image_shape(model) - - if not (isinstance(initial_x, int) and isinstance(initial_y, int)): - return model_path, None # model graph does not have static integer input shape - - if (initial_x, initial_y) == tuple(image_shape): - return model_path, None # no shape modification needed - - # override input shape - model_input.type.tensor_type.shape.dim[2].dim_value = image_shape[0] - model_input.type.tensor_type.shape.dim[3].dim_value = image_shape[1] - - # override output shape to account for stride - scale_x = initial_x / image_shape[0] - scale_y = initial_y / image_shape[1] - - for idx, model_output in enumerate(model.graph.output): - if idx == 0 and has_postprocessing: - continue - output_x = get_tensor_dim_shape(model_output, 2) - output_y = get_tensor_dim_shape(model_output, 3) - set_tensor_dim_shape(model_output, 2, int(output_x / scale_x)) - set_tensor_dim_shape(model_output, 3, int(output_y / scale_y)) - - # fix number of predictions in post-processed output for new strides - if has_postprocessing: - # sum number of predictions across the other outputs - num_predictions = sum( - numpy.prod( - [ - get_tensor_dim_shape(output_tensor, dim_idx) - for dim_idx in range(1, 4) - ] - ) - for output_tensor in model.graph.output[1:] - ) - set_tensor_dim_shape(model.graph.output[0], 1, num_predictions) - - tmp_file = NamedTemporaryFile() # file will be deleted after program exit - onnx.save(model, tmp_file.name) - - return tmp_file.name, tmp_file - - -def get_tensor_dim_shape(tensor: onnx.TensorProto, dim: int) -> int: - """ - :param tensor: ONNX tensor to get the shape of a dimension of - :param dim: dimension index of the tensor to get the shape of - :return: shape of the tensor at the given dimension - """ - return tensor.type.tensor_type.shape.dim[dim].dim_value - - -def set_tensor_dim_shape(tensor: onnx.TensorProto, dim: int, value: int): - """ - Sets the shape of the tensor at the given dimension to the given value - :param tensor: ONNX tensor to modify the shape of - :param dim: dimension index of the tensor to modify the shape of - :param value: new shape for the given dimension - """ - tensor.type.tensor_type.shape.dim[dim].dim_value = value - - -def annotate_image( - image: numpy.ndarray, - prediction: YOLOOutput, - images_per_sec: Optional[float] = None, - score_threshold: float = 0.35, -) -> numpy.ndarray: - """ - Draws bounding boxes on predictions of a detection model - :param image: original image to annotate (no pre-processing needed) - :param prediction: predictions returned by the inference pipeline - :param images_per_sec: optional fps value to annotate the left corner - of the image (video) with - :param score_threshold: minimum score a detection should have to be annotated - on the image. 
Default is 0.35 - :return: the original image annotated with the given bounding boxes - """ - boxes = prediction[0].boxes - scores = prediction[0].scores - labels = prediction[0].labels - - img_res = numpy.copy(image) - - for idx in range(len(boxes)): - label = labels[idx] - if scores[idx] > score_threshold: - annotation_text = f"{label}: {scores[idx]:.0%}" - - # bounding box points - left, top, right, bottom = boxes[idx] - - # calculate text size - (text_width, text_height), text_baseline = cv2.getTextSize( - annotation_text, - cv2.FONT_HERSHEY_SIMPLEX, - 0.9, # font scale - 2, # thickness - ) - text_height += text_baseline - - # make solid background for annotation text - cv2.rectangle( - img_res, - (int(left), int(top) - 33), - (int(left) + text_width, int(top) - 28 + text_height), - _get_color(label), - thickness=-1, # filled solid - ) - - # add white annotation text - cv2.putText( - img_res, - annotation_text, - (int(left), int(top) - 10), - cv2.FONT_HERSHEY_SIMPLEX, - 0.9, # font scale - (255, 255, 255), # white text - 2, # thickness - cv2.LINE_AA, - ) - - # draw bounding box - cv2.rectangle( - img_res, - (int(left), int(top)), - (int(right), int(bottom)), - _get_color(label), - thickness=2, - ) - - if images_per_sec is not None: - img_res = _plot_fps( - img_res=img_res, - images_per_sec=images_per_sec, - x=20, - y=30, - font_scale=0.9, - thickness=2, - ) - return img_res - - -def _plot_fps( - img_res: numpy.ndarray, - images_per_sec: float, - x: int, - y: int, - font_scale: float, - thickness: int, -) -> numpy.ndarray: - - annotation_text = f"FPS: {int(images_per_sec)}" - # calculate text size - (text_width, text_height), text_baseline = cv2.getTextSize( - annotation_text, - cv2.FONT_HERSHEY_SIMPLEX, - font_scale, # font scale - thickness, # thickness - ) - # make solid background for annotation text - cv2.rectangle( - img_res, - (x, y - 3 * text_baseline), - (x + text_width, y + text_height - text_baseline), - (255, 255, 255), - thickness=-1, # filled solid - ) - - cv2.putText( - img_res, - annotation_text, - (x, y), - cv2.FONT_HERSHEY_SIMPLEX, - font_scale, - (245, 46, 6), # color - thickness, - cv2.LINE_AA, - ) - return img_res \ No newline at end of file diff --git a/spaces/nev/CoNR/model/shader.py b/spaces/nev/CoNR/model/shader.py deleted file mode 100644 index 0e1ded98d0dae775a7bbd5b21a8bc65ac250893c..0000000000000000000000000000000000000000 --- a/spaces/nev/CoNR/model/shader.py +++ /dev/null @@ -1,290 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from .warplayer import warp_features -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - - -class DecoderBlock(nn.Module): - def __init__(self, in_planes, c=224, out_msgs=0, out_locals=0, block_nums=1, out_masks=1, out_local_flows=32, out_msgs_flows=32, out_feat_flows=0): - - super(DecoderBlock, self).__init__() - self.conv0 = nn.Sequential( - nn.Conv2d(in_planes, c, 3, 2, 1), - nn.PReLU(c), - nn.Conv2d(c, c, 3, 2, 1), - nn.PReLU(c), - ) - - self.convblocks = nn.ModuleList() - for i in range(block_nums): - self.convblocks.append(nn.Sequential( - nn.Conv2d(c, c, 3, 1, 1), - nn.PReLU(c), - nn.Conv2d(c, c, 3, 1, 1), - nn.PReLU(c), - nn.Conv2d(c, c, 3, 1, 1), - nn.PReLU(c), - nn.Conv2d(c, c, 3, 1, 1), - nn.PReLU(c), - nn.Conv2d(c, c, 3, 1, 1), - nn.PReLU(c), - nn.Conv2d(c, c, 3, 1, 1), - nn.PReLU(c), - )) - self.out_flows = 2 - self.out_msgs = out_msgs - self.out_msgs_flows = out_msgs_flows if out_msgs > 0 else 0 - self.out_locals = out_locals - self.out_local_flows = out_local_flows 
if out_locals > 0 else 0 - self.out_masks = out_masks - self.out_feat_flows = out_feat_flows - - self.conv_last = nn.Sequential( - nn.ConvTranspose2d(c, c, 4, 2, 1), - nn.PReLU(c), - nn.ConvTranspose2d(c, self.out_flows+self.out_msgs+self.out_msgs_flows + - self.out_locals+self.out_local_flows+self.out_masks+self.out_feat_flows, 4, 2, 1), - ) - - def forward(self, accumulated_flow, *other): - x = [accumulated_flow] - for each in other: - if each is not None: - assert(accumulated_flow.shape[-1] == each.shape[-1]), "decoder want {}, but get {}".format( - accumulated_flow.shape, each.shape) - x.append(each) - feat = self.conv0(torch.cat(x, dim=1)) - for convblock1 in self.convblocks: - feat = convblock1(feat) + feat - feat = self.conv_last(feat) - prev = 0 - flow = feat[:, prev:prev+self.out_flows, :, :] - prev += self.out_flows - message = feat[:, prev:prev+self.out_msgs, - :, :] if self.out_msgs > 0 else None - prev += self.out_msgs - message_flow = feat[:, prev:prev + self.out_msgs_flows, - :, :] if self.out_msgs_flows > 0 else None - prev += self.out_msgs_flows - local_message = feat[:, prev:prev + self.out_locals, - :, :] if self.out_locals > 0 else None - prev += self.out_locals - local_message_flow = feat[:, prev:prev+self.out_local_flows, - :, :] if self.out_local_flows > 0 else None - prev += self.out_local_flows - mask = torch.sigmoid( - feat[:, prev:prev+self.out_masks, :, :]) if self.out_masks > 0 else None - prev += self.out_masks - feat_flow = feat[:, prev:prev+self.out_feat_flows, - :, :] if self.out_feat_flows > 0 else None - prev += self.out_feat_flows - return flow, mask, message, message_flow, local_message, local_message_flow, feat_flow - - -class CINN(nn.Module): - def __init__(self, DIM_SHADER_REFERENCE, target_feature_chns=[512, 256, 128, 64, 64], feature_chns=[2048, 1024, 512, 256, 64], out_msgs_chn=[2048, 1024, 512, 256, 64, 64], out_locals_chn=[2048, 1024, 512, 256, 64, 0], block_num=[1, 1, 1, 1, 1, 2], block_chn_num=[224, 224, 224, 224, 224, 224]): - super(CINN, self).__init__() - - self.in_msgs_chn = [0, *out_msgs_chn[:-1]] - self.in_locals_chn = [0, *out_locals_chn[:-1]] - - self.decoder_blocks = nn.ModuleList() - self.feed_weighted = True - if self.feed_weighted: - in_planes = 2+2+DIM_SHADER_REFERENCE*2 - else: - in_planes = 2+DIM_SHADER_REFERENCE - for each_target_feature_chns, each_feature_chns, each_out_msgs_chn, each_out_locals_chn, each_in_msgs_chn, each_in_locals_chn, each_block_num, each_block_chn_num in zip(target_feature_chns, feature_chns, out_msgs_chn, out_locals_chn, self.in_msgs_chn, self.in_locals_chn, block_num, block_chn_num): - self.decoder_blocks.append( - DecoderBlock(in_planes+each_target_feature_chns+each_feature_chns+each_in_locals_chn+each_in_msgs_chn, c=each_block_chn_num, block_nums=each_block_num, out_msgs=each_out_msgs_chn, out_locals=each_out_locals_chn, out_masks=2+each_out_locals_chn)) - for i in range(len(feature_chns), len(out_locals_chn)): - #print("append extra block", i, "msg", - # out_msgs_chn[i], "local", out_locals_chn[i], "block", block_num[i]) - self.decoder_blocks.append( - DecoderBlock(in_planes+self.in_msgs_chn[i]+self.in_locals_chn[i], c=block_chn_num[i], block_nums=block_num[i], out_msgs=out_msgs_chn[i], out_locals=out_locals_chn[i], out_masks=2+out_msgs_chn[i], out_feat_flows=0)) - - def apply_flow(self, mask, message, message_flow, local_message, local_message_flow, x_reference, accumulated_flow, each_x_reference_features=None, each_x_reference_features_flow=None): - if each_x_reference_features is not None: - 
size_from = each_x_reference_features - else: - size_from = x_reference - f_size = (size_from.shape[2], size_from.shape[3]) - accumulated_flow = self.flow_rescale( - accumulated_flow, size_from) - # mask = warp_features(F.interpolate( - # mask, size=f_size, mode="bilinear"), accumulated_flow) if mask is not None else None - mask = F.interpolate( - mask, size=f_size, mode="bilinear") if mask is not None else None - message = F.interpolate( - message, size=f_size, mode="bilinear") if message is not None else None - message_flow = self.flow_rescale( - message_flow, size_from) if message_flow is not None else None - message = warp_features( - message, message_flow) if message_flow is not None else message - - local_message = F.interpolate( - local_message, size=f_size, mode="bilinear") if local_message is not None else None - local_message_flow = self.flow_rescale( - local_message_flow, size_from) if local_message_flow is not None else None - local_message = warp_features( - local_message, local_message_flow) if local_message_flow is not None else local_message - - warp_x_reference = warp_features(F.interpolate( - x_reference, size=f_size, mode="bilinear"), accumulated_flow) - - each_x_reference_features_flow = self.flow_rescale( - each_x_reference_features_flow, size_from) if (each_x_reference_features is not None and each_x_reference_features_flow is not None) else None - warp_each_x_reference_features = warp_features( - each_x_reference_features, each_x_reference_features_flow) if each_x_reference_features_flow is not None else each_x_reference_features - - return mask, message, local_message, warp_x_reference, accumulated_flow, warp_each_x_reference_features, each_x_reference_features_flow - - def forward(self, x_target_features=[], x_reference=None, x_reference_features=[]): - y_flow = [] - y_feat_flow = [] - - y_local_message = [] - y_warp_x_reference = [] - y_warp_x_reference_features = [] - - y_weighted_flow = [] - y_weighted_mask = [] - y_weighted_message = [] - y_weighted_x_reference = [] - y_weighted_x_reference_features = [] - - for pyrlevel, ifblock in enumerate(self.decoder_blocks): - stacked_wref = [] - stacked_feat = [] - stacked_anci = [] - stacked_flow = [] - stacked_mask = [] - stacked_mesg = [] - stacked_locm = [] - stacked_feat_flow = [] - for view_id in range(x_reference.shape[1]): # NMCHW - - if pyrlevel == 0: - # create from zero flow - feat_ev = x_reference_features[pyrlevel][:, - view_id, :, :, :] if pyrlevel < len(x_reference_features) else None - - accumulated_flow = torch.zeros_like( - feat_ev[:, :2, :, :]).to(device) - accumulated_feat_flow = torch.zeros_like( - feat_ev[:, :32, :, :]).to(device) - # domestic inputs - warp_x_reference = F.interpolate(x_reference[:, view_id, :, :, :], size=( - feat_ev.shape[-2], feat_ev.shape[-1]), mode="bilinear") - warp_x_reference_features = feat_ev - - local_message = None - # federated inputs - weighted_flow = accumulated_flow if self.feed_weighted else None - weighted_wref = warp_x_reference if self.feed_weighted else None - weighted_message = None - else: - # resume from last layer - accumulated_flow = y_flow[-1][:, view_id, :, :, :] - accumulated_feat_flow = y_feat_flow[-1][:, - view_id, :, :, :] if y_feat_flow[-1] is not None else None - # domestic inputs - warp_x_reference = y_warp_x_reference[-1][:, - view_id, :, :, :] - warp_x_reference_features = y_warp_x_reference_features[-1][:, - view_id, :, :, :] if y_warp_x_reference_features[-1] is not None else None - local_message = y_local_message[-1][:, view_id, :, - :, :] if 
len(y_local_message) > 0 else None - - # federated inputs - weighted_flow = y_weighted_flow[-1] if self.feed_weighted else None - weighted_wref = y_weighted_x_reference[-1] if self.feed_weighted else None - weighted_message = y_weighted_message[-1] if len( - y_weighted_message) > 0 else None - scaled_x_target = x_target_features[pyrlevel][:, :, :, :].detach() if pyrlevel < len( - x_target_features) else None - # compute flow - residual_flow, mask, message, message_flow, local_message, local_message_flow, residual_feat_flow = ifblock( - accumulated_flow, scaled_x_target, warp_x_reference, warp_x_reference_features, weighted_flow, weighted_wref, weighted_message, local_message) - accumulated_flow = residual_flow + accumulated_flow - accumulated_feat_flow = accumulated_flow - - feat_ev = x_reference_features[pyrlevel+1][:, - view_id, :, :, :] if pyrlevel+1 < len(x_reference_features) else None - mask, message, local_message, warp_x_reference, accumulated_flow, warp_x_reference_features, accumulated_feat_flow = self.apply_flow( - mask, message, message_flow, local_message, local_message_flow, x_reference[:, view_id, :, :, :], accumulated_flow, feat_ev, accumulated_feat_flow) - stacked_flow.append(accumulated_flow) - if accumulated_feat_flow is not None: - stacked_feat_flow.append(accumulated_feat_flow) - stacked_mask.append(mask) - if message is not None: - stacked_mesg.append(message) - if local_message is not None: - stacked_locm.append(local_message) - stacked_wref.append(warp_x_reference) - if warp_x_reference_features is not None: - stacked_feat.append(warp_x_reference_features) - - stacked_flow = torch.stack(stacked_flow, dim=1) # M*NCHW -> NMCHW - stacked_feat_flow = torch.stack(stacked_feat_flow, dim=1) if len( - stacked_feat_flow) > 0 else None - stacked_mask = torch.stack( - stacked_mask, dim=1) - - stacked_mesg = torch.stack(stacked_mesg, dim=1) if len( - stacked_mesg) > 0 else None - stacked_locm = torch.stack(stacked_locm, dim=1) if len( - stacked_locm) > 0 else None - - stacked_wref = torch.stack(stacked_wref, dim=1) - stacked_feat = torch.stack(stacked_feat, dim=1) if len( - stacked_feat) > 0 else None - stacked_anci = torch.stack(stacked_anci, dim=1) if len( - stacked_anci) > 0 else None - y_flow.append(stacked_flow) - y_feat_flow.append(stacked_feat_flow) - - y_warp_x_reference.append(stacked_wref) - y_warp_x_reference_features.append(stacked_feat) - # compute normalized confidence - stacked_contrib = torch.nn.functional.softmax(stacked_mask, dim=1) - - # torch.sum to remove temp dimension M from NMCHW --> NCHW - weighted_flow = torch.sum( - stacked_mask[:, :, 0:1, :, :] * stacked_contrib[:, :, 0:1, :, :] * stacked_flow, dim=1) - weighted_mask = torch.sum( - stacked_contrib[:, :, 0:1, :, :] * stacked_mask[:, :, 0:1, :, :], dim=1) - weighted_wref = torch.sum( - stacked_mask[:, :, 0:1, :, :] * stacked_contrib[:, :, 0:1, :, :] * stacked_wref, dim=1) if stacked_wref is not None else None - weighted_feat = torch.sum( - stacked_mask[:, :, 1:2, :, :] * stacked_contrib[:, :, 1:2, :, :] * stacked_feat, dim=1) if stacked_feat is not None else None - weighted_mesg = torch.sum( - stacked_mask[:, :, 2:, :, :] * stacked_contrib[:, :, 2:, :, :] * stacked_mesg, dim=1) if stacked_mesg is not None else None - y_weighted_flow.append(weighted_flow) - y_weighted_mask.append(weighted_mask) - if weighted_mesg is not None: - y_weighted_message.append(weighted_mesg) - if stacked_locm is not None: - y_local_message.append(stacked_locm) - y_weighted_message.append(weighted_mesg) - 
y_weighted_x_reference.append(weighted_wref) - y_weighted_x_reference_features.append(weighted_feat) - - if weighted_feat is not None: - y_weighted_x_reference_features.append(weighted_feat) - return { - "y_last_remote_features": [weighted_mesg], - } - - def flow_rescale(self, prev_flow, each_x_reference_features): - if prev_flow is None: - prev_flow = torch.zeros_like( - each_x_reference_features[:, :2]).to(device) - else: - up_scale_factor = each_x_reference_features.shape[-1] / \ - prev_flow.shape[-1] - if up_scale_factor != 1: - prev_flow = F.interpolate(prev_flow, scale_factor=up_scale_factor, mode="bilinear", - align_corners=False, recompute_scale_factor=False) * up_scale_factor - return prev_flow diff --git a/spaces/niuzhiwei/stabilityai-stable-diffusion-2-1/README.md b/spaces/niuzhiwei/stabilityai-stable-diffusion-2-1/README.md deleted file mode 100644 index 7c2d1608f359a4137104e32847812228a0a0b37c..0000000000000000000000000000000000000000 --- a/spaces/niuzhiwei/stabilityai-stable-diffusion-2-1/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Stabilityai Stable Diffusion 2 1 -emoji: 📚 -colorFrom: gray -colorTo: purple -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/nupurkmr9/custom-diffusion/style.css b/spaces/nupurkmr9/custom-diffusion/style.css deleted file mode 100644 index c4739b4ea5fc35e774a049e3dacc443f7f0eac19..0000000000000000000000000000000000000000 --- a/spaces/nupurkmr9/custom-diffusion/style.css +++ /dev/null @@ -1,3 +0,0 @@ -h1 { - text-align: center; -} diff --git a/spaces/ofig/live-lm-critic/gec/src/run-round1.sh b/spaces/ofig/live-lm-critic/gec/src/run-round1.sh deleted file mode 100644 index 28093744a39a2c5aee3ea1ebf5aceb038ff4986f..0000000000000000000000000000000000000000 --- a/spaces/ofig/live-lm-critic/gec/src/run-round1.sh +++ /dev/null @@ -1,75 +0,0 @@ -exit 0; -################################################################################ -# run the following commands one by one in the `gec/` directory of the repo -################################################################################ -export CUDA_VISIBLE_DEVICES=0 -conda activate lm-critic - -############### Train the fixer ############### -dt=`date '+%Y%m%d_%H%M%S'` -outdir=data/round1__BIFI/model-fixer__${dt} -mkdir -p $outdir -python3.8 -u src/run_seq2seq.py \ - --model_name_or_path facebook/bart-base --task summarization --text_column bad_detoked --summary_column good_detoked \ - --do_train --num_train_epochs 1 --train_file data/round1__BIFI/BIFI_paired_data_9M.json \ - --preprocessing_num_workers 20 --overwrite_output_dir --output_dir $outdir --predict_with_generate --fp16 \ - --per_device_train_batch_size 64 --gradient_accumulation_steps 8 --max_source_length 64 --max_target_length 64 \ - --logging_first_step --logging_steps 20 --save_steps 2000 \ - |& tee $outdir/log.txt - - - -############### Run the fixer on benchmarks ############### -model_path=data/round1__BIFI/model-fixer -#BEA2019 -python src/run_fixer.py -m $model_path -i benchmarks/wi+locness_v2.1.bea19/m2/ABCN.dev.bea19.orig.txt -o $model_path/predictions/bea19dev.out.txt --bea19 -#CoNLL2014 -python src/run_fixer.py -m $model_path -i benchmarks/conll14st-test-data/noalt/official-2014.combined.orig.txt -o $model_path/predictions/conll14.out.txt -#GMEG-wiki -python src/run_fixer.py -m $model_path -i benchmarks/GMEG/data/test/wiki/source -o $model_path/predictions/gmeg.wiki.out.txt 
-#GMEG-yahoo -python src/run_fixer.py -m $model_path -i benchmarks/GMEG/data/test/yahoo/source -o $model_path/predictions/gmeg.yahoo.out.txt - - - -############### Evaluate the fixer outputs ############### -#CoNLL2014 -python2 benchmarks/m2scorer/scripts/m2scorer.py $model_path/predictions/conll14.out.txt \ - benchmarks/conll14st-test-data/noalt/official-2014.combined.m2 | tee $model_path/predictions/conll14.eval.txt -# Precision : 0.6444 -# Recall : 0.3569 -# F_0.5 : 0.5550 - -#BEA2019 and GMEG uses errant scorer, which needs its own environment -conda deactivate -conda activate errant200 - -#BEA2019 -errant_parallel -orig benchmarks/wi+locness_v2.1.bea19/m2/ABCN.dev.bea19.orig.txt \ - -cor $model_path/predictions/bea19dev.out.txt \ - -out $model_path/predictions/bea19dev.outm2.txt && \ -errant_compare -hyp $model_path/predictions/bea19dev.outm2.txt -ref benchmarks/wi+locness_v2.1.bea19/m2/ABCN.dev.gold.bea19.m2 | tee $model_path/predictions/bea19dev.eval.txt -# =========== Span-Based Correction ============ -# TP FP FN Prec Rec F0.5 -# 1848 1733 5613 0.5161 0.2477 0.4241 -# ============================================== - -#GEMG-wiki -errant_parallel -orig benchmarks/GMEG/data/test/wiki/source \ - -cor $model_path/predictions/gmeg.wiki.out.txt \ - -out $model_path/predictions/gmeg.wiki.outm2.txt && \ -errant_compare -hyp $model_path/predictions/gmeg.wiki.outm2.txt -ref benchmarks/GMEG/data/test/wiki/ref.m2 | tee $model_path/predictions/gmeg.wiki.eval.txt -# =========== Span-Based Correction ============ -# TP FP FN Prec Rec F0.5 -# 468 339 925 0.5799 0.336 0.5064 -# ============================================== - -#GEMG-yahoo -errant_parallel -orig benchmarks/GMEG/data/test/yahoo/source \ - -cor $model_path/predictions/gmeg.yahoo.out.txt \ - -out $model_path/predictions/gmeg.yahoo.outm2.txt && \ -errant_compare -hyp $model_path/predictions/gmeg.yahoo.outm2.txt -ref benchmarks/GMEG/data/test/yahoo/ref.m2 | tee $model_path/predictions/gmeg.yahoo.eval.txt -# =========== Span-Based Correction ============ -# TP FP FN Prec Rec F0.5 -# 382 329 428 0.5373 0.4716 0.5227 -# ============================================== diff --git a/spaces/ofikodar/chatgpt-resume-builder/src/templates/resume.html b/spaces/ofikodar/chatgpt-resume-builder/src/templates/resume.html deleted file mode 100644 index 91759827708c40bcfc372acc8d0a7773f3efb150..0000000000000000000000000000000000000000 --- a/spaces/ofikodar/chatgpt-resume-builder/src/templates/resume.html +++ /dev/null @@ -1,62 +0,0 @@ - - - -
-    {{name}}
-    {{title}}
-    Summary
-    {{summary}}
-    Employment
-    {% for experience in workExperience %}
-    {{experience.dates}} | {{experience.company}} | {{experience.title}}
-    {{experience.description}}
-    {% endfor %}
-    Education
-    {% for edu in education %}
-    {{edu.dates}} | {{edu.school}} | {{edu.degree}}
-    {{edu.description}}
-    {% endfor %}
-    Contact Information
-    Email: {{contactInfo.email}}
-    Phone: {{contactInfo.phone}}
-    Address: {{contactInfo.address}}
-    LinkedIn: LinkedIn Profile
-    Github: GitHub Profile
-    Skills
-    {% for skill in skills %}
-    {{skill}}
-    {% endfor %}
        - - diff --git a/spaces/olanigan/glaiveai-glaive-coder-7b/README.md b/spaces/olanigan/glaiveai-glaive-coder-7b/README.md deleted file mode 100644 index eb6f4ca208e2c65abde8253362e333fbd4b87ee5..0000000000000000000000000000000000000000 --- a/spaces/olanigan/glaiveai-glaive-coder-7b/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Glaiveai Glaive Coder 7b -emoji: 🔥 -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.44.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/peekaboo/Chatbot_Streamlit/README.md b/spaces/peekaboo/Chatbot_Streamlit/README.md deleted file mode 100644 index fe8f058691dc5086057df1b049a1cda5524e44fd..0000000000000000000000000000000000000000 --- a/spaces/peekaboo/Chatbot_Streamlit/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Chatbot Streamlit -emoji: 🌍 -colorFrom: purple -colorTo: indigo -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/json.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/json.py deleted file mode 100644 index ea94493f21e6f5583469d882d08203381ee31117..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/json.py +++ /dev/null @@ -1,140 +0,0 @@ -from pathlib import Path -from json import loads, dumps -from typing import Any, Callable, Optional, Union - -from .text import Text -from .highlighter import JSONHighlighter, NullHighlighter - - -class JSON: - """A renderable which pretty prints JSON. - - Args: - json (str): JSON encoded data. - indent (Union[None, int, str], optional): Number of characters to indent by. Defaults to 2. - highlight (bool, optional): Enable highlighting. Defaults to True. - skip_keys (bool, optional): Skip keys not of a basic type. Defaults to False. - ensure_ascii (bool, optional): Escape all non-ascii characters. Defaults to False. - check_circular (bool, optional): Check for circular references. Defaults to True. - allow_nan (bool, optional): Allow NaN and Infinity values. Defaults to True. - default (Callable, optional): A callable that converts values that can not be encoded - in to something that can be JSON encoded. Defaults to None. - sort_keys (bool, optional): Sort dictionary keys. Defaults to False. 
- """ - - def __init__( - self, - json: str, - indent: Union[None, int, str] = 2, - highlight: bool = True, - skip_keys: bool = False, - ensure_ascii: bool = False, - check_circular: bool = True, - allow_nan: bool = True, - default: Optional[Callable[[Any], Any]] = None, - sort_keys: bool = False, - ) -> None: - data = loads(json) - json = dumps( - data, - indent=indent, - skipkeys=skip_keys, - ensure_ascii=ensure_ascii, - check_circular=check_circular, - allow_nan=allow_nan, - default=default, - sort_keys=sort_keys, - ) - highlighter = JSONHighlighter() if highlight else NullHighlighter() - self.text = highlighter(json) - self.text.no_wrap = True - self.text.overflow = None - - @classmethod - def from_data( - cls, - data: Any, - indent: Union[None, int, str] = 2, - highlight: bool = True, - skip_keys: bool = False, - ensure_ascii: bool = False, - check_circular: bool = True, - allow_nan: bool = True, - default: Optional[Callable[[Any], Any]] = None, - sort_keys: bool = False, - ) -> "JSON": - """Encodes a JSON object from arbitrary data. - - Args: - data (Any): An object that may be encoded in to JSON - indent (Union[None, int, str], optional): Number of characters to indent by. Defaults to 2. - highlight (bool, optional): Enable highlighting. Defaults to True. - default (Callable, optional): Optional callable which will be called for objects that cannot be serialized. Defaults to None. - skip_keys (bool, optional): Skip keys not of a basic type. Defaults to False. - ensure_ascii (bool, optional): Escape all non-ascii characters. Defaults to False. - check_circular (bool, optional): Check for circular references. Defaults to True. - allow_nan (bool, optional): Allow NaN and Infinity values. Defaults to True. - default (Callable, optional): A callable that converts values that can not be encoded - in to something that can be JSON encoded. Defaults to None. - sort_keys (bool, optional): Sort dictionary keys. Defaults to False. - - Returns: - JSON: New JSON object from the given data. 
- """ - json_instance: "JSON" = cls.__new__(cls) - json = dumps( - data, - indent=indent, - skipkeys=skip_keys, - ensure_ascii=ensure_ascii, - check_circular=check_circular, - allow_nan=allow_nan, - default=default, - sort_keys=sort_keys, - ) - highlighter = JSONHighlighter() if highlight else NullHighlighter() - json_instance.text = highlighter(json) - json_instance.text.no_wrap = True - json_instance.text.overflow = None - return json_instance - - def __rich__(self) -> Text: - return self.text - - -if __name__ == "__main__": - - import argparse - import sys - - parser = argparse.ArgumentParser(description="Pretty print json") - parser.add_argument( - "path", - metavar="PATH", - help="path to file, or - for stdin", - ) - parser.add_argument( - "-i", - "--indent", - metavar="SPACES", - type=int, - help="Number of spaces in an indent", - default=2, - ) - args = parser.parse_args() - - from pip._vendor.rich.console import Console - - console = Console() - error_console = Console(stderr=True) - - try: - if args.path == "-": - json_data = sys.stdin.read() - else: - json_data = Path(args.path).read_text() - except Exception as error: - error_console.print(f"Unable to read {args.path!r}; {error}") - sys.exit(-1) - - console.print(JSON(json_data, indent=args.indent), soft_wrap=True) diff --git a/spaces/plzdontcry/dakubettergpt/src/components/Menu/index.ts b/spaces/plzdontcry/dakubettergpt/src/components/Menu/index.ts deleted file mode 100644 index a6de24ca955186cd902e5c7d0a98f971728c108e..0000000000000000000000000000000000000000 --- a/spaces/plzdontcry/dakubettergpt/src/components/Menu/index.ts +++ /dev/null @@ -1 +0,0 @@ -export { default } from './Menu'; diff --git a/spaces/plzdontcry/dakubettergpt/src/hooks/useSubmit.ts b/spaces/plzdontcry/dakubettergpt/src/hooks/useSubmit.ts deleted file mode 100644 index 3240f41431f210f872d7194e520cf804457c7b8e..0000000000000000000000000000000000000000 --- a/spaces/plzdontcry/dakubettergpt/src/hooks/useSubmit.ts +++ /dev/null @@ -1,210 +0,0 @@ -import React from 'react'; -import useStore from '@store/store'; -import { useTranslation } from 'react-i18next'; -import { ChatInterface, MessageInterface } from '@type/chat'; -import { getChatCompletion, getChatCompletionStream } from '@api/api'; -import { parseEventSource } from '@api/helper'; -import { limitMessageTokens, updateTotalTokenUsed } from '@utils/messageUtils'; -import { _defaultChatConfig } from '@constants/chat'; -import { officialAPIEndpoint } from '@constants/auth'; - -const useSubmit = () => { - const { t, i18n } = useTranslation('api'); - const error = useStore((state) => state.error); - const setError = useStore((state) => state.setError); - const apiEndpoint = useStore((state) => state.apiEndpoint); - const apiKey = useStore((state) => state.apiKey); - const setGenerating = useStore((state) => state.setGenerating); - const generating = useStore((state) => state.generating); - const currentChatIndex = useStore((state) => state.currentChatIndex); - const setChats = useStore((state) => state.setChats); - - const generateTitle = async ( - message: MessageInterface[] - ): Promise => { - let data; - try { - if (!apiKey || apiKey.length === 0) { - // official endpoint - if (apiEndpoint === officialAPIEndpoint) { - throw new Error(t('noApiKeyWarning') as string); - } - - // other endpoints - data = await getChatCompletion( - useStore.getState().apiEndpoint, - message, - _defaultChatConfig - ); - } else if (apiKey) { - // own apikey - data = await getChatCompletion( - useStore.getState().apiEndpoint, - 
message, - _defaultChatConfig, - apiKey - ); - } - } catch (error: unknown) { - throw new Error(`Error generating title!\n${(error as Error).message}`); - } - return data.choices[0].message.content; - }; - - const handleSubmit = async () => { - const chats = useStore.getState().chats; - if (generating || !chats) return; - - const updatedChats: ChatInterface[] = JSON.parse(JSON.stringify(chats)); - - updatedChats[currentChatIndex].messages.push({ - role: 'assistant', - content: '', - }); - - setChats(updatedChats); - setGenerating(true); - - try { - let stream; - if (chats[currentChatIndex].messages.length === 0) - throw new Error('No messages submitted!'); - - const messages = limitMessageTokens( - chats[currentChatIndex].messages, - chats[currentChatIndex].config.max_tokens, - chats[currentChatIndex].config.model - ); - if (messages.length === 0) throw new Error('Message exceed max token!'); - - // no api key (free) - if (!apiKey || apiKey.length === 0) { - // official endpoint - if (apiEndpoint === officialAPIEndpoint) { - throw new Error(t('noApiKeyWarning') as string); - } - - // other endpoints - stream = await getChatCompletionStream( - useStore.getState().apiEndpoint, - messages, - chats[currentChatIndex].config - ); - } else if (apiKey) { - // own apikey - stream = await getChatCompletionStream( - useStore.getState().apiEndpoint, - messages, - chats[currentChatIndex].config, - apiKey - ); - } - - if (stream) { - if (stream.locked) - throw new Error( - 'Oops, the stream is locked right now. Please try again' - ); - const reader = stream.getReader(); - let reading = true; - let partial = ''; - while (reading && useStore.getState().generating) { - const { done, value } = await reader.read(); - const result = parseEventSource( - partial + new TextDecoder().decode(value) - ); - partial = ''; - - if (result === '[DONE]' || done) { - reading = false; - } else { - const resultString = result.reduce((output: string, curr) => { - if (typeof curr === 'string') { - partial += curr; - } else { - const content = curr.choices[0].delta.content; - if (content) output += content; - } - return output; - }, ''); - - const updatedChats: ChatInterface[] = JSON.parse( - JSON.stringify(useStore.getState().chats) - ); - const updatedMessages = updatedChats[currentChatIndex].messages; - updatedMessages[updatedMessages.length - 1].content += resultString; - setChats(updatedChats); - } - } - if (useStore.getState().generating) { - reader.cancel('Cancelled by user'); - } else { - reader.cancel('Generation completed'); - } - reader.releaseLock(); - stream.cancel(); - } - - // update tokens used in chatting - const currChats = useStore.getState().chats; - const countTotalTokens = useStore.getState().countTotalTokens; - - if (currChats && countTotalTokens) { - const model = currChats[currentChatIndex].config.model; - const messages = currChats[currentChatIndex].messages; - updateTotalTokenUsed( - model, - messages.slice(0, -1), - messages[messages.length - 1] - ); - } - - // generate title for new chats - if ( - useStore.getState().autoTitle && - currChats && - !currChats[currentChatIndex]?.titleSet - ) { - const messages_length = currChats[currentChatIndex].messages.length; - const assistant_message = - currChats[currentChatIndex].messages[messages_length - 1].content; - const user_message = - currChats[currentChatIndex].messages[messages_length - 2].content; - - const message: MessageInterface = { - role: 'user', - content: `Generate a title in less than 6 words for the following message (language: 
${i18n.language}):\n"""\nUser: ${user_message}\nAssistant: ${assistant_message}\n"""`, - }; - - let title = (await generateTitle([message])).trim(); - if (title.startsWith('"') && title.endsWith('"')) { - title = title.slice(1, -1); - } - const updatedChats: ChatInterface[] = JSON.parse( - JSON.stringify(useStore.getState().chats) - ); - updatedChats[currentChatIndex].title = title; - updatedChats[currentChatIndex].titleSet = true; - setChats(updatedChats); - - // update tokens used for generating title - if (countTotalTokens) { - const model = _defaultChatConfig.model; - updateTotalTokenUsed(model, [message], { - role: 'assistant', - content: title, - }); - } - } - } catch (e: unknown) { - const err = (e as Error).message; - console.log(err); - setError(err); - } - setGenerating(false); - }; - - return { handleSubmit, error }; -}; - -export default useSubmit; diff --git a/spaces/pragnakalp/Text_Summarization/app.py b/spaces/pragnakalp/Text_Summarization/app.py deleted file mode 100644 index 225f33efe961bae1ee8c4f18b4436ccf9d6e8685..0000000000000000000000000000000000000000 --- a/spaces/pragnakalp/Text_Summarization/app.py +++ /dev/null @@ -1,81 +0,0 @@ -from __future__ import absolute_import, division, print_function, unicode_literals -import spacy -import gradio as gr -import os - -from fastai.text.all import * -from transformers import * -# from blurr.data.all import * -# from blurr.modeling.all import * -from spacy_readability import Readability -# from save_data import save_data_and_sendmail - -readablility_nlp = spacy.load('en_core_web_sm') -read = Readability() -cwd = os.getcwd() -readablility_nlp.add_pipe(read, last=True) - -bart_ext_model_path = os.path.join(cwd, 'bart_extractive_model') -bart_extractive_model = BartForConditionalGeneration.from_pretrained(bart_ext_model_path) -bart_extractive_tokenizer = BartTokenizer.from_pretrained('facebook/bart-large-cnn') - -t5_model_path = os.path.join(cwd, 't5_model') -t5_model = AutoModelWithLMHead.from_pretrained(t5_model_path) -t5_tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-summarize-news") - -def generate_text_summarization(sum_type,article): - if article.strip(): - print("text input :",article) - if sum_type == 'BART Extractive Text Summarization': - inputs = bart_extractive_tokenizer([article], max_length=1024, return_tensors='pt') - summary_ids = bart_extractive_model.generate(inputs['input_ids'], num_beams=4, min_length=60, max_length=300, early_stopping=True) - - summary = [bart_extractive_tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids] - print(type(summary)) - print(summary) - summary= summary[0] - doc = readablility_nlp(summary) - summary_score = round(doc._.flesch_kincaid_reading_ease,2) - summarized_data = { - "summary" : summary, - "score" : summary_score - } - - if sum_type == 'T5 Abstractive Text Summarization': - inputs = t5_tokenizer.encode(article, return_tensors="pt", max_length=2048) - summary_ids = t5_model.generate(inputs, - num_beams=2, - no_repeat_ngram_size=2, - min_length=100, - max_length=300, - early_stopping=True) - - summary = t5_tokenizer.decode(summary_ids[0], skip_special_tokens=True) - print(type(summary)) - print(summary) - doc = readablility_nlp(summary) - summary_score = round(doc._.flesch_kincaid_reading_ease,2) - summarized_data = { - "summary" : summary, - "score" : summary_score - } - - # save_data_and_sendmail(article, sum_type, summary) - return summary - else: - raise gr.Error("Please enter text in inputbox!!!!") - 
-input_text=gr.Textbox(lines=5, label="Paragraph") -input_radio= gr.Radio(['BART Extractive Text Summarization','T5 Abstractive Text Summarization'],label='Select summarization',value='BART Extractive Text Summarization') -output_text=gr.Textbox(lines=7, label="Summarize text") -demo = gr.Interface( - generate_text_summarization, - [input_radio,input_text], - output_text, - title="Text Summarization", - css=".gradio-container {background-color: lightgray}", - article="""

        Developed by: Pragnakalp Techlabs

        """ -) - - -demo.launch() \ No newline at end of file diff --git a/spaces/prerna9811/Chord/portaudio/bindings/java/jportaudio/src/com/portaudio/StreamParameters.java b/spaces/prerna9811/Chord/portaudio/bindings/java/jportaudio/src/com/portaudio/StreamParameters.java deleted file mode 100644 index 707dab5ecf2e3becf50ce5afdb2214ad02ae3ad2..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/bindings/java/jportaudio/src/com/portaudio/StreamParameters.java +++ /dev/null @@ -1,57 +0,0 @@ -/* - * Portable Audio I/O Library - * Java Binding for PortAudio - * - * Based on the Open Source API proposed by Ross Bencina - * Copyright (c) 2008 Ross Bencina - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -/** @file - @ingroup bindings_java - - @brief Options to use when opening a stream. -*/ -package com.portaudio; -/** - * Equivalent to PaStreamParameters - * @see PortAudio - * @author Phil Burk - * - */ -public class StreamParameters -{ - public int device = 0; - public int channelCount = 2; - public int sampleFormat = PortAudio.FORMAT_FLOAT_32; - public double suggestedLatency = 0.050; -} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/merge/util.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/merge/util.py deleted file mode 100644 index 42fe39d5f701e683f52ca7c4022b1bb85749fb6b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/merge/util.py +++ /dev/null @@ -1,143 +0,0 @@ -# Copyright 2013 Google, Inc. All Rights Reserved. 
-# -# Google Author(s): Behdad Esfahbod, Roozbeh Pournader - -from fontTools.misc.timeTools import timestampNow -from fontTools.ttLib.tables.DefaultTable import DefaultTable -from functools import reduce -import operator -import logging - - -log = logging.getLogger("fontTools.merge") - - -# General utility functions for merging values from different fonts - - -def equal(lst): - lst = list(lst) - t = iter(lst) - first = next(t) - assert all(item == first for item in t), "Expected all items to be equal: %s" % lst - return first - - -def first(lst): - return next(iter(lst)) - - -def recalculate(lst): - return NotImplemented - - -def current_time(lst): - return timestampNow() - - -def bitwise_and(lst): - return reduce(operator.and_, lst) - - -def bitwise_or(lst): - return reduce(operator.or_, lst) - - -def avg_int(lst): - lst = list(lst) - return sum(lst) // len(lst) - - -def onlyExisting(func): - """Returns a filter func that when called with a list, - only calls func on the non-NotImplemented items of the list, - and only so if there's at least one item remaining. - Otherwise returns NotImplemented.""" - - def wrapper(lst): - items = [item for item in lst if item is not NotImplemented] - return func(items) if items else NotImplemented - - return wrapper - - -def sumLists(lst): - l = [] - for item in lst: - l.extend(item) - return l - - -def sumDicts(lst): - d = {} - for item in lst: - d.update(item) - return d - - -def mergeBits(bitmap): - def wrapper(lst): - lst = list(lst) - returnValue = 0 - for bitNumber in range(bitmap["size"]): - try: - mergeLogic = bitmap[bitNumber] - except KeyError: - try: - mergeLogic = bitmap["*"] - except KeyError: - raise Exception("Don't know how to merge bit %s" % bitNumber) - shiftedBit = 1 << bitNumber - mergedValue = mergeLogic(bool(item & shiftedBit) for item in lst) - returnValue |= mergedValue << bitNumber - return returnValue - - return wrapper - - -class AttendanceRecordingIdentityDict(object): - """A dictionary-like object that records indices of items actually accessed - from a list.""" - - def __init__(self, lst): - self.l = lst - self.d = {id(v): i for i, v in enumerate(lst)} - self.s = set() - - def __getitem__(self, v): - self.s.add(self.d[id(v)]) - return v - - -class GregariousIdentityDict(object): - """A dictionary-like object that welcomes guests without reservations and - adds them to the end of the guest list.""" - - def __init__(self, lst): - self.l = lst - self.s = set(id(v) for v in lst) - - def __getitem__(self, v): - if id(v) not in self.s: - self.s.add(id(v)) - self.l.append(v) - return v - - -class NonhashableDict(object): - """A dictionary-like object mapping objects to values.""" - - def __init__(self, keys, values=None): - if values is None: - self.d = {id(v): i for i, v in enumerate(keys)} - else: - self.d = {id(k): v for k, v in zip(keys, values)} - - def __getitem__(self, k): - return self.d[id(k)] - - def __setitem__(self, k, v): - self.d[id(k)] = v - - def __delitem__(self, k): - del self.d[id(k)] diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/importlib_resources/tests/test_compatibilty_files.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/importlib_resources/tests/test_compatibilty_files.py deleted file mode 100644 index 13ad0dfb21a1d5b7fb91f2419b78b9bdf90f0ec3..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/importlib_resources/tests/test_compatibilty_files.py +++ /dev/null @@ -1,104 +0,0 @@ -import io -import 
unittest - -import importlib_resources as resources - -from importlib_resources._adapters import ( - CompatibilityFiles, - wrap_spec, -) - -from . import util - - -class CompatibilityFilesTests(unittest.TestCase): - @property - def package(self): - bytes_data = io.BytesIO(b'Hello, world!') - return util.create_package( - file=bytes_data, - path='some_path', - contents=('a', 'b', 'c'), - ) - - @property - def files(self): - return resources.files(self.package) - - def test_spec_path_iter(self): - self.assertEqual( - sorted(path.name for path in self.files.iterdir()), - ['a', 'b', 'c'], - ) - - def test_child_path_iter(self): - self.assertEqual(list((self.files / 'a').iterdir()), []) - - def test_orphan_path_iter(self): - self.assertEqual(list((self.files / 'a' / 'a').iterdir()), []) - self.assertEqual(list((self.files / 'a' / 'a' / 'a').iterdir()), []) - - def test_spec_path_is(self): - self.assertFalse(self.files.is_file()) - self.assertFalse(self.files.is_dir()) - - def test_child_path_is(self): - self.assertTrue((self.files / 'a').is_file()) - self.assertFalse((self.files / 'a').is_dir()) - - def test_orphan_path_is(self): - self.assertFalse((self.files / 'a' / 'a').is_file()) - self.assertFalse((self.files / 'a' / 'a').is_dir()) - self.assertFalse((self.files / 'a' / 'a' / 'a').is_file()) - self.assertFalse((self.files / 'a' / 'a' / 'a').is_dir()) - - def test_spec_path_name(self): - self.assertEqual(self.files.name, 'testingpackage') - - def test_child_path_name(self): - self.assertEqual((self.files / 'a').name, 'a') - - def test_orphan_path_name(self): - self.assertEqual((self.files / 'a' / 'b').name, 'b') - self.assertEqual((self.files / 'a' / 'b' / 'c').name, 'c') - - def test_spec_path_open(self): - self.assertEqual(self.files.read_bytes(), b'Hello, world!') - self.assertEqual(self.files.read_text(encoding='utf-8'), 'Hello, world!') - - def test_child_path_open(self): - self.assertEqual((self.files / 'a').read_bytes(), b'Hello, world!') - self.assertEqual( - (self.files / 'a').read_text(encoding='utf-8'), 'Hello, world!' - ) - - def test_orphan_path_open(self): - with self.assertRaises(FileNotFoundError): - (self.files / 'a' / 'b').read_bytes() - with self.assertRaises(FileNotFoundError): - (self.files / 'a' / 'b' / 'c').read_bytes() - - def test_open_invalid_mode(self): - with self.assertRaises(ValueError): - self.files.open('0') - - def test_orphan_path_invalid(self): - with self.assertRaises(ValueError): - CompatibilityFiles.OrphanPath() - - def test_wrap_spec(self): - spec = wrap_spec(self.package) - self.assertIsInstance(spec.loader.get_resource_reader(None), CompatibilityFiles) - - -class CompatibilityFilesNoReaderTests(unittest.TestCase): - @property - def package(self): - return util.create_package_from_loader(None) - - @property - def files(self): - return resources.files(self.package) - - def test_spec_path_joinpath(self): - self.assertIsInstance(self.files / 'a', CompatibilityFiles.OrphanPath) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/backend_bases.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/backend_bases.py deleted file mode 100644 index 958b6e0e1c21984f176038a9d7ed495f55f28e09..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/backend_bases.py +++ /dev/null @@ -1,3483 +0,0 @@ -""" -Abstract base classes define the primitives that renderers and -graphics contexts must implement to serve as a Matplotlib backend. 
- -`RendererBase` - An abstract base class to handle drawing/rendering operations. - -`FigureCanvasBase` - The abstraction layer that separates the `.Figure` from the backend - specific details like a user interface drawing area. - -`GraphicsContextBase` - An abstract base class that provides color, line styles, etc. - -`Event` - The base class for all of the Matplotlib event handling. Derived classes - such as `KeyEvent` and `MouseEvent` store the meta data like keys and - buttons pressed, x and y locations in pixel and `~.axes.Axes` coordinates. - -`ShowBase` - The base class for the ``Show`` class of each interactive backend; the - 'show' callable is then set to ``Show.__call__``. - -`ToolContainerBase` - The base class for the Toolbar class of each interactive backend. -""" - -from collections import namedtuple -from contextlib import ExitStack, contextmanager, nullcontext -from enum import Enum, IntEnum -import functools -import importlib -import inspect -import io -import itertools -import logging -import os -import sys -import time -import weakref -from weakref import WeakKeyDictionary - -import numpy as np - -import matplotlib as mpl -from matplotlib import ( - _api, backend_tools as tools, cbook, colors, _docstring, text, - _tight_bbox, transforms, widgets, is_interactive, rcParams) -from matplotlib._pylab_helpers import Gcf -from matplotlib.backend_managers import ToolManager -from matplotlib.cbook import _setattr_cm -from matplotlib.layout_engine import ConstrainedLayoutEngine -from matplotlib.path import Path -from matplotlib.texmanager import TexManager -from matplotlib.transforms import Affine2D -from matplotlib._enums import JoinStyle, CapStyle - - -_log = logging.getLogger(__name__) -_default_filetypes = { - 'eps': 'Encapsulated Postscript', - 'jpg': 'Joint Photographic Experts Group', - 'jpeg': 'Joint Photographic Experts Group', - 'pdf': 'Portable Document Format', - 'pgf': 'PGF code for LaTeX', - 'png': 'Portable Network Graphics', - 'ps': 'Postscript', - 'raw': 'Raw RGBA bitmap', - 'rgba': 'Raw RGBA bitmap', - 'svg': 'Scalable Vector Graphics', - 'svgz': 'Scalable Vector Graphics', - 'tif': 'Tagged Image File Format', - 'tiff': 'Tagged Image File Format', - 'webp': 'WebP Image Format', -} -_default_backends = { - 'eps': 'matplotlib.backends.backend_ps', - 'jpg': 'matplotlib.backends.backend_agg', - 'jpeg': 'matplotlib.backends.backend_agg', - 'pdf': 'matplotlib.backends.backend_pdf', - 'pgf': 'matplotlib.backends.backend_pgf', - 'png': 'matplotlib.backends.backend_agg', - 'ps': 'matplotlib.backends.backend_ps', - 'raw': 'matplotlib.backends.backend_agg', - 'rgba': 'matplotlib.backends.backend_agg', - 'svg': 'matplotlib.backends.backend_svg', - 'svgz': 'matplotlib.backends.backend_svg', - 'tif': 'matplotlib.backends.backend_agg', - 'tiff': 'matplotlib.backends.backend_agg', - 'webp': 'matplotlib.backends.backend_agg', -} - - -def _safe_pyplot_import(): - """ - Import and return ``pyplot``, correctly setting the backend if one is - already forced. - """ - try: - import matplotlib.pyplot as plt - except ImportError: # Likely due to a framework mismatch. - current_framework = cbook._get_running_interactive_framework() - if current_framework is None: - raise # No, something else went wrong, likely with the install... 
- backend_mapping = { - 'qt': 'qtagg', - 'gtk3': 'gtk3agg', - 'gtk4': 'gtk4agg', - 'wx': 'wxagg', - 'tk': 'tkagg', - 'macosx': 'macosx', - 'headless': 'agg', - } - backend = backend_mapping[current_framework] - rcParams["backend"] = mpl.rcParamsOrig["backend"] = backend - import matplotlib.pyplot as plt # Now this should succeed. - return plt - - -def register_backend(format, backend, description=None): - """ - Register a backend for saving to a given file format. - - Parameters - ---------- - format : str - File extension - backend : module string or canvas class - Backend for handling file output - description : str, default: "" - Description of the file type. - """ - if description is None: - description = '' - _default_backends[format] = backend - _default_filetypes[format] = description - - -def get_registered_canvas_class(format): - """ - Return the registered default canvas for given file format. - Handles deferred import of required backend. - """ - if format not in _default_backends: - return None - backend_class = _default_backends[format] - if isinstance(backend_class, str): - backend_class = importlib.import_module(backend_class).FigureCanvas - _default_backends[format] = backend_class - return backend_class - - -class RendererBase: - """ - An abstract base class to handle drawing/rendering operations. - - The following methods must be implemented in the backend for full - functionality (though just implementing `draw_path` alone would give a - highly capable backend): - - * `draw_path` - * `draw_image` - * `draw_gouraud_triangles` - - The following methods *should* be implemented in the backend for - optimization reasons: - - * `draw_text` - * `draw_markers` - * `draw_path_collection` - * `draw_quad_mesh` - """ - def __init__(self): - super().__init__() - self._texmanager = None - self._text2path = text.TextToPath() - self._raster_depth = 0 - self._rasterizing = False - - def open_group(self, s, gid=None): - """ - Open a grouping element with label *s* and *gid* (if set) as id. - - Only used by the SVG renderer. - """ - - def close_group(self, s): - """ - Close a grouping element with label *s*. - - Only used by the SVG renderer. - """ - - def draw_path(self, gc, path, transform, rgbFace=None): - """Draw a `~.path.Path` instance using the given affine transform.""" - raise NotImplementedError - - def draw_markers(self, gc, marker_path, marker_trans, path, - trans, rgbFace=None): - """ - Draw a marker at each of *path*'s vertices (excluding control points). - - The base (fallback) implementation makes multiple calls to `draw_path`. - Backends may want to override this method in order to draw the marker - only once and reuse it multiple times. - - Parameters - ---------- - gc : `.GraphicsContextBase` - The graphics context. - marker_path : `~matplotlib.path.Path` - The path for the marker. - marker_trans : `~matplotlib.transforms.Transform` - An affine transform applied to the marker. - path : `~matplotlib.path.Path` - The locations to draw the markers. - trans : `~matplotlib.transforms.Transform` - An affine transform applied to the path. 
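# Sketch of the registration helpers defined above. The 'myraw' extension is a
# hypothetical example; a real registration would point at a backend module
# whose canvas implements a matching print_myraw method.
from matplotlib.backend_bases import register_backend, get_registered_canvas_class

register_backend('myraw', 'matplotlib.backends.backend_agg', 'Hypothetical raw dump')
canvas_cls = get_registered_canvas_class('myraw')  # the deferred import happens here
print(canvas_cls.__name__)                         # -> 'FigureCanvasAgg'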
- rgbFace : color, optional - """ - for vertices, codes in path.iter_segments(trans, simplify=False): - if len(vertices): - x, y = vertices[-2:] - self.draw_path(gc, marker_path, - marker_trans + - transforms.Affine2D().translate(x, y), - rgbFace) - - def draw_path_collection(self, gc, master_transform, paths, all_transforms, - offsets, offset_trans, facecolors, edgecolors, - linewidths, linestyles, antialiaseds, urls, - offset_position): - """ - Draw a collection of *paths*. - - Each path is first transformed by the corresponding entry - in *all_transforms* (a list of (3, 3) matrices) and then by - *master_transform*. They are then translated by the corresponding - entry in *offsets*, which has been first transformed by *offset_trans*. - - *facecolors*, *edgecolors*, *linewidths*, *linestyles*, and - *antialiased* are lists that set the corresponding properties. - - *offset_position* is unused now, but the argument is kept for - backwards compatibility. - - The base (fallback) implementation makes multiple calls to `draw_path`. - Backends may want to override this in order to render each set of - path data only once, and then reference that path multiple times with - the different offsets, colors, styles etc. The generator methods - `_iter_collection_raw_paths` and `_iter_collection` are provided to - help with (and standardize) the implementation across backends. It - is highly recommended to use those generators, so that changes to the - behavior of `draw_path_collection` can be made globally. - """ - path_ids = self._iter_collection_raw_paths(master_transform, - paths, all_transforms) - - for xo, yo, path_id, gc0, rgbFace in self._iter_collection( - gc, list(path_ids), offsets, offset_trans, - facecolors, edgecolors, linewidths, linestyles, - antialiaseds, urls, offset_position): - path, transform = path_id - # Only apply another translation if we have an offset, else we - # reuse the initial transform. - if xo != 0 or yo != 0: - # The transformation can be used by multiple paths. Since - # translate is a inplace operation, we need to copy the - # transformation by .frozen() before applying the translation. - transform = transform.frozen() - transform.translate(xo, yo) - self.draw_path(gc0, path, transform, rgbFace) - - def draw_quad_mesh(self, gc, master_transform, meshWidth, meshHeight, - coordinates, offsets, offsetTrans, facecolors, - antialiased, edgecolors): - """ - Draw a quadmesh. - - The base (fallback) implementation converts the quadmesh to paths and - then calls `draw_path_collection`. - """ - - from matplotlib.collections import QuadMesh - paths = QuadMesh._convert_mesh_to_paths(coordinates) - - if edgecolors is None: - edgecolors = facecolors - linewidths = np.array([gc.get_linewidth()], float) - - return self.draw_path_collection( - gc, master_transform, paths, [], offsets, offsetTrans, facecolors, - edgecolors, linewidths, [], [antialiased], [None], 'screen') - - @_api.deprecated("3.7", alternative="draw_gouraud_triangles") - def draw_gouraud_triangle(self, gc, points, colors, transform): - """ - Draw a Gouraud-shaded triangle. - - Parameters - ---------- - gc : `.GraphicsContextBase` - The graphics context. - points : (3, 2) array-like - Array of (x, y) points for the triangle. - colors : (3, 4) array-like - RGBA colors for each point of the triangle. - transform : `~matplotlib.transforms.Transform` - An affine transform to apply to the points. 
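# Sketch of the fallback marker placement used by draw_markers above: the same
# marker path is drawn once per vertex by composing its transform with a
# per-vertex translation (the vertex values below are made up for illustration).
from matplotlib.path import Path
from matplotlib.transforms import Affine2D

marker_path = Path.unit_circle()
marker_trans = Affine2D().scale(3.0)               # marker sized in pixels
for x, y in [(10.0, 10.0), (20.0, 40.0)]:          # vertices of the host path
    placed = marker_trans + Affine2D().translate(x, y)
    print(placed.transform([0.0, 0.0]))            # the marker centre lands on the vertex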
- """ - raise NotImplementedError - - def draw_gouraud_triangles(self, gc, triangles_array, colors_array, - transform): - """ - Draw a series of Gouraud triangles. - - Parameters - ---------- - gc : `.GraphicsContextBase` - The graphics context. - triangles_array : (N, 3, 2) array-like - Array of *N* (x, y) points for the triangles. - colors_array : (N, 3, 4) array-like - Array of *N* RGBA colors for each point of the triangles. - transform : `~matplotlib.transforms.Transform` - An affine transform to apply to the points. - """ - raise NotImplementedError - - def _iter_collection_raw_paths(self, master_transform, paths, - all_transforms): - """ - Helper method (along with `_iter_collection`) to implement - `draw_path_collection` in a memory-efficient manner. - - This method yields all of the base path/transform combinations, given a - master transform, a list of paths and list of transforms. - - The arguments should be exactly what is passed in to - `draw_path_collection`. - - The backend should take each yielded path and transform and create an - object that can be referenced (reused) later. - """ - Npaths = len(paths) - Ntransforms = len(all_transforms) - N = max(Npaths, Ntransforms) - - if Npaths == 0: - return - - transform = transforms.IdentityTransform() - for i in range(N): - path = paths[i % Npaths] - if Ntransforms: - transform = Affine2D(all_transforms[i % Ntransforms]) - yield path, transform + master_transform - - def _iter_collection_uses_per_path(self, paths, all_transforms, - offsets, facecolors, edgecolors): - """ - Compute how many times each raw path object returned by - `_iter_collection_raw_paths` would be used when calling - `_iter_collection`. This is intended for the backend to decide - on the tradeoff between using the paths in-line and storing - them once and reusing. Rounds up in case the number of uses - is not the same for every path. - """ - Npaths = len(paths) - if Npaths == 0 or len(facecolors) == len(edgecolors) == 0: - return 0 - Npath_ids = max(Npaths, len(all_transforms)) - N = max(Npath_ids, len(offsets)) - return (N + Npath_ids - 1) // Npath_ids - - def _iter_collection(self, gc, path_ids, offsets, offset_trans, facecolors, - edgecolors, linewidths, linestyles, - antialiaseds, urls, offset_position): - """ - Helper method (along with `_iter_collection_raw_paths`) to implement - `draw_path_collection` in a memory-efficient manner. - - This method yields all of the path, offset and graphics context - combinations to draw the path collection. The caller should already - have looped over the results of `_iter_collection_raw_paths` to draw - this collection. - - The arguments should be the same as that passed into - `draw_path_collection`, with the exception of *path_ids*, which is a - list of arbitrary objects that the backend will use to reference one of - the paths created in the `_iter_collection_raw_paths` stage. - - Each yielded result is of the form:: - - xo, yo, path_id, gc, rgbFace - - where *xo*, *yo* is an offset; *path_id* is one of the elements of - *path_ids*; *gc* is a graphics context and *rgbFace* is a color to - use for filling the path. 
- """ - Npaths = len(path_ids) - Noffsets = len(offsets) - N = max(Npaths, Noffsets) - Nfacecolors = len(facecolors) - Nedgecolors = len(edgecolors) - Nlinewidths = len(linewidths) - Nlinestyles = len(linestyles) - Nurls = len(urls) - - if (Nfacecolors == 0 and Nedgecolors == 0) or Npaths == 0: - return - - gc0 = self.new_gc() - gc0.copy_properties(gc) - - def cycle_or_default(seq, default=None): - # Cycle over *seq* if it is not empty; else always yield *default*. - return (itertools.cycle(seq) if len(seq) - else itertools.repeat(default)) - - pathids = cycle_or_default(path_ids) - toffsets = cycle_or_default(offset_trans.transform(offsets), (0, 0)) - fcs = cycle_or_default(facecolors) - ecs = cycle_or_default(edgecolors) - lws = cycle_or_default(linewidths) - lss = cycle_or_default(linestyles) - aas = cycle_or_default(antialiaseds) - urls = cycle_or_default(urls) - - if Nedgecolors == 0: - gc0.set_linewidth(0.0) - - for pathid, (xo, yo), fc, ec, lw, ls, aa, url in itertools.islice( - zip(pathids, toffsets, fcs, ecs, lws, lss, aas, urls), N): - if not (np.isfinite(xo) and np.isfinite(yo)): - continue - if Nedgecolors: - if Nlinewidths: - gc0.set_linewidth(lw) - if Nlinestyles: - gc0.set_dashes(*ls) - if len(ec) == 4 and ec[3] == 0.0: - gc0.set_linewidth(0) - else: - gc0.set_foreground(ec) - if fc is not None and len(fc) == 4 and fc[3] == 0: - fc = None - gc0.set_antialiased(aa) - if Nurls: - gc0.set_url(url) - yield xo, yo, pathid, gc0, fc - gc0.restore() - - def get_image_magnification(self): - """ - Get the factor by which to magnify images passed to `draw_image`. - Allows a backend to have images at a different resolution to other - artists. - """ - return 1.0 - - def draw_image(self, gc, x, y, im, transform=None): - """ - Draw an RGBA image. - - Parameters - ---------- - gc : `.GraphicsContextBase` - A graphics context with clipping information. - - x : scalar - The distance in physical units (i.e., dots or pixels) from the left - hand side of the canvas. - - y : scalar - The distance in physical units (i.e., dots or pixels) from the - bottom side of the canvas. - - im : (N, M, 4) array of `numpy.uint8` - An array of RGBA pixels. - - transform : `~matplotlib.transforms.Affine2DBase` - If and only if the concrete backend is written such that - `option_scale_image` returns ``True``, an affine transformation - (i.e., an `.Affine2DBase`) *may* be passed to `draw_image`. The - translation vector of the transformation is given in physical units - (i.e., dots or pixels). Note that the transformation does not - override *x* and *y*, and has to be applied *before* translating - the result by *x* and *y* (this can be accomplished by adding *x* - and *y* to the translation vector defined by *transform*). - """ - raise NotImplementedError - - def option_image_nocomposite(self): - """ - Return whether image composition by Matplotlib should be skipped. - - Raster backends should usually return False (letting the C-level - rasterizer take care of image composition); vector backends should - usually return ``not rcParams["image.composite_image"]``. - """ - return False - - def option_scale_image(self): - """ - Return whether arbitrary affine transformations in `draw_image` are - supported (True for most vector backends). - """ - return False - - def draw_tex(self, gc, x, y, s, prop, angle, *, mtext=None): - """ - Draw a TeX instance. - - Parameters - ---------- - gc : `.GraphicsContextBase` - The graphics context. - x : float - The x location of the text in display coords. 
- y : float - The y location of the text baseline in display coords. - s : str - The TeX text string. - prop : `~matplotlib.font_manager.FontProperties` - The font properties. - angle : float - The rotation angle in degrees anti-clockwise. - mtext : `~matplotlib.text.Text` - The original text object to be rendered. - """ - self._draw_text_as_path(gc, x, y, s, prop, angle, ismath="TeX") - - def draw_text(self, gc, x, y, s, prop, angle, ismath=False, mtext=None): - """ - Draw a text instance. - - Parameters - ---------- - gc : `.GraphicsContextBase` - The graphics context. - x : float - The x location of the text in display coords. - y : float - The y location of the text baseline in display coords. - s : str - The text string. - prop : `~matplotlib.font_manager.FontProperties` - The font properties. - angle : float - The rotation angle in degrees anti-clockwise. - ismath : bool or "TeX" - If True, use mathtext parser. If "TeX", use tex for rendering. - mtext : `~matplotlib.text.Text` - The original text object to be rendered. - - Notes - ----- - **Note for backend implementers:** - - When you are trying to determine if you have gotten your bounding box - right (which is what enables the text layout/alignment to work - properly), it helps to change the line in text.py:: - - if 0: bbox_artist(self, renderer) - - to if 1, and then the actual bounding box will be plotted along with - your text. - """ - - self._draw_text_as_path(gc, x, y, s, prop, angle, ismath) - - def _get_text_path_transform(self, x, y, s, prop, angle, ismath): - """ - Return the text path and transform. - - Parameters - ---------- - x : float - The x location of the text in display coords. - y : float - The y location of the text baseline in display coords. - s : str - The text to be converted. - prop : `~matplotlib.font_manager.FontProperties` - The font property. - angle : float - Angle in degrees to render the text at. - ismath : bool or "TeX" - If True, use mathtext parser. If "TeX", use tex for rendering. - """ - - text2path = self._text2path - fontsize = self.points_to_pixels(prop.get_size_in_points()) - verts, codes = text2path.get_text_path(prop, s, ismath=ismath) - - path = Path(verts, codes) - angle = np.deg2rad(angle) - if self.flipy(): - width, height = self.get_canvas_width_height() - transform = (Affine2D() - .scale(fontsize / text2path.FONT_SCALE) - .rotate(angle) - .translate(x, height - y)) - else: - transform = (Affine2D() - .scale(fontsize / text2path.FONT_SCALE) - .rotate(angle) - .translate(x, y)) - - return path, transform - - def _draw_text_as_path(self, gc, x, y, s, prop, angle, ismath): - """ - Draw the text by converting them to paths using `.TextToPath`. - - Parameters - ---------- - gc : `.GraphicsContextBase` - The graphics context. - x : float - The x location of the text in display coords. - y : float - The y location of the text baseline in display coords. - s : str - The text to be converted. - prop : `~matplotlib.font_manager.FontProperties` - The font property. - angle : float - Angle in degrees to render the text at. - ismath : bool or "TeX" - If True, use mathtext parser. If "TeX", use tex for rendering. 
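# The text-as-path route used by draw_tex/draw_text above is also reachable
# through the public matplotlib.textpath.TextPath, which wraps the same
# TextToPath machinery; a quick sketch of what the renderer ends up drawing:
from matplotlib.textpath import TextPath

tp = TextPath((0, 0), "Hi", size=12)
print(tp.vertices.shape)   # the glyph outlines as an (N, 2) vertex array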
- """ - path, transform = self._get_text_path_transform( - x, y, s, prop, angle, ismath) - color = gc.get_rgb() - gc.set_linewidth(0.0) - self.draw_path(gc, path, transform, rgbFace=color) - - def get_text_width_height_descent(self, s, prop, ismath): - """ - Get the width, height, and descent (offset from the bottom to the baseline), in - display coords, of the string *s* with `.FontProperties` *prop*. - - Whitespace at the start and the end of *s* is included in the reported width. - """ - fontsize = prop.get_size_in_points() - - if ismath == 'TeX': - # todo: handle properties - return self.get_texmanager().get_text_width_height_descent( - s, fontsize, renderer=self) - - dpi = self.points_to_pixels(72) - if ismath: - dims = self._text2path.mathtext_parser.parse(s, dpi, prop) - return dims[0:3] # return width, height, descent - - flags = self._text2path._get_hinting_flag() - font = self._text2path._get_font(prop) - font.set_size(fontsize, dpi) - # the width and height of unrotated string - font.set_text(s, 0.0, flags=flags) - w, h = font.get_width_height() - d = font.get_descent() - w /= 64.0 # convert from subpixels - h /= 64.0 - d /= 64.0 - return w, h, d - - def flipy(self): - """ - Return whether y values increase from top to bottom. - - Note that this only affects drawing of texts. - """ - return True - - def get_canvas_width_height(self): - """Return the canvas width and height in display coords.""" - return 1, 1 - - def get_texmanager(self): - """Return the `.TexManager` instance.""" - if self._texmanager is None: - self._texmanager = TexManager() - return self._texmanager - - def new_gc(self): - """Return an instance of a `.GraphicsContextBase`.""" - return GraphicsContextBase() - - def points_to_pixels(self, points): - """ - Convert points to display units. - - You need to override this function (unless your backend - doesn't have a dpi, e.g., postscript or svg). Some imaging - systems assume some value for pixels per inch:: - - points to pixels = points * pixels_per_inch/72 * dpi/72 - - Parameters - ---------- - points : float or array-like - - Returns - ------- - Points converted to pixels - """ - return points - - def start_rasterizing(self): - """ - Switch to the raster renderer. - - Used by `.MixedModeRenderer`. - """ - - def stop_rasterizing(self): - """ - Switch back to the vector renderer and draw the contents of the raster - renderer as an image on the vector renderer. - - Used by `.MixedModeRenderer`. - """ - - def start_filter(self): - """ - Switch to a temporary renderer for image filtering effects. - - Currently only supported by the agg renderer. - """ - - def stop_filter(self, filter_func): - """ - Switch back to the original renderer. The contents of the temporary - renderer is processed with the *filter_func* and is drawn on the - original renderer as an image. - - Currently only supported by the agg renderer. - """ - - def _draw_disabled(self): - """ - Context manager to temporary disable drawing. - - This is used for getting the drawn size of Artists. This lets us - run the draw process to update any Python state but does not pay the - cost of the draw_XYZ calls on the canvas. 
- """ - no_ops = { - meth_name: lambda *args, **kwargs: None - for meth_name in dir(RendererBase) - if (meth_name.startswith("draw_") - or meth_name in ["open_group", "close_group"]) - } - - return _setattr_cm(self, **no_ops) - - -class GraphicsContextBase: - """An abstract base class that provides color, line styles, etc.""" - - def __init__(self): - self._alpha = 1.0 - self._forced_alpha = False # if True, _alpha overrides A from RGBA - self._antialiased = 1 # use 0, 1 not True, False for extension code - self._capstyle = CapStyle('butt') - self._cliprect = None - self._clippath = None - self._dashes = 0, None - self._joinstyle = JoinStyle('round') - self._linestyle = 'solid' - self._linewidth = 1 - self._rgb = (0.0, 0.0, 0.0, 1.0) - self._hatch = None - self._hatch_color = colors.to_rgba(rcParams['hatch.color']) - self._hatch_linewidth = rcParams['hatch.linewidth'] - self._url = None - self._gid = None - self._snap = None - self._sketch = None - - def copy_properties(self, gc): - """Copy properties from *gc* to self.""" - self._alpha = gc._alpha - self._forced_alpha = gc._forced_alpha - self._antialiased = gc._antialiased - self._capstyle = gc._capstyle - self._cliprect = gc._cliprect - self._clippath = gc._clippath - self._dashes = gc._dashes - self._joinstyle = gc._joinstyle - self._linestyle = gc._linestyle - self._linewidth = gc._linewidth - self._rgb = gc._rgb - self._hatch = gc._hatch - self._hatch_color = gc._hatch_color - self._hatch_linewidth = gc._hatch_linewidth - self._url = gc._url - self._gid = gc._gid - self._snap = gc._snap - self._sketch = gc._sketch - - def restore(self): - """ - Restore the graphics context from the stack - needed only - for backends that save graphics contexts on a stack. - """ - - def get_alpha(self): - """ - Return the alpha value used for blending - not supported on all - backends. - """ - return self._alpha - - def get_antialiased(self): - """Return whether the object should try to do antialiased rendering.""" - return self._antialiased - - def get_capstyle(self): - """Return the `.CapStyle`.""" - return self._capstyle.name - - def get_clip_rectangle(self): - """ - Return the clip rectangle as a `~matplotlib.transforms.Bbox` instance. - """ - return self._cliprect - - def get_clip_path(self): - """ - Return the clip path in the form (path, transform), where path - is a `~.path.Path` instance, and transform is - an affine transform to apply to the path before clipping. - """ - if self._clippath is not None: - tpath, tr = self._clippath.get_transformed_path_and_affine() - if np.all(np.isfinite(tpath.vertices)): - return tpath, tr - else: - _log.warning("Ill-defined clip_path detected. Returning None.") - return None, None - return None, None - - def get_dashes(self): - """ - Return the dash style as an (offset, dash-list) pair. - - See `.set_dashes` for details. - - Default value is (None, None). - """ - return self._dashes - - def get_forced_alpha(self): - """ - Return whether the value given by get_alpha() should be used to - override any other alpha-channel values. 
- """ - return self._forced_alpha - - def get_joinstyle(self): - """Return the `.JoinStyle`.""" - return self._joinstyle.name - - def get_linewidth(self): - """Return the line width in points.""" - return self._linewidth - - def get_rgb(self): - """Return a tuple of three or four floats from 0-1.""" - return self._rgb - - def get_url(self): - """Return a url if one is set, None otherwise.""" - return self._url - - def get_gid(self): - """Return the object identifier if one is set, None otherwise.""" - return self._gid - - def get_snap(self): - """ - Return the snap setting, which can be: - - * True: snap vertices to the nearest pixel center - * False: leave vertices as-is - * None: (auto) If the path contains only rectilinear line segments, - round to the nearest pixel center - """ - return self._snap - - def set_alpha(self, alpha): - """ - Set the alpha value used for blending - not supported on all backends. - - If ``alpha=None`` (the default), the alpha components of the - foreground and fill colors will be used to set their respective - transparencies (where applicable); otherwise, ``alpha`` will override - them. - """ - if alpha is not None: - self._alpha = alpha - self._forced_alpha = True - else: - self._alpha = 1.0 - self._forced_alpha = False - self.set_foreground(self._rgb, isRGBA=True) - - def set_antialiased(self, b): - """Set whether object should be drawn with antialiased rendering.""" - # Use ints to make life easier on extension code trying to read the gc. - self._antialiased = int(bool(b)) - - @_docstring.interpd - def set_capstyle(self, cs): - """ - Set how to draw endpoints of lines. - - Parameters - ---------- - cs : `.CapStyle` or %(CapStyle)s - """ - self._capstyle = CapStyle(cs) - - def set_clip_rectangle(self, rectangle): - """Set the clip rectangle to a `.Bbox` or None.""" - self._cliprect = rectangle - - def set_clip_path(self, path): - """Set the clip path to a `.TransformedPath` or None.""" - _api.check_isinstance((transforms.TransformedPath, None), path=path) - self._clippath = path - - def set_dashes(self, dash_offset, dash_list): - """ - Set the dash style for the gc. - - Parameters - ---------- - dash_offset : float - Distance, in points, into the dash pattern at which to - start the pattern. It is usually set to 0. - dash_list : array-like or None - The on-off sequence as points. None specifies a solid line. All - values must otherwise be non-negative (:math:`\\ge 0`). - - Notes - ----- - See p. 666 of the PostScript - `Language Reference - `_ - for more info. - """ - if dash_list is not None: - dl = np.asarray(dash_list) - if np.any(dl < 0.0): - raise ValueError( - "All values in the dash list must be non-negative") - if dl.size and not np.any(dl > 0.0): - raise ValueError( - 'At least one value in the dash list must be positive') - self._dashes = dash_offset, dash_list - - def set_foreground(self, fg, isRGBA=False): - """ - Set the foreground color. - - Parameters - ---------- - fg : color - isRGBA : bool - If *fg* is known to be an ``(r, g, b, a)`` tuple, *isRGBA* can be - set to True to improve performance. - """ - if self._forced_alpha and isRGBA: - self._rgb = fg[:3] + (self._alpha,) - elif self._forced_alpha: - self._rgb = colors.to_rgba(fg, self._alpha) - elif isRGBA: - self._rgb = fg - else: - self._rgb = colors.to_rgba(fg) - - @_docstring.interpd - def set_joinstyle(self, js): - """ - Set how to draw connections between line segments. 
- - Parameters - ---------- - js : `.JoinStyle` or %(JoinStyle)s - """ - self._joinstyle = JoinStyle(js) - - def set_linewidth(self, w): - """Set the linewidth in points.""" - self._linewidth = float(w) - - def set_url(self, url): - """Set the url for links in compatible backends.""" - self._url = url - - def set_gid(self, id): - """Set the id.""" - self._gid = id - - def set_snap(self, snap): - """ - Set the snap setting which may be: - - * True: snap vertices to the nearest pixel center - * False: leave vertices as-is - * None: (auto) If the path contains only rectilinear line segments, - round to the nearest pixel center - """ - self._snap = snap - - def set_hatch(self, hatch): - """Set the hatch style (for fills).""" - self._hatch = hatch - - def get_hatch(self): - """Get the current hatch style.""" - return self._hatch - - def get_hatch_path(self, density=6.0): - """Return a `.Path` for the current hatch.""" - hatch = self.get_hatch() - if hatch is None: - return None - return Path.hatch(hatch, density) - - def get_hatch_color(self): - """Get the hatch color.""" - return self._hatch_color - - def set_hatch_color(self, hatch_color): - """Set the hatch color.""" - self._hatch_color = hatch_color - - def get_hatch_linewidth(self): - """Get the hatch linewidth.""" - return self._hatch_linewidth - - def get_sketch_params(self): - """ - Return the sketch parameters for the artist. - - Returns - ------- - tuple or `None` - - A 3-tuple with the following elements: - - * ``scale``: The amplitude of the wiggle perpendicular to the - source line. - * ``length``: The length of the wiggle along the line. - * ``randomness``: The scale factor by which the length is - shrunken or expanded. - - May return `None` if no sketch parameters were set. - """ - return self._sketch - - def set_sketch_params(self, scale=None, length=None, randomness=None): - """ - Set the sketch parameters. - - Parameters - ---------- - scale : float, optional - The amplitude of the wiggle perpendicular to the source line, in - pixels. If scale is `None`, or not provided, no sketch filter will - be provided. - length : float, default: 128 - The length of the wiggle along the line, in pixels. - randomness : float, default: 16 - The scale factor by which the length is shrunken or expanded. - """ - self._sketch = ( - None if scale is None - else (scale, length or 128., randomness or 16.)) - - -class TimerBase: - """ - A base class for providing timer events, useful for things animations. - Backends need to implement a few specific methods in order to use their - own timing mechanisms so that the timer events are integrated into their - event loops. - - Subclasses must override the following methods: - - - ``_timer_start``: Backend-specific code for starting the timer. - - ``_timer_stop``: Backend-specific code for stopping the timer. - - Subclasses may additionally override the following methods: - - - ``_timer_set_single_shot``: Code for setting the timer to single shot - operating mode, if supported by the timer object. If not, the `Timer` - class itself will store the flag and the ``_on_timer`` method should be - overridden to support such behavior. - - - ``_timer_set_interval``: Code for setting the interval on the timer, if - there is a method for doing so on the timer object. - - - ``_on_timer``: The internal function that any timer object should call, - which will handle the task of running all callbacks that have been set. 
- """ - - def __init__(self, interval=None, callbacks=None): - """ - Parameters - ---------- - interval : int, default: 1000ms - The time between timer events in milliseconds. Will be stored as - ``timer.interval``. - callbacks : list[tuple[callable, tuple, dict]] - List of (func, args, kwargs) tuples that will be called upon timer - events. This list is accessible as ``timer.callbacks`` and can be - manipulated directly, or the functions `~.TimerBase.add_callback` - and `~.TimerBase.remove_callback` can be used. - """ - self.callbacks = [] if callbacks is None else callbacks.copy() - # Set .interval and not ._interval to go through the property setter. - self.interval = 1000 if interval is None else interval - self.single_shot = False - - def __del__(self): - """Need to stop timer and possibly disconnect timer.""" - self._timer_stop() - - def start(self, interval=None): - """ - Start the timer object. - - Parameters - ---------- - interval : int, optional - Timer interval in milliseconds; overrides a previously set interval - if provided. - """ - if interval is not None: - self.interval = interval - self._timer_start() - - def stop(self): - """Stop the timer.""" - self._timer_stop() - - def _timer_start(self): - pass - - def _timer_stop(self): - pass - - @property - def interval(self): - """The time between timer events, in milliseconds.""" - return self._interval - - @interval.setter - def interval(self, interval): - # Force to int since none of the backends actually support fractional - # milliseconds, and some error or give warnings. - # Some backends also fail when interval == 0, so ensure >= 1 msec - interval = max(int(interval), 1) - self._interval = interval - self._timer_set_interval() - - @property - def single_shot(self): - """Whether this timer should stop after a single run.""" - return self._single - - @single_shot.setter - def single_shot(self, ss): - self._single = ss - self._timer_set_single_shot() - - def add_callback(self, func, *args, **kwargs): - """ - Register *func* to be called by timer when the event fires. Any - additional arguments provided will be passed to *func*. - - This function returns *func*, which makes it possible to use it as a - decorator. - """ - self.callbacks.append((func, args, kwargs)) - return func - - def remove_callback(self, func, *args, **kwargs): - """ - Remove *func* from list of callbacks. - - *args* and *kwargs* are optional and used to distinguish between copies - of the same function registered to be called with different arguments. - This behavior is deprecated. In the future, ``*args, **kwargs`` won't - be considered anymore; to keep a specific callback removable by itself, - pass it to `add_callback` as a `functools.partial` object. - """ - if args or kwargs: - _api.warn_deprecated( - "3.1", message="In a future version, Timer.remove_callback " - "will not take *args, **kwargs anymore, but remove all " - "callbacks where the callable matches; to keep a specific " - "callback removable by itself, pass it to add_callback as a " - "functools.partial object.") - self.callbacks.remove((func, args, kwargs)) - else: - funcs = [c[0] for c in self.callbacks] - if func in funcs: - self.callbacks.pop(funcs.index(func)) - - def _timer_set_interval(self): - """Used to set interval on underlying timer object.""" - - def _timer_set_single_shot(self): - """Used to set single shot on underlying timer object.""" - - def _on_timer(self): - """ - Runs all function that have been registered as callbacks. 
Functions - can return False (or 0) if they should not be called any more. If there - are no callbacks, the timer is automatically stopped. - """ - for func, args, kwargs in self.callbacks: - ret = func(*args, **kwargs) - # docstring above explains why we use `if ret == 0` here, - # instead of `if not ret`. - # This will also catch `ret == False` as `False == 0` - # but does not annoy the linters - # https://docs.python.org/3/library/stdtypes.html#boolean-values - if ret == 0: - self.callbacks.remove((func, args, kwargs)) - - if len(self.callbacks) == 0: - self.stop() - - -class Event: - """ - A Matplotlib event. - - The following attributes are defined and shown with their default values. - Subclasses may define additional attributes. - - Attributes - ---------- - name : str - The event name. - canvas : `FigureCanvasBase` - The backend-specific canvas instance generating the event. - guiEvent - The GUI event that triggered the Matplotlib event. - """ - - def __init__(self, name, canvas, guiEvent=None): - self.name = name - self.canvas = canvas - self._guiEvent = guiEvent - self._guiEvent_deleted = False - - def _process(self): - """Process this event on ``self.canvas``, then unset ``guiEvent``.""" - self.canvas.callbacks.process(self.name, self) - self._guiEvent_deleted = True - - @property - def guiEvent(self): - # After deprecation elapses: remove _guiEvent_deleted; make guiEvent a plain - # attribute set to None by _process. - if self._guiEvent_deleted: - _api.warn_deprecated( - "3.8", message="Accessing guiEvent outside of the original GUI event " - "handler is unsafe and deprecated since %(since)s; in the future, the " - "attribute will be set to None after quitting the event handler. You " - "may separately record the value of the guiEvent attribute at your own " - "risk.") - return self._guiEvent - - -class DrawEvent(Event): - """ - An event triggered by a draw operation on the canvas. - - In most backends, callbacks subscribed to this event will be fired after - the rendering is complete but before the screen is updated. Any extra - artists drawn to the canvas's renderer will be reflected without an - explicit call to ``blit``. - - .. warning:: - - Calling ``canvas.draw`` and ``canvas.blit`` in these callbacks may - not be safe with all backends and may cause infinite recursion. - - A DrawEvent has a number of special attributes in addition to those defined - by the parent `Event` class. - - Attributes - ---------- - renderer : `RendererBase` - The renderer for the draw event. - """ - def __init__(self, name, canvas, renderer): - super().__init__(name, canvas) - self.renderer = renderer - - -class ResizeEvent(Event): - """ - An event triggered by a canvas resize. - - A ResizeEvent has a number of special attributes in addition to those - defined by the parent `Event` class. - - Attributes - ---------- - width : int - Width of the canvas in pixels. - height : int - Height of the canvas in pixels. - """ - - def __init__(self, name, canvas): - super().__init__(name, canvas) - self.width, self.height = canvas.get_width_height() - - -class CloseEvent(Event): - """An event triggered by a figure being closed.""" - - -class LocationEvent(Event): - """ - An event that has a screen location. - - A LocationEvent has a number of special attributes in addition to those - defined by the parent `Event` class. - - Attributes - ---------- - x, y : int or None - Event location in pixels from bottom left of canvas. 
- inaxes : `~matplotlib.axes.Axes` or None - The `~.axes.Axes` instance over which the mouse is, if any. - xdata, ydata : float or None - Data coordinates of the mouse within *inaxes*, or *None* if the mouse - is not over an Axes. - modifiers : frozenset - The keyboard modifiers currently being pressed (except for KeyEvent). - """ - - # Fully delete all occurrences of lastevent after deprecation elapses. - _lastevent = None - lastevent = _api.deprecated("3.8")( - _api.classproperty(lambda cls: cls._lastevent)) - _last_axes_ref = None - - def __init__(self, name, canvas, x, y, guiEvent=None, *, modifiers=None): - super().__init__(name, canvas, guiEvent=guiEvent) - # x position - pixels from left of canvas - self.x = int(x) if x is not None else x - # y position - pixels from right of canvas - self.y = int(y) if y is not None else y - self.inaxes = None # the Axes instance the mouse is over - self.xdata = None # x coord of mouse in data coords - self.ydata = None # y coord of mouse in data coords - self.modifiers = frozenset(modifiers if modifiers is not None else []) - - if x is None or y is None: - # cannot check if event was in Axes if no (x, y) info - return - - self._set_inaxes(self.canvas.inaxes((x, y)) - if self.canvas.mouse_grabber is None else - self.canvas.mouse_grabber, - (x, y)) - - # Splitting _set_inaxes out is useful for the axes_leave_event handler: it - # needs to generate synthetic LocationEvents with manually-set inaxes. In - # that latter case, xy has already been cast to int so it can directly be - # read from self.x, self.y; in the normal case, however, it is more - # accurate to pass the untruncated float x, y values passed to the ctor. - - def _set_inaxes(self, inaxes, xy=None): - self.inaxes = inaxes - if inaxes is not None: - try: - self.xdata, self.ydata = inaxes.transData.inverted().transform( - xy if xy is not None else (self.x, self.y)) - except ValueError: - pass - - -class MouseButton(IntEnum): - LEFT = 1 - MIDDLE = 2 - RIGHT = 3 - BACK = 8 - FORWARD = 9 - - -class MouseEvent(LocationEvent): - """ - A mouse event ('button_press_event', 'button_release_event', \ -'scroll_event', 'motion_notify_event'). - - A MouseEvent has a number of special attributes in addition to those - defined by the parent `Event` and `LocationEvent` classes. - - Attributes - ---------- - button : None or `MouseButton` or {'up', 'down'} - The button pressed. 'up' and 'down' are used for scroll events. - - Note that LEFT and RIGHT actually refer to the "primary" and - "secondary" buttons, i.e. if the user inverts their left and right - buttons ("left-handed setting") then the LEFT button will be the one - physically on the right. - - If this is unset, *name* is "scroll_event", and *step* is nonzero, then - this will be set to "up" or "down" depending on the sign of *step*. - - key : None or str - The key pressed when the mouse event triggered, e.g. 'shift'. - See `KeyEvent`. - - .. warning:: - This key is currently obtained from the last 'key_press_event' or - 'key_release_event' that occurred within the canvas. Thus, if the - last change of keyboard state occurred while the canvas did not have - focus, this attribute will be wrong. On the other hand, the - ``modifiers`` attribute should always be correct, but it can only - report on modifier keys. - - step : float - The number of scroll steps (positive for 'up', negative for 'down'). - This applies only to 'scroll_event' and defaults to 0 otherwise. - - dblclick : bool - Whether the event is a double-click. 
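# MouseButton above is an IntEnum, so handlers can compare against the named
# members or the raw integers interchangeably; a sketch of the usual check in
# a "button_press_event" handler (the handler itself is illustrative only).
from matplotlib.backend_bases import MouseButton

def on_press(event):
    if event.button is MouseButton.LEFT:
        print("left click at", event.xdata, event.ydata)
    elif event.button is MouseButton.RIGHT:
        print("context menu at", event.x, event.y)

print(MouseButton.LEFT == 1, int(MouseButton.BACK))   # -> True 8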
This applies only to - 'button_press_event' and is False otherwise. In particular, it's - not used in 'button_release_event'. - - Examples - -------- - :: - - def on_press(event): - print('you pressed', event.button, event.xdata, event.ydata) - - cid = fig.canvas.mpl_connect('button_press_event', on_press) - """ - - def __init__(self, name, canvas, x, y, button=None, key=None, - step=0, dblclick=False, guiEvent=None, *, modifiers=None): - super().__init__( - name, canvas, x, y, guiEvent=guiEvent, modifiers=modifiers) - if button in MouseButton.__members__.values(): - button = MouseButton(button) - if name == "scroll_event" and button is None: - if step > 0: - button = "up" - elif step < 0: - button = "down" - self.button = button - self.key = key - self.step = step - self.dblclick = dblclick - - def __str__(self): - return (f"{self.name}: " - f"xy=({self.x}, {self.y}) xydata=({self.xdata}, {self.ydata}) " - f"button={self.button} dblclick={self.dblclick} " - f"inaxes={self.inaxes}") - - -class PickEvent(Event): - """ - A pick event. - - This event is fired when the user picks a location on the canvas - sufficiently close to an artist that has been made pickable with - `.Artist.set_picker`. - - A PickEvent has a number of special attributes in addition to those defined - by the parent `Event` class. - - Attributes - ---------- - mouseevent : `MouseEvent` - The mouse event that generated the pick. - artist : `~matplotlib.artist.Artist` - The picked artist. Note that artists are not pickable by default - (see `.Artist.set_picker`). - other - Additional attributes may be present depending on the type of the - picked object; e.g., a `.Line2D` pick may define different extra - attributes than a `.PatchCollection` pick. - - Examples - -------- - Bind a function ``on_pick()`` to pick events, that prints the coordinates - of the picked data point:: - - ax.plot(np.rand(100), 'o', picker=5) # 5 points tolerance - - def on_pick(event): - line = event.artist - xdata, ydata = line.get_data() - ind = event.ind - print(f'on pick line: {xdata[ind]:.3f}, {ydata[ind]:.3f}') - - cid = fig.canvas.mpl_connect('pick_event', on_pick) - """ - - def __init__(self, name, canvas, mouseevent, artist, - guiEvent=None, **kwargs): - if guiEvent is None: - guiEvent = mouseevent.guiEvent - super().__init__(name, canvas, guiEvent) - self.mouseevent = mouseevent - self.artist = artist - self.__dict__.update(kwargs) - - -class KeyEvent(LocationEvent): - """ - A key event (key press, key release). - - A KeyEvent has a number of special attributes in addition to those defined - by the parent `Event` and `LocationEvent` classes. - - Attributes - ---------- - key : None or str - The key(s) pressed. Could be *None*, a single case sensitive Unicode - character ("g", "G", "#", etc.), a special key ("control", "shift", - "f1", "up", etc.) or a combination of the above (e.g., "ctrl+alt+g", - "ctrl+alt+G"). - - Notes - ----- - Modifier keys will be prefixed to the pressed key and will be in the order - "ctrl", "alt", "super". The exception to this rule is when the pressed key - is itself a modifier key, therefore "ctrl+alt" and "alt+control" can both - be valid key values. - - Examples - -------- - :: - - def on_key(event): - print('you pressed', event.key, event.xdata, event.ydata) - - cid = fig.canvas.mpl_connect('key_press_event', on_key) - """ - - def __init__(self, name, canvas, key, x=0, y=0, guiEvent=None): - super().__init__(name, canvas, x, y, guiEvent=guiEvent) - self.key = key - - -# Default callback for key events. 
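# A runnable variant of the pick example in the PickEvent docstring above
# (the docstring's ``np.rand`` is presumably meant to be ``np.random.rand``);
# a scalar index is used here so the f-string formatting is well defined.
import matplotlib
matplotlib.use("agg")
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot(np.random.rand(100), 'o', picker=5)   # 5 points tolerance

def on_pick(event):
    xdata, ydata = event.artist.get_data()
    ind = event.ind[0]
    print(f'on pick line: {xdata[ind]:.3f}, {ydata[ind]:.3f}')

cid = fig.canvas.mpl_connect('pick_event', on_pick)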
-def _key_handler(event): - # Dead reckoning of key. - if event.name == "key_press_event": - event.canvas._key = event.key - elif event.name == "key_release_event": - event.canvas._key = None - - -# Default callback for mouse events. -def _mouse_handler(event): - # Dead-reckoning of button and key. - if event.name == "button_press_event": - event.canvas._button = event.button - elif event.name == "button_release_event": - event.canvas._button = None - elif event.name == "motion_notify_event" and event.button is None: - event.button = event.canvas._button - if event.key is None: - event.key = event.canvas._key - # Emit axes_enter/axes_leave. - if event.name == "motion_notify_event": - last_ref = LocationEvent._last_axes_ref - last_axes = last_ref() if last_ref else None - if last_axes != event.inaxes: - if last_axes is not None: - # Create a synthetic LocationEvent for the axes_leave_event. - # Its inaxes attribute needs to be manually set (because the - # cursor is actually *out* of that axes at that point); this is - # done with the internal _set_inaxes method which ensures that - # the xdata and ydata attributes are also correct. - try: - leave_event = LocationEvent( - "axes_leave_event", last_axes.figure.canvas, - event.x, event.y, event.guiEvent, - modifiers=event.modifiers) - leave_event._set_inaxes(last_axes) - last_axes.figure.canvas.callbacks.process( - "axes_leave_event", leave_event) - except Exception: - pass # The last canvas may already have been torn down. - if event.inaxes is not None: - event.canvas.callbacks.process("axes_enter_event", event) - LocationEvent._last_axes_ref = ( - weakref.ref(event.inaxes) if event.inaxes else None) - LocationEvent._lastevent = ( - None if event.name == "figure_leave_event" else event) - - -def _get_renderer(figure, print_method=None): - """ - Get the renderer that would be used to save a `.Figure`. - - If you need a renderer without any active draw methods use - renderer._draw_disabled to temporary patch them out at your call site. - """ - # This is implemented by triggering a draw, then immediately jumping out of - # Figure.draw() by raising an exception. - - class Done(Exception): - pass - - def _draw(renderer): raise Done(renderer) - - with cbook._setattr_cm(figure, draw=_draw), ExitStack() as stack: - if print_method is None: - fmt = figure.canvas.get_default_filetype() - # Even for a canvas' default output type, a canvas switch may be - # needed, e.g. for FigureCanvasBase. - print_method = stack.enter_context( - figure.canvas._switch_canvas_and_return_print_method(fmt)) - try: - print_method(io.BytesIO()) - except Done as exc: - renderer, = exc.args - return renderer - else: - raise RuntimeError(f"{print_method} did not call Figure.draw, so " - f"no renderer is available") - - -def _no_output_draw(figure): - # _no_output_draw was promoted to the figure level, but - # keep this here in case someone was calling it... - figure.draw_without_rendering() - - -def _is_non_interactive_terminal_ipython(ip): - """ - Return whether we are in a terminal IPython, but non interactive. - - When in _terminal_ IPython, ip.parent will have and `interact` attribute, - if this attribute is False we do not setup eventloop integration as the - user will _not_ interact with IPython. In all other case (ZMQKernel, or is - interactive), we do. - """ - return (hasattr(ip, 'parent') - and (ip.parent is not None) - and getattr(ip.parent, 'interact', None) is False) - - -class FigureCanvasBase: - """ - The canvas the figure renders into. 
- - Attributes - ---------- - figure : `~matplotlib.figure.Figure` - A high-level figure instance. - """ - - # Set to one of {"qt", "gtk3", "gtk4", "wx", "tk", "macosx"} if an - # interactive framework is required, or None otherwise. - required_interactive_framework = None - - # The manager class instantiated by new_manager. - # (This is defined as a classproperty because the manager class is - # currently defined *after* the canvas class, but one could also assign - # ``FigureCanvasBase.manager_class = FigureManagerBase`` - # after defining both classes.) - manager_class = _api.classproperty(lambda cls: FigureManagerBase) - - events = [ - 'resize_event', - 'draw_event', - 'key_press_event', - 'key_release_event', - 'button_press_event', - 'button_release_event', - 'scroll_event', - 'motion_notify_event', - 'pick_event', - 'figure_enter_event', - 'figure_leave_event', - 'axes_enter_event', - 'axes_leave_event', - 'close_event' - ] - - fixed_dpi = None - - filetypes = _default_filetypes - - @_api.classproperty - def supports_blit(cls): - """If this Canvas sub-class supports blitting.""" - return (hasattr(cls, "copy_from_bbox") - and hasattr(cls, "restore_region")) - - def __init__(self, figure=None): - from matplotlib.figure import Figure - self._fix_ipython_backend2gui() - self._is_idle_drawing = True - self._is_saving = False - if figure is None: - figure = Figure() - figure.set_canvas(self) - self.figure = figure - self.manager = None - self.widgetlock = widgets.LockDraw() - self._button = None # the button pressed - self._key = None # the key pressed - self.mouse_grabber = None # the Axes currently grabbing mouse - self.toolbar = None # NavigationToolbar2 will set me - self._is_idle_drawing = False - # We don't want to scale up the figure DPI more than once. - figure._original_dpi = figure.dpi - self._device_pixel_ratio = 1 - super().__init__() # Typically the GUI widget init (if any). - - callbacks = property(lambda self: self.figure._canvas_callbacks) - button_pick_id = property(lambda self: self.figure._button_pick_id) - scroll_pick_id = property(lambda self: self.figure._scroll_pick_id) - - @classmethod - @functools.cache - def _fix_ipython_backend2gui(cls): - # Fix hard-coded module -> toolkit mapping in IPython (used for - # `ipython --auto`). This cannot be done at import time due to - # ordering issues, so we do it when creating a canvas, and should only - # be done once per class (hence the `cache`). - if sys.modules.get("IPython") is None: - return - import IPython - ip = IPython.get_ipython() - if not ip: - return - from IPython.core import pylabtools as pt - if (not hasattr(pt, "backend2gui") - or not hasattr(ip, "enable_matplotlib")): - # In case we ever move the patch to IPython and remove these APIs, - # don't break on our side. - return - backend2gui_rif = { - "qt": "qt", - "gtk3": "gtk3", - "gtk4": "gtk4", - "wx": "wx", - "macosx": "osx", - }.get(cls.required_interactive_framework) - if backend2gui_rif: - if _is_non_interactive_terminal_ipython(ip): - ip.enable_gui(backend2gui_rif) - - @classmethod - def new_manager(cls, figure, num): - """ - Create a new figure manager for *figure*, using this canvas class. - - Notes - ----- - This method should not be reimplemented in subclasses. If - custom manager creation logic is needed, please reimplement - ``FigureManager.create_with_canvas``. 
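# The events list above enumerates the signal names accepted by mpl_connect.
# A short sketch: subscribe to "draw_event", which fires after rendering
# completes (here triggered explicitly on the Agg canvas).
import matplotlib
matplotlib.use("agg")
import matplotlib.pyplot as plt

fig = plt.figure()

def on_draw(event):
    print("drawn with", type(event.renderer).__name__)   # e.g. RendererAgg

cid = fig.canvas.mpl_connect("draw_event", on_draw)
fig.canvas.draw()
fig.canvas.mpl_disconnect(cid)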
- """ - return cls.manager_class.create_with_canvas(cls, figure, num) - - @contextmanager - def _idle_draw_cntx(self): - self._is_idle_drawing = True - try: - yield - finally: - self._is_idle_drawing = False - - def is_saving(self): - """ - Return whether the renderer is in the process of saving - to a file, rather than rendering for an on-screen buffer. - """ - return self._is_saving - - def blit(self, bbox=None): - """Blit the canvas in bbox (default entire canvas).""" - - def inaxes(self, xy): - """ - Return the topmost visible `~.axes.Axes` containing the point *xy*. - - Parameters - ---------- - xy : (float, float) - (x, y) pixel positions from left/bottom of the canvas. - - Returns - ------- - `~matplotlib.axes.Axes` or None - The topmost visible Axes containing the point, or None if there - is no Axes at the point. - """ - axes_list = [a for a in self.figure.get_axes() - if a.patch.contains_point(xy) and a.get_visible()] - if axes_list: - axes = cbook._topmost_artist(axes_list) - else: - axes = None - - return axes - - def grab_mouse(self, ax): - """ - Set the child `~.axes.Axes` which is grabbing the mouse events. - - Usually called by the widgets themselves. It is an error to call this - if the mouse is already grabbed by another Axes. - """ - if self.mouse_grabber not in (None, ax): - raise RuntimeError("Another Axes already grabs mouse input") - self.mouse_grabber = ax - - def release_mouse(self, ax): - """ - Release the mouse grab held by the `~.axes.Axes` *ax*. - - Usually called by the widgets. It is ok to call this even if *ax* - doesn't have the mouse grab currently. - """ - if self.mouse_grabber is ax: - self.mouse_grabber = None - - def set_cursor(self, cursor): - """ - Set the current cursor. - - This may have no effect if the backend does not display anything. - - If required by the backend, this method should trigger an update in - the backend event loop after the cursor is set, as this method may be - called e.g. before a long-running task during which the GUI is not - updated. - - Parameters - ---------- - cursor : `.Cursors` - The cursor to display over the canvas. Note: some backends may - change the cursor for the entire window. - """ - - def draw(self, *args, **kwargs): - """ - Render the `.Figure`. - - This method must walk the artist tree, even if no output is produced, - because it triggers deferred work that users may want to access - before saving output to disk. For example computing limits, - auto-limits, and tick values. - """ - - def draw_idle(self, *args, **kwargs): - """ - Request a widget redraw once control returns to the GUI event loop. - - Even if multiple calls to `draw_idle` occur before control returns - to the GUI event loop, the figure will only be rendered once. - - Notes - ----- - Backends may choose to override the method and implement their own - strategy to prevent multiple renderings. - - """ - if not self._is_idle_drawing: - with self._idle_draw_cntx(): - self.draw(*args, **kwargs) - - @property - def device_pixel_ratio(self): - """ - The ratio of physical to logical pixels used for the canvas on screen. - - By default, this is 1, meaning physical and logical pixels are the same - size. Subclasses that support High DPI screens may set this property to - indicate that said ratio is different. All Matplotlib interaction, - unless working directly with the canvas, remains in logical pixels. 
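# Sketch of the hit-testing helper above: inaxes takes pixel coordinates
# (from the bottom-left of the canvas) and returns the topmost visible Axes
# under the point, or None.
import matplotlib
matplotlib.use("agg")
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
cx, cy = fig.transFigure.transform((0.5, 0.5))   # centre of the figure, in pixels
print(fig.canvas.inaxes((cx, cy)) is ax)         # -> True, the point is inside ax
print(fig.canvas.inaxes((1, 1)))                 # -> None, outside every Axes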
- - """ - return self._device_pixel_ratio - - def _set_device_pixel_ratio(self, ratio): - """ - Set the ratio of physical to logical pixels used for the canvas. - - Subclasses that support High DPI screens can set this property to - indicate that said ratio is different. The canvas itself will be - created at the physical size, while the client side will use the - logical size. Thus the DPI of the Figure will change to be scaled by - this ratio. Implementations that support High DPI screens should use - physical pixels for events so that transforms back to Axes space are - correct. - - By default, this is 1, meaning physical and logical pixels are the same - size. - - Parameters - ---------- - ratio : float - The ratio of logical to physical pixels used for the canvas. - - Returns - ------- - bool - Whether the ratio has changed. Backends may interpret this as a - signal to resize the window, repaint the canvas, or change any - other relevant properties. - """ - if self._device_pixel_ratio == ratio: - return False - # In cases with mixed resolution displays, we need to be careful if the - # device pixel ratio changes - in this case we need to resize the - # canvas accordingly. Some backends provide events that indicate a - # change in DPI, but those that don't will update this before drawing. - dpi = ratio * self.figure._original_dpi - self.figure._set_dpi(dpi, forward=False) - self._device_pixel_ratio = ratio - return True - - def get_width_height(self, *, physical=False): - """ - Return the figure width and height in integral points or pixels. - - When the figure is used on High DPI screens (and the backend supports - it), the truncation to integers occurs after scaling by the device - pixel ratio. - - Parameters - ---------- - physical : bool, default: False - Whether to return true physical pixels or logical pixels. Physical - pixels may be used by backends that support HiDPI, but still - configure the canvas using its actual size. - - Returns - ------- - width, height : int - The size of the figure, in points or pixels, depending on the - backend. - """ - return tuple(int(size / (1 if physical else self.device_pixel_ratio)) - for size in self.figure.bbox.max) - - @classmethod - def get_supported_filetypes(cls): - """Return dict of savefig file formats supported by this backend.""" - return cls.filetypes - - @classmethod - def get_supported_filetypes_grouped(cls): - """ - Return a dict of savefig file formats supported by this backend, - where the keys are a file type name, such as 'Joint Photographic - Experts Group', and the values are a list of filename extensions used - for that filetype, such as ['jpg', 'jpeg']. - """ - groupings = {} - for ext, name in cls.filetypes.items(): - groupings.setdefault(name, []).append(ext) - groupings[name].sort() - return groupings - - @contextmanager - def _switch_canvas_and_return_print_method(self, fmt, backend=None): - """ - Context manager temporarily setting the canvas for saving the figure:: - - with canvas._switch_canvas_and_return_print_method(fmt, backend) \\ - as print_method: - # ``print_method`` is a suitable ``print_{fmt}`` method, and - # the figure's canvas is temporarily switched to the method's - # canvas within the with... block. ``print_method`` is also - # wrapped to suppress extra kwargs passed by ``print_figure``. 
- - Parameters - ---------- - fmt : str - If *backend* is None, then determine a suitable canvas class for - saving to format *fmt* -- either the current canvas class, if it - supports *fmt*, or whatever `get_registered_canvas_class` returns; - switch the figure canvas to that canvas class. - backend : str or None, default: None - If not None, switch the figure canvas to the ``FigureCanvas`` class - of the given backend. - """ - canvas = None - if backend is not None: - # Return a specific canvas class, if requested. - canvas_class = ( - importlib.import_module(cbook._backend_module_name(backend)) - .FigureCanvas) - if not hasattr(canvas_class, f"print_{fmt}"): - raise ValueError( - f"The {backend!r} backend does not support {fmt} output") - canvas = canvas_class(self.figure) - elif hasattr(self, f"print_{fmt}"): - # Return the current canvas if it supports the requested format. - canvas = self - else: - # Return a default canvas for the requested format, if it exists. - canvas_class = get_registered_canvas_class(fmt) - if canvas_class is None: - raise ValueError( - "Format {!r} is not supported (supported formats: {})".format( - fmt, ", ".join(sorted(self.get_supported_filetypes())))) - canvas = canvas_class(self.figure) - canvas._is_saving = self._is_saving - meth = getattr(canvas, f"print_{fmt}") - mod = (meth.func.__module__ - if hasattr(meth, "func") # partialmethod, e.g. backend_wx. - else meth.__module__) - if mod.startswith(("matplotlib.", "mpl_toolkits.")): - optional_kws = { # Passed by print_figure for other renderers. - "dpi", "facecolor", "edgecolor", "orientation", - "bbox_inches_restore"} - skip = optional_kws - {*inspect.signature(meth).parameters} - print_method = functools.wraps(meth)(lambda *args, **kwargs: meth( - *args, **{k: v for k, v in kwargs.items() if k not in skip})) - else: # Let third-parties do as they see fit. - print_method = meth - try: - yield print_method - finally: - self.figure.canvas = self - - def print_figure( - self, filename, dpi=None, facecolor=None, edgecolor=None, - orientation='portrait', format=None, *, - bbox_inches=None, pad_inches=None, bbox_extra_artists=None, - backend=None, **kwargs): - """ - Render the figure to hardcopy. Set the figure patch face and edge - colors. This is useful because some of the GUIs have a gray figure - face color background and you'll probably want to override this on - hardcopy. - - Parameters - ---------- - filename : str or path-like or file-like - The file where the figure is saved. - - dpi : float, default: :rc:`savefig.dpi` - The dots per inch to save the figure in. - - facecolor : color or 'auto', default: :rc:`savefig.facecolor` - The facecolor of the figure. If 'auto', use the current figure - facecolor. - - edgecolor : color or 'auto', default: :rc:`savefig.edgecolor` - The edgecolor of the figure. If 'auto', use the current figure - edgecolor. - - orientation : {'landscape', 'portrait'}, default: 'portrait' - Only currently applies to PostScript printing. - - format : str, optional - Force a specific file format. If not given, the format is inferred - from the *filename* extension, and if that fails from - :rc:`savefig.format`. - - bbox_inches : 'tight' or `.Bbox`, default: :rc:`savefig.bbox` - Bounding box in inches: only the given portion of the figure is - saved. If 'tight', try to figure out the tight bbox of the figure. - - pad_inches : float or 'layout', default: :rc:`savefig.pad_inches` - Amount of padding in inches around the figure when bbox_inches is - 'tight'. 
If 'layout' use the padding from the constrained or - compressed layout engine; ignored if one of those engines is not in - use. - - bbox_extra_artists : list of `~matplotlib.artist.Artist`, optional - A list of extra artists that will be considered when the - tight bbox is calculated. - - backend : str, optional - Use a non-default backend to render the file, e.g. to render a - png file with the "cairo" backend rather than the default "agg", - or a pdf file with the "pgf" backend rather than the default - "pdf". Note that the default backend is normally sufficient. See - :ref:`the-builtin-backends` for a list of valid backends for each - file format. Custom backends can be referenced as "module://...". - """ - if format is None: - # get format from filename, or from backend's default filetype - if isinstance(filename, os.PathLike): - filename = os.fspath(filename) - if isinstance(filename, str): - format = os.path.splitext(filename)[1][1:] - if format is None or format == '': - format = self.get_default_filetype() - if isinstance(filename, str): - filename = filename.rstrip('.') + '.' + format - format = format.lower() - - if dpi is None: - dpi = rcParams['savefig.dpi'] - if dpi == 'figure': - dpi = getattr(self.figure, '_original_dpi', self.figure.dpi) - - if kwargs.get("papertype") == 'auto': - # When deprecation elapses, remove backend_ps._get_papertype & its callers. - _api.warn_deprecated( - "3.8", name="papertype='auto'", addendum="Pass an explicit paper type, " - "'figure', or omit the *papertype* argument entirely.") - - # Remove the figure manager, if any, to avoid resizing the GUI widget. - with cbook._setattr_cm(self, manager=None), \ - self._switch_canvas_and_return_print_method(format, backend) \ - as print_method, \ - cbook._setattr_cm(self.figure, dpi=dpi), \ - cbook._setattr_cm(self.figure.canvas, _device_pixel_ratio=1), \ - cbook._setattr_cm(self.figure.canvas, _is_saving=True), \ - ExitStack() as stack: - - for prop in ["facecolor", "edgecolor"]: - color = locals()[prop] - if color is None: - color = rcParams[f"savefig.{prop}"] - if not cbook._str_equal(color, "auto"): - stack.enter_context(self.figure._cm_set(**{prop: color})) - - if bbox_inches is None: - bbox_inches = rcParams['savefig.bbox'] - - layout_engine = self.figure.get_layout_engine() - if layout_engine is not None or bbox_inches == "tight": - # we need to trigger a draw before printing to make sure - # CL works. 
"tight" also needs a draw to get the right - # locations: - renderer = _get_renderer( - self.figure, - functools.partial( - print_method, orientation=orientation) - ) - # we do this instead of `self.figure.draw_without_rendering` - # so that we can inject the orientation - with getattr(renderer, "_draw_disabled", nullcontext)(): - self.figure.draw(renderer) - if bbox_inches: - if bbox_inches == "tight": - bbox_inches = self.figure.get_tightbbox( - renderer, bbox_extra_artists=bbox_extra_artists) - if (isinstance(layout_engine, ConstrainedLayoutEngine) and - pad_inches == "layout"): - h_pad = layout_engine.get()["h_pad"] - w_pad = layout_engine.get()["w_pad"] - else: - if pad_inches in [None, "layout"]: - pad_inches = rcParams['savefig.pad_inches'] - h_pad = w_pad = pad_inches - bbox_inches = bbox_inches.padded(w_pad, h_pad) - - # call adjust_bbox to save only the given area - restore_bbox = _tight_bbox.adjust_bbox( - self.figure, bbox_inches, self.figure.canvas.fixed_dpi) - - _bbox_inches_restore = (bbox_inches, restore_bbox) - else: - _bbox_inches_restore = None - - # we have already done layout above, so turn it off: - stack.enter_context(self.figure._cm_set(layout_engine='none')) - try: - # _get_renderer may change the figure dpi (as vector formats - # force the figure dpi to 72), so we need to set it again here. - with cbook._setattr_cm(self.figure, dpi=dpi): - result = print_method( - filename, - facecolor=facecolor, - edgecolor=edgecolor, - orientation=orientation, - bbox_inches_restore=_bbox_inches_restore, - **kwargs) - finally: - if bbox_inches and restore_bbox: - restore_bbox() - - return result - - @classmethod - def get_default_filetype(cls): - """ - Return the default savefig file format as specified in - :rc:`savefig.format`. - - The returned string does not include a period. This method is - overridden in backends that only support a single file type. - """ - return rcParams['savefig.format'] - - def get_default_filename(self): - """ - Return a string, which includes extension, suitable for use as - a default filename. - """ - basename = (self.manager.get_window_title() if self.manager is not None - else '') - basename = (basename or 'image').replace(' ', '_') - filetype = self.get_default_filetype() - filename = basename + '.' + filetype - return filename - - @_api.deprecated("3.8") - def switch_backends(self, FigureCanvasClass): - """ - Instantiate an instance of FigureCanvasClass - - This is used for backend switching, e.g., to instantiate a - FigureCanvasPS from a FigureCanvasGTK. Note, deep copying is - not done, so any changes to one of the instances (e.g., setting - figure size or line props), will be reflected in the other - """ - newCanvas = FigureCanvasClass(self.figure) - newCanvas._is_saving = self._is_saving - return newCanvas - - def mpl_connect(self, s, func): - """ - Bind function *func* to event *s*. - - Parameters - ---------- - s : str - One of the following events ids: - - - 'button_press_event' - - 'button_release_event' - - 'draw_event' - - 'key_press_event' - - 'key_release_event' - - 'motion_notify_event' - - 'pick_event' - - 'resize_event' - - 'scroll_event' - - 'figure_enter_event', - - 'figure_leave_event', - - 'axes_enter_event', - - 'axes_leave_event' - - 'close_event'. 
- - func : callable - The callback function to be executed, which must have the - signature:: - - def func(event: Event) -> Any - - For the location events (button and key press/release), if the - mouse is over the Axes, the ``inaxes`` attribute of the event will - be set to the `~matplotlib.axes.Axes` the event occurs is over, and - additionally, the variables ``xdata`` and ``ydata`` attributes will - be set to the mouse location in data coordinates. See `.KeyEvent` - and `.MouseEvent` for more info. - - .. note:: - - If func is a method, this only stores a weak reference to the - method. Thus, the figure does not influence the lifetime of - the associated object. Usually, you want to make sure that the - object is kept alive throughout the lifetime of the figure by - holding a reference to it. - - Returns - ------- - cid - A connection id that can be used with - `.FigureCanvasBase.mpl_disconnect`. - - Examples - -------- - :: - - def on_press(event): - print('you pressed', event.button, event.xdata, event.ydata) - - cid = canvas.mpl_connect('button_press_event', on_press) - """ - - return self.callbacks.connect(s, func) - - def mpl_disconnect(self, cid): - """ - Disconnect the callback with id *cid*. - - Examples - -------- - :: - - cid = canvas.mpl_connect('button_press_event', on_press) - # ... later - canvas.mpl_disconnect(cid) - """ - self.callbacks.disconnect(cid) - - # Internal subclasses can override _timer_cls instead of new_timer, though - # this is not a public API for third-party subclasses. - _timer_cls = TimerBase - - def new_timer(self, interval=None, callbacks=None): - """ - Create a new backend-specific subclass of `.Timer`. - - This is useful for getting periodic events through the backend's native - event loop. Implemented only for backends with GUIs. - - Parameters - ---------- - interval : int - Timer interval in milliseconds. - - callbacks : list[tuple[callable, tuple, dict]] - Sequence of (func, args, kwargs) where ``func(*args, **kwargs)`` - will be executed by the timer every *interval*. - - Callbacks which return ``False`` or ``0`` will be removed from the - timer. - - Examples - -------- - >>> timer = fig.canvas.new_timer(callbacks=[(f1, (1,), {'a': 3})]) - """ - return self._timer_cls(interval=interval, callbacks=callbacks) - - def flush_events(self): - """ - Flush the GUI events for the figure. - - Interactive backends need to reimplement this method. - """ - - def start_event_loop(self, timeout=0): - """ - Start a blocking event loop. - - Such an event loop is used by interactive functions, such as - `~.Figure.ginput` and `~.Figure.waitforbuttonpress`, to wait for - events. - - The event loop blocks until a callback function triggers - `stop_event_loop`, or *timeout* is reached. - - If *timeout* is 0 or negative, never timeout. - - Only interactive backends need to reimplement this method and it relies - on `flush_events` being properly implemented. - - Interactive backends should implement this in a more native way. - """ - if timeout <= 0: - timeout = np.inf - timestep = 0.01 - counter = 0 - self._looping = True - while self._looping and counter * timestep < timeout: - self.flush_events() - time.sleep(timestep) - counter += 1 - - def stop_event_loop(self): - """ - Stop the current blocking event loop. 
- - Interactive backends need to reimplement this to match - `start_event_loop` - """ - self._looping = False - - -def key_press_handler(event, canvas=None, toolbar=None): - """ - Implement the default Matplotlib key bindings for the canvas and toolbar - described at :ref:`key-event-handling`. - - Parameters - ---------- - event : `KeyEvent` - A key press/release event. - canvas : `FigureCanvasBase`, default: ``event.canvas`` - The backend-specific canvas instance. This parameter is kept for - back-compatibility, but, if set, should always be equal to - ``event.canvas``. - toolbar : `NavigationToolbar2`, default: ``event.canvas.toolbar`` - The navigation cursor toolbar. This parameter is kept for - back-compatibility, but, if set, should always be equal to - ``event.canvas.toolbar``. - """ - # these bindings happen whether you are over an Axes or not - - if event.key is None: - return - if canvas is None: - canvas = event.canvas - if toolbar is None: - toolbar = canvas.toolbar - - # Load key-mappings from rcParams. - fullscreen_keys = rcParams['keymap.fullscreen'] - home_keys = rcParams['keymap.home'] - back_keys = rcParams['keymap.back'] - forward_keys = rcParams['keymap.forward'] - pan_keys = rcParams['keymap.pan'] - zoom_keys = rcParams['keymap.zoom'] - save_keys = rcParams['keymap.save'] - quit_keys = rcParams['keymap.quit'] - quit_all_keys = rcParams['keymap.quit_all'] - grid_keys = rcParams['keymap.grid'] - grid_minor_keys = rcParams['keymap.grid_minor'] - toggle_yscale_keys = rcParams['keymap.yscale'] - toggle_xscale_keys = rcParams['keymap.xscale'] - - # toggle fullscreen mode ('f', 'ctrl + f') - if event.key in fullscreen_keys: - try: - canvas.manager.full_screen_toggle() - except AttributeError: - pass - - # quit the figure (default key 'ctrl+w') - if event.key in quit_keys: - Gcf.destroy_fig(canvas.figure) - if event.key in quit_all_keys: - Gcf.destroy_all() - - if toolbar is not None: - # home or reset mnemonic (default key 'h', 'home' and 'r') - if event.key in home_keys: - toolbar.home() - # forward / backward keys to enable left handed quick navigation - # (default key for backward: 'left', 'backspace' and 'c') - elif event.key in back_keys: - toolbar.back() - # (default key for forward: 'right' and 'v') - elif event.key in forward_keys: - toolbar.forward() - # pan mnemonic (default key 'p') - elif event.key in pan_keys: - toolbar.pan() - toolbar._update_cursor(event) - # zoom mnemonic (default key 'o') - elif event.key in zoom_keys: - toolbar.zoom() - toolbar._update_cursor(event) - # saving current figure (default key 's') - elif event.key in save_keys: - toolbar.save_figure() - - if event.inaxes is None: - return - - # these bindings require the mouse to be over an Axes to trigger - def _get_uniform_gridstate(ticks): - # Return True/False if all grid lines are on or off, None if they are - # not all in the same state. - if all(tick.gridline.get_visible() for tick in ticks): - return True - elif not any(tick.gridline.get_visible() for tick in ticks): - return False - else: - return None - - ax = event.inaxes - # toggle major grids in current Axes (default key 'g') - # Both here and below (for 'G'), we do nothing if *any* grid (major or - # minor, x or y) is not in a uniform state, to avoid messing up user - # customization. - if (event.key in grid_keys - # Exclude minor grids not in a uniform state. 
- and None not in [_get_uniform_gridstate(ax.xaxis.minorTicks), - _get_uniform_gridstate(ax.yaxis.minorTicks)]): - x_state = _get_uniform_gridstate(ax.xaxis.majorTicks) - y_state = _get_uniform_gridstate(ax.yaxis.majorTicks) - cycle = [(False, False), (True, False), (True, True), (False, True)] - try: - x_state, y_state = ( - cycle[(cycle.index((x_state, y_state)) + 1) % len(cycle)]) - except ValueError: - # Exclude major grids not in a uniform state. - pass - else: - # If turning major grids off, also turn minor grids off. - ax.grid(x_state, which="major" if x_state else "both", axis="x") - ax.grid(y_state, which="major" if y_state else "both", axis="y") - canvas.draw_idle() - # toggle major and minor grids in current Axes (default key 'G') - if (event.key in grid_minor_keys - # Exclude major grids not in a uniform state. - and None not in [_get_uniform_gridstate(ax.xaxis.majorTicks), - _get_uniform_gridstate(ax.yaxis.majorTicks)]): - x_state = _get_uniform_gridstate(ax.xaxis.minorTicks) - y_state = _get_uniform_gridstate(ax.yaxis.minorTicks) - cycle = [(False, False), (True, False), (True, True), (False, True)] - try: - x_state, y_state = ( - cycle[(cycle.index((x_state, y_state)) + 1) % len(cycle)]) - except ValueError: - # Exclude minor grids not in a uniform state. - pass - else: - ax.grid(x_state, which="both", axis="x") - ax.grid(y_state, which="both", axis="y") - canvas.draw_idle() - # toggle scaling of y-axes between 'log and 'linear' (default key 'l') - elif event.key in toggle_yscale_keys: - scale = ax.get_yscale() - if scale == 'log': - ax.set_yscale('linear') - ax.figure.canvas.draw_idle() - elif scale == 'linear': - try: - ax.set_yscale('log') - except ValueError as exc: - _log.warning(str(exc)) - ax.set_yscale('linear') - ax.figure.canvas.draw_idle() - # toggle scaling of x-axes between 'log and 'linear' (default key 'k') - elif event.key in toggle_xscale_keys: - scalex = ax.get_xscale() - if scalex == 'log': - ax.set_xscale('linear') - ax.figure.canvas.draw_idle() - elif scalex == 'linear': - try: - ax.set_xscale('log') - except ValueError as exc: - _log.warning(str(exc)) - ax.set_xscale('linear') - ax.figure.canvas.draw_idle() - - -def button_press_handler(event, canvas=None, toolbar=None): - """ - The default Matplotlib button actions for extra mouse buttons. - - Parameters are as for `key_press_handler`, except that *event* is a - `MouseEvent`. - """ - if canvas is None: - canvas = event.canvas - if toolbar is None: - toolbar = canvas.toolbar - if toolbar is not None: - button_name = str(MouseButton(event.button)) - if button_name in rcParams['keymap.back']: - toolbar.back() - elif button_name in rcParams['keymap.forward']: - toolbar.forward() - - -class NonGuiException(Exception): - """Raised when trying show a figure in a non-GUI backend.""" - pass - - -class FigureManagerBase: - """ - A backend-independent abstraction of a figure container and controller. - - The figure manager is used by pyplot to interact with the window in a - backend-independent way. It's an adapter for the real (GUI) framework that - represents the visual figure on screen. - - GUI backends define from this class to translate common operations such - as *show* or *resize* to the GUI-specific code. Non-GUI backends do not - support these operations an can just use the base class. 
- - This following basic operations are accessible: - - **Window operations** - - - `~.FigureManagerBase.show` - - `~.FigureManagerBase.destroy` - - `~.FigureManagerBase.full_screen_toggle` - - `~.FigureManagerBase.resize` - - `~.FigureManagerBase.get_window_title` - - `~.FigureManagerBase.set_window_title` - - **Key and mouse button press handling** - - The figure manager sets up default key and mouse button press handling by - hooking up the `.key_press_handler` to the matplotlib event system. This - ensures the same shortcuts and mouse actions across backends. - - **Other operations** - - Subclasses will have additional attributes and functions to access - additional functionality. This is of course backend-specific. For example, - most GUI backends have ``window`` and ``toolbar`` attributes that give - access to the native GUI widgets of the respective framework. - - Attributes - ---------- - canvas : `FigureCanvasBase` - The backend-specific canvas instance. - - num : int or str - The figure number. - - key_press_handler_id : int - The default key handler cid, when using the toolmanager. - To disable the default key press handling use:: - - figure.canvas.mpl_disconnect( - figure.canvas.manager.key_press_handler_id) - - button_press_handler_id : int - The default mouse button handler cid, when using the toolmanager. - To disable the default button press handling use:: - - figure.canvas.mpl_disconnect( - figure.canvas.manager.button_press_handler_id) - """ - - _toolbar2_class = None - _toolmanager_toolbar_class = None - - def __init__(self, canvas, num): - self.canvas = canvas - canvas.manager = self # store a pointer to parent - self.num = num - self.set_window_title(f"Figure {num:d}") - - self.key_press_handler_id = None - self.button_press_handler_id = None - if rcParams['toolbar'] != 'toolmanager': - self.key_press_handler_id = self.canvas.mpl_connect( - 'key_press_event', key_press_handler) - self.button_press_handler_id = self.canvas.mpl_connect( - 'button_press_event', button_press_handler) - - self.toolmanager = (ToolManager(canvas.figure) - if mpl.rcParams['toolbar'] == 'toolmanager' - else None) - if (mpl.rcParams["toolbar"] == "toolbar2" - and self._toolbar2_class): - self.toolbar = self._toolbar2_class(self.canvas) - elif (mpl.rcParams["toolbar"] == "toolmanager" - and self._toolmanager_toolbar_class): - self.toolbar = self._toolmanager_toolbar_class(self.toolmanager) - else: - self.toolbar = None - - if self.toolmanager: - tools.add_tools_to_manager(self.toolmanager) - if self.toolbar: - tools.add_tools_to_container(self.toolbar) - - @self.canvas.figure.add_axobserver - def notify_axes_change(fig): - # Called whenever the current Axes is changed. - if self.toolmanager is None and self.toolbar is not None: - self.toolbar.update() - - @classmethod - def create_with_canvas(cls, canvas_class, figure, num): - """ - Create a manager for a given *figure* using a specific *canvas_class*. - - Backends should override this method if they have specific needs for - setting up the canvas or the manager. - """ - return cls(canvas_class(figure), num) - - @classmethod - def start_main_loop(cls): - """ - Start the main event loop. - - This method is called by `.FigureManagerBase.pyplot_show`, which is the - implementation of `.pyplot.show`. To customize the behavior of - `.pyplot.show`, interactive backends should usually override - `~.FigureManagerBase.start_main_loop`; if more customized logic is - necessary, `~.FigureManagerBase.pyplot_show` can also be overridden. 
- """ - - @classmethod - def pyplot_show(cls, *, block=None): - """ - Show all figures. This method is the implementation of `.pyplot.show`. - - To customize the behavior of `.pyplot.show`, interactive backends - should usually override `~.FigureManagerBase.start_main_loop`; if more - customized logic is necessary, `~.FigureManagerBase.pyplot_show` can - also be overridden. - - Parameters - ---------- - block : bool, optional - Whether to block by calling ``start_main_loop``. The default, - None, means to block if we are neither in IPython's ``%pylab`` mode - nor in ``interactive`` mode. - """ - managers = Gcf.get_all_fig_managers() - if not managers: - return - for manager in managers: - try: - manager.show() # Emits a warning for non-interactive backend. - except NonGuiException as exc: - _api.warn_external(str(exc)) - if block is None: - # Hack: Are we in IPython's %pylab mode? In pylab mode, IPython - # (>= 0.10) tacks a _needmain attribute onto pyplot.show (always - # set to False). - pyplot_show = getattr(sys.modules.get("matplotlib.pyplot"), "show", None) - ipython_pylab = hasattr(pyplot_show, "_needmain") - block = not ipython_pylab and not is_interactive() - if block: - cls.start_main_loop() - - def show(self): - """ - For GUI backends, show the figure window and redraw. - For non-GUI backends, raise an exception, unless running headless (i.e. - on Linux with an unset DISPLAY); this exception is converted to a - warning in `.Figure.show`. - """ - # This should be overridden in GUI backends. - if sys.platform == "linux" and not os.environ.get("DISPLAY"): - # We cannot check _get_running_interactive_framework() == - # "headless" because that would also suppress the warning when - # $DISPLAY exists but is invalid, which is more likely an error and - # thus warrants a warning. - return - raise NonGuiException( - f"{type(self.canvas).__name__} is non-interactive, and thus cannot be " - f"shown") - - def destroy(self): - pass - - def full_screen_toggle(self): - pass - - def resize(self, w, h): - """For GUI backends, resize the window (in physical pixels).""" - - def get_window_title(self): - """ - Return the title text of the window containing the figure, or None - if there is no window (e.g., a PS backend). - """ - return 'image' - - def set_window_title(self, title): - """ - Set the title text of the window containing the figure. - - This has no effect for non-GUI (e.g., PS) backends. - """ - - -cursors = tools.cursors - - -class _Mode(str, Enum): - NONE = "" - PAN = "pan/zoom" - ZOOM = "zoom rect" - - def __str__(self): - return self.value - - @property - def _navigate_mode(self): - return self.name if self is not _Mode.NONE else None - - -class NavigationToolbar2: - """ - Base class for the navigation cursor, version 2. - - Backends must implement a canvas that handles connections for - 'button_press_event' and 'button_release_event'. See - :meth:`FigureCanvasBase.mpl_connect` for more information. - - They must also define - - :meth:`save_figure` - Save the current figure. - - :meth:`draw_rubberband` (optional) - Draw the zoom to rect "rubberband" rectangle. - - :meth:`set_message` (optional) - Display message. - - :meth:`set_history_buttons` (optional) - You can change the history back / forward buttons to indicate disabled / enabled - state. - - and override ``__init__`` to set up the toolbar -- without forgetting to - call the base-class init. 
Typically, ``__init__`` needs to set up toolbar - buttons connected to the `home`, `back`, `forward`, `pan`, `zoom`, and - `save_figure` methods and using standard icons in the "images" subdirectory - of the data path. - - That's it, we'll do the rest! - """ - - # list of toolitems to add to the toolbar, format is: - # ( - # text, # the text of the button (often not visible to users) - # tooltip_text, # the tooltip shown on hover (where possible) - # image_file, # name of the image for the button (without the extension) - # name_of_method, # name of the method in NavigationToolbar2 to call - # ) - toolitems = ( - ('Home', 'Reset original view', 'home', 'home'), - ('Back', 'Back to previous view', 'back', 'back'), - ('Forward', 'Forward to next view', 'forward', 'forward'), - (None, None, None, None), - ('Pan', - 'Left button pans, Right button zooms\n' - 'x/y fixes axis, CTRL fixes aspect', - 'move', 'pan'), - ('Zoom', 'Zoom to rectangle\nx/y fixes axis', 'zoom_to_rect', 'zoom'), - ('Subplots', 'Configure subplots', 'subplots', 'configure_subplots'), - (None, None, None, None), - ('Save', 'Save the figure', 'filesave', 'save_figure'), - ) - - def __init__(self, canvas): - self.canvas = canvas - canvas.toolbar = self - self._nav_stack = cbook._Stack() - # This cursor will be set after the initial draw. - self._last_cursor = tools.Cursors.POINTER - - self._id_press = self.canvas.mpl_connect( - 'button_press_event', self._zoom_pan_handler) - self._id_release = self.canvas.mpl_connect( - 'button_release_event', self._zoom_pan_handler) - self._id_drag = self.canvas.mpl_connect( - 'motion_notify_event', self.mouse_move) - self._pan_info = None - self._zoom_info = None - - self.mode = _Mode.NONE # a mode string for the status bar - self.set_history_buttons() - - def set_message(self, s): - """Display a message on toolbar or in status bar.""" - - def draw_rubberband(self, event, x0, y0, x1, y1): - """ - Draw a rectangle rubberband to indicate zoom limits. - - Note that it is not guaranteed that ``x0 <= x1`` and ``y0 <= y1``. - """ - - def remove_rubberband(self): - """Remove the rubberband.""" - - def home(self, *args): - """ - Restore the original view. - - For convenience of being directly connected as a GUI callback, which - often get passed additional parameters, this method accepts arbitrary - parameters, but does not use them. - """ - self._nav_stack.home() - self.set_history_buttons() - self._update_view() - - def back(self, *args): - """ - Move back up the view lim stack. - - For convenience of being directly connected as a GUI callback, which - often get passed additional parameters, this method accepts arbitrary - parameters, but does not use them. - """ - self._nav_stack.back() - self.set_history_buttons() - self._update_view() - - def forward(self, *args): - """ - Move forward in the view lim stack. - - For convenience of being directly connected as a GUI callback, which - often get passed additional parameters, this method accepts arbitrary - parameters, but does not use them. - """ - self._nav_stack.forward() - self.set_history_buttons() - self._update_view() - - def _update_cursor(self, event): - """ - Update the cursor after a mouse move event or a tool (de)activation. 
- """ - if self.mode and event.inaxes and event.inaxes.get_navigate(): - if (self.mode == _Mode.ZOOM - and self._last_cursor != tools.Cursors.SELECT_REGION): - self.canvas.set_cursor(tools.Cursors.SELECT_REGION) - self._last_cursor = tools.Cursors.SELECT_REGION - elif (self.mode == _Mode.PAN - and self._last_cursor != tools.Cursors.MOVE): - self.canvas.set_cursor(tools.Cursors.MOVE) - self._last_cursor = tools.Cursors.MOVE - elif self._last_cursor != tools.Cursors.POINTER: - self.canvas.set_cursor(tools.Cursors.POINTER) - self._last_cursor = tools.Cursors.POINTER - - @contextmanager - def _wait_cursor_for_draw_cm(self): - """ - Set the cursor to a wait cursor when drawing the canvas. - - In order to avoid constantly changing the cursor when the canvas - changes frequently, do nothing if this context was triggered during the - last second. (Optimally we'd prefer only setting the wait cursor if - the *current* draw takes too long, but the current draw blocks the GUI - thread). - """ - self._draw_time, last_draw_time = ( - time.time(), getattr(self, "_draw_time", -np.inf)) - if self._draw_time - last_draw_time > 1: - try: - self.canvas.set_cursor(tools.Cursors.WAIT) - yield - finally: - self.canvas.set_cursor(self._last_cursor) - else: - yield - - @staticmethod - def _mouse_event_to_message(event): - if event.inaxes and event.inaxes.get_navigate(): - try: - s = event.inaxes.format_coord(event.xdata, event.ydata) - except (ValueError, OverflowError): - pass - else: - s = s.rstrip() - artists = [a for a in event.inaxes._mouseover_set - if a.contains(event)[0] and a.get_visible()] - if artists: - a = cbook._topmost_artist(artists) - if a is not event.inaxes.patch: - data = a.get_cursor_data(event) - if data is not None: - data_str = a.format_cursor_data(data).rstrip() - if data_str: - s = s + '\n' + data_str - return s - return "" - - def mouse_move(self, event): - self._update_cursor(event) - self.set_message(self._mouse_event_to_message(event)) - - def _zoom_pan_handler(self, event): - if self.mode == _Mode.PAN: - if event.name == "button_press_event": - self.press_pan(event) - elif event.name == "button_release_event": - self.release_pan(event) - if self.mode == _Mode.ZOOM: - if event.name == "button_press_event": - self.press_zoom(event) - elif event.name == "button_release_event": - self.release_zoom(event) - - def pan(self, *args): - """ - Toggle the pan/zoom tool. - - Pan with left button, zoom with right. 
- """ - if not self.canvas.widgetlock.available(self): - self.set_message("pan unavailable") - return - if self.mode == _Mode.PAN: - self.mode = _Mode.NONE - self.canvas.widgetlock.release(self) - else: - self.mode = _Mode.PAN - self.canvas.widgetlock(self) - for a in self.canvas.figure.get_axes(): - a.set_navigate_mode(self.mode._navigate_mode) - - _PanInfo = namedtuple("_PanInfo", "button axes cid") - - def press_pan(self, event): - """Callback for mouse button press in pan/zoom mode.""" - if (event.button not in [MouseButton.LEFT, MouseButton.RIGHT] - or event.x is None or event.y is None): - return - axes = [a for a in self.canvas.figure.get_axes() - if a.in_axes(event) and a.get_navigate() and a.can_pan()] - if not axes: - return - if self._nav_stack() is None: - self.push_current() # set the home button to this view - for ax in axes: - ax.start_pan(event.x, event.y, event.button) - self.canvas.mpl_disconnect(self._id_drag) - id_drag = self.canvas.mpl_connect("motion_notify_event", self.drag_pan) - self._pan_info = self._PanInfo( - button=event.button, axes=axes, cid=id_drag) - - def drag_pan(self, event): - """Callback for dragging in pan/zoom mode.""" - for ax in self._pan_info.axes: - # Using the recorded button at the press is safer than the current - # button, as multiple buttons can get pressed during motion. - ax.drag_pan(self._pan_info.button, event.key, event.x, event.y) - self.canvas.draw_idle() - - def release_pan(self, event): - """Callback for mouse button release in pan/zoom mode.""" - if self._pan_info is None: - return - self.canvas.mpl_disconnect(self._pan_info.cid) - self._id_drag = self.canvas.mpl_connect( - 'motion_notify_event', self.mouse_move) - for ax in self._pan_info.axes: - ax.end_pan() - self.canvas.draw_idle() - self._pan_info = None - self.push_current() - - def zoom(self, *args): - if not self.canvas.widgetlock.available(self): - self.set_message("zoom unavailable") - return - """Toggle zoom to rect mode.""" - if self.mode == _Mode.ZOOM: - self.mode = _Mode.NONE - self.canvas.widgetlock.release(self) - else: - self.mode = _Mode.ZOOM - self.canvas.widgetlock(self) - for a in self.canvas.figure.get_axes(): - a.set_navigate_mode(self.mode._navigate_mode) - - _ZoomInfo = namedtuple("_ZoomInfo", "direction start_xy axes cid cbar") - - def press_zoom(self, event): - """Callback for mouse button press in zoom to rect mode.""" - if (event.button not in [MouseButton.LEFT, MouseButton.RIGHT] - or event.x is None or event.y is None): - return - axes = [a for a in self.canvas.figure.get_axes() - if a.in_axes(event) and a.get_navigate() and a.can_zoom()] - if not axes: - return - if self._nav_stack() is None: - self.push_current() # set the home button to this view - id_zoom = self.canvas.mpl_connect( - "motion_notify_event", self.drag_zoom) - # A colorbar is one-dimensional, so we extend the zoom rectangle out - # to the edge of the Axes bbox in the other dimension. To do that we - # store the orientation of the colorbar for later. 
- if hasattr(axes[0], "_colorbar"): - cbar = axes[0]._colorbar.orientation - else: - cbar = None - self._zoom_info = self._ZoomInfo( - direction="in" if event.button == 1 else "out", - start_xy=(event.x, event.y), axes=axes, cid=id_zoom, cbar=cbar) - - def drag_zoom(self, event): - """Callback for dragging in zoom mode.""" - start_xy = self._zoom_info.start_xy - ax = self._zoom_info.axes[0] - (x1, y1), (x2, y2) = np.clip( - [start_xy, [event.x, event.y]], ax.bbox.min, ax.bbox.max) - key = event.key - # Force the key on colorbars to extend the short-axis bbox - if self._zoom_info.cbar == "horizontal": - key = "x" - elif self._zoom_info.cbar == "vertical": - key = "y" - if key == "x": - y1, y2 = ax.bbox.intervaly - elif key == "y": - x1, x2 = ax.bbox.intervalx - - self.draw_rubberband(event, x1, y1, x2, y2) - - def release_zoom(self, event): - """Callback for mouse button release in zoom to rect mode.""" - if self._zoom_info is None: - return - - # We don't check the event button here, so that zooms can be cancelled - # by (pressing and) releasing another mouse button. - self.canvas.mpl_disconnect(self._zoom_info.cid) - self.remove_rubberband() - - start_x, start_y = self._zoom_info.start_xy - key = event.key - # Force the key on colorbars to ignore the zoom-cancel on the - # short-axis side - if self._zoom_info.cbar == "horizontal": - key = "x" - elif self._zoom_info.cbar == "vertical": - key = "y" - # Ignore single clicks: 5 pixels is a threshold that allows the user to - # "cancel" a zoom action by zooming by less than 5 pixels. - if ((abs(event.x - start_x) < 5 and key != "y") or - (abs(event.y - start_y) < 5 and key != "x")): - self.canvas.draw_idle() - self._zoom_info = None - return - - for i, ax in enumerate(self._zoom_info.axes): - # Detect whether this Axes is twinned with an earlier Axes in the - # list of zoomed Axes, to avoid double zooming. - twinx = any(ax.get_shared_x_axes().joined(ax, prev) - for prev in self._zoom_info.axes[:i]) - twiny = any(ax.get_shared_y_axes().joined(ax, prev) - for prev in self._zoom_info.axes[:i]) - ax._set_view_from_bbox( - (start_x, start_y, event.x, event.y), - self._zoom_info.direction, key, twinx, twiny) - - self.canvas.draw_idle() - self._zoom_info = None - self.push_current() - - def push_current(self): - """Push the current view limits and position onto the stack.""" - self._nav_stack.push( - WeakKeyDictionary( - {ax: (ax._get_view(), - # Store both the original and modified positions. - (ax.get_position(True).frozen(), - ax.get_position().frozen())) - for ax in self.canvas.figure.axes})) - self.set_history_buttons() - - def _update_view(self): - """ - Update the viewlim and position from the view and position stack for - each Axes. - """ - nav_info = self._nav_stack() - if nav_info is None: - return - # Retrieve all items at once to avoid any risk of GC deleting an Axes - # while in the middle of the loop below. - items = list(nav_info.items()) - for ax, (view, (pos_orig, pos_active)) in items: - ax._set_view(view) - # Restore both the original and modified positions - ax._set_position(pos_orig, 'original') - ax._set_position(pos_active, 'active') - self.canvas.draw_idle() - - def configure_subplots(self, *args): - if hasattr(self, "subplot_tool"): - self.subplot_tool.figure.canvas.manager.show() - return - # This import needs to happen here due to circular imports. - from matplotlib.figure import Figure - with mpl.rc_context({"toolbar": "none"}): # No navbar for the toolfig. 
- manager = type(self.canvas).new_manager(Figure(figsize=(6, 3)), -1) - manager.set_window_title("Subplot configuration tool") - tool_fig = manager.canvas.figure - tool_fig.subplots_adjust(top=0.9) - self.subplot_tool = widgets.SubplotTool(self.canvas.figure, tool_fig) - cid = self.canvas.mpl_connect( - "close_event", lambda e: manager.destroy()) - - def on_tool_fig_close(e): - self.canvas.mpl_disconnect(cid) - del self.subplot_tool - - tool_fig.canvas.mpl_connect("close_event", on_tool_fig_close) - manager.show() - return self.subplot_tool - - def save_figure(self, *args): - """Save the current figure.""" - raise NotImplementedError - - def update(self): - """Reset the Axes stack.""" - self._nav_stack.clear() - self.set_history_buttons() - - def set_history_buttons(self): - """Enable or disable the back/forward button.""" - - -class ToolContainerBase: - """ - Base class for all tool containers, e.g. toolbars. - - Attributes - ---------- - toolmanager : `.ToolManager` - The tools with which this `ToolContainer` wants to communicate. - """ - - _icon_extension = '.png' - """ - Toolcontainer button icon image format extension - - **String**: Image extension - """ - - def __init__(self, toolmanager): - self.toolmanager = toolmanager - toolmanager.toolmanager_connect( - 'tool_message_event', - lambda event: self.set_message(event.message)) - toolmanager.toolmanager_connect( - 'tool_removed_event', - lambda event: self.remove_toolitem(event.tool.name)) - - def _tool_toggled_cbk(self, event): - """ - Capture the 'tool_trigger_[name]' - - This only gets used for toggled tools. - """ - self.toggle_toolitem(event.tool.name, event.tool.toggled) - - def add_tool(self, tool, group, position=-1): - """ - Add a tool to this container. - - Parameters - ---------- - tool : tool_like - The tool to add, see `.ToolManager.get_tool`. - group : str - The name of the group to add this tool to. - position : int, default: -1 - The position within the group to place this tool. - """ - tool = self.toolmanager.get_tool(tool) - image = self._get_image_filename(tool.image) - toggle = getattr(tool, 'toggled', None) is not None - self.add_toolitem(tool.name, group, position, - image, tool.description, toggle) - if toggle: - self.toolmanager.toolmanager_connect('tool_trigger_%s' % tool.name, - self._tool_toggled_cbk) - # If initially toggled - if tool.toggled: - self.toggle_toolitem(tool.name, True) - - def _get_image_filename(self, image): - """Find the image based on its name.""" - if not image: - return None - - basedir = cbook._get_data_path("images") - for fname in [ - image, - image + self._icon_extension, - str(basedir / image), - str(basedir / (image + self._icon_extension)), - ]: - if os.path.isfile(fname): - return fname - - def trigger_tool(self, name): - """ - Trigger the tool. - - Parameters - ---------- - name : str - Name (id) of the tool triggered from within the container. - """ - self.toolmanager.trigger_tool(name, sender=self) - - def add_toolitem(self, name, group, position, image, description, toggle): - """ - Add a toolitem to the container. - - This method must be implemented per backend. - - The callback associated with the button click event, - must be *exactly* ``self.trigger_tool(name)``. - - Parameters - ---------- - name : str - Name of the tool to add, this gets used as the tool's ID and as the - default label of the buttons. - group : str - Name of the group that this tool belongs to. - position : int - Position of the tool within its group, if -1 it goes at the end. 
- image : str - Filename of the image for the button or `None`. - description : str - Description of the tool, used for the tooltips. - toggle : bool - * `True` : The button is a toggle (change the pressed/unpressed - state between consecutive clicks). - * `False` : The button is a normal button (returns to unpressed - state after release). - """ - raise NotImplementedError - - def toggle_toolitem(self, name, toggled): - """ - Toggle the toolitem without firing event. - - Parameters - ---------- - name : str - Id of the tool to toggle. - toggled : bool - Whether to set this tool as toggled or not. - """ - raise NotImplementedError - - def remove_toolitem(self, name): - """ - Remove a toolitem from the `ToolContainer`. - - This method must get implemented per backend. - - Called when `.ToolManager` emits a `tool_removed_event`. - - Parameters - ---------- - name : str - Name of the tool to remove. - """ - raise NotImplementedError - - def set_message(self, s): - """ - Display a message on the toolbar. - - Parameters - ---------- - s : str - Message text. - """ - raise NotImplementedError - - -class _Backend: - # A backend can be defined by using the following pattern: - # - # @_Backend.export - # class FooBackend(_Backend): - # # override the attributes and methods documented below. - - # `backend_version` may be overridden by the subclass. - backend_version = "unknown" - - # The `FigureCanvas` class must be defined. - FigureCanvas = None - - # For interactive backends, the `FigureManager` class must be overridden. - FigureManager = FigureManagerBase - - # For interactive backends, `mainloop` should be a function taking no - # argument and starting the backend main loop. It should be left as None - # for non-interactive backends. - mainloop = None - - # The following methods will be automatically defined and exported, but - # can be overridden. - - @classmethod - def new_figure_manager(cls, num, *args, **kwargs): - """Create a new figure manager instance.""" - # This import needs to happen here due to circular imports. - from matplotlib.figure import Figure - fig_cls = kwargs.pop('FigureClass', Figure) - fig = fig_cls(*args, **kwargs) - return cls.new_figure_manager_given_figure(num, fig) - - @classmethod - def new_figure_manager_given_figure(cls, num, figure): - """Create a new figure manager instance for the given figure.""" - return cls.FigureCanvas.new_manager(figure, num) - - @classmethod - def draw_if_interactive(cls): - manager_class = cls.FigureCanvas.manager_class - # Interactive backends reimplement start_main_loop or pyplot_show. - backend_is_interactive = ( - manager_class.start_main_loop != FigureManagerBase.start_main_loop - or manager_class.pyplot_show != FigureManagerBase.pyplot_show) - if backend_is_interactive and is_interactive(): - manager = Gcf.get_active() - if manager: - manager.canvas.draw_idle() - - @classmethod - def show(cls, *, block=None): - """ - Show all figures. - - `show` blocks by calling `mainloop` if *block* is ``True``, or if it - is ``None`` and we are neither in IPython's ``%pylab`` mode, nor in - `interactive` mode. - """ - managers = Gcf.get_all_fig_managers() - if not managers: - return - for manager in managers: - try: - manager.show() # Emits a warning for non-interactive backend. - except NonGuiException as exc: - _api.warn_external(str(exc)) - if cls.mainloop is None: - return - if block is None: - # Hack: Are we in IPython's %pylab mode? In pylab mode, IPython - # (>= 0.10) tacks a _needmain attribute onto pyplot.show (always - # set to False). 
- pyplot_show = getattr(sys.modules.get("matplotlib.pyplot"), "show", None) - ipython_pylab = hasattr(pyplot_show, "_needmain") - block = not ipython_pylab and not is_interactive() - if block: - cls.mainloop() - - # This method is the one actually exporting the required methods. - - @staticmethod - def export(cls): - for name in [ - "backend_version", - "FigureCanvas", - "FigureManager", - "new_figure_manager", - "new_figure_manager_given_figure", - "draw_if_interactive", - "show", - ]: - setattr(sys.modules[cls.__module__], name, getattr(cls, name)) - - # For back-compatibility, generate a shim `Show` class. - - class Show(ShowBase): - def mainloop(self): - return cls.mainloop() - - setattr(sys.modules[cls.__module__], "Show", Show) - return cls - - -class ShowBase(_Backend): - """ - Simple base class to generate a ``show()`` function in backends. - - Subclass must override ``mainloop()`` method. - """ - - def __call__(self, block=None): - return self.show(block=block) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/multipart/exceptions.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/multipart/exceptions.py deleted file mode 100644 index 016e7f7c18c014c059214b0ef09dd1ecf97cb8fd..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/multipart/exceptions.py +++ /dev/null @@ -1,46 +0,0 @@ -class FormParserError(ValueError): - """Base error class for our form parser.""" - pass - - -class ParseError(FormParserError): - """This exception (or a subclass) is raised when there is an error while - parsing something. - """ - - #: This is the offset in the input data chunk (*NOT* the overall stream) in - #: which the parse error occurred. It will be -1 if not specified. - offset = -1 - - -class MultipartParseError(ParseError): - """This is a specific error that is raised when the MultipartParser detects - an error while parsing. - """ - pass - - -class QuerystringParseError(ParseError): - """This is a specific error that is raised when the QuerystringParser - detects an error while parsing. - """ - pass - - -class DecodeError(ParseError): - """This exception is raised when there is a decoding error - for example - with the Base64Decoder or QuotedPrintableDecoder. - """ - pass - - -# On Python 3.3, IOError is the same as OSError, so we don't want to inherit -# from both of them. We handle this case below. 
-if IOError is not OSError: # pragma: no cover - class FileError(FormParserError, IOError, OSError): - """Exception class for problems with the File class.""" - pass -else: # pragma: no cover - class FileError(FormParserError, OSError): - """Exception class for problems with the File class.""" - pass diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/integer/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/integer/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/string_/test_string_arrow.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/string_/test_string_arrow.py deleted file mode 100644 index c1d424f12bfc431c8627f7f0b8c62f71fa51c402..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/string_/test_string_arrow.py +++ /dev/null @@ -1,266 +0,0 @@ -import pickle -import re - -import numpy as np -import pytest - -from pandas.compat import pa_version_under7p0 - -import pandas as pd -import pandas._testing as tm -from pandas.core.arrays.string_ import ( - StringArray, - StringDtype, -) -from pandas.core.arrays.string_arrow import ( - ArrowStringArray, - ArrowStringArrayNumpySemantics, -) - -skip_if_no_pyarrow = pytest.mark.skipif( - pa_version_under7p0, - reason="pyarrow>=7.0.0 is required for PyArrow backed StringArray", -) - - -@skip_if_no_pyarrow -def test_eq_all_na(): - a = pd.array([pd.NA, pd.NA], dtype=StringDtype("pyarrow")) - result = a == a - expected = pd.array([pd.NA, pd.NA], dtype="boolean[pyarrow]") - tm.assert_extension_array_equal(result, expected) - - -def test_config(string_storage): - with pd.option_context("string_storage", string_storage): - assert StringDtype().storage == string_storage - result = pd.array(["a", "b"]) - assert result.dtype.storage == string_storage - - expected = ( - StringDtype(string_storage).construct_array_type()._from_sequence(["a", "b"]) - ) - tm.assert_equal(result, expected) - - -def test_config_bad_storage_raises(): - msg = re.escape("Value must be one of python|pyarrow") - with pytest.raises(ValueError, match=msg): - pd.options.mode.string_storage = "foo" - - -@skip_if_no_pyarrow -@pytest.mark.parametrize("chunked", [True, False]) -@pytest.mark.parametrize("array", ["numpy", "pyarrow"]) -def test_constructor_not_string_type_raises(array, chunked, arrow_string_storage): - import pyarrow as pa - - array = pa if array in arrow_string_storage else np - - arr = array.array([1, 2, 3]) - if chunked: - if array is np: - pytest.skip("chunked not applicable to numpy array") - arr = pa.chunked_array(arr) - if array is np: - msg = "Unsupported type '' for ArrowExtensionArray" - else: - msg = re.escape( - "ArrowStringArray requires a PyArrow (chunked) array of string type" - ) - with pytest.raises(ValueError, match=msg): - ArrowStringArray(arr) - - -@pytest.mark.parametrize("chunked", [True, False]) -def test_constructor_not_string_type_value_dictionary_raises(chunked): - pa = pytest.importorskip("pyarrow") - - arr = pa.array([1, 2, 3], pa.dictionary(pa.int32(), pa.int32())) - if chunked: - arr = pa.chunked_array(arr) - - msg = re.escape( - "ArrowStringArray requires a PyArrow (chunked) array of string type" - ) - with pytest.raises(ValueError, match=msg): - ArrowStringArray(arr) - - 
-@pytest.mark.parametrize("chunked", [True, False]) -def test_constructor_valid_string_type_value_dictionary(chunked): - pa = pytest.importorskip("pyarrow") - - arr = pa.array(["1", "2", "3"], pa.dictionary(pa.int32(), pa.utf8())) - if chunked: - arr = pa.chunked_array(arr) - - arr = ArrowStringArray(arr) - assert pa.types.is_string(arr._pa_array.type.value_type) - - -def test_constructor_from_list(): - # GH#27673 - pytest.importorskip("pyarrow", minversion="1.0.0") - result = pd.Series(["E"], dtype=StringDtype(storage="pyarrow")) - assert isinstance(result.dtype, StringDtype) - assert result.dtype.storage == "pyarrow" - - -@skip_if_no_pyarrow -def test_from_sequence_wrong_dtype_raises(): - with pd.option_context("string_storage", "python"): - ArrowStringArray._from_sequence(["a", None, "c"], dtype="string") - - with pd.option_context("string_storage", "pyarrow"): - ArrowStringArray._from_sequence(["a", None, "c"], dtype="string") - - with pytest.raises(AssertionError, match=None): - ArrowStringArray._from_sequence(["a", None, "c"], dtype="string[python]") - - ArrowStringArray._from_sequence(["a", None, "c"], dtype="string[pyarrow]") - - with pytest.raises(AssertionError, match=None): - with pd.option_context("string_storage", "python"): - ArrowStringArray._from_sequence(["a", None, "c"], dtype=StringDtype()) - - with pd.option_context("string_storage", "pyarrow"): - ArrowStringArray._from_sequence(["a", None, "c"], dtype=StringDtype()) - - with pytest.raises(AssertionError, match=None): - ArrowStringArray._from_sequence(["a", None, "c"], dtype=StringDtype("python")) - - ArrowStringArray._from_sequence(["a", None, "c"], dtype=StringDtype("pyarrow")) - - with pd.option_context("string_storage", "python"): - StringArray._from_sequence(["a", None, "c"], dtype="string") - - with pd.option_context("string_storage", "pyarrow"): - StringArray._from_sequence(["a", None, "c"], dtype="string") - - StringArray._from_sequence(["a", None, "c"], dtype="string[python]") - - with pytest.raises(AssertionError, match=None): - StringArray._from_sequence(["a", None, "c"], dtype="string[pyarrow]") - - with pd.option_context("string_storage", "python"): - StringArray._from_sequence(["a", None, "c"], dtype=StringDtype()) - - with pytest.raises(AssertionError, match=None): - with pd.option_context("string_storage", "pyarrow"): - StringArray._from_sequence(["a", None, "c"], dtype=StringDtype()) - - StringArray._from_sequence(["a", None, "c"], dtype=StringDtype("python")) - - with pytest.raises(AssertionError, match=None): - StringArray._from_sequence(["a", None, "c"], dtype=StringDtype("pyarrow")) - - -@pytest.mark.skipif( - not pa_version_under7p0, - reason="pyarrow is installed", -) -def test_pyarrow_not_installed_raises(): - msg = re.escape("pyarrow>=7.0.0 is required for PyArrow backed") - - with pytest.raises(ImportError, match=msg): - StringDtype(storage="pyarrow") - - with pytest.raises(ImportError, match=msg): - ArrowStringArray([]) - - with pytest.raises(ImportError, match=msg): - ArrowStringArrayNumpySemantics([]) - - with pytest.raises(ImportError, match=msg): - ArrowStringArray._from_sequence(["a", None, "b"]) - - -@skip_if_no_pyarrow -@pytest.mark.parametrize("multiple_chunks", [False, True]) -@pytest.mark.parametrize( - "key, value, expected", - [ - (-1, "XX", ["a", "b", "c", "d", "XX"]), - (1, "XX", ["a", "XX", "c", "d", "e"]), - (1, None, ["a", None, "c", "d", "e"]), - (1, pd.NA, ["a", None, "c", "d", "e"]), - ([1, 3], "XX", ["a", "XX", "c", "XX", "e"]), - ([1, 3], ["XX", "YY"], ["a", "XX", "c", 
"YY", "e"]), - ([1, 3], ["XX", None], ["a", "XX", "c", None, "e"]), - ([1, 3], ["XX", pd.NA], ["a", "XX", "c", None, "e"]), - ([0, -1], ["XX", "YY"], ["XX", "b", "c", "d", "YY"]), - ([-1, 0], ["XX", "YY"], ["YY", "b", "c", "d", "XX"]), - (slice(3, None), "XX", ["a", "b", "c", "XX", "XX"]), - (slice(2, 4), ["XX", "YY"], ["a", "b", "XX", "YY", "e"]), - (slice(3, 1, -1), ["XX", "YY"], ["a", "b", "YY", "XX", "e"]), - (slice(None), "XX", ["XX", "XX", "XX", "XX", "XX"]), - ([False, True, False, True, False], ["XX", "YY"], ["a", "XX", "c", "YY", "e"]), - ], -) -def test_setitem(multiple_chunks, key, value, expected): - import pyarrow as pa - - result = pa.array(list("abcde")) - expected = pa.array(expected) - - if multiple_chunks: - result = pa.chunked_array([result[:3], result[3:]]) - expected = pa.chunked_array([expected[:3], expected[3:]]) - - result = ArrowStringArray(result) - expected = ArrowStringArray(expected) - - result[key] = value - tm.assert_equal(result, expected) - - -@skip_if_no_pyarrow -def test_setitem_invalid_indexer_raises(): - import pyarrow as pa - - arr = ArrowStringArray(pa.array(list("abcde"))) - - with pytest.raises(IndexError, match=None): - arr[5] = "foo" - - with pytest.raises(IndexError, match=None): - arr[-6] = "foo" - - with pytest.raises(IndexError, match=None): - arr[[0, 5]] = "foo" - - with pytest.raises(IndexError, match=None): - arr[[0, -6]] = "foo" - - with pytest.raises(IndexError, match=None): - arr[[True, True, False]] = "foo" - - with pytest.raises(ValueError, match=None): - arr[[0, 1]] = ["foo", "bar", "baz"] - - -@skip_if_no_pyarrow -@pytest.mark.parametrize("dtype", ["string[pyarrow]", "string[pyarrow_numpy]"]) -def test_pickle_roundtrip(dtype): - # GH 42600 - expected = pd.Series(range(10), dtype=dtype) - expected_sliced = expected.head(2) - full_pickled = pickle.dumps(expected) - sliced_pickled = pickle.dumps(expected_sliced) - - assert len(full_pickled) > len(sliced_pickled) - - result = pickle.loads(full_pickled) - tm.assert_series_equal(result, expected) - - result_sliced = pickle.loads(sliced_pickled) - tm.assert_series_equal(result_sliced, expected_sliced) - - -@skip_if_no_pyarrow -def test_string_dtype_error_message(): - # GH#55051 - msg = "Storage must be 'python', 'pyarrow' or 'pyarrow_numpy'." - with pytest.raises(ValueError, match=msg): - StringDtype("bla") diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/extension/test_categorical.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/extension/test_categorical.py deleted file mode 100644 index 33e5c9ad72982c2b6e8da9f485850b0b9619e0aa..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/extension/test_categorical.py +++ /dev/null @@ -1,232 +0,0 @@ -""" -This file contains a minimal set of tests for compliance with the extension -array interface test suite, and should contain no other tests. -The test suite for the full functionality of the array is located in -`pandas/tests/arrays/`. - -The tests in this file are inherited from the BaseExtensionTests, and only -minimal tweaks should be applied to get the tests passing (by overwriting a -parent method). - -Additional tests should either be added to one of the BaseExtensionTests -classes (if they are relevant for the extension interface for all dtypes), or -be added to the array-specific tests in `pandas/tests/arrays/`. 
- -""" -import string - -import numpy as np -import pytest - -import pandas as pd -from pandas import Categorical -import pandas._testing as tm -from pandas.api.types import CategoricalDtype -from pandas.tests.extension import base - - -def make_data(): - while True: - values = np.random.default_rng(2).choice(list(string.ascii_letters), size=100) - # ensure we meet the requirements - # 1. first two not null - # 2. first and second are different - if values[0] != values[1]: - break - return values - - -@pytest.fixture -def dtype(): - return CategoricalDtype() - - -@pytest.fixture -def data(): - """Length-100 array for this type. - - * data[0] and data[1] should both be non missing - * data[0] and data[1] should not be equal - """ - return Categorical(make_data()) - - -@pytest.fixture -def data_missing(): - """Length 2 array with [NA, Valid]""" - return Categorical([np.nan, "A"]) - - -@pytest.fixture -def data_for_sorting(): - return Categorical(["A", "B", "C"], categories=["C", "A", "B"], ordered=True) - - -@pytest.fixture -def data_missing_for_sorting(): - return Categorical(["A", None, "B"], categories=["B", "A"], ordered=True) - - -@pytest.fixture -def data_for_grouping(): - return Categorical(["a", "a", None, None, "b", "b", "a", "c"]) - - -class TestDtype(base.BaseDtypeTests): - pass - - -class TestInterface(base.BaseInterfaceTests): - @pytest.mark.xfail(reason="Memory usage doesn't match") - def test_memory_usage(self, data): - # TODO: Is this deliberate? - super().test_memory_usage(data) - - def test_contains(self, data, data_missing): - # GH-37867 - # na value handling in Categorical.__contains__ is deprecated. - # See base.BaseInterFaceTests.test_contains for more details. - - na_value = data.dtype.na_value - # ensure data without missing values - data = data[~data.isna()] - - # first elements are non-missing - assert data[0] in data - assert data_missing[0] in data_missing - - # check the presence of na_value - assert na_value in data_missing - assert na_value not in data - - # Categoricals can contain other nan-likes than na_value - for na_value_obj in tm.NULL_OBJECTS: - if na_value_obj is na_value: - continue - assert na_value_obj not in data - assert na_value_obj in data_missing # this line differs from super method - - -class TestConstructors(base.BaseConstructorsTests): - def test_empty(self, dtype): - cls = dtype.construct_array_type() - result = cls._empty((4,), dtype=dtype) - - assert isinstance(result, cls) - # the dtype we passed is not initialized, so will not match the - # dtype on our result. - assert result.dtype == CategoricalDtype([]) - - -class TestReshaping(base.BaseReshapingTests): - pass - - -class TestGetitem(base.BaseGetitemTests): - @pytest.mark.skip(reason="Backwards compatibility") - def test_getitem_scalar(self, data): - # CategoricalDtype.type isn't "correct" since it should - # be a parent of the elements (object). But don't want - # to break things by changing. 
- super().test_getitem_scalar(data) - - -class TestSetitem(base.BaseSetitemTests): - pass - - -class TestIndex(base.BaseIndexTests): - pass - - -class TestMissing(base.BaseMissingTests): - pass - - -class TestReduce(base.BaseReduceTests): - pass - - -class TestAccumulate(base.BaseAccumulateTests): - pass - - -class TestMethods(base.BaseMethodsTests): - @pytest.mark.xfail(reason="Unobserved categories included") - def test_value_counts(self, all_data, dropna): - return super().test_value_counts(all_data, dropna) - - def test_combine_add(self, data_repeated): - # GH 20825 - # When adding categoricals in combine, result is a string - orig_data1, orig_data2 = data_repeated(2) - s1 = pd.Series(orig_data1) - s2 = pd.Series(orig_data2) - result = s1.combine(s2, lambda x1, x2: x1 + x2) - expected = pd.Series( - [a + b for (a, b) in zip(list(orig_data1), list(orig_data2))] - ) - tm.assert_series_equal(result, expected) - - val = s1.iloc[0] - result = s1.combine(val, lambda x1, x2: x1 + x2) - expected = pd.Series([a + val for a in list(orig_data1)]) - tm.assert_series_equal(result, expected) - - @pytest.mark.parametrize("na_action", [None, "ignore"]) - def test_map(self, data, na_action): - result = data.map(lambda x: x, na_action=na_action) - tm.assert_extension_array_equal(result, data) - - -class TestCasting(base.BaseCastingTests): - pass - - -class TestArithmeticOps(base.BaseArithmeticOpsTests): - def test_arith_frame_with_scalar(self, data, all_arithmetic_operators, request): - # frame & scalar - op_name = all_arithmetic_operators - if op_name == "__rmod__": - request.node.add_marker( - pytest.mark.xfail( - reason="rmod never called when string is first argument" - ) - ) - super().test_arith_frame_with_scalar(data, op_name) - - def test_arith_series_with_scalar(self, data, all_arithmetic_operators, request): - op_name = all_arithmetic_operators - if op_name == "__rmod__": - request.node.add_marker( - pytest.mark.xfail( - reason="rmod never called when string is first argument" - ) - ) - super().test_arith_series_with_scalar(data, op_name) - - -class TestComparisonOps(base.BaseComparisonOpsTests): - def _compare_other(self, s, data, op, other): - op_name = f"__{op.__name__}__" - if op_name not in ["__eq__", "__ne__"]: - msg = "Unordered Categoricals can only compare equality or not" - with pytest.raises(TypeError, match=msg): - op(data, other) - else: - return super()._compare_other(s, data, op, other) - - -class TestParsing(base.BaseParsingTests): - pass - - -class Test2DCompat(base.NDArrayBacked2DTests): - def test_repr_2d(self, data): - # Categorical __repr__ doesn't include "Categorical", so we need - # to special-case - res = repr(data.reshape(1, -1)) - assert res.count("\nCategories") == 1 - - res = repr(data.reshape(-1, 1)) - assert res.count("\nCategories") == 1 diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/panel.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/panel.py deleted file mode 100644 index 151fe5f017f855f7df0150508ec0fb53f15abb60..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/panel.py +++ /dev/null @@ -1,250 +0,0 @@ -from typing import Optional, TYPE_CHECKING - -from .box import Box, ROUNDED - -from .align import AlignMethod -from .jupyter import JupyterMixin -from .measure import Measurement, measure_renderables -from .padding import Padding, PaddingDimensions -from .style import StyleType -from .text import Text, 
TextType -from .segment import Segment - -if TYPE_CHECKING: - from .console import Console, ConsoleOptions, RenderableType, RenderResult - - -class Panel(JupyterMixin): - """A console renderable that draws a border around its contents. - - Example: - >>> console.print(Panel("Hello, World!")) - - Args: - renderable (RenderableType): A console renderable object. - box (Box, optional): A Box instance that defines the look of the border (see :ref:`appendix_box`. - Defaults to box.ROUNDED. - safe_box (bool, optional): Disable box characters that don't display on windows legacy terminal with *raster* fonts. Defaults to True. - expand (bool, optional): If True the panel will stretch to fill the console - width, otherwise it will be sized to fit the contents. Defaults to True. - style (str, optional): The style of the panel (border and contents). Defaults to "none". - border_style (str, optional): The style of the border. Defaults to "none". - width (Optional[int], optional): Optional width of panel. Defaults to None to auto-detect. - height (Optional[int], optional): Optional height of panel. Defaults to None to auto-detect. - padding (Optional[PaddingDimensions]): Optional padding around renderable. Defaults to 0. - highlight (bool, optional): Enable automatic highlighting of panel title (if str). Defaults to False. - """ - - def __init__( - self, - renderable: "RenderableType", - box: Box = ROUNDED, - *, - title: Optional[TextType] = None, - title_align: AlignMethod = "center", - subtitle: Optional[TextType] = None, - subtitle_align: AlignMethod = "center", - safe_box: Optional[bool] = None, - expand: bool = True, - style: StyleType = "none", - border_style: StyleType = "none", - width: Optional[int] = None, - height: Optional[int] = None, - padding: PaddingDimensions = (0, 1), - highlight: bool = False, - ) -> None: - self.renderable = renderable - self.box = box - self.title = title - self.title_align: AlignMethod = title_align - self.subtitle = subtitle - self.subtitle_align = subtitle_align - self.safe_box = safe_box - self.expand = expand - self.style = style - self.border_style = border_style - self.width = width - self.height = height - self.padding = padding - self.highlight = highlight - - @classmethod - def fit( - cls, - renderable: "RenderableType", - box: Box = ROUNDED, - *, - title: Optional[TextType] = None, - title_align: AlignMethod = "center", - subtitle: Optional[TextType] = None, - subtitle_align: AlignMethod = "center", - safe_box: Optional[bool] = None, - style: StyleType = "none", - border_style: StyleType = "none", - width: Optional[int] = None, - padding: PaddingDimensions = (0, 1), - ) -> "Panel": - """An alternative constructor that sets expand=False.""" - return cls( - renderable, - box, - title=title, - title_align=title_align, - subtitle=subtitle, - subtitle_align=subtitle_align, - safe_box=safe_box, - style=style, - border_style=border_style, - width=width, - padding=padding, - expand=False, - ) - - @property - def _title(self) -> Optional[Text]: - if self.title: - title_text = ( - Text.from_markup(self.title) - if isinstance(self.title, str) - else self.title.copy() - ) - title_text.end = "" - title_text.plain = title_text.plain.replace("\n", " ") - title_text.no_wrap = True - title_text.expand_tabs() - title_text.pad(1) - return title_text - return None - - @property - def _subtitle(self) -> Optional[Text]: - if self.subtitle: - subtitle_text = ( - Text.from_markup(self.subtitle) - if isinstance(self.subtitle, str) - else self.subtitle.copy() - ) - 
subtitle_text.end = "" - subtitle_text.plain = subtitle_text.plain.replace("\n", " ") - subtitle_text.no_wrap = True - subtitle_text.expand_tabs() - subtitle_text.pad(1) - return subtitle_text - return None - - def __rich_console__( - self, console: "Console", options: "ConsoleOptions" - ) -> "RenderResult": - _padding = Padding.unpack(self.padding) - renderable = ( - Padding(self.renderable, _padding) if any(_padding) else self.renderable - ) - style = console.get_style(self.style) - border_style = style + console.get_style(self.border_style) - width = ( - options.max_width - if self.width is None - else min(options.max_width, self.width) - ) - - safe_box: bool = console.safe_box if self.safe_box is None else self.safe_box - box = self.box.substitute(options, safe=safe_box) - - title_text = self._title - if title_text is not None: - title_text.style = border_style - - child_width = ( - width - 2 - if self.expand - else console.measure( - renderable, options=options.update_width(width - 2) - ).maximum - ) - child_height = self.height or options.height or None - if child_height: - child_height -= 2 - if title_text is not None: - child_width = min( - options.max_width - 2, max(child_width, title_text.cell_len + 2) - ) - - width = child_width + 2 - child_options = options.update( - width=child_width, height=child_height, highlight=self.highlight - ) - lines = console.render_lines(renderable, child_options, style=style) - - line_start = Segment(box.mid_left, border_style) - line_end = Segment(f"{box.mid_right}", border_style) - new_line = Segment.line() - if title_text is None or width <= 4: - yield Segment(box.get_top([width - 2]), border_style) - else: - title_text.align(self.title_align, width - 4, character=box.top) - yield Segment(box.top_left + box.top, border_style) - yield from console.render(title_text) - yield Segment(box.top + box.top_right, border_style) - - yield new_line - for line in lines: - yield line_start - yield from line - yield line_end - yield new_line - - subtitle_text = self._subtitle - if subtitle_text is not None: - subtitle_text.style = border_style - - if subtitle_text is None or width <= 4: - yield Segment(box.get_bottom([width - 2]), border_style) - else: - subtitle_text.align(self.subtitle_align, width - 4, character=box.bottom) - yield Segment(box.bottom_left + box.bottom, border_style) - yield from console.render(subtitle_text) - yield Segment(box.bottom + box.bottom_right, border_style) - - yield new_line - - def __rich_measure__( - self, console: "Console", options: "ConsoleOptions" - ) -> "Measurement": - _title = self._title - _, right, _, left = Padding.unpack(self.padding) - padding = left + right - renderables = [self.renderable, _title] if _title else [self.renderable] - - if self.width is None: - width = ( - measure_renderables( - console, - options.update_width(options.max_width - padding - 2), - renderables, - ).maximum - + padding - + 2 - ) - else: - width = self.width - return Measurement(width, width) - - -if __name__ == "__main__": # pragma: no cover - from .console import Console - - c = Console() - - from .padding import Padding - from .box import ROUNDED, DOUBLE - - p = Panel( - "Hello, World!", - title="rich.Panel", - style="white on blue", - box=DOUBLE, - padding=1, - ) - - c.print() - c.print(p) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_vendor/more_itertools/recipes.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_vendor/more_itertools/recipes.py deleted file mode 
100644 index 521abd7c2ca633f90a5ba13a8060c5c3d0c32205..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_vendor/more_itertools/recipes.py +++ /dev/null @@ -1,620 +0,0 @@ -"""Imported from the recipes section of the itertools documentation. - -All functions taken from the recipes section of the itertools library docs -[1]_. -Some backward-compatible usability improvements have been made. - -.. [1] http://docs.python.org/library/itertools.html#recipes - -""" -import warnings -from collections import deque -from itertools import ( - chain, - combinations, - count, - cycle, - groupby, - islice, - repeat, - starmap, - tee, - zip_longest, -) -import operator -from random import randrange, sample, choice - -__all__ = [ - 'all_equal', - 'consume', - 'convolve', - 'dotproduct', - 'first_true', - 'flatten', - 'grouper', - 'iter_except', - 'ncycles', - 'nth', - 'nth_combination', - 'padnone', - 'pad_none', - 'pairwise', - 'partition', - 'powerset', - 'prepend', - 'quantify', - 'random_combination_with_replacement', - 'random_combination', - 'random_permutation', - 'random_product', - 'repeatfunc', - 'roundrobin', - 'tabulate', - 'tail', - 'take', - 'unique_everseen', - 'unique_justseen', -] - - -def take(n, iterable): - """Return first *n* items of the iterable as a list. - - >>> take(3, range(10)) - [0, 1, 2] - - If there are fewer than *n* items in the iterable, all of them are - returned. - - >>> take(10, range(3)) - [0, 1, 2] - - """ - return list(islice(iterable, n)) - - -def tabulate(function, start=0): - """Return an iterator over the results of ``func(start)``, - ``func(start + 1)``, ``func(start + 2)``... - - *func* should be a function that accepts one integer argument. - - If *start* is not specified it defaults to 0. It will be incremented each - time the iterator is advanced. - - >>> square = lambda x: x ** 2 - >>> iterator = tabulate(square, -3) - >>> take(4, iterator) - [9, 4, 1, 0] - - """ - return map(function, count(start)) - - -def tail(n, iterable): - """Return an iterator over the last *n* items of *iterable*. - - >>> t = tail(3, 'ABCDEFG') - >>> list(t) - ['E', 'F', 'G'] - - """ - return iter(deque(iterable, maxlen=n)) - - -def consume(iterator, n=None): - """Advance *iterable* by *n* steps. If *n* is ``None``, consume it - entirely. - - Efficiently exhausts an iterator without returning values. Defaults to - consuming the whole iterator, but an optional second argument may be - provided to limit consumption. - - >>> i = (x for x in range(10)) - >>> next(i) - 0 - >>> consume(i, 3) - >>> next(i) - 4 - >>> consume(i) - >>> next(i) - Traceback (most recent call last): - File "", line 1, in - StopIteration - - If the iterator has fewer items remaining than the provided limit, the - whole iterator will be consumed. - - >>> i = (x for x in range(3)) - >>> consume(i, 5) - >>> next(i) - Traceback (most recent call last): - File "", line 1, in - StopIteration - - """ - # Use functions that consume iterators at C speed. - if n is None: - # feed the entire iterator into a zero-length deque - deque(iterator, maxlen=0) - else: - # advance to the empty slice starting at position n - next(islice(iterator, n, n), None) - - -def nth(iterable, n, default=None): - """Returns the nth item or a default value. 
- - >>> l = range(10) - >>> nth(l, 3) - 3 - >>> nth(l, 20, "zebra") - 'zebra' - - """ - return next(islice(iterable, n, None), default) - - -def all_equal(iterable): - """ - Returns ``True`` if all the elements are equal to each other. - - >>> all_equal('aaaa') - True - >>> all_equal('aaab') - False - - """ - g = groupby(iterable) - return next(g, True) and not next(g, False) - - -def quantify(iterable, pred=bool): - """Return the how many times the predicate is true. - - >>> quantify([True, False, True]) - 2 - - """ - return sum(map(pred, iterable)) - - -def pad_none(iterable): - """Returns the sequence of elements and then returns ``None`` indefinitely. - - >>> take(5, pad_none(range(3))) - [0, 1, 2, None, None] - - Useful for emulating the behavior of the built-in :func:`map` function. - - See also :func:`padded`. - - """ - return chain(iterable, repeat(None)) - - -padnone = pad_none - - -def ncycles(iterable, n): - """Returns the sequence elements *n* times - - >>> list(ncycles(["a", "b"], 3)) - ['a', 'b', 'a', 'b', 'a', 'b'] - - """ - return chain.from_iterable(repeat(tuple(iterable), n)) - - -def dotproduct(vec1, vec2): - """Returns the dot product of the two iterables. - - >>> dotproduct([10, 10], [20, 20]) - 400 - - """ - return sum(map(operator.mul, vec1, vec2)) - - -def flatten(listOfLists): - """Return an iterator flattening one level of nesting in a list of lists. - - >>> list(flatten([[0, 1], [2, 3]])) - [0, 1, 2, 3] - - See also :func:`collapse`, which can flatten multiple levels of nesting. - - """ - return chain.from_iterable(listOfLists) - - -def repeatfunc(func, times=None, *args): - """Call *func* with *args* repeatedly, returning an iterable over the - results. - - If *times* is specified, the iterable will terminate after that many - repetitions: - - >>> from operator import add - >>> times = 4 - >>> args = 3, 5 - >>> list(repeatfunc(add, times, *args)) - [8, 8, 8, 8] - - If *times* is ``None`` the iterable will not terminate: - - >>> from random import randrange - >>> times = None - >>> args = 1, 11 - >>> take(6, repeatfunc(randrange, times, *args)) # doctest:+SKIP - [2, 4, 8, 1, 8, 4] - - """ - if times is None: - return starmap(func, repeat(args)) - return starmap(func, repeat(args, times)) - - -def _pairwise(iterable): - """Returns an iterator of paired items, overlapping, from the original - - >>> take(4, pairwise(count())) - [(0, 1), (1, 2), (2, 3), (3, 4)] - - On Python 3.10 and above, this is an alias for :func:`itertools.pairwise`. - - """ - a, b = tee(iterable) - next(b, None) - yield from zip(a, b) - - -try: - from itertools import pairwise as itertools_pairwise -except ImportError: - pairwise = _pairwise -else: - - def pairwise(iterable): - yield from itertools_pairwise(iterable) - - pairwise.__doc__ = _pairwise.__doc__ - - -def grouper(iterable, n, fillvalue=None): - """Collect data into fixed-length chunks or blocks. - - >>> list(grouper('ABCDEFG', 3, 'x')) - [('A', 'B', 'C'), ('D', 'E', 'F'), ('G', 'x', 'x')] - - """ - if isinstance(iterable, int): - warnings.warn( - "grouper expects iterable as first parameter", DeprecationWarning - ) - n, iterable = iterable, n - args = [iter(iterable)] * n - return zip_longest(fillvalue=fillvalue, *args) - - -def roundrobin(*iterables): - """Yields an item from each iterable, alternating between them. 
- - >>> list(roundrobin('ABC', 'D', 'EF')) - ['A', 'D', 'E', 'B', 'F', 'C'] - - This function produces the same output as :func:`interleave_longest`, but - may perform better for some inputs (in particular when the number of - iterables is small). - - """ - # Recipe credited to George Sakkis - pending = len(iterables) - nexts = cycle(iter(it).__next__ for it in iterables) - while pending: - try: - for next in nexts: - yield next() - except StopIteration: - pending -= 1 - nexts = cycle(islice(nexts, pending)) - - -def partition(pred, iterable): - """ - Returns a 2-tuple of iterables derived from the input iterable. - The first yields the items that have ``pred(item) == False``. - The second yields the items that have ``pred(item) == True``. - - >>> is_odd = lambda x: x % 2 != 0 - >>> iterable = range(10) - >>> even_items, odd_items = partition(is_odd, iterable) - >>> list(even_items), list(odd_items) - ([0, 2, 4, 6, 8], [1, 3, 5, 7, 9]) - - If *pred* is None, :func:`bool` is used. - - >>> iterable = [0, 1, False, True, '', ' '] - >>> false_items, true_items = partition(None, iterable) - >>> list(false_items), list(true_items) - ([0, False, ''], [1, True, ' ']) - - """ - if pred is None: - pred = bool - - evaluations = ((pred(x), x) for x in iterable) - t1, t2 = tee(evaluations) - return ( - (x for (cond, x) in t1 if not cond), - (x for (cond, x) in t2 if cond), - ) - - -def powerset(iterable): - """Yields all possible subsets of the iterable. - - >>> list(powerset([1, 2, 3])) - [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)] - - :func:`powerset` will operate on iterables that aren't :class:`set` - instances, so repeated elements in the input will produce repeated elements - in the output. Use :func:`unique_everseen` on the input to avoid generating - duplicates: - - >>> seq = [1, 1, 0] - >>> list(powerset(seq)) - [(), (1,), (1,), (0,), (1, 1), (1, 0), (1, 0), (1, 1, 0)] - >>> from more_itertools import unique_everseen - >>> list(powerset(unique_everseen(seq))) - [(), (1,), (0,), (1, 0)] - - """ - s = list(iterable) - return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)) - - -def unique_everseen(iterable, key=None): - """ - Yield unique elements, preserving order. - - >>> list(unique_everseen('AAAABBBCCDAABBB')) - ['A', 'B', 'C', 'D'] - >>> list(unique_everseen('ABBCcAD', str.lower)) - ['A', 'B', 'C', 'D'] - - Sequences with a mix of hashable and unhashable items can be used. - The function will be slower (i.e., `O(n^2)`) for unhashable items. - - Remember that ``list`` objects are unhashable - you can use the *key* - parameter to transform the list to a tuple (which is hashable) to - avoid a slowdown. - - >>> iterable = ([1, 2], [2, 3], [1, 2]) - >>> list(unique_everseen(iterable)) # Slow - [[1, 2], [2, 3]] - >>> list(unique_everseen(iterable, key=tuple)) # Faster - [[1, 2], [2, 3]] - - Similary, you may want to convert unhashable ``set`` objects with - ``key=frozenset``. For ``dict`` objects, - ``key=lambda x: frozenset(x.items())`` can be used. 
- - """ - seenset = set() - seenset_add = seenset.add - seenlist = [] - seenlist_add = seenlist.append - use_key = key is not None - - for element in iterable: - k = key(element) if use_key else element - try: - if k not in seenset: - seenset_add(k) - yield element - except TypeError: - if k not in seenlist: - seenlist_add(k) - yield element - - -def unique_justseen(iterable, key=None): - """Yields elements in order, ignoring serial duplicates - - >>> list(unique_justseen('AAAABBBCCDAABBB')) - ['A', 'B', 'C', 'D', 'A', 'B'] - >>> list(unique_justseen('ABBCcAD', str.lower)) - ['A', 'B', 'C', 'A', 'D'] - - """ - return map(next, map(operator.itemgetter(1), groupby(iterable, key))) - - -def iter_except(func, exception, first=None): - """Yields results from a function repeatedly until an exception is raised. - - Converts a call-until-exception interface to an iterator interface. - Like ``iter(func, sentinel)``, but uses an exception instead of a sentinel - to end the loop. - - >>> l = [0, 1, 2] - >>> list(iter_except(l.pop, IndexError)) - [2, 1, 0] - - """ - try: - if first is not None: - yield first() - while 1: - yield func() - except exception: - pass - - -def first_true(iterable, default=None, pred=None): - """ - Returns the first true value in the iterable. - - If no true value is found, returns *default* - - If *pred* is not None, returns the first item for which - ``pred(item) == True`` . - - >>> first_true(range(10)) - 1 - >>> first_true(range(10), pred=lambda x: x > 5) - 6 - >>> first_true(range(10), default='missing', pred=lambda x: x > 9) - 'missing' - - """ - return next(filter(pred, iterable), default) - - -def random_product(*args, repeat=1): - """Draw an item at random from each of the input iterables. - - >>> random_product('abc', range(4), 'XYZ') # doctest:+SKIP - ('c', 3, 'Z') - - If *repeat* is provided as a keyword argument, that many items will be - drawn from each iterable. - - >>> random_product('abcd', range(4), repeat=2) # doctest:+SKIP - ('a', 2, 'd', 3) - - This equivalent to taking a random selection from - ``itertools.product(*args, **kwarg)``. - - """ - pools = [tuple(pool) for pool in args] * repeat - return tuple(choice(pool) for pool in pools) - - -def random_permutation(iterable, r=None): - """Return a random *r* length permutation of the elements in *iterable*. - - If *r* is not specified or is ``None``, then *r* defaults to the length of - *iterable*. - - >>> random_permutation(range(5)) # doctest:+SKIP - (3, 4, 0, 1, 2) - - This equivalent to taking a random selection from - ``itertools.permutations(iterable, r)``. - - """ - pool = tuple(iterable) - r = len(pool) if r is None else r - return tuple(sample(pool, r)) - - -def random_combination(iterable, r): - """Return a random *r* length subsequence of the elements in *iterable*. - - >>> random_combination(range(5), 3) # doctest:+SKIP - (2, 3, 4) - - This equivalent to taking a random selection from - ``itertools.combinations(iterable, r)``. - - """ - pool = tuple(iterable) - n = len(pool) - indices = sorted(sample(range(n), r)) - return tuple(pool[i] for i in indices) - - -def random_combination_with_replacement(iterable, r): - """Return a random *r* length subsequence of elements in *iterable*, - allowing individual elements to be repeated. - - >>> random_combination_with_replacement(range(3), 5) # doctest:+SKIP - (0, 0, 1, 2, 2) - - This equivalent to taking a random selection from - ``itertools.combinations_with_replacement(iterable, r)``. 
- - """ - pool = tuple(iterable) - n = len(pool) - indices = sorted(randrange(n) for i in range(r)) - return tuple(pool[i] for i in indices) - - -def nth_combination(iterable, r, index): - """Equivalent to ``list(combinations(iterable, r))[index]``. - - The subsequences of *iterable* that are of length *r* can be ordered - lexicographically. :func:`nth_combination` computes the subsequence at - sort position *index* directly, without computing the previous - subsequences. - - >>> nth_combination(range(5), 3, 5) - (0, 3, 4) - - ``ValueError`` will be raised If *r* is negative or greater than the length - of *iterable*. - ``IndexError`` will be raised if the given *index* is invalid. - """ - pool = tuple(iterable) - n = len(pool) - if (r < 0) or (r > n): - raise ValueError - - c = 1 - k = min(r, n - r) - for i in range(1, k + 1): - c = c * (n - k + i) // i - - if index < 0: - index += c - - if (index < 0) or (index >= c): - raise IndexError - - result = [] - while r: - c, n, r = c * r // n, n - 1, r - 1 - while index >= c: - index -= c - c, n = c * (n - r) // n, n - 1 - result.append(pool[-1 - n]) - - return tuple(result) - - -def prepend(value, iterator): - """Yield *value*, followed by the elements in *iterator*. - - >>> value = '0' - >>> iterator = ['1', '2', '3'] - >>> list(prepend(value, iterator)) - ['0', '1', '2', '3'] - - To prepend multiple values, see :func:`itertools.chain` - or :func:`value_chain`. - - """ - return chain([value], iterator) - - -def convolve(signal, kernel): - """Convolve the iterable *signal* with the iterable *kernel*. - - >>> signal = (1, 2, 3, 4, 5) - >>> kernel = [3, 2, 1] - >>> list(convolve(signal, kernel)) - [3, 8, 14, 20, 26, 14, 5] - - Note: the input arguments are not interchangeable, as the *kernel* - is immediately consumed and stored. 
- - """ - kernel = tuple(kernel)[::-1] - n = len(kernel) - window = deque([0], maxlen=n) * n - for x in chain(signal, repeat(0, n - 1)): - window.append(x) - yield sum(map(operator.mul, kernel, window)) diff --git a/spaces/pyodide-demo/self-hosted/Pygments.js b/spaces/pyodide-demo/self-hosted/Pygments.js deleted file mode 100644 index 0a8ca54150eedcc2309c39d1d17fb5db79affa6d..0000000000000000000000000000000000000000 --- a/spaces/pyodide-demo/self-hosted/Pygments.js +++ /dev/null @@ -1 +0,0 @@ -var Module=typeof globalThis.__pyodide_module!=="undefined"?globalThis.__pyodide_module:{};if(!Module.expectedDataFileDownloads){Module.expectedDataFileDownloads=0}Module.expectedDataFileDownloads++;(function(){var loadPackage=function(metadata){var PACKAGE_PATH="";if(typeof window==="object"){PACKAGE_PATH=window["encodeURIComponent"](window.location.pathname.toString().substring(0,window.location.pathname.toString().lastIndexOf("/"))+"/")}else if(typeof process==="undefined"&&typeof location!=="undefined"){PACKAGE_PATH=encodeURIComponent(location.pathname.toString().substring(0,location.pathname.toString().lastIndexOf("/"))+"/")}var PACKAGE_NAME="Pygments.data";var REMOTE_PACKAGE_BASE="Pygments.data";if(typeof Module["locateFilePackage"]==="function"&&!Module["locateFile"]){Module["locateFile"]=Module["locateFilePackage"];err("warning: you defined Module.locateFilePackage, that has been renamed to Module.locateFile (using your locateFilePackage for now)")}var REMOTE_PACKAGE_NAME=Module["locateFile"]?Module["locateFile"](REMOTE_PACKAGE_BASE,""):REMOTE_PACKAGE_BASE;var REMOTE_PACKAGE_SIZE=metadata["remote_package_size"];var PACKAGE_UUID=metadata["package_uuid"];function fetchRemotePackage(packageName,packageSize,callback,errback){if(typeof process==="object"){require("fs").readFile(packageName,(function(err,contents){if(err){errback(err)}else{callback(contents.buffer)}}));return}var xhr=new XMLHttpRequest;xhr.open("GET",packageName,true);xhr.responseType="arraybuffer";xhr.onprogress=function(event){var url=packageName;var size=packageSize;if(event.total)size=event.total;if(event.loaded){if(!xhr.addedTotal){xhr.addedTotal=true;if(!Module.dataFileDownloads)Module.dataFileDownloads={};Module.dataFileDownloads[url]={loaded:event.loaded,total:size}}else{Module.dataFileDownloads[url].loaded=event.loaded}var total=0;var loaded=0;var num=0;for(var download in Module.dataFileDownloads){var data=Module.dataFileDownloads[download];total+=data.total;loaded+=data.loaded;num++}total=Math.ceil(total*Module.expectedDataFileDownloads/num);if(Module["setStatus"])Module["setStatus"]("Downloading data... 
("+loaded+"/"+total+")")}else if(!Module.dataFileDownloads){if(Module["setStatus"])Module["setStatus"]("Downloading data...")}};xhr.onerror=function(event){throw new Error("NetworkError for: "+packageName)};xhr.onload=function(event){if(xhr.status==200||xhr.status==304||xhr.status==206||xhr.status==0&&xhr.response){var packageData=xhr.response;callback(packageData)}else{throw new Error(xhr.statusText+" : "+xhr.responseURL)}};xhr.send(null)}function handleError(error){console.error("package error:",error)}var fetchedCallback=null;var fetched=Module["getPreloadedPackage"]?Module["getPreloadedPackage"](REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE):null;if(!fetched)fetchRemotePackage(REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE,(function(data){if(fetchedCallback){fetchedCallback(data);fetchedCallback=null}else{fetched=data}}),handleError);function runWithFS(){function assert(check,msg){if(!check)throw msg+(new Error).stack}Module["FS_createPath"]("/","lib",true,true);Module["FS_createPath"]("/lib","python3.9",true,true);Module["FS_createPath"]("/lib/python3.9","site-packages",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","pygments",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/pygments","filters",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/pygments","formatters",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/pygments","lexers",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/pygments","styles",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","Pygments-2.9.0-py3.9.egg-info",true,true);Module["FS_createPath"]("/","bin",true,true);function processPackageData(arrayBuffer){assert(arrayBuffer,"Loading data file failed.");assert(arrayBuffer instanceof ArrayBuffer,"bad input to processPackageData");var byteArray=new Uint8Array(arrayBuffer);var curr;var 
compressedData={data:null,cachedOffset:1896711,cachedIndexes:[-1,-1],cachedChunks:[null,null],offsets:[0,1554,2819,3799,4833,5812,6900,7960,8876,10145,11304,12580,13698,14870,16080,17412,18769,20051,21364,22512,23524,24751,25868,26795,27859,29008,30390,31233,32371,33289,34423,35802,36872,38105,39517,40581,41816,42726,43896,44787,45653,46885,48070,48911,50319,51851,53011,53976,55040,56495,58014,59418,60964,62099,63170,64640,66081,67417,68848,70025,71375,72840,74255,75843,77127,78562,80081,81177,82133,83275,84746,86270,87241,88293,89576,90837,91874,93145,94594,95869,97239,97888,98537,99162,99783,100530,100963,101401,101867,102404,103039,103623,104225,104884,105731,106900,108111,109247,110372,111632,112889,114207,115220,116360,117619,118856,120025,121288,122669,124005,125372,126627,128013,128914,129869,130934,131934,133016,134228,135168,136325,137330,138554,139820,140800,141831,142794,143992,145132,145961,146834,147934,149006,150053,151184,152546,153374,154432,155779,157123,158393,159341,160536,161414,162640,163587,164935,166005,167227,168486,169849,170801,172170,173443,174578,175780,177025,178205,179360,180590,181484,182492,183559,184761,185804,187111,188293,189386,190633,191765,192754,193749,194731,195773,196751,197761,198736,199737,200730,201769,202781,203785,205075,206278,207492,208694,209922,211234,212508,213794,214665,215476,216411,217508,218241,218670,219598,220655,221726,222802,223764,224697,225485,226316,227094,227908,228977,229947,230882,231754,232532,233463,234499,235563,236607,237689,238651,239708,240836,241803,242696,243729,244801,245702,246773,247779,248977,250156,251174,251948,253071,254127,255130,256092,257030,258179,259239,260348,261433,262650,263619,264472,265980,267561,268484,269689,271211,272607,274127,275350,276410,277503,278665,279658,280906,281830,282628,283460,284279,285087,285873,286668,287493,288259,289003,289819,290659,291391,292052,292805,293497,294073,294843,295693,296504,297309,298090,298931,299745,300613,301368,302188,303059,303791,304595,305334,306040,306752,307538,308293,308972,309749,310556,311311,312136,312965,313741,314511,315273,316024,316789,317564,318388,319177,319877,320670,321473,322289,323132,323947,324745,325552,326349,327059,327884,328722,329563,330412,331236,332062,333083,333834,334764,335699,336981,337841,338778,339771,340739,341635,342549,343561,344515,345485,346449,347378,348373,349185,350102,351043,351954,352887,353830,354812,355797,356781,357656,358696,359676,360649,361624,362558,363517,364658,365888,366813,367712,368702,369634,370429,371421,372364,373203,373976,374873,375857,377013,378236,379252,380142,381087,382100,383141,384157,385109,386073,387098,388259,389370,390292,391014,391937,392811,393652,394596,395481,396258,396946,397662,398495,399336,400151,400877,401733,402564,403466,404388,405289,406134,407040,407914,408777,409760,410697,411386,412215,412897,413584,414179,414958,415733,416421,417175,418009,418802,419475,420128,420894,421603,422210,422869,423424,423985,424722,425404,426137,426850,427612,428442,429189,430019,430747,431576,432370,433038,433705,434517,435214,435859,436629,437181,437753,438517,439215,440069,440863,441519,442195,442999,443733,444494,445111,445845,446624,447313,447986,448757,449561,450364,451175,451962,452721,453416,454098,454852,455584,456365,457042,457601,458367,459097,459855,460659,461483,462217,462985,463709,464849,466064,467055,468033,469073,470350,471397,472463,472981,473573,474635,475690,476742,477750,478775,479820,480695,481424,482136,482851,483530,484225,485156,486190,487205,488267,489204,490052,490827,491872,
492829,493919,495313,496328,497254,498318,499260,500161,501157,502147,503081,503975,504971,505942,507051,508246,509272,510148,510954,511839,512870,514152,515369,516544,517814,518993,520170,521423,522600,523770,524913,526014,527086,528298,529456,530488,531546,532582,533589,534626,535701,536767,537910,538943,540074,540844,541764,542604,543429,544305,545184,546047,546898,547816,548692,549553,550418,551298,552089,552848,553637,554399,555263,556079,556879,557695,558501,559380,560229,561010,561739,562621,563666,564895,566051,567288,568500,569674,570779,571924,573162,574256,575530,576682,577941,579125,580488,581561,582767,583898,585025,586109,587241,588352,589511,590592,591686,592874,593970,595180,596253,597608,598642,599558,600543,601699,602930,603790,604961,605751,606627,607560,608774,609882,611125,612274,613512,614697,615949,617153,618085,619359,620690,621795,623199,624414,625751,626785,627963,629186,630444,631602,632981,634119,635458,636863,637948,638998,640076,641201,642377,643442,644757,645842,646787,648234,649524,650286,651368,652112,653352,654457,655620,656485,657753,658945,660250,661403,662556,663726,664875,666235,667306,668559,669816,671234,672505,673688,674785,675963,677232,678321,679233,680217,681114,682147,683373,684617,685932,687235,688388,689591,690771,691781,693025,694333,695604,696908,698293,699494,700480,701574,702564,703765,704948,706092,707135,708279,709533,710303,711033,712057,713287,714634,715956,717041,718042,719369,720556,721843,722847,724026,725250,726385,727412,728332,729540,730700,731827,732986,734165,735287,736645,737766,738524,739689,740708,741613,742756,743730,744648,745459,746631,747773,748738,749690,750430,751522,752786,753977,755127,756400,757793,759133,760439,761764,762605,763415,764706,765891,766736,767711,768723,769952,771073,772061,772720,773927,775109,776289,777639,778376,779088,780050,781067,781950,782865,783752,785067,785849,786556,787606,788723,789760,790877,792123,793360,794603,795892,797241,798368,799217,800555,801735,802942,803933,804839,806272,807383,808446,809721,810768,811766,812988,813981,815209,816291,817477,818804,820096,821245,822550,823748,824888,826207,827386,828443,829743,831011,832105,833226,834402,835685,836926,838197,839520,840844,841867,843116,843860,845208,846445,847662,848813,849971,850993,851920,852873,853562,854699,855942,857152,858492,859731,861215,862537,863975,865122,866344,867484,868550,869506,870347,871092,871935,873178,874112,875104,875833,876631,878023,879106,880031,880766,881892,883145,884554,885701,886858,888307,889472,890578,891750,893017,894359,895787,897194,898641,899882,900984,902263,903451,904758,905469,906705,907831,908875,910246,910956,911640,912172,912674,913188,913577,914068,914555,915157,915836,916503,917303,918283,919409,920549,921850,922394,922950,923538,924331,925642,926591,928023,929193,930167,931471,932788,934048,935019,935769,936996,938169,939173,940169,941351,942635,943821,945227,946189,947363,948257,949442,950276,951597,952863,954074,955353,956660,957661,958556,959612,960804,961794,963106,964265,965108,966165,967322,968116,969241,970361,971316,972241,973748,974852,975519,976203,976908,977697,978575,979328,980181,980817,981399,982230,982856,983606,984788,986042,987159,988295,989511,990714,991809,992846,993775,994786,995796,996957,998118,998992,1000034,1001022,1001962,1003137,1004420,1005212,1006194,1007185,1007988,1009238,1010546,1011790,1013040,1014301,1015494,1016702,1018009,1019421,1020732,1021885,1023047,1024286,1025431,1026600,1027703,1028824,1029971,1030982,1031750,1032680,1033749,1034923,1036236,1037633,
1038936,1040403,1041698,1042848,1043880,1045039,1046150,1046976,1048100,1048915,1049869,1050644,1051407,1052378,1053221,1054308,1055399,1056061,1056808,1057617,1058951,1059916,1060836,1061788,1062531,1063239,1064103,1065151,1065897,1066782,1067596,1068461,1069275,1070632,1071945,1073079,1074226,1075461,1076767,1077859,1078881,1080007,1080875,1082134,1082990,1084265,1085342,1086422,1087377,1088371,1089560,1090479,1091689,1092650,1093877,1094514,1095666,1096819,1097739,1098524,1099385,1100356,1101091,1102129,1103105,1104232,1105766,1106811,1107704,1108716,1109662,1110774,1111965,1113063,1114293,1115265,1116256,1117293,1118180,1119149,1120297,1121433,1122552,1123446,1124483,1125630,1127058,1128351,1129695,1130911,1132135,1133189,1134375,1135322,1136177,1137472,1138634,1139960,1140920,1142130,1143106,1144142,1144918,1145472,1146137,1147493,1148606,1149271,1149761,1150226,1150942,1152058,1153347,1154567,1155710,1156905,1157742,1158760,1160124,1161345,1162342,1163405,1164561,1165714,1166665,1167669,1168559,1169625,1170693,1171364,1172502,1173424,1174446,1175415,1176484,1177508,1178552,1179546,1180520,1181454,1182495,1183378,1184269,1185276,1186271,1187342,1188407,1189421,1190504,1191802,1193053,1194297,1195420,1196699,1197828,1198924,1200011,1201135,1202255,1203364,1204491,1205614,1206712,1207782,1208820,1210015,1211006,1211897,1212929,1213952,1215256,1216525,1217456,1218475,1219800,1221008,1222129,1223174,1224082,1225539,1226891,1228222,1229399,1230481,1231802,1233067,1234285,1235164,1236154,1237246,1238397,1239399,1240516,1241406,1242657,1243682,1244650,1245836,1247121,1247915,1248424,1248972,1249533,1250077,1250574,1251030,1251656,1252225,1252747,1253243,1253777,1254358,1254848,1255374,1255910,1256454,1256969,1257470,1258016,1258533,1259027,1259558,1260097,1260598,1261135,1261676,1262296,1262889,1263301,1263789,1264380,1264937,1265415,1265973,1266445,1266965,1267475,1268003,1268524,1269048,1269595,1270130,1270627,1271155,1271696,1272214,1272802,1273329,1274291,1275278,1276385,1277750,1279034,1280306,1281576,1282823,1284067,1285288,1286519,1287759,1288778,1290019,1291332,1292533,1293599,1294633,1296071,1297103,1298120,1299073,1299836,1300741,1302006,1303307,1304456,1305757,1306847,1307791,1308770,1309955,1311071,1312382,1313452,1314788,1316021,1317295,1318596,1319787,1320988,1322232,1323433,1324807,1325885,1326687,1327491,1328615,1329805,1330978,1332078,1332815,1334037,1334948,1335902,1336400,1336887,1337428,1337959,1338408,1338896,1339859,1340489,1341233,1342203,1342923,1343707,1344949,1346419,1347310,1348303,1349393,1350447,1351550,1352764,1354031,1355119,1356131,1357171,1358306,1359403,1360499,1361597,1362595,1363694,1364814,1365509,1366455,1367196,1367974,1368872,1369702,1370533,1371417,1372218,1373125,1373956,1374909,1375804,1376619,1377402,1378204,1379124,1379850,1380728,1381711,1383066,1384254,1385242,1386398,1387790,1388664,1389857,1390937,1392295,1393394,1394086,1394864,1395864,1396898,1397938,1398795,1399957,1401122,1402131,1403405,1404609,1405777,1406978,1408186,1409143,1410192,1411373,1411931,1413089,1414010,1415044,1416217,1416821,1417672,1418704,1419849,1421087,1422345,1423429,1424559,1425644,1426831,1428040,1429248,1430314,1431075,1431945,1432710,1433709,1434759,1435728,1436864,1437871,1439060,1440287,1441614,1442453,1443551,1444719,1445628,1446643,1448048,1449393,1450814,1452158,1453573,1454821,1455908,1456759,1457863,1458851,1460016,1460954,1462059,1463423,1464703,1465837,1467229,1468402,1469389,1470625,1471836,1472972,1474275,1475511,1476763,1477905,1478743,1479700,1480996,148
2062,1483063,1484129,1485052,1486087,1487219,1487999,1489085,1490442,1491291,1492101,1493186,1494215,1495102,1496044,1496894,1497903,1498960,1499987,1500934,1501990,1503114,1504209,1505133,1506217,1507430,1508460,1509428,1510610,1511799,1513096,1514393,1515760,1517057,1518186,1519492,1520699,1521721,1523182,1524133,1525413,1526257,1527245,1528134,1529158,1530149,1531423,1532827,1534047,1534968,1535668,1536815,1537992,1538951,1539693,1540745,1541640,1542939,1544044,1545149,1546282,1547720,1548863,1549932,1551036,1551949,1552939,1553833,1554892,1555778,1557152,1558160,1559151,1559844,1561056,1562111,1563077,1564219,1565226,1566301,1567340,1568582,1569764,1570833,1571718,1572897,1574166,1575394,1576409,1577197,1578410,1579825,1580918,1582080,1583210,1584305,1585971,1587651,1589466,1591173,1592844,1594488,1595485,1596550,1597854,1598927,1599964,1600941,1601942,1602977,1604045,1605081,1606267,1607393,1608606,1609882,1611118,1612279,1613627,1614939,1616130,1617185,1617985,1619203,1620470,1621877,1622968,1624147,1625293,1626192,1627472,1628427,1629197,1630160,1631033,1631858,1632645,1633855,1635069,1636584,1638004,1639144,1640338,1641436,1642790,1643767,1644546,1645051,1645626,1646840,1647765,1648781,1649904,1650990,1652353,1653623,1654810,1656005,1657311,1658611,1659850,1660959,1662338,1663561,1664762,1665828,1666957,1668076,1669393,1670449,1671628,1672667,1673638,1674672,1675912,1677206,1678174,1678982,1680225,1681431,1682795,1683552,1684744,1685930,1687033,1688293,1688962,1690086,1691058,1692073,1692961,1693936,1694723,1695787,1696829,1697622,1698831,1699657,1700577,1701471,1702392,1703211,1704e3,1704754,1705484,1706699,1707753,1708952,1710001,1711043,1711872,1713053,1714103,1715071,1715820,1716721,1717781,1719005,1720147,1721310,1721937,1722613,1723294,1724260,1725779,1727396,1728795,1729484,1730468,1731603,1732836,1734175,1735430,1736474,1737600,1738660,1740041,1740773,1741934,1742981,1744414,1745704,1746761,1748082,1749275,1750459,1751403,1752286,1753266,1754366,1755619,1756749,1757995,1758703,1759737,1760638,1761652,1762834,1763773,1765095,1766227,1767462,1768694,1769841,1771067,1772290,1773431,1774567,1775706,1776664,1777981,1779213,1780488,1781822,1782649,1783829,1784977,1785904,1787091,1788218,1789420,1790748,1791918,1792888,1793922,1794624,1795451,1796038,1796799,1798024,1799126,1799944,1800390,1800854,1801300,1802240,1803160,1803896,1804796,1805603,1806263,1807091,1807810,1808673,1809614,1810828,1812189,1813200,1814401,1815494,1816807,1817890,1819107,1819991,1821169,1822169,1823333,1824653,1825922,1827053,1827976,1828641,1829675,1830635,1831593,1832579,1833490,1834559,1835615,1836695,1837827,1838958,1840026,1841289,1842397,1843505,1844512,1845432,1846370,1847081,1848122,1849183,1850454,1851280,1852020,1853185,1853923,1854680,1855791,1856903,1858112,1859242,1860345,1861229,1862328,1863472,1864951,1865778,1866547,1867614,1868697,1869592,1870776,1871830,1872848,1873491,1874169,1874918,1875758,1876378,1876971,1877596,1878214,1878831,1879444,1880031,1880583,1881141,1881772,1882343,1882950,1883547,1884184,1884802,1885419,1885899,1886512,1887163,1887791,1888447,1888706,1888968,1889244,1889509,1889773,1890166,1890939,1891641,1892428,1893111,1893766,1894056,1894628,1895385,1896415],sizes:[1554,1265,980,1034,979,1088,1060,916,1269,1159,1276,1118,1172,1210,1332,1357,1282,1313,1148,1012,1227,1117,927,1064,1149,1382,843,1138,918,1134,1379,1070,1233,1412,1064,1235,910,1170,891,866,1232,1185,841,1408,1532,1160,965,1064,1455,1519,1404,1546,1135,1071,1470,1441,1336,1431,1177,1350,1465,1415,1588,1284,14
35,1519,1096,956,1142,1471,1524,971,1052,1283,1261,1037,1271,1449,1275,1370,649,649,625,621,747,433,438,466,537,635,584,602,659,847,1169,1211,1136,1125,1260,1257,1318,1013,1140,1259,1237,1169,1263,1381,1336,1367,1255,1386,901,955,1065,1e3,1082,1212,940,1157,1005,1224,1266,980,1031,963,1198,1140,829,873,1100,1072,1047,1131,1362,828,1058,1347,1344,1270,948,1195,878,1226,947,1348,1070,1222,1259,1363,952,1369,1273,1135,1202,1245,1180,1155,1230,894,1008,1067,1202,1043,1307,1182,1093,1247,1132,989,995,982,1042,978,1010,975,1001,993,1039,1012,1004,1290,1203,1214,1202,1228,1312,1274,1286,871,811,935,1097,733,429,928,1057,1071,1076,962,933,788,831,778,814,1069,970,935,872,778,931,1036,1064,1044,1082,962,1057,1128,967,893,1033,1072,901,1071,1006,1198,1179,1018,774,1123,1056,1003,962,938,1149,1060,1109,1085,1217,969,853,1508,1581,923,1205,1522,1396,1520,1223,1060,1093,1162,993,1248,924,798,832,819,808,786,795,825,766,744,816,840,732,661,753,692,576,770,850,811,805,781,841,814,868,755,820,871,732,804,739,706,712,786,755,679,777,807,755,825,829,776,770,762,751,765,775,824,789,700,793,803,816,843,815,798,807,797,710,825,838,841,849,824,826,1021,751,930,935,1282,860,937,993,968,896,914,1012,954,970,964,929,995,812,917,941,911,933,943,982,985,984,875,1040,980,973,975,934,959,1141,1230,925,899,990,932,795,992,943,839,773,897,984,1156,1223,1016,890,945,1013,1041,1016,952,964,1025,1161,1111,922,722,923,874,841,944,885,777,688,716,833,841,815,726,856,831,902,922,901,845,906,874,863,983,937,689,829,682,687,595,779,775,688,754,834,793,673,653,766,709,607,659,555,561,737,682,733,713,762,830,747,830,728,829,794,668,667,812,697,645,770,552,572,764,698,854,794,656,676,804,734,761,617,734,779,689,673,771,804,803,811,787,759,695,682,754,732,781,677,559,766,730,758,804,824,734,768,724,1140,1215,991,978,1040,1277,1047,1066,518,592,1062,1055,1052,1008,1025,1045,875,729,712,715,679,695,931,1034,1015,1062,937,848,775,1045,957,1090,1394,1015,926,1064,942,901,996,990,934,894,996,971,1109,1195,1026,876,806,885,1031,1282,1217,1175,1270,1179,1177,1253,1177,1170,1143,1101,1072,1212,1158,1032,1058,1036,1007,1037,1075,1066,1143,1033,1131,770,920,840,825,876,879,863,851,918,876,861,865,880,791,759,789,762,864,816,800,816,806,879,849,781,729,882,1045,1229,1156,1237,1212,1174,1105,1145,1238,1094,1274,1152,1259,1184,1363,1073,1206,1131,1127,1084,1132,1111,1159,1081,1094,1188,1096,1210,1073,1355,1034,916,985,1156,1231,860,1171,790,876,933,1214,1108,1243,1149,1238,1185,1252,1204,932,1274,1331,1105,1404,1215,1337,1034,1178,1223,1258,1158,1379,1138,1339,1405,1085,1050,1078,1125,1176,1065,1315,1085,945,1447,1290,762,1082,744,1240,1105,1163,865,1268,1192,1305,1153,1153,1170,1149,1360,1071,1253,1257,1418,1271,1183,1097,1178,1269,1089,912,984,897,1033,1226,1244,1315,1303,1153,1203,1180,1010,1244,1308,1271,1304,1385,1201,986,1094,990,1201,1183,1144,1043,1144,1254,770,730,1024,1230,1347,1322,1085,1001,1327,1187,1287,1004,1179,1224,1135,1027,920,1208,1160,1127,1159,1179,1122,1358,1121,758,1165,1019,905,1143,974,918,811,1172,1142,965,952,740,1092,1264,1191,1150,1273,1393,1340,1306,1325,841,810,1291,1185,845,975,1012,1229,1121,988,659,1207,1182,1180,1350,737,712,962,1017,883,915,887,1315,782,707,1050,1117,1037,1117,1246,1237,1243,1289,1349,1127,849,1338,1180,1207,991,906,1433,1111,1063,1275,1047,998,1222,993,1228,1082,1186,1327,1292,1149,1305,1198,1140,1319,1179,1057,1300,1268,1094,1121,1176,1283,1241,1271,1323,1324,1023,1249,744,1348,1237,1217,1151,1158,1022,927,953,689,1137,1243,1210,1340,1239,1484,1322,1438,1147,1222,1140,1066,956,841,745,843,1
243,934,992,729,798,1392,1083,925,735,1126,1253,1409,1147,1157,1449,1165,1106,1172,1267,1342,1428,1407,1447,1241,1102,1279,1188,1307,711,1236,1126,1044,1371,710,684,532,502,514,389,491,487,602,679,667,800,980,1126,1140,1301,544,556,588,793,1311,949,1432,1170,974,1304,1317,1260,971,750,1227,1173,1004,996,1182,1284,1186,1406,962,1174,894,1185,834,1321,1266,1211,1279,1307,1001,895,1056,1192,990,1312,1159,843,1057,1157,794,1125,1120,955,925,1507,1104,667,684,705,789,878,753,853,636,582,831,626,750,1182,1254,1117,1136,1216,1203,1095,1037,929,1011,1010,1161,1161,874,1042,988,940,1175,1283,792,982,991,803,1250,1308,1244,1250,1261,1193,1208,1307,1412,1311,1153,1162,1239,1145,1169,1103,1121,1147,1011,768,930,1069,1174,1313,1397,1303,1467,1295,1150,1032,1159,1111,826,1124,815,954,775,763,971,843,1087,1091,662,747,809,1334,965,920,952,743,708,864,1048,746,885,814,865,814,1357,1313,1134,1147,1235,1306,1092,1022,1126,868,1259,856,1275,1077,1080,955,994,1189,919,1210,961,1227,637,1152,1153,920,785,861,971,735,1038,976,1127,1534,1045,893,1012,946,1112,1191,1098,1230,972,991,1037,887,969,1148,1136,1119,894,1037,1147,1428,1293,1344,1216,1224,1054,1186,947,855,1295,1162,1326,960,1210,976,1036,776,554,665,1356,1113,665,490,465,716,1116,1289,1220,1143,1195,837,1018,1364,1221,997,1063,1156,1153,951,1004,890,1066,1068,671,1138,922,1022,969,1069,1024,1044,994,974,934,1041,883,891,1007,995,1071,1065,1014,1083,1298,1251,1244,1123,1279,1129,1096,1087,1124,1120,1109,1127,1123,1098,1070,1038,1195,991,891,1032,1023,1304,1269,931,1019,1325,1208,1121,1045,908,1457,1352,1331,1177,1082,1321,1265,1218,879,990,1092,1151,1002,1117,890,1251,1025,968,1186,1285,794,509,548,561,544,497,456,626,569,522,496,534,581,490,526,536,544,515,501,546,517,494,531,539,501,537,541,620,593,412,488,591,557,478,558,472,520,510,528,521,524,547,535,497,528,541,518,588,527,962,987,1107,1365,1284,1272,1270,1247,1244,1221,1231,1240,1019,1241,1313,1201,1066,1034,1438,1032,1017,953,763,905,1265,1301,1149,1301,1090,944,979,1185,1116,1311,1070,1336,1233,1274,1301,1191,1201,1244,1201,1374,1078,802,804,1124,1190,1173,1100,737,1222,911,954,498,487,541,531,449,488,963,630,744,970,720,784,1242,1470,891,993,1090,1054,1103,1214,1267,1088,1012,1040,1135,1097,1096,1098,998,1099,1120,695,946,741,778,898,830,831,884,801,907,831,953,895,815,783,802,920,726,878,983,1355,1188,988,1156,1392,874,1193,1080,1358,1099,692,778,1e3,1034,1040,857,1162,1165,1009,1274,1204,1168,1201,1208,957,1049,1181,558,1158,921,1034,1173,604,851,1032,1145,1238,1258,1084,1130,1085,1187,1209,1208,1066,761,870,765,999,1050,969,1136,1007,1189,1227,1327,839,1098,1168,909,1015,1405,1345,1421,1344,1415,1248,1087,851,1104,988,1165,938,1105,1364,1280,1134,1392,1173,987,1236,1211,1136,1303,1236,1252,1142,838,957,1296,1066,1001,1066,923,1035,1132,780,1086,1357,849,810,1085,1029,887,942,850,1009,1057,1027,947,1056,1124,1095,924,1084,1213,1030,968,1182,1189,1297,1297,1367,1297,1129,1306,1207,1022,1461,951,1280,844,988,889,1024,991,1274,1404,1220,921,700,1147,1177,959,742,1052,895,1299,1105,1105,1133,1438,1143,1069,1104,913,990,894,1059,886,1374,1008,991,693,1212,1055,966,1142,1007,1075,1039,1242,1182,1069,885,1179,1269,1228,1015,788,1213,1415,1093,1162,1130,1095,1666,1680,1815,1707,1671,1644,997,1065,1304,1073,1037,977,1001,1035,1068,1036,1186,1126,1213,1276,1236,1161,1348,1312,1191,1055,800,1218,1267,1407,1091,1179,1146,899,1280,955,770,963,873,825,787,1210,1214,1515,1420,1140,1194,1098,1354,977,779,505,575,1214,925,1016,1123,1086,1363,1270,1187,1195,1306,1300,1239,1109,1379,1223,1201,1066,1129,1119,131
7,1056,1179,1039,971,1034,1240,1294,968,808,1243,1206,1364,757,1192,1186,1103,1260,669,1124,972,1015,888,975,787,1064,1042,793,1209,826,920,894,921,819,789,754,730,1215,1054,1199,1049,1042,829,1181,1050,968,749,901,1060,1224,1142,1163,627,676,681,966,1519,1617,1399,689,984,1135,1233,1339,1255,1044,1126,1060,1381,732,1161,1047,1433,1290,1057,1321,1193,1184,944,883,980,1100,1253,1130,1246,708,1034,901,1014,1182,939,1322,1132,1235,1232,1147,1226,1223,1141,1136,1139,958,1317,1232,1275,1334,827,1180,1148,927,1187,1127,1202,1328,1170,970,1034,702,827,587,761,1225,1102,818,446,464,446,940,920,736,900,807,660,828,719,863,941,1214,1361,1011,1201,1093,1313,1083,1217,884,1178,1e3,1164,1320,1269,1131,923,665,1034,960,958,986,911,1069,1056,1080,1132,1131,1068,1263,1108,1108,1007,920,938,711,1041,1061,1271,826,740,1165,738,757,1111,1112,1209,1130,1103,884,1099,1144,1479,827,769,1067,1083,895,1184,1054,1018,643,678,749,840,620,593,625,618,617,613,587,552,558,631,571,607,597,637,618,617,480,613,651,628,656,259,262,276,265,264,393,773,702,787,683,655,290,572,757,1030,296],successes:[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]};compressedData["data"]=byteArray;assert(typeof Module.LZ4==="object","LZ4 not present - was your app build with -s LZ4=1 ?");Module.LZ4.loadPackage({metadata:metadata,compressedData:compressedData},true);Module["removeRunDependency"]("datafile_Pygments.data")}Module["addRunDependency"]("datafile_Pygments.data");if(!Module.preloadResults)Module.preloadResults={};Module.preloadResults[PACKAGE_NAME]={fromCache:false};if(fetched){processPackageData(fetched);fetched=null}else{fetchedCallback=processPackageData}}if(Module["calledRun"]){runWithFS()}else{if(!Module["preRun"])Module["preRun"]=[];Module["preRun"].push(runWithFS)}};loadPackage({files:[{filename:"/lib/python3.9/site-packages/pygments/__init__.py",start:0,end:3012,audio:0},{filename:"/lib/python3.9/site-packages/pygments/__main__.py",start:3012,end:3360,audio:0},{filename:"/lib/python3.9/site-packages/pygments/cmdline.py",start:3360,end:24898,audio:0},{filename:"/lib/python3.9/site-packages/pygments/console.py",start:24898,end:26595,audio:0},{filename:"/lib/python3.9/site-packages/pygments/filter.py",start:26595,end:28533,audio:0},{filename:"/lib/python3.9/site-packages/pygments/formatter.py",start:28533,end:31426,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexer.py",start:31426,end:62903,audio:0},{filename:"/lib/python3.9/site-packages/pygments/modeline.py",start:62903,end:63889,audio:0},{filename:"/lib/python3.9/site-packages/pygments/plugin.py",start:63889,end:65599,audio:0},{filename:"/lib/python3.9/site-packages/pygments/regexopt.py",start:65599,end:68669,audio:0},{filename:"/lib/python3.9/site-packages/pygments/scanner.py",start:68669,end:71760,audio:0},{filename:"/lib/python3.9/site-packages/pygments/sphinxext.py",start:71760,end:76354,audio:0},{filename:"/lib/python3.9/site-packages/pygments/style.py",start:76354,end:82367,audio:0},{filename:"/lib/python3.9/site-packages/pygments/token.py",start:82367,end:88510,audio:0},{filename:"/lib/python3.9/site-packages/pygments/unistring.py",start:88510,end:151710,audio:0},{filename:"/lib/python3.9/site-packages/pygments/util.py",start:151710,end:160833,audio:0},{filename:"/lib/python3.9/site-packages/pygments/filters/__init__.py",start:160833,end:201077,audio:0},{filename:"/lib/python3.9/site-packages/pygments/formatters/__init__.py",start:201077,end
:206160,audio:0},{filename:"/lib/python3.9/site-packages/pygments/formatters/_mapping.py",start:206160,end:212496,audio:0},{filename:"/lib/python3.9/site-packages/pygments/formatters/bbcode.py",start:212496,end:215786,audio:0},{filename:"/lib/python3.9/site-packages/pygments/formatters/html.py",start:215786,end:250440,audio:0},{filename:"/lib/python3.9/site-packages/pygments/formatters/img.py",start:250440,end:272235,audio:0},{filename:"/lib/python3.9/site-packages/pygments/formatters/irc.py",start:272235,end:278080,audio:0},{filename:"/lib/python3.9/site-packages/pygments/formatters/latex.py",start:278080,end:296962,audio:0},{filename:"/lib/python3.9/site-packages/pygments/formatters/other.py",start:296962,end:301987,audio:0},{filename:"/lib/python3.9/site-packages/pygments/formatters/pangomarkup.py",start:301987,end:304175,audio:0},{filename:"/lib/python3.9/site-packages/pygments/formatters/rtf.py",start:304175,end:309165,audio:0},{filename:"/lib/python3.9/site-packages/pygments/formatters/svg.py",start:309165,end:316464,audio:0},{filename:"/lib/python3.9/site-packages/pygments/formatters/terminal.py",start:316464,end:321102,audio:0},{filename:"/lib/python3.9/site-packages/pygments/formatters/terminal256.py",start:321102,end:332829,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/__init__.py",start:332829,end:344088,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/_asy_builtins.py",start:344088,end:371375,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/_cl_builtins.py",start:371375,end:385369,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/_cocoa_builtins.py",start:385369,end:490552,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/_csound_builtins.py",start:490552,end:508409,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/_julia_builtins.py",start:508409,end:520074,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/_lasso_builtins.py",start:520074,end:654584,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/_lua_builtins.py",start:654584,end:662857,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/_mapping.py",start:662857,end:724452,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/_mql_builtins.py",start:724452,end:749165,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/_mysql_builtins.py",start:749165,end:773660,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/_openedge_builtins.py",start:773660,end:823058,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/_php_builtins.py",start:823058,end:977399,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/_postgres_builtins.py",start:977399,end:989583,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/_scilab_builtins.py",start:989583,end:1041960,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/_sourcemod_builtins.py",start:1041960,end:1069010,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/_stan_builtins.py",start:1069010,end:1079467,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/_stata_builtins.py",start:1079467,end:1106694,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/_tsql_builtins.py",start:1106694,end:1122154,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/_usd_builtins.py",start:1122154,end:1123812,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/_vbscript_builtins.py",start:1123812,end:1128037,audio:0},{fil
ename:"/lib/python3.9/site-packages/pygments/lexers/_vim_builtins.py",start:1128037,end:1185103,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/actionscript.py",start:1185103,end:1196556,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/agile.py",start:1196556,end:1197432,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/algebra.py",start:1197432,end:1205162,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/ambient.py",start:1205162,end:1207701,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/amdgpu.py",start:1207701,end:1209304,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/ampl.py",start:1209304,end:1213403,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/apdlexer.py",start:1213403,end:1240069,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/apl.py",start:1240069,end:1243455,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/archetype.py",start:1243455,end:1254562,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/arrow.py",start:1254562,end:1258062,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/asm.py",start:1258062,end:1297413,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/automation.py",start:1297413,end:1317216,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/bare.py",start:1317216,end:1320101,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/basic.py",start:1320101,end:1347692,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/bibtex.py",start:1347692,end:1352393,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/boa.py",start:1352393,end:1356339,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/business.py",start:1356339,end:1384320,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/c_cpp.py",start:1384320,end:1400275,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/c_like.py",start:1400275,end:1429355,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/capnproto.py",start:1429355,end:1431525,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/cddl.py",start:1431525,end:1436809,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/chapel.py",start:1436809,end:1441780,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/clean.py",start:1441780,end:1448141,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/compiled.py",start:1448141,end:1449502,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/configs.py",start:1449502,end:1487090,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/console.py",start:1487090,end:1491186,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/crystal.py",start:1491186,end:1506869,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/csound.py",start:1506869,end:1523690,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/css.py",start:1523690,end:1555373,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/d.py",start:1555373,end:1565047,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/dalvik.py",start:1565047,end:1569443,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/data.py",start:1569443,end:1593331,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/devicetree.py",start:1593331,end:1597306,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/diff.py",start:1597306,end:
1602167,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/dotnet.py",start:1602167,end:1630124,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/dsls.py",start:1630124,end:1665949,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/dylan.py",start:1665949,end:1676259,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/ecl.py",start:1676259,end:1682469,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/eiffel.py",start:1682469,end:1684922,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/elm.py",start:1684922,end:1687901,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/email.py",start:1687901,end:1692995,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/erlang.py",start:1692995,end:1711968,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/esoteric.py",start:1711968,end:1722117,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/ezhil.py",start:1722117,end:1725436,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/factor.py",start:1725436,end:1743272,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/fantom.py",start:1743272,end:1753230,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/felix.py",start:1753230,end:1762614,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/floscript.py",start:1762614,end:1765257,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/forth.py",start:1765257,end:1772375,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/fortran.py",start:1772375,end:1782494,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/foxpro.py",start:1782494,end:1808707,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/freefem.py",start:1808707,end:1835769,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/functional.py",start:1835769,end:1836443,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/futhark.py",start:1836443,end:1840133,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/gcodelexer.py",start:1840133,end:1840983,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/gdscript.py",start:1840983,end:1852105,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/go.py",start:1852105,end:1855788,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/grammar_notation.py",start:1855788,end:1863705,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/graph.py",start:1863705,end:1866437,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/graphics.py",start:1866437,end:1905375,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/graphviz.py",start:1905375,end:1907254,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/haskell.py",start:1907254,end:1939446,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/haxe.py",start:1939446,end:1970392,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/hdl.py",start:1970392,end:1992647,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/hexdump.py",start:1992647,end:1996130,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/html.py",start:1996130,end:2016161,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/idl.py",start:2016161,end:2031385,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/igor.py",start:2031385,end:2061970,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/inferno.py
",start:2061970,end:2065063,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/installers.py",start:2065063,end:2077905,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/int_fiction.py",start:2077905,end:2134551,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/iolang.py",start:2134551,end:2136438,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/j.py",start:2136438,end:2140942,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/javascript.py",start:2140942,end:2201674,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/julia.py",start:2201674,end:2212934,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/jvm.py",start:2212934,end:2284309,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/kuin.py",start:2284309,end:2294983,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/lisp.py",start:2294983,end:2436326,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/make.py",start:2436326,end:2443718,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/markup.py",start:2443718,end:2470455,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/math.py",start:2470455,end:2471131,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/matlab.py",start:2471131,end:2603503,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/mime.py",start:2603503,end:2611041,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/ml.py",start:2611041,end:2646335,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/modeling.py",start:2646335,end:2659719,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/modula2.py",start:2659719,end:2712785,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/monte.py",start:2712785,end:2719068,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/mosel.py",start:2719068,end:2728255,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/ncl.py",start:2728255,end:2792217,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/nimrod.py",start:2792217,end:2797337,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/nit.py",start:2797337,end:2800056,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/nix.py",start:2800056,end:2804063,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/oberon.py",start:2804063,end:2808274,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/objective.py",start:2808274,end:2831045,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/ooc.py",start:2831045,end:2834020,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/other.py",start:2834020,end:2835764,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/parasail.py",start:2835764,end:2838477,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/parsers.py",start:2838477,end:2864357,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/pascal.py",start:2864357,end:2896965,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/pawn.py",start:2896965,end:2905111,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/perl.py",start:2905111,end:2944190,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/php.py",start:2944190,end:2956737,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/pointless.py",start:2956737,end:2958705,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/pony.py",start
:2958705,end:2961949,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/praat.py",start:2961949,end:2974222,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/prolog.py",start:2974222,end:2986605,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/promql.py",start:2986605,end:2991344,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/python.py",start:2991344,end:3042732,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/qvt.py",start:3042732,end:3048804,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/r.py",start:3048804,end:3054981,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/rdf.py",start:3054981,end:3070771,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/rebol.py",start:3070771,end:3089371,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/resource.py",start:3089371,end:3092273,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/ride.py",start:3092273,end:3097323,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/rnc.py",start:3097323,end:3099289,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/roboconf.py",start:3099289,end:3101335,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/robotframework.py",start:3101335,end:3119747,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/ruby.py",start:3119747,end:3142400,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/rust.py",start:3142400,end:3150793,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/sas.py",start:3150793,end:3160218,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/scdoc.py",start:3160218,end:3162462,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/scripting.py",start:3162462,end:3232494,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/sgf.py",start:3232494,end:3234494,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/shell.py",start:3234494,end:3270251,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/sieve.py",start:3270251,end:3272540,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/slash.py",start:3272540,end:3281022,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/smalltalk.py",start:3281022,end:3288214,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/smv.py",start:3288214,end:3290983,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/snobol.py",start:3290983,end:3293715,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/solidity.py",start:3293715,end:3296886,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/special.py",start:3296886,end:3300030,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/sql.py",start:3300030,end:3333971,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/stata.py",start:3333971,end:3340385,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/supercollider.py",start:3340385,end:3344078,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/tcl.py",start:3344078,end:3349452,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/teal.py",start:3349452,end:3352971,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/templates.py",start:3352971,end:3424563,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/teraterm.py",start:3424563,end:3434448,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/testing.py",
start:3434448,end:3445173,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/text.py",start:3445173,end:3446179,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/textedit.py",start:3446179,end:3452259,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/textfmts.py",start:3452259,end:3467417,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/theorem.py",start:3467417,end:3486924,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/thingsdb.py",start:3486924,end:3491186,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/tnt.py",start:3491186,end:3501340,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/trafficscript.py",start:3501340,end:3502863,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/typoscript.py",start:3502863,end:3511063,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/unicon.py",start:3511063,end:3529575,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/urbi.py",start:3529575,end:3535613,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/usd.py",start:3535613,end:3539064,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/varnish.py",start:3539064,end:3546307,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/verification.py",start:3546307,end:3550215,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/web.py",start:3550215,end:3551109,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/webassembly.py",start:3551109,end:3556833,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/webidl.py",start:3556833,end:3567306,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/webmisc.py",start:3567306,end:3607302,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/whiley.py",start:3607302,end:3611289,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/x10.py",start:3611289,end:3613236,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/xorg.py",start:3613236,end:3614101,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/yang.py",start:3614101,end:3618624,audio:0},{filename:"/lib/python3.9/site-packages/pygments/lexers/zig.py",start:3618624,end:3622563,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/__init__.py",start:3622563,end:3625594,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/abap.py",start:3625594,end:3626321,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/algol.py",start:3626321,end:3628560,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/algol_nu.py",start:3628560,end:3630814,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/arduino.py",start:3630814,end:3635281,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/autumn.py",start:3635281,end:3637401,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/borland.py",start:3637401,end:3638939,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/bw.py",start:3638939,end:3640270,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/colorful.py",start:3640270,end:3643024,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/default.py",start:3643024,end:3645532,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/emacs.py",start:3645532,end:3647994,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/friendly.py",start:3647994,end:3650519,audio:0},{filename:"/lib/python3.9/site-packages/p
ygments/styles/fruity.py",start:3650519,end:3651793,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/gruvbox.py",start:3651793,end:3654976,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/igor.py",start:3654976,end:3655691,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/inkpot.py",start:3655691,end:3658014,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/lovelace.py",start:3658014,end:3661163,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/manni.py",start:3661163,end:3663513,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/material.py",start:3663513,end:3667619,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/monokai.py",start:3667619,end:3672681,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/murphy.py",start:3672681,end:3675408,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/native.py",start:3675408,end:3677356,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/paraiso_dark.py",start:3677356,end:3682973,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/paraiso_light.py",start:3682973,end:3688594,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/pastie.py",start:3688594,end:3691043,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/perldoc.py",start:3691043,end:3693194,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/rainbow_dash.py",start:3693194,end:3695650,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/rrt.py",start:3695650,end:3696478,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/sas.py",start:3696478,end:3697895,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/solarized.py",start:3697895,end:3701973,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/stata_dark.py",start:3701973,end:3703194,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/stata_light.py",start:3703194,end:3704444,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/tango.py",start:3704444,end:3711516,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/trac.py",start:3711516,end:3713425,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/vim.py",start:3713425,end:3715377,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/vs.py",start:3715377,end:3716426,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/xcode.py",start:3716426,end:3717903,audio:0},{filename:"/lib/python3.9/site-packages/pygments/styles/zenburn.py",start:3717903,end:3720080,audio:0},{filename:"/lib/python3.9/site-packages/Pygments-2.9.0-py3.9.egg-info/PKG-INFO",start:3720080,end:3721283,audio:0},{filename:"/lib/python3.9/site-packages/Pygments-2.9.0-py3.9.egg-info/SOURCES.txt",start:3721283,end:3804525,audio:0},{filename:"/lib/python3.9/site-packages/Pygments-2.9.0-py3.9.egg-info/dependency_links.txt",start:3804525,end:3804526,audio:0},{filename:"/lib/python3.9/site-packages/Pygments-2.9.0-py3.9.egg-info/entry_points.txt",start:3804526,end:3804580,audio:0},{filename:"/lib/python3.9/site-packages/Pygments-2.9.0-py3.9.egg-info/not-zip-safe",start:3804580,end:3804581,audio:0},{filename:"/lib/python3.9/site-packages/Pygments-2.9.0-py3.9.egg-info/top_level.txt",start:3804581,end:3804590,audio:0},{filename:"/bin/pygmentize",start:3804590,end:3805562,audio:0}],remote_package_size:1900807,package_uuid:"a0c02155-74a7-4e60-8cd1-01f404c31300"})})(); \ No newline at end of file diff --git 
a/spaces/qiantong-xu/toolbench-leaderboard/app.py b/spaces/qiantong-xu/toolbench-leaderboard/app.py deleted file mode 100644 index b7ecb371bee0f8835e952d061248ba2023430b22..0000000000000000000000000000000000000000 --- a/spaces/qiantong-xu/toolbench-leaderboard/app.py +++ /dev/null @@ -1,130 +0,0 @@ - -__all__ = ['block', 'make_clickable_model', 'make_clickable_user', 'get_submissions'] - -import gradio as gr -import pandas as pd - -COLUMN_NAMES = ["Model", "Tuned on ToolBench", "Avg.", "Open Weather", "The Cat API", "Home Search", "Trip Booking", "Google Sheets", "VirtualHome", "WebShop Long", "WebShop Short", "Tabletop"] -UNTUNED_MODEL_RESULTS = '''[gpt4](https://platform.openai.com/docs/models/gpt-4) & 93.0 & 96.0 & 97.0 & 96.7 & 62.9 & 23.0 / 23.5 & 0.0 & 0.0 & 81.0 \\ -[text-davinci-003](https://platform.openai.com/docs/models/gpt-3) & 99.0 & 98.0 & 97.0 & 89.2 & 62.9 & 31.0 / 25.1 & 0.0 & 0.0 & 66.7 \\ -[gpt-3.5-turbo](https://platform.openai.com/docs/models/gpt-3-5) & 90.0 & 92.0 & 80.0 & 85.8 & 51.4 & 20.0 / 18.9 & 0.0 & 1.8 & 33.3 \\ -[text-curie-001](https://platform.openai.com/docs/models/gpt-3) & 8.0 & 58.0 & 6.0 & 6.7 & 1.4 & 12.0 / 4.1 & 0.0 & 0.0 & 1.0 \\ -[Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b) & 90.0 & 84.39 & 83.0 & 71.67 & 58.57 & 35.0 / 24.74 & 1.53 & 30.45 & 45.4 \\ -[Llama-2-13b](https://huggingface.co/meta-llama/Llama-2-13b) & 85.0 & 77.0 & 68.0 & 53.33 & 30.0 & 33.0 / 21.67 & 0.6 & 31.67 & 23.81 \\ -[Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b) & 76.0 & 83.0 & 58.0 & 33.33 & 22.86 & 25.0 / 21.49 & 0.0 & 6.92 & 14.39 -[llama-65b](https://huggingface.co/huggyllama/llama-65b) & 90.0 & 80.0 & 84.0 & 65.8 & 32.9 & 32.0 / 20.3 & 0.0 & 41.2 & 30.5 \\ -[llama-30b](https://huggingface.co/huggyllama/llama-30b) & 78.0 & 84.0 & 66.0 & 45.0 & 37.1 & 27.0 / 21.7 & 0.0 & 30.6 & 34.3 \\ -[llama-13b](https://huggingface.co/huggyllama/llama-13b) & 70.0 & 74.0 & 45.0 & 35.8 & 5.7 & 28.0 / 18.9 & 0.0 & 27.6 & 17.1 \\ -[llama-13b-alpaca](https://huggingface.co/chavinlo/gpt4-x-alpaca) & 62.0 & 43.0 & 44.0 & 40.8 & 11.4 & 1.0 / 1.6 & 0.0 & 2.7 & 9.5 \\ -[starcoder](https://huggingface.co/bigcode/starcoder) & 91.0 & 84.0 & 82.0 & 51.7 & 48.0 & 23.0 / 19.4 & 2.6 & 0.0 & 21.9 \\ -[starcoderbase](https://huggingface.co/bigcode/starcoderbase) & 90.0 & 86.0 & 79.0 & 63.3 & 42.9 & 24.0 / 16.3 & 5.8 & 23.1 & 17.1 \\ -[codegen-16B-nl](https://huggingface.co/Salesforce/codegen-16B-nl) & 51.0 & 75.0 & 37.0 & 21.7 & 7.1 & 43.0 / 18.0 & 0.0 & 0.0 & 16.2 \\ -[codegen-16B-multi](https://huggingface.co/Salesforce/codegen-16B-multi) & 56.0 & 75.0 & 47.0 & 7.5 & 21.4 & 31.0 / 14.1 & 0.0 & 0.5 & 8.6 \\ -[codegen-16B-mono](https://huggingface.co/Salesforce/codegen-16B-mono) & 63.7 & 72.0 & 52.0 & 28.3 & 31.5 & 28.0 / 15.7 & 1.5 & 6.6 & 15.2 \\ -[bloomz](https://huggingface.co/bigscience/bloomz) & 58.0 & 85.0 & 36.0 & 22.5 & 14.3 & 9.0 / 4.9 & 0.0 & 1.0 & 1.0 \\ -[opt-iml-30b](https://huggingface.co/facebook/opt-iml-30b) & 44.0 & 48.0 & 5.0 & 3.3 & 2.9 & 13.0 / 8.3 & 0.0 & 0.0 & 1.0 \\ -[opt-30b](https://huggingface.co/facebook/opt-30b) & 46.0 & 35.0 & 2.0 & 3.3 & 8.6 & 24.0 / 11.7 & 0.0 & 0.0 & 1.0 \\ -[opt-iml-1.3b](https://huggingface.co/facebook/opt-iml-1.3b) & 20.0 & 28.0 & 0.0 & 0.0 & 4.3 & 13.0 / 3.1 & 0.0 & 0.0 & 1.0 \\ -[opt-1.3b](https://huggingface.co/facebook/opt-1.3b) & 18.0 & 30.0 & 0.0 & 0.0 & 1.4 & 31.0 / 9.7 & 0.0 & 0.0 & 1.0 \\ -[neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) & 55.0 & 69.0 & 27.0 & 10.8 & 18.6 & 28.0 / 15.3 & 0.0 & 8.8 & 6.7 \\ 
-[GPT-NeoXT-Chat-Base-20B](https://huggingface.co/togethercomputer/GPT-NeoXT-Chat-Base-20B) & 43.0 & 73.0 & 28.0 & 10.8 & 4.3 & 26.0 / 13.1 & 0.0 & 0.7 & 7.6 \\ -[pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) & 53.0 & 65.0 & 12.0 & 0.8 & 11.4 & 17.0 / 12.1 & 0.0 & 0.0 & 1.9 \\ -[dolly-v2-12b]() & 0.0 & 1.0 & 10.0 & 5.0 & 7.1 & 11.0 / 8.9 & 0.0 & 0.0 & 7.6 \\ -[pythia-6.9b](https://huggingface.co/EleutherAI/pythia-6.9b) & 41.0 & 72.0 & 8.0 & 7.5 & 4.3 & 29.0 / 14.0 & 0.0 & 0.0 & 8.6 \\ -[pythia-2.8b](https://huggingface.co/EleutherAI/pythia-2.8b) & 49.0 & 54.0 & 7.0 & 3.3 & 12.9 & 24.0 / 14.8 & 0.0 & 0.0 & 7.6 \\ -[pythia-1.4b](https://huggingface.co/EleutherAI/pythia-1.4b) & 37.0 & 48.0 & 4.0 & 5.0 & 10.0 & 22.0 / 10.7 & 0.0 & 5.2 & 7.6 \\ -[stablelm-base-alpha-7b](https://huggingface.co/stabilityai/stablelm-base-alpha-7b) & 22.0 & 47.0 & 0.0 & 0.0 & 4.3 & 28.0 / 10.3 & 0.0 & 0.0 & 2.9 \\ -[stablelm-tuned-alpha-7b](https://huggingface.co/stabilityai/stablelm-tuned-alpha-7b) & 23.0 & 38.0 & 0.0 & 0.0 & 1.4 & 26.0 / 7.3 & 0.0 & 0.0 & 3.8 \\ -[stablelm-base-alpha-3b](https://huggingface.co/stabilityai/stablelm-base-alpha-3b) & 6.0 & 28.0 & 0.0 & 0.0 & 1.4 & 29.0 / 5.3 & 0.0 & 0.0 & 1.0 \\ -[stablelm-tuned-alpha-3b](https://huggingface.co/stabilityai/stablelm-tuned-alpha-3b) & 14.0 & 31.0 & 0.0 & 0.8 & 0.0 & 8.0 / 5.6 & 0.0 & 0.0 & 1.0 \\''' -TUNED_MODEL_RESULTS='''[llama-30b-toolbench](https://huggingface.co/sambanovasystems/LLaMA-30b-toolbench) & 100.0 & 94.0 & 87.0 & 85.8 & 2.9 & 16.0/ 24.3& 0.0 & 0.0 & 7.5 \\ -[starcoder-toolbench](https://huggingface.co/sambanovasystems/starcoder-toolbench) & 99.0 & 97.0 & 83.0 & 80.8 & 21.2 & 31.0/ 18.4& 0.0 & 0.0 & 13.9 \\ -[codegen-16B-mono-toolbench](https://huggingface.co/sambanovasystems/codegen-16B-mono-toolbench) & 97.7 & 99.0 & 82.0 & 77.5 & 19.8 & 29.0/ 17.2& 0.0 & 3.5 & 16.2 \\''' - - -def parse_line(line): - model_results = line.replace(" ", "").strip("\\").split("&") - for i in range(1, len(model_results)): - if i == 6: - res = model_results[6].split('/')[-1].strip() - else: - res = model_results[i] - model_results[i] = float(res) - return model_results - -def get_baseline_df(): - df_data = [] - - lines = UNTUNED_MODEL_RESULTS.split("\n") - for line in lines: - model_results = parse_line(line) - assert len(model_results) == 10 - avg = sum(model_results[1:-3] + model_results[-2:]) / 8 - model_results.insert(1, avg) - model_results.insert(1, "False") - df_data.append(model_results) - lines = TUNED_MODEL_RESULTS.split("\n") - for line in lines: - model_results = parse_line(line) - assert len(model_results) == 10 - avg = sum(model_results[1:-3] + model_results[-2:]) / 8 - model_results.insert(1, avg) - model_results.insert(1, "True") - df_data.append(model_results) - - print(len(df_data)) - df = pd.DataFrame(df_data, columns=COLUMN_NAMES).round(1) - return df - - -CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results" -CITATION_BUTTON_TEXT = r"""@misc{xu2023tool, - title={On the Tool Manipulation Capability of Open-source Large Language Models}, - author={Qiantong Xu and Fenglu Hong and Bo Li and Changran Hu and Zhengyu Chen and Jian Zhang}, - year={2023}, - eprint={2305.16504}, - archivePrefix={arXiv}, - primaryClass={cs.CL} -} -}""" - - -block = gr.Blocks() - -with block: - gr.Markdown( - """# Toolbench Leaderboard - - Welcome to the leaderboard of the ToolBench! 
🏆 - This is a community where participants create language models and action generation algorithms to generate API function calls based goals described in natural lanugage! - Please refer to [our paper](https://arxiv.org/abs/2305.16504) for more details and join our [Discord](https://discord.com/invite/JehFG5HXKb) for further discussion. - The [evaluation suite](https://github.com/sambanova/toolbench/) is now alive on Github. - """ - ) - with gr.Row(): - with gr.Accordion("Citation", open=False): - citation_button = gr.Textbox( - value=CITATION_BUTTON_TEXT, - label=CITATION_BUTTON_LABEL, - elem_id="citation-button", - ).style(show_copy_button=True) - - - gr.Markdown( - """In the table below, we summarize the 3-shot performance of all the models. - We use success rate as the primary evaluation metric for most tasks, except that we report rewards on WebShop, and the Longest Common Subsequence (LCS) on VirtualHome, following the original metrics proposed by the respective authors. - """ - ) - with gr.Row(): - data = gr.components.Dataframe( - type="pandas", datatype=["markdown", "markdown", "number", "number", "number", "number", "number", "number", "number", "number", "number", "number"] - ) - with gr.Row(): - data_run = gr.Button("Refresh") - data_run.click( - get_baseline_df, outputs=data - ) - - block.load(get_baseline_df, outputs=data) - -block.launch() \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Advanced SystemCare Pro 13.2.0.222 Crack Torrent [2020] Free Download 2021.md b/spaces/quidiaMuxgu/Expedit-SAM/Advanced SystemCare Pro 13.2.0.222 Crack Torrent [2020] Free Download 2021.md deleted file mode 100644 index ea22d93b5b8c53dbaaf16d52d8191fe88eeb7735..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Advanced SystemCare Pro 13.2.0.222 Crack Torrent [2020] Free Download 2021.md +++ /dev/null @@ -1,58 +0,0 @@ -

        Advanced SystemCare Pro 13.2.0.222 Crack Torrent [2020] Free Download


        DOWNLOADhttps://geags.com/2uCqWL



        - -A: - -It could be caused by the fact that your server uses different IPv4 addresses for loopback and forwarded interfaces. - -When packets are forwarded from one interface to another, the IP address of the original packet is removed from the packet. - -The address of the packet loopback interface is one less than the address of the interface it was forwarded from. That's why packets from the loopback interface are looped back. - -This is not true of packets from forwarded interfaces. - -A standard IP stack treats the original packet as sent by the MAC address of the original interface. - -So the packets of the forwarded interface have the IP address of the forwarding interface. - -This is true for all IP protocols except Ethernet. - -So to make sure that the packets from forwarded interfaces get the right IP address you need to use ethernet addresses. - -The route command may not reflect the fact that the packets are looped back. - -For instance if the loopback interface receives a packet from the MAC address 192.168.1.111 and the forwarding interface receives the packet from MAC address 192.168.1.12 then the route command will show that packets from the forwarding interface go to the loopback interface. - -Of course this is not the case. - -# Generated by Django 2.2.3 on 2020-03-24 12:32 - -from django.db import migrations, models - -import warnings - -import django_extensions.db.fields - -class Migration(migrations.Migration): - - dependencies = [ - - ('search', '0011_auto_20200203_1017'), - - ] - - operations = [ - - migrations.CreateModel( - - name='Token', - - fields=[ - - ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), - - ('name', models.CharField(max_length=255, verbose_name='name')), - - ('domain', models 4fefd39f24
        -
        -
        -

        diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Caio Terra Modern Jiu Jitsu Full !LINK! Set Torrent.md b/spaces/quidiaMuxgu/Expedit-SAM/Caio Terra Modern Jiu Jitsu Full !LINK! Set Torrent.md deleted file mode 100644 index 27b52c5cd03e458af5d82fbc9f3639c619fdf6e6..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Caio Terra Modern Jiu Jitsu Full !LINK! Set Torrent.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Caio Terra Modern Jiu Jitsu Full Set Torrent


        Downloadhttps://geags.com/2uCsGG



        -
-Welcome to /R/BJJ. Brazilian Jiu-Jitsu (BJJ) is a martial art that focuses on grappling and ground fighting. It is for discussing BJJ training, techniques, news, ... 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Child Woohoo Modl NEW!.md b/spaces/quidiaMuxgu/Expedit-SAM/Child Woohoo Modl NEW!.md deleted file mode 100644 index 312e31e29bf533f03388c839731b4c086dc81c1e..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Child Woohoo Modl NEW!.md +++ /dev/null @@ -1,10 +0,0 @@ - -

That being said, the downside of this was that they couldn't afford to pay maternity leave unless I agreed to stay on top of it after my child was born. Which I did. But it wasn't an option for me.

        -

Another major finding that I have personally experienced recently was that many institutions will not allow new parents to take unpaid leave. I am a first-year postdoc and recently gave birth to my second child. When I discussed my upcoming leave with my employer, I was told that after my sick and vacation leave ran out I would have to return to work and that my employment would be terminated if I didn't.

        -

        Child Woohoo Modl


        Download Ziphttps://geags.com/2uCrcb



        -

        That being said, the midwife practice I saw during pregnancy and birth is not at all what most people think of when they hear midwife. I went to a group practice and saw many different members, most of whom were wonderful, thoughtful, caring, and importantly, non-judgmental women. I felt no pressure to breastfeed or have a drug-free childbirth, and they would deliver me in the same hospital as women seeing the OB/GYN doctors. It was the best of both worlds!

        -

The difference between a child of woohoo and a parent of a woohoo is that a parent is more direct and the relationship has a specific meaning. In the same way, a child of a woohoo is more indirect and the relationship has a less specific meaning.

        -

This module is designed to add customizable behaviors to the kstream by using a kstream's kstream.to(string) method. This adds a new key (the API key that will define the child's behavior) and a child config file.

        -

        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Igazeti Ya Leta 2010 Amategeko Yumuhanda Pdf [BETTER] Free.md b/spaces/quidiaMuxgu/Expedit-SAM/Igazeti Ya Leta 2010 Amategeko Yumuhanda Pdf [BETTER] Free.md deleted file mode 100644 index d1b0ac2be3131c52b2c4fb3130198d3bb19c7287..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Igazeti Ya Leta 2010 Amategeko Yumuhanda Pdf [BETTER] Free.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Igazeti Ya Leta 2010 Amategeko Yumuhanda Pdf Free


        Download >>>>> https://geags.com/2uCsBR



- -pdf ile, amategekō ya leta iritante. Afrika Utendi Amategekō ākute amategekō tuũe amategekō tuũe ikua iwe abesimu ana abesimu ana abesimu ikua makaburi abesimu. Mpya makamu kokoa abesimu. (PDF) Amategekō ākute amategekō tuũe ikua iwe abesimu ana abesimu ana abesimu ikua makaburi abesimu. Mpya makamu kokoa abesimu. (XLSX) Oku fiche vekutendwa fiche vekutendwa 4fefd39f24
        -
        -
        -

        diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Mmd Model Download Polygon Movie 31.md b/spaces/quidiaMuxgu/Expedit-SAM/Mmd Model Download Polygon Movie 31.md deleted file mode 100644 index b3670937ed235b10f7889b3b4ebc48b86e067826..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Mmd Model Download Polygon Movie 31.md +++ /dev/null @@ -1,15 +0,0 @@ - -

        Mmd Model Download Polygon Movie 31: A New Way to Enjoy 3D Animation

        -

Mmd Model Download Polygon Movie 31 is a collection of 31 high-quality 3D models that can be used with MikuMikuDance (MMD), a free animation program that allows users to create and share their own videos featuring virtual singers and characters. The models are based on popular anime and video game characters, such as Hatsune Miku, Naruto, Sonic the Hedgehog, and more. They are designed with polygonal graphics, which give them a retro and stylized look that contrasts with the realistic backgrounds and effects of MMD.

        -

        Mmd Model Download Polygon Movie 31


        Download Zip ✫✫✫ https://geags.com/2uCqNt



        -

        The models are easy to download and install, and they come with various accessories, expressions, and poses that can be customized by the user. They are compatible with most MMD versions and plugins, and they can be used for both personal and commercial purposes, as long as proper credit is given to the original creators. The models are also suitable for other 3D animation software, such as Blender, Unity, and Unreal Engine.

        -

        Mmd Model Download Polygon Movie 31 is a great resource for MMD enthusiasts who want to expand their collection of models and experiment with different styles and genres. It is also a fun way to enjoy 3D animation and express one's creativity and imagination. The models can be downloaded from the official website or from various online platforms, such as DeviantArt, YouTube, and Nico Nico Douga.

        - -

        Some of the models in Mmd Model Download Polygon Movie 31 are inspired by famous movies and shows, such as Star Wars, Harry Potter, and The Simpsons. They can be used to recreate scenes from these media or to create original stories and parodies. The models are also diverse in terms of gender, age, ethnicity, and personality, which allows users to create diverse and inclusive videos that appeal to different audiences.

        -

        Mmd Model Download Polygon Movie 31 is not only a collection of models, but also a community of MMD fans who share their works and feedback online. Users can join the official Discord server or follow the official Twitter account to interact with other users and get updates on new models and features. They can also participate in contests and events that are organized by the creators and sponsors of the project. The community is friendly and supportive, and welcomes new members who want to learn more about MMD and 3D animation.

        - -

        Mmd Model Download Polygon Movie 31 is not only for MMD users, but also for anyone who is interested in 3D animation and polygonal graphics. The models can be used as a learning tool to understand the basics of 3D modeling, rigging, texturing, and animation. They can also be used as a source of inspiration and creativity for artists and designers who want to create their own polygonal models and characters. The models are versatile and adaptable, and can be used for various purposes and projects.

        -

        -

        Mmd Model Download Polygon Movie 31 is a unique and innovative project that combines the nostalgia of polygonal graphics with the modernity of MMD and 3D animation. It is a tribute to the history and evolution of video games and animation, as well as a celebration of the diversity and creativity of MMD and its community. It is a project that aims to bring joy and fun to both creators and viewers, and to showcase the potential and possibilities of MMD and 3D animation.

        d5da3c52bf
        -
        -
        \ No newline at end of file diff --git a/spaces/radames/UserControllableLT-Latent-Transformer/expansion/utils/logger.py b/spaces/radames/UserControllableLT-Latent-Transformer/expansion/utils/logger.py deleted file mode 100644 index 9bd31e79407f8e7e94d236c9b0e620403d1e3d85..0000000000000000000000000000000000000000 --- a/spaces/radames/UserControllableLT-Latent-Transformer/expansion/utils/logger.py +++ /dev/null @@ -1,113 +0,0 @@ -""" -File: logger.py -Modified by: Senthil Purushwalkam -Code referenced from https://gist.github.com/gyglim/1f8dfb1b5c82627ae3efcfbbadb9f514 -Email: spurushwandrewcmuedu -Github: https://github.com/senthilps8 -Description: -""" -import pdb -import tensorflow as tf -from torch.autograd import Variable -import numpy as np -import scipy.misc -import os -try: - from StringIO import StringIO # Python 2.7 -except ImportError: - from io import BytesIO # Python 3.x - - -class Logger(object): - - def __init__(self, log_dir, name=None): - """Create a summary writer logging to log_dir.""" - if name is None: - name = 'temp' - self.name = name - if name is not None: - try: - os.makedirs(os.path.join(log_dir, name)) - except: - pass - self.writer = tf.summary.FileWriter(os.path.join(log_dir, name), - filename_suffix=name) - else: - self.writer = tf.summary.FileWriter(log_dir, filename_suffix=name) - - def scalar_summary(self, tag, value, step): - """Log a scalar variable.""" - summary = tf.Summary(value=[tf.Summary.Value(tag=tag, simple_value=value)]) - self.writer.add_summary(summary, step) - - def image_summary(self, tag, images, step): - """Log a list of images.""" - - img_summaries = [] - for i, img in enumerate(images): - # Write the image to a string - try: - s = StringIO() - except: - s = BytesIO() - scipy.misc.toimage(img).save(s, format="png") - - # Create an Image object - img_sum = tf.Summary.Image(encoded_image_string=s.getvalue(), - height=img.shape[0], - width=img.shape[1]) - # Create a Summary value - img_summaries.append(tf.Summary.Value(tag='%s/%d' % (tag, i), image=img_sum)) - - # Create and write Summary - summary = tf.Summary(value=img_summaries) - self.writer.add_summary(summary, step) - - def histo_summary(self, tag, values, step, bins=1000): - """Log a histogram of the tensor of values.""" - - # Create a histogram using numpy - counts, bin_edges = np.histogram(values, bins=bins) - - # Fill the fields of the histogram proto - hist = tf.HistogramProto() - hist.min = float(np.min(values)) - hist.max = float(np.max(values)) - hist.num = int(np.prod(values.shape)) - hist.sum = float(np.sum(values)) - hist.sum_squares = float(np.sum(values**2)) - - # Drop the start of the first bin - bin_edges = bin_edges[1:] - - # Add bin edges and counts - for edge in bin_edges: - hist.bucket_limit.append(edge) - for c in counts: - hist.bucket.append(c) - - # Create and write Summary - summary = tf.Summary(value=[tf.Summary.Value(tag=tag, histo=hist)]) - self.writer.add_summary(summary, step) - self.writer.flush() - - def to_np(self, x): - return x.data.cpu().numpy() - - def to_var(self, x): - if torch.cuda.is_available(): - x = x.cuda() - return Variable(x) - - def model_param_histo_summary(self, model, step): - """log histogram summary of model's parameters - and parameter gradients - """ - for tag, value in model.named_parameters(): - if value.grad is None: - continue - tag = tag.replace('.', '/') - tag = self.name+'/'+tag - self.histo_summary(tag, self.to_np(value), step) - self.histo_summary(tag+'/grad', self.to_np(value.grad), step) - diff --git 
a/spaces/raedeXanto/academic-chatgpt-beta/BEST Download Mac Os Mojave 10.14 Dmg.md b/spaces/raedeXanto/academic-chatgpt-beta/BEST Download Mac Os Mojave 10.14 Dmg.md deleted file mode 100644 index ad2b6e51a630e62f48149d25bdfb3ea53bd693d0..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/BEST Download Mac Os Mojave 10.14 Dmg.md +++ /dev/null @@ -1,113 +0,0 @@ - -

        How to Download and Install Mac OS Mojave 10.14 DMG

        -

        Mac OS Mojave 10.14 is the latest version of the Mac operating system that was released in September 2018. It offers many new features and enhancements that make your Mac more powerful, secure, and enjoyable to use.

        -

        Download Mac Os Mojave 10.14 Dmg


        DOWNLOAD >>> https://tinourl.com/2uL4Hj



        -

        In this article, we will show you how to download and install Mac OS Mojave 10.14 DMG on your Mac. We will also explain what Mac OS Mojave 10.14 is and why you should upgrade, how to check if your Mac is compatible, how to back up your Mac before upgrading, and how to fix common problems with Mac OS Mojave 10.14.

        -

        Let's get started!

        -

        What is Mac OS Mojave 10.14 and why you should upgrade

        -

        Mac OS Mojave 10.14 is the fifteenth major release of the Mac operating system. It introduces many new features and improvements that make your Mac more user-friendly, productive, and secure.

        -

        Some of the highlights of Mac OS Mojave 10.14 are:

        -

        -

        Dark Mode

        -

        Dark Mode is a system-wide option that changes the appearance of your Mac interface from light to dark. It gives your Mac a sleek and elegant look that is easier on your eyes, especially in low-light environments.

        -

        To enable Dark Mode, go to System Preferences > General and select Dark under Appearance.
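If you like to automate this sort of setting, the same switch can also be flipped from a script. The snippet below is a minimal sketch rather than an official Apple API: it simply asks System Events (via osascript) to change the appearance preference, and it assumes the app running it has been granted Automation access to System Events (Mojave prompts you the first time).

```python
# Minimal sketch: toggle Mojave's Dark Mode by driving the System Events
# appearance preference with osascript. Requires Automation permission.
import subprocess

def set_dark_mode(enabled: bool) -> None:
    script = (
        'tell application "System Events" to tell appearance preferences '
        f'to set dark mode to {"true" if enabled else "false"}'
    )
    subprocess.run(["osascript", "-e", script], check=True)

set_dark_mode(True)   # switch to Dark Mode; pass False to go back to Light
```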

        -

        Desktop Stacks

        -

        Desktop Stacks is a feature that automatically organizes your desktop files into neat groups based on file type, date, or tag. It helps you keep your desktop clutter-free and find your files faster.

        -

        To use Desktop Stacks, right-click on your desktop and select Use Stacks. You can also customize how your stacks are sorted and grouped by right-clicking on your desktop and selecting Group Stacks By.

        -

        Finder Enhancements

        -

        Finder has been improved with new view modes, quick actions, and metadata. You can now use the Gallery View to browse your files with large previews and access useful information and actions in the sidebar. You can also perform common tasks, such as rotating, cropping, marking up, or sharing files, without opening any apps, using the Quick Actions in the Preview pane. You can also see more details about your files, such as EXIF data, keywords, tags, etc., in the new Metadata section in the Preview pane.

        -

        To switch to Gallery View, click on the fourth icon in the Finder toolbar. To access Quick Actions and Metadata, click on the Show Preview button in the Finder toolbar or press Command-Shift-P.

        -

        Screenshot Tools

        -

        Screenshot Tools is a new utility that lets you take different types of screenshots on your Mac. You can capture the entire screen, a window, a selected portion, or a video of your screen. You can also edit, annotate, and share your screenshots from the same utility.

        -

        To launch Screenshot Tools, press Command-Shift-5. You will see a toolbar at the bottom of your screen with various options. You can also use keyboard shortcuts to take screenshots directly. For example, Command-Shift-3 captures the entire screen, Command-Shift-4 captures a selected portion, etc.
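macOS also ships a command-line tool, screencapture, that covers much of the same ground, so you can take screenshots from a script as well. The snippet below is only an illustration: the Desktop output paths are assumptions, and it sticks to the long-standing -x (silent, full screen) and -i (interactive selection) flags.

```python
# Sketch: scripted screenshots with the built-in screencapture tool.
import subprocess
from pathlib import Path

desktop = Path.home() / "Desktop"   # assumed output location

# Full-screen capture without the shutter sound (roughly Command-Shift-3).
subprocess.run(["screencapture", "-x", str(desktop / "full-screen.png")], check=True)

# Interactive selection, like Command-Shift-4.
subprocess.run(["screencapture", "-i", str(desktop / "selection.png")], check=True)
```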

        -

        Continuity Camera

        -

        Continuity Camera is a feature that allows you to use your iPhone or iPad camera to scan documents or take photos on your Mac. For example, you can scan a receipt or a contract and insert it into a Pages document, or take a photo of an object and add it to a Keynote presentation.

        -

        To use Continuity Camera, make sure your iPhone or iPad is nearby and connected to the same Wi-Fi network as your Mac. Then, right-click on the document or app where you want to insert the image and select Import from iPhone or iPad. Choose Scan Documents or Take Photo and follow the instructions on your device.

        -

        Other Features

        -

        Mac OS Mojave 10.14 also includes some other features that enhance your Mac experience, such as:

        -
          -
        • Group FaceTime: You can now make video calls with up to 32 people at once using FaceTime. You can also use Animoji and Memoji to add some fun to your conversations.
        • -
        • News app: You can now read the latest news from various sources and topics in the News app on your Mac. You can also customize your feed and subscribe to premium content.
        • -
        • Home app: You can now control your smart home devices from your Mac using the Home app. You can also create scenes and automations to manage your devices with ease.
        • -
        • Redesigned Mac App Store: The Mac App Store has been redesigned with a new look and new categories. You can also discover curated content and editorial recommendations from Apple experts.
        • -
        -

        How to check if your Mac is compatible with Mac OS Mojave 10.14

        -

        Before you download and install Mac OS Mojave 10.14 DMG on your Mac, you need to check if your Mac is compatible with this version of macOS. Not all Mac models can run Mac OS Mojave 10.14.

        -

        The following Mac models are compatible with Mac OS Mojave 10.14:

        -
        • MacBook (Early 2015 or newer)
        • MacBook Air (Mid 2012 or newer)
        • MacBook Pro (Mid 2012 or newer)
        • Mac mini (Late 2012 or newer)
        • iMac (Late 2012 or newer)
        • iMac Pro (2017)
        • Mac Pro (Late 2013 or newer)

        To find out which Mac model you have, click on the Apple menu in the top-left corner of your screen and select About This Mac. You will see the model name and year of your Mac.
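
        If you'd rather check from the Terminal, the hardware overview (including the model identifier) can be printed with built-in commands; this is just an optional sketch:

        # Show the model name, model identifier, memory, and serial number
        system_profiler SPHardwareDataType

        # Or print only the model identifier (for example, MacBookPro11,1)
        sysctl -n hw.model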

        -

        How to back up your Mac before upgrading

        -

        Before you download and install Mac OS Mojave 10.14 DMG on your Mac, you should back up your Mac data using Time Machine or another backup software. This way, you can restore your data in case something goes wrong during or after the installation process.

        -

        To back up your Mac using Time Machine, you need an external hard drive formatted as Mac OS Extended (Journaled); note that Time Machine in Mojave cannot back up to an APFS-formatted drive. Then, follow these steps (a Terminal equivalent is sketched after the list):

        -
        1. Connect your external hard drive to your Mac.
        2. Click on the Apple menu and select System Preferences.
        3. Click on Time Machine and turn it on.
        4. Select your external hard drive as the backup disk.
        5. Wait for Time Machine to start backing up your Mac data. You can also click on the Time Machine icon in the menu bar and select Back Up Now to manually start a backup.
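
        The same setup can also be done from the Terminal with Apple's tmutil utility. A minimal sketch, assuming the external drive is mounted as "Backup Disk" (adjust the volume name to match yours):

        # Point Time Machine at the external drive (requires an administrator password)
        sudo tmutil setdestination "/Volumes/Backup Disk"

        # Turn automatic backups on
        sudo tmutil enable

        # Start a backup immediately and keep the command attached until it finishes
        tmutil startbackup --block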

        To back up your Mac using another backup software, you need to follow the instructions provided by the software developer. Some of the popular backup software for Mac are Carbon Copy Cloner, SuperDuper, and ChronoSync.

        -

        How to download Mac OS Mojave 10.14 DMG from the App Store

        -

        The easiest way to download Mac OS Mojave 10.14 DMG on your Mac is from the App Store. You can access the App Store from your Dock or from the Applications folder. Then, follow these steps:

        -
        1. Open the App Store and search for Mac OS Mojave 10.14 or click on this link.
        2. Click on the Get button and enter your Apple ID and password if prompted.
        3. Wait for the macOS installer to download on your Mac. You can check the progress in the Launchpad or in the Purchases tab of the App Store.
        4. Once the download is complete, the macOS installer will automatically launch. If not, you can find it in the Applications folder or in the Launchpad (a quick Terminal check of the downloaded installer is sketched after these steps).
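
        Once the App Store finishes, you can confirm from the Terminal that the installer actually landed in your Applications folder and check its size; the full Mojave installer is roughly 6 GB, and a much smaller download is usually an incomplete stub:

        # Confirm the installer app exists
        ls -d "/Applications/Install macOS Mojave.app"

        # Check its size; anything well under 6 GB is likely a stub installer
        du -sh "/Applications/Install macOS Mojave.app"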

        How to download Mac OS Mojave 10.14 DMG from a browser

        -

        An alternative way to download Mac OS Mojave 10.14 DMG on your Mac is from a browser using a disk image file. A disk image file is a compressed file that contains the macOS installer and other files. You can download a disk image file from various websites, such as this one. However, you should be careful when downloading files from unknown sources, as they may contain malware or viruses.
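
        Once a file from such a site has finished downloading (see the steps below), it is worth verifying it before you open it. A small, optional sketch; the file name is a placeholder, and clamscan assumes you have the open-source ClamAV scanner installed (for example via Homebrew, with freshclam run once to fetch signatures):

        # Compare this value against the SHA-256 checksum published by the download site, if it provides one
        shasum -a 256 ~/Downloads/macOSMojave10.14.dmg

        # Scan the downloaded file for known malware
        clamscan ~/Downloads/macOSMojave10.14.dmg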

        -

        To download Mac OS Mojave 10.14 DMG from a browser, follow these steps:

        -
        1. Open your browser and go to the website that offers the disk image file for Mac OS Mojave 10.14.
        2. Click on the download link and choose a location to save the file on your Mac.
        3. Wait for the file to download on your Mac. You can check the progress in your browser's downloads manager.
        4. Once the download is complete, locate the file on your Mac and double-click on it to mount it as a virtual drive (this can also be done from the Terminal, as sketched after these steps).
        5. Open the virtual drive and find the macOS installer inside it.
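
        Mounting and unmounting the disk image can also be done with hdiutil, as mentioned in the steps above. The file and volume names below are assumptions; substitute the ones you actually downloaded:

        # Mount the disk image; the volume appears under /Volumes
        hdiutil attach ~/Downloads/macOSMojave10.14.dmg

        # ... copy or run the installer from the mounted volume ...

        # Unmount the volume when you are done with it
        hdiutil detach "/Volumes/Install macOS Mojave"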

        How to install Mac OS Mojave 10.14 DMG on your Mac

        -

        Once you have downloaded Mac OS Mojave 10.14 DMG on your Mac, you can install it by running the macOS installer. The installation process may take some time, depending on your Mac model and internet speed. You may also need to restart your Mac several times during the installation.

        -

        To install Mac OS Mojave 10.14 DMG on your Mac, follow these steps:

        -
        1. Double-click on the macOS installer that you downloaded from the App Store or from a browser (if you prefer to run the upgrade from the command line instead, see the sketch after these steps).
        2. Click on Continue and agree to the terms and conditions.
        3. Select your Mac's hard drive as the destination disk and click on Install.
        4. Enter your administrator password if prompted and click on OK.
        5. Wait for the installation to begin and follow any instructions that appear on your screen.
        6. Your Mac will restart automatically when the installation is complete. You may need to set up some preferences, such as iCloud, Siri, Touch ID, etc., before you can use your Mac with Mac OS Mojave 10.14.
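
        If you manage several Macs or simply prefer the command line, the installer app also ships a startosinstall tool that runs the same upgrade without clicking through the windows. This is only a hedged sketch; the available flags vary between installer versions, so check the usage output first:

        # See which options this copy of the installer supports
        "/Applications/Install macOS Mojave.app/Contents/Resources/startosinstall" --usage

        # Start the upgrade on the current startup disk without clicking through the windows
        # (the Mac reboots on its own once preparation finishes)
        sudo "/Applications/Install macOS Mojave.app/Contents/Resources/startosinstall" --agreetolicense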

        How to fix common problems with Mac OS Mojave 10.14

        -

        Sometimes, you may encounter some problems with Mac OS Mojave 10.14 during or after the installation process. These problems may include:

        • Installation failure or error: This may happen if your Mac is not compatible, your internet connection is unstable, your hard drive is full or corrupted, or your macOS installer is damaged. To fix this, you can try to restart your Mac, check your internet connection, free up some space on your hard drive, or download a new macOS installer (a few quick diagnostic commands are sketched after this list).
        • Slow performance or battery drain: This may happen if your Mac is running too many apps or processes, your Mac settings are not optimized, or your Mac needs some maintenance. To fix this, you can try to quit or uninstall unnecessary apps, adjust your Mac settings, such as brightness, energy saver, notifications, etc., or run some utilities, such as Disk Utility, Activity Monitor, etc.
        • Wi-Fi or Bluetooth issues: This may happen if your Mac's network or Bluetooth settings are incorrect, your router or modem is faulty, or your devices are incompatible. To fix this, you can try to restart your Mac and your router or modem, check and update your network or Bluetooth settings, or troubleshoot your devices.
        • App compatibility issues: This may happen if some of your apps are not updated or supported by Mac OS Mojave 10.14. To fix this, you can try to update or reinstall your apps, contact the app developers for support, or look for alternative apps.
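
        A few built-in commands can help narrow down the problems above; this is only a starting point, not a full diagnosis:

        # How much free space is left on the startup disk? (keep a comfortable 15-20 GB free for the upgrade)
        df -h /

        # Check the startup volume for directory damage without changing anything
        diskutil verifyVolume /

        # See which processes are using the most CPU right now (press q to quit)
        top -o cpu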

        If none of these solutions work for you, you can also contact Apple Support for further assistance.

        -

        Conclusion

        -

        Mac OS Mojave 10.14 is a great update for your Mac that offers many new features and improvements. You can download and install Mac OS Mojave 10.14 DMG on your Mac easily from the App Store or from a browser. However, you should also check if your Mac is compatible, back up your Mac data, and fix any problems that may arise during or after the installation process.

        -

        We hope this article has helped you learn how to download and install Mac OS Mojave 10.14 DMG on your Mac. If you have any questions or feedback, please let us know in the comments below.

        -

        FAQs

        -

        Here are some frequently asked questions about Mac OS Mojave 10.14 DMG:

        -
          -
        1. What is a DMG file?

          A DMG file is a disk image file that contains compressed data and files. It can be mounted as a virtual drive on your Mac and used to install software or transfer data.

          -
        2. How do I update my Mac to Mac OS Mojave 10.14?

          You can update your Mac to Mac OS Mojave 10.14 by downloading and installing the macOS installer from the App Store or from a browser. However, you should also check if your Mac is compatible and back up your Mac data before updating.

          -
        3. How do I uninstall Mac OS Mojave 10.14?

          You can uninstall Mac OS Mojave 10.14 by restoring your Mac to a previous version of macOS using Time Machine or another backup software. However, you should also back up your current Mac data before restoring.

          -
        4. How do I create a bootable USB drive for Mac OS Mojave 10.14?

          You can create a bootable USB drive for Mac OS Mojave 10.14 by using the Terminal app and the macOS installer that you downloaded from the App Store or from a browser. You will need a USB drive that has at least 16 GB of storage space and is formatted as Mac OS Extended (Journaled) or APFS.
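
          For reference, the command that does the work is createinstallmedia, which ships inside the installer app. A minimal sketch, assuming your USB drive is mounted as "MyUSB"; the command erases that volume, so double-check the name before running it:

          # Erase the USB volume and turn it into a bootable Mojave installer
          sudo "/Applications/Install macOS Mojave.app/Contents/Resources/createinstallmedia" --volume /Volumes/MyUSB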

          -
        5. How do I install Mac OS Mojave 10.14 on a virtual machine?

          You can install Mac OS Mojave 10.14 on a virtual machine by using a software like VirtualBox, VMware Fusion, or Parallels Desktop. You will need the macOS installer that you downloaded from the App Store or from a browser and a disk image file that contains the macOS installer and other files.
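
          Most virtual machine tools expect an ISO rather than a DMG, so a common preparation step is converting the image with hdiutil. The file names here are placeholders:

          # Convert the DMG to a raw CD/DVD master (hdiutil appends .cdr to the output name)
          hdiutil convert ~/Downloads/macOSMojave10.14.dmg -format UDTO -o ~/Downloads/Mojave

          # VirtualBox, VMware Fusion, and Parallels all accept the file once it is renamed to .iso
          mv ~/Downloads/Mojave.cdr ~/Downloads/Mojave.iso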

          -

        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Biologia general marta cervantes y margarita hernandez pdf download una obra excelente que cumple con el programa actual de la UNAM.md b/spaces/raedeXanto/academic-chatgpt-beta/Biologia general marta cervantes y margarita hernandez pdf download una obra excelente que cumple con el programa actual de la UNAM.md deleted file mode 100644 index 0893c6342fa161cb06e2ccbe69764667cf6e032c..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Biologia general marta cervantes y margarita hernandez pdf download una obra excelente que cumple con el programa actual de la UNAM.md +++ /dev/null @@ -1,174 +0,0 @@ -
        -

        Biologia General: A Comprehensive Textbook by Marta Cervantes and Margarita Hernandez

        -

        Are you looking for a reliable and updated source of information on biologia general (general biology)? Do you want to learn about the fascinating world of living organisms, their structure, function, evolution and interaction with their environment? If so, you might be interested in reading Biologia General, a textbook written by Marta Cervantes and Margarita Hernandez, two renowned Mexican biologists and professors. In this article, we will tell you everything you need to know about this book, its content, its features and how to download it for free in pdf format.

        -




        -

        Introduction

        -

        Biologia General is a textbook that covers the basic concepts and principles of biologia general, the scientific study of life. It is intended for students of high school level and higher education, as well as anyone who wants to learn more about this fascinating subject. It is written in a clear and simple language, with plenty of examples, illustrations, activities, maps, references and glossary. It also includes the most recent information and discoveries in the fields of genetics, biodiversity, evolution, environmental issues and sustainable development.

        -

        What is biologia general?

        -

        Biologia general is the branch of science that deals with the study of living organisms, from their origin to their diversity, structure, function, reproduction, evolution and interaction with their environment. It encompasses various disciplines such as botany, zoology, microbiology, ecology, genetics, molecular biology and more. Biologia general aims to understand the nature of life, its origin, its diversity, its mechanisms and its patterns.

        -

        Why is biologia general important?

        -

        Biologia general is important because it helps us to understand ourselves and our place in the natural world. It also helps us to appreciate the beauty and complexity of life, as well as its challenges and problems. By studying biologia general, we can learn about the origin and evolution of life on Earth, the diversity and classification of living beings, the structure and function of cells and organisms, the processes of inheritance and variation, the mechanisms of adaptation and natural selection, the relationships between organisms and their environment, the impact of human activities on biodiversity and ecosystems, and the solutions for a sustainable development.

        -

        Who are the authors of biologia general?

        -

        The authors of biologia general are Marta Cervantes Ramírez and Margarita Hernández Hernández. They are both distinguished Mexican biologists and professors at the National Autonomous University of Mexico (UNAM). They have extensive experience in teaching biologia general at various levels of education. They have also participated in several research projects and publications related to biologia general. They have written this book with the aim of providing a comprehensive and updated source of information on biologia general for students and teachers.

        -

        Main Content

        -

        Structure and organization of biologia general

        -

        Units and chapters

        -

        The book is divided into five units that correspond to the main themes of biologia general. Each unit contains several chapters that develop specific topics within each theme. The units are:

        -
        • Unit 1: The nature of science. This unit introduces the concept and characteristics of science as a way of knowing the natural world. It also explains the scientific method, its steps and applications.
        • Unit 2: The unity of living beings. This unit deals with the common features that define life and distinguish living beings from non-living things. It also describes the structure and function of cells as the basic units of life.
        • Unit 3: The continuity of life. This unit focuses on the processes that ensure the transmission and expression of genetic information from one generation to another. It also covers the topics of mitosis, meiosis, gametogenesis, fertilization, embryonic development, DNA structure and function, gene expression and regulation.
        • Unit 4: Evolution and biological diversity. This unit explains the origin and history of life on Earth, as well as the mechanisms that generate variation and change in living beings over time. It also covers the topics of natural selection, adaptation, speciation, classification, phylogeny, and the diversity of domains, kingdoms, and phyla of living organisms.
        • Unit 5: Organisms and their environment. This unit explores the interactions between living beings and their physical and biological environment. It also covers the topics of ecological levels, populations, communities, ecosystems, biogeochemical cycles, energy flow, biomes, biodiversity, environmental problems, and sustainable development.

        Features and resources

        -

        The book has several features and resources that enhance its content and facilitate its learning. Some of these are:

        -
        • Activities of learning. These are questions or exercises that invite the reader to apply or deepen their knowledge on a specific topic.
        • Conceptual maps. These are diagrams that summarize or organize the main concepts or ideas within a chapter or unit.
        • Bibliographic references. These are sources or citations that support or complement the information presented in the book.
        • Internet pages. These are links or websites that provide additional or updated information on a specific topic.
        • Glossary. This is a list or dictionary that defines or explains the terms or concepts used in the book.
        • Analytical index. This is a tool that allows the reader to locate or find a specific term or concept within the book.

        Topics and themes of biologia general

        -

        The unity of living beings

        -

        This theme covers the common characteristics that define life as a phenomenon distinct from non-living things. It also describes how living beings are composed of cells as their basic structural and functional units. Some of the topics included in this theme are:

        -
        • The concept and characteristics of life
        • The origin of life on Earth
        • The cell theory and types of cells
        • The structure and function of cell components
        • The metabolism and homeostasis of cells
        • The transport and communication between cells
        • The nutrition and respiration of cells
        • The synthesis and degradation of biomolecules
        • The photosynthesis and chemosynthesis of autotrophic cells

        The continuity of life

        -

        This theme focuses on how living beings transmit their genetic information from one generation to another through reproduction processes. It also explains how genetic information is stored in DNA molecules and expressed in proteins that determine the traits or characteristics of living beings. Some of the topics included in this theme are:

        -
        • The concept and types of reproduction
        • The cell cycle and types of cell division
        • The gametogenesis and fertilization in sexual reproduction
        • The embryonic development in animals and plants
        • The DNA structure and function
        • The replication and repair of DNA
        • The transcription and translation of genes
        • The regulation and modification of gene expression
        • The genetic code and mutations
        • The genetic engineering and biotechnology applications

        Evolution and biological diversity

        -

        This theme explains how living beings have changed over time due to natural processes that generate variation and adaptation to different environments. It also describes how living beings are classified according to their evolutionary relationships and morphological features. Some of the topics included in this theme are:

        -


        -
        • The concept and evidence of evolution
        • The origin and history of life on Earth
        • The sources and types of variation in living beings
        • The natural selection and adaptation of living beings
        • The speciation and extinction of living beings
        • The concept and criteria of classification
        • The domains and kingdoms of living beings
        • The phyla and classes of animals
        • The phyla and divisions of plants
        • The phyla and groups of fungi
        • The phyla and groups of protoctists
        • The phyla and groups of bacteria

        Organisms and their environment

        -

        This theme explores how living beings interact with their physical and biological environment, forming different levels of ecological organization. It also analyzes the impact of human activities on biodiversity and ecosystems, as well as the solutions for a sustainable development. Some of the topics included in this theme are:

        -
        • The concept and levels of ecology
        • The populations and their characteristics
        • The communities and their interactions
        • The ecosystems and their components
        • The biogeochemical cycles and energy flow in ecosystems
        • The biomes and their characteristics
        • The biodiversity and its importance
        • The environmental problems and their causes
        • The conservation and restoration of biodiversity and ecosystems
        • The sustainable development and its principles

        Conclusion

        -

        Summary of the main points

        -

        In conclusion, biologia general is a textbook that covers the basic concepts and principles of biologia general, the scientific study of life. It is written by Marta Cervantes and Margarita Hernandez, two renowned Mexican biologists and professors. It is intended for students of high school level and higher education, as well as anyone who wants to learn more about this fascinating subject. It is divided into five units that correspond to the main themes of biologia general: the nature of science, the unity of living beings, the continuity of life, evolution and biological diversity, and organisms and their environment. It has several features and resources that enhance its content and facilitate its learning, such as activities, conceptual maps, bibliographic references, Internet pages, glossary, and analytical index.

        -

        Benefits and advantages of biologia general

        -

        Biologia general is a textbook that offers many benefits and advantages for its readers. Some of these are:

        -
        • It provides a comprehensive and updated source of information on biologia general, including the most recent discoveries and developments in the fields of genetics, biodiversity, evolution, environmental issues, and sustainable development.
        • It uses clear and simple language, with plenty of examples, illustrations, activities, maps, references, and a glossary, which make it easy to understand and apply the concepts and principles of biologia general.
        • It refers to research by recognized Mexican biologists and cites and illustrates various places and species of Mexico, highlighting the rich biological heritage and diversity of this country.
        • It fosters a scientific attitude and critical thinking in its readers, encouraging them to observe, question, investigate, experiment, analyze, and communicate their findings about the natural world.
        • It promotes appreciation and respect for life in all its forms, as well as awareness of and responsibility for the environmental problems that affect biodiversity and ecosystems, and the solutions for a sustainable development.

        How to download biologia general pdf for free

        -

        If you are interested in reading biologia general pdf for free, you can follow these simple steps:

        -
        1. Go to the Google Books website: https://books.google.com/
        2. Type "biologia general marta cervantes y margarita hernandez pdf" in the search box.
        3. Select the first result that appears: "Biología General - Marta Cervantes Ramírez, Margarita Hernández ..."
        4. Click on the "Preview this book" button.
        5. You will be able to see some pages of the book online.
        6. To download the book in pdf format, click on the "Download" button at the top right corner.
        7. You will be asked to sign in with your Google account or create one if you don't have one.
        8. After signing in, you will be able to download the book in pdf format for free.

        Congratulations! You have successfully downloaded biologia general pdf for free. Enjoy reading this amazing book!

        -

        Frequently Asked Questions (FAQs)

        -

        Here are some common questions that people might have about biologia general:

        -
        Q: What is the difference between biologia general and biology?
        -
        A: Biologia general is the Spanish term for biology, which is the English term for the scientific study of life. They are essentially the same subject, but they might have some differences in terminology or content depending on the language or country where they are taught.
        -
        Q: Who are Marta Cervantes and Margarita Hernandez?
        -
        A: Marta Cervantes Ramírez and Margarita Hernández Hernández are two distinguished Mexican biologists and professors at the National Autonomous University of Mexico (UNAM). They have extensive experience in teaching biologia general at various levels of education. They have also participated in several research projects and publications related to biologia general. They have written this book with the aim of providing a comprehensive and updated source of information on biologia general for students and teachers.
        -
        Q: How many chapters does biologia general have?
        -
        A: Biologia general has 25 chapters that are organized into five units that correspond to the main themes of biologia general: the nature of science, the unity of living beings, the continuity of life, evolution and biological diversity, and organisms and their environment.
        -
        Q: What are some examples of activities that can be found in biologia general?
        -
        A: Biologia general has many activities that invite the reader to apply or deepen their knowledge on a specific topic. Some examples are:
        -
          • Observe different types of cells under a microscope and identify their components.
          • Compare different modes of reproduction in plants and animals.
          • Construct a family tree based on your own genetic traits.
          • Simulate natural selection using beans or candies.
          • Classify different organisms using dichotomous keys.
          • Measure biodiversity using quadrats or transects.
          • Calculate ecological indicators such as population density or growth rate.
          • Analyze food webs or energy pyramids in different ecosystems.
          • Identify environmental problems or solutions in your community or country.
          • Design a sustainable project or product using renewable resources or recycled materials.
        -
        Q: How can I contact the authors of biologia general?
        -
        A: If you have any questions, comments, suggestions, or feedback about biologia general, you can contact the authors by email at martacervantes@unam.mx or margaritahernandez@unam.mx. You can also visit their website at http://www.biologiageneral.unam.mx/ where you can find more information about them and their book.
        -

        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Bootstrap Admin Template Nulled Php.md b/spaces/raedeXanto/academic-chatgpt-beta/Bootstrap Admin Template Nulled Php.md deleted file mode 100644 index 3581a3179d11f91478daf03761a9296da2f8dbd3..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Bootstrap Admin Template Nulled Php.md +++ /dev/null @@ -1,37 +0,0 @@ - -

        How to Find and Use Bootstrap Admin Template Nulled PHP

        -

        Bootstrap is a popular framework for creating responsive and mobile-friendly websites. It provides a set of ready-made components and templates that can help you speed up your web development process. However, if you want to customize your website's appearance and functionality, you may need to use an admin template.

        -

        An admin template is a collection of pages and elements that allow you to manage your website's content, users, settings, and more. It usually comes with a dashboard, charts, tables, forms, widgets, and other components that you can use to create a user-friendly interface for your website.

        -




        -

        However, not all admin templates are free or affordable. Some of them require you to pay a license fee or a subscription fee to use them. If you are looking for a way to save money and still get a high-quality admin template, you may want to consider using a bootstrap admin template nulled php.

        -

        A bootstrap admin template nulled php is a version of an admin template that has been modified or cracked to remove the license verification or the subscription requirement. This means that you can use it for free without any restrictions or limitations. However, there are some risks and drawbacks associated with using a bootstrap admin template nulled php.

        -

        The Pros and Cons of Using Bootstrap Admin Template Nulled PHP

        -

        Before you decide to use a bootstrap admin template nulled php, you should weigh the pros and cons carefully. Here are some of the advantages and disadvantages of using a bootstrap admin template nulled php:

        -

        The Pros

        -
        • You can save money by not paying for a license or a subscription fee.
        • You can access all the features and functionalities of the original admin template.
        • You can customize the admin template according to your preferences and needs.
        • You can find many nulled PHP Bootstrap admin templates online for free or at a low cost.

        The Cons

        -
        • You may violate the intellectual property rights of the original developer or owner of the admin template.
        • You may expose your website to security risks such as malware, viruses, or hacking attacks.
        • You may encounter bugs, errors, or compatibility issues with the nulled php code.
        • You may not receive any updates, support, or documentation from the original developer or owner of the admin template.
        • You may damage your reputation or credibility as a web developer or a website owner.

        How to Find and Use Bootstrap Admin Template Nulled PHP

        -

        If you still want to use a bootstrap admin template nulled php despite the risks and drawbacks, here are some steps that you can follow:

        -
        1. Search for a bootstrap admin template nulled php online. You can use search engines, forums, blogs, or websites that offer nulled php scripts and templates. Make sure that you choose a reputable and reliable source that has positive reviews and feedback from other users.
        2. Download the bootstrap admin template nulled php file to your computer. Scan it with an antivirus software to make sure that it is safe and clean from any malware or viruses.
        3. Upload the bootstrap admin template nulled php file to your website's server. You can use an FTP client or a file manager to do this. Make sure that you place it in the correct folder or directory where your website's files are located.
        4. Extract the bootstrap admin template nulled php file if it is compressed or archived. You can use a software such as WinRAR or 7-Zip to do this. You should see a folder containing the files and folders of the admin template.
        5. Install the bootstrap admin template nulled php on your website. You may need to follow some instructions or steps provided by the source of the nulled php script or template. You may also need to modify some configuration files or settings to make it work properly on your website.
        6. Enjoy using the bootstrap admin template nulled php on your website. You can now access the dashboard and other pages of the admin template. You can also customize it according to your preferences and needs.

        -

        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/CRACK Havij - Advanced SQL Injection 1.152 - Fliiix Learn the Secrets of SQL Injection with this Powerful Software.md b/spaces/raedeXanto/academic-chatgpt-beta/CRACK Havij - Advanced SQL Injection 1.152 - Fliiix Learn the Secrets of SQL Injection with this Powerful Software.md deleted file mode 100644 index 4e63a87e4dd57980ba7546fc10a6d8b9bdafc5bf..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/CRACK Havij - Advanced SQL Injection 1.152 - Fliiix Learn the Secrets of SQL Injection with this Powerful Software.md +++ /dev/null @@ -1,145 +0,0 @@ -
        -

        XLSTAT 2010 Crack Full Version: A Comprehensive Guide

        -

        If you are looking for a way to perform advanced statistical analysis in Excel, you might have heard of XLSTAT. XLSTAT is a popular and powerful software that extends Excel's capabilities with more than 200 features and functions. However, XLSTAT is not a free software, and you might need to pay a hefty price to use it. That's why some people resort to using a crack version of XLSTAT, which allows them to use the software for free or with unlimited features. But what is a crack exactly, and how does it work? And more importantly, how can you download and install XLSTAT 2010 Crack Full Version on your computer? In this article, we will answer these questions and more. We will also show you how to use XLSTAT 2010 Crack Full Version once you have it installed.

        -




        -

        What is XLSTAT and why do you need it?

        -

        Before we dive into the details of how to get and use XLSTAT 2010 Crack Full Version, let's first understand what XLSTAT is and why you might need it.

        -

        XLSTAT is a powerful statistical analysis software for Excel

        -

        XLSTAT is an add-on statistical analysis program that integrates seamlessly with Excel. It provides a wide range of functions and features to enhance Excel's analytics capabilities. With XLSTAT, you can perform descriptive statistics, hypothesis testing, regression analysis, multivariate analysis, time series analysis, machine learning, data mining, data visualization, and much more. You can also access more than 100 statistical tests and methods from various fields such as marketing, finance, engineering, biology, psychology, etc.

        -

        XLSTAT offers a wide range of features and functions to enhance your data analysis

        -

        XLSTAT has more than 200 features and functions that cover all aspects of data analysis. Some of the main features include:

        -
        • Data preparation: You can import, export, transform, clean, filter, sort, merge, split, sample, or transpose your data easily with XLSTAT.
        • Data exploration: You can explore your data with various tools such as summary statistics, histograms, box plots, scatter plots, correlation matrices, normality tests, etc.
        • Data modeling: You can build various models with your data such as linear regression, logistic regression, ANOVA, ANCOVA, GLM, GAM, mixed models, survival analysis, etc.
        • Data analysis: You can perform various types of analysis with your data such as factor analysis, cluster analysis, principal component analysis (PCA), discriminant analysis (DA), correspondence analysis (CA), multidimensional scaling (MDS), etc.
        • Data mining: You can apply various techniques to discover patterns and insights from your data such as association rules, decision trees, neural networks, support vector machines (SVM), k-means clustering, etc.
        • Data visualization: You can create various types of charts and graphs with your data such as bar charts, pie charts, line charts, area charts, bubble charts, radar charts, heat maps, contour plots, etc.
        • Data reporting: You can generate various types of reports with your data such as tables, dashboards, interactive reports, HTML reports, PDF reports, etc.

        XLSTAT is compatible with all versions of Excel and Windows

        -

        XLSTAT is compatible with all versions of Excel from version 97 to 2016 (except Mac 2016) and is compatible with Windows 9x and Windows 10 systems. This means that you can use XLSTAT with any Excel file and on any Windows computer.

        -

        What is a crack and how does it work?

        -

        Now that you know what XLSTAT is and why you might need it, let's talk about what a crack is and how it works.

        -

        A crack is a software tool that bypasses the security or licensing system of a software

        -

        A crack is a software tool that modifies or replaces the original code of a software to bypass its security or licensing system. A security or licensing system is a mechanism that prevents unauthorized use or distribution of a software. For example, some software require a serial number, a product key, a password, or an online activation to verify that you have purchased the software legally. A crack can remove or disable these requirements and allow you to use the software without any restrictions.

        -

        A crack can allow you to use a software for free or with unlimited features

        -

        A crack can have different effects depending on the type of software and the type of crack. Some common effects are:

        -

        -
      • Using a software for free: A crack can allow you to use a software that normally requires payment without paying anything. For example, XLSTAT is a paid software that costs between $50 and $950 per year depending on the edition and the number of users. A crack can allow you to use XLSTAT for free without paying anything.
      • Using a software with unlimited features: A crack can allow you to use a software that normally has limited features without any limitations. For example, XLSTAT has different editions that offer different features and functions. The basic edition has only 13 features, while the premium edition has more than 200 features. A crack can allow you to use all the features of XLSTAT regardless of the edition.
      • Using a software with unlimited time: A crack can allow you to use a software that normally has an expiration date without any expiration. For example, some software offer a trial period that allows you to use the software for a limited time before you have to purchase it. XLSTAT offers a 14-day free trial that allows you to use all the features of XLSTAT for 14 days. A crack can allow you to use XLSTAT indefinitely without purchasing it.

        A crack can also pose risks and disadvantages such as malware, viruses, legal issues, or performance issues

        -

        While a crack might seem tempting and beneficial, it also comes with some risks and disadvantages that you should be aware of. Some of the common risks and disadvantages are:

        -
      • Malware and viruses: A crack can contain malicious code that can harm your computer or steal your personal information. Malware and viruses can infect your files, corrupt your data, slow down your system, spy on your activities, or even take control of your computer. For example, some cracks for XLSTAT might contain trojans, worms, keyloggers, ransomware, or adware that can damage your computer or compromise your privacy.
      • Legal issues: A crack can violate the intellectual property rights of the software developer or publisher. Using a crack can be considered as piracy, which is illegal in most countries. Piracy can result in fines, lawsuits, or even criminal charges. For example, using a crack for XLSTAT might infringe the copyright of Addinsoft, the developer and publisher of XLSTAT, and expose you to legal consequences.
      • Performance issues: A crack can affect the performance or functionality of the software or your computer. A crack can cause errors, bugs, crashes, freezes, or compatibility issues with the software or your computer. For example, some cracks for XLSTAT might not work properly with some features or functions of XLSTAT, or might cause Excel to malfunction or crash.

        How to download and install XLSTAT 2010 Crack Full Version?

        -

        If you are still interested in using XLSTAT 2010 Crack Full Version despite the risks and disadvantages, you might wonder how to download and install it on your computer. Here are the steps that you need to follow:

        -

        Step 1: Find a reliable source for the crack file

        -

        The first step is to find a reliable source for the crack file. A crack file is a file that contains the code or instructions that can activate the full version of XLSTAT 2010. You can find crack files on various websites that offer software downloads, such as Wannacrack, Thuthuat-phanmem, Peatix, Bitbucket, or SoundCloud. However, you should be careful when choosing a source for the crack file, as some sources might be fake, outdated, or infected with malware or viruses. You should always check the reviews, ratings, comments, and feedback of other users before downloading a crack file from any source.

        -

        Step 2: Download the crack file and the original software from the source

        -

        The second step is to download the crack file and the original software from the source. The original software is the software that you want to use with the crack file. In this case, it is XLSTAT 2010. You can download both the crack file and the original software from the same source or from different sources. However, you should make sure that they are compatible and match each other. For example, if you download a crack file for XLSTAT 2010 Premium x64, you should also download the original software for XLSTAT 2010 Premium x64. You should also make sure that you have enough space on your hard disk to store both files.

        -

        Step 3: Install the original software and then run the crack file

        -

        The third step is to install the original software and then run the crack file. To install the original software, you need to follow the instructions on the installer or in the readme file that comes with it. You might need to enter a serial number, a product key, a password, or an online activation to complete the installation. However, these requirements will be removed or disabled by the crack file later. To run the crack file, you need to double-click on it or right-click on it and select Run as administrator. You might need to disable your antivirus or firewall temporarily before running the crack file, as they might block or delete it.

        -

        Step 4: Follow the instructions on the crack file to activate the full version of XLSTAT 2010

        -

        The fourth and final step is to follow the instructions on the crack file to activate the full version of XLSTAT 2010. The instructions might vary depending on the type and source of the crack file, but they usually involve copying and pasting some files or codes into the installation folder of XLSTAT 2010. You might also need to restart your computer or Excel after running the crack file. Once you have followed the instructions, you should be able to use XLSTAT 2010 Crack Full Version without any limitations.

        -

        How to use XLSTAT 2010 Crack Full Version?

        -

        Now that you have downloaded and installed XLSTAT 2010 Crack Full Version, you might wonder how to use it. Here are some tips and tricks that can help you get started:

        -

        XLSTAT 2010 Crack Full Version has the same features and functions as the original software

        -

        XLSTAT 2010 Crack Full Version has the same features and functions as the original software, so you can use it as if you have purchased it legally. You can access all the options and modules of XLSTAT 2010 without any restrictions. You can also update XLSTAT 2010 to the latest version without any problems.

        -

        You can access XLSTAT from the Excel toolbar or menu

        -

        You can access XLSTAT from the Excel toolbar or menu, depending on your Excel version and settings. You can find XLSTAT on the Add-Ins tab or on a separate tab on the Excel toolbar. You can also find XLSTAT on the Tools menu or on a separate menu on the Excel menu bar. You can click on any of the icons or options to open XLSTAT and start using it.

        -

        You can choose from various options and modules depending on your data analysis needs

        -

        You can choose from various options and modules depending on your data analysis needs. You can find different categories of options and modules on the XLSTAT toolbar or menu, such as Data Analysis, Data Mining, Data Visualization, Data Reporting, etc. You can also use the search box to find a specific option or module by typing its name or keyword. You can click on any of the options or modules to open a dialog box where you can select your data, parameters, and outputs.

        -

        You can also customize your settings and preferences in XLSTAT

        -

        You can also customize your settings and preferences in XLSTAT to suit your needs and preferences. You can find the Settings option on the XLSTAT toolbar or menu, where you can change various aspects of XLSTAT, such as language, appearance, output format, default values, etc. You can also find the Preferences option on the XLSTAT toolbar or menu, where you can change various aspects of Excel, such as calculation mode, decimal separator, etc.

        -

        Conclusion

        -

        In this article, we have explained what XLSTAT is and why you might need it. We have also explained what a crack is and how it works. We have also shown you how to download and install XLSTAT 2010 Crack Full Version on your computer. We have also given you some tips and tricks on how to use XLSTAT 2010 Crack Full Version once you have it installed.

        -

        XLSTAT 2010 Crack Full Version is a useful tool for statistical analysis in Excel that can save you money and time. However, it also comes with some risks and drawbacks that you should be aware of. You should always use a crack at your own discretion and responsibility.

        -

        FAQs

        -

        Here are some frequently asked questions about XLSTAT 2010 Crack Full Version:

        -

        Is XLSTAT 2010 Crack Full Version safe?

        -

      Not completely. The crack file might contain malware or viruses that can harm your computer or steal your personal information, and it can cause errors, crashes, or compatibility issues with XLSTAT or Excel. It also violates the intellectual property rights of Addinsoft, the developer and publisher of XLSTAT: using a crack is considered piracy, which is illegal in most countries and can result in fines, lawsuits, or even criminal charges.
      -

      Where can I find a reliable source for XLSTAT 2010 Crack Full Version?

      -

      There is no definitive answer to this question, as different sources might have different levels of reliability and quality. However, some general tips that can help you find a reliable source for XLSTAT 2010 Crack Full Version are:

      -
      • Check the reviews, ratings, comments, and feedback of other users who have downloaded the crack file from the source. Look for positive and negative feedback, and see if there are any complaints or issues reported by other users.
      • Check the date and version of the crack file and the original software. Make sure that they are updated and match each other. Avoid downloading outdated or mismatched crack files or original software.
      • Check the size and format of the crack file and the original software. Make sure that they are reasonable and compatible with each other. Avoid downloading suspiciously large or small files, or files with unknown or uncommon formats.
      • Check the security and reputation of the source website. Make sure that it is safe and trustworthy. Avoid downloading from websites that have poor design, low traffic, pop-up ads, malware warnings, or bad reputation.

      How can I update XLSTAT 2010 Crack Full Version?

      -

      You can update XLSTAT 2010 Crack Full Version by downloading and installing the latest version of XLSTAT 2010 from the official website of Addinsoft. However, you might need to download and run a new crack file to activate the full version of XLSTAT 2010 after updating it. You should also make sure that the new crack file is compatible and matches with the new version of XLSTAT 2010.

      -

      What are some alternatives to XLSTAT 2010 Crack Full Version?

      -

      If you are looking for some alternatives to XLSTAT 2010 Crack Full Version that are legal and safe, you might consider these options:

      -
        -
      • Use the free trial version of XLSTAT 2010: You can use the free trial version of XLSTAT 2010 for 14 days without any limitations. You can download it from the official website of Addinsoft. However, you will need to purchase a license after 14 days if you want to continue using it.
      • -
      • Use a free or open-source statistical analysis software: You can use a free or open-source statistical analysis software that has similar features and functions as XLSTAT 2010. Some examples are R, Python, SPSS, SAS, Stata, etc. These software are free or open-source, which means that you can use them without paying anything or with minimal costs. They also have similar or better features and functions as XLSTAT 2010. However, they might have different interfaces, syntaxes, or formats than XLSTAT 2010, so you might need to learn how to use them.
      • -
      • Use an online statistical analysis service: You can use an online statistical analysis service that can perform various types of analysis with your data. Some examples are StatCrunch, Statwing, DataCracker, etc. These services are web-based, which means that you can use them without installing anything on your computer. They also have user-friendly interfaces and interactive features that can make your data analysis easier and faster. However, they might have limited features or functions compared to XLSTAT 2010, or they might require a subscription or a fee to use them.
      • -
      -
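      As a concrete illustration of the free route mentioned above, here is a minimal Python sketch using pandas and SciPy for basic descriptive statistics and a two-sample t-test. The file name and column names are placeholders for illustration only, not part of XLSTAT or any real dataset.

```python
import pandas as pd
from scipy import stats

# Load a spreadsheet exported from Excel (hypothetical file and column names).
df = pd.read_excel("measurements.xlsx")

# Descriptive statistics, similar in spirit to a spreadsheet add-in's summary table.
print(df["value"].describe())

# Welch's two-sample t-test between two hypothetical groups.
group_a = df.loc[df["group"] == "A", "value"]
group_b = df.loc[df["group"] == "B", "value"]
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

      R offers equivalent one-liners such as summary() and t.test(), so the choice mostly comes down to which ecosystem you already know.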

      What are some advantages of using XLSTAT 2010 Crack Full Version?

      -

      Some of the advantages of using XLSTAT 2010 Crack Full Version are:

      -
        -
      • Saving money: You can use XLSTAT 2010 Crack Full Version without paying anything for it. You can save money that you would otherwise spend on purchasing a license for XLSTAT 2010.
      • -
      • Saving time: You can use XLSTAT 2010 Crack Full Version without waiting for a trial period to end or for an online activation to complete. You can save time that you would otherwise spend on verifying your purchase of XLSTAT 2010.
      • -
      • Accessing unlimited features: You can use XLSTAT 2010 Crack Full Version without any limitations on its features and functions. You can access all the options and modules of XLSTAT 2010 regardless of the edition.
      • -
      -

      0a6ba089eb
      -
      -
      \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Chimera Tool License Crack 240 Unlock Any Android Device in Minutes.md b/spaces/raedeXanto/academic-chatgpt-beta/Chimera Tool License Crack 240 Unlock Any Android Device in Minutes.md deleted file mode 100644 index 7207454bd4930b45ed95fe9a457b200cb9d4347e..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Chimera Tool License Crack 240 Unlock Any Android Device in Minutes.md +++ /dev/null @@ -1,130 +0,0 @@ - -

      Chimera Tool License Crack 240: What You Need to Know

      -

      If you are a mobile phone repair professional or enthusiast, you might have heard of Chimera Tool, a powerful software tool designed for unlocking, flashing, repairing, and removing locks from various Android and iOS devices. However, you might also know that using this tool requires a license activation, which can be quite expensive and inconvenient. That's why some people look for ways to crack the license and use the tool for free. In this article, we will tell you everything you need to know about Chimera Tool License Crack 240, including its features, how to download and install it, how to use it for different brands, and its pros and cons. Read on to find out more.

      -

      Features of Chimera Tool

      -

      Chimera Tool is a software suite that offers a range of features for mobile phone technicians, such as:

      -

      chimera tool license crack 240


      DOWNLOADhttps://tinourl.com/2uL26N



      -
        -
      • Unlocking of Android and iOS devices: This feature allows you to easily remove carrier lock from Android and iOS devices, which means you can use them with any network provider. This is especially useful for repairing and reselling devices.
      • -
      • Flashing firmware: This feature allows you to update or downgrade the software version on your device, which can help you fix software-related issues, bugs, or errors. You can also edit or add languages to your device using this feature.
      • -
      • Backup and restore of device data: This feature allows you to easily backup and restore your device data, such as contacts, messages, photos, videos, etc. This is very helpful for restoring your device to its original state after performing repairs.
      • -
      • Repairing IMEI and other network-related issues: This feature allows you to repair IMEI and other network-related issues on your device, such as no signal, invalid SIM card, etc. This is essential for making sure your device can connect to the network and function properly.
      • -
      • Removing Google account lock (FRP): This feature allows you to easily remove the Google account lock (FRP), which is a security feature that prevents unauthorized access to your device after a factory reset. This can be a hassle for technicians who need to access the device for repairs, and Chimera Tool makes it easy to bypass this lock.
      • -
      • Reading and writing of device firmware: This feature allows you to read and write device firmware, which is useful for performing advanced repairs. You can make custom modifications to your device's firmware using this feature.
      • -
      -

      Chimera Tool supports direct unlock and IMEI repair for various brands like Samsung, LG, Huawei, Nokia, Xiaomi, Oppo, Vivo, Sony, Motorola, HTC, Blackberry, Apple, and many more. It also supports multiple languages and has a user-friendly interface that makes it easy to use.

      -

      How to Download and Install Chimera Tool Crack with Loader

      -

      If you want to use Chimera Tool without paying for a license activation, you can download and install Chimera Tool Crack with Loader from a reliable source. Here are the steps you need to follow:

      -
        -
      1. Step 1: Download the Chimera Tool Crack file from a reliable source. You can find some links below. Make sure you scan the file for viruses or malware before opening it.
      2. -
      3. Step 2: Extract the file using WinRAR or any other extraction tool. You will find a folder named "ChimeraTool" with two files inside: "ChimeraTool.exe" and "Loader.exe". Right-click on "Loader.exe" and select "Run as administrator".
      4. -
      5. Step 3: Wait for the installation process to complete. It may take a few minutes depending on your system speed. Once it is done, you will see a message saying "Installation completed successfully". Now you can launch the tool by clicking on "ChimeraTool.exe".
      6. -
      -

      Congratulations! You have successfully installed Chimera Tool Crack with Loader on your PC. You can now use it without any license key or activation.

      -

      chimera tool pro license crack 240
      -chimera tool activation code crack 240
      -chimera tool full version crack 240
      -chimera tool license key generator 240
      -chimera tool cracked download 240
      -chimera tool free license 240
      -chimera tool premium crack 240
      -chimera tool crack without box 240
      -chimera tool license renewal crack 240
      -chimera tool latest version crack 240
      -chimera tool crack for windows 10 240
      -chimera tool license username and password 240
      -chimera tool crack with loader 240
      -chimera tool license expired crack 240
      -chimera tool crack for android 240
      -chimera tool license file download 240
      -chimera tool crack no dongle 240
      -chimera tool license transfer crack 240
      -chimera tool crack for mac 240
      -chimera tool license activation error 240
      -chimera tool crack with keygen 240
      -chimera tool license price crack 240
      -chimera tool crack for linux 240
      -chimera tool license verification crack 240
      -chimera tool crack with patch 240
      -chimera tool license cost crack 240
      -chimera tool crack for ios 240
      -chimera tool license email and password 240
      -chimera tool crack with serial number 240
      -chimera tool license discount code crack 240
      -chimera tool crack for pc 240
      -chimera tool license login crack 240
      -chimera tool crack with registration code 240
      -chimera tool license coupon code crack 240
      -chimera tool crack for laptop 240
      -chimera tool license purchase crack 240
      -chimera tool crack with activation key 240
      -chimera tool license refund policy crack 240
      -chimera tool crack for desktop 240
      -chimera tool license support crack 240
      -chimera tool crack with online mode 240
      -chimera tool license upgrade crack 240
      -chimera tool crack with offline mode 240
      -chimera tool license change device crack 240
      -chimera tool crack for tablet 240
      -chimera tool license reset password crack 240
      -chimera tool crack with tutorial video 240
      -chimera tool license features comparison crack 240
      -chimera tool crack for smartphone

      -

      How to Use Chimera Tool Crack for Various Brands

      -

      Now that you have installed Chimera Tool Crack with Loader on your PC, you can use it for various brands of devices. Here are the steps you need to follow:

      -
        -
      1. Step 1: Connect your device to your PC using a USB cable. Make sure you enable USB debugging mode on your device if it is an Android device.
      2. -
      3. Step 2: Select your device model and brand from the list on the left side of the tool's interface. You can also use the search bar to find your device quickly.
      4. -
      5. Step 3: Choose the desired operation from the list on the right side of the tool's interface. You can see various options such as Unlock Online (for unlocking devices), Flash Firmware (for flashing firmware), Backup/Restore (for backing up or restoring data), Repair IMEI (for repairing IMEI), Remove FRP (for removing Google account lock), Read/Write Firmware (for reading or writing firmware), etc.
      6. -
      7. Step 4: Click on Start button at the bottom of the tool's interface. Wait for the operation to complete. It may take some time depending on your device model and operation type.
      8. -
      -

      You have successfully used Chimera Tool Crack for various brands of devices. You can check the status of your operation in the log window at the bottom of the tool's interface.

      -

      Pros and Cons of Using Chimera Tool Crack

      -

      Using Chimera Tool Crack has its advantages and disadvantages. Here are some of them:

      -
        -
      • Pros:
      • -
          -
        • No need for box or dongle: You can use Chimera Tool without any hardware requirement such as box or dongle.
        • -
        • No license key required: You don't need any license key or activation code to use Chimera Tool.
        • -
        • Supports multiple languages and devices: You can use Chimera Tool in different languages such as English, Spanish, French, German, etc. You can also use it for various brands and models of devices.
        • -
        • Easy to use interface: You can easily navigate through the tool's interface and perform various operations with just a few clicks.
        • -
        • Cons:
        • -
            -
          • Risk of malware or virus infection: You may download a corrupted or infected file from an unreliable source, which can harm your PC or device.
          • -
          • Legal issues: You may violate the terms and conditions of Chimera Tool by using a cracked version, which can result in legal actions or penalties.
          • -
          • Possible damage to device or warranty: You may damage your device or void its warranty by using an unauthorized tool or performing unauthorized operations.
          • -
          • Limited updates and support: You may not receive regular updates or support from Chimera Tool team by using a cracked version.
          • -
          -
        -

        You should weigh the pros and cons of using Chimera Tool Crack before deciding to use it.

        -

        Conclusion

        -

        Chimera Tool is a powerful software for mobile phone repair professionals. It offers a range of features such as unlocking, flashing, repairing, and removing locks from various Android and iOS devices. However, using this tool requires a license activation, which can be costly and inconvenient. That's why some people look for ways to crack the license and use the tool for free. In this article, we have told you everything you need to know about Chimera Tool License Crack 240, including its features, how to download and install it, how to use it for different brands, and its pros and cons. We hope you have found this article helpful and informative. However, we do not recommend using Chimera Tool Crack as it may pose some risks and issues. We suggest you use a genuine license for Chimera Tool or look for some alternatives that are safe and legal.

        -

        FAQs

        -

        Here are some frequently asked questions about Chimera Tool License Crack 240:

        -
          -
        1. Q1: Is Chimera Tool Crack safe to use?
        2. -

          A1: No, Chimera Tool Crack is not safe to use. You may download a corrupted or infected file from an unreliable source, which can harm your PC or device. You may also violate the terms and conditions of Chimera Tool by using a cracked version, which can result in legal actions or penalties. You may also damage your device or void its warranty by using an unauthorized tool or performing unauthorized operations. You may also not receive regular updates or support from Chimera Tool team by using a cracked version.

          -
        3. Q2: What are the system requirements for Chimera Tool Crack?
        4. -

          A2: The system requirements for Chimera Tool Crack are as follows:

          -
            -
          • Operating system: Windows XP/Vista/7/8/10
          • -
          • Processor: Intel Pentium 4 or higher
          • -
          • RAM: 1 GB or higher
          • -
          • Hard disk space: 500 MB or higher
          • -
          • Internet connection: Required for downloading and updating the tool
          • -
          -
        5. Q3: How can I get a genuine license for Chimera Tool?
        6. -

          A3: You can get a genuine license for Chimera Tool by visiting their official website and choosing a suitable plan. You can pay with PayPal, credit card, bank transfer, or Bitcoin. You will receive an activation code via email after completing the payment. You can then enter the activation code in the tool's interface and start using it.

          -
        7. Q4: What are some alternatives to Chimera Tool Crack?
        8. -

          A4: Some alternatives to Chimera Tool Crack are:

          -
            -
          • Z3X Samsung Tool Pro: This is a tool for unlocking, flashing, and repairing Samsung devices.
          • -
          • SigmaKey Box: This is a tool for unlocking, flashing, and repairing MTK, Qualcomm, Broadcom, Hi-Silicon, Spreadtrum devices.
          • -
          • Miracle Box: This is a tool for unlocking, flashing, and repairing Chinese devices.
          • -
          • EFT Dongle: This is a tool for unlocking, flashing, and repairing Android devices.
          • -
          -
        9. Q5: How can I contact Chimera Tool support team?
        10. -

          A5: You can contact Chimera Tool support team by visiting their official website and clicking on "Contact Us" at the bottom of the page. You can also send them an email at support@chimeratool.com or call them at +36 1 999 0630.

          -

          0a6ba089eb
          -
          -
          \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Evermotion ? Archinterior Vol.46 [NEW].md b/spaces/raedeXanto/academic-chatgpt-beta/Evermotion ? Archinterior Vol.46 [NEW].md deleted file mode 100644 index 5f6be589e9d13ce064128d5cb175d4a285296ac4..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Evermotion ? Archinterior Vol.46 [NEW].md +++ /dev/null @@ -1,84 +0,0 @@ - - - - - - -
          -

          Evermotion – Archinterior Vol.46: A Review

          -

          Are you looking for a way to create stunning 3D renderings of industrial loft interiors? Do you want to save time and effort by using ready-made scenes with professional shaders and lighting? If so, you might be interested in Evermotion – Archinterior Vol.46.

          -

          Introduction

          -

          Evermotion – Archinterior Vol.46 is a collection of 10 fully textured industrial loft interior scenes that you can use for your 3D rendering projects. It is a product from Evermotion, a company that specializes in creating high-quality 3D models, scenes, textures, and tutorials for various fields of visual arts. Evermotion – Archinterior Vol.46 is designed for V-ray 3.0 and 3dsmax 2011 or higher. It features high-quality models of furniture, appliances, accessories, and decorations that you can easily arrange and modify according to your preferences. It also comes with realistic lighting and shading settings that create a natural and atmospheric mood for each scene.

          -

          Evermotion – Archinterior Vol.46


          Download Filehttps://tinourl.com/2uL1tc



          -

          Evermotion – Archinterior Vol.46 is suitable for anyone who wants to create impressive 3D renderings of industrial loft interiors without spending too much time and effort. Whether you are an architect, interior designer, 3D artist, or hobbyist, you can benefit from using this product for your personal or commercial projects.

          -

          How to use Evermotion – Archinterior Vol.46?

          -

          Using Evermotion – Archinterior Vol.46 is very easy and straightforward. Here are the steps you need to follow:

          -

          How to download and install it?

          -

          To download Evermotion – Archinterior Vol.46, you need to purchase it from the official website of Evermotion at https://evermotion.org/shop/show_product/archinteriors-vol-46/12350. You can pay with PayPal, credit card, or bank transfer. After you complete the payment, you will receive an e-mail with a link to download the product.

          -

          The product is a zip file that contains 10 folders, one for each scene. Each folder has a max file, a psd file, and a textures folder. The total size of the product is about 9 GB, so make sure you have enough space on your computer before downloading it.

          -

          To install Evermotion – Archinterior Vol.46, you just need to unzip the file and copy the folders to your desired location on your computer. You don't need to install any additional software or plugins, as long as you have V-ray 3.0 and 3dsmax 2011 or higher installed on your computer.

          -

          How to open and render the scenes?

          -

          To open the scenes, you just need to double-click on the max file of the scene you want to use. This will open 3dsmax and load the scene with all the models, textures, lights, cameras, and settings.

          -

          To render the scenes, you just need to click on the render button on the toolbar or press F9 on your keyboard. This will open the V-ray frame buffer window and start rendering the scene with the default settings. You can also adjust the render settings according to your needs, such as resolution, quality, output format, etc.

          -

          -

          The rendering time may vary depending on your computer specifications and the complexity of the scene. Generally, it takes about 10 minutes to an hour to render a scene at full HD resolution (1920 x 1080 pixels). You can also use a render farm service if you want to speed up the rendering process or render multiple scenes at once.

          -

          How to customize and edit the scenes?

          -

          One of the advantages of Evermotion – Archinterior Vol.46 is that you can customize and edit the scenes as much as you want. You can change the colors, materials, textures, lighting, camera angles, and other parameters of any model or element in the scene. You can also add, delete, move, rotate, scale, or transform any model or element in the scene.

          -

          To customize and edit the scenes, you just need to use the tools and options available in 3dsmax and V-ray. For example, you can use the material editor to change the materials and textures of any model or element in the scene. You can use the modify panel to change the parameters of any model or element in the scene. You can use the move, rotate, scale, or transform tools to manipulate any model or element in the scene.

          -

          You can also use some of the features that are specific to Evermotion – Archinterior Vol.46. For example, you can use the proxy objects feature to replace some of the high-poly models with low-poly models for faster viewport performance and rendering speed. You can use the forest pack feature to scatter some of the models such as plants or books randomly across a surface for more realism and variation.

          -

          What are the pros and cons of Evermotion – Archinterior Vol.46?

          -

          Evermotion – Archinterior Vol.46 is a great product that offers many benefits for anyone who wants to create stunning 3D renderings of industrial loft interiors. However, it also has some drawbacks that you should be aware of before buying it. Here are some of the pros and cons of Evermotion – Archinterior Vol.46:

          - - - - - - - - -
          Pros:
          • High-quality models of industrial loft interiors with realistic textures and details
          • Professional lighting and shading settings that create a natural and atmospheric mood for each scene
          • Easy to use and customize with 3dsmax and V-ray tools and features
          • Compatible with V-ray 3.0 and 3dsmax 2011 or higher
          • Suitable for personal or commercial projects

          Cons:
          • Expensive compared to some other products or software
          • Large file size that requires a lot of disk space and memory
          • Requires V-ray 3.0 and 3dsmax 2011 or higher, which may not be available or compatible for some users
          • Limited to 10 scenes that may not cover all the possible scenarios or styles of industrial loft interiors
          • May take a long time to render depending on the computer specifications and the render settings
          -

          What are some alternatives to Evermotion – Archinterior Vol.46?

          -

          If you are not satisfied with Evermotion – Archinterior Vol.46 or you want to explore other options, you can check out some of the alternatives that are available in the market. Here are some of them:

          -

          Other products from Evermotion

          -

          Evermotion has a wide range of products that offer high-quality 3D models, scenes, textures, and tutorials for various fields of visual arts. Some of them are similar to Evermotion – Archinterior Vol.46, such as:

          -
            -
          • Archmodels: A collection of 3D models of various objects, such as furniture, plants, vehicles, etc. You can use them to populate your scenes or create your own scenes.
          • -
          • Archexteriors: A collection of 3D scenes of various exterior environments, such as buildings, streets, parks, etc. You can use them to render realistic outdoor scenes or backgrounds.
          • -
          • Archinteriors for UE4: A collection of 3D scenes of various interior environments that are optimized for Unreal Engine 4. You can use them to create interactive applications or games.
          • -
          -

          You can find more products from Evermotion at https://evermotion.org/shop/.

          -

          Other products from different vendors

          -

          There are also other vendors that offer similar products to Evermotion – Archinterior Vol.46, such as:

          -
            -
          • CGAxis: A company that provides 3D models, scenes, textures, and HDRI maps for various fields of visual arts. They have a series of products called CGAxis Complete that include hundreds of 3D models and scenes of various themes and styles. You can find more products from CGAxis at https://cgaxis.com/.
          • -
          • Viz-People: A company that provides 3D models, scenes, textures, cut-out people, and HDRI maps for various fields of visual arts. They have a series of products called Viz-People Complete that include thousands of 3D models and scenes of various themes and styles. You can find more products from Viz-People at https://www.viz-people.com/.
          • -
          -

          Other software for 3D rendering

          -

          If you don't want to use ready-made scenes or models, you can also create your own scenes or models using other software for 3D rendering, such as:

          -
            -
          • Blender: A free and open-source software for 3D modeling, animation, rendering, and more. It has a built-in renderer called Cycles that can produce realistic results. You can also use other renderers such as Eevee or LuxCoreRender with Blender. You can download Blender at https://www.blender.org/.
          • -
          • SketchUp: A software for 3D modeling, design, and visualization. It is easy to use and has a large library of models and materials that you can access online. You can also use other renderers such as V-ray or Enscape with SketchUp. You can download SketchUp at https://www.sketchup.com/.
          • -
          -

          Conclusion

          -

          In conclusion, Evermotion – Archinterior Vol.46 is a product that offers high-quality 3D scenes of industrial loft interiors that you can use for your 3D rendering projects. It is easy to use and customize with 3dsmax and V-ray tools and features. It is suitable for anyone who wants to create stunning 3D renderings of industrial loft interiors without spending too much time and effort.

          -

          However, it also has some drawbacks that you should be aware of before buying it, such as the price, the file size, the software requirements, the scene limitations, and the rendering time. You should also consider some of the alternatives that are available in the market, such as other products from Evermotion or different vendors, or other software for 3D rendering.

          -

          Overall, I would recommend Evermotion – Archinterior Vol.46 to anyone who wants to create stunning 3D renderings of industrial loft interiors with ease and professionalism. I would give it a rating of 4.5 out of 5 stars.

          -

          FAQs

          -

          Here are some of the frequently asked questions about Evermotion – Archinterior Vol.46:

          -

          Q1: How much does Evermotion – Archinterior Vol.46 cost?

          -

          A1: It costs $120 for a single user license. You can also buy it as part of a bundle with other products from Evermotion and save up to 50%.

          -

          Q2: How many scenes are included in Evermotion – Archinterior Vol.46?

          -

          A2: It includes 10 fully textured industrial loft interior scenes. Each scene has a different layout, style, and theme.

          -

          Q3: What are the formats of the files in Evermotion – Archinterior Vol.46?

          -

          A3: The files are in max and psd formats. The max files are compatible with V-ray 3.0 and 3dsmax 2011 or higher. The psd files are compatible with Photoshop or any other image editing software.

          -

          Q4: Can I use Evermotion – Archinterior Vol.46 for commercial projects?

          -

          A4: Yes, you can use it for commercial projects as long as you follow the license agreement. You can read the license agreement at https://evermotion.org/shop/show_product/archinteriors-vol-46/12350#license.

          -

          Q5: Where can I find more information and support for Evermotion – Archinterior Vol.46?

          -

          A5: You can visit the official website of Evermotion at https://evermotion.org/ or contact them through e-mail at info@evermotion.org. You can also find more information and support on their forum at https://forum.evermotion.org/.

          -

          I hope you enjoyed reading this article and found it helpful. If you have any questions or feedback, please feel free to leave a comment below. Thank you for your time and attention.

          b2dd77e56b
          -
          -
          \ No newline at end of file diff --git a/spaces/rahul999r/Rahul_Kannada_TTS/scripts/glow/prepare_data.sh b/spaces/rahul999r/Rahul_Kannada_TTS/scripts/glow/prepare_data.sh deleted file mode 100644 index 2357eeebd0fb7e6fba858242af44e8b8aa87fdf9..0000000000000000000000000000000000000000 --- a/spaces/rahul999r/Rahul_Kannada_TTS/scripts/glow/prepare_data.sh +++ /dev/null @@ -1,12 +0,0 @@ -input_text_path='/home/harveen/en/iitm_data/english/txt.done.data' -input_wav_path='/home/harveen/en/iitm_data/english/wav_22k' -gender='male' - - -output_data_path='../../data/glow/'$gender - -valid_samples=100 -test_samples=10 - -mkdir -p $output_data_path -python ../../utils/glow/prepare_iitm_data_glow_en.py -i $input_text_path -o $output_data_path -w $input_wav_path -v $valid_samples -t $test_samples diff --git a/spaces/rahul999r/Rahul_Kannada_TTS/src/glow_tts/hifi/__init__.py b/spaces/rahul999r/Rahul_Kannada_TTS/src/glow_tts/hifi/__init__.py deleted file mode 100644 index 0323b35a0fc2ef21ac417857d9336cc7c8a3b717..0000000000000000000000000000000000000000 --- a/spaces/rahul999r/Rahul_Kannada_TTS/src/glow_tts/hifi/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -from .env import AttrDict -from .models import Generator - -if __name__ == "__main__": - pass diff --git a/spaces/rainy3/chatgpt_academic/crazy_functions/crazy_utils.py b/spaces/rainy3/chatgpt_academic/crazy_functions/crazy_utils.py deleted file mode 100644 index 91bd4afead340d987c4bd9e3b4b3bf2ad0fa701c..0000000000000000000000000000000000000000 --- a/spaces/rainy3/chatgpt_academic/crazy_functions/crazy_utils.py +++ /dev/null @@ -1,540 +0,0 @@ -import traceback -from toolbox import update_ui - -def input_clipping(inputs, history, max_token_limit): - import tiktoken - import numpy as np - from toolbox import get_conf - enc = tiktoken.encoding_for_model(*get_conf('LLM_MODEL')) - def get_token_num(txt): return len(enc.encode(txt)) - - mode = 'input-and-history' - # 当 输入部分的token占比 小于 全文的一半时,只裁剪历史 - input_token_num = get_token_num(inputs) - if input_token_num < max_token_limit//2: - mode = 'only-history' - max_token_limit = max_token_limit - input_token_num - - everything = [inputs] if mode == 'input-and-history' else [''] - everything.extend(history) - n_token = get_token_num('\n'.join(everything)) - everything_token = [get_token_num(e) for e in everything] - delta = max(everything_token) // 16 # 截断时的颗粒度 - - while n_token > max_token_limit: - where = np.argmax(everything_token) - encoded = enc.encode(everything[where]) - clipped_encoded = encoded[:len(encoded)-delta] - everything[where] = enc.decode(clipped_encoded)[:-1] # -1 to remove the may-be illegal char - everything_token[where] = get_token_num(everything[where]) - n_token = get_token_num('\n'.join(everything)) - - if mode == 'input-and-history': - inputs = everything[0] - else: - pass - history = everything[1:] - return inputs, history - -def request_gpt_model_in_new_thread_with_ui_alive( - inputs, inputs_show_user, llm_kwargs, - chatbot, history, sys_prompt, refresh_interval=0.2, - handle_token_exceed=True, - retry_times_at_unknown_error=2, - ): - """ - Request GPT model,请求GPT模型同时维持用户界面活跃。 - - 输入参数 Args (以_array结尾的输入变量都是列表,列表长度为子任务的数量,执行时,会把列表拆解,放到每个子线程中分别执行): - inputs (string): List of inputs (输入) - inputs_show_user (string): List of inputs to show user(展现在报告中的输入,借助此参数,在汇总报告中隐藏啰嗦的真实输入,增强报告的可读性) - top_p (float): Top p value for sampling from model distribution (GPT参数,浮点数) - temperature (float): Temperature value for sampling from model distribution(GPT参数,浮点数) - chatbot: chatbot inputs and 
outputs (用户界面对话窗口句柄,用于数据流可视化) - history (list): List of chat history (历史,对话历史列表) - sys_prompt (string): List of system prompts (系统输入,列表,用于输入给GPT的前提提示,比如你是翻译官怎样怎样) - refresh_interval (float, optional): Refresh interval for UI (default: 0.2) (刷新时间间隔频率,建议低于1,不可高于3,仅仅服务于视觉效果) - handle_token_exceed:是否自动处理token溢出的情况,如果选择自动处理,则会在溢出时暴力截断,默认开启 - retry_times_at_unknown_error:失败时的重试次数 - - 输出 Returns: - future: 输出,GPT返回的结果 - """ - import time - from concurrent.futures import ThreadPoolExecutor - from request_llm.bridge_chatgpt import predict_no_ui_long_connection - # 用户反馈 - chatbot.append([inputs_show_user, ""]) - msg = '正常' - yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面 - executor = ThreadPoolExecutor(max_workers=16) - mutable = ["", time.time()] - def _req_gpt(inputs, history, sys_prompt): - retry_op = retry_times_at_unknown_error - exceeded_cnt = 0 - while True: - try: - # 【第一种情况】:顺利完成 - result = predict_no_ui_long_connection( - inputs=inputs, llm_kwargs=llm_kwargs, - history=history, sys_prompt=sys_prompt, observe_window=mutable) - return result - except ConnectionAbortedError as token_exceeded_error: - # 【第二种情况】:Token溢出 - if handle_token_exceed: - exceeded_cnt += 1 - # 【选择处理】 尝试计算比例,尽可能多地保留文本 - from toolbox import get_reduce_token_percent - p_ratio, n_exceed = get_reduce_token_percent(str(token_exceeded_error)) - MAX_TOKEN = 4096 - EXCEED_ALLO = 512 + 512 * exceeded_cnt - inputs, history = input_clipping(inputs, history, max_token_limit=MAX_TOKEN-EXCEED_ALLO) - mutable[0] += f'[Local Message] 警告,文本过长将进行截断,Token溢出数:{n_exceed}。\n\n' - continue # 返回重试 - else: - # 【选择放弃】 - tb_str = '```\n' + traceback.format_exc() + '```' - mutable[0] += f"[Local Message] 警告,在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n" - return mutable[0] # 放弃 - except: - # 【第三种情况】:其他错误:重试几次 - tb_str = '```\n' + traceback.format_exc() + '```' - mutable[0] += f"[Local Message] 警告,在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n" - if retry_op > 0: - retry_op -= 1 - mutable[0] += f"[Local Message] 重试中 {retry_times_at_unknown_error-retry_op}/{retry_times_at_unknown_error}:\n\n" - time.sleep(5) - continue # 返回重试 - else: - time.sleep(5) - return mutable[0] # 放弃 - - future = executor.submit(_req_gpt, inputs, history, sys_prompt) - while True: - # yield一次以刷新前端页面 - time.sleep(refresh_interval) - # “喂狗”(看门狗) - mutable[1] = time.time() - if future.done(): - break - chatbot[-1] = [chatbot[-1][0], mutable[0]] - yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面 - - final_result = future.result() - chatbot[-1] = [chatbot[-1][0], final_result] - yield from update_ui(chatbot=chatbot, history=[]) # 如果最后成功了,则删除报错信息 - return final_result - - -def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( - inputs_array, inputs_show_user_array, llm_kwargs, - chatbot, history_array, sys_prompt_array, - refresh_interval=0.2, max_workers=10, scroller_max_len=30, - handle_token_exceed=True, show_user_at_complete=False, - retry_times_at_unknown_error=2, - ): - """ - Request GPT model using multiple threads with UI and high efficiency - 请求GPT模型的[多线程]版。 - 具备以下功能: - 实时在UI上反馈远程数据流 - 使用线程池,可调节线程池的大小避免openai的流量限制错误 - 处理中途中止的情况 - 网络等出问题时,会把traceback和已经接收的数据转入输出 - - 输入参数 Args (以_array结尾的输入变量都是列表,列表长度为子任务的数量,执行时,会把列表拆解,放到每个子线程中分别执行): - inputs_array (list): List of inputs (每个子任务的输入) - inputs_show_user_array (list): List of inputs to show user(每个子任务展现在报告中的输入,借助此参数,在汇总报告中隐藏啰嗦的真实输入,增强报告的可读性) - llm_kwargs: llm_kwargs参数 - chatbot: chatbot (用户界面对话窗口句柄,用于数据流可视化) - history_array (list): List of chat history (历史对话输入,双层列表,第一层列表是子任务分解,第二层列表是对话历史) - 
sys_prompt_array (list): List of system prompts (系统输入,列表,用于输入给GPT的前提提示,比如你是翻译官怎样怎样) - refresh_interval (float, optional): Refresh interval for UI (default: 0.2) (刷新时间间隔频率,建议低于1,不可高于3,仅仅服务于视觉效果) - max_workers (int, optional): Maximum number of threads (default: 10) (最大线程数,如果子任务非常多,需要用此选项防止高频地请求openai导致错误) - scroller_max_len (int, optional): Maximum length for scroller (default: 30)(数据流的显示最后收到的多少个字符,仅仅服务于视觉效果) - handle_token_exceed (bool, optional): (是否在输入过长时,自动缩减文本) - handle_token_exceed:是否自动处理token溢出的情况,如果选择自动处理,则会在溢出时暴力截断,默认开启 - show_user_at_complete (bool, optional): (在结束时,把完整输入-输出结果显示在聊天框) - retry_times_at_unknown_error:子任务失败时的重试次数 - - 输出 Returns: - list: List of GPT model responses (每个子任务的输出汇总,如果某个子任务出错,response中会携带traceback报错信息,方便调试和定位问题。) - """ - import time, random - from concurrent.futures import ThreadPoolExecutor - from request_llm.bridge_chatgpt import predict_no_ui_long_connection - assert len(inputs_array) == len(history_array) - assert len(inputs_array) == len(sys_prompt_array) - executor = ThreadPoolExecutor(max_workers=max_workers) - n_frag = len(inputs_array) - # 用户反馈 - chatbot.append(["请开始多线程操作。", ""]) - msg = '正常' - yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面 - # 异步原子 - mutable = [["", time.time(), "等待中"] for _ in range(n_frag)] - - def _req_gpt(index, inputs, history, sys_prompt): - gpt_say = "" - retry_op = retry_times_at_unknown_error - exceeded_cnt = 0 - mutable[index][2] = "执行中" - while True: - try: - # 【第一种情况】:顺利完成 - # time.sleep(10); raise RuntimeError("测试") - gpt_say = predict_no_ui_long_connection( - inputs=inputs, llm_kwargs=llm_kwargs, history=history, - sys_prompt=sys_prompt, observe_window=mutable[index], console_slience=True - ) - mutable[index][2] = "已成功" - return gpt_say - except ConnectionAbortedError as token_exceeded_error: - # 【第二种情况】:Token溢出, - if handle_token_exceed: - exceeded_cnt += 1 - # 【选择处理】 尝试计算比例,尽可能多地保留文本 - from toolbox import get_reduce_token_percent - p_ratio, n_exceed = get_reduce_token_percent(str(token_exceeded_error)) - MAX_TOKEN = 4096 - EXCEED_ALLO = 512 + 512 * exceeded_cnt - inputs, history = input_clipping(inputs, history, max_token_limit=MAX_TOKEN-EXCEED_ALLO) - gpt_say += f'[Local Message] 警告,文本过长将进行截断,Token溢出数:{n_exceed}。\n\n' - mutable[index][2] = f"截断重试" - continue # 返回重试 - else: - # 【选择放弃】 - tb_str = '```\n' + traceback.format_exc() + '```' - gpt_say += f"[Local Message] 警告,线程{index}在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n" - if len(mutable[index][0]) > 0: gpt_say += "此线程失败前收到的回答:\n\n" + mutable[index][0] - mutable[index][2] = "输入过长已放弃" - return gpt_say # 放弃 - except: - # 【第三种情况】:其他错误 - tb_str = '```\n' + traceback.format_exc() + '```' - gpt_say += f"[Local Message] 警告,线程{index}在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n" - if len(mutable[index][0]) > 0: gpt_say += "此线程失败前收到的回答:\n\n" + mutable[index][0] - if retry_op > 0: - retry_op -= 1 - wait = random.randint(5, 20) - for i in range(wait):# 也许等待十几秒后,情况会好转 - mutable[index][2] = f"等待重试 {wait-i}"; time.sleep(1) - mutable[index][2] = f"重试中 {retry_times_at_unknown_error-retry_op}/{retry_times_at_unknown_error}" - continue # 返回重试 - else: - mutable[index][2] = "已失败" - wait = 5 - time.sleep(5) - return gpt_say # 放弃 - - # 异步任务开始 - futures = [executor.submit(_req_gpt, index, inputs, history, sys_prompt) for index, inputs, history, sys_prompt in zip( - range(len(inputs_array)), inputs_array, history_array, sys_prompt_array)] - cnt = 0 - while True: - # yield一次以刷新前端页面 - time.sleep(refresh_interval) - cnt += 1 - worker_done = [h.done() for h in futures] - if all(worker_done): - 
executor.shutdown() - break - # 更好的UI视觉效果 - observe_win = [] - # print([mutable[thread_index][2] for thread_index, _ in enumerate(worker_done)]) - # 每个线程都要“喂狗”(看门狗) - for thread_index, _ in enumerate(worker_done): - mutable[thread_index][1] = time.time() - # 在前端打印些好玩的东西 - for thread_index, _ in enumerate(worker_done): - print_something_really_funny = "[ ...`"+mutable[thread_index][0][-scroller_max_len:].\ - replace('\n', '').replace('```', '...').replace( - ' ', '.').replace('
          ', '.....').replace('$', '.')+"`... ]" - observe_win.append(print_something_really_funny) - stat_str = ''.join([f'`{mutable[thread_index][2]}`: {obs}\n\n' - if not done else f'`{mutable[thread_index][2]}`\n\n' - for thread_index, done, obs in zip(range(len(worker_done)), worker_done, observe_win)]) - chatbot[-1] = [chatbot[-1][0], f'多线程操作已经开始,完成情况: \n\n{stat_str}' + ''.join(['.']*(cnt % 10+1))] - msg = "正常" - yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面 - # 异步任务结束 - gpt_response_collection = [] - for inputs_show_user, f in zip(inputs_show_user_array, futures): - gpt_res = f.result() - gpt_response_collection.extend([inputs_show_user, gpt_res]) - - if show_user_at_complete: - for inputs_show_user, f in zip(inputs_show_user_array, futures): - gpt_res = f.result() - chatbot.append([inputs_show_user, gpt_res]) - yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面 - time.sleep(1) - return gpt_response_collection - - -def WithRetry(f): - """ - 装饰器函数,用于自动重试。 - """ - def decorated(retry, res_when_fail, *args, **kwargs): - assert retry >= 0 - while True: - try: - res = yield from f(*args, **kwargs) - return res - except: - retry -= 1 - if retry<0: - print("达到最大重试次数") - break - else: - print("重试中……") - continue - return res_when_fail - return decorated - - -def breakdown_txt_to_satisfy_token_limit(txt, get_token_fn, limit): - def cut(txt_tocut, must_break_at_empty_line): # 递归 - if get_token_fn(txt_tocut) <= limit: - return [txt_tocut] - else: - lines = txt_tocut.split('\n') - estimated_line_cut = limit / get_token_fn(txt_tocut) * len(lines) - estimated_line_cut = int(estimated_line_cut) - for cnt in reversed(range(estimated_line_cut)): - if must_break_at_empty_line: - if lines[cnt] != "": - continue - print(cnt) - prev = "\n".join(lines[:cnt]) - post = "\n".join(lines[cnt:]) - if get_token_fn(prev) < limit: - break - if cnt == 0: - print('what the fuck ?') - raise RuntimeError("存在一行极长的文本!") - # print(len(post)) - # 列表递归接龙 - result = [prev] - result.extend(cut(post, must_break_at_empty_line)) - return result - try: - return cut(txt, must_break_at_empty_line=True) - except RuntimeError: - return cut(txt, must_break_at_empty_line=False) - - -def breakdown_txt_to_satisfy_token_limit_for_pdf(txt, get_token_fn, limit): - def cut(txt_tocut, must_break_at_empty_line): # 递归 - if get_token_fn(txt_tocut) <= limit: - return [txt_tocut] - else: - lines = txt_tocut.split('\n') - estimated_line_cut = limit / get_token_fn(txt_tocut) * len(lines) - estimated_line_cut = int(estimated_line_cut) - cnt = 0 - for cnt in reversed(range(estimated_line_cut)): - if must_break_at_empty_line: - if lines[cnt] != "": - continue - print(cnt) - prev = "\n".join(lines[:cnt]) - post = "\n".join(lines[cnt:]) - if get_token_fn(prev) < limit: - break - if cnt == 0: - # print('what the fuck ? 
存在一行极长的文本!') - raise RuntimeError("存在一行极长的文本!") - # print(len(post)) - # 列表递归接龙 - result = [prev] - result.extend(cut(post, must_break_at_empty_line)) - return result - try: - return cut(txt, must_break_at_empty_line=True) - except RuntimeError: - try: - return cut(txt, must_break_at_empty_line=False) - except RuntimeError: - # 这个中文的句号是故意的,作为一个标识而存在 - res = cut(txt.replace('.', '。\n'), must_break_at_empty_line=False) - return [r.replace('。\n', '.') for r in res] - - - -def read_and_clean_pdf_text(fp): - """ - 这个函数用于分割pdf,用了很多trick,逻辑较乱,效果奇好 - - **输入参数说明** - - `fp`:需要读取和清理文本的pdf文件路径 - - **输出参数说明** - - `meta_txt`:清理后的文本内容字符串 - - `page_one_meta`:第一页清理后的文本内容列表 - - **函数功能** - 读取pdf文件并清理其中的文本内容,清理规则包括: - - 提取所有块元的文本信息,并合并为一个字符串 - - 去除短块(字符数小于100)并替换为回车符 - - 清理多余的空行 - - 合并小写字母开头的段落块并替换为空格 - - 清除重复的换行 - - 将每个换行符替换为两个换行符,使每个段落之间有两个换行符分隔 - """ - import fitz, copy - import re - import numpy as np - from colorful import print亮黄, print亮绿 - fc = 0 # Index 0 文本 - fs = 1 # Index 1 字体 - fb = 2 # Index 2 框框 - REMOVE_FOOT_NOTE = True # 是否丢弃掉 不是正文的内容 (比正文字体小,如参考文献、脚注、图注等) - REMOVE_FOOT_FFSIZE_PERCENT = 0.95 # 小于正文的?时,判定为不是正文(有些文章的正文部分字体大小不是100%统一的,有肉眼不可见的小变化) - def primary_ffsize(l): - """ - 提取文本块主字体 - """ - fsize_statiscs = {} - for wtf in l['spans']: - if wtf['size'] not in fsize_statiscs: fsize_statiscs[wtf['size']] = 0 - fsize_statiscs[wtf['size']] += len(wtf['text']) - return max(fsize_statiscs, key=fsize_statiscs.get) - - def ffsize_same(a,b): - """ - 提取字体大小是否近似相等 - """ - return abs((a-b)/max(a,b)) < 0.02 - - with fitz.open(fp) as doc: - meta_txt = [] - meta_font = [] - - meta_line = [] - meta_span = [] - ############################## <第 1 步,搜集初始信息> ################################## - for index, page in enumerate(doc): - # file_content += page.get_text() - text_areas = page.get_text("dict") # 获取页面上的文本信息 - for t in text_areas['blocks']: - if 'lines' in t: - pf = 998 - for l in t['lines']: - txt_line = "".join([wtf['text'] for wtf in l['spans']]) - pf = primary_ffsize(l) - meta_line.append([txt_line, pf, l['bbox'], l]) - for wtf in l['spans']: # for l in t['lines']: - meta_span.append([wtf['text'], wtf['size'], len(wtf['text'])]) - # meta_line.append(["NEW_BLOCK", pf]) - # 块元提取 for each word segment with in line for each line cross-line words for each block - meta_txt.extend([" ".join(["".join([wtf['text'] for wtf in l['spans']]) for l in t['lines']]).replace( - '- ', '') for t in text_areas['blocks'] if 'lines' in t]) - meta_font.extend([np.mean([np.mean([wtf['size'] for wtf in l['spans']]) - for l in t['lines']]) for t in text_areas['blocks'] if 'lines' in t]) - if index == 0: - page_one_meta = [" ".join(["".join([wtf['text'] for wtf in l['spans']]) for l in t['lines']]).replace( - '- ', '') for t in text_areas['blocks'] if 'lines' in t] - - ############################## <第 2 步,获取正文主字体> ################################## - fsize_statiscs = {} - for span in meta_span: - if span[1] not in fsize_statiscs: fsize_statiscs[span[1]] = 0 - fsize_statiscs[span[1]] += span[2] - main_fsize = max(fsize_statiscs, key=fsize_statiscs.get) - if REMOVE_FOOT_NOTE: - give_up_fize_threshold = main_fsize * REMOVE_FOOT_FFSIZE_PERCENT - - ############################## <第 3 步,切分和重新整合> ################################## - mega_sec = [] - sec = [] - for index, line in enumerate(meta_line): - if index == 0: - sec.append(line[fc]) - continue - if REMOVE_FOOT_NOTE: - if meta_line[index][fs] <= give_up_fize_threshold: - continue - if ffsize_same(meta_line[index][fs], meta_line[index-1][fs]): - # 尝试识别段落 - if 
meta_line[index][fc].endswith('.') and\ - (meta_line[index-1][fc] != 'NEW_BLOCK') and \ - (meta_line[index][fb][2] - meta_line[index][fb][0]) < (meta_line[index-1][fb][2] - meta_line[index-1][fb][0]) * 0.7: - sec[-1] += line[fc] - sec[-1] += "\n\n" - else: - sec[-1] += " " - sec[-1] += line[fc] - else: - if (index+1 < len(meta_line)) and \ - meta_line[index][fs] > main_fsize: - # 单行 + 字体大 - mega_sec.append(copy.deepcopy(sec)) - sec = [] - sec.append("# " + line[fc]) - else: - # 尝试识别section - if meta_line[index-1][fs] > meta_line[index][fs]: - sec.append("\n" + line[fc]) - else: - sec.append(line[fc]) - mega_sec.append(copy.deepcopy(sec)) - - finals = [] - for ms in mega_sec: - final = " ".join(ms) - final = final.replace('- ', ' ') - finals.append(final) - meta_txt = finals - - ############################## <第 4 步,乱七八糟的后处理> ################################## - def 把字符太少的块清除为回车(meta_txt): - for index, block_txt in enumerate(meta_txt): - if len(block_txt) < 100: - meta_txt[index] = '\n' - return meta_txt - meta_txt = 把字符太少的块清除为回车(meta_txt) - - def 清理多余的空行(meta_txt): - for index in reversed(range(1, len(meta_txt))): - if meta_txt[index] == '\n' and meta_txt[index-1] == '\n': - meta_txt.pop(index) - return meta_txt - meta_txt = 清理多余的空行(meta_txt) - - def 合并小写开头的段落块(meta_txt): - def starts_with_lowercase_word(s): - pattern = r"^[a-z]+" - match = re.match(pattern, s) - if match: - return True - else: - return False - for _ in range(100): - for index, block_txt in enumerate(meta_txt): - if starts_with_lowercase_word(block_txt): - if meta_txt[index-1] != '\n': - meta_txt[index-1] += ' ' - else: - meta_txt[index-1] = '' - meta_txt[index-1] += meta_txt[index] - meta_txt[index] = '\n' - return meta_txt - meta_txt = 合并小写开头的段落块(meta_txt) - meta_txt = 清理多余的空行(meta_txt) - - meta_txt = '\n'.join(meta_txt) - # 清除重复的换行 - for _ in range(5): - meta_txt = meta_txt.replace('\n\n', '\n') - - # 换行 -> 双换行 - meta_txt = meta_txt.replace('\n', '\n\n') - - ############################## <第 5 步,展示分割效果> ################################## - for f in finals: - print亮黄(f) - print亮绿('***************************') - - return meta_txt, page_one_meta diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/1st-studio-siberian-mouses-m-41-wmv 4.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/1st-studio-siberian-mouses-m-41-wmv 4.md deleted file mode 100644 index 67da28fe1c3786df59b4ab70fbfe60f12bdf6bab..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/1st-studio-siberian-mouses-m-41-wmv 4.md +++ /dev/null @@ -1,6 +0,0 @@ -

          1st-studio-siberian-mouses-m-41-wmv 4


          Download Zip ····· https://urlgoal.com/2uCLHP



          -
          - 3cee63e6c2
          -
          -
          -

          diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Assetto Corsa Graphics Mod PORTABLE.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Assetto Corsa Graphics Mod PORTABLE.md deleted file mode 100644 index 51f899d7316746749d53bb472ecb74c00c833af3..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Assetto Corsa Graphics Mod PORTABLE.md +++ /dev/null @@ -1,28 +0,0 @@ - -

          How to Enhance Your Assetto Corsa Experience with Graphics Mods

          -

          Assetto Corsa is a racing simulation game that offers realistic physics, stunning visuals and a variety of cars and tracks to choose from. But if you want to take your Assetto Corsa experience to the next level, you might want to try some graphics mods that can improve the graphics quality, add new effects and features, and make the game look even more realistic and immersive.

          -

          assetto corsa graphics mod


          Download Ziphttps://urlgoal.com/2uCN4c



          -

          In this article, we will show you some of the best graphics mods for Assetto Corsa and how to install them. These mods will enhance your game's lighting, shadows, reflections, weather, textures, post-processing and more. You will need a decent PC to run these mods smoothly, so make sure your system meets the minimum requirements before you proceed.

          -

          Best Graphics Mods for Assetto Corsa

          -

          There are many graphics mods for Assetto Corsa, but here are some of the most popular and recommended ones:

          -
            -
          • Sol: Sol is a dynamic weather and lighting system that simulates realistic day-night cycles, seasons, clouds, fog, rain, snow and more. Sol also adds new features such as ambient occlusion, lens flares, god rays, bloom and glare. Sol works with any track and car in Assetto Corsa and is compatible with other graphics mods.
          • -
          • Custom Shaders Patch: Custom Shaders Patch is a collection of shaders that improve the graphics quality and performance of Assetto Corsa. Custom Shaders Patch adds new effects such as volumetric lights, dynamic shadows, screen-space reflections, depth of field, motion blur and more. Custom Shaders Patch also enables some features that are not available in the vanilla game such as grass FX, smoke FX, particles FX and more.
          • -
          • Content Manager: Content Manager is a mod manager and launcher for Assetto Corsa that makes it easier to install and manage mods. Content Manager also offers many customization options and features such as custom filters, skins, setups, replays, screenshots and more. Content Manager is essential for using Sol and Custom Shaders Patch.
          • -
          -

          How to Install Graphics Mods for Assetto Corsa

          -

          To install graphics mods for Assetto Corsa, follow these steps (a short scripted sketch of the extract-and-copy part follows the list):

          -

          -
            -
          1. Download Content Manager from here and install it.
          2. -
          3. Download Sol from here and extract the zip file.
          4. -
          5. Copy the content folder from the Sol zip file to your Assetto Corsa installation folder (usually C:\Program Files (x86)\Steam\steamapps\common\assettocorsa).
          6. -
          7. Download Custom Shaders Patch from here and extract the zip file.
          8. -
          9. Copy the content folder from the Custom Shaders Patch zip file to your Assetto Corsa installation folder (overwrite any existing files).
          10. -
          11. Launch Content Manager and go to Settings > Custom Shaders Patch > General Patch Settings.
          12. -
          13. Click on Install next to Sol integration.
          14. -
          15. Click on Apply at the bottom right corner.
          16. -
          17. Go back to Content Manager main menu and enjoy your enhanced Assetto Corsa experience!
          18. -
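          If you prefer to script the extract-and-copy steps above, here is a minimal Python sketch. The archive file names, the downloads folder, and the Assetto Corsa installation path are assumptions for illustration and may differ on your system; the mods themselves still need to be enabled through Content Manager as described above.

```python
import shutil
import zipfile
from pathlib import Path

# Assumed locations -- adjust to match your own downloads folder and Steam library.
AC_DIR = Path(r"C:\Program Files (x86)\Steam\steamapps\common\assettocorsa")
DOWNLOADS = Path.home() / "Downloads"

def install_mod(archive_name: str) -> None:
    """Extract a mod archive and merge its 'content' folder into the game folder."""
    extract_dir = DOWNLOADS / archive_name.replace(".zip", "")
    with zipfile.ZipFile(DOWNLOADS / archive_name) as zf:
        zf.extractall(extract_dir)
    # Merge the extracted 'content' folder into the game's, overwriting existing files.
    shutil.copytree(extract_dir / "content", AC_DIR / "content", dirs_exist_ok=True)

# Hypothetical archive file names for Sol and Custom Shaders Patch.
install_mod("sol.zip")
install_mod("custom_shaders_patch.zip")
```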

          d5da3c52bf
          -
          -
          \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Autodata 2011 Na Srpskom Download Free Torent.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Autodata 2011 Na Srpskom Download Free Torent.md deleted file mode 100644 index 685a6e3fdae71ce25de55c899dc2fd5b6e5de59d..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Autodata 2011 Na Srpskom Download Free Torent.md +++ /dev/null @@ -1,20 +0,0 @@ -

          Autodata 2011 Na Srpskom Download Free Torent


          Download ✶✶✶ https://urlgoal.com/2uCJXL



          -
          -Which media player can open this video on your computer, which version should you have, and what codec do you need? You can use the information below to fix video playback problems. - -Download VLC Media Player - -VLC Media Player opens almost everything: Blu-ray and DVD movies, audio CDs, videos, photos and subtitles, plus MPEG, AVI, WMV and most other audio and video formats you will find on the internet. Download the free version and use it for as long as you like. - -Play Videos On Windows 8 With VLC Player - -When the "Open Network Streams" option is enabled in the main menu, VLC can stream content from many sources. It plays the most common file types - MP3, AVI, FLV, DivX, QuickTime and others - as well as audio CDs, subtitle files and discs from your optical drive. VLC is easy to install and use, without a confusing number of options, and it is a cross-platform media player that handles most audio and video formats. The application is quick and responsive, the interface is simple, and it provides the tools needed to play almost any file format. It is free and open-source software, although its Blu-ray playback support is still limited. - -How To Fix - VLC Media Player Windows 10 - -You may need to change your network settings so that the required streaming protocols are enabled. VLC can also use your internet connection or a LAN connection to stream content from other devices on your network. Its interface is simpler and less cluttered than something like Windows Media Player, and it supports more video and audio file formats as well as subtitles. 4fefd39f24
          -
          -
          -

          diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Binkdx8surfacetype4 Ahnenforschung Regen.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Binkdx8surfacetype4 Ahnenforschung Regen.md deleted file mode 100644 index a59bc28f120dd01699a6fb83020546d29225cd1e..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Binkdx8surfacetype4 Ahnenforschung Regen.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Binkdx8surfacetype@4 ahnenforschung regen


          Download ❤❤❤ https://urlgoal.com/2uCLt4



          -
          - d5da3c52bf
          -
          -
          -

          diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Factorio V0.14.21 (32 And 64 Bits) No Survey No Password 2019.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Factorio V0.14.21 (32 And 64 Bits) No Survey No Password 2019.md deleted file mode 100644 index 148c5e8f46e72de727aa67df14aacd5bfdd1d595..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Factorio V0.14.21 (32 And 64 Bits) No Survey No Password 2019.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Factorio v0.14.21 (32 and 64 bits) no survey no password 2019


          Download Ziphttps://urlgoal.com/2uCJNa



          - - 4fefd39f24
          -
          -
          -

          diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Herzog - Ein Herz Fuer Drogen-DE-2011-YSP.rar.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Herzog - Ein Herz Fuer Drogen-DE-2011-YSP.rar.md deleted file mode 100644 index a23efce28b2772282a7a16acf1284af7f5f9ecfa..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Herzog - Ein Herz Fuer Drogen-DE-2011-YSP.rar.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Herzog - Ein Herz Fuer Drogen-DE-2011-YSP.rar


          Download File ✯✯✯ https://urlgoal.com/2uCJrC



          - -... for having the most positive influence in the world in 2011, 2013, and 2014. ... 1980s, New German Cinema directors such as Volker Schlöndorff, Werner Herzog, ... 1fdad05405
          -
          -
          -

          diff --git a/spaces/reha/Stick_Tech/vdecoder/hifigan/utils.py b/spaces/reha/Stick_Tech/vdecoder/hifigan/utils.py deleted file mode 100644 index 84bff024f4d2e2de194b2a88ee7bbe5f0d33f67c..0000000000000000000000000000000000000000 --- a/spaces/reha/Stick_Tech/vdecoder/hifigan/utils.py +++ /dev/null @@ -1,68 +0,0 @@ -import glob -import os -import matplotlib -import torch -from torch.nn.utils import weight_norm -matplotlib.use("Agg") -import matplotlib.pylab as plt - - -def plot_spectrogram(spectrogram): - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - - fig.canvas.draw() - plt.close() - - return fig - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def apply_weight_norm(m): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - weight_norm(m) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def load_checkpoint(filepath, device): - assert os.path.isfile(filepath) - print("Loading '{}'".format(filepath)) - checkpoint_dict = torch.load(filepath, map_location=device) - print("Complete.") - return checkpoint_dict - - -def save_checkpoint(filepath, obj): - print("Saving checkpoint to {}".format(filepath)) - torch.save(obj, filepath) - print("Complete.") - - -def del_old_checkpoints(cp_dir, prefix, n_models=2): - pattern = os.path.join(cp_dir, prefix + '????????') - cp_list = glob.glob(pattern) # get checkpoint paths - cp_list = sorted(cp_list)# sort by iter - if len(cp_list) > n_models: # if more than n_models models are found - for cp in cp_list[:-n_models]:# delete the oldest models other than lastest n_models - open(cp, 'w').close()# empty file contents - os.unlink(cp)# delete file (move to trash when using Colab) - - -def scan_checkpoint(cp_dir, prefix): - pattern = os.path.join(cp_dir, prefix + '????????') - cp_list = glob.glob(pattern) - if len(cp_list) == 0: - return None - return sorted(cp_list)[-1] - diff --git a/spaces/renumics/cifar10-outlier-low/README.md b/spaces/renumics/cifar10-outlier-low/README.md deleted file mode 100644 index 0ce3249ab875653b5f44dfcacc0d334e37f126a6..0000000000000000000000000000000000000000 --- a/spaces/renumics/cifar10-outlier-low/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: Explore Outliers in CIFAR-10 with Spotlight -emoji: 📊 -colorFrom: gray -colorTo: blue -sdk: docker -pinned: false -license: mit -app_file: run.py -datasets: -- renumics/cifar10-outlier -- cifar10 -tags: -- renumics -- spotlight -- EDA -duplicated_from: renumics/cifar10-outlier ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/riccorl/relik-entity-linking/relik/retriever/trainer/__init__.py b/spaces/riccorl/relik-entity-linking/relik/retriever/trainer/__init__.py deleted file mode 100644 index f1b18bf79091418217ae2bb782c3796dfa8b5b56..0000000000000000000000000000000000000000 --- a/spaces/riccorl/relik-entity-linking/relik/retriever/trainer/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from relik.retriever.trainer.train import RetrieverTrainer diff --git a/spaces/ritwikbiswas/incoder-complete/static/index.html b/spaces/ritwikbiswas/incoder-complete/static/index.html deleted file mode 100644 index a1d70fc28fb1018c744a937a77e8449f4ae81a82..0000000000000000000000000000000000000000 --- 
a/spaces/ritwikbiswas/incoder-complete/static/index.html +++ /dev/null @@ -1,586 +0,0 @@ - InCoder demo page (markup lost in extraction); visible text only: "InCoder", slider values "64" and "0.6", "Syntax:", "Messages", "Generation queued, please wait..."
          - - - - diff --git a/spaces/robin0307/MMOCR/configs/_base_/det_pipelines/maskrcnn_pipeline.py b/spaces/robin0307/MMOCR/configs/_base_/det_pipelines/maskrcnn_pipeline.py deleted file mode 100644 index fff3e071ea115843752f34de8141fa982b8ad14b..0000000000000000000000000000000000000000 --- a/spaces/robin0307/MMOCR/configs/_base_/det_pipelines/maskrcnn_pipeline.py +++ /dev/null @@ -1,57 +0,0 @@ -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) - -train_pipeline = [ - dict(type='LoadImageFromFile', color_type='color_ignore_orientation'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict( - type='ScaleAspectJitter', - img_scale=None, - keep_ratio=False, - resize_type='indep_sample_in_range', - scale_range=(640, 2560)), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict( - type='RandomCropInstances', - target_size=(640, 640), - mask_type='union_all', - instance_key='gt_masks'), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] - -# for ctw1500 -img_scale_ctw1500 = (1600, 1600) -test_pipeline_ctw1500 = [ - dict(type='LoadImageFromFile', color_type='color_ignore_orientation'), - dict( - type='MultiScaleFlipAug', - img_scale=img_scale_ctw1500, # used by Resize - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] - -# for icdar2015 -img_scale_icdar2015 = (1920, 1920) -test_pipeline_icdar2015 = [ - dict(type='LoadImageFromFile', color_type='color_ignore_orientation'), - dict( - type='MultiScaleFlipAug', - img_scale=img_scale_icdar2015, # used by Resize - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/bbox/assigners/__init__.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/bbox/assigners/__init__.py deleted file mode 100644 index d6480a783be1afca2e7d414c24c44b20744db779..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/bbox/assigners/__init__.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .approx_max_iou_assigner import ApproxMaxIoUAssigner -from .ascend_assign_result import AscendAssignResult -from .ascend_max_iou_assigner import AscendMaxIoUAssigner -from .assign_result import AssignResult -from .atss_assigner import ATSSAssigner -from .base_assigner import BaseAssigner -from .center_region_assigner import CenterRegionAssigner -from .grid_assigner import GridAssigner -from .hungarian_assigner import HungarianAssigner -from .mask_hungarian_assigner import MaskHungarianAssigner -from .max_iou_assigner import MaxIoUAssigner -from .point_assigner import PointAssigner -from .region_assigner import RegionAssigner -from .sim_ota_assigner import SimOTAAssigner -from .task_aligned_assigner import TaskAlignedAssigner -from .uniform_assigner import UniformAssigner - -__all__ = [ - 'BaseAssigner', 'MaxIoUAssigner', 'ApproxMaxIoUAssigner', 'AssignResult', - 'PointAssigner', 'ATSSAssigner', 'CenterRegionAssigner', 'GridAssigner', - 'HungarianAssigner', 'RegionAssigner', 'UniformAssigner', 'SimOTAAssigner', - 'TaskAlignedAssigner', 'MaskHungarianAssigner', 'AscendAssignResult', - 'AscendMaxIoUAssigner' -] diff --git a/spaces/ronvolutional/http-server/dataset.py b/spaces/ronvolutional/http-server/dataset.py deleted file mode 100644 index 26d9108c537d6fbb2b054e23bc169e1c4fd2aa07..0000000000000000000000000000000000000000 --- a/spaces/ronvolutional/http-server/dataset.py +++ /dev/null @@ -1,19 +0,0 @@ -from datasets import load_dataset - -dataset = load_dataset("emotion", split="train") - -emotions = dataset.info.features["label"].names - -def query_emotion(start, end): - rows = dataset[start:end] - texts, labels = [rows[k] for k in rows.keys()] - - observations = [] - - for i, text in enumerate(texts): - observations.append({ - "text": text, - "emotion": emotions[labels[i]], - }) - - return observations diff --git a/spaces/rorallitri/biomedical-language-models/logs/HD Online Player (Ls Magazine Issue 08 Happy Birthday ).md b/spaces/rorallitri/biomedical-language-models/logs/HD Online Player (Ls Magazine Issue 08 Happy Birthday ).md deleted file mode 100644 index 00abdfafe19623f2b25618fae8deb37ea6800bdd..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/HD Online Player (Ls Magazine Issue 08 Happy Birthday ).md +++ /dev/null @@ -1,8 +0,0 @@ -

          HD Online Player (Ls Magazine Issue 08 Happy Birthday )


          Download Zip ===> https://tinurll.com/2uznbv



          -
          -April 16, 2564 BC - HD Online Player (Ls Magazine Issue 4 Movie 1-8) DOWNLOAD Early intercourse seems to be part of a group of problem behaviors in adolescents. This study, conducted in California, examined data regarding sexual behavior and behavior in adolescents (up to 14 years of age). -The study used a digital video camera to record one hour of video footage and see how it affects children's behavior. -The results show that early intercourse was associated with higher levels of aggressive behavior, higher levels of hostility, and higher levels of impulsivity and anxiety. 8a78ff9644
          -
          -
          -

          diff --git a/spaces/runa91/barc_gradio/src/combined_model/model_shape_v7.py b/spaces/runa91/barc_gradio/src/combined_model/model_shape_v7.py deleted file mode 100644 index 807488d335e9a4f0870cff88a0540cc90b998f3f..0000000000000000000000000000000000000000 --- a/spaces/runa91/barc_gradio/src/combined_model/model_shape_v7.py +++ /dev/null @@ -1,500 +0,0 @@ - -import pickle as pkl -import numpy as np -import torchvision.models as models -from torchvision import transforms -import torch -from torch import nn -from torch.nn.parameter import Parameter -from kornia.geometry.subpix import dsnt # kornia 0.4.0 - -import os -import sys -sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..')) -from stacked_hourglass.utils.evaluation import get_preds_soft -from stacked_hourglass import hg1, hg2, hg8 -from lifting_to_3d.linear_model import LinearModelComplete, LinearModel -from lifting_to_3d.inn_model_for_shape import INNForShape -from lifting_to_3d.utils.geometry_utils import rot6d_to_rotmat, rotmat_to_rot6d -from smal_pytorch.smal_model.smal_torch_new import SMAL -from smal_pytorch.renderer.differentiable_renderer import SilhRenderer -from bps_2d.bps_for_segmentation import SegBPS -from configs.SMAL_configs import UNITY_SMAL_SHAPE_PRIOR_DOGS as SHAPE_PRIOR -from configs.SMAL_configs import MEAN_DOG_BONE_LENGTHS_NO_RED, VERTEX_IDS_TAIL - - - -class SmallLinear(nn.Module): - def __init__(self, input_size=64, output_size=30, linear_size=128): - super(SmallLinear, self).__init__() - self.relu = nn.ReLU(inplace=True) - self.w1 = nn.Linear(input_size, linear_size) - self.w2 = nn.Linear(linear_size, linear_size) - self.w3 = nn.Linear(linear_size, output_size) - def forward(self, x): - # pre-processing - y = self.w1(x) - y = self.relu(y) - y = self.w2(y) - y = self.relu(y) - y = self.w3(y) - return y - - -class MyConv1d(nn.Module): - def __init__(self, input_size=37, output_size=30, start=True): - super(MyConv1d, self).__init__() - self.input_size = input_size - self.output_size = output_size - self.start = start - self.weight = Parameter(torch.ones((self.output_size))) - self.bias = Parameter(torch.zeros((self.output_size))) - def forward(self, x): - # pre-processing - if self.start: - y = x[:, :self.output_size] - else: - y = x[:, -self.output_size:] - y = y * self.weight[None, :] + self.bias[None, :] - return y - - -class ModelShapeAndBreed(nn.Module): - def __init__(self, n_betas=10, n_betas_limbs=13, n_breeds=121, n_z=512, structure_z_to_betas='default'): - super(ModelShapeAndBreed, self).__init__() - self.n_betas = n_betas - self.n_betas_limbs = n_betas_limbs # n_betas_logscale - self.n_breeds = n_breeds - self.structure_z_to_betas = structure_z_to_betas - if self.structure_z_to_betas == '1dconv': - if not (n_z == self.n_betas+self.n_betas_limbs): - raise ValueError - # shape branch - self.resnet = models.resnet34(pretrained=False) - # replace the first layer - n_in = 3 + 1 - self.resnet.conv1 = nn.Conv2d(n_in, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) - # replace the last layer - self.resnet.fc = nn.Linear(512, n_z) - # softmax - self.soft_max = torch.nn.Softmax(dim=1) - # fc network (and other versions) to connect z with betas - p_dropout = 0.2 - if self.structure_z_to_betas == 'default': - self.linear_betas = LinearModel(linear_size=1024, - num_stage=1, - p_dropout=p_dropout, - input_size=n_z, - output_size=self.n_betas) - self.linear_betas_limbs = LinearModel(linear_size=1024, - num_stage=1, - p_dropout=p_dropout, - input_size=n_z, - 
output_size=self.n_betas_limbs) - elif self.structure_z_to_betas == 'lin': - self.linear_betas = nn.Linear(n_z, self.n_betas) - self.linear_betas_limbs = nn.Linear(n_z, self.n_betas_limbs) - elif self.structure_z_to_betas == 'fc_0': - self.linear_betas = SmallLinear(linear_size=128, # 1024, - input_size=n_z, - output_size=self.n_betas) - self.linear_betas_limbs = SmallLinear(linear_size=128, # 1024, - input_size=n_z, - output_size=self.n_betas_limbs) - elif structure_z_to_betas == 'fc_1': - self.linear_betas = LinearModel(linear_size=64, # 1024, - num_stage=1, - p_dropout=0, - input_size=n_z, - output_size=self.n_betas) - self.linear_betas_limbs = LinearModel(linear_size=64, # 1024, - num_stage=1, - p_dropout=0, - input_size=n_z, - output_size=self.n_betas_limbs) - elif self.structure_z_to_betas == '1dconv': - self.linear_betas = MyConv1d(n_z, self.n_betas, start=True) - self.linear_betas_limbs = MyConv1d(n_z, self.n_betas_limbs, start=False) - elif self.structure_z_to_betas == 'inn': - self.linear_betas_and_betas_limbs = INNForShape(self.n_betas, self.n_betas_limbs, betas_scale=1.0, betas_limbs_scale=1.0) - else: - raise ValueError - # network to connect latent shape vector z with dog breed classification - self.linear_breeds = LinearModel(linear_size=1024, # 1024, - num_stage=1, - p_dropout=p_dropout, - input_size=n_z, - output_size=self.n_breeds) - # shape multiplicator - self.shape_multiplicator_np = np.ones(self.n_betas) - with open(SHAPE_PRIOR, 'rb') as file: - u = pkl._Unpickler(file) - u.encoding = 'latin1' - res = u.load() - # shape predictions are centered around the mean dog of our dog model - self.betas_mean_np = res['dog_cluster_mean'] - - def forward(self, img, seg_raw=None, seg_prep=None): - # img is the network input image - # seg_raw is before softmax and subtracting 0.5 - # seg_prep would be the prepared_segmentation - if seg_prep is None: - seg_prep = self.soft_max(seg_raw)[:, 1:2, :, :] - 0.5 - input_img_and_seg = torch.cat((img, seg_prep), axis=1) - res_output = self.resnet(input_img_and_seg) - dog_breed_output = self.linear_breeds(res_output) - if self.structure_z_to_betas == 'inn': - shape_output_orig, shape_limbs_output_orig = self.linear_betas_and_betas_limbs(res_output) - else: - shape_output_orig = self.linear_betas(res_output) * 0.1 - betas_mean = torch.tensor(self.betas_mean_np).float().to(img.device) - shape_output = shape_output_orig + betas_mean[None, 0:self.n_betas] - shape_limbs_output_orig = self.linear_betas_limbs(res_output) - shape_limbs_output = shape_limbs_output_orig * 0.1 - output_dict = {'z': res_output, - 'breeds': dog_breed_output, - 'betas': shape_output_orig, - 'betas_limbs': shape_limbs_output_orig} - return output_dict - - - -class LearnableShapedirs(nn.Module): - def __init__(self, sym_ids_dict, shapedirs_init, n_betas, n_betas_fixed=10): - super(LearnableShapedirs, self).__init__() - # shapedirs_init = self.smal.shapedirs.detach() - self.n_betas = n_betas - self.n_betas_fixed = n_betas_fixed - self.sym_ids_dict = sym_ids_dict - sym_left_ids = self.sym_ids_dict['left'] - sym_right_ids = self.sym_ids_dict['right'] - sym_center_ids = self.sym_ids_dict['center'] - self.n_center = sym_center_ids.shape[0] - self.n_left = sym_left_ids.shape[0] - self.n_sd = self.n_betas - self.n_betas_fixed # number of learnable shapedirs - # get indices to go from half_shapedirs to shapedirs - inds_back = np.zeros((3889)) - for ind in range(0, sym_center_ids.shape[0]): - ind_in_forward = sym_center_ids[ind] - inds_back[ind_in_forward] = ind - for ind in range(0, 
sym_left_ids.shape[0]): - ind_in_forward = sym_left_ids[ind] - inds_back[ind_in_forward] = sym_center_ids.shape[0] + ind - for ind in range(0, sym_right_ids.shape[0]): - ind_in_forward = sym_right_ids[ind] - inds_back[ind_in_forward] = sym_center_ids.shape[0] + sym_left_ids.shape[0] + ind - self.register_buffer('inds_back_torch', torch.Tensor(inds_back).long()) - # self.smal.shapedirs: (51, 11667) - # shapedirs: (3889, 3, n_sd) - # shapedirs_half: (2012, 3, n_sd) - sd = shapedirs_init[:self.n_betas, :].permute((1, 0)).reshape((-1, 3, self.n_betas)) - self.register_buffer('sd', sd) - sd_center = sd[sym_center_ids, :, self.n_betas_fixed:] - sd_left = sd[sym_left_ids, :, self.n_betas_fixed:] - self.register_parameter('learnable_half_shapedirs_c0', torch.nn.Parameter(sd_center[:, 0, :].detach())) - self.register_parameter('learnable_half_shapedirs_c2', torch.nn.Parameter(sd_center[:, 2, :].detach())) - self.register_parameter('learnable_half_shapedirs_l0', torch.nn.Parameter(sd_left[:, 0, :].detach())) - self.register_parameter('learnable_half_shapedirs_l1', torch.nn.Parameter(sd_left[:, 1, :].detach())) - self.register_parameter('learnable_half_shapedirs_l2', torch.nn.Parameter(sd_left[:, 2, :].detach())) - def forward(self): - device = self.learnable_half_shapedirs_c0.device - half_shapedirs_center = torch.stack((self.learnable_half_shapedirs_c0, \ - torch.zeros((self.n_center, self.n_sd)).to(device), \ - self.learnable_half_shapedirs_c2), axis=1) - half_shapedirs_left = torch.stack((self.learnable_half_shapedirs_l0, \ - self.learnable_half_shapedirs_l1, \ - self.learnable_half_shapedirs_l2), axis=1) - half_shapedirs_right = torch.stack((self.learnable_half_shapedirs_l0, \ - - self.learnable_half_shapedirs_l1, \ - self.learnable_half_shapedirs_l2), axis=1) - half_shapedirs_tot = torch.cat((half_shapedirs_center, half_shapedirs_left, half_shapedirs_right)) - shapedirs = torch.index_select(half_shapedirs_tot, dim=0, index=self.inds_back_torch) - shapedirs_complete = torch.cat((self.sd[:, :, :self.n_betas_fixed], shapedirs), axis=2) # (3889, 3, n_sd) - shapedirs_complete_prepared = torch.cat((self.sd[:, :, :10], shapedirs), axis=2).reshape((-1, 30)).permute((1, 0)) # (n_sd, 11667) - return shapedirs_complete, shapedirs_complete_prepared - - - - - -class ModelImageToBreed(nn.Module): - def __init__(self, arch='hg8', n_joints=35, n_classes=20, n_partseg=15, n_keyp=20, n_bones=24, n_betas=10, n_betas_limbs=7, n_breeds=121, image_size=256, n_z=512, thr_keyp_sc=None, add_partseg=True): - super(ModelImageToBreed, self).__init__() - self.n_classes = n_classes - self.n_partseg = n_partseg - self.n_betas = n_betas - self.n_betas_limbs = n_betas_limbs - self.n_keyp = n_keyp - self.n_bones = n_bones - self.n_breeds = n_breeds - self.image_size = image_size - self.upsample_seg = True - self.threshold_scores = thr_keyp_sc - self.n_z = n_z - self.add_partseg = add_partseg - # ------------------------------ STACKED HOUR GLASS ------------------------------ - if arch == 'hg8': - self.stacked_hourglass = hg8(pretrained=False, num_classes=self.n_classes, num_partseg=self.n_partseg, upsample_seg=self.upsample_seg, add_partseg=self.add_partseg) - else: - raise Exception('unrecognised model architecture: ' + arch) - # ------------------------------ SHAPE AND BREED MODEL ------------------------------ - self.breed_model = ModelShapeAndBreed(n_betas=self.n_betas, n_betas_limbs=self.n_betas_limbs, n_breeds=self.n_breeds, n_z=self.n_z) - def forward(self, input_img, norm_dict=None, bone_lengths_prepared=None, 
betas=None): - batch_size = input_img.shape[0] - device = input_img.device - # ------------------------------ STACKED HOUR GLASS ------------------------------ - hourglass_out_dict = self.stacked_hourglass(input_img) - last_seg = hourglass_out_dict['seg_final'] - last_heatmap = hourglass_out_dict['out_list_kp'][-1] - # - prepare keypoints (from heatmap) - # normalize predictions -> from logits to probability distribution - # last_heatmap_norm = dsnt.spatial_softmax2d(last_heatmap, temperature=torch.tensor(1)) - # keypoints = dsnt.spatial_expectation2d(last_heatmap_norm, normalized_coordinates=False) + 1 # (bs, 20, 2) - # keypoints_norm = dsnt.spatial_expectation2d(last_heatmap_norm, normalized_coordinates=True) # (bs, 20, 2) - keypoints_norm, scores = get_preds_soft(last_heatmap, return_maxval=True, norm_coords=True) - if self.threshold_scores is not None: - scores[scores>self.threshold_scores] = 1.0 - scores[scores<=self.threshold_scores] = 0.0 - # ------------------------------ SHAPE AND BREED MODEL ------------------------------ - # breed_model takes as input the image as well as the predicted segmentation map - # -> we need to split up ModelImageTo3d, such that we can use the silhouette - resnet_output = self.breed_model(img=input_img, seg_raw=last_seg) - pred_breed = resnet_output['breeds'] # (bs, n_breeds) - pred_betas = resnet_output['betas'] - pred_betas_limbs = resnet_output['betas_limbs'] - small_output = {'keypoints_norm': keypoints_norm, - 'keypoints_scores': scores} - small_output_reproj = {'betas': pred_betas, - 'betas_limbs': pred_betas_limbs, - 'dog_breed': pred_breed} - return small_output, None, small_output_reproj - -class ModelImageTo3d_withshape_withproj(nn.Module): - def __init__(self, arch='hg8', num_stage_comb=2, num_stage_heads=1, num_stage_heads_pose=1, trans_sep=False, n_joints=35, n_classes=20, n_partseg=15, n_keyp=20, n_bones=24, n_betas=10, n_betas_limbs=6, n_breeds=121, image_size=256, n_z=512, n_segbps=64*2, thr_keyp_sc=None, add_z_to_3d_input=True, add_segbps_to_3d_input=False, add_partseg=True, silh_no_tail=True, fix_flength=False, render_partseg=False, structure_z_to_betas='default', structure_pose_net='default', nf_version=None): - super(ModelImageTo3d_withshape_withproj, self).__init__() - self.n_classes = n_classes - self.n_partseg = n_partseg - self.n_betas = n_betas - self.n_betas_limbs = n_betas_limbs - self.n_keyp = n_keyp - self.n_bones = n_bones - self.n_breeds = n_breeds - self.image_size = image_size - self.threshold_scores = thr_keyp_sc - self.upsample_seg = True - self.silh_no_tail = silh_no_tail - self.add_z_to_3d_input = add_z_to_3d_input - self.add_segbps_to_3d_input = add_segbps_to_3d_input - self.add_partseg = add_partseg - assert (not self.add_segbps_to_3d_input) or (not self.add_z_to_3d_input) - self.n_z = n_z - if add_segbps_to_3d_input: - self.n_segbps = n_segbps # 64 - self.segbps_model = SegBPS() - else: - self.n_segbps = 0 - self.fix_flength = fix_flength - self.render_partseg = render_partseg - self.structure_z_to_betas = structure_z_to_betas - self.structure_pose_net = structure_pose_net - assert self.structure_pose_net in ['default', 'vae', 'normflow'] - self.nf_version = nf_version - self.register_buffer('betas_zeros', torch.zeros((1, self.n_betas))) - self.register_buffer('mean_dog_bone_lengths', torch.tensor(MEAN_DOG_BONE_LENGTHS_NO_RED, dtype=torch.float32)) - p_dropout = 0.2 # 0.5 - # ------------------------------ SMAL MODEL ------------------------------ - self.smal = SMAL(template_name='neutral') - # New for rendering 
without tail - f_np = self.smal.faces.detach().cpu().numpy() - self.f_no_tail_np = f_np[np.isin(f_np[:,:], VERTEX_IDS_TAIL).sum(axis=1)==0, :] - # in theory we could optimize for improved shapedirs, but we do not do that - # -> would need to implement regularizations - # -> there are better ways than changing the shapedirs - self.model_learnable_shapedirs = LearnableShapedirs(self.smal.sym_ids_dict, self.smal.shapedirs.detach(), self.n_betas, 10) - # ------------------------------ STACKED HOUR GLASS ------------------------------ - if arch == 'hg8': - self.stacked_hourglass = hg8(pretrained=False, num_classes=self.n_classes, num_partseg=self.n_partseg, upsample_seg=self.upsample_seg, add_partseg=self.add_partseg) - else: - raise Exception('unrecognised model architecture: ' + arch) - # ------------------------------ SHAPE AND BREED MODEL ------------------------------ - self.breed_model = ModelShapeAndBreed(n_betas=self.n_betas, n_betas_limbs=self.n_betas_limbs, n_breeds=self.n_breeds, n_z=self.n_z, structure_z_to_betas=self.structure_z_to_betas) - # ------------------------------ LINEAR 3D MODEL ------------------------------ - # 3d model -> from image to 3d parameters {2d keypoints from heatmap, pose, trans, flength} - self.soft_max = torch.nn.Softmax(dim=1) - input_size = self.n_keyp*3 + self.n_bones - self.model_3d = LinearModelComplete(linear_size=1024, - num_stage_comb=num_stage_comb, - num_stage_heads=num_stage_heads, - num_stage_heads_pose=num_stage_heads_pose, - trans_sep=trans_sep, - p_dropout=p_dropout, # 0.5, - input_size=input_size, - intermediate_size=1024, - output_info=None, - n_joints=n_joints, - n_z=self.n_z, - add_z_to_3d_input=self.add_z_to_3d_input, - n_segbps=self.n_segbps, - add_segbps_to_3d_input=self.add_segbps_to_3d_input, - structure_pose_net=self.structure_pose_net, - nf_version = self.nf_version) - # ------------------------------ RENDERING ------------------------------ - self.silh_renderer = SilhRenderer(image_size) - - def forward(self, input_img, norm_dict=None, bone_lengths_prepared=None, betas=None): - batch_size = input_img.shape[0] - device = input_img.device - # ------------------------------ STACKED HOUR GLASS ------------------------------ - hourglass_out_dict = self.stacked_hourglass(input_img) - last_seg = hourglass_out_dict['seg_final'] - last_heatmap = hourglass_out_dict['out_list_kp'][-1] - # - prepare keypoints (from heatmap) - # normalize predictions -> from logits to probability distribution - # last_heatmap_norm = dsnt.spatial_softmax2d(last_heatmap, temperature=torch.tensor(1)) - # keypoints = dsnt.spatial_expectation2d(last_heatmap_norm, normalized_coordinates=False) + 1 # (bs, 20, 2) - # keypoints_norm = dsnt.spatial_expectation2d(last_heatmap_norm, normalized_coordinates=True) # (bs, 20, 2) - keypoints_norm, scores = get_preds_soft(last_heatmap, return_maxval=True, norm_coords=True) - if self.threshold_scores is not None: - scores[scores>self.threshold_scores] = 1.0 - scores[scores<=self.threshold_scores] = 0.0 - # ------------------------------ LEARNABLE SHAPE MODEL ------------------------------ - # in our cvpr 2022 paper we do not change the shapedirs - # learnable_sd_complete has shape (3889, 3, n_sd) - # learnable_sd_complete_prepared has shape (n_sd, 11667) - learnable_sd_complete, learnable_sd_complete_prepared = self.model_learnable_shapedirs() - shapedirs_sel = learnable_sd_complete_prepared # None - # ------------------------------ SHAPE AND BREED MODEL ------------------------------ - # breed_model takes as input the image as 
well as the predicted segmentation map - # -> we need to split up ModelImageTo3d, such that we can use the silhouette - resnet_output = self.breed_model(img=input_img, seg_raw=last_seg) - pred_breed = resnet_output['breeds'] # (bs, n_breeds) - pred_z = resnet_output['z'] - # - prepare shape - pred_betas = resnet_output['betas'] - pred_betas_limbs = resnet_output['betas_limbs'] - # - calculate bone lengths - with torch.no_grad(): - use_mean_bone_lengths = False - if use_mean_bone_lengths: - bone_lengths_prepared = torch.cat(batch_size*[self.mean_dog_bone_lengths.reshape((1, -1))]) - else: - assert (bone_lengths_prepared is None) - bone_lengths_prepared = self.smal.caclulate_bone_lengths(pred_betas, pred_betas_limbs, shapedirs_sel=shapedirs_sel, short=True) - # ------------------------------ LINEAR 3D MODEL ------------------------------ - # 3d model -> from image to 3d parameters {2d keypoints from heatmap, pose, trans, flength} - # prepare input for 2d-to-3d network - keypoints_prepared = torch.cat((keypoints_norm, scores), axis=2) - if bone_lengths_prepared is None: - bone_lengths_prepared = torch.cat(batch_size*[self.mean_dog_bone_lengths.reshape((1, -1))]) - # should we add silhouette to 3d input? should we add z? - if self.add_segbps_to_3d_input: - seg_raw = last_seg - seg_prep_bps = self.soft_max(seg_raw)[:, 1, :, :] # class 1 is the dog - with torch.no_grad(): - seg_prep_np = seg_prep_bps.detach().cpu().numpy() - bps_output_np = self.segbps_model.calculate_bps_points_batch(seg_prep_np) # (bs, 64, 2) - bps_output = torch.tensor(bps_output_np, dtype=torch.float32).to(device).reshape((batch_size, -1)) - bps_output_prep = bps_output * 2. - 1 - input_vec_keyp_bones = torch.cat((keypoints_prepared.reshape((batch_size, -1)), bone_lengths_prepared), axis=1) - input_vec = torch.cat((input_vec_keyp_bones, bps_output_prep), dim=1) - elif self.add_z_to_3d_input: - # we do not use this in our cvpr 2022 version - input_vec_keyp_bones = torch.cat((keypoints_prepared.reshape((batch_size, -1)), bone_lengths_prepared), axis=1) - input_vec_additional = pred_z - input_vec = torch.cat((input_vec_keyp_bones, input_vec_additional), dim=1) - else: - input_vec = torch.cat((keypoints_prepared.reshape((batch_size, -1)), bone_lengths_prepared), axis=1) - # predict 3d parameters (those are normalized, we need to correct mean and std in a next step) - output = self.model_3d(input_vec) - # add predicted keypoints to the output dict - output['keypoints_norm'] = keypoints_norm - output['keypoints_scores'] = scores - # - denormalize 3d parameters -> so far predictions were normalized, now we denormalize them again - pred_trans = output['trans'] * norm_dict['trans_std'][None, :] + norm_dict['trans_mean'][None, :] # (bs, 3) - if self.structure_pose_net == 'default': - pred_pose_rot6d = output['pose'] + norm_dict['pose_rot6d_mean'][None, :] - elif self.structure_pose_net == 'normflow': - pose_rot6d_mean_zeros = torch.zeros_like(norm_dict['pose_rot6d_mean'][None, :]) - pose_rot6d_mean_zeros[:, 0, :] = norm_dict['pose_rot6d_mean'][None, 0, :] - pred_pose_rot6d = output['pose'] + pose_rot6d_mean_zeros - else: - pose_rot6d_mean_zeros = torch.zeros_like(norm_dict['pose_rot6d_mean'][None, :]) - pose_rot6d_mean_zeros[:, 0, :] = norm_dict['pose_rot6d_mean'][None, 0, :] - pred_pose_rot6d = output['pose'] + pose_rot6d_mean_zeros - pred_pose_reshx33 = rot6d_to_rotmat(pred_pose_rot6d.reshape((-1, 6))) - pred_pose = pred_pose_reshx33.reshape((batch_size, -1, 3, 3)) - pred_pose_rot6d = 
rotmat_to_rot6d(pred_pose_reshx33).reshape((batch_size, -1, 6)) - - if self.fix_flength: - output['flength'] = torch.zeros_like(output['flength']) - pred_flength = torch.ones_like(output['flength'])*2100 # norm_dict['flength_mean'][None, :] - else: - pred_flength_orig = output['flength'] * norm_dict['flength_std'][None, :] + norm_dict['flength_mean'][None, :] # (bs, 1) - pred_flength = pred_flength_orig.clone() # torch.abs(pred_flength_orig) - pred_flength[pred_flength_orig<=0] = norm_dict['flength_mean'][None, :] - - # ------------------------------ RENDERING ------------------------------ - # get 3d model (SMAL) - V, keyp_green_3d, _ = self.smal(beta=pred_betas, betas_limbs=pred_betas_limbs, pose=pred_pose, trans=pred_trans, get_skin=True, keyp_conf='green', shapedirs_sel=shapedirs_sel) - keyp_3d = keyp_green_3d[:, :self.n_keyp, :] # (bs, 20, 3) - # render silhouette - faces_prep = self.smal.faces.unsqueeze(0).expand((batch_size, -1, -1)) - if not self.silh_no_tail: - pred_silh_images, pred_keyp = self.silh_renderer(vertices=V, - points=keyp_3d, faces=faces_prep, focal_lengths=pred_flength) - else: - faces_no_tail_prep = torch.tensor(self.f_no_tail_np).to(device).expand((batch_size, -1, -1)) - pred_silh_images, pred_keyp = self.silh_renderer(vertices=V, - points=keyp_3d, faces=faces_no_tail_prep, focal_lengths=pred_flength) - # get torch 'Meshes' - torch_meshes = self.silh_renderer.get_torch_meshes(vertices=V, faces=faces_prep) - - # render body parts (not part of cvpr 2022 version) - if self.render_partseg: - raise NotImplementedError - else: - partseg_images = None - partseg_images_hg = None - - # ------------------------------ PREPARE OUTPUT ------------------------------ - # create output dictionarys - # output: contains all output from model_image_to_3d - # output_unnorm: same as output, but normalizations are undone - # output_reproj: smal output and reprojected keypoints as well as silhouette - keypoints_heatmap_256 = (output['keypoints_norm'] / 2. + 0.5) * (self.image_size - 1) - output_unnorm = {'pose_rotmat': pred_pose, - 'flength': pred_flength, - 'trans': pred_trans, - 'keypoints':keypoints_heatmap_256} - output_reproj = {'vertices_smal': V, - 'torch_meshes': torch_meshes, - 'keyp_3d': keyp_3d, - 'keyp_2d': pred_keyp, - 'silh': pred_silh_images, - 'betas': pred_betas, - 'betas_limbs': pred_betas_limbs, - 'pose_rot6d': pred_pose_rot6d, # used for pose prior... 
- 'dog_breed': pred_breed, - 'shapedirs': shapedirs_sel, - 'z': pred_z, - 'flength_unnorm': pred_flength, - 'flength': output['flength'], - 'partseg_images_rend': partseg_images, - 'partseg_images_hg_nograd': partseg_images_hg, - 'normflow_z': output['normflow_z']} - - return output, output_unnorm, output_reproj - - def render_vis_nograd(self, vertices, focal_lengths, color=0): - # this function is for visualization only - # vertices: (bs, n_verts, 3) - # focal_lengths: (bs, 1) - # color: integer, either 0 or 1 - # returns a torch tensor of shape (bs, image_size, image_size, 3) - with torch.no_grad(): - batch_size = vertices.shape[0] - faces_prep = self.smal.faces.unsqueeze(0).expand((batch_size, -1, -1)) - visualizations = self.silh_renderer.get_visualization_nograd(vertices, - faces_prep, focal_lengths, color=color) - return visualizations - diff --git a/spaces/runa91/bite_gradio/src/combined_model/train_main_image_to_3d_wbr_withref.py b/spaces/runa91/bite_gradio/src/combined_model/train_main_image_to_3d_wbr_withref.py deleted file mode 100644 index 4ca1fc2f372036d36209fca26ed09446a5a934c8..0000000000000000000000000000000000000000 --- a/spaces/runa91/bite_gradio/src/combined_model/train_main_image_to_3d_wbr_withref.py +++ /dev/null @@ -1,955 +0,0 @@ - -import torch -import torch.nn as nn -import torch.backends.cudnn -import torch.nn.parallel -from tqdm import tqdm -import os -import pathlib -from matplotlib import pyplot as plt -import cv2 -import numpy as np -import torch -import trimesh -import pickle as pkl -import csv -from scipy.spatial.transform import Rotation as R_sc - - -import sys -sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..')) -from stacked_hourglass.utils.evaluation import accuracy, AverageMeter, final_preds, get_preds, get_preds_soft -from stacked_hourglass.utils.visualization import save_input_image_with_keypoints, save_input_image -from metrics.metrics import Metrics -from configs.SMAL_configs import EVAL_KEYPOINTS, KEYPOINT_GROUPS, SMAL_KEYPOINT_NAMES_FOR_3D_EVAL, SMAL_KEYPOINT_INDICES_FOR_3D_EVAL, SMAL_KEYPOINT_WHICHTOUSE_FOR_3D_EVAL -from combined_model.helper import eval_save_visualizations_and_meshes, eval_prepare_pck_and_iou, eval_add_preds_to_summary - -from smal_pytorch.smal_model.smal_torch_new import SMAL # for gc visualization -from src.combined_model.loss_utils.loss_utils import fit_plane -# from src.evaluation.sketchfab_evaluation.alignment_utils.calculate_v2v_error_release import compute_similarity_transform -# from src.evaluation.sketchfab_evaluation.alignment_utils.calculate_alignment_error import calculate_alignemnt_errors - -# --------------------------------------------------------------------------------------------------------------------------- -def do_training_epoch(train_loader, model, loss_module, loss_module_ref, device, data_info, optimiser, quiet=False, acc_joints=None, weight_dict=None, weight_dict_ref=None): - losses = AverageMeter() - losses_keyp = AverageMeter() - losses_silh = AverageMeter() - losses_shape = AverageMeter() - losses_pose = AverageMeter() - losses_class = AverageMeter() - losses_breed = AverageMeter() - losses_partseg = AverageMeter() - losses_ref_keyp = AverageMeter() - losses_ref_silh = AverageMeter() - losses_ref_pose = AverageMeter() - losses_ref_reg = AverageMeter() - accuracies = AverageMeter() - # Put the model in training mode. 
- model.train() - # prepare progress bar - iterable = enumerate(train_loader) - progress = None - if not quiet: - progress = tqdm(iterable, desc='Train', total=len(train_loader), ascii=True, leave=False) - iterable = progress - # information for normalization - norm_dict = { - 'pose_rot6d_mean': torch.from_numpy(data_info.pose_rot6d_mean).float().to(device), - 'trans_mean': torch.from_numpy(data_info.trans_mean).float().to(device), - 'trans_std': torch.from_numpy(data_info.trans_std).float().to(device), - 'flength_mean': torch.from_numpy(data_info.flength_mean).float().to(device), - 'flength_std': torch.from_numpy(data_info.flength_std).float().to(device)} - # prepare variables, put them on the right device - for i, (input, target_dict) in iterable: - batch_size = input.shape[0] - for key in target_dict.keys(): - if key == 'breed_index': - target_dict[key] = target_dict[key].long().to(device) - elif key in ['index', 'pts', 'tpts', 'target_weight', 'silh', 'silh_distmat_tofg', 'silh_distmat_tobg', 'sim_breed_index', 'img_border_mask']: - target_dict[key] = target_dict[key].float().to(device) - elif key in ['has_seg', 'gc']: - target_dict[key] = target_dict[key].to(device) - else: - pass - input = input.float().to(device) - - # ----------------------- do training step ----------------------- - assert model.training, 'model must be in training mode.' - with torch.enable_grad(): - # ----- forward pass ----- - output, output_unnorm, output_reproj, output_ref, output_ref_comp = model(input, norm_dict=norm_dict) - # ----- loss ----- - # --- from main network - loss, loss_dict = loss_module(output_reproj=output_reproj, - target_dict=target_dict, - weight_dict=weight_dict) - # ---from refinement network - loss_ref, loss_dict_ref = loss_module_ref(output_ref=output_ref, - output_ref_comp=output_ref_comp, - target_dict=target_dict, - weight_dict_ref=weight_dict_ref) - loss_total = loss + loss_ref - # ----- backward pass and parameter update ----- - optimiser.zero_grad() - loss_total.backward() - optimiser.step() - # ---------------------------------------------------------------- - - # prepare losses for progress bar - bs_fake = 1 # batch_size - losses.update(loss_dict['loss'] + loss_dict_ref['loss'], bs_fake) - losses_keyp.update(loss_dict['loss_keyp_weighted'], bs_fake) - losses_silh.update(loss_dict['loss_silh_weighted'], bs_fake) - losses_shape.update(loss_dict['loss_shape_weighted'], bs_fake) - losses_pose.update(loss_dict['loss_poseprior_weighted'], bs_fake) - losses_class.update(loss_dict['loss_class_weighted'], bs_fake) - losses_breed.update(loss_dict['loss_breed_weighted'], bs_fake) - losses_partseg.update(loss_dict['loss_partseg_weighted'], bs_fake) - losses_ref_keyp.update(loss_dict_ref['keyp_ref'], bs_fake) - losses_ref_silh.update(loss_dict_ref['silh_ref'], bs_fake) - loss_ref_pose = 0 - for l_name in ['pose_legs_side', 'pose_legs_tors', 'pose_tail_side', 'pose_tail_tors', 'pose_spine_side', 'pose_spine_tors']: - if l_name in loss_dict_ref.keys(): - loss_ref_pose += loss_dict_ref[l_name] - losses_ref_pose.update(loss_ref_pose, bs_fake) - loss_ref_reg = 0 - for l_name in ['reg_trans', 'reg_flength', 'reg_pose']: - if l_name in loss_dict_ref.keys(): - loss_ref_reg += loss_dict_ref[l_name] - losses_ref_reg.update(loss_ref_reg, bs_fake) - acc = - loss_dict['loss_keyp_weighted'] # this will be used to keep track of the 'best model' - accuracies.update(acc, bs_fake) - # Show losses as part of the progress bar. 
- if progress is not None: - my_string = 'Loss: {loss:0.4f}, loss_keyp: {loss_keyp:0.4f}, loss_silh: {loss_silh:0.4f}, loss_partseg: {loss_partseg:0.4f}, loss_shape: {loss_shape:0.4f}, loss_pose: {loss_pose:0.4f}, loss_class: {loss_class:0.4f}, loss_breed: {loss_breed:0.4f}, loss_ref_keyp: {loss_ref_keyp:0.4f}, loss_ref_silh: {loss_ref_silh:0.4f}, loss_ref_pose: {loss_ref_pose:0.4f}, loss_ref_reg: {loss_ref_reg:0.4f}'.format( - loss=losses.avg, - loss_keyp=losses_keyp.avg, - loss_silh=losses_silh.avg, - loss_shape=losses_shape.avg, - loss_pose=losses_pose.avg, - loss_class=losses_class.avg, - loss_breed=losses_breed.avg, - loss_partseg=losses_partseg.avg, - loss_ref_keyp=losses_ref_keyp.avg, - loss_ref_silh=losses_ref_silh.avg, - loss_ref_pose=losses_ref_pose.avg, - loss_ref_reg=losses_ref_reg.avg) - my_string_short = 'Loss: {loss:0.4f}, loss_keyp: {loss_keyp:0.4f}, loss_silh: {loss_silh:0.4f}, loss_ref_keyp: {loss_ref_keyp:0.4f}, loss_ref_silh: {loss_ref_silh:0.4f}, loss_ref_pose: {loss_ref_pose:0.4f}, loss_ref_reg: {loss_ref_reg:0.4f}'.format( - loss=losses.avg, - loss_keyp=losses_keyp.avg, - loss_silh=losses_silh.avg, - loss_ref_keyp=losses_ref_keyp.avg, - loss_ref_silh=losses_ref_silh.avg, - loss_ref_pose=losses_ref_pose.avg, - loss_ref_reg=losses_ref_reg.avg) - progress.set_postfix_str(my_string_short) - - return my_string, accuracies.avg - - -# --------------------------------------------------------------------------------------------------------------------------- -def do_validation_epoch(val_loader, model, loss_module, loss_module_ref, device, data_info, flip=False, quiet=False, acc_joints=None, save_imgs_path=None, weight_dict=None, weight_dict_ref=None, metrics=None, val_opt='default', test_name_list=None, render_all=False, pck_thresh=0.15, len_dataset=None): - losses = AverageMeter() - losses_keyp = AverageMeter() - losses_silh = AverageMeter() - losses_shape = AverageMeter() - losses_pose = AverageMeter() - losses_class = AverageMeter() - losses_breed = AverageMeter() - losses_partseg = AverageMeter() - losses_ref_keyp = AverageMeter() - losses_ref_silh = AverageMeter() - losses_ref_pose = AverageMeter() - losses_ref_reg = AverageMeter() - accuracies = AverageMeter() - if save_imgs_path is not None: - pathlib.Path(save_imgs_path).mkdir(parents=True, exist_ok=True) - # Put the model in evaluation mode. 
- model.eval() - # prepare progress bar - iterable = enumerate(val_loader) - progress = None - if not quiet: - progress = tqdm(iterable, desc='Valid', total=len(val_loader), ascii=True, leave=False) - iterable = progress - # summarize information for normalization - norm_dict = { - 'pose_rot6d_mean': torch.from_numpy(data_info.pose_rot6d_mean).float().to(device), - 'trans_mean': torch.from_numpy(data_info.trans_mean).float().to(device), - 'trans_std': torch.from_numpy(data_info.trans_std).float().to(device), - 'flength_mean': torch.from_numpy(data_info.flength_mean).float().to(device), - 'flength_std': torch.from_numpy(data_info.flength_std).float().to(device)} - batch_size = val_loader.batch_size - - return_mesh_with_gt_groundplane = True - if return_mesh_with_gt_groundplane: - remeshing_path = '/is/cluster/work/nrueegg/icon_pifu_related/barc_for_bite/data/smal_data_remeshed/uniform_surface_sampling/my_smpl_39dogsnorm_Jr_4_dog_remesh4000_info.pkl' - with open(remeshing_path, 'rb') as fp: - remeshing_dict = pkl.load(fp) - remeshing_relevant_faces = torch.tensor(remeshing_dict['smal_faces'][remeshing_dict['faceid_closest']], dtype=torch.long, device=device) - remeshing_relevant_barys = torch.tensor(remeshing_dict['barys_closest'], dtype=torch.float32, device=device) - - - # from smal_pytorch.smal_model.smal_torch_new import SMAL - print('start: load smal default model (barc), but only for vertices') - smal = SMAL() - print('end: load smal default model (barc), but only for vertices') - smal_template_verts = smal.v_template.detach().cpu().numpy() - smal_faces = smal.faces.detach().cpu().numpy() - - - my_step = 0 - for index, (input, target_dict) in iterable: - - # prepare variables, put them on the right device - curr_batch_size = input.shape[0] - for key in target_dict.keys(): - if key == 'breed_index': - target_dict[key] = target_dict[key].long().to(device) - elif key in ['index', 'pts', 'tpts', 'target_weight', 'silh', 'silh_distmat_tofg', 'silh_distmat_tobg', 'sim_breed_index', 'img_border_mask']: - target_dict[key] = target_dict[key].float().to(device) - elif key in ['has_seg', 'gc']: - target_dict[key] = target_dict[key].to(device) - else: - pass - input = input.float().to(device) - - # ----------------------- do validation step ----------------------- - with torch.no_grad(): - # ----- forward pass ----- - # output: (['pose', 'flength', 'trans', 'keypoints_norm', 'keypoints_scores']) - # output_unnorm: (['pose_rotmat', 'flength', 'trans', 'keypoints']) - # output_reproj: (['vertices_smal', 'torch_meshes', 'keyp_3d', 'keyp_2d', 'silh', 'betas', 'pose_rot6d', 'dog_breed', 'shapedirs', 'z', 'flength_unnorm', 'flength']) - # target_dict: (['index', 'center', 'scale', 'pts', 'tpts', 'target_weight', 'breed_index', 'sim_breed_index', 'ind_dataset', 'silh']) - output, output_unnorm, output_reproj, output_ref, output_ref_comp = model(input, norm_dict=norm_dict) - # ----- loss ----- - if metrics == 'no_loss': - # --- from main network - loss, loss_dict = loss_module(output_reproj=output_reproj, - target_dict=target_dict, - weight_dict=weight_dict) - # ---from refinement network - loss_ref, loss_dict_ref = loss_module_ref(output_ref=output_ref, - output_ref_comp=output_ref_comp, - target_dict=target_dict, - weight_dict_ref=weight_dict_ref) - loss_total = loss + loss_ref - - # ---------------------------------------------------------------- - - - for result_network in ['normal', 'ref']: - # variabled that are not refined - hg_keyp_norm = output['keypoints_norm'] - hg_keyp_scores = 
output['keypoints_scores'] - betas = output_reproj['betas'] - betas_limbs = output_reproj['betas_limbs'] - zz = output_reproj['z'] - if result_network == 'normal': - # STEP 1: normal network - vertices_smal = output_reproj['vertices_smal'] - flength = output_unnorm['flength'] - pose_rotmat = output_unnorm['pose_rotmat'] - trans = output_unnorm['trans'] - pred_keyp = output_reproj['keyp_2d'] - pred_silh = output_reproj['silh'] - prefix = 'normal_' - else: - # STEP 1: refinement network - vertices_smal = output_ref['vertices_smal'] - flength = output_ref['flength'] - pose_rotmat = output_ref['pose_rotmat'] - trans = output_ref['trans'] - pred_keyp = output_ref['keyp_2d'] - pred_silh = output_ref['silh'] - prefix = 'ref_' - if return_mesh_with_gt_groundplane and 'gc' in target_dict.keys(): - bs = vertices_smal.shape[0] - target_gc_class = target_dict['gc'][:, :, 0] - sel_verts = torch.index_select(output_ref['vertices_smal'], dim=1, index=remeshing_relevant_faces.reshape((-1))).reshape((bs, remeshing_relevant_faces.shape[0], 3, 3)) - verts_remeshed = torch.einsum('ij,aijk->aik', remeshing_relevant_barys, sel_verts) - target_gc_class_remeshed = torch.einsum('ij,aij->ai', remeshing_relevant_barys, target_gc_class[:, remeshing_relevant_faces].to(device=device, dtype=torch.float32)) - target_gc_class_remeshed_prep = torch.round(target_gc_class_remeshed).to(torch.long) - - - - - - # import pdb; pdb.set_trace() - - # new for vertex wise ground contact - if (not model.graphcnn_type == 'inexistent') and (save_imgs_path is not None): - # import pdb; pdb.set_trace() - - sm = torch.nn.Softmax(dim=2) - ground_contact_probs = sm(output_ref['vertexwise_ground_contact']) - - for ind_img in range(ground_contact_probs.shape[0]): - # ind_img = 0 - if test_name_list is not None: - img_name = test_name_list[int(target_dict['index'][ind_img].cpu().detach().numpy())].replace('/', '_') - img_name = img_name.split('.')[0] - else: - img_name = str(index) + '_' + str(ind_img) - out_path_gcmesh = save_imgs_path + '/' + prefix + 'gcmesh_' + img_name + '.obj' - - gc_prob = ground_contact_probs[ind_img, :, 1] # contact probability - vert_colors = np.repeat(255*gc_prob.detach().cpu().numpy()[:, None], 3, 1) - my_mesh = trimesh.Trimesh(vertices=smal_template_verts, faces=smal_faces, process=False, maintain_order=True) - my_mesh.visual.vertex_colors = vert_colors - save_gc_mesh = True # False - if save_gc_mesh: - my_mesh.export(out_path_gcmesh) - - ''' - input_image = input[ind_img, :, :, :].detach().clone() - for t, m, s in zip(input_image, data_info.rgb_mean,data_info.rgb_stddev): t.add_(m) - input_image_np = input_image.detach().cpu().numpy().transpose(1, 2, 0) - out_path = save_debug_path + 'b' + str(ind_img) +'_input.png' - plt.imsave(out_path, input_image_np) - ''' - - # ------------------------------------- - - # import pdb; pdb.set_trace() - - - ''' - target_gc_class = target_dict['gc'][ind_img, :, 0] - - current_vertices_smal = vertices_smal[ind_img, :, :] - - points_centroid, plane_normal, error = fit_plane(current_vertices_smal[target_gc_class==1, :]) - ''' - - # calculate ground plane - # (see /is/cluster/work/nrueegg/icon_pifu_related/ICON/debug_code/curve_fitting_v2.py) - if return_mesh_with_gt_groundplane and 'gc' in target_dict.keys(): - - current_verts_remeshed = verts_remeshed[ind_img, :, :] - current_target_gc_class_remeshed_prep = target_gc_class_remeshed_prep[ind_img, ...] 
- - if current_target_gc_class_remeshed_prep.sum() > 3: - points_on_plane = current_verts_remeshed[current_target_gc_class_remeshed_prep==1, :] - data_centroid, plane_normal, error = fit_plane(points_on_plane) - nonplane_points_centered = current_verts_remeshed[current_target_gc_class_remeshed_prep==0, :] - data_centroid[None, :] - nonplane_points_projected = torch.matmul(plane_normal[None, :], nonplane_points_centered.transpose(0,1)) - - if nonplane_points_projected.sum() > 0: # plane normal points towards the animal - plane_normal = plane_normal.detach().cpu().numpy() - else: - plane_normal = - plane_normal.detach().cpu().numpy() - data_centroid = data_centroid.detach().cpu().numpy() - - - - # import pdb; pdb.set_trace() - - - desired_plane_normal_vector = np.asarray([[0, -1, 0]]) - # new approach: use cross product - rotation_axis = np.cross(plane_normal, desired_plane_normal_vector) # np.cross(plane_normal, desired_plane_normal_vector) - lengt_rotation_axis = np.linalg.norm(rotation_axis) # = sin(alpha) (because vectors have unit length) - angle = np.sin(lengt_rotation_axis) - rot = R_sc.from_rotvec(angle * rotation_axis * 1/lengt_rotation_axis) - rot_mat = rot[0].as_matrix() - rot_upsidedown = R_sc.from_rotvec(np.pi * np.asarray([[1, 0, 0]])) - # rot_upsidedown[0].apply(rot[0].apply(plane_normal)) - current_vertices_smal = vertices_smal[ind_img, :, :].detach().cpu().numpy() - new_smal_vertices = rot_upsidedown[0].apply(rot[0].apply(current_vertices_smal - data_centroid[None, :])) - my_mesh = trimesh.Trimesh(vertices=new_smal_vertices, faces=smal_faces, process=False, maintain_order=True) - vert_colors[:, 2] = 255 - my_mesh.visual.vertex_colors = vert_colors - out_path_gc_rotated = save_imgs_path + '/' + prefix + 'gc_rotated_' + img_name + '_new.obj' - my_mesh.export(out_path_gc_rotated) - - - - - - - '''# rot = R_sc.align_vectors(plane_normal.reshape((1, -1)), desired_plane_normal_vector) - desired_plane_normal_vector = np.asarray([[0, 1, 0]]) - - rot = R_sc.align_vectors(desired_plane_normal_vector, plane_normal.reshape((1, -1))) # inv - rot_mat = rot[0].as_matrix() - - - current_vertices_smal = vertices_smal[ind_img, :, :].detach().cpu().numpy() - new_smal_vertices = rot[0].apply((current_vertices_smal - data_centroid[None, :])) - - my_mesh = trimesh.Trimesh(vertices=new_smal_vertices, faces=smal_faces, process=False, maintain_order=True) - my_mesh.visual.vertex_colors = vert_colors - out_path_gc_rotated = save_imgs_path + '/' + prefix + 'gc_rotated_' + img_name + '_y.obj' - my_mesh.export(out_path_gc_rotated) - ''' - - - - - - - - - - # ---- - - - # ------------------------------------- - - - - - if index == 0: - if len_dataset is None: - len_data = val_loader.batch_size * len(val_loader) # 1703 - else: - len_data = len_dataset - if metrics == 'all' or metrics == 'no_loss': - if result_network == 'normal': - summaries = {'normal': dict(), 'ref': dict()} - summary = summaries['normal'] - else: - summary = summaries['ref'] - summary['pck'] = np.zeros((len_data)) - summary['pck_by_part'] = {group:np.zeros((len_data)) for group in KEYPOINT_GROUPS} - summary['acc_sil_2d'] = np.zeros(len_data) - summary['betas'] = np.zeros((len_data,betas.shape[1])) - summary['betas_limbs'] = np.zeros((len_data, betas_limbs.shape[1])) - summary['z'] = np.zeros((len_data, zz.shape[1])) - summary['pose_rotmat'] = np.zeros((len_data, pose_rotmat.shape[1], 3, 3)) - summary['flength'] = np.zeros((len_data, flength.shape[1])) - summary['trans'] = np.zeros((len_data, trans.shape[1])) - summary['breed_indices'] 
= np.zeros((len_data)) - summary['image_names'] = [] # len_data * [None] - else: - if result_network == 'normal': - summary = summaries['normal'] - else: - summary = summaries['ref'] - - if save_imgs_path is not None: - eval_save_visualizations_and_meshes(model, input, data_info, target_dict, test_name_list, vertices_smal, hg_keyp_norm, hg_keyp_scores, zz, betas, betas_limbs, pose_rotmat, trans, flength, pred_keyp, pred_silh, save_imgs_path, prefix, index, render_all=render_all) - - if metrics == 'all' or metrics == 'no_loss': - preds = eval_prepare_pck_and_iou(model, input, data_info, target_dict, test_name_list, vertices_smal, hg_keyp_norm, hg_keyp_scores, zz, betas, betas_limbs, pose_rotmat, trans, flength, pred_keyp, pred_silh, save_imgs_path, prefix, index, pck_thresh, progress=progress) - # add results for all images in this batch to lists - curr_batch_size = pred_keyp.shape[0] - eval_add_preds_to_summary(summary, preds, my_step, batch_size, curr_batch_size) - else: - # measure accuracy and record loss - bs_fake = 1 # batch_size - # import pdb; pdb.set_trace() - - - # save_imgs_path + '/' + prefix + 'rot_tex_pred_' + img_name + '.png' - # import pdb; pdb.set_trace() - ''' - for ind_img in range(len(target_dict['index'])): - try: - if test_name_list is not None: - img_name = test_name_list[int(target_dict['index'][ind_img].cpu().detach().numpy())].replace('/', '_') - img_name = img_name.split('.')[0] - else: - img_name = str(index) + '_' + str(ind_img) - all_image_names = ['keypoints_pred_' + img_name + '.png', 'normal_comp_pred_' + img_name + '.png', 'normal_rot_tex_pred_' + img_name + '.png', 'ref_comp_pred_' + img_name + '.png', 'ref_rot_tex_pred_' + img_name + '.png'] - all_saved_images = [] - for sub_img_name in all_image_names: - saved_img = cv2.imread(save_imgs_path + '/' + sub_img_name) - if not (saved_img.shape[0] == 256 and saved_img.shape[1] == 256): - saved_img = cv2.resize(saved_img, (256, 256)) - all_saved_images.append(saved_img) - final_image = np.concatenate(all_saved_images, axis=1) - save_imgs_path_sum = save_imgs_path.replace('test_', 'summary_test_') - if not os.path.exists(save_imgs_path_sum): os.makedirs(save_imgs_path_sum) - final_image_path = save_imgs_path_sum + '/summary_' + img_name + '.png' - cv2.imwrite(final_image_path, final_image) - except: - print('dont save a summary image') - ''' - - - bs_fake = 1 - if metrics == 'all' or metrics == 'no_loss': - # update progress bar - if progress is not None: - '''my_string = "PCK: {0:.2f}, IOU: {1:.2f}".format( - pck[:(my_step * batch_size + curr_batch_size)].mean(), - acc_sil_2d[:(my_step * batch_size + curr_batch_size)].mean())''' - my_string = "normal_PCK: {0:.2f}, normal_IOU: {1:.2f}, ref_PCK: {2:.2f}, ref_IOU: {3:.2f}".format( - summaries['normal']['pck'][:(my_step * batch_size + curr_batch_size)].mean(), - summaries['normal']['acc_sil_2d'][:(my_step * batch_size + curr_batch_size)].mean(), - summaries['ref']['pck'][:(my_step * batch_size + curr_batch_size)].mean(), - summaries['ref']['acc_sil_2d'][:(my_step * batch_size + curr_batch_size)].mean()) - progress.set_postfix_str(my_string) - else: - losses.update(loss_dict['loss'] + loss_dict_ref['loss'], bs_fake) - losses_keyp.update(loss_dict['loss_keyp_weighted'], bs_fake) - losses_silh.update(loss_dict['loss_silh_weighted'], bs_fake) - losses_shape.update(loss_dict['loss_shape_weighted'], bs_fake) - losses_pose.update(loss_dict['loss_poseprior_weighted'], bs_fake) - losses_class.update(loss_dict['loss_class_weighted'], bs_fake) - 
losses_breed.update(loss_dict['loss_breed_weighted'], bs_fake) - losses_partseg.update(loss_dict['loss_partseg_weighted'], bs_fake) - losses_ref_keyp.update(loss_dict_ref['keyp_ref'], bs_fake) - losses_ref_silh.update(loss_dict_ref['silh_ref'], bs_fake) - loss_ref_pose = 0 - for l_name in ['pose_legs_side', 'pose_legs_tors', 'pose_tail_side', 'pose_tail_tors', 'pose_spine_side', 'pose_spine_tors']: - loss_ref_pose += loss_dict_ref[l_name] - losses_ref_pose.update(loss_ref_pose, bs_fake) - loss_ref_reg = 0 - for l_name in ['reg_trans', 'reg_flength', 'reg_pose']: - loss_ref_reg += loss_dict_ref[l_name] - losses_ref_reg.update(loss_ref_reg, bs_fake) - acc = - loss_dict['loss_keyp_weighted'] # this will be used to keep track of the 'best model' - accuracies.update(acc, bs_fake) - # Show losses as part of the progress bar. - if progress is not None: - my_string = 'Loss: {loss:0.4f}, loss_keyp: {loss_keyp:0.4f}, loss_silh: {loss_silh:0.4f}, loss_partseg: {loss_partseg:0.4f}, loss_shape: {loss_shape:0.4f}, loss_pose: {loss_pose:0.4f}, loss_class: {loss_class:0.4f}, loss_breed: {loss_breed:0.4f}, loss_ref_keyp: {loss_ref_keyp:0.4f}, loss_ref_silh: {loss_ref_silh:0.4f}, loss_ref_pose: {loss_ref_pose:0.4f}, loss_ref_reg: {loss_ref_reg:0.4f}'.format( - loss=losses.avg, - loss_keyp=losses_keyp.avg, - loss_silh=losses_silh.avg, - loss_shape=losses_shape.avg, - loss_pose=losses_pose.avg, - loss_class=losses_class.avg, - loss_breed=losses_breed.avg, - loss_partseg=losses_partseg.avg, - loss_ref_keyp=losses_ref_keyp.avg, - loss_ref_silh=losses_ref_silh.avg, - loss_ref_pose=losses_ref_pose.avg, - loss_ref_reg=losses_ref_reg.avg) - my_string_short = 'Loss: {loss:0.4f}, loss_keyp: {loss_keyp:0.4f}, loss_silh: {loss_silh:0.4f}, loss_ref_keyp: {loss_ref_keyp:0.4f}, loss_ref_silh: {loss_ref_silh:0.4f}, loss_ref_pose: {loss_ref_pose:0.4f}, loss_ref_reg: {loss_ref_reg:0.4f}'.format( - loss=losses.avg, - loss_keyp=losses_keyp.avg, - loss_silh=losses_silh.avg, - loss_ref_keyp=losses_ref_keyp.avg, - loss_ref_silh=losses_ref_silh.avg, - loss_ref_pose=losses_ref_pose.avg, - loss_ref_reg=losses_ref_reg.avg) - progress.set_postfix_str(my_string_short) - my_step += 1 - if metrics == 'all': - return my_string, summaries # summary - elif metrics == 'no_loss': - return my_string, np.average(np.asarray(summaries['ref']['acc_sil_2d'])) # np.average(np.asarray(summary['acc_sil_2d'])) - else: - return my_string, accuracies.avg - - -# --------------------------------------------------------------------------------------------------------------------------- -def do_visual_epoch(val_loader, model, device, data_info, flip=False, quiet=False, acc_joints=None, save_imgs_path=None, weight_dict=None, weight_dict_ref=None, metrics=None, val_opt='default', test_name_list=None, render_all=False, pck_thresh=0.15, return_results=False, len_dataset=None): - if save_imgs_path is not None: - pathlib.Path(save_imgs_path).mkdir(parents=True, exist_ok=True) - all_results = [] - - # Put the model in evaluation mode. 
- model.eval() - - iterable = enumerate(val_loader) - - # information for normalization - norm_dict = { - 'pose_rot6d_mean': torch.from_numpy(data_info.pose_rot6d_mean).float().to(device), - 'trans_mean': torch.from_numpy(data_info.trans_mean).float().to(device), - 'trans_std': torch.from_numpy(data_info.trans_std).float().to(device), - 'flength_mean': torch.from_numpy(data_info.flength_mean).float().to(device), - 'flength_std': torch.from_numpy(data_info.flength_std).float().to(device)} - - - return_mesh_with_gt_groundplane = True - if return_mesh_with_gt_groundplane: - remeshing_path = '/is/cluster/work/nrueegg/icon_pifu_related/barc_for_bite/data/smal_data_remeshed/uniform_surface_sampling/my_smpl_39dogsnorm_Jr_4_dog_remesh4000_info.pkl' - with open(remeshing_path, 'rb') as fp: - remeshing_dict = pkl.load(fp) - remeshing_relevant_faces = torch.tensor(remeshing_dict['smal_faces'][remeshing_dict['faceid_closest']], dtype=torch.long, device=device) - remeshing_relevant_barys = torch.tensor(remeshing_dict['barys_closest'], dtype=torch.float32, device=device) - - # from smal_pytorch.smal_model.smal_torch_new import SMAL - print('start: load smal default model (barc), but only for vertices') - smal = SMAL() - print('end: load smal default model (barc), but only for vertices') - smal_template_verts = smal.v_template.detach().cpu().numpy() - smal_faces = smal.faces.detach().cpu().numpy() - - file_alignment_errors = open(save_imgs_path + '/a_ref_procrustes_alignmnet_errors.txt', 'a') # append mode - file_alignment_errors.write(" ----------- start evaluation ------------- \n ") - - csv_file_alignment_errors = open(save_imgs_path + '/a_ref_procrustes_alignmnet_errors.csv', 'w') # write mode - fieldnames = ['name', 'error'] - writer = csv.DictWriter(csv_file_alignment_errors, fieldnames=fieldnames) - writer.writeheader() - - my_step = 0 - for index, (input, target_dict) in iterable: - batch_size = input.shape[0] - input = input.float().to(device) - partial_results = {} - - # ----------------------- do visualization step ----------------------- - with torch.no_grad(): - output, output_unnorm, output_reproj, output_ref, output_ref_comp = model(input, norm_dict=norm_dict) - - - # import pdb; pdb.set_trace() - - - sm = torch.nn.Softmax(dim=2) - ground_contact_probs = sm(output_ref['vertexwise_ground_contact']) - - for result_network in ['normal', 'ref']: - # variabled that are not refined - hg_keyp_norm = output['keypoints_norm'] - hg_keyp_scores = output['keypoints_scores'] - betas = output_reproj['betas'] - betas_limbs = output_reproj['betas_limbs'] - zz = output_reproj['z'] - if result_network == 'normal': - # STEP 1: normal network - vertices_smal = output_reproj['vertices_smal'] - flength = output_unnorm['flength'] - pose_rotmat = output_unnorm['pose_rotmat'] - trans = output_unnorm['trans'] - pred_keyp = output_reproj['keyp_2d'] - pred_silh = output_reproj['silh'] - prefix = 'normal_' - else: - # STEP 1: refinement network - vertices_smal = output_ref['vertices_smal'] - flength = output_ref['flength'] - pose_rotmat = output_ref['pose_rotmat'] - trans = output_ref['trans'] - pred_keyp = output_ref['keyp_2d'] - pred_silh = output_ref['silh'] - prefix = 'ref_' - - bs = vertices_smal.shape[0] - # target_gc_class = target_dict['gc'][:, :, 0] - target_gc_class = torch.round(ground_contact_probs).long()[:, :, 1] - sel_verts = torch.index_select(output_ref['vertices_smal'], dim=1, index=remeshing_relevant_faces.reshape((-1))).reshape((bs, remeshing_relevant_faces.shape[0], 3, 3)) - verts_remeshed = 
torch.einsum('ij,aijk->aik', remeshing_relevant_barys, sel_verts) - target_gc_class_remeshed = torch.einsum('ij,aij->ai', remeshing_relevant_barys, target_gc_class[:, remeshing_relevant_faces].to(device=device, dtype=torch.float32)) - target_gc_class_remeshed_prep = torch.round(target_gc_class_remeshed).to(torch.long) - - - - - # index = i - # ind_img = 0 - for ind_img in range(batch_size): # range(min(12, batch_size)): # range(12): # [0]: #range(0, batch_size): - - # ind_img = 0 - if test_name_list is not None: - img_name = test_name_list[int(target_dict['index'][ind_img].cpu().detach().numpy())].replace('/', '_') - img_name = img_name.split('.')[0] - else: - img_name = str(index) + '_' + str(ind_img) - out_path_gcmesh = save_imgs_path + '/' + prefix + 'gcmesh_' + img_name + '.obj' - - gc_prob = ground_contact_probs[ind_img, :, 1] # contact probability - vert_colors = np.repeat(255*gc_prob.detach().cpu().numpy()[:, None], 3, 1) - my_mesh = trimesh.Trimesh(vertices=smal_template_verts, faces=smal_faces, process=False, maintain_order=True) - my_mesh.visual.vertex_colors = vert_colors - save_gc_mesh = False - if save_gc_mesh: - my_mesh.export(out_path_gcmesh) - - current_verts_remeshed = verts_remeshed[ind_img, :, :] - current_target_gc_class_remeshed_prep = target_gc_class_remeshed_prep[ind_img, ...] - - if current_target_gc_class_remeshed_prep.sum() > 3: - points_on_plane = current_verts_remeshed[current_target_gc_class_remeshed_prep==1, :] - data_centroid, plane_normal, error = fit_plane(points_on_plane) - nonplane_points_centered = current_verts_remeshed[current_target_gc_class_remeshed_prep==0, :] - data_centroid[None, :] - nonplane_points_projected = torch.matmul(plane_normal[None, :], nonplane_points_centered.transpose(0,1)) - - if nonplane_points_projected.sum() > 0: # plane normal points towards the animal - plane_normal = plane_normal.detach().cpu().numpy() - else: - plane_normal = - plane_normal.detach().cpu().numpy() - data_centroid = data_centroid.detach().cpu().numpy() - - - - # import pdb; pdb.set_trace() - - - desired_plane_normal_vector = np.asarray([[0, -1, 0]]) - # new approach: use cross product - rotation_axis = np.cross(plane_normal, desired_plane_normal_vector) # np.cross(plane_normal, desired_plane_normal_vector) - lengt_rotation_axis = np.linalg.norm(rotation_axis) # = sin(alpha) (because vectors have unit length) - angle = np.sin(lengt_rotation_axis) - rot = R_sc.from_rotvec(angle * rotation_axis * 1/lengt_rotation_axis) - rot_mat = rot[0].as_matrix() - rot_upsidedown = R_sc.from_rotvec(np.pi * np.asarray([[1, 0, 0]])) - # rot_upsidedown[0].apply(rot[0].apply(plane_normal)) - current_vertices_smal = vertices_smal[ind_img, :, :].detach().cpu().numpy() - new_smal_vertices = rot_upsidedown[0].apply(rot[0].apply(current_vertices_smal - data_centroid[None, :])) - my_mesh = trimesh.Trimesh(vertices=new_smal_vertices, faces=smal_faces, process=False, maintain_order=True) - vert_colors[:, 2] = 255 - my_mesh.visual.vertex_colors = vert_colors - out_path_gc_rotated = save_imgs_path + '/' + prefix + 'gc_rotated_' + img_name + '_new.obj' - my_mesh.export(out_path_gc_rotated) - - - - ''' - import pdb; pdb.set_trace() - - from src.evaluation.registration import preprocess_point_cloud, o3d_ransac, draw_registration_result - import open3d as o3d - import copy - - - mesh_gt_path = target_dict['mesh_path'][ind_img] - mesh_gt = o3d.io.read_triangle_mesh(mesh_gt_path) - - mesh_gt_verts = np.asarray(mesh_gt.vertices) - mesh_gt_faces = np.asarray(mesh_gt.triangles) - diag_gt = 
np.sqrt(sum((mesh_gt_verts.max(axis=0) - mesh_gt_verts.min(axis=0))**2)) - - mesh_pred_verts = np.asarray(new_smal_vertices) - mesh_pred_faces = np.asarray(smal_faces) - diag_pred = np.sqrt(sum((mesh_pred_verts.max(axis=0) - mesh_pred_verts.min(axis=0))**2)) - mesh_pred = o3d.geometry.TriangleMesh() - mesh_pred.vertices = o3d.utility.Vector3dVector(mesh_pred_verts) - mesh_pred.triangles = o3d.utility.Vector3iVector(mesh_pred_faces) - - # center the predicted mesh around 0 - trans = - mesh_pred_verts.mean(axis=0) - mesh_pred_verts_new = mesh_pred_verts + trans - # change the size of the predicted mesh - mesh_pred_verts_new = mesh_pred_verts_new * diag_gt / diag_pred - - # transform the predicted mesh (rough alignment) - mesh_pred_new = copy.deepcopy(mesh_pred) - mesh_pred_new.vertices = o3d.utility.Vector3dVector(np.asarray(mesh_pred_verts_new)) # normals should not have changed - voxel_size = 0.01 # 0.5 - distance_threshold = 0.015 # 0.005 # 0.02 # 1.0 - result, src_down, src_fpfh, dst_down, dst_fpfh = o3d_ransac(mesh_pred_new, mesh_gt, voxel_size=voxel_size, distance_threshold=distance_threshold, return_all=True) - transform = result.transformation - mesh_pred_transf = copy.deepcopy(mesh_pred_new).transform(transform) - - out_path_pred_transf = save_imgs_path + '/' + prefix + 'alignment_initial_' + img_name + '.obj' - o3d.io.write_triangle_mesh(out_path_pred_transf, mesh_pred_transf) - - # img_name_part = img_name.split(img_name.split('_')[-1] + '_')[0] - # out_path_gt = save_imgs_path + '/' + prefix + 'ground_truth_' + img_name_part + '.obj' - # o3d.io.write_triangle_mesh(out_path_gt, mesh_gt) - - - trans_init = transform - threshold = 0.02 # 0.1 # 0.02 - - n_points = 10000 - src = mesh_pred_new.sample_points_uniformly(number_of_points=n_points) - dst = mesh_gt.sample_points_uniformly(number_of_points=n_points) - - # reg_p2p = o3d.pipelines.registration.registration_icp(src_down, dst_down, threshold, trans_init, o3d.pipelines.registration.TransformationEstimationPointToPoint(), o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=2000)) - reg_p2p = o3d.pipelines.registration.registration_icp(src, dst, threshold, trans_init, o3d.pipelines.registration.TransformationEstimationPointToPoint(), o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=2000)) - - # mesh_pred_transf_refined = copy.deepcopy(mesh_pred_new).transform(reg_p2p.transformation) - # out_path_pred_transf_refined = save_imgs_path + '/' + prefix + 'alignment_final_' + img_name + '.obj' - # o3d.io.write_triangle_mesh(out_path_pred_transf_refined, mesh_pred_transf_refined) - - - aligned_mesh_final = trimesh.Trimesh(mesh_pred_new.vertices, mesh_pred_new.triangles, vertex_colors=[0, 255, 0]) - gt_mesh = trimesh.Trimesh(mesh_gt.vertices, mesh_gt.triangles, vertex_colors=[255, 0, 0]) - scene = trimesh.Scene([aligned_mesh_final, gt_mesh]) - out_path_alignment_with_gt = save_imgs_path + '/' + prefix + 'alignment_with_gt_' + img_name + '.obj' - - scene.export(out_path_alignment_with_gt) - ''' - - # import pdb; pdb.set_trace() - - - # SMAL_KEYPOINT_NAMES_FOR_3D_EVAL # 17 keypoints - # prepare target - target_keyp_isvalid = target_dict['keypoints_3d'][ind_img, :, 3].detach().cpu().numpy() - keyp_to_use = (np.asarray(SMAL_KEYPOINT_WHICHTOUSE_FOR_3D_EVAL)==1)*(target_keyp_isvalid==1) - target_keyp_raw = target_dict['keypoints_3d'][ind_img, :, :3].detach().cpu().numpy() - target_keypoints = target_keyp_raw[keyp_to_use, :] - target_pointcloud = target_dict['pointcloud_points'][ind_img, :, :].detach().cpu().numpy() - # 
prepare prediction - pred_keypoints_raw = output_ref['vertices_smal'][ind_img, SMAL_KEYPOINT_INDICES_FOR_3D_EVAL, :].detach().cpu().numpy() - pred_keypoints = pred_keypoints_raw[keyp_to_use, :] - pred_pointcloud = verts_remeshed[ind_img, :, :].detach().cpu().numpy() - - - - - ''' - pred_keypoints_transf, pred_pointcloud_transf, procrustes_params = compute_similarity_transform(pred_keypoints, target_keypoints, num_joints=None, verts=pred_pointcloud) - pa_error = np.sqrt(np.sum((target_keypoints - pred_keypoints_transf) ** 2, axis=1)) - error_procrustes = np.mean(pa_error) - - - col_target = np.zeros((target_pointcloud.shape[0], 3), dtype=np.uint8) - col_target[:, 0] = 255 - col_pred = np.zeros((pred_pointcloud_transf.shape[0], 3), dtype=np.uint8) - col_pred[:, 1] = 255 - pc = trimesh.points.PointCloud(np.concatenate((target_pointcloud, pred_pointcloud_transf)), colors=np.concatenate((col_target, col_pred))) - out_path_pc = save_imgs_path + '/' + prefix + 'pointclouds_aligned_' + img_name + '.obj' - pc.export(out_path_pc) - - print(target_dict['mesh_path'][ind_img]) - print(error_procrustes) - file_alignment_errors.write(target_dict['mesh_path'][ind_img] + '\n') - file_alignment_errors.write('error: ' + str(error_procrustes) + ' \n') - - writer.writerow({'name': (target_dict['mesh_path'][ind_img]).split('/')[-1], 'error': str(error_procrustes)}) - - # import pdb; pdb.set_trace() - # alignment_dict = calculate_alignemnt_errors(output_ref['vertices_smal'][ind_img, :, :], target_dict['keypoints_3d'][ind_img, :, :], target_dict['pointcloud_points'][ind_img, :, :]) - # file_alignment_errors.write('error: ' + str(alignment_dict['error_procrustes']) + ' \n') - ''' - - - - - - - if index == 0: - if len_dataset is None: - len_data = val_loader.batch_size * len(val_loader) # 1703 - else: - len_data = len_dataset - if result_network == 'normal': - summaries = {'normal': dict(), 'ref': dict()} - summary = summaries['normal'] - else: - summary = summaries['ref'] - summary['pck'] = np.zeros((len_data)) - summary['pck_by_part'] = {group:np.zeros((len_data)) for group in KEYPOINT_GROUPS} - summary['acc_sil_2d'] = np.zeros(len_data) - summary['betas'] = np.zeros((len_data,betas.shape[1])) - summary['betas_limbs'] = np.zeros((len_data, betas_limbs.shape[1])) - summary['z'] = np.zeros((len_data, zz.shape[1])) - summary['pose_rotmat'] = np.zeros((len_data, pose_rotmat.shape[1], 3, 3)) - summary['flength'] = np.zeros((len_data, flength.shape[1])) - summary['trans'] = np.zeros((len_data, trans.shape[1])) - summary['breed_indices'] = np.zeros((len_data)) - summary['image_names'] = [] # len_data * [None] - # ['vertices_smal'] = np.zeros((len_data, vertices_smal.shape[1], 3)) - else: - if result_network == 'normal': - summary = summaries['normal'] - else: - summary = summaries['ref'] - - - # import pdb; pdb.set_trace() - - - eval_save_visualizations_and_meshes(model, input, data_info, target_dict, test_name_list, vertices_smal, hg_keyp_norm, hg_keyp_scores, zz, betas, betas_limbs, pose_rotmat, trans, flength, pred_keyp, pred_silh, save_imgs_path, prefix, index, render_all=render_all) - - - preds = eval_prepare_pck_and_iou(model, input, data_info, target_dict, test_name_list, vertices_smal, hg_keyp_norm, hg_keyp_scores, zz, betas, betas_limbs, pose_rotmat, trans, flength, pred_keyp, pred_silh, save_imgs_path, prefix, index, pck_thresh=None, skip_pck_and_iou=True) - # add results for all images in this batch to lists - curr_batch_size = pred_keyp.shape[0] - eval_add_preds_to_summary(summary, preds, my_step, 
batch_size, curr_batch_size, skip_pck_and_iou=True) - - # summary['vertices_smal'][my_step * batch_size:my_step * batch_size + curr_batch_size] = vertices_smal.detach().cpu().numpy() - - - - - - - - - - - - - - - - ''' - try: - if test_name_list is not None: - img_name = test_name_list[int(target_dict['index'][ind_img].cpu().detach().numpy())].replace('/', '_') - img_name = img_name.split('.')[0] - else: - img_name = str(index) + '_' + str(ind_img) - partial_results['img_name'] = img_name - visualizations = model.render_vis_nograd(vertices=output_reproj['vertices_smal'], - focal_lengths=output_unnorm['flength'], - color=0) # 2) - # save image with predicted keypoints - pred_unp = (output['keypoints_norm'][ind_img, :, :] + 1.) / 2 * (data_info.image_size - 1) - pred_unp_maxval = output['keypoints_scores'][ind_img, :, :] - pred_unp_prep = torch.cat((pred_unp, pred_unp_maxval), 1) - inp_img = input[ind_img, :, :, :].detach().clone() - if save_imgs_path is not None: - out_path = save_imgs_path + '/keypoints_pred_' + img_name + '.png' - save_input_image_with_keypoints(inp_img, pred_unp_prep, out_path=out_path, threshold=0.1, print_scores=True, ratio_in_out=1.0) # threshold=0.3 - # save predicted 3d model - # (1) front view - pred_tex = visualizations[ind_img, :, :, :].permute((1, 2, 0)).cpu().detach().numpy() / 256 - pred_tex_max = np.max(pred_tex, axis=2) - partial_results['tex_pred'] = pred_tex - if save_imgs_path is not None: - out_path = save_imgs_path + '/tex_pred_' + img_name + '.png' - plt.imsave(out_path, pred_tex) - input_image = input[ind_img, :, :, :].detach().clone() - for t, m, s in zip(input_image, data_info.rgb_mean, data_info.rgb_stddev): t.add_(m) - input_image_np = input_image.detach().cpu().numpy().transpose(1, 2, 0) - im_masked = cv2.addWeighted(input_image_np,0.2,pred_tex,0.8,0) - im_masked[pred_tex_max<0.01, :] = input_image_np[pred_tex_max<0.01, :] - partial_results['comp_pred'] = im_masked - if save_imgs_path is not None: - out_path = save_imgs_path + '/comp_pred_' + img_name + '.png' - plt.imsave(out_path, im_masked) - # (2) side view - vertices_cent = output_reproj['vertices_smal'] - output_reproj['vertices_smal'].mean(dim=1)[:, None, :] - roll = np.pi / 2 * torch.ones(1).float().to(device) - pitch = np.pi / 2 * torch.ones(1).float().to(device) - tensor_0 = torch.zeros(1).float().to(device) - tensor_1 = torch.ones(1).float().to(device) - RX = torch.stack([torch.stack([tensor_1, tensor_0, tensor_0]), torch.stack([tensor_0, torch.cos(roll), -torch.sin(roll)]),torch.stack([tensor_0, torch.sin(roll), torch.cos(roll)])]).reshape(3,3) - RY = torch.stack([ - torch.stack([torch.cos(pitch), tensor_0, torch.sin(pitch)]), - torch.stack([tensor_0, tensor_1, tensor_0]), - torch.stack([-torch.sin(pitch), tensor_0, torch.cos(pitch)])]).reshape(3,3) - vertices_rot = (torch.matmul(RY, vertices_cent.reshape((-1, 3))[:, :, None])).reshape((batch_size, -1, 3)) - vertices_rot[:, :, 2] = vertices_rot[:, :, 2] + torch.ones_like(vertices_rot[:, :, 2]) * 20 # 18 # *16 - visualizations_rot = model.render_vis_nograd(vertices=vertices_rot, - focal_lengths=output_unnorm['flength'], - color=0) # 2) - pred_tex = visualizations_rot[ind_img, :, :, :].permute((1, 2, 0)).cpu().detach().numpy() / 256 - pred_tex_max = np.max(pred_tex, axis=2) - partial_results['rot_tex_pred'] = pred_tex - if save_imgs_path is not None: - out_path = save_imgs_path + '/rot_tex_pred_' + img_name + '.png' - plt.imsave(out_path, pred_tex) - render_all = True - if render_all: - # save input image - inp_img = input[ind_img, :, 
:, :].detach().clone() - if save_imgs_path is not None: - out_path = save_imgs_path + '/image_' + img_name + '.png' - save_input_image(inp_img, out_path) - # save posed mesh - V_posed = output_reproj['vertices_smal'][ind_img, :, :].detach().cpu().numpy() - Faces = model.smal.f - mesh_posed = trimesh.Trimesh(vertices=V_posed, faces=Faces, process=False, maintain_order=True) - partial_results['mesh_posed'] = mesh_posed - if save_imgs_path is not None: - mesh_posed.export(save_imgs_path + '/mesh_posed_' + img_name + '.obj') - except: - print('pass...') - all_results.append(partial_results) - ''' - - my_step += 1 - - - file_alignment_errors.close() - csv_file_alignment_errors.close() - - - if return_results: - return all_results - else: - return summaries \ No newline at end of file diff --git a/spaces/safi842/FashionGen/netdissect/tool/makesample.py b/spaces/safi842/FashionGen/netdissect/tool/makesample.py deleted file mode 100644 index 36276267677360d8238a8dbf71e9753dcc327681..0000000000000000000000000000000000000000 --- a/spaces/safi842/FashionGen/netdissect/tool/makesample.py +++ /dev/null @@ -1,169 +0,0 @@ -''' -A simple tool to generate sample of output of a GAN, -subject to filtering, sorting, or intervention. -''' - -import torch, numpy, os, argparse, numbers, sys, shutil -from PIL import Image -from torch.utils.data import TensorDataset -from netdissect.zdataset import standard_z_sample -from netdissect.progress import default_progress, verbose_progress -from netdissect.autoeval import autoimport_eval -from netdissect.workerpool import WorkerBase, WorkerPool -from netdissect.nethook import edit_layers, retain_layers - -def main(): - parser = argparse.ArgumentParser(description='GAN sample making utility') - parser.add_argument('--model', type=str, default=None, - help='constructor for the model to test') - parser.add_argument('--pthfile', type=str, default=None, - help='filename of .pth file for the model') - parser.add_argument('--outdir', type=str, default='images', - help='directory for image output') - parser.add_argument('--size', type=int, default=100, - help='number of images to output') - parser.add_argument('--test_size', type=int, default=None, - help='number of images to test') - parser.add_argument('--layer', type=str, default=None, - help='layer to inspect') - parser.add_argument('--seed', type=int, default=1, - help='seed') - parser.add_argument('--maximize_units', type=int, nargs='+', default=None, - help='units to maximize') - parser.add_argument('--ablate_units', type=int, nargs='+', default=None, - help='units to ablate') - parser.add_argument('--quiet', action='store_true', default=False, - help='silences console output') - if len(sys.argv) == 1: - parser.print_usage(sys.stderr) - sys.exit(1) - args = parser.parse_args() - verbose_progress(not args.quiet) - - # Instantiate the model - model = autoimport_eval(args.model) - if args.pthfile is not None: - data = torch.load(args.pthfile) - if 'state_dict' in data: - meta = {} - for key in data: - if isinstance(data[key], numbers.Number): - meta[key] = data[key] - data = data['state_dict'] - model.load_state_dict(data) - # Unwrap any DataParallel-wrapped model - if isinstance(model, torch.nn.DataParallel): - model = next(model.children()) - # Examine first conv in model to determine input feature size. - first_layer = [c for c in model.modules() - if isinstance(c, (torch.nn.Conv2d, torch.nn.ConvTranspose2d, - torch.nn.Linear))][0] - # 4d input if convolutional, 2d input if first layer is linear. 
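-    # A Conv2d/ConvTranspose2d first layer means the generator expects z as a [N, C, 1, 1]
-    # tensor, so spatialdims is set to (1, 1); a Linear first layer expects a flat [N, C] batch.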
- if isinstance(first_layer, (torch.nn.Conv2d, torch.nn.ConvTranspose2d)): - z_channels = first_layer.in_channels - spatialdims = (1, 1) - else: - z_channels = first_layer.in_features - spatialdims = () - # Instrument the model if needed - if args.maximize_units is not None: - retain_layers(model, [args.layer]) - model.cuda() - - # Get the sample of z vectors - if args.maximize_units is None: - indexes = torch.arange(args.size) - z_sample = standard_z_sample(args.size, z_channels, seed=args.seed) - z_sample = z_sample.view(tuple(z_sample.shape) + spatialdims) - else: - # By default, if maximizing units, get a 'top 5%' sample. - if args.test_size is None: - args.test_size = args.size * 20 - z_universe = standard_z_sample(args.test_size, z_channels, - seed=args.seed) - z_universe = z_universe.view(tuple(z_universe.shape) + spatialdims) - indexes = get_highest_znums(model, z_universe, args.maximize_units, - args.size, seed=args.seed) - z_sample = z_universe[indexes] - - if args.ablate_units: - edit_layers(model, [args.layer]) - dims = max(2, max(args.ablate_units) + 1) # >=2 to avoid broadcast - model.ablation[args.layer] = torch.zeros(dims) - model.ablation[args.layer][args.ablate_units] = 1 - - save_znum_images(args.outdir, model, z_sample, indexes, - args.layer, args.ablate_units) - copy_lightbox_to(args.outdir) - - -def get_highest_znums(model, z_universe, max_units, size, - batch_size=100, seed=1): - # The model should have been instrumented already - retained_items = list(model.retained.items()) - assert len(retained_items) == 1 - layer = retained_items[0][0] - # By default, a 10% sample - progress = default_progress() - num_units = None - with torch.no_grad(): - # Pass 1: collect max activation stats - z_loader = torch.utils.data.DataLoader(TensorDataset(z_universe), - batch_size=batch_size, num_workers=2, - pin_memory=True) - scores = [] - for [z] in progress(z_loader, desc='Finding max activations'): - z = z.cuda() - model(z) - feature = model.retained[layer] - num_units = feature.shape[1] - max_feature = feature[:, max_units, ...].view( - feature.shape[0], len(max_units), -1).max(2)[0] - total_feature = max_feature.sum(1) - scores.append(total_feature.cpu()) - scores = torch.cat(scores, 0) - highest = (-scores).sort(0)[1][:size].sort(0)[0] - return highest - - -def save_znum_images(dirname, model, z_sample, indexes, layer, ablated_units, - name_template="image_{}.png", lightbox=False, batch_size=100, seed=1): - progress = default_progress() - os.makedirs(dirname, exist_ok=True) - with torch.no_grad(): - # Pass 2: now generate images - z_loader = torch.utils.data.DataLoader(TensorDataset(z_sample), - batch_size=batch_size, num_workers=2, - pin_memory=True) - saver = WorkerPool(SaveImageWorker) - if ablated_units is not None: - dims = max(2, max(ablated_units) + 1) # >=2 to avoid broadcast - mask = torch.zeros(dims) - mask[ablated_units] = 1 - model.ablation[layer] = mask[None,:,None,None].cuda() - for batch_num, [z] in enumerate(progress(z_loader, - desc='Saving images')): - z = z.cuda() - start_index = batch_num * batch_size - im = ((model(z) + 1) / 2 * 255).clamp(0, 255).byte().permute( - 0, 2, 3, 1).cpu() - for i in range(len(im)): - index = i + start_index - if indexes is not None: - index = indexes[index].item() - filename = os.path.join(dirname, name_template.format(index)) - saver.add(im[i].numpy(), filename) - saver.join() - -def copy_lightbox_to(dirname): - srcdir = os.path.realpath( - os.path.join(os.getcwd(), os.path.dirname(__file__))) - shutil.copy(os.path.join(srcdir, 
'lightbox.html'), - os.path.join(dirname, '+lightbox.html')) - -class SaveImageWorker(WorkerBase): - def work(self, data, filename): - Image.fromarray(data).save(filename, optimize=True, quality=100) - -if __name__ == '__main__': - main() diff --git a/spaces/saitejad/llama-2-gen-with-speech/README.md b/spaces/saitejad/llama-2-gen-with-speech/README.md deleted file mode 100644 index db64224126b2c932b536a7395e0d2cf6aecedcef..0000000000000000000000000000000000000000 --- a/spaces/saitejad/llama-2-gen-with-speech/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Llama 2 Gen With Speech -emoji: 👀 -colorFrom: green -colorTo: gray -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/scedlatioru/img-to-music/example/A4tech X7 G800v Driver Download !!EXCLUSIVE!!.md b/spaces/scedlatioru/img-to-music/example/A4tech X7 G800v Driver Download !!EXCLUSIVE!!.md deleted file mode 100644 index d2cae26c00bef549c13656b70846eb117449d818..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/A4tech X7 G800v Driver Download !!EXCLUSIVE!!.md +++ /dev/null @@ -1,6 +0,0 @@ -

          a4tech x7 g800v driver download


Download Zip: https://gohhs.com/2uEzum



          -
          -Download driver x7 mouse Free oscar x7 mouse driver download software at Update. ... Download A4TECH X7 GAMING MOUSE XL-750BK OSCAR LASER GAMING ... G800MU G800V X7 G100 X7 G300 X7 G600 X7 G700 PK-130MG. T6. 1fdad05405
          -
          -
          -

          diff --git a/spaces/scedlatioru/img-to-music/example/Download Crystaldecisions Crystalreports Engine Version 10.2 3600.0 Empleo Parque Anthony Stepp.md b/spaces/scedlatioru/img-to-music/example/Download Crystaldecisions Crystalreports Engine Version 10.2 3600.0 Empleo Parque Anthony Stepp.md deleted file mode 100644 index 3c8ad500ab5844fcff2e8ec93fc7b18181df2197..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Download Crystaldecisions Crystalreports Engine Version 10.2 3600.0 Empleo Parque Anthony Stepp.md +++ /dev/null @@ -1,11 +0,0 @@ -

          Download Crystaldecisions Crystalreports Engine Version 10.2 3600.0 empleo parque anthony stepp


Download File: https://gohhs.com/2uEzgl



          -
-Nov 24, 2015 - MultiMC: free launcher for Minecraft, latest version: a free custom launcher for Minecraft. Hacked launcher for the sandbox video game. Download ... -Download free MultiMC for Minecraft 1.3.2 - Free Download. -Download free MultiMC for Minecraft 1.2.2 - Free Download. -Download free MultiMC for Minecraft 1.3.2 - Download for free. 8a78ff9644
          -
          -
          -

          diff --git a/spaces/scedlatioru/img-to-music/example/Mt8870 Proteus Lib Download.md b/spaces/scedlatioru/img-to-music/example/Mt8870 Proteus Lib Download.md deleted file mode 100644 index de6b84ae96c7eafb585f395c09e90689a30e591b..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Mt8870 Proteus Lib Download.md +++ /dev/null @@ -1,13 +0,0 @@ -

          Mt8870 Proteus Lib Download


DOWNLOAD: https://gohhs.com/2uEA2V



- -I am doing a project using the MT8870 DTMF decoder IC and I want to simulate everything in... Download the MT8870 Proteus library, then import it. 1 vote, 1 thanks. How do I get an MT8870-based DTMF decoder IC? -I want to simulate everything in the IC's DTMF decoder. -I also got the DTMF and MT8870 libraries but can't find where to use them. -I have also downloaded the MT8870 Proteus library and I am importing it. -But I can't find a way to use it. -Thanks. 1 answer -reidas: The easiest way is to use the DTMF IC library, which can also decode DTMF data loaded from the IC. -You don't need to import anything. 8a78ff9644
          -
          -
          -

          diff --git a/spaces/scedlatioru/img-to-music/example/Psg Request Crack 2.1.3.3.rar !LINK!.md b/spaces/scedlatioru/img-to-music/example/Psg Request Crack 2.1.3.3.rar !LINK!.md deleted file mode 100644 index 9926de8a2edd67b4f596ec5036cc588b9acc0fee..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Psg Request Crack 2.1.3.3.rar !LINK!.md +++ /dev/null @@ -1,14 +0,0 @@ -

          psg request crack 2.1.3.3.rar


          DOWNLOAD === https://gohhs.com/2uEAH3



          - -During the years of production, the facility itself improved the quality of its products; the number of errors and faults in quality went down as the factory staff became better experienced. The development of the error-free process was closely linked to the improvements in the quality assurance of the production facility. - -In the 1970s, progress on quality-related issues gained momentum. The factory acquired the first integrated quality assurance system (IQUAS), and the first factory-wide quality system, the Quality Control (QC) plan and operating procedures, was developed. The material flow control system enabled the systematical generation of quality reports for the process and product-related control functions. A performance appraisal system was set up in 1979, which used a five-point rating system for the quality-related work of the employees. The first general and individual action plan for achieving better quality was developed in 1980. The first step towards systematic improvement was taken, the determination of basic requirements for a quality improvement plan and the implementation of a minimum basic process for each manufacturing process. The first set of Quality Indicators for process- and product-related quality was developed and put into use in 1983. By 1986, most of the basic requirements for the Quality Management were defined, and by 1989, the requirements for an independent Quality Assurance function were specified. - -In 1989, a first development of the Quality Function Approach (QFA) was developed. However, the first real implementation of the QFA in an industrial plant did not happen before the end of the 1990s. One of the factors that delayed the implementation of the QFA was that the high degree of automation that was being developed at the factory also made it difficult to assign a sufficient human resources to all organizational tasks. - -Concerning the development of the QFA, the industrial production function in general, and quality-related processes in particular, were represented by a network of coordination and cooperation, rather than a function that was hierarchically organized. In this process, an innovation approach, based on the dynamic development of the process in a self-regulating system, was used. At first, the process was characterized by non-linear interrelationships among the involved actors. However, as the process was shaped by the implementation of the QFA, there was a gradual shift from non-linear to linear processes. - -In the 1990s, the attention of the factory management was focused on improvement of productivity, efficiency and profitability of production, as well as on the internal and external effectiveness of the products 4fefd39f24
          -
          -
          -

          diff --git a/spaces/scedlatioru/img-to-music/example/Ravenfield Beta 5 Mods.md b/spaces/scedlatioru/img-to-music/example/Ravenfield Beta 5 Mods.md deleted file mode 100644 index 29c2c0b63ea8b34b6c56e0017c75bf562566d2a2..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Ravenfield Beta 5 Mods.md +++ /dev/null @@ -1,6 +0,0 @@ -

          ravenfield beta 5 mods


Download File: https://gohhs.com/2uEAkg



          -
          -A quick guide on how to get Ravenfield mods from the Steam Workshop! This will show you how to access... https://steamcommunity.com/sharedfiles/filedetails/?id=1776143556 8a78ff9644
          -
          -
          -

          diff --git a/spaces/segments-tobias/conex/espnet2/utils/nested_dict_action.py b/spaces/segments-tobias/conex/espnet2/utils/nested_dict_action.py deleted file mode 100644 index 38ec57b31d0a6997ccf276c07dc3ba95ee1b7f78..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet2/utils/nested_dict_action.py +++ /dev/null @@ -1,106 +0,0 @@ -import argparse -import copy - -import yaml - - -class NestedDictAction(argparse.Action): - """Action class to append items to dict object. - - Examples: - >>> parser = argparse.ArgumentParser() - >>> _ = parser.add_argument('--conf', action=NestedDictAction, - ... default={'a': 4}) - >>> parser.parse_args(['--conf', 'a=3', '--conf', 'c=4']) - Namespace(conf={'a': 3, 'c': 4}) - >>> parser.parse_args(['--conf', 'c.d=4']) - Namespace(conf={'a': 4, 'c': {'d': 4}}) - >>> parser.parse_args(['--conf', 'c.d=4', '--conf', 'c=2']) - Namespace(conf={'a': 4, 'c': 2}) - >>> parser.parse_args(['--conf', '{d: 5, e: 9}']) - Namespace(conf={'d': 5, 'e': 9}) - - """ - - _syntax = """Syntax: - {op} = - {op} .= - {op} - {op} -e.g. - {op} a=4 - {op} a.b={{c: true}} - {op} {{"c": True}} - {op} {{a: 34.5}} -""" - - def __init__( - self, - option_strings, - dest, - nargs=None, - default=None, - choices=None, - required=False, - help=None, - metavar=None, - ): - super().__init__( - option_strings=option_strings, - dest=dest, - nargs=nargs, - default=copy.deepcopy(default), - type=None, - choices=choices, - required=required, - help=help, - metavar=metavar, - ) - - def __call__(self, parser, namespace, values, option_strings=None): - # --{option} a.b=3 -> {'a': {'b': 3}} - if "=" in values: - indict = copy.deepcopy(getattr(namespace, self.dest, {})) - key, value = values.split("=", maxsplit=1) - if not value.strip() == "": - value = yaml.load(value, Loader=yaml.Loader) - if not isinstance(indict, dict): - indict = {} - - keys = key.split(".") - d = indict - for idx, k in enumerate(keys): - if idx == len(keys) - 1: - d[k] = value - else: - if not isinstance(d.setdefault(k, {}), dict): - # Remove the existing value and recreates as empty dict - d[k] = {} - d = d[k] - - # Update the value - setattr(namespace, self.dest, indict) - else: - try: - # At the first, try eval(), i.e. Python syntax dict. - # e.g. --{option} "{'a': 3}" -> {'a': 3} - # This is workaround for internal behaviour of configargparse. 
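-                # YAML-style values such as {a: 3} are not valid Python literals, so eval()
-                # raises here and the except-branch below falls back to yaml.load instead.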
- value = eval(values, {}, {}) - if not isinstance(value, dict): - syntax = self._syntax.format(op=option_strings) - mes = f"must be interpreted as dict: but got {values}\n{syntax}" - raise argparse.ArgumentTypeError(self, mes) - except Exception: - # and the second, try yaml.load - value = yaml.load(values, Loader=yaml.Loader) - if not isinstance(value, dict): - syntax = self._syntax.format(op=option_strings) - mes = f"must be interpreted as dict: but got {values}\n{syntax}" - raise argparse.ArgumentError(self, mes) - - d = getattr(namespace, self.dest, None) - if isinstance(d, dict): - d.update(value) - else: - # Remove existing params, and overwrite - setattr(namespace, self.dest, value) diff --git a/spaces/serdaryildiz/TRCaptionNet/Model/clip/model.py b/spaces/serdaryildiz/TRCaptionNet/Model/clip/model.py deleted file mode 100644 index 7d084e7a3e520c02eea14becd690b181d0c84b28..0000000000000000000000000000000000000000 --- a/spaces/serdaryildiz/TRCaptionNet/Model/clip/model.py +++ /dev/null @@ -1,437 +0,0 @@ -from collections import OrderedDict -from typing import Tuple, Union - -import numpy as np -import torch -import torch.nn.functional as F -from torch import nn - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, inplanes, planes, stride=1): - super().__init__() - - # all conv layers have stride 1. an avgpool is performed after the second convolution when stride > 1 - self.conv1 = nn.Conv2d(inplanes, planes, 1, bias=False) - self.bn1 = nn.BatchNorm2d(planes) - self.relu1 = nn.ReLU(inplace=True) - - self.conv2 = nn.Conv2d(planes, planes, 3, padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(planes) - self.relu2 = nn.ReLU(inplace=True) - - self.avgpool = nn.AvgPool2d(stride) if stride > 1 else nn.Identity() - - self.conv3 = nn.Conv2d(planes, planes * self.expansion, 1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * self.expansion) - self.relu3 = nn.ReLU(inplace=True) - - self.downsample = None - self.stride = stride - - if stride > 1 or inplanes != planes * Bottleneck.expansion: - # downsampling layer is prepended with an avgpool, and the subsequent convolution has stride 1 - self.downsample = nn.Sequential(OrderedDict([ - ("-1", nn.AvgPool2d(stride)), - ("0", nn.Conv2d(inplanes, planes * self.expansion, 1, stride=1, bias=False)), - ("1", nn.BatchNorm2d(planes * self.expansion)) - ])) - - def forward(self, x: torch.Tensor): - identity = x - - out = self.relu1(self.bn1(self.conv1(x))) - out = self.relu2(self.bn2(self.conv2(out))) - out = self.avgpool(out) - out = self.bn3(self.conv3(out)) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.relu3(out) - return out - - -class AttentionPool2d(nn.Module): - def __init__(self, spacial_dim: int, embed_dim: int, num_heads: int, output_dim: int = None): - super().__init__() - self.positional_embedding = nn.Parameter(torch.randn(spacial_dim ** 2 + 1, embed_dim) / embed_dim ** 0.5) - self.k_proj = nn.Linear(embed_dim, embed_dim) - self.q_proj = nn.Linear(embed_dim, embed_dim) - self.v_proj = nn.Linear(embed_dim, embed_dim) - self.c_proj = nn.Linear(embed_dim, output_dim or embed_dim) - self.num_heads = num_heads - - def forward(self, x): - x = x.reshape(x.shape[0], x.shape[1], x.shape[2] * x.shape[3]).permute(2, 0, 1) # NCHW -> (HW)NC - x = torch.cat([x.mean(dim=0, keepdim=True), x], dim=0) # (HW+1)NC - x = x + self.positional_embedding[:, None, :].to(x.dtype) # (HW+1)NC - x, _ = F.multi_head_attention_forward( - query=x, key=x, value=x, - embed_dim_to_check=x.shape[-1], - 
num_heads=self.num_heads, - q_proj_weight=self.q_proj.weight, - k_proj_weight=self.k_proj.weight, - v_proj_weight=self.v_proj.weight, - in_proj_weight=None, - in_proj_bias=torch.cat([self.q_proj.bias, self.k_proj.bias, self.v_proj.bias]), - bias_k=None, - bias_v=None, - add_zero_attn=False, - dropout_p=0, - out_proj_weight=self.c_proj.weight, - out_proj_bias=self.c_proj.bias, - use_separate_proj_weight=True, - training=self.training, - need_weights=False - ) - - return x[0] - - -class ModifiedResNet(nn.Module): - """ - A ResNet class that is similar to torchvision's but contains the following changes: - - There are now 3 "stem" convolutions as opposed to 1, with an average pool instead of a max pool. - - Performs anti-aliasing strided convolutions, where an avgpool is prepended to convolutions with stride > 1 - - The final pooling layer is a QKV attention instead of an average pool - """ - - def __init__(self, layers, output_dim, heads, input_resolution=224, width=64): - super().__init__() - self.output_dim = output_dim - self.input_resolution = input_resolution - - # the 3-layer stem - self.conv1 = nn.Conv2d(3, width // 2, kernel_size=3, stride=2, padding=1, bias=False) - self.bn1 = nn.BatchNorm2d(width // 2) - self.relu1 = nn.ReLU(inplace=True) - self.conv2 = nn.Conv2d(width // 2, width // 2, kernel_size=3, padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(width // 2) - self.relu2 = nn.ReLU(inplace=True) - self.conv3 = nn.Conv2d(width // 2, width, kernel_size=3, padding=1, bias=False) - self.bn3 = nn.BatchNorm2d(width) - self.relu3 = nn.ReLU(inplace=True) - self.avgpool = nn.AvgPool2d(2) - - # residual layers - self._inplanes = width # this is a *mutable* variable used during construction - self.layer1 = self._make_layer(width, layers[0]) - self.layer2 = self._make_layer(width * 2, layers[1], stride=2) - self.layer3 = self._make_layer(width * 4, layers[2], stride=2) - self.layer4 = self._make_layer(width * 8, layers[3], stride=2) - - embed_dim = width * 32 # the ResNet feature dimension - self.attnpool = AttentionPool2d(input_resolution // 32, embed_dim, heads, output_dim) - - def _make_layer(self, planes, blocks, stride=1): - layers = [Bottleneck(self._inplanes, planes, stride)] - - self._inplanes = planes * Bottleneck.expansion - for _ in range(1, blocks): - layers.append(Bottleneck(self._inplanes, planes)) - - return nn.Sequential(*layers) - - def forward(self, x): - def stem(x): - x = self.relu1(self.bn1(self.conv1(x))) - x = self.relu2(self.bn2(self.conv2(x))) - x = self.relu3(self.bn3(self.conv3(x))) - x = self.avgpool(x) - return x - - x = x.type(self.conv1.weight.dtype) - x = stem(x) - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - x = self.attnpool(x) - - return x - - -class LayerNorm(nn.LayerNorm): - """Subclass torch's LayerNorm to handle fp16.""" - - def forward(self, x: torch.Tensor): - orig_type = x.dtype - ret = super().forward(x.type(torch.float32)) - return ret.type(orig_type) - - -class QuickGELU(nn.Module): - def forward(self, x: torch.Tensor): - return x * torch.sigmoid(1.702 * x) - - -class ResidualAttentionBlock(nn.Module): - def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None): - super().__init__() - - self.attn = nn.MultiheadAttention(d_model, n_head) - self.ln_1 = LayerNorm(d_model) - self.mlp = nn.Sequential(OrderedDict([ - ("c_fc", nn.Linear(d_model, d_model * 4)), - ("gelu", QuickGELU()), - ("c_proj", nn.Linear(d_model * 4, d_model)) - ])) - self.ln_2 = LayerNorm(d_model) - self.attn_mask = 
attn_mask - - def attention(self, x: torch.Tensor): - self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None - return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0] - - def forward(self, x: torch.Tensor): - x = x + self.attention(self.ln_1(x)) - x = x + self.mlp(self.ln_2(x)) - return x - - -class Transformer(nn.Module): - def __init__(self, width: int, layers: int, heads: int, attn_mask: torch.Tensor = None): - super().__init__() - self.width = width - self.layers = layers - self.resblocks = nn.Sequential(*[ResidualAttentionBlock(width, heads, attn_mask) for _ in range(layers)]) - - def forward(self, x: torch.Tensor): - return self.resblocks(x) - - -class VisionTransformer(nn.Module): - def __init__(self, input_resolution: int, patch_size: int, width: int, layers: int, heads: int, output_dim: int): - super().__init__() - self.input_resolution = input_resolution - self.output_dim = output_dim - self.conv1 = nn.Conv2d(in_channels=3, out_channels=width, kernel_size=patch_size, stride=patch_size, bias=False) - - scale = width ** -0.5 - self.class_embedding = nn.Parameter(scale * torch.randn(width)) - self.positional_embedding = nn.Parameter(scale * torch.randn((input_resolution // patch_size) ** 2 + 1, width)) - self.ln_pre = LayerNorm(width) - - self.transformer = Transformer(width, layers, heads) - - self.ln_post = LayerNorm(width) - self.proj = nn.Parameter(scale * torch.randn(width, output_dim)) - - def forward(self, x: torch.Tensor): - x = self.conv1(x) # shape = [*, width, grid, grid] - x = x.reshape(x.shape[0], x.shape[1], -1) # shape = [*, width, grid ** 2] - x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width] - x = torch.cat([self.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device), x], dim=1) # shape = [*, grid ** 2 + 1, width] - x = x + self.positional_embedding.to(x.dtype) - x = self.ln_pre(x) - - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer(x) - x = x.permute(1, 0, 2) # LND -> NLD - - # x = self.ln_post(x[:, 0, :]) - # - # if self.proj is not None: - # x = x @ self.proj - - return x - - -class CLIP(nn.Module): - def __init__(self, - embed_dim: int, - # vision - image_resolution: int, - vision_layers: Union[Tuple[int, int, int, int], int], - vision_width: int, - vision_patch_size: int, - # text - context_length: int, - vocab_size: int, - transformer_width: int, - transformer_heads: int, - transformer_layers: int - ): - super().__init__() - - self.context_length = context_length - - if isinstance(vision_layers, (tuple, list)): - vision_heads = vision_width * 32 // 64 - self.visual = ModifiedResNet( - layers=vision_layers, - output_dim=embed_dim, - heads=vision_heads, - input_resolution=image_resolution, - width=vision_width - ) - else: - vision_heads = vision_width // 64 - self.visual = VisionTransformer( - input_resolution=image_resolution, - patch_size=vision_patch_size, - width=vision_width, - layers=vision_layers, - heads=vision_heads, - output_dim=embed_dim - ) - - self.transformer = Transformer( - width=transformer_width, - layers=transformer_layers, - heads=transformer_heads, - attn_mask=self.build_attention_mask() - ) - - self.vocab_size = vocab_size - self.token_embedding = nn.Embedding(vocab_size, transformer_width) - self.positional_embedding = nn.Parameter(torch.empty(self.context_length, transformer_width)) - self.ln_final = LayerNorm(transformer_width) - - self.text_projection = nn.Parameter(torch.empty(transformer_width, 
embed_dim)) - self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / 0.07)) - - self.initialize_parameters() - - def initialize_parameters(self): - nn.init.normal_(self.token_embedding.weight, std=0.02) - nn.init.normal_(self.positional_embedding, std=0.01) - - if isinstance(self.visual, ModifiedResNet): - if self.visual.attnpool is not None: - std = self.visual.attnpool.c_proj.in_features ** -0.5 - nn.init.normal_(self.visual.attnpool.q_proj.weight, std=std) - nn.init.normal_(self.visual.attnpool.k_proj.weight, std=std) - nn.init.normal_(self.visual.attnpool.v_proj.weight, std=std) - nn.init.normal_(self.visual.attnpool.c_proj.weight, std=std) - - for resnet_block in [self.visual.layer1, self.visual.layer2, self.visual.layer3, self.visual.layer4]: - for name, param in resnet_block.named_parameters(): - if name.endswith("bn3.weight"): - nn.init.zeros_(param) - - proj_std = (self.transformer.width ** -0.5) * ((2 * self.transformer.layers) ** -0.5) - attn_std = self.transformer.width ** -0.5 - fc_std = (2 * self.transformer.width) ** -0.5 - for block in self.transformer.resblocks: - nn.init.normal_(block.attn.in_proj_weight, std=attn_std) - nn.init.normal_(block.attn.out_proj.weight, std=proj_std) - nn.init.normal_(block.mlp.c_fc.weight, std=fc_std) - nn.init.normal_(block.mlp.c_proj.weight, std=proj_std) - - if self.text_projection is not None: - nn.init.normal_(self.text_projection, std=self.transformer.width ** -0.5) - - def build_attention_mask(self): - # lazily create causal attention mask, with full attention between the vision tokens - # pytorch uses additive attention mask; fill with -inf - mask = torch.empty(self.context_length, self.context_length) - mask.fill_(float("-inf")) - mask.triu_(1) # zero out the lower diagonal - return mask - - @property - def dtype(self): - return self.visual.conv1.weight.dtype - - def encode_image(self, image): - return self.visual(image.type(self.dtype)) - - def encode_text(self, text): - x = self.token_embedding(text).type(self.dtype) # [batch_size, n_ctx, d_model] - - x = x + self.positional_embedding.type(self.dtype) - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer(x) - x = x.permute(1, 0, 2) # LND -> NLD - x = self.ln_final(x).type(self.dtype) - - # x.shape = [batch_size, n_ctx, transformer.width] - # take features from the eot embedding (eot_token is the highest number in each sequence) - x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection - - return x - - def forward(self, image, text): - image_features = self.encode_image(image) - text_features = self.encode_text(text) - - # normalized features - image_features = image_features / image_features.norm(dim=1, keepdim=True) - text_features = text_features / text_features.norm(dim=1, keepdim=True) - - # cosine similarity as logits - logit_scale = self.logit_scale.exp() - logits_per_image = logit_scale * image_features @ text_features.t() - logits_per_text = logits_per_image.t() - - # shape = [global_batch_size, global_batch_size] - return logits_per_image, logits_per_text - - -def convert_weights(model: nn.Module): - """Convert applicable model parameters to fp16""" - - def _convert_weights_to_fp16(l): - if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Linear)): - l.weight.data = l.weight.data.half() - if l.bias is not None: - l.bias.data = l.bias.data.half() - - if isinstance(l, nn.MultiheadAttention): - for attr in [*[f"{s}_proj_weight" for s in ["in", "q", "k", "v"]], "in_proj_bias", "bias_k", "bias_v"]: - tensor = getattr(l, attr) - if tensor is not None: - 
tensor.data = tensor.data.half() - - for name in ["text_projection", "proj"]: - if hasattr(l, name): - attr = getattr(l, name) - if attr is not None: - attr.data = attr.data.half() - - model.apply(_convert_weights_to_fp16) - - -def build_model(state_dict: dict): - vit = "visual.proj" in state_dict - - if vit: - vision_width = state_dict["visual.conv1.weight"].shape[0] - vision_layers = len([k for k in state_dict.keys() if k.startswith("visual.") and k.endswith(".attn.in_proj_weight")]) - vision_patch_size = state_dict["visual.conv1.weight"].shape[-1] - grid_size = round((state_dict["visual.positional_embedding"].shape[0] - 1) ** 0.5) - image_resolution = vision_patch_size * grid_size - else: - counts: list = [len(set(k.split(".")[2] for k in state_dict if k.startswith(f"visual.layer{b}"))) for b in [1, 2, 3, 4]] - vision_layers = tuple(counts) - vision_width = state_dict["visual.layer1.0.conv1.weight"].shape[0] - output_width = round((state_dict["visual.attnpool.positional_embedding"].shape[0] - 1) ** 0.5) - vision_patch_size = None - assert output_width ** 2 + 1 == state_dict["visual.attnpool.positional_embedding"].shape[0] - image_resolution = output_width * 32 - - embed_dim = state_dict["text_projection"].shape[1] - context_length = state_dict["positional_embedding"].shape[0] - vocab_size = state_dict["token_embedding.weight"].shape[0] - transformer_width = state_dict["ln_final.weight"].shape[0] - transformer_heads = transformer_width // 64 - transformer_layers = len(set(k.split(".")[2] for k in state_dict if k.startswith(f"transformer.resblocks"))) - - model = CLIP( - embed_dim, - image_resolution, vision_layers, vision_width, vision_patch_size, - context_length, vocab_size, transformer_width, transformer_heads, transformer_layers - ) - - for key in ["input_resolution", "context_length", "vocab_size"]: - if key in state_dict: - del state_dict[key] - - convert_weights(model) - model.load_state_dict(state_dict) - return model.eval() diff --git a/spaces/shabnam91/Sanskrit-TTS/model_modules/attentions.py b/spaces/shabnam91/Sanskrit-TTS/model_modules/attentions.py deleted file mode 100644 index 86123bf662b92402ecd7baa5d5b76928930f7c49..0000000000000000000000000000000000000000 --- a/spaces/shabnam91/Sanskrit-TTS/model_modules/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -from model_modules import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x 
= x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - 
nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. 
- pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/Retrieval-based-Voice-Conversion-WebUI/uvr5_pack/lib_v5/dataset.py b/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/Retrieval-based-Voice-Conversion-WebUI/uvr5_pack/lib_v5/dataset.py deleted file mode 100644 index ba0e45be1e8878da0b07eb2128e218bbd7de82ef..0000000000000000000000000000000000000000 --- a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/Retrieval-based-Voice-Conversion-WebUI/uvr5_pack/lib_v5/dataset.py +++ /dev/null @@ -1,183 +0,0 @@ -import os -import random - -import numpy as np -import torch -import torch.utils.data -from tqdm import tqdm - -from uvr5_pack.lib_v5 import spec_utils - - -class VocalRemoverValidationSet(torch.utils.data.Dataset): - def __init__(self, patch_list): - self.patch_list = patch_list - - def __len__(self): - return len(self.patch_list) - - def __getitem__(self, idx): - path = self.patch_list[idx] - data = np.load(path) - - X, y = data["X"], data["y"] - - X_mag = np.abs(X) - y_mag = np.abs(y) - - return X_mag, y_mag - - -def make_pair(mix_dir, inst_dir): - input_exts = [".wav", ".m4a", ".mp3", ".mp4", ".flac"] - - X_list = sorted( - [ - os.path.join(mix_dir, fname) - for fname in os.listdir(mix_dir) - if os.path.splitext(fname)[1] in input_exts - ] - ) - y_list = sorted( - [ - os.path.join(inst_dir, fname) - for fname in os.listdir(inst_dir) - if os.path.splitext(fname)[1] in input_exts - ] - ) - - filelist = list(zip(X_list, y_list)) - - return filelist - - -def train_val_split(dataset_dir, split_mode, val_rate, val_filelist): - if split_mode == "random": - filelist = make_pair( - os.path.join(dataset_dir, "mixtures"), - os.path.join(dataset_dir, "instruments"), - ) - - random.shuffle(filelist) - - if len(val_filelist) == 0: - val_size = int(len(filelist) * val_rate) - train_filelist = filelist[:-val_size] - val_filelist = filelist[-val_size:] - else: - 
train_filelist = [ - pair for pair in filelist if list(pair) not in val_filelist - ] - elif split_mode == "subdirs": - if len(val_filelist) != 0: - raise ValueError( - "The `val_filelist` option is not available in `subdirs` mode" - ) - - train_filelist = make_pair( - os.path.join(dataset_dir, "training/mixtures"), - os.path.join(dataset_dir, "training/instruments"), - ) - - val_filelist = make_pair( - os.path.join(dataset_dir, "validation/mixtures"), - os.path.join(dataset_dir, "validation/instruments"), - ) - - return train_filelist, val_filelist - - -def augment(X, y, reduction_rate, reduction_mask, mixup_rate, mixup_alpha): - perm = np.random.permutation(len(X)) - for i, idx in enumerate(tqdm(perm)): - if np.random.uniform() < reduction_rate: - y[idx] = spec_utils.reduce_vocal_aggressively( - X[idx], y[idx], reduction_mask - ) - - if np.random.uniform() < 0.5: - # swap channel - X[idx] = X[idx, ::-1] - y[idx] = y[idx, ::-1] - if np.random.uniform() < 0.02: - # mono - X[idx] = X[idx].mean(axis=0, keepdims=True) - y[idx] = y[idx].mean(axis=0, keepdims=True) - if np.random.uniform() < 0.02: - # inst - X[idx] = y[idx] - - if np.random.uniform() < mixup_rate and i < len(perm) - 1: - lam = np.random.beta(mixup_alpha, mixup_alpha) - X[idx] = lam * X[idx] + (1 - lam) * X[perm[i + 1]] - y[idx] = lam * y[idx] + (1 - lam) * y[perm[i + 1]] - - return X, y - - -def make_padding(width, cropsize, offset): - left = offset - roi_size = cropsize - left * 2 - if roi_size == 0: - roi_size = cropsize - right = roi_size - (width % roi_size) + left - - return left, right, roi_size - - -def make_training_set(filelist, cropsize, patches, sr, hop_length, n_fft, offset): - len_dataset = patches * len(filelist) - - X_dataset = np.zeros((len_dataset, 2, n_fft // 2 + 1, cropsize), dtype=np.complex64) - y_dataset = np.zeros((len_dataset, 2, n_fft // 2 + 1, cropsize), dtype=np.complex64) - - for i, (X_path, y_path) in enumerate(tqdm(filelist)): - X, y = spec_utils.cache_or_load(X_path, y_path, sr, hop_length, n_fft) - coef = np.max([np.abs(X).max(), np.abs(y).max()]) - X, y = X / coef, y / coef - - l, r, roi_size = make_padding(X.shape[2], cropsize, offset) - X_pad = np.pad(X, ((0, 0), (0, 0), (l, r)), mode="constant") - y_pad = np.pad(y, ((0, 0), (0, 0), (l, r)), mode="constant") - - starts = np.random.randint(0, X_pad.shape[2] - cropsize, patches) - ends = starts + cropsize - for j in range(patches): - idx = i * patches + j - X_dataset[idx] = X_pad[:, :, starts[j] : ends[j]] - y_dataset[idx] = y_pad[:, :, starts[j] : ends[j]] - - return X_dataset, y_dataset - - -def make_validation_set(filelist, cropsize, sr, hop_length, n_fft, offset): - patch_list = [] - patch_dir = "cs{}_sr{}_hl{}_nf{}_of{}".format( - cropsize, sr, hop_length, n_fft, offset - ) - os.makedirs(patch_dir, exist_ok=True) - - for i, (X_path, y_path) in enumerate(tqdm(filelist)): - basename = os.path.splitext(os.path.basename(X_path))[0] - - X, y = spec_utils.cache_or_load(X_path, y_path, sr, hop_length, n_fft) - coef = np.max([np.abs(X).max(), np.abs(y).max()]) - X, y = X / coef, y / coef - - l, r, roi_size = make_padding(X.shape[2], cropsize, offset) - X_pad = np.pad(X, ((0, 0), (0, 0), (l, r)), mode="constant") - y_pad = np.pad(y, ((0, 0), (0, 0), (l, r)), mode="constant") - - len_dataset = int(np.ceil(X.shape[2] / roi_size)) - for j in range(len_dataset): - outpath = os.path.join(patch_dir, "{}_p{}.npz".format(basename, j)) - start = j * roi_size - if not os.path.exists(outpath): - np.savez( - outpath, - X=X_pad[:, :, start : start + cropsize], - 
y=y_pad[:, :, start : start + cropsize], - ) - patch_list.append(outpath) - - return VocalRemoverValidationSet(patch_list) diff --git a/spaces/silencewing/server/youyou/.history/math_20230613231632.html b/spaces/silencewing/server/youyou/.history/math_20230613231632.html deleted file mode 100644 index cf9d1ece547aa66f80ab5e725ba0ec712b1308c8..0000000000000000000000000000000000000000 --- a/spaces/silencewing/server/youyou/.history/math_20230613231632.html +++ /dev/null @@ -1,234 +0,0 @@ - - - - - - - - - - Document - - - - -
          - - - - - - - - - - - - - - - - - - - - - - - - -
          题目
          答案
          正误
          得分
          -
          - - - - diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Free Pool Billiard Games and Improve Your Skills and Strategy.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Free Pool Billiard Games and Improve Your Skills and Strategy.md deleted file mode 100644 index af00c29b3111807ae48343c379148c1db6d13959..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Free Pool Billiard Games and Improve Your Skills and Strategy.md +++ /dev/null @@ -1,120 +0,0 @@ - -

          Download Free Pool Billiard Games: A Guide for Beginners

          -

          Do you love playing pool billiards but don't have access to a real table? Do you want to enjoy the thrill and challenge of this classic game without spending any money? If so, you might be interested in downloading free pool billiard games on your device. In this article, we will explain what pool billiard games are, why you should download them, and how to do it. We will also recommend some of the best free pool billiard games to download and play. Let's get started!

          -

          What are pool billiard games?

          -

          Pool billiard games are video games that simulate the real-life game of pool billiards, also known as cue sports or simply pool. Pool billiards is a game that involves hitting balls with a cue stick on a table with pockets. The objective is to pocket all the balls of your color or type before your opponent does. There are different types and variations of pool billiards, such as 8-ball, 9-ball, snooker, and carom.

          -

          download free pool billiard games


          DOWNLOADhttps://ssurll.com/2uNURl



          -

          The history and rules of pool billiards

          -

          Pool billiards has a long and rich history that dates back to the 15th century. It originated from outdoor games such as croquet and golf, which were played with balls and sticks on grass. Later, these games were moved indoors and played on wooden tables covered with cloth. The pockets were added later to make the game more challenging and fun. Over time, different rules and variations of pool billiards emerged in different countries and regions.

          -

          The basic rules of pool billiards are simple: you have to hit a white ball (called the cue ball) with a cue stick to make it hit other balls (called the object balls) on the table. Depending on the type of game, you have to pocket all the balls of your color or type (such as solids or stripes) before your opponent does. You also have to avoid pocketing the cue ball or hitting the wrong balls. If you do, you will lose your turn or incur a penalty. The game ends when one player pockets all their balls or when a special ball (such as the 8-ball or the 9-ball) is pocketed.

          -

          The types and variations of pool billiards

          -

          There are many types and variations of pool billiards, each with its own rules and characteristics. Some of the most popular ones are:

          -
            -
          • 8-ball: This is the most common and popular type of pool billiards. It is played with 15 object balls: seven solids, seven stripes, and one black 8-ball. The players have to pocket all their balls (either solids or stripes) before pocketing the 8-ball. If a player pockets the 8-ball before clearing their balls or pockets it in the wrong pocket, they lose the game.
          • -
          • 9-ball: This is another popular type of pool billiards. It is played with nine object balls numbered from 1 to 9. The players have to hit the lowest-numbered ball on the table first and pocket any ball after that. The game ends when one player pockets the 9-ball.
          • -
          • Snooker: This is a type of pool billiards that originated in England. It is played with 22 object balls: 15 reds, six colors (yellow, green, brown, blue, pink, and black), and one white cue ball. The players have to alternate between hitting a red ball and a color ball and pocket them in a specific order. The game ends when one player scores more points than the other or when all the balls are pocketed.
          • -
          • Carom: This is a type of pool billiards that originated in France. It is played with three balls: one white, one yellow, and one red. The table has no pockets and the players have to hit both the other balls with their cue ball in one shot. The game ends when one player reaches a predetermined number of points.
          • -
          -

          Why download free pool billiard games?

          -

          Downloading free pool billiard games on your device can be a great way to enjoy this classic game without spending any money or going to a pool hall. Here are some of the benefits of playing pool billiards online:

          -

          The benefits of playing pool billiards online

          -
            -
          • Convenience: You can play pool billiards anytime and anywhere you want, as long as you have a device and an internet connection. You don't have to worry about finding a table, paying for it, or waiting for your turn. You can also pause and resume the game whenever you want.
          • -
          • Variety: You can choose from different types and variations of pool billiards, such as 8-ball, 9-ball, snooker, and carom. You can also customize the table, the balls, the cues, and the difficulty level according to your preference. You can also play against different opponents, such as the computer, your friends, or other online players.
          • -
          • Fun: Playing pool billiards online can be a lot of fun and entertainment. You can challenge yourself, improve your skills, learn new tricks, and compete with others. You can also chat with other players, make friends, and join tournaments. You can also earn coins, rewards, and trophies as you play.
          • -
          -

          The features and options of free pool billiard games

          -

          Free pool billiard games offer many features and options that make them enjoyable and realistic. Some of the common features and options are:

          -
            -
          • Graphics and sound: Free pool billiard games have high-quality graphics and sound that create a realistic and immersive experience. You can see the details of the table, the balls, the cues, and the environment. You can also hear the sound of the balls hitting each other, the cue stick striking the cue ball, and the balls falling into the pockets.
          • -
          • Physics and controls: Free pool billiard games have realistic physics and controls that simulate the real-life game of pool billiards. You can adjust the angle, the power, the spin, and the direction of your shots. You can also see the trajectory of the cue ball and the object balls on the table. You can use your mouse, keyboard, touch screen, or joystick to control your shots.
          • -
          • Modes and levels: Free pool billiard games have different modes and levels that suit different players and preferences. You can play solo or multiplayer mode, online or offline mode, practice or tournament mode, casual or competitive mode. You can also choose from different levels of difficulty, from easy to hard.
          • -
          -

          How to download free pool billiard games?

          -

          Downloading free pool billiard games on your device is easy and fast. Here are some steps and tips for downloading free pool billiard games:

          -

          The steps and tips for downloading free pool billiard games

          -
            -
          1. Choose your device: First, you need to decide which device you want to use to play pool billiards online. You can use a computer, a laptop, a tablet, or a smartphone. Make sure your device has enough storage space and meets the minimum requirements for running the game smoothly.
          2. -
          3. Choose your platform: Next, you need to choose which platform you want to use to download free pool billiard games. You can use a web browser, an app store, or a third-party website. Make sure you use a reliable and secure platform that offers legal and safe downloads.
          4. -
          5. Choose your game: Then, you need to choose which free pool billiard game you want to download. You can browse through different categories, genres, ratings, reviews, and recommendations. You can also search for specific keywords or names of games. Make sure you read the description, the features, the screenshots, and the ratings of the game before downloading it. Make sure you choose a game that suits your taste and preference.
          6. -
          7. Download and install your game: Finally, you need to download and install your chosen free pool billiard game on your device. You can follow the instructions and prompts on the screen to complete the process. Make sure you have a stable and fast internet connection and enough battery power. You may also need to accept some terms and conditions and grant some permissions to run the game.
          8. -
          -

          The best free pool billiard games to download

          -

          There are many free pool billiard games available to download on different platforms and devices. However, some of them stand out for their quality, popularity, and features. Here are some of the best free pool billiard games to download and play:

          -

          download free 8 ball pool games
          -download free 9 ball pool games
          -download free pool billiards pro game
          -download free 3D pool billiard games
          -download free offline pool billiard games
          -download free online pool billiard games
          -download free realistic pool billiard games
          -download free casual pool billiard games
          -download free multiplayer pool billiard games
          -download free single player pool billiard games
          -download free pool billiards challenge game
          -download free pool billiards practice game
          -download free pool billiards arcade game
          -download free pool billiards tournament game
          -download free pool billiards league game
          -download free pool billiards master game
          -download free pool billiards classic game
          -download free pool billiards deluxe game
          -download free pool billiards simulator game
          -download free pool billiards fun game
          -download free pool billiards snooker game
          -download free pool billiards trick shot game
          -download free pool billiards cue game
          -download free pool billiards table game
          -download free pool billiards balls game
          -download free miniclip 8 ball pool game
          -download free terrandroid pool billiards pro game
          -download free SNG 8 ball billiards offline game
          -download free best pool billiard games for android
          -download free top rated pool billiard games for android
          -download free new pool billiard games for android
          -download free latest pool billiard games for android
          -download free popular pool billiard games for android
          -download free awesome pool billiard games for android
          -download free cool pool billiard games for android
          -download free amazing pool billiard games for android
          -download free addictive pool billiard games for android
          -download free easy to play pool billiard games for android
          -download free hard to master pool billiard games for android
          -download free low size pool billiard games for android
          -download free high quality pool billiard games for android
          -download free no wifi needed pool billiard games for android
          -download free no ads included pool billiard games for android
          -download free no in app purchases required pool billiard games for android
          -download free editor's choice pool billiard games for android
          -download free teen rated pool billiard games for android
          -download free everyone rated pool billiard games for android
          -download free data safe and secure pool billiard games for android
          -download free verified ratings and reviews of pool billiard games for android

          -

          8 Ball Pool by Miniclip

          -

          This is one of the most popular and downloaded free pool billiard games in the world. It has over 500 million downloads and millions of active players. It offers a realistic and fun 8-ball pool experience with various modes, levels, tables, cues, and tournaments. You can play online with your friends or other players from around the world. You can also chat, send gifts, and join clubs. You can earn coins, cash, and rewards as you play and use them to upgrade your items and skills.

          -

          You can download 8 Ball Pool by Miniclip for free on Google Play Store, Apple App Store, or Amazon Appstore. You can also play it on your web browser at miniclip.com.

          -

          Pool Billiards Pro by TerranDroid

          -

          This is another popular and highly rated free pool billiard game that has over 100 million downloads. It offers a realistic and smooth pool billiards experience with various modes, levels, tables, cues, and challenges. You can play solo or against the computer or another player on the same device. You can also play online with other players from around the world. You can adjust the difficulty level, the aiming line, the sensitivity, and the sound effects according to your preference.

          -

          You can download Pool Billiards Pro by TerranDroid for free on Google Play Store or Amazon Appstore.

          -

          8 Ball Billiards Offline Pool by SNG Games

          -

          This is a simple and fun free pool billiard game that has over 10 million downloads. It offers a classic 8-ball pool experience with offline mode, meaning you don't need an internet connection to play it. You can play solo or against the computer or another player on the same device. You can also customize the table, the balls, the cues, and the background according to your preference.

          -

          You can download 8 Ball Billiards Offline Pool by SNG Games for free on Google Play Store.

          -

          Conclusion

          -

          Pool billiards is a classic game that can be enjoyed by anyone, anywhere, anytime. Thanks to technology, you can now download free pool billiard games on your device and play them online or offline. You can choose from different types and variations of pool billiards, such as 8-ball, 9-ball, snooker, and carom. You can also enjoy various features and options that make the game realistic and fun. You can also improve your skills, compete with others, and have fun.

          -

If you are looking for some of the best free pool billiard games to download and play, we recommend 8 Ball Pool by Miniclip, Pool Billiards Pro by TerranDroid, and 8 Ball Billiards Offline Pool by SNG Games. These games offer high-quality graphics, sound, and physics, as well as various modes, levels, and challenges. You can also play with your friends or other players online, or play offline without an internet connection. You can also customize your game and earn rewards as you play.

          -

          FAQs

          -

          Here are some of the frequently asked questions about downloading free pool billiard games:

          -
            -
          • Q: Are free pool billiard games safe to download?
          • -
          • A: Yes, free pool billiard games are safe to download as long as you use a reliable and secure platform, such as Google Play Store, Apple App Store, Amazon Appstore, or miniclip.com. You should also check the ratings, reviews, and permissions of the game before downloading it. You should also avoid downloading games from unknown or suspicious sources, as they may contain viruses or malware.
          • -
          • Q: Do free pool billiard games require an internet connection?
          • -
          • A: Some free pool billiard games require an internet connection to play online with other players or to access some features and options. However, some games also offer offline mode, which allows you to play without an internet connection. You can check the description and the features of the game before downloading it to see if it requires an internet connection or not.
          • -
          • Q: How much storage space do free pool billiard games take?
          • -
          • A: The storage space that free pool billiard games take depends on the game and the device you use. Generally, free pool billiard games take around 50 MB to 100 MB of storage space on your device. You can check the size of the game before downloading it to see how much storage space it takes. You can also delete some unused or unwanted files or apps on your device to free up some storage space.
          • -
          • Q: How can I improve my skills in free pool billiard games?
          • -
          • A: The best way to improve your skills in free pool billiard games is to practice regularly and learn from your mistakes. You can also watch some tutorials or tips online or read some guides or books on pool billiards. You can also play with different opponents, such as the computer, your friends, or other online players, and learn from their strategies and techniques.
          • -
          • Q: Can I play free pool billiard games with my friends?
          • -
          • A: Yes, you can play free pool billiard games with your friends online or offline. You can invite your friends to join you in a multiplayer mode online and chat with them while playing. You can also play with your friends on the same device offline by taking turns or using split-screen mode. You can also join clubs or tournaments with your friends and compete with other players.
          • -

          401be4b1e0
          -
          -
          \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download en WhatsApp Now and Discover a New Way of Communication.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download en WhatsApp Now and Discover a New Way of Communication.md deleted file mode 100644 index 531d98125ca2c8f0faa67e6f0edd234ba93419bd..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download en WhatsApp Now and Discover a New Way of Communication.md +++ /dev/null @@ -1,168 +0,0 @@ - -

          Download en WhatsApp: How to Install and Use the Popular Messaging App

          -

          WhatsApp is one of the most popular messaging apps in the world, with over 2 billion users in 180 countries. It allows you to send text and voice messages, make voice and video calls, share images, documents, locations, contacts, and other media with your friends and family for free. You can also create group chats with up to 256 people, post status updates that disappear after 24 hours, and use various stickers, emojis, GIFs, and more to express yourself.

          -

          In this article, we will show you how to download en WhatsApp on your mobile device (Android or iOS), desktop (Mac or Windows), or tablet (iPad or Android). We will also show you how to use the main features of WhatsApp on your device. Finally, we will discuss some of the reasons why you may want to switch to an alternative messaging app and what are some of the best options available.

          -

          download en whatsapp


          Download File ✑ ✑ ✑ https://ssurll.com/2uO0re



          -

          What is WhatsApp and Why Use It?

          -

          WhatsApp is a messaging app that uses your phone's internet connection (Wi-Fi or cellular data) to send and receive messages with other WhatsApp users. Unlike regular SMS text messages that may charge you per message or have character limits, WhatsApp messages are free* and unlimited. (*Data charges may apply. Contact your provider for details.)

          -

          WhatsApp also offers end-to-end encryption for all your messages and calls, which means that only you and the person you are communicating with can read or listen to them. No one else can access them, not even WhatsApp itself. This makes WhatsApp one of the most secure messaging apps available.

          -


          Some of the other features and benefits of WhatsApp are:

          -
            -
          • You can make high-quality voice and video calls with up to eight people at a time. You can also switch between voice and video during a call, or use the picture-in-picture mode to keep watching a video while chatting.
          • -
          • You can create group chats with up to 256 people and name them, mute them, customize notifications, or leave them. You can also use group video calls, group voice messages, or broadcast messages to multiple contacts at once.
          • -
          • You can post status updates that disappear after 24 hours and share your thoughts, feelings, photos, videos, or GIFs with your contacts. You can also view, reply, mute, or delete status updates from your contacts.
          • -
          • You can use various stickers, emojis, GIFs, and more to express yourself in your chats and status updates. You can also create your own stickers or download more from the sticker store.
          • -
          • You can use WhatsApp Web or Desktop to access your WhatsApp account from your computer's browser or app. You can also use WhatsApp on your iPad or Android tablet by scanning a QR code with your phone.
          • -
          • You can backup and restore your chat history and media using Google Drive (Android) or iCloud (iOS). You can also export individual chats or delete all chats.
          • -
          • You can use WhatsApp Business if you are a small business owner and want to communicate with your customers more efficiently. You can create a business profile, catalog, labels, quick replies, automated messages, and more.
          • -
          -

          How to Download WhatsApp on Your Mobile Device

          -

          Download WhatsApp for Android

          -

          If you have an Android device, you can download WhatsApp from the Google Play Store or the official website. Here are the steps to follow:

          -
            -
          1. Open the Google Play Store app on your device and search for WhatsApp Messenger. Alternatively, you can go to https://www.whatsapp.com/android/ on your device's browser and tap on Download Now.
          2. -
          3. Tap on Install and wait for the app to download and install on your device.
          4. -
          5. Open the WhatsApp app and tap on Agree and Continue to accept the terms of service and privacy policy.
          6. -
          7. Enter your phone number and tap on Next. You will receive a verification code via SMS. Enter the code in the app or wait for it to be detected automatically.
          8. -
          9. Create your profile by entering your name and choosing a profile picture. You can also import your contacts from your phone's address book.
          10. -
          11. That's it! You are now ready to use WhatsApp on your Android device.
          12. -

          Download WhatsApp for iOS

          -

          If you have an iOS device, you can download WhatsApp from the App Store or the official website. Here are the steps to follow:

          -
            -
          1. Open the App Store app on your device and search for WhatsApp Messenger. Alternatively, you can go to https://www.whatsapp.com/download/ on your device's browser and tap on Download from the App Store.
          2. -
          3. Tap on Get and wait for the app to download and install on your device.
          4. -
          5. Open the WhatsApp app and tap on Agree and Continue to accept the terms of service and privacy policy.
          6. -
          7. Enter your phone number and tap on Done. You will receive a verification code via SMS. Enter the code in the app or wait for it to be detected automatically.
          8. -
          9. Create your profile by entering your name and choosing a profile picture. You can also import your contacts from your phone's address book.
          10. -
          11. That's it! You are now ready to use WhatsApp on your iOS device.
          12. -
          -

          How to Download WhatsApp on Your Desktop or Tablet

          -

          Download WhatsApp for Mac or Windows

          -

          If you have a Mac or Windows computer, you can download WhatsApp from the official website or the Microsoft Store. Here are the steps to follow:

          -
            -
          1. Go to https://www.whatsapp.com/download/ on your computer's browser and choose either Download for Mac or Download for Windows depending on your operating system. Alternatively, you can go to the Microsoft Store app on your Windows computer and search for WhatsApp Desktop.
          2. -
          3. Download and install the WhatsApp app on your computer.
          4. -
          5. Open the WhatsApp app and scan the QR code with your mobile device. To do this, open WhatsApp on your mobile device, tap on Settings (iOS) or Menu (Android), and then tap on WhatsApp Web/Desktop. Point your phone's camera at the QR code on your computer screen.
          6. -
          7. That's it! You are now ready to use WhatsApp on your Mac or Windows computer.
          8. -

          Download WhatsApp for iPad or Android Tablet

          -

          If you have an iPad or an Android tablet, you can download WhatsApp from the App Store or the Google Play Store. However, you cannot use WhatsApp directly on your tablet, as it requires a phone number to verify your account. Instead, you can use WhatsApp Web on your tablet's browser and link it to your mobile device. Here are the steps to follow:

          -
            -
          1. Open the App Store app or the Google Play Store app on your tablet and search for WhatsApp Messenger. Download and install the app on your tablet.
          2. -
          3. Open the WhatsApp app on your tablet and tap on Agree and Continue to accept the terms of service and privacy policy.
          4. -
          5. You will see a message saying that WhatsApp is not supported on tablets. Tap on OK and then tap on Use WhatsApp Web.
          6. -
          7. You will be redirected to https://web.whatsapp.com/ on your tablet's browser. You will see a QR code on the screen.
          8. -
          9. Open WhatsApp on your mobile device, tap on Settings (iOS) or Menu (Android), and then tap on WhatsApp Web/Desktop. Point your phone's camera at the QR code on your tablet's screen.
          10. -
          11. That's it! You are now ready to use WhatsApp Web on your iPad or Android tablet.
          12. -
          -

          How to Use WhatsApp on Your Device

          -

          How to Send and Receive Messages

          -

          One of the main features of WhatsApp is sending and receiving messages with your contacts. You can send text, voice, image, video, document, location, or contact messages. You can also reply, delete, forward, or star messages. Here is how to do it:

          -

          download whatsapp for android
          -download whatsapp for ios
          -download whatsapp for windows
          -download whatsapp for mac
          -download whatsapp apk
          -download whatsapp web
          -download whatsapp messenger
          -download whatsapp plus
          -download whatsapp business
          -download whatsapp gb
          -download whatsapp video call
          -download whatsapp status
          -download whatsapp backup
          -download whatsapp stickers
          -download whatsapp mod
          -download whatsapp chat
          -download whatsapp beta
          -download whatsapp desktop
          -download whatsapp latest version
          -download whatsapp dark mode
          -download whatsapp qr code
          -download whatsapp update
          -download whatsapp photos
          -download whatsapp audio
          -download whatsapp themes
          -download whatsapp group link
          -download whatsapp dp
          -download whatsapp wallpaper
          -download whatsapp voice message
          -download whatsapp contacts
          -download whatsapp profile picture
          -download whatsapp app store
          -download whatsapp from google play
          -download whatsapp on pc
          -download whatsapp on laptop
          -download whatsapp on tablet
          -download whatsapp on ipad
          -download whatsapp on iphone
          -download whatsapp on samsung
          -download whatsapp on huawei
          -download whatsapp on nokia
          -download whatsapp on blackberry
          -download whatsapp on fire tablet
          -download whatsapp on chromebook
          -download whatsapp on jio phone
          -how to download en español in WhatsApp

          -
            -
          1. To start a chat with a contact or a group, open WhatsApp and tap on the chat icon at the bottom right corner (iOS) or the green chat icon at the bottom right corner (Android). You will see a list of your contacts and groups. Tap on the contact or group you want to chat with, or use the search bar to find them.
          2. -
          3. To send a text message, type your message in the text box at the bottom of the chat screen and tap on the send icon.
          4. -
          5. To send a voice message, tap and hold the microphone icon at the right of the text box and speak your message. Release the icon when you are done. To cancel a voice message, slide your finger to the left.
          6. -
          7. To send an image, video, document, location, or contact message, tap on the plus icon at the left of the text box (iOS) or the attachment icon at the top right corner (Android). You will see a menu of options. Choose the type of message you want to send and follow the instructions.
          8. -
          9. To reply to a message, swipe right on the message you want to reply to (iOS) or tap and hold the message and then tap on the reply icon (Android). Type your reply in the text box and tap on the send icon.
          10. -
          11. To delete a message, tap and hold the message you want to delete and then tap on the trash icon. You can choose to delete the message for yourself or for everyone in the chat. Note that you can only delete messages for everyone within an hour of sending them.
          12. -
13. To forward a message, tap and hold the message you want to forward and then tap on the forward icon. You can choose to forward the message to one or more contacts or groups, or to another app.
14. -
          15. To star a message, tap and hold the message you want to star and then tap on the star icon. You can access your starred messages by tapping on Settings (iOS) or Menu (Android) and then tapping on Starred Messages.
          16. -
          -

          How to Make and Receive Calls

          -

          Another feature of WhatsApp is making and receiving voice and video calls with your contacts. You can make calls with up to eight people at a time. You can also mute, switch cameras, or end calls. Here is how to do it:

          -
            -
          1. To make a voice or video call with a contact or a group, open WhatsApp and tap on the calls icon at the bottom left corner (iOS) or the top right corner (Android). You will see a list of your recent calls. Tap on the contact or group you want to call, or use the search bar to find them. Alternatively, you can open a chat with the contact or group and tap on the voice or video call icon at the top right corner.
          2. -
          3. To receive a voice or video call from a contact or a group, you will see a notification on your device's screen. You can choose to accept, decline, or reply with a message.
          4. -
          5. To mute your microphone during a call, tap on the microphone icon at the bottom left corner of the call screen. To unmute, tap on it again.
          6. -
          7. To switch your camera during a video call, tap on the camera icon at the bottom right corner of the call screen. You can choose to use the front or rear camera of your device.
          8. -
          9. To end a call, tap on the red phone icon at the bottom center of the call screen.
          10. -
          -

          How to Post and View Status Updates

          -

          A third feature of WhatsApp is posting and viewing status updates with your contacts. You can post text, photo, video, or GIF status updates that disappear after 24 hours. You can also view, reply, mute, or delete status updates from your contacts. Here is how to do it:

          -
            -
          1. To post a status update, open WhatsApp and tap on the status icon at the bottom center (iOS) or the leftmost (Android). You will see a list of your contacts' status updates. Tap on My Status at the top of the list.
          2. -
          3. To post a text status update, tap on the pencil icon at the bottom right corner. Type your message and choose a background color and font style. Tap on Send when you are done.
          4. -
          5. To post a photo, video, or GIF status update, tap on the camera icon at the bottom left corner. You can either take a new photo or video, or choose one from your device's gallery. You can also add stickers, emojis, text, or drawings to your photo or video. Tap on Send when you are done.
          6. -
          7. To view a status update from a contact, tap on their name in the status list. You can swipe left or right to see more status updates from them.
          8. -
          9. To reply to a status update from a contact, swipe up on their status update and type your message in the text box. Tap on Send when you are done.
          10. -
          11. To mute a status update from a contact, tap and hold their name in the status list and then tap on Mute. To unmute, tap and hold their name again and then tap on Unmute.
          12. -
          13. To delete your own status update, tap on My Status in the status list and then tap on More (...) next to the status update you want to delete. Tap on Delete and then confirm.
          14. -

          How to Manage Your Account and Settings

          -

          A fourth feature of WhatsApp is managing your account and settings. You can change your profile picture, name, about, phone number, privacy settings, notifications settings, data usage settings, and more. Here is how to do it:

          -
            -
          1. To change your profile picture, name, or about, open WhatsApp and tap on Settings (iOS) or Menu (Android) and then tap on your name at the top of the screen. Tap on the camera icon to change your profile picture, or tap on the pencil icon to change your name or about. Tap on Save when you are done.
          2. -
          3. To change your phone number, open WhatsApp and tap on Settings (iOS) or Menu (Android) and then tap on Account. Tap on Change Number and follow the instructions. You will need to verify your new phone number with a code.
          4. -
          5. To change your privacy settings, open WhatsApp and tap on Settings (iOS) or Menu (Android) and then tap on Account. Tap on Privacy and choose who can see your last seen, profile photo, about, status, and live location. You can also turn on or off read receipts, blocked contacts, fingerprint lock, etc.
          6. -
          7. To change your notifications settings, open WhatsApp and tap on Settings (iOS) or Menu (Android) and then tap on Notifications. You can choose the tone, vibration, popup, light, etc. for your messages and calls. You can also mute notifications for a specific chat or group by opening the chat or group and tapping on the name at the top of the screen. Tap on Mute Notifications and choose how long you want to mute them.
          8. -
          9. To change your data usage settings, open WhatsApp and tap on Settings (iOS) or Menu (Android) and then tap on Data and Storage Usage. You can choose when to use mobile data or Wi-Fi for media downloads, calls, etc. You can also see how much data you have used for each chat or group.
          10. -
          -

          How to Switch to an Alternative Messaging App

          -

          While WhatsApp is a great messaging app with many features and benefits, some people may want to switch to an alternative app due to privacy concerns or other reasons. For example, some people may not like the fact that WhatsApp is owned by Facebook, which has a history of data breaches and controversies. Some people may also prefer other apps that offer more features or customization options.

          -

          If you are one of those people who want to switch to an alternative messaging app, here are some of the best options available:

          - - - - - -
          AppFeaturesBenefits
          Signal- End-to-end encryption for all messages and calls
          - Group chats with up to 1000 people
          - Voice and video calls with up to 8 people
          - Status updates with custom privacy settings
          - Stickers, emojis, GIFs, etc.
          - Disappearing messages
          - Screen lock and biometric authentication
          - One of the most secure and private messaging apps available
          - Open source and non-profit
          - Recommended by privacy advocates like Edward Snowden
          Telegram- End-to-end encryption for secret chats
          - Group chats with up to 200000 people
          - Voice and video calls with up to 30 people
          - Status updates with custom privacy settings
          - Stickers, emojis, GIFs, etc.
          - Disappearing messages
          - Cloud-based storage for unlimited media
          - Bots, channels, games, etc.
          - One of the most feature-rich and versatile messaging apps available
          - Independent and ad-free
          - Supports multiple devices and platforms
          iMessage- End-to-end encryption for all messages and calls
          - Group chats with up to 32 people
          - Voice and video calls with up to 32 people
          - Status updates with custom privacy settings
          - Stickers, emojis, GIFs, etc.
          - Disappearing messages
          - Screen effects, memoji, animoji, etc.
          - One of the most seamless and integrated messaging apps available for iOS users
- Syncs with your Apple ID, iCloud, and other Apple services - Works with Siri, FaceTime, AirDrop, etc.
          -

          Conclusion

          -

          WhatsApp is a popular messaging app that lets you send and receive messages, make and receive calls, post and view status updates, and more with your contacts for free. You can download WhatsApp on your mobile device, desktop, or tablet and use it with ease. However, if you are looking for an alternative app that offers more privacy, features, or customization options, you can try Signal, Telegram, iMessage, or other apps that suit your needs.

          -

          We hope this article has helped you learn how to download en WhatsApp and use it on your device. If you have any questions or feedback, please let us know in the comments below. Thank you for reading!

          -

          FAQs

          -

          Q: How can I update WhatsApp to the latest version?

          -

          A: You can update WhatsApp to the latest version by going to the App Store (iOS) or the Google Play Store (Android) and tapping on Updates. You can also enable automatic updates for WhatsApp in your device's settings.

          -

Q: How can I back up and restore my WhatsApp chats?

          -

A: You can back up and restore your WhatsApp chats by going to Settings (iOS) or Menu (Android) and then tapping on Chats. Tap on Chat Backup and choose when and how to back up your chats. You can also tap on Restore Chat History to restore your chats from a previous backup.

          -

          Q: How can I block or unblock a contact on WhatsApp?

          -

          A: You can block or unblock a contact on WhatsApp by opening a chat with them and tapping on their name at the top of the screen. Tap on Block Contact or Unblock Contact. You can also go to Settings (iOS) or Menu (Android) and then tap on Account. Tap on Privacy and then tap on Blocked Contacts. You can see a list of your blocked contacts and add or remove them.

          -

          Q: How can I delete my WhatsApp account?

          -

          A: You can delete your WhatsApp account by going to Settings (iOS) or Menu (Android) and then tapping on Account. Tap on Delete My Account and follow the instructions. Note that deleting your account will erase your message history, remove you from all groups, delete your backups, and revoke your service agreement with WhatsApp.

          -

          Q: How can I contact WhatsApp support?

          -

          A: You can contact WhatsApp support by going to Settings (iOS) or Menu (Android) and then tapping on Help. Tap on Contact Us and fill out the form with your question or issue. You can also visit https://www.whatsapp.com/contact/ for more information.

          401be4b1e0
          -
          -
          \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Football Live HD 5.0 APK The Best App for Live Football Streaming.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Football Live HD 5.0 APK The Best App for Live Football Streaming.md deleted file mode 100644 index bd7e0b7ed383cb14fed4bde3ba321e3dc9667d0c..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Football Live HD 5.0 APK The Best App for Live Football Streaming.md +++ /dev/null @@ -1,106 +0,0 @@ -
          -

          Football Live HD 5.0 APK: Watch Live Football Matches on Your Android Device

          -

          If you are a football fan, you probably don't want to miss any of your favorite matches. However, sometimes you may not have access to a TV or a cable subscription, or you may be traveling or busy with other things. In that case, how can you watch live football matches on your Android device? The answer is simple: download Football Live HD 5.0 APK.

          -

          Introduction

          -

          In this article, we will tell you everything you need to know about Football Live HD 5.0 APK, a free app that lets you watch live football matches on your Android device. We will explain what it is, why you should download it, what features it offers, and how to download and install it on your device. By the end of this article, you will be able to enjoy live football matches anytime and anywhere with Football Live HD 5.0 APK.

          -

          football live hd 5.0 apk


          DOWNLOADhttps://ssurll.com/2uNS0q



          -

          What is Football Live HD 5.0 APK?

          -

          Football Live HD 5.0 APK is an app that allows you to watch live football matches on your Android device. It is not available on the Google Play Store, so you have to download it from a third-party source. The app is safe and secure, and it does not contain any viruses or malware.

          -

          Why should you download Football Live HD 5.0 APK?

          -

          There are many reasons why you should download Football Live HD 5.0 APK on your Android device. Here are some of them:

          -
            -
          • You can watch live football matches from various channels and leagues, such as Premier League, La Liga, Bundesliga, Serie A, Champions League, Europa League, World Cup, and more.
          • -
          • You can watch high-quality streaming with no buffering or lagging issues.
          • -
          • You can enjoy a user-friendly interface that is easy to navigate and use.
          • -
          • You do not need to pay any subscription fees or register an account to use the app.
          • -
          • You can save your mobile data by adjusting the video quality according to your network speed.
          • -
          -

          Features of Football Live HD 5.0 APK

          -

          Football Live HD 5.0 APK offers many features that make it one of the best apps for watching live football matches on your Android device. Here are some of the main features of the app:

          -

          High-quality streaming

          -

          The app provides high-quality streaming of live football matches with no buffering or lagging issues. You can watch the matches in HD quality or lower the quality if you have a slow network connection. You can also choose between different languages and subtitles for the commentary.

          -

          Multiple channels and leagues

          -

          The app covers multiple channels and leagues from around the world, so you can watch any match you want. You can watch matches from Premier League, La Liga, Bundesliga, Serie A, Champions League, Europa League, World Cup, and more. You can also browse through the schedule and fixtures of upcoming matches and set reminders for them.

          -

          User-friendly interface

          -

The app has a user-friendly interface that is easy to navigate and use. You can access all the features and options from the main menu or the sidebar. You can also search for your favorite teams or players using the search bar. The app also supports dark mode and landscape mode for a better viewing experience.

          -

          No subscription or registration required

          -

The app does not require any subscription fees or registration. You can watch live football matches for free without any hassle. You just need to download and install the app on your device, and you are good to go.

          -

          How to download and install Football Live HD 5.0 APK

          -

          If you want to download and install Football Live HD 5.0 APK on your Android device, you need to follow these simple steps:

          -

          football live tv hd android app free apk download
          -football live hd mod apk ads removed new 2.0
          -football live hd apk latest version for android
          -football live hd streaming app apk free download
          -football live hd 5.0 apk watch soccer matches online
          -football live hd 5.0 apk no subscription required
          -football live hd 5.0 apk best app for football fans
          -football live hd 5.0 apk enjoy live scores and fixtures
          -football live hd 5.0 apk compatible with android tv
          -football live hd 5.0 apk support world top soccer leagues
          -football live hd 5.0 apk update version with bug fixes
          -football live hd 5.0 apk easy to use and fast loading
          -football live hd 5.0 apk high quality video and audio
          -football live hd 5.0 apk download from official website
          -football live hd 5.0 apk reviews and ratings by users
          -football live hd 5.0 apk how to install and use guide
          -football live hd 5.0 apk features and benefits overview
          -football live hd 5.0 apk alternatives and similar apps
          -football live hd 5.0 apk tips and tricks to improve experience
          -football live hd 5.0 apk frequently asked questions and answers
          -football live hd 5.0 apk contact and support information
          -football live hd 5.0 apk privacy policy and terms of service
          -football live hd 5.0 apk download link and qr code scan
          -football live hd 5.0 apk share with friends and family
          -football live hd 5.0 apk feedback and suggestions welcome
          -football live hd 5.0 apk watch premier league matches live
          -football live hd 5.0 apk watch champions league matches live
          -football live hd 5.0 apk watch la liga matches live
          -football live hd 5.0 apk watch serie a matches live
          -football live hd 5.0 apk watch bundesliga matches live
          -football live hd 5.0 apk watch ligue 1 matches live
          -football live hd 5.0 apk watch mls matches live
          -football live hd 5.0 apk watch copa america matches live
          -football live hd 5.0 apk watch euro cup matches live
          -football live hd 5.0 apk watch world cup matches live
          -football live hd 5.0 apk watch olympic games matches live
          -football live hd 5.0 apk watch african cup of nations matches live
          -football live hd 5.0 apk watch asian cup matches live
          -football live hd 5.0 apk watch concacaf gold cup matches live
          -football live hd 5.0 apk watch international friendly matches live
          -football live hd 5.0 apk watch women's world cup matches live
          -football live hd 5.0 apk watch women's olympic games matches live
          -football live hd 5.0 apk watch u20 world cup matches live
          -football live hd 5.0 apk watch u17 world cup matches live
          -football live hd 5.0 apk watch fifa club world cup matches live
          -football live hd 5.0 apk watch uefa super cup matches live
          -football live hd 5.0 apk watch copa libertadores matches live
          -football live hd 5.0 apk watch copa sudamericana matches live
          -football live hd 5.0 apk watch afc champions league matches live

          -

          Step 1: Enable unknown sources on your device

          -

          Since the app is not available on the Google Play Store, you need to enable unknown sources on your device to allow the installation of third-party apps. To do this, go to Settings > Security > Unknown Sources and toggle it on.

          -

          Step 2: Download the APK file from a trusted source

          -

          Next, you need to download the APK file of the app from a trusted source. You can use the link below to download the latest version of Football Live HD 5.0 APK:

          -

          Football Live HD 5.0 APK Download

          -

          Make sure you have enough storage space on your device before downloading the file.

          -

          Step 3: Install the APK file on your device

          -

          Once you have downloaded the APK file, locate it in your file manager and tap on it to start the installation process. You may see a warning message asking for your permission to install the app. Tap on Install and wait for the installation to complete.
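
If you prefer to install from a computer instead of tapping through the file manager, the same step can be done over USB with Android's adb tool. The short Python sketch below is only an illustration: it assumes adb (from the Android platform tools) is on your PATH, USB debugging is enabled on the phone, and the file name is a placeholder for whatever you actually downloaded.

```python
# Illustrative sketch only: install a downloaded APK over USB with adb.
# Assumes adb is installed, the phone is connected with USB debugging on,
# and "football-live-hd-5.0.apk" is a placeholder for your downloaded file.
import subprocess

def install_apk(apk_path: str) -> None:
    # "adb install -r" installs the package, replacing any existing version.
    subprocess.run(["adb", "install", "-r", apk_path], check=True)

if __name__ == "__main__":
    install_apk("football-live-hd-5.0.apk")
```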

          -

          Step 4: Launch the app and enjoy live football matches

          -

          Finally, you can launch the app from your app drawer or home screen and start watching live football matches on your Android device. You can choose from various channels and leagues, adjust the video quality, and enjoy a user-friendly interface.

          -

          Conclusion

          -

          Football Live HD 5.0 APK is a great app for football fans who want to watch live football matches on their Android devices. It offers high-quality streaming, multiple channels and leagues, user-friendly interface, and no subscription or registration required. It is easy to download and install, and it is safe and secure. If you are looking for a free and reliable app to watch live football matches, you should definitely try Football Live HD 5.0 APK.

          -

          FAQs

          -

          Here are some of the frequently asked questions about Football Live HD 5.0 APK:

          -
            -
          1. Is Football Live HD 5.0 APK legal?
          2. -

            Yes, Football Live HD 5.0 APK is legal as long as you use it for personal and non-commercial purposes. However, some of the content may be subject to copyright laws in some countries, so you should be careful about the legal implications of using the app.

            -
          3. Is Football Live HD 5.0 APK safe and secure?
          4. -

            Yes, Football Live HD 5.0 APK is safe and secure, and it does not contain any viruses or malware. However, you should always download the app from a trusted source and scan it with an antivirus app before installing it on your device.

            -
          5. Does Football Live HD 5.0 APK work on all Android devices?
          6. -

            Yes, Football Live HD 5.0 APK works on all Android devices that run on Android 4.4 or higher. However, some devices may have compatibility issues or performance problems depending on the hardware and software specifications.

            -
          7. Does Football Live HD 5.0 APK require an internet connection?
          8. -

            Yes, Football Live HD 5.0 APK requires an internet connection to stream live football matches. You can use Wi-Fi or mobile data to access the app, but you should have a stable and fast network speed to avoid buffering or lagging issues.

            -
          9. Can I watch live football matches offline with Football Live HD 5.0 APK?
          10. -

            No, Football Live HD 5.0 APK does not support offline viewing of live football matches. You can only watch the matches when they are live and online. However, you can save your mobile data by adjusting the video quality according to your network speed.

            -

          401be4b1e0
          -
          -
          \ No newline at end of file diff --git a/spaces/sino72/Passenger_Reconization/deep_sort/deep_sort/deep/feature_extractor.py b/spaces/sino72/Passenger_Reconization/deep_sort/deep_sort/deep/feature_extractor.py deleted file mode 100644 index d524080cec6eaf0fbe6aa4ba280157303a9dd88f..0000000000000000000000000000000000000000 --- a/spaces/sino72/Passenger_Reconization/deep_sort/deep_sort/deep/feature_extractor.py +++ /dev/null @@ -1,65 +0,0 @@ -import torch -import torchvision.transforms as transforms -import numpy as np -import cv2 -import logging - -from .model import Net - -''' -特征提取器: -提取对应bounding box中的特征, 得到一个固定维度的embedding作为该bounding box的代表, -供计算相似度时使用。 - -模型训练是按照传统ReID的方法进行,使用Extractor类的时候输入为一个list的图片,得到图片对应的特征。 -''' - -class Extractor(object): - def __init__(self, model_path, use_cuda=True): - self.net = Net(reid=True) - self.device = "cuda" if torch.cuda.is_available() and use_cuda else "cpu" - state_dict = torch.load(model_path, map_location=lambda storage, loc: storage)['net_dict'] - self.net.load_state_dict(state_dict) - logger = logging.getLogger("root.tracker") - logger.info("Loading weights from {}... Done!".format(model_path)) - self.net.to(self.device) - self.size = (64, 128) - self.norm = transforms.Compose([ - # RGB图片数据范围是[0-255],需要先经过ToTensor除以255归一化到[0,1]之后, - # 再通过Normalize计算(x - mean)/std后,将数据归一化到[-1,1]。 - transforms.ToTensor(), - # mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225]是从imagenet训练集中算出来的 - transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]), - ]) - - def _preprocess(self, im_crops): - """ - TODO: - 1. to float with scale from 0 to 1 - 2. resize to (64, 128) as Market1501 dataset did - 3. concatenate to a numpy array - 3. to torch Tensor - 4. normalize - """ - def _resize(im, size): - return cv2.resize(im.astype(np.float32)/255., size) - - im_batch = torch.cat([self.norm(_resize(im, self.size)).unsqueeze(0) for im in im_crops], dim=0).float() - return im_batch - -# __call__()是一个非常特殊的实例方法。该方法的功能类似于在类中重载 () 运算符, -# 使得类实例对象可以像调用普通函数那样,以“对象名()”的形式使用。 - def __call__(self, im_crops): - im_batch = self._preprocess(im_crops) - with torch.no_grad(): - im_batch = im_batch.to(self.device) - features = self.net(im_batch) - return features.cpu().numpy() - - -if __name__ == '__main__': - img = cv2.imread("demo.jpg")[:,:,(2,1,0)] - extr = Extractor("checkpoint/ckpt.t7") - feature = extr(img) - print(feature.shape) - diff --git a/spaces/smfry010/text-to-image/app.py b/spaces/smfry010/text-to-image/app.py deleted file mode 100644 index e12036adb228b2a0d8bbeb268e91e13f139f2015..0000000000000000000000000000000000000000 --- a/spaces/smfry010/text-to-image/app.py +++ /dev/null @@ -1,19 +0,0 @@ -import streamlit as st -from transformers import pipeline - -# SENTIMENT ANALYSIS -# sentimizer = pipeline('sentiment-analysis') -# text = st.text_area('enter some text:') - -# if text: -# out = sentimizer(text) -# st.json(out) - -# TEXT SUMMARIZATION -summarizer = pipeline("summarization", model="stevhliu/my_awesome_billsum_model") -text = st.text_area('Enter a long body of text to summarize:') - -if text: - # st.json(summarizer(text)) - out = summarizer(text) - st.write(out[0]['summary_text']) diff --git a/spaces/society-ethics/DiffusionFaceClustering/README.md b/spaces/society-ethics/DiffusionFaceClustering/README.md deleted file mode 100644 index a132dc7b97281c3e3a0328a7115e88fa0002185b..0000000000000000000000000000000000000000 --- a/spaces/society-ethics/DiffusionFaceClustering/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: DiffusionFaceClustering 
-emoji: 📈 -colorFrom: indigo -colorTo: purple -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/society-ethics/model-card-regulatory-check/app.py b/spaces/society-ethics/model-card-regulatory-check/app.py deleted file mode 100644 index c5ea3d9e593ec7d6a42232db47909948595fec4b..0000000000000000000000000000000000000000 --- a/spaces/society-ethics/model-card-regulatory-check/app.py +++ /dev/null @@ -1,199 +0,0 @@ -import os -import gradio as gr -from huggingface_hub import ModelCard, HfApi - -from compliance_checks import ( - ComplianceSuite, - ComplianceCheck, - IntendedPurposeCheck, - GeneralLimitationsCheck, - ComputationalRequirementsCheck, - EvaluationCheck, -) - -hf_writer = gr.HuggingFaceDatasetSaver( - os.getenv('HUGGING_FACE_HUB_TOKEN'), - organization="society-ethics", - dataset_name="model-card-regulatory-check-flags", - private=True -) - -hf_api = HfApi() - -checks = [ - IntendedPurposeCheck(), - GeneralLimitationsCheck(), - ComputationalRequirementsCheck(), - EvaluationCheck(), -] -suite = ComplianceSuite(checks=checks) - - -def status_emoji(status: bool): - return "✅" if status else "🛑" - - -def search_for_models(query: str): - if query.strip() == "": - return examples, ",".join([e[0] for e in examples]) - models = [m.id for m in list(iter(hf_api.list_models(search=query, limit=10)))] - model_samples = [[m] for m in models] - models_text = ",".join(models) - return model_samples, models_text - - -def load_model_card(index, options_string: str): - options = options_string.split(",") - model_id = options[index] - card = ModelCard.load(repo_id_or_path=model_id).content - return card - - -def run_compliance_check(model_card: str): - results = suite.run(model_card) - - return [ - *[gr.Accordion.update(label=f"{r.name} - {status_emoji(r.status)}", open=not r.status) for r in results], - *[gr.Markdown.update(value=r.to_string()) for r in results], - ] - - -def fetch_and_run_compliance_check(model_id: str): - model_card = ModelCard.load(repo_id_or_path=model_id).content - return run_compliance_check(model_card=model_card) - - -def compliance_result(compliance_check: ComplianceCheck): - accordion = gr.Accordion(label=f"{compliance_check.name}", open=False) - description = gr.Markdown("Run an evaluation to see results...") - - return accordion, description - - -def read_file(file_obj): - with open(file_obj.name) as f: - model_card = f.read() - return model_card - - -model_card_box = gr.TextArea(label="Model Card") - -# Have to destructure everything since I need to delay rendering. -col = gr.Column() -tab = gr.Tab(label="Results") -col2 = gr.Column() -compliance_results = [compliance_result(c) for c in suite.checks] -compliance_accordions = [c[0] for c in compliance_results] -compliance_descriptions = [c[1] for c in compliance_results] - -examples = [ - ["bigscience/bloom"], - ["roberta-base"], - ["openai/clip-vit-base-patch32"], - ["distilbert-base-cased-distilled-squad"], -] - -with gr.Blocks(css="""\ -#file-upload .boundedheight { - max-height: 100px; -} - -code { - overflow: scroll; -} -""") as demo: - gr.Markdown("""\ - # RegCheck AI - This Space matches information in [model cards](https://huggingface.co/docs/hub/model-cards) to proposed \ - regulatory compliance descriptions in the \ - [EU AI Act](https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206). 
- This is a **prototype** to explore the feasibility of automatic checks for compliance, and is limited to specific \ - provisions of Article 13 of the Act, “Transparency and provision of information to users”. \ - **Please note: this is research work and NOT a commercial or legal product.** - """) - with gr.Accordion(label="Instructions", open=True): - gr.Markdown(""" - To check a model card, first load it by doing any one of the following: - - If the model is on the Hugging Face Hub, search for a model and select it from the results. - - If you have the model card on your computer as a Markdown file, select the "Upload your own card" tab and \ - click "Upload a Markdown file". - - Paste your model card's text directly into the "Model Card" text area. - """) - with gr.Accordion(label="Limitations", open=False): - gr.Markdown(""" - This tool should be treated as a Proof Of Concept, and is not designed for production-level use. - - This is currently designed to only work on **English** model cards. - - This tool relies on a very strict model card schema, which may be different from your model card. - - Only material in the main card body is considered – any data in the YAML frontmatter is disregarded. - - If your model card contains any HTML fragments, this tool might not be able to read your model card. - """) - - with gr.Row(): - with gr.Column(): - with gr.Tab(label="Load a card from the 🤗 Hugging Face Hub"): - with gr.Row(): - model_id_search = gr.Text(label="Model ID") - - search_results_text = gr.Text(visible=False, value=",".join([e[0] for e in examples])) - search_results_index = gr.Dataset( - label="Search Results", - components=[model_id_search], - samples=examples, - type="index", - ) - - model_id_search.change( - fn=search_for_models, - inputs=[model_id_search], - outputs=[search_results_index, search_results_text] - ) - - with gr.Tab(label="Upload your own card"): - file = gr.UploadButton(label="Upload a Markdown file", elem_id="file-upload") - # TODO: Bug – uploading more than once doesn't trigger the function? Gradio bug? - file.upload( - fn=read_file, - inputs=[file], - outputs=[model_card_box] - ) - - model_card_box.render() - - with col.render(): - with tab.render(): - with col2.render(): - for a, d in compliance_results: - with a.render(): - d.render() - - flag = gr.Button(value="Disagree with the result? Click here to flag it! 🚩") - flag_message = gr.Text( - show_label=False, - visible=False, - value="Thank you for flagging this! 
We'll use your report to improve the tool 🤗" - ) - - search_results_index.click( - fn=load_model_card, - inputs=[search_results_index, search_results_text], - outputs=[model_card_box] - ) - - model_card_box.change( - fn=run_compliance_check, - inputs=[model_card_box], - outputs=[*compliance_accordions, *compliance_descriptions] - ) - - flag.click( - fn=lambda x: hf_writer.flag(flag_data=[x]) and gr.Text.update(visible=True), - inputs=[model_card_box], - outputs=[flag_message] - ) - -hf_writer.setup( - components=[model_card_box], - flagging_dir="flagged" -) - -demo.launch() diff --git a/spaces/spencer/socm/embeddings.py b/spaces/spencer/socm/embeddings.py deleted file mode 100644 index 150307ba13b83a0d8c7657d7293b65dee4d70b0d..0000000000000000000000000000000000000000 --- a/spaces/spencer/socm/embeddings.py +++ /dev/null @@ -1,143 +0,0 @@ -import logging -import os - -import faiss -import torch - -logger = logging.getLogger(__name__) -logging.basicConfig(level=logging.INFO) - - -class FaissIndex: - def __init__( - self, - embedding_size=None, - faiss_index_location=None, - indexer=faiss.IndexFlatIP, - ): - - if embedding_size or faiss_index_location: - self.embedding_size = embedding_size - else: - raise ValueError("Must provide embedding_size") - - self.faiss_index_location = faiss_index_location - if faiss_index_location and os.path.exists(faiss_index_location): - self.index = faiss.read_index(faiss_index_location) - logger.info(f"Setting embedding size ({self.index.d}) to match saved index") - self.embedding_size = self.index.d - if os.path.exists(faiss_index_location + ".ids"): - with open(faiss_index_location + ".ids") as f: - self.id_list = f.read().split("\n") - elif self.index.ntotal > 0: - raise ValueError("Index file exists but ids file does not") - else: - self.id_list = [] - - else: - os.makedirs(os.path.dirname(faiss_index_location), exist_ok=True) - self.index = None - self.indexer = indexer - self.id_list = [] - - def faiss_init(self): - - index = self.indexer(self.embedding_size) - if self.faiss_index_location: - faiss.write_index(index, self.faiss_index_location) - self.index = index - - def add(self, inputs, ids, normalize=True): - - if not self.index: - self.faiss_init() - - if normalize: - faiss.normalize_L2(inputs) - self.index.add(inputs) - self.id_list.extend(ids) - - faiss.write_index(self.index, self.faiss_index_location) - with open(self.faiss_index_location + ".ids", "a") as f: - f.write("\n".join(ids) + "\n") - - def search(self, embedding, k=10, normalize=True): - - if len(embedding.shape): - embedding = embedding.reshape(1, -1) - if normalize: - faiss.normalize_L2(embedding) - D, I = self.index.search(embedding, k) - labels = [self.id_list[i] for i in I.squeeze()] - return D, I, labels - - def reset(self): - - if self.index: - self.index.reset() - self.id_list = [] - try: - os.remove(self.faiss_index_location) - os.remove(self.faiss_index_location + ".ids") - except FileNotFoundError: - pass - - def __len__(self): - if self.index: - return self.index.ntotal - return 0 - - -class VectorSearch: - def __init__(self): - self.places = self.load("places") - self.objects = self.load("objects") - - def load(self, index_name): - return FaissIndex( - faiss_index_location=f"faiss_indices/{index_name}.index", - ) - - def top_places(self, query_vec, k=5): - if isinstance(query_vec, torch.Tensor): - query_vec = query_vec.detach().numpy() - *_, results = self.places.search(query_vec, k=k) - return results - - def top_objects(self, query_vec, k=5): - if isinstance(query_vec, 
torch.Tensor): - query_vec = query_vec.detach().numpy() - *_, results = self.objects.search(query_vec, k=k) - return results - - def prompt_activities(self, query_vec, k=5, one_shot=False): - places = self.top_places(query_vec, k=k) - objects = self.top_objects(query_vec, k=k) - place_str = f"Places: {', '.join(places)}. " - object_str = f"Objects: {', '.join(objects)}. " - - act_str = "I might be doing these 3 activities: " - - zs = place_str + object_str + act_str - - example = ( - "Places: kitchen. Objects: coffee maker. " - f"{act_str}: eating, making breakfast, grinding coffee.\n " - ) - fs = example + place_str + object_str + act_str - if one_shot: - return (zs, fs) - - return zs, places, objects - - def prompt_summary(self, state_history: list, k=5): - - rec_strings = ["Event log:"] - for rec in state_history: - rec_strings.append( - f"Places: {', '.join(rec.places)}. " - f"Objects: {', '.join(rec.objects)}. " - f"Activities: {', '.join(rec.activities)} " - ) - question = "How would you summarize these events in a few full sentences? " - return "\n".join(rec_strings) + "\n" + question diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/lightconv_layer/setup.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/lightconv_layer/setup.py deleted file mode 100644 index 052635be79b466d0ad56cf5cf607bd10c2297ecf..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/lightconv_layer/setup.py +++ /dev/null @@ -1,23 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from setuptools import setup -from torch.utils.cpp_extension import BuildExtension, CUDAExtension - - -setup( - name="lightconv_layer", - ext_modules=[ - CUDAExtension( - "lightconv_cuda", - [ - "lightconv_cuda.cpp", - "lightconv_cuda_kernel.cu", - ], - ), - ], - cmdclass={"build_ext": BuildExtension}, -) diff --git a/spaces/stomexserde/gpt4-ui/Examples/Billiards City Apk Mod Unlimited VERIFIED.md b/spaces/stomexserde/gpt4-ui/Examples/Billiards City Apk Mod Unlimited VERIFIED.md deleted file mode 100644 index 76dd7f19935f208dc7a2f27420ad35b7dd146cdb..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Billiards City Apk Mod Unlimited VERIFIED.md +++ /dev/null @@ -1,26 +0,0 @@ -
          -

          How to Download and Install Billiards City Apk Mod Unlimited for Android

          -

          Billiards City is a popular game that lets you play various types of billiards games online with realistic graphics and smooth gameplay. You can also chat with other players while you play and unlock new tables and clubs. But what if you want to enjoy the game without any limitations? That's where Billiards City Apk Mod Unlimited comes in.

          -

          Billiards City Apk Mod Unlimited


          Download Zip ————— https://urlgoal.com/2uI9jj



          -

          Billiards City Apk Mod Unlimited is a modified version of the original game that gives you unlimited money, gems, diamonds, and characters. You can also access menu mod features such as high damage, one hit, god mode, and more. With Billiards City Apk Mod Unlimited, you can play any level you want, buy any item you need, and have more fun with billiards.

          -

          If you are interested in downloading and installing Billiards City Apk Mod Unlimited for your Android device, here are the steps you need to follow:

          -
            -
          1. First, you need to uninstall the original version or any other mod version of Billiards City from your device.
          2. -
3. Next, you need to enable installation from unknown sources on your device. To do this, go to Settings > Security and check the box for Unknown Sources.
          4. -
          5. Then, you need to download the Billiards City Apk Mod Unlimited file from a reliable source. You can use the link below to download it directly.
          6. -
          7. After downloading the file, go to your file manager and locate the downloaded file. Tap on it and follow the instructions to install it on your device.
          8. -
          9. Finally, you can open the game and enjoy Billiards City Apk Mod Unlimited with all its features.
          10. -
          -

          That's it! You have successfully downloaded and installed Billiards City Apk Mod Unlimited for your Android device. Now you can play billiards online with unlimited resources and mod features. Have fun!

          -

          Note: This article is for educational purposes only. We do not endorse or promote any illegal or unethical activities. Please use Billiards City Apk Mod Unlimited at your own risk and discretion.
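
One practical precaution when sideloading any APK is to compare the downloaded file's checksum with the value published by the source, when one is available. The Python sketch below is only an illustration; the file name is a placeholder, and the expected hash would come from the download page.

```python
# Illustrative sketch only: compute a file's SHA-256 so it can be compared
# with the checksum published by the download source (if one is provided).
# "billiards-city-mod.apk" is a placeholder file name.
import hashlib

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    print(sha256_of("billiards-city-mod.apk"))
```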

          - -

          If you want to know more about Billiards City and its features, here are some of the highlights of the game:

          -
            -
          • Different tables and clubs: You can choose from various tables with different shapes, colors, and number of holes. You can also unlock more advanced clubs that can help you hit better shots and improve your skills.
          • -
          • Awesome gaming experience: Billiards City uses realistic sound effects and responsive controls to give you a near-real billiards experience. You can feel the impact of the balls and the friction of the table as you play.
          • -
          • Various challenge levels: Besides the normal levels, Billiards City also offers interesting challenge levels that test your abilities and creativity. You may need to use different strategies and techniques to pass these levels.
          • -
• Offline play: Billiards City offers over 1000 levels that you can play offline without an internet connection. You can enjoy your billiards adventure anytime and anywhere.
          • -
          -

          Billiards City is a game that will keep you entertained and engaged for hours. Whether you are a beginner or a pro, you will find something to suit your taste and level in this game. Download Billiards City Apk Mod Unlimited now and have fun!

          7b8c122e87
          -
          -
          \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Charas 720p In Hindi Dubbed Movie.md b/spaces/stomexserde/gpt4-ui/Examples/Charas 720p In Hindi Dubbed Movie.md deleted file mode 100644 index f8f768dbea02867008d105e97b34ba61943355dc..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Charas 720p In Hindi Dubbed Movie.md +++ /dev/null @@ -1,13 +0,0 @@ -
          -

          Charas: A Thrilling Action Movie in Hindi Dubbed

          -

If you are looking for a thrilling action movie to watch, you might want to check out Charas, a 2004 Bollywood film that is now available in a Hindi-dubbed version. Charas is the story of two friends, Dev Anand (Jimmy Sheirgill) and Gaurav (Uday Chopra), who get involved in a drug trafficking case that puts their lives in danger. The movie features stunning locations, high-octane action sequences, and a gripping plot that will keep you on the edge of your seat.

          -

          Charas was directed by Tigmanshu Dhulia, who also wrote the screenplay and dialogues. The movie also stars Irrfan Khan, Namrata Shirodkar, Hrishitaa Bhatt, and Varun Badola in supporting roles. The music was composed by Raju Singh, while the cinematography was done by Setu. The movie was shot in various locations in India and Europe, including Himachal Pradesh, London, Amsterdam, and Prague.

          -

          Charas 720p in hindi dubbed movie


          DOWNLOAD ————— https://urlgoal.com/2uIaF1



          -

Charas was released on May 7, 2004, and received mixed reviews from critics and audiences. The movie was praised for its performances, especially Irrfan Khan's turn as a ruthless drug lord, and for its realistic portrayal of the drug trade and its consequences. However, some critics felt that the movie was too long, had a weak climax, and lacked originality.

          -

Despite the mixed response, Charas has gained a cult following over the years, especially among fans of action movies. The movie is now available in a Hindi-dubbed version in 720p quality on various online platforms. You can watch Charas online or download it for offline viewing. If you are looking for a thrilling action movie to watch this weekend, you should definitely give Charas a try.

          - -

          The plot of Charas revolves around the illegal cultivation and trade of charas (marijuana) in the Himalayan region of India. A young British botany student, Sam Higgins (Adam Bedi), goes missing after exploring the jungles in search of ayurvedic herbs. His girlfriend, Piya Goswami (Namrata Shirodkar), a journalist, comes to India to find him and uncovers the dark secrets of the drug mafia. She teams up with Dev Anand (Jimmy Sheirgill), a local guide and friend of Sam, who also has a personal vendetta against the drug lord, DCP Randhir Singh Rathore (Irrfan Khan), who killed his father.

          -

          Meanwhile, ACP Ashraf A. Khan (Uday Chopra), a young and honest cop, is assigned to bust the drug racket and arrest Rathore. He faces many obstacles and challenges from his corrupt seniors and colleagues, who are on Rathore's payroll. He also develops a romantic relationship with Naina Thakur (Hrishitaa Bhatt), a dancer and singer who works for Rathore. Ashraf soon realizes that Rathore is not only a powerful and ruthless criminal, but also a mastermind who has connections with international terrorists and arms dealers.

          -

          The movie takes many twists and turns as Dev, Piya, and Ashraf try to expose Rathore's crimes and bring him to justice. They also discover the truth about Sam's fate and his involvement in the drug trade. They face many dangers and betrayals along the way, but also find allies and friends who help them in their mission. The movie culminates in a climactic showdown between Rathore and his enemies, where the fate of millions of lives is at stake.

          e93f5a0c3f
          -
          -
          \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Hackers Breach FSB Contractor Expose Tor Deanonymization Project And More.md b/spaces/stomexserde/gpt4-ui/Examples/Hackers Breach FSB Contractor Expose Tor Deanonymization Project And More.md deleted file mode 100644 index 933f5ce37744f1f8d09267a65717b4920250b663..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Hackers Breach FSB Contractor Expose Tor Deanonymization Project And More.md +++ /dev/null @@ -1,18 +0,0 @@ - -

          Hackers breach FSB contractor, expose Tor deanonymization project and more

          -

          A group of hackers known as 0v1ru$ has claimed responsibility for breaching SyTech, a contractor for the Russian Federal Security Service (FSB), and leaking sensitive data on several secret projects. The projects include plans to deanonymize Tor users, monitor social media platforms, and map the Russian segment of the internet.

          -

          Hackers breach FSB contractor, expose Tor deanonymization project and more


          Download File ->>> https://urlgoal.com/2uI98s



          -

          The hackers gained access to SyTech's servers on July 13, 2019, and defaced the company's website with a smiling Yoba face, a meme popular among Russian internet users. They also posted screenshots of the stolen data on Twitter and shared it with Digital Revolution, another hacking group that has previously targeted FSB contractors.

          -

Digital Revolution then published some of the leaked documents on its Twitter account and shared them with various media outlets. The documents reveal that SyTech was involved in at least 20 projects for the FSB's unit 71330, also known as the Scientific Research Institute for System Analysis (NIISI).

          -

One of the most notable projects is called Nautilus-S, which aims to deanonymize Tor users by creating rogue nodes on the network and using traffic analysis techniques. Tor is software that enables anonymous communication by routing traffic through a series of encrypted relays. It is widely used by activists, journalists, dissidents, and criminals to evade censorship and surveillance.

          -

          -

          Another project is called Reward, which involves creating a system to monitor and analyze social media platforms such as Facebook, LinkedIn, and Twitter. The project's goal is to identify the sources of information that influence public opinion and shape social behavior.

          -

          A third project is called Hope, which seeks to map the Russian segment of the internet, known as RuNet, and identify its vulnerabilities and choke points. The project's purpose is to ensure the continuity and security of RuNet in case of external threats or isolation.

          -

          The leaked data also includes information on other projects related to network security, cyberattacks, encryption, and quantum computing. The extent of the damage caused by the breach is not clear yet, but it is likely to embarrass the FSB and expose its covert activities.

          -

          SyTech has not commented on the incident yet, but its website has been taken offline. The FSB has also remained silent on the matter, as it usually does when faced with cyberattacks. However, some experts believe that the hackers may face retaliation from the Russian authorities.

          - -

          The breach of SyTech is not the first time that hackers have targeted FSB contractors and exposed their secrets. In 2018, Digital Revolution hacked Quantum, another FSB contractor, and leaked documents on a project called Fronton, which involved creating a botnet of compromised Internet of Things (IoT) devices to launch cyberattacks.

          -

          In 2016, a group of hackers known as The Shadow Brokers stole and published a trove of hacking tools and exploits from the Equation Group, a cyberespionage unit linked to the US National Security Agency (NSA). Some of the leaked tools were later used by other hackers in devastating attacks such as WannaCry and NotPetya.

          -

          These incidents highlight the risks and challenges of outsourcing sensitive and classified work to private contractors, who may not have the same level of security and oversight as government agencies. They also show the growing sophistication and boldness of hackers who are willing to take on powerful adversaries and expose their secrets to the world.

          7b8c122e87
          -
          -
          \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Hindi Movie Free Download Kafan.md b/spaces/stomexserde/gpt4-ui/Examples/Hindi Movie Free Download Kafan.md deleted file mode 100644 index add75e84bbb453e55625663e5e4c809fd7242f48..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Hindi Movie Free Download Kafan.md +++ /dev/null @@ -1,26 +0,0 @@ - -

          How to Watch Kafan (1990), a Classic Hindi Horror Movie, for Free Online

          -

          Kafan is a 1990 Hindi horror movie directed by Dhirendra Bohra and starring Javed Khan, Jamuna, Viju Khote, Mac Mohan, Johny Lever and Raza Murad. The movie revolves around a young girl who gets visions of murders and becomes possessed by a vengeful spirit. The movie is known for its spooky atmosphere, suspenseful plot and gruesome scenes.

          -

          If you are a fan of Hindi horror movies and want to watch Kafan for free online, you have come to the right place. In this article, we will tell you how to find and stream Kafan legally and safely on the internet. You will also learn some interesting facts and trivia about the movie that will enhance your viewing experience.

          -

          Hindi Movie Free Download Kafan


          DOWNLOAD ››››› https://urlgoal.com/2uIasP



          -

          Where to Watch Kafan (1990) Online for Free

          -

          There are many websites that claim to offer free downloads or streaming of Kafan, but most of them are either illegal, unsafe or unreliable. They may contain viruses, malware, pop-ups, ads or other unwanted content that can harm your device or compromise your privacy. Moreover, downloading or streaming pirated content is a violation of copyright laws and can get you in trouble with the authorities.

          -

          The best way to watch Kafan online for free is to use a legitimate and trusted platform that has the rights to stream the movie. One such platform is IMDb, which is a popular website that provides information and ratings about movies, TV shows and celebrities. IMDb also has a feature called IMDb TV, which allows you to watch thousands of movies and TV shows for free with ads.

          -

          To watch Kafan on IMDb TV, you need to create an account on IMDb and sign in. Then, you can go to the movie's page on IMDb [^1^] and click on the "Watch options" button. You will see a link that says "Watch free on IMDb TV". Click on it and you will be redirected to IMDb TV's website, where you can start streaming Kafan for free with ads.

          -

          Some Facts and Trivia About Kafan (1990)

          -

          Here are some interesting facts and trivia about Kafan that you may not know:

          -
            -
          • Kafan means "shroud" or "coffin" in Hindi.
          • -
          • The movie was inspired by a short story of the same name by Munshi Premchand, a famous Hindi writer. The story was published in 1936 and dealt with the themes of poverty, greed and death.
          • -
          • The movie was also influenced by Hollywood horror movies like The Exorcist (1973) and The Omen (1976).
          • -
          • The movie was shot in Mumbai and Lonavala, a hill station near Mumbai.
          • -
          • The movie was a low-budget production and faced many difficulties during filming. The director had to borrow money from his friends and relatives to complete the movie.
          • -
          • The movie was released on August 1, 1990 and received mixed reviews from critics and audiences. Some praised it for its originality, atmosphere and performances, while others criticized it for its poor production values, editing and direction.
          • -
          • The movie has a cult following among Hindi horror fans and is considered one of the best horror movies of the 1990s.
          • -
          -

          Conclusion

          -

          Kafan is a classic Hindi horror movie that deserves to be watched by fans of the genre. It has a unique story, a creepy mood and some memorable scenes that will keep you on the edge of your seat. If you want to watch Kafan online for free, you can use IMDb TV, which is a legal and safe platform that streams the movie with ads. You can also learn some facts and trivia about the movie that will make your viewing more enjoyable.

          -

          -

          We hope this article has helped you find out how to watch Kafan online for free. If you have any questions or feedback, please let us know in the comments below. Thank you for reading!

          7b8c122e87
          -
          -
          \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/ISpring Suite 9.1.0 Build 25298 Patch HOT.md b/spaces/stomexserde/gpt4-ui/Examples/ISpring Suite 9.1.0 Build 25298 Patch HOT.md deleted file mode 100644 index 89e09bcdf1fd0b1930c3ec34292ec6e19959720c..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/ISpring Suite 9.1.0 Build 25298 Patch HOT.md +++ /dev/null @@ -1,39 +0,0 @@ - -```html -

          How to Use iSpring Suite 9.1.0 Build 25298 Patch to Create Professional Training Courses

          -

          iSpring Suite is a powerful software that allows you to turn your PowerPoint presentations into interactive and engaging e-learning courses. With iSpring Suite, you can add quizzes, simulations, interactions, videos, and more to your slides, and export them as HTML5 + Flash compatible files that can run on any device.

          -

          In this article, we will show you how to use iSpring Suite 9.1.0 Build 25298 Patch to install and activate the latest version of iSpring Suite on your computer. This patch is a crack that bypasses the software's license verification and lets you use it for free.

          -

          iSpring Suite 9.1.0 Build 25298 Patch


          Download >>>>> https://urlgoal.com/2uI9Vc



          -

          Step 1: Download iSpring Suite 9.1.0 Build 25298 Patch

          -

          The first step is to download the patch file from one of the links below:

          - -

          Make sure you have a reliable antivirus software installed on your computer before downloading any files from the internet.

          -

          Step 2: Install iSpring Suite

          -

          The next step is to install iSpring Suite on your computer. You can download the setup file from the official website of iSpring Solutions Inc., the developer of iSpring Suite:

          - -

          Run the setup file and follow the instructions to complete the installation process.

          -

          Step 3: Apply iSpring Suite 9.1.0 Build 25298 Patch

          -

          The final step is to apply the patch file to activate iSpring Suite on your computer. To do this, follow these steps:

          -
            -
          1. Close iSpring Suite if it is running.
          2. -
          3. Go to the folder where you downloaded the patch file and extract it using a file archiver such as WinRAR or 7-Zip.
          4. -
          5. Select the patch file appropriate for your system architecture (32-bit or 64-bit) and copy it.
          6. -
          7. Paste the patch file into the folder where you installed iSpring Suite (usually C:\Program Files (x86)\iSpring\iSpring Suite 9).
          8. -
          9. Run the patch file as administrator and click on "Patch".
          10. -
          11. Wait for the patching process to finish and close the patch window.
          12. -
          -

          Congratulations! You have successfully installed and activated iSpring Suite 9.1.0 Build 25298 on your computer. You can now launch iSpring Suite and start creating professional training courses with PowerPoint.

          -

          Disclaimer

          -

          This article is for educational purposes only. We do not condone or encourage any illegal or unethical use of software or intellectual property. Please respect the rights of the software developers and purchase a valid license if you want to use iSpring Suite for commercial or personal purposes.

          -

          - -```

          e93f5a0c3f
          -
          -
          \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Jackpot Full Movie Tamil Download Hd.md b/spaces/stomexserde/gpt4-ui/Examples/Jackpot Full Movie Tamil Download Hd.md deleted file mode 100644 index b883d550cffb68cd5e681595132d9432f7eebdba..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Jackpot Full Movie Tamil Download Hd.md +++ /dev/null @@ -1,21 +0,0 @@ - -

          Jackpot: A Hilarious Tamil Comedy Action Movie

          -

          If you are looking for a fun and entertaining movie to watch, you might want to check out Jackpot, a 2019 Tamil comedy action film directed by Kalyaan and starring Jyothika and Revathi in the lead roles. The movie is about two smart con-women and a bunch of quirky gangsters who are in pursuit of a mythical vessel that has magical powers.

          -

The movie is full of hilarious scenes and witty dialogue that will keep you laughing throughout. It also has some thrilling action sequences and a twisty plot that will keep you guessing until the end. It is a perfect blend of comedy and action that will appeal to all kinds of audiences.

          -

          Jackpot full movie tamil download hd


          Download File ✪✪✪ https://urlgoal.com/2uI6RZ



          -

          Jackpot is available to watch online on Prime Video[^1^], where you can enjoy the movie in high definition quality. You can also download the movie on your device and watch it offline anytime you want. Jackpot is a movie that you don't want to miss if you are looking for a fun-filled ride.

          Here are some of the reasons why you should watch Jackpot:

          -
            -
          • The movie has a stellar cast of talented actors who deliver amazing performances. Jyothika and Revathi play the roles of Akshaya and Masha, two con-women who use their wit and charm to pull off various heists. They have a great chemistry and a hilarious rapport that makes them a delight to watch. The movie also features Yogi Babu, Anandaraj, Rajendran, Mansoor Ali Khan and Motta Rajendran as some of the gangsters who are after the vessel. They add to the comedy quotient of the movie with their antics and expressions.
          • -
          • The movie has a unique and engaging plot that revolves around a mythical vessel that can grant any wish. The vessel is called Akshaya Patra, which means inexhaustible vessel in Sanskrit. It is a legendary object that was given to the Pandavas by Lord Krishna in the Mahabharata. The vessel can produce any kind of food or drink that one desires. The movie shows how the vessel changes hands over the years and how different people try to use it for their own purposes. The movie also has some twists and turns that will surprise you and keep you hooked.
          • -
          • The movie has some spectacular action scenes that will keep you on the edge of your seat. The movie has a lot of chase sequences, fights, explosions and stunts that are well-executed and thrilling to watch. The movie also has some stunning visuals and cinematography that capture the beauty and chaos of the scenes. The movie has a fast-paced and energetic vibe that will keep you entertained throughout.
          • -
          -

          Jackpot is a movie that will make you laugh, cheer and enjoy every moment of it. It is a perfect entertainer that you can watch with your family and friends. So, what are you waiting for? Go ahead and watch Jackpot on Prime Video today and have a blast!

          If you are wondering what makes Jackpot different from other comedy action movies, here are some of the factors that set it apart:

          -
            -
          • The movie has a strong female-centric theme that showcases the power and intelligence of women. The movie has two female protagonists who are smart, confident and fearless. They are not afraid to take on any challenge or risk and they always have each other's back. They also outsmart and outwit the male antagonists who underestimate them. The movie celebrates the strength and spirit of women in a humorous and inspiring way.
          • -
          • The movie has a rich cultural and historical background that adds to its charm and appeal. The movie explores the origin and significance of the Akshaya Patra, which is a part of the Indian mythology and culture. The movie also shows how the vessel has influenced the lives and events of different people and periods in history. The movie blends the ancient and the modern in a seamless and creative way.
          • -
          • The movie has a catchy and upbeat soundtrack that enhances the mood and atmosphere of the movie. The movie has six songs composed by Vishal Chandrasekhar, who is known for his versatile and innovative music. The songs are catchy, lively and fun to listen to. They also suit the tone and theme of the movie perfectly. The songs are sung by popular singers like Shreya Ghoshal, S.P. Balasubrahmanyam, Dhanush, Anirudh Ravichander and more. The songs are also accompanied by colorful and vibrant choreography that adds to the visual appeal of the movie.
          • -
          -

          Jackpot is a movie that has something for everyone. It is a movie that will make you laugh, thrill you, inspire you and entertain you. It is a movie that you will not regret watching. So, don't miss this opportunity to watch Jackpot on Prime Video and have a wonderful time!

          -

          cec2833e83
          -
          -
          \ No newline at end of file diff --git a/spaces/sub314xxl/MusicGen-Continuation/audiocraft/models/musicgen.py b/spaces/sub314xxl/MusicGen-Continuation/audiocraft/models/musicgen.py deleted file mode 100644 index 4078361d5a670d7700526dc66fd48ca86e41208d..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MusicGen-Continuation/audiocraft/models/musicgen.py +++ /dev/null @@ -1,285 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Main model for using MusicGen. This will combine all the required components -and provide easy access to the generation API. -""" - -import os -import typing as tp - -import torch - -from .encodec import CompressionModel -from .lm import LMModel -from .builders import get_debug_compression_model, get_debug_lm_model -from .loaders import load_compression_model, load_lm_model, HF_MODEL_CHECKPOINTS_MAP -from ..data.audio_utils import convert_audio -from ..modules.conditioners import ConditioningAttributes, WavCondition -from ..utils.autocast import TorchAutocast - - -MelodyList = tp.List[tp.Optional[torch.Tensor]] -MelodyType = tp.Union[torch.Tensor, MelodyList] - - -class MusicGen: - """MusicGen main model with convenient generation API. - - Args: - name (str): name of the model. - compression_model (CompressionModel): Compression model - used to map audio to invertible discrete representations. - lm (LMModel): Language model over discrete representations. - """ - def __init__(self, name: str, compression_model: CompressionModel, lm: LMModel): - self.name = name - self.compression_model = compression_model - self.lm = lm - self.device = next(iter(lm.parameters())).device - self.generation_params: dict = {} - self.set_generation_params(duration=15) # 15 seconds by default - if self.device.type == 'cpu': - self.autocast = TorchAutocast(enabled=False) - else: - self.autocast = TorchAutocast( - enabled=True, device_type=self.device.type, dtype=torch.float16) - - @property - def frame_rate(self) -> int: - """Roughly the number of AR steps per seconds.""" - return self.compression_model.frame_rate - - @property - def sample_rate(self) -> int: - """Sample rate of the generated audio.""" - return self.compression_model.sample_rate - - @property - def audio_channels(self) -> int: - """Audio channels of the generated audio.""" - return self.compression_model.channels - - @staticmethod - def get_pretrained(name: str = 'melody', device='cuda'): - """Return pretrained model, we provide four models: - - small (300M), text to music, # see: https://huggingface.co/facebook/musicgen-small - - medium (1.5B), text to music, # see: https://huggingface.co/facebook/musicgen-medium - - melody (1.5B) text to music and text+melody to music, # see: https://huggingface.co/facebook/musicgen-melody - - large (3.3B), text to music, # see: https://huggingface.co/facebook/musicgen-large - """ - - if name == 'debug': - # used only for unit tests - compression_model = get_debug_compression_model(device) - lm = get_debug_lm_model(device) - return MusicGen(name, compression_model, lm) - - if name not in HF_MODEL_CHECKPOINTS_MAP: - raise ValueError( - f"{name} is not a valid checkpoint name. 
" - f"Choose one of {', '.join(HF_MODEL_CHECKPOINTS_MAP.keys())}" - ) - - cache_dir = os.environ.get('MUSICGEN_ROOT', None) - compression_model = load_compression_model(name, device=device, cache_dir=cache_dir) - lm = load_lm_model(name, device=device, cache_dir=cache_dir) - if name == 'melody': - lm.condition_provider.conditioners['self_wav'].match_len_on_eval = True - - return MusicGen(name, compression_model, lm) - - def set_generation_params(self, use_sampling: bool = True, top_k: int = 250, - top_p: float = 0.0, temperature: float = 1.0, - duration: float = 30.0, cfg_coef: float = 3.0, - two_step_cfg: bool = False): - """Set the generation parameters for MusicGen. - - Args: - use_sampling (bool, optional): Use sampling if True, else do argmax decoding. Defaults to True. - top_k (int, optional): top_k used for sampling. Defaults to 250. - top_p (float, optional): top_p used for sampling, when set to 0 top_k is used. Defaults to 0.0. - temperature (float, optional): Softmax temperature parameter. Defaults to 1.0. - duration (float, optional): Duration of the generated waveform. Defaults to 30.0. - cfg_coef (float, optional): Coefficient used for classifier free guidance. Defaults to 3.0. - two_step_cfg (bool, optional): If True, performs 2 forward for Classifier Free Guidance, - instead of batching together the two. This has some impact on how things - are padded but seems to have little impact in practice. - """ - assert duration <= 30, "The MusicGen cannot generate more than 30 seconds" - self.generation_params = { - 'max_gen_len': int(duration * self.frame_rate), - 'use_sampling': use_sampling, - 'temp': temperature, - 'top_k': top_k, - 'top_p': top_p, - 'cfg_coef': cfg_coef, - 'two_step_cfg': two_step_cfg, - } - - def generate_unconditional(self, num_samples: int, progress: bool = False) -> torch.Tensor: - """Generate samples in an unconditional manner. - - Args: - num_samples (int): Number of samples to be generated. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - descriptions: tp.List[tp.Optional[str]] = [None] * num_samples - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None) - return self._generate_tokens(attributes, prompt_tokens, progress) - - def generate(self, descriptions: tp.List[str], progress: bool = False) -> torch.Tensor: - """Generate samples conditioned on text. - - Args: - descriptions (tp.List[str]): A list of strings used as text conditioning. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None) - assert prompt_tokens is None - return self._generate_tokens(attributes, prompt_tokens, progress) - - def generate_with_chroma(self, descriptions: tp.List[str], melody_wavs: MelodyType, - melody_sample_rate: int, progress: bool = False) -> torch.Tensor: - """Generate samples conditioned on text and melody. - - Args: - descriptions (tp.List[str]): A list of strings used as text conditioning. - melody_wavs: (torch.Tensor or list of Tensor): A batch of waveforms used as - melody conditioning. Should have shape [B, C, T] with B matching the description length, - C=1 or 2. It can be [C, T] if there is a single description. It can also be - a list of [C, T] tensors. - melody_sample_rate: (int): Sample rate of the melody waveforms. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. 
- """ - if isinstance(melody_wavs, torch.Tensor): - if melody_wavs.dim() == 2: - melody_wavs = melody_wavs[None] - if melody_wavs.dim() != 3: - raise ValueError("Melody wavs should have a shape [B, C, T].") - melody_wavs = list(melody_wavs) - else: - for melody in melody_wavs: - if melody is not None: - assert melody.dim() == 2, "One melody in the list has the wrong number of dims." - - melody_wavs = [ - convert_audio(wav, melody_sample_rate, self.sample_rate, self.audio_channels) - if wav is not None else None - for wav in melody_wavs] - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions=descriptions, prompt=None, - melody_wavs=melody_wavs) - assert prompt_tokens is None - return self._generate_tokens(attributes, prompt_tokens, progress) - - def generate_continuation(self, prompt: torch.Tensor, prompt_sample_rate: int, - descriptions: tp.Optional[tp.List[tp.Optional[str]]] = None, - progress: bool = False) -> torch.Tensor: - """Generate samples conditioned on audio prompts. - - Args: - prompt (torch.Tensor): A batch of waveforms used for continuation. - Prompt should be [B, C, T], or [C, T] if only one sample is generated. - prompt_sample_rate (int): Sampling rate of the given audio waveforms. - descriptions (tp.List[str], optional): A list of strings used as text conditioning. Defaults to None. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - if prompt.dim() == 2: - prompt = prompt[None] - if prompt.dim() != 3: - raise ValueError("prompt should have 3 dimensions: [B, C, T] (C = 1).") - prompt = convert_audio(prompt, prompt_sample_rate, self.sample_rate, self.audio_channels) - if descriptions is None: - descriptions = [None] * len(prompt) - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, prompt) - assert prompt_tokens is not None - return self._generate_tokens(attributes, prompt_tokens, progress) - - @torch.no_grad() - def _prepare_tokens_and_attributes( - self, - descriptions: tp.Sequence[tp.Optional[str]], - prompt: tp.Optional[torch.Tensor], - melody_wavs: tp.Optional[MelodyList] = None, - ) -> tp.Tuple[tp.List[ConditioningAttributes], tp.Optional[torch.Tensor]]: - """Prepare model inputs. - - Args: - descriptions (tp.List[str]): A list of strings used as text conditioning. - prompt (torch.Tensor): A batch of waveforms used for continuation. - melody_wavs (tp.Optional[torch.Tensor], optional): A batch of waveforms - used as melody conditioning. Defaults to None. - """ - attributes = [ - ConditioningAttributes(text={'description': description}) - for description in descriptions] - - if melody_wavs is None: - for attr in attributes: - attr.wav['self_wav'] = WavCondition( - torch.zeros((1, 1), device=self.device), - torch.tensor([0], device=self.device), - path='null_wav') # type: ignore - else: - if self.name != "melody": - raise RuntimeError("This model doesn't support melody conditioning. " - "Use the `melody` model.") - assert len(melody_wavs) == len(descriptions), \ - f"number of melody wavs must match number of descriptions! 
" \ - f"got melody len={len(melody_wavs)}, and descriptions len={len(descriptions)}" - for attr, melody in zip(attributes, melody_wavs): - if melody is None: - attr.wav['self_wav'] = WavCondition( - torch.zeros((1, 1), device=self.device), - torch.tensor([0], device=self.device), - path='null_wav') # type: ignore - else: - attr.wav['self_wav'] = WavCondition( - melody.to(device=self.device), - torch.tensor([melody.shape[-1]], device=self.device)) - - if prompt is not None: - if descriptions is not None: - assert len(descriptions) == len(prompt), "Prompt and nb. descriptions doesn't match" - prompt = prompt.to(self.device) - prompt_tokens, scale = self.compression_model.encode(prompt) - assert scale is None - else: - prompt_tokens = None - return attributes, prompt_tokens - - def _generate_tokens(self, attributes: tp.List[ConditioningAttributes], - prompt_tokens: tp.Optional[torch.Tensor], progress: bool = False) -> torch.Tensor: - """Generate discrete audio tokens given audio prompt and/or conditions. - - Args: - attributes (tp.List[ConditioningAttributes]): Conditions used for generation (text/melody). - prompt_tokens (tp.Optional[torch.Tensor]): Audio prompt used for continuation. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - Returns: - torch.Tensor: Generated audio, of shape [B, C, T], T is defined by the generation params. - """ - def _progress_callback(generated_tokens: int, tokens_to_generate: int): - print(f'{generated_tokens: 6d} / {tokens_to_generate: 6d}', end='\r') - - if prompt_tokens is not None: - assert self.generation_params['max_gen_len'] > prompt_tokens.shape[-1], \ - "Prompt is longer than audio to generate" - - callback = None - if progress: - callback = _progress_callback - - # generate by sampling from LM - with self.autocast: - gen_tokens = self.lm.generate(prompt_tokens, attributes, callback=callback, **self.generation_params) - - # generate audio - assert gen_tokens.dim() == 3 - with torch.no_grad(): - gen_audio = self.compression_model.decode(gen_tokens, None) - return gen_audio diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Food Science Nutrition Sunetra Roday Pdf Download UPDATED.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Food Science Nutrition Sunetra Roday Pdf Download UPDATED.md deleted file mode 100644 index 9fb53517b290c4451ff7c2762cd8ac2956f25241..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Food Science Nutrition Sunetra Roday Pdf Download UPDATED.md +++ /dev/null @@ -1,52 +0,0 @@ -

          food science nutrition sunetra roday pdf download


          Download Zip 🌟 https://cinurl.com/2uEYwc



          - -izle. Health food science nutrition sunetra roday pdf izle, you can to download it in pdf file. Health food science nutrition sunetra roday pdf izle. - -The importance of food science nutrition, Free Download Food science nutrition nutritional and health benefits of fruits, free download, PDF file, ebook format, Food science nutrition is a must read for all in this century - -Health, Food, Nutrition the code base on MSDN, see the example code on GitHub for a full explanation of how it works. - -The first thing you do is set up your filter. I used: - -var filter = new Predicate(predicate); - -The Predicate is a TypeFilter. The predicate function is just a delegate to a function you pass in to calculate the filter. In my case, I wanted to use an attribute: - -public class MaxLengthAttribute : FilterMetadataAttribute - -{ - - public int MinLength get; set; - - public int MaxLength get; set; - - public override string Name get; = "MaxLength"; - - public override bool IsValidForRequest(object item) - - { - - var value = item as string; - - return value!= null && value.Length Q: - -Rails 4.1 how to get the current users working on? - -After some searches and reading i was able to figure out how to get the current user working on - -I'm trying to figure out how to do the same but with the following query - - @issue = Issue.find_by_user_id(current_user.id) - -in my Issue.rb model - -scope :with_user, lambda joins(:user).where('users.id =?', id) - -I get - -undefined method `id' for # - -I'm 4fefd39f24
          -
          -
          -

          diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Usb-driver-intenso-tab-814.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Usb-driver-intenso-tab-814.md deleted file mode 100644 index ed03c1c201e21f12eae18e39167dac7679cea4c5..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Usb-driver-intenso-tab-814.md +++ /dev/null @@ -1,8 +0,0 @@ -
          -

In this article I will discuss the Windows drivers needed to install the Intenso USB device. Intel has discontinued support for the USB controllers used in the original INTENSO USB. Although the embedded USB controller is no longer used by many products, the company still provides it and hopes users will understand. Many device manufacturers use other third-party USB controllers or add-in cards to add functionality to a device. If you need to install a driver, you may consider using a newer Intel USB driver such as the one at Intel.com/USB3.0_DRIVERS.

          -

If you have your own Windows USB driver and Windows cannot find your new Intenso USB hard disk at all, then this article is just for you. Watch this quick guide to create your own USB driver that Windows can recognize by name instead of by device ID. This guide is for people who have a new Intenso USB hard disk and want to use it on Windows. Furthermore, it covers how to use the Windows USB driver to create a bootable USB drive that can boot any Windows OS from your USB hard disk (a minimal sketch of that step follows below).

          -
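The guide itself never shows the bootable-USB step, so here is a minimal, hypothetical sketch of it in Python. It assumes a Windows machine with `diskpart` available and administrator rights, and it assumes the USB drive is disk 1; the disk number, the drive letter `U`, and the `make_usb_bootable` helper are illustrative only and are not part of the original article.

```python
import subprocess
import tempfile

# Hypothetical sketch: format a USB drive as an active, bootable NTFS volume
# by feeding a script to diskpart. WARNING: "clean" erases the selected disk,
# so the disk number below is an assumption that must be checked first with
# "diskpart > list disk".
DISKPART_SCRIPT = """\
select disk 1
clean
create partition primary
select partition 1
active
format fs=ntfs quick
assign letter=U
exit
"""

def make_usb_bootable(script: str = DISKPART_SCRIPT) -> None:
    """Run diskpart with the given script (Windows only, requires admin rights)."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(script)
        script_path = f.name
    # diskpart reads its commands from the file passed with /s
    subprocess.run(["diskpart", "/s", script_path], check=True)

if __name__ == "__main__":
    make_usb_bootable()
    # Afterwards, copy the Windows installation files onto U:\ and write the
    # boot sector (for example with "bootsect /nt60 U:" from install media).
```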

          usb-driver-intenso-tab-814


          Download Filehttps://cinurl.com/2uEXDJ



          -

          usb-driver-intenso-tab-814
If you still see the message even after you have fixed the issue, it may be caused by a virus or Trojan. Viruses and Trojans can infect your computer or steal sensitive information if you use it without adequate protection. In this case, you should reinstall your system to remove the infection.

          -

We have uploaded the software to the USB stick. For now, you can remove the microSD card from the SD card reader and insert it into a laptop or computer. The bootable USB stick can then be used on other PCs ( http://www.formac.com.tw/drivers/detail.php?no=12 ).

          899543212b
          -
          -
          \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/visit Nosteam Forum Html.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/visit Nosteam Forum Html.md deleted file mode 100644 index 67d58b7e86aeefa217cd9455c7e95aed3f97684e..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/visit Nosteam Forum Html.md +++ /dev/null @@ -1,6 +0,0 @@ -
          -

These accounts were limited to the following countries: the United States of America, Brazil, Russia, Poland, India, and Canada.

Valve indicated it didn't believe many Steam accounts were stolen for the purpose of Steam farming; however, it was confirmed that some accounts were abused to purchase 2.6 million items for other users.

Lizard Squad has since written that the intrusion caused "problems with valve's steam software. game play is slow or none at all when you log in. valve has quickly confirmed that there was no impact to customer data".

Note: Valve has since moved its support forum out of the forums. You now access it by clicking your account name at the top right.

In December 2015, Valve made Steam and the Steam servers secure against most hacks, including this one, by using what it calls a system of pre-installation checks that verify that things are as they should be before installation. You can read more about it in this forum post: 'steam/steam forums - about signing on to steam'.
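The post does not say what those pre-installation checks actually look at, so here is a generic, hypothetical sketch of the idea: comparing each file's SHA-256 digest against a known-good manifest before installation is allowed to proceed. The JSON manifest format, the file names, and the `verify_before_install` helper are illustrative assumptions and not Valve's actual implementation.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_before_install(install_dir: Path, manifest_path: Path) -> bool:
    """Check every file listed in the manifest before installation proceeds.

    The manifest is assumed to be a JSON mapping of relative file paths to
    expected SHA-256 digests, e.g. {"bin/launcher.exe": "ab12..."}.
    """
    manifest = json.loads(manifest_path.read_text())
    for rel_path, expected in manifest.items():
        candidate = install_dir / rel_path
        if not candidate.is_file() or sha256_of(candidate) != expected:
            return False  # something is not as it should be; abort installation
    return True

if __name__ == "__main__":
    ok = verify_before_install(Path("downloaded_build"), Path("manifest.json"))
    print("proceed with installation" if ok else "verification failed")
```

Hash-based manifests of this kind are a common, general way to detect tampered or corrupted files before they are run.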

          -

          'visit nosteam forum html'


          DOWNLOADhttps://cinurl.com/2uEZ6b



          -

On November 29, 2016, a German security researcher named Falko Strenzke disclosed a vulnerability in Steam web authentication that could allow an attacker to access your Steam account. This article explains some of the details of the vulnerability and how it works.

Hackers such as Lizard Squad used this vulnerability to knock more than 100 million users offline by making them log on to their Steam accounts over insecure web connections.

On November 25, 2016, Valve reported that at least 156 accounts were compromised and subsequently hacked, presumably in this manner.

          "you can read more about it at this forum post: 'steam/steam forums - account security was compromised'

          899543212b
          -
          -
          \ No newline at end of file diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Atube Catcher Lista De Paginas Porno [UPDATED].md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Atube Catcher Lista De Paginas Porno [UPDATED].md deleted file mode 100644 index 05a13042bc75e52040a94ab5b3b1c7c1cac45360..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Atube Catcher Lista De Paginas Porno [UPDATED].md +++ /dev/null @@ -1,34 +0,0 @@ -

          Atube Catcher Lista De Paginas Porno


          Download Filehttps://urluss.com/2uCG9T



          -
          -New hd video hd porn download vids online. Two pusy and a pie. Sis kega landakkiya nasil bozburar. Pentasyon porno dalec paracetamol results. Panties underwear nyy sekarang dibagian nasi, Ponst nuruya noki.Q: - -Show that if $|a_n+1-a_n|\le|a_n-a_n-1|$ then $\lim_n\to\inftya_n=0$. - -Let $(a_n)$ be a sequence of real numbers. Show that if - -$$ - -|a_n+1-a_n|\le|a_n-a_n-1| - -then $\lim_n\to\inftya_n=0$. - -I have tried to prove it in the following way: - -If $a_n\le0$, then we have - -0=a_n-a_n-1\le a_n+1-a_n\le|a_n+1-a_n|\le|a_n-a_n-1| - -which implies that $a_n\to0$ as $n\to\infty$. - -If $a_n\ge0$, then we have - -However, I'm not so sure if this is correct, and I wonder if there is a more elegant way to prove it. - -A: - -This can be done by using the difference between two consecutive terms to get a recurrence relation for $|a_n-a_n-1|$. By the second inequality of the given assumption, we have $|a_n-a_n-1|\leq |a_n+1-a_n|$ and hence - -\begin 4fefd39f24
          -
          -
          -

          diff --git a/spaces/syaz01/rvc-anigames-v2/lib/infer_pack/models.py b/spaces/syaz01/rvc-anigames-v2/lib/infer_pack/models.py deleted file mode 100644 index 44c08d361bcb13b84b38dc29beff5cdaddad4ea2..0000000000000000000000000000000000000000 --- a/spaces/syaz01/rvc-anigames-v2/lib/infer_pack/models.py +++ /dev/null @@ -1,1124 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from lib.infer_pack import modules -from lib.infer_pack import attentions -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from lib.infer_pack.commons import init_weights -import numpy as np -from lib.infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class 
ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in 
range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - 
"""SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in 
range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, 
max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) 
- return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - 
self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = 
nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/syy404/whisper-webui/app-network.py b/spaces/syy404/whisper-webui/app-network.py deleted file mode 100644 index 7605c4b126dfc7dac188dce38551ca8ae84d67db..0000000000000000000000000000000000000000 --- a/spaces/syy404/whisper-webui/app-network.py +++ /dev/null @@ -1,3 +0,0 @@ -# Run the app with no audio file restrictions, and make it available on the network -from app import create_ui -create_ui(-1, server_name="0.0.0.0") \ No newline at end of file diff --git a/spaces/szukevin/VISOR-GPT/train/scripts/__init__.py b/spaces/szukevin/VISOR-GPT/train/scripts/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git 
a/spaces/t13718236382/web-ui/_next/static/chunks/pages/_error-87afbe7e3d327810.js b/spaces/t13718236382/web-ui/_next/static/chunks/pages/_error-87afbe7e3d327810.js deleted file mode 100644 index dd0478f1fd5fffa460f08ed8f0dbaa12f066c205..0000000000000000000000000000000000000000 --- a/spaces/t13718236382/web-ui/_next/static/chunks/pages/_error-87afbe7e3d327810.js +++ /dev/null @@ -1 +0,0 @@ -(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[820],{81981:function(n,_,u){(window.__NEXT_P=window.__NEXT_P||[]).push(["/_error",function(){return u(28476)}])}},function(n){n.O(0,[888,774,179],function(){return n(n.s=81981)}),_N_E=n.O()}]); \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Archon Classic Crack And Patch.md b/spaces/terfces0erbo/CollegeProjectV2/Archon Classic Crack And Patch.md deleted file mode 100644 index 165ea863eec67835b5784314c4e780bfedfc0321..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Archon Classic Crack And Patch.md +++ /dev/null @@ -1,9 +0,0 @@ -
          -

Thanks for the answer, I'm a beginner and I know it can be a hard choice. I have heard the very first Black Sabbath album and I didn't enjoy it, but I love the sound of the Deep Purple albums from the first album to the last. I have the Jackson Blackheart and the Jackson Blackstar, and also the Fender Jazzmasters. I have a Rickenbacker Telecaster. I'm going to buy the Ampeg 1273 as it would be easy to replace the 1275 in the future. Right now I am looking for a good amp for blues, rock and classic rock, and I think I will get the SL5 for now.
So this combo has a Deep Purple sound, a Black Sabbath sound and a Jazzmaster sound.
The Black Sabbath sound is far heavier but can be good in small rooms or for practice (like my garage).

          -

Bjorn,
I recommend that you get a Fender Princeton Reverb and a Boss DS-1 or similar. For the clean tone, get a Boss or similar overdrive pedal. If you want to get the classic tones, get a Fender Bassman amp. The Fender Bassman is great; I think it has a little more headroom than the Princeton. The Fender Bassman is very powerful, but also very controlled.

          -

          Archon Classic crack and patch


          Download Filehttps://bytlly.com/2uGjCR



          -

I've never played an LT5, so I can't really describe the tone or fully recommend it. Based on the clips I've heard, it has a classic British tone similar to the AC and the Plexi, so I would assume that it's both a nice pedal platform and suitable for David's tones.

          -

I've never played an LC50, so I can't really describe the tone or fully recommend it. Based on the clips I've heard, it has a classic British tone similar to the AC and the Plexi, so I would assume that it's both a nice pedal platform and suitable for David's tones.

          -

          899543212b
          -
          -
          \ No newline at end of file diff --git a/spaces/thealphhamerc/text-to-speech/README.md b/spaces/thealphhamerc/text-to-speech/README.md deleted file mode 100644 index f630de365c8b17dabb73d9cdfa987e2c9fa029fd..0000000000000000000000000000000000000000 --- a/spaces/thealphhamerc/text-to-speech/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Text To Speech -emoji: 📈 -colorFrom: indigo -colorTo: blue -sdk: gradio -sdk_version: 3.28.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/thecentuaro/oai-proxy-geoblock-zov-edition/greeting.md b/spaces/thecentuaro/oai-proxy-geoblock-zov-edition/greeting.md deleted file mode 100644 index e8e56932660ed4e53855a8861ab1bd57c2f6189c..0000000000000000000000000000000000000000 --- a/spaces/thecentuaro/oai-proxy-geoblock-zov-edition/greeting.md +++ /dev/null @@ -1,15 +0,0 @@ -Все вопросы по поводу прокси на мою почту: nagibator1487@gmail.com - -Если вы хотите добавить ключей, я был бы очень признателен за помощь community и мог бы упомянуть вас здесь, как благотворителя. - -Или я могу отблагодарить вас в другом виде, к примеру, сделать вам карточку по вашей просьбе вне очереди с паком emotions. - - ---------------- - -All questions about proxy on my mail: nagibator1487@gmail.com. - -If you suddenly want to add keys, I will be immensely grateful to you for helping our community and I can include you here as a beneficiary. - -Or I can try to thank you in some other way, for example, by making you a card for your request out of turn and with a guaranteed pack of emotions. - diff --git a/spaces/thegenerativegeneration/FNeVR_demo/sync_batchnorm/comm.py b/spaces/thegenerativegeneration/FNeVR_demo/sync_batchnorm/comm.py deleted file mode 100644 index 922f8c4a3adaa9b32fdcaef09583be03b0d7eb2b..0000000000000000000000000000000000000000 --- a/spaces/thegenerativegeneration/FNeVR_demo/sync_batchnorm/comm.py +++ /dev/null @@ -1,137 +0,0 @@ -# -*- coding: utf-8 -*- -# File : comm.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. - -import queue -import collections -import threading - -__all__ = ['FutureResult', 'SlavePipe', 'SyncMaster'] - - -class FutureResult(object): - """A thread-safe future implementation. Used only as one-to-one pipe.""" - - def __init__(self): - self._result = None - self._lock = threading.Lock() - self._cond = threading.Condition(self._lock) - - def put(self, result): - with self._lock: - assert self._result is None, 'Previous result has\'t been fetched.' - self._result = result - self._cond.notify() - - def get(self): - with self._lock: - if self._result is None: - self._cond.wait() - - res = self._result - self._result = None - return res - - -_MasterRegistry = collections.namedtuple('MasterRegistry', ['result']) -_SlavePipeBase = collections.namedtuple('_SlavePipeBase', ['identifier', 'queue', 'result']) - - -class SlavePipe(_SlavePipeBase): - """Pipe for master-slave communication.""" - - def run_slave(self, msg): - self.queue.put((self.identifier, msg)) - ret = self.result.get() - self.queue.put(True) - return ret - - -class SyncMaster(object): - """An abstract `SyncMaster` object. 
- - - During the replication, as the data parallel will trigger an callback of each module, all slave devices should - call `register(id)` and obtain an `SlavePipe` to communicate with the master. - - During the forward pass, master device invokes `run_master`, all messages from slave devices will be collected, - and passed to a registered callback. - - After receiving the messages, the master device should gather the information and determine to message passed - back to each slave devices. - """ - - def __init__(self, master_callback): - """ - - Args: - master_callback: a callback to be invoked after having collected messages from slave devices. - """ - self._master_callback = master_callback - self._queue = queue.Queue() - self._registry = collections.OrderedDict() - self._activated = False - - def __getstate__(self): - return {'master_callback': self._master_callback} - - def __setstate__(self, state): - self.__init__(state['master_callback']) - - def register_slave(self, identifier): - """ - Register an slave device. - - Args: - identifier: an identifier, usually is the device id. - - Returns: a `SlavePipe` object which can be used to communicate with the master device. - - """ - if self._activated: - assert self._queue.empty(), 'Queue is not clean before next initialization.' - self._activated = False - self._registry.clear() - future = FutureResult() - self._registry[identifier] = _MasterRegistry(future) - return SlavePipe(identifier, self._queue, future) - - def run_master(self, master_msg): - """ - Main entry for the master device in each forward pass. - The messages were first collected from each devices (including the master device), and then - an callback will be invoked to compute the message to be sent back to each devices - (including the master device). - - Args: - master_msg: the message that the master want to send to itself. This will be placed as the first - message when calling `master_callback`. For detailed usage, see `_SynchronizedBatchNorm` for an example. - - Returns: the message to be sent back to the master device. - - """ - self._activated = True - - intermediates = [(0, master_msg)] - for i in range(self.nr_slaves): - intermediates.append(self._queue.get()) - - results = self._master_callback(intermediates) - assert results[0][0] == 0, 'The first result should belongs to the master.' - - for i, res in results: - if i == 0: - continue - self._registry[i].result.put(res) - - for i in range(self.nr_slaves): - assert self._queue.get() is True - - return results[0][1] - - @property - def nr_slaves(self): - return len(self._registry) diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Bullet Raja Hd Mp4 Movie Download WORK.md b/spaces/tialenAdioni/chat-gpt-api/logs/Bullet Raja Hd Mp4 Movie Download WORK.md deleted file mode 100644 index 1e077aa5d03a40dc5fe448e7ec7c4a525cccd5fb..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Bullet Raja Hd Mp4 Movie Download WORK.md +++ /dev/null @@ -1,25 +0,0 @@ - -Here is a possible title and article with HTML formatting for the keyword "Bullet Raja Hd Mp4 Movie Download": - -

          Bullet Raja: A Thrilling Action Movie to Watch Online

          -

          Bullet Raja is a 2013 Hindi action movie starring Saif Ali Khan, Sonakshi Sinha and Jimmy Sheirgill. The movie follows the story of Raja Mishra, a common man who becomes a notorious gangster in the heartland of Uttar Pradesh. He challenges the system and the corrupt nexus of police, politicians and industrialists with his friend Rudra.

          -

          Bullet Raja Hd Mp4 Movie Download


          DOWNLOAD ->>> https://urlcod.com/2uK5Zv



          -

          The movie is directed by Tigmanshu Dhulia, who is known for his realistic and gritty films like Paan Singh Tomar and Saheb Biwi Aur Gangster. The movie has some high-octane action sequences, witty dialogues and a gripping plot. The movie also features Vidyut Jammwal, Gulshan Grover, Raj Babbar and Chunky Pandey in supporting roles.

          -

          If you are looking for a thrilling and entertaining movie to watch online, you can download Bullet Raja in HD MP4 format from various websites. You can also stream the movie online on platforms like YouTube, Netflix and Amazon Prime Video. However, you should be aware of the legal and ethical issues of downloading or watching pirated movies online.

          -

          Bullet Raja is a movie that will keep you hooked till the end with its fast-paced and engaging story. It is a movie that celebrates the spirit of rebellion and friendship. It is a movie that you should not miss if you are a fan of action and drama.

-
-

          The movie has received mixed reviews from critics and audiences. Some praised the movie for its action, humor and performances, while others criticized it for its length, music and direction. The movie has a rating of 33% on Rotten Tomatoes, based on 6 reviews, and a rating of 3.5/5 on Times of India, based on 1 review. The movie also has a rating of 4.9/10 on IMDb, based on 2,320 votes.

          -

          The movie has some memorable scenes and dialogues that will make you laugh and cheer. Some of them are:

          -
            -
• When Raja tells Rudra how he met Mitali: "Main usse shaadi karne wala tha - lekin uski shaadi ho gayi." ("I was going to marry her, but then she got married.")
• -
• When Raja and Rudra kidnap Bajaj and demand a ransom: "Humne aapko kidnap kiya hai - aur humein pata hai ki aapke paas bahut paisa hai." ("We have kidnapped you, and we know you have a lot of money.")
• -
• When Raja and Rudra face Arun Singh in a showdown: "Hum dono ko maaroge? Hum dono ko maaroge? Hum dono ko maaroge? Hum dono ko maaroge?" ("Will you kill us both?")
          • -
          -

          Bullett Raja is a movie that will appeal to those who like action-packed and entertaining movies with a touch of realism and satire. It is a movie that showcases the talent and charisma of Saif Ali Khan and Jimmy Shergill as the lead pair. It is a movie that you can watch online and enjoy with your friends and family.

          -


          The movie was made on a budget of ₹ 52 crore and collected ₹ 40 crore at the box office, making it a flop. The movie performed poorly in India as well as overseas markets. The movie faced competition from other releases like Gori Tere Pyaar Mein, Singh Saab the Great and Ram-Leela. The movie also received negative reviews from some critics who felt that the movie was too long, too loud and too violent.

          -

          However, the movie also had some positive aspects that were appreciated by some critics and audiences. The movie had some good performances by the lead actors, especially Saif Ali Khan and Jimmy Shergill, who shared a great chemistry and camaraderie on screen. The movie also had some catchy songs like Tamanche Pe Disco and Saamne Hai Savera, composed by Sajid-Wajid. The movie also had some stylish action scenes choreographed by Parvez Khan and Ravi Varma.

          -

          Bullett Raja is a movie that tried to blend the masala genre with the realistic style of Tigmanshu Dhulia. It is a movie that had some potential but failed to live up to it. It is a movie that you can watch online if you are a fan of Saif Ali Khan or action movies.

          7196e7f11a
          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Christina Perri Lovestrong Album Download Zip.md b/spaces/tialenAdioni/chat-gpt-api/logs/Christina Perri Lovestrong Album Download Zip.md deleted file mode 100644 index ede8e941e4cc1b1d30b9bc7eaff5222ed035584e..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Christina Perri Lovestrong Album Download Zip.md +++ /dev/null @@ -1,33 +0,0 @@ - -

          How to Download Christina Perri's Lovestrong Album for Free

          -

          If you are a fan of Christina Perri, you might be interested in downloading her debut album Lovestrong for free. Lovestrong is a pop album that features Perri's hit single "Jar of Hearts", as well as other songs about love, heartbreak, and empowerment. In this article, we will show you how to download Lovestrong album in zip format from two different sources: Archive.org and iTunes.

          -

          Download Lovestrong Album from Archive.org

          -

          Archive.org is a website that offers free access to millions of digital items, including books, music, videos, and more. You can download Christina Perri's Lovestrong album from Archive.org by following these steps:

          -

          Christina Perri Lovestrong Album Download Zip


          Download Ziphttps://urlcod.com/2uKaj0



          -
            -
          1. Go to https://archive.org/details/christina-perri-lovestrong.
          2. -
          3. Click on the "VBR ZIP" link under the "DOWNLOAD OPTIONS" section on the right side of the page.
          4. -
          5. Save the zip file to your computer or device.
          6. -
          7. Extract the zip file using a software like WinZip or 7-Zip.
          8. -
          9. Enjoy listening to the songs from Lovestrong album.
          10. -
          -

          Note that Archive.org offers Lovestrong album in MP3 format with variable bit rate (VBR). This means that the quality of the audio may vary depending on the song. If you prefer a higher quality audio, you may want to download Lovestrong album from iTunes instead.

          -

          Download Lovestrong Album from iTunes

          -

          iTunes is a popular online store that sells digital music, movies, TV shows, and more. You can download Christina Perri's Lovestrong album from iTunes by following these steps:

          -
            -
          1. Go to https://music.apple.com/us/album/lovestrong-deluxe-version/511578557.
          2. -
          3. Click on the "View in iTunes" button on the top right corner of the page.
          4. -
          5. If you don't have iTunes installed on your computer or device, you will be prompted to download and install it first.
          6. -
          7. Once you have iTunes open, you will see the Lovestrong album page. Click on the "Buy" button next to the album title.
          8. -
          9. You will need to sign in with your Apple ID and password, or create one if you don't have one already.
          10. -
          11. You will also need to provide your payment information, such as your credit card or PayPal account.
          12. -
          13. After you complete your purchase, you will be able to download Lovestrong album in zip format to your computer or device.
          14. -
          15. Extract the zip file using a software like WinZip or 7-Zip.
          16. -
          17. Enjoy listening to the songs from Lovestrong album.
          18. -
          -

          Note that iTunes offers Lovestrong album in AAC format with 256 kbps bit rate. This means that the quality of the audio is higher than that of MP3 format. However, you will also need to pay $9.99 for the album, or $1.29 for each song individually.

          -

          Conclusion

          -

          In this article, we have shown you how to download Christina Perri's Lovestrong album for free from Archive.org or for a fee from iTunes. Both sources offer Lovestrong album in zip format, which you can extract and play on your computer or device. However, there are some differences in terms of audio quality and price between the two sources. You can choose the one that suits your preferences and budget best. We hope you enjoy listening to Christina Perri's Lovestrong album!

          -

          7196e7f11a
          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download Film Soe Hok Gie 40.md b/spaces/tialenAdioni/chat-gpt-api/logs/Download Film Soe Hok Gie 40.md deleted file mode 100644 index 80dce8fd9af85c0734cccfb529205a925349b6d3..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Download Film Soe Hok Gie 40.md +++ /dev/null @@ -1,22 +0,0 @@ - -Here is a possible title and article with html formatting for the keyword "Download Film Soe Hok Gie 40": - -

          How to Download Film Soe Hok Gie 40 Online

          -

          Film Soe Hok Gie 40 is a biographical drama film that tells the story of Soe Hok Gie, an Indonesian activist who lived in the turbulent era of Soekarno and Soeharto. The film is based on his diary, Catatan Seorang Demonstran, and directed by Riri Riza. The film stars Nicholas Saputra as Soe Hok Gie, Donny Alamsyah as his friend Max, Lukman Sardi as his brother Arief, and Rosaline Oscar as his love interest Ira.

          -

          If you are interested in watching this film, you might be wondering how to download it online. Here are some steps you can follow:

          -

          Download Film Soe Hok Gie 40


          Download Filehttps://urlcod.com/2uK1ar



          -
            -
          1. Go to a website that offers free streaming or downloading of Indonesian films, such as BioskopGaul or Wixsite.[^1^] [^2^]
          2. -
          3. Search for the film title "Gie" or "Soe Hok Gie 40" in the search box.
          4. -
          5. Select the film from the list of results and click on the play button or the download link.
          6. -
          7. Enjoy watching the film on your device or save it for later viewing.
          8. -
          -

          Note that some websites might require you to register or create an account before you can access the film. Some websites might also have pop-up ads or malware that can harm your device. Be careful and use a reliable antivirus software when browsing these websites.

          -

          Alternatively, you can also watch the film on SoundCloud, where someone has uploaded the audio track of the film.[^3^] However, this might not be the best way to enjoy the film as you will miss out on the visual aspects of the film.

          -

          Film Soe Hok Gie 40 is a captivating and inspiring film that showcases the life and struggles of one of Indonesia's most influential activists. If you are interested in learning more about Indonesian history and politics, this film is a must-watch for you.

-

          Soe Hok Gie was born in 1942 in Jakarta, Indonesia. He was of Chinese descent and came from a well-educated family. He studied at the University of Indonesia, where he majored in history and became involved in various student movements. He was known for his outspoken criticism of both Soekarno and Soeharto, the first and second presidents of Indonesia, who he saw as authoritarian and corrupt. He also advocated for social justice, human rights, and environmental protection. He was influenced by the writings of Tan Malaka, Sukarno, and Mahatma Gandhi.

          -

          Soe Hok Gie was also an avid mountaineer and nature lover. He often went on hiking trips with his friends and wrote about his experiences in his diary. He died at the age of 26 on December 16, 1969, when he was suffocated by poisonous gas while climbing Mount Semeru in East Java. His death was a shock to many of his friends and followers, who regarded him as a hero and a martyr. His diary was published posthumously in 1983 by his brother Arief Budiman, who is also a prominent activist and academic.

          -

          Film Soe Hok Gie 40 is a tribute to the legacy of Soe Hok Gie and his ideals. The film was released in 2005 to commemorate the 40th anniversary of his death. The film received positive reviews from critics and audiences alike, who praised the performance of Nicholas Saputra, the cinematography, the music, and the historical accuracy of the film. The film also won several awards, including the Best Film award at the Indonesian Film Festival in 2005.

          7196e7f11a
          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download Mustafa lolu Gizli limler Hazinesi PDF The Ultimate Resource for Enthusiasts of Hidden Sciences.md b/spaces/tialenAdioni/chat-gpt-api/logs/Download Mustafa lolu Gizli limler Hazinesi PDF The Ultimate Resource for Enthusiasts of Hidden Sciences.md deleted file mode 100644 index 3b906835981bf52a2e16f60a39aaf51989ea29c0..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Download Mustafa lolu Gizli limler Hazinesi PDF The Ultimate Resource for Enthusiasts of Hidden Sciences.md +++ /dev/null @@ -1,62 +0,0 @@ - -

          Mustafa İloğlu Gizli İlimler Hazinesi: A Treasure of Hidden Sciences

          -

          Mustafa İloğlu is a Turkish author who has written a series of books on the topics of havas, büyü, vefk, cifr, ebced, cin, melek, ifrit, esma-i hüsna, dua and kenzül arş. These are terms related to the Islamic mystical tradition of Sufism, occultism, numerology, astrology, angelology, demonology and supplication. His books are titled Gizli İlimler Hazinesi, which means "Treasure of Hidden Sciences" in Turkish.

          -

          The books are very popular among the readers who are interested in these subjects and want to learn more about the secrets of the universe and the spiritual realm. The books contain various formulas, talismans, prayers and methods for achieving different purposes such as protection, healing, love, wealth, success and knowledge. The books also explain the meanings and benefits of the names of Allah and how to use them in different situations.

          -

          mustafa iloglu gizli ilimler hazinesi pdf download


          Downloadhttps://urlcod.com/2uKabm



          -

          The books are available in PDF format for download from various websites such as Academia.edu[^1^] and Scribd[^2^]. However, some of these websites may require registration or payment to access the full content. The books are also sold in print form in some bookstores and online platforms.

          -

          Mustafa İloğlu Gizli İlimler Hazinesi is a valuable resource for anyone who is interested in exploring the hidden sciences and mysteries of Islam and Sufism. The books are written in a simple and clear language that makes them easy to understand and follow. However, the books also warn the readers to be careful and respectful when dealing with these matters and not to misuse them for evil or selfish purposes.

          -

          - -

          Mustafa İloğlu is not to be confused with Mustafa İslamoğlu, another Turkish author and scholar who has written books on Islam, Quran, history and politics. Mustafa İslamoğlu is known for his reformist and rationalist approach to Islamic thought and his critique of traditional interpretations. He is also a popular speaker and lecturer who has given many conferences and seminars on various topics related to Islam and society.

          -

          Mustafa İloğlu, on the other hand, is more focused on the mystical and esoteric aspects of Islam and Sufism. He has a background in engineering and business, but he has also studied the hidden sciences under various masters and teachers. He claims to have inherited some of the secrets and knowledge from his ancestors who were also involved in these fields. He is also a poet and a musician who has composed songs and poems inspired by his spiritual experiences.

          -

          Mustafa İloğlu's books have been translated into several languages such as English, Kurdish, Arabic and Persian. He has also received many awards and honors for his contributions to the cultural and intellectual life of Turkey. He is currently the president of the Batı Anadolu Sanayici ve İşadamları Dernekleri Federasyonu (Western Anatolia Industrialists and Businessmen Associations Federation) and a member of the board of trustees of the İzmir Ekonomi Üniversitesi (İzmir Economy University). He is also active in various social and charitable projects.

          e753bf7129
          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/How to Download HitmanPro and Clean Your PC from Malware in Minutes.md b/spaces/tialenAdioni/chat-gpt-api/logs/How to Download HitmanPro and Clean Your PC from Malware in Minutes.md deleted file mode 100644 index fcf38f7614ca72e9c282a7d7652f12a1ccb791ef..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/How to Download HitmanPro and Clean Your PC from Malware in Minutes.md +++ /dev/null @@ -1,55 +0,0 @@ -
          -

          How to Download HitmanPro and Remove Malware from Your PC

          -

Malware is a serious threat to your computer's security and performance. It can infect your system in various ways, such as through phishing emails, malicious downloads, compromised websites, and more. Once installed, it can steal your personal information, damage your files, slow down your PC, and even hijack your browser.

          -

          download hitmanpro


          DOWNLOADhttps://urlcod.com/2uK7BE



          -

That's why you need a reliable malware removal tool like HitmanPro. HitmanPro is a powerful, lightweight program that can scan for and remove malware, viruses, trojans, worms, keyloggers, rootkits, trackers, spyware, and more from your PC. It can detect and eliminate even the most advanced and persistent threats that other antivirus programs may miss.

          -

          In this article, we will show you how to download HitmanPro and use it to clean your PC from malware.

          -

          Download HitmanPro

          -

          To download HitmanPro, follow these steps:

          -

          -
            -
          1. Go to https://www.hitmanpro.com/en-us/downloads and choose the version that suits your needs. You can download HitmanPro for a one-time scan and removal of malware, or HitmanPro.Alert for continuous protection against complex attacks and exploits.
          2. -
          3. Click on "Download" and wait for the file to be downloaded on your PC.
          4. -
          5. Open the downloaded file and click on "Next" to start the installation process.
          6. -
          7. Accept the terms and conditions and click on "Next" again.
          8. -
          9. Choose where you want to save HitmanPro on your PC and create shortcuts if you want. Click on "Next" again.
          10. -
          11. Wait for the installation to finish and click on "Finish".
          12. -
          -

          Scan and Remove Malware with HitmanPro

          -

          To scan and remove malware with HitmanPro, follow these steps:

          -
            -
          1. Launch HitmanPro from your desktop or start menu.
          2. -
          3. Click on "Next" to start a scan of your PC. HitmanPro will scan your system for any traces of malware and display the results.
          4. -
          5. If any malware is found, you will have a free 30-day license to remove it. Click on "Activate Free License" and enter your email address to register.
          6. -
          7. Click on "Next" again and then on "Delete" to remove the malware from your PC.
          8. -
          9. Restart your PC if prompted by HitmanPro.
          10. -
          -

          Congratulations! You have successfully downloaded HitmanPro and removed malware from your PC. You can now enjoy a faster and safer PC experience.

          Why Choose HitmanPro?

          -

          HitmanPro is not just another antivirus program. It is a specialized malware removal tool that can complement your existing security software and provide an extra layer of protection. Here are some of the benefits of using HitmanPro:

          -
            -
          • HitmanPro uses advanced behavioral analysis and cloud technology to detect and remove malware that other programs may miss.
          • -
• HitmanPro does not require installation and can run from a USB flash drive, a CD/DVD, or network-attached storage.
          • -
          • HitmanPro is lightweight and fast. It only takes 10MB of space and can scan your PC in minutes.
          • -
          • HitmanPro can remove malware from your PC even if it is infected by a rootkit or a bootkit.
          • -
          • HitmanPro can restore your system to a pre-infected state by removing malicious registry entries, files, and shortcuts.
          • -
          -

          How to Upgrade to HitmanPro.Alert

          -

          If you want to enjoy continuous protection against complex attacks and exploits, you can upgrade to HitmanPro.Alert. HitmanPro.Alert is a proactive security solution that adds multiple layers of security to your PC. It can:

          -
            -
          • Block ransomware, keyloggers, webcam hijackers, and other advanced threats.
          • -
          • Prevent phishing, banking, and identity theft by encrypting your keystrokes and protecting your browser.
          • -
          • Stop malicious programs from exploiting vulnerabilities in your system and applications.
          • -
          • Monitor the behavior of all processes and stop any suspicious activity.
          • -
          • Protect your data from unauthorized access by encrypting your hard drive.
          • -
          -

          To upgrade to HitmanPro.Alert, follow these steps:

          -
            -
          1. Go to https://www.hitmanpro.com/en-us/downloads and click on "Buy Now" under HitmanPro.Alert.
          2. -
          3. You will be redirected to sophos.com to complete the purchase process. Choose the number of devices and the duration of your subscription and click on "Add to Cart".
          4. -
          5. Enter your billing information and payment method and click on "Place Order".
          6. -
          7. You will receive an email with your license key and instructions on how to activate HitmanPro.Alert on your PC.
          8. -
          9. Download and install HitmanPro.Alert on your PC and enter your license key when prompted.
          10. -
          -

          You have now upgraded to HitmanPro.Alert and secured your PC from advanced threats.

          ddb901b051
          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/How to Download and Use CIMCO Edit for Free The Ultimate Guide.md b/spaces/tialenAdioni/chat-gpt-api/logs/How to Download and Use CIMCO Edit for Free The Ultimate Guide.md deleted file mode 100644 index ec76d45c5d202289f865af7d75e51cb52e0baf8f..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/How to Download and Use CIMCO Edit for Free The Ultimate Guide.md +++ /dev/null @@ -1,29 +0,0 @@ -
          -

          CIMCO Edit: The World's Most Popular CNC Program Editor

          -

CIMCO Edit is a powerful and versatile software tool for editing and simulating CNC programs. Whether you are a professional CNC programmer, a hobbyist, or a student, CIMCO Edit can help you create and modify NC code faster and more easily.

          -

          CIMCO Edit comes with many features that make it stand out from other CNC editors, such as:

          -

          cimco edit free


          Downloadhttps://urlcod.com/2uK658



          -
            -
          • NC-specific functions: syntax highlighting, code suggestions, error checking, line numbering, renumbering, character handling, XYZ range finder, tool compensation, math functions, and more.
          • -
          • NC-Assistant: an interactive interface that allows you to modify M and G codes with ease.
          • -
          • File Compare: a tool that identifies new, changed and deleted lines in different versions of NC programs.
          • -
          • Cycles and Macros: a library of predefined cycles and operations that you can insert and edit in your NC programs.
          • -
          • Tool Manager: a tool that lets you view and modify the tools in your NC programs, import tools from external systems, create your own libraries or choose from hundreds of predefined tools and holders.
          • -
          • Backplot and Solid Simulation: a graphical representation of your NC program that shows the toolpath, the workpiece, the tools and the holders in 2D or 3D.
          • -
          • Add-ons: optional modules that extend the functionality of CIMCO Edit with machine simulation, program management, 2D CAD/CAM, and more.
          • -
          -

          CIMCO Edit is fully integrated with the CIMCO suite of software products and can be customized to work with external systems. It supports all major CNC machine types and formats and can communicate with them via RS-232 or FTP.

          -

          If you want to try CIMCO Edit for yourself, you can download it for free from the CIMCO website. You can evaluate it for 30 days without any limitations. You can also access free online courses that teach you how to use CIMCO Edit for milling and turning, trigonometry, NC programming basics and more.

          -

          CIMCO Edit is the editor-of-choice for thousands of CNC programmers worldwide who demand a reliable, full-featured editing, simulation and communication tool. Don't miss this opportunity to join them and improve your CNC programming skills with CIMCO Edit.

          CIMCO Edit Add-ons: Optional Modules for Enhanced Functionality

          -

          CIMCO Edit is not only a great CNC editor, but also a platform that can be extended with optional modules that add more features and capabilities to your CNC programming workflow. These add-ons are fully integrated with CIMCO Edit and can be purchased separately or as part of a bundle.

          -

          Some of the most popular add-ons for CIMCO Edit are:

          -
            -
          • CIMCO Machine Simulation: a module that allows you to prove-out your NC code on a 3D model of your CNC machine and see the exact movement of components such as heads, spindles, fixtures, workpieces, and even peripheral devices. You can detect collisions, over-travel errors, and optimize your toolpaths for better performance and safety.
          • -
          • CIMCO NC-Base: a module that allows you to manage your CNC programs and related production documents in a centralized database. You can organize your files by folders, projects, customers, machines, or any other criteria. You can also track revisions, access rights, backups, and transfers to and from machines.
          • -
          • CIMCO CNC-Calc: a module that enables novice programmers to draw 2D geometrical contours, lay out toolpaths for mill and lathe, and simulate the resulting NC program. You can also import DXF files and convert them to NC code with ease.
          • -
          • CIMCO Mazatrol Viewer: a module that allows you to view Mazatrol files at your PC without the need for a Mazak machine. You can edit the parameters, check the toolpaths, and print the programs. This module is recommended by Mazak and supports most Mazatrol controls.
          • -
          -

          These are just some of the add-ons available for CIMCO Edit. You can find more information about them on the CIMCO website. You can also download them for free and evaluate them for 30 days along with CIMCO Edit.

          -

          CIMCO Edit add-ons are designed to enhance your CNC programming experience and help you achieve better results with less time and effort. Whether you need more simulation, management, or CAD/CAM capabilities, there is an add-on for you.

          ddb901b051
          -
          -
          \ No newline at end of file diff --git a/spaces/timqian/like-history/public/index.html b/spaces/timqian/like-history/public/index.html deleted file mode 100644 index f5e3b00c642f9de9317369f11aecf30c4c9dfd8d..0000000000000000000000000000000000000000 --- a/spaces/timqian/like-history/public/index.html +++ /dev/null @@ -1,43 +0,0 @@ - - - - - - - - - - - - - Like History - - - -
          - - - diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Come Scaricare Gratis Da Docsity 2021.md b/spaces/tioseFevbu/cartoon-converter/scripts/Come Scaricare Gratis Da Docsity 2021.md deleted file mode 100644 index 6b46625f4206dbfc8f9ea1a2a5eeb0f6bc6a851a..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Come Scaricare Gratis Da Docsity 2021.md +++ /dev/null @@ -1,37 +0,0 @@ - -I can try to generate a title and an article with SEO optimization and HTML formatting for the keyword "come scaricare gratis da docsity". Here is what I came up with: - -

          Come scaricare gratis da docsity: i migliori metodi per ottenere documenti di studio

          - -

          Docsity è una piattaforma online che permette di condividere e scaricare documenti di studio di ogni genere, come appunti, dispense, riassunti, esercizi e molto altro. Si tratta di un servizio molto utile per gli studenti universitari e non solo, che possono trovare materiale di qualità e aggiornato per preparare i loro esami.

          - -

          Tuttavia, non tutti i documenti presenti su Docsity sono gratuiti. Alcuni richiedono infatti dei punti, che si possono ottenere in vari modi: sottoscrivendo un abbonamento Premium, condividendo i propri documenti, recensendo quelli scaricati, rispondendo alle domande degli altri utenti e così via. Ma esistono anche dei metodi alternativi per scaricare gratis da Docsity senza raccogliere punti. Vediamo quali sono.

          -

          come scaricare gratis da docsity


          Download ———>>> https://urlcod.com/2uHyHI



          - -

          Come scaricare gratis da Docsity con la sorgente della pagina

          - -

          Uno dei metodi più semplici e veloci per scaricare gratis da Docsity senza punti è quello di utilizzare la sorgente della pagina web del documento che si vuole ottenere. Si tratta di un'operazione del tutto legale, che consiste nell'analizzare il codice HTML della pagina e convertirlo in testo tramite un editor online. Ecco come fare:

          - -
            -
          1. Accedi a Docsity e individua il documento che vuoi scaricare.
          2. -
          3. Clicca con il tasto destro del mouse su una parte vuota della pagina e seleziona la voce "Visualizza sorgente pagina" dal menu che si apre.
          4. -
          5. Seleziona tutto il contenuto della pagina che si apre utilizzando il comando CTRL+A (su Windows) o CMD+A (su Mac).
          6. -
          7. Copia il contenuto selezionato utilizzando il comando CTRL+C (su Windows) o CMD+C (su Mac).
          8. -
          9. Collegati a questo sito: https://html-online.com/editor/.
          10. -
          11. Assicurati che il riquadro a sinistra sia completamente vuoto, altrimenti clicca sul pulsante "Clean" in alto.
          12. -
          13. Incolla il contenuto copiato nel riquadro a sinistra utilizzando il comando CTRL+V (su Windows) o CMD+V (su Mac).
          14. -
          15. Sul riquadro a destra ti apparirà il documento che desideravi scaricare. Puoi copiarlo e incollarlo su un file di testo o stamparlo direttamente.
          16. -
          - -

          Come scaricare gratis da Docsity con un generatore di link

          - -

          Un altro metodo per scaricare gratis da Docsity senza punti è quello di usare un generatore di link, ovvero un sito web che permette di ottenere il link diretto al download del documento desiderato. Questo metodo è molto semplice ma non sempre funzionante, in quanto i link possono scadere o essere rimossi. Ecco come fare:

          - -
            -
          1. Accedi a Docsity e individua il documento che vuoi scaricare.
          2. -
          3. Copia l'indirizzo web del documento dalla barra degli indirizzi del browser.
          4. -
          5. Collegati a questo sito: https://www.howtechismade.com/guide/come-scaricare-gratis-da-docsity-anche-senza-punti/.
          6. -
          7. Incolla l'indirizzo web copiato nel campo di testo presente nella pagina.
          8. -
          9. Clicca sul pulsante azzurro "Scarica documento

            7196e7f11a
            -
            -
            \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Descarga E Instala Eset Nod 32 Antivirus 9 Para 32 Y 64 Bits Licencias De Por Vida 2019.md b/spaces/tioseFevbu/cartoon-converter/scripts/Descarga E Instala Eset Nod 32 Antivirus 9 Para 32 Y 64 Bits Licencias De Por Vida 2019.md deleted file mode 100644 index fc0aa9f0f8c89d7674864423666e219aad35f116..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Descarga E Instala Eset Nod 32 Antivirus 9 Para 32 Y 64 Bits Licencias De Por Vida 2019.md +++ /dev/null @@ -1,32 +0,0 @@ -
            -

            ¿Cómo descargar e instalar el antivirus Eset Nod 32 versión 9 con licencias de por vida?

            -

            Si estás buscando un antivirus eficaz, rápido y fácil de usar, quizás te interese conocer el Eset Nod 32, un software antivirus desarrollado por la empresa ESE, que tiene versiones para diferentes sistemas operativos y dispositivos. En este artículo, te explicamos cómo descargar e instalar el Eset Nod 32 Antivirus 9 para 32 y 64 bits, y cómo activarlo con licencias de por vida.

            -

            ¿Qué es el Eset Nod 32 Antivirus?

            -

            El Eset Nod 32 Antivirus es un programa que protege tu ordenador de todo tipo de amenazas informáticas, como virus, troyanos, gusanos, spyware y rootkits. Este antivirus utiliza un motor unificado, que permite la detección en tiempo real de nuevas amenazas o virus nuevos aún no catalogados. Gracias a esto, el Eset Nod 32 Antivirus es capaz de analizar el comportamiento sospechoso propio de malware y detener la infección antes de que afecte al ordenador del usuario.

            -

            Descarga E Instala Eset Nod 32 Antivirus 9 Para 32 Y 64 Bits || Licencias De Por Vida 2019


            Download Zip ✶✶✶ https://urlcod.com/2uHxdc



            -

            El Eset Nod 32 Antivirus es compatible con los sistemas operativos Windows, Linux y Mac Os. Además, tiene versiones para estación de trabajo, móviles, servidores gateway y correo electrónico, entre otros. Este antivirus se destaca por su velocidad, su bajo consumo de recursos y su facilidad de uso.

            -

            ¿Cómo descargar e instalar el Eset Nod 32 Antivirus 9?

            -

            Para descargar e instalar el Eset Nod 32 Antivirus 9, debes seguir estos pasos:

            -
              -
            1. Entra en el sitio web oficial de ESE: https://www.eset.com/es/
            2. -
            3. Haz clic en la pestaña "Productos" y selecciona "ESET NOD32 Antivirus".
            4. -
            5. Elige la opción "Descargar" y selecciona la versión que corresponda a tu sistema operativo (32 o 64 bits).
            6. -
            7. Guarda el archivo de instalación en tu ordenador y ejecútalo.
            8. -
            9. Sigue las instrucciones del asistente de instalación y acepta los términos y condiciones.
            10. -
            11. Cuando termine la instalación, reinicia tu ordenador.
            12. -
            -

            ¿Cómo activar el Eset Nod 32 Antivirus 9 con licencias de por vida?

            -

            Para activar el Eset Nod 32 Antivirus 9 con licencias de por vida, debes seguir estos pasos:

            -
              -
            1. Descarga el activador Tnod desde uno de estos enlaces:
              -- Para Windows 7/8: http://twineer.com/1VHW
              -- Para Windows 10: http://twineer.com/1VL3
            2. -
            3. Descomprime el archivo zip y ejecuta el programa Tnod.
            4. -
            5. Haz clic en el botón "Configurar" y selecciona la opción "Insertar licencia automáticamente".
            6. -
            7. Haz clic en el botón "Aceptar" y luego en el botón "Actualizar".
            8. -
            9. Espera a que el programa busque e inserte una licencia válida para tu antivirus.
            10. -
            11. Cuando termine el proceso, verás un mensaje que dice "Licencia insertada correctamente".
            12. -
            13. Listo. Ya tienes tu Eset Nod 32 Ant

              -

              e93f5a0c3f
              -
              -
              \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/DynOne V2.4 Incl Patched And Keygen BETTER [WiN OSX]-R2R.md b/spaces/tioseFevbu/cartoon-converter/scripts/DynOne V2.4 Incl Patched And Keygen BETTER [WiN OSX]-R2R.md deleted file mode 100644 index 47560d160b9fc3aca7be2ae9357590720fce614e..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/DynOne V2.4 Incl Patched And Keygen BETTER [WiN OSX]-R2R.md +++ /dev/null @@ -1,30 +0,0 @@ - -

              DynOne v2.4: A Powerful Multiband Compressor for Mixing and Mastering

              -

              If you are looking for a plugin that can help you achieve unrivalled dynamics control with ease and flexibility, you might want to check out DynOne v2.4 by Leapwing Audio. This plugin is a smart parallel multiband compressor that can enhance your sound with quality compression across five bands.

              -

              In this article, we will explore what DynOne v2.4 is, what it can do for you, how to use it, and some tips and tricks for getting the most out of it. Whether you are mixing or mastering, DynOne v2.4 can help you take your audio to the next level.

              -

              DynOne v2.4 Incl Patched and Keygen [WiN OSX]-R2R


              Download File »»» https://urlcod.com/2uHwIW



              -

              What is DynOne v2.4 and what are its features?

              -

              DynOne v2.4 is a plugin that was designed from the ground up to put five bands of quality parallel compression at your fingertips. Parallel compression is a technique that blends an uncompressed signal with a compressed one, resulting in a more balanced and natural sound.
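If you are curious what the parallel-compression idea looks like outside the plugin, here is a minimal NumPy sketch of the dry/wet blend it is built on. This is only an illustration of the general technique, not DynOne's own algorithm; the function names and the threshold, ratio and mix values are invented for the example.

```python
import numpy as np

def simple_compressor(x, threshold_db=-20.0, ratio=4.0):
    """Very basic static gain computer: attenuate only what exceeds the threshold."""
    eps = 1e-12
    level_db = 20.0 * np.log10(np.abs(x) + eps)
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)
    return x * (10.0 ** (gain_db / 20.0))

def parallel_compress(x, mix=0.5, **comp_kwargs):
    """Blend the untouched (dry) signal with a compressed (wet) copy."""
    wet = simple_compressor(x, **comp_kwargs)
    return (1.0 - mix) * x + mix * wet

# Example: a decaying 440 Hz tone. Heavy compression on the wet path,
# blended 50/50 with the dry path, lifts the quiet tail while the
# attack of the note keeps its original shape.
sr = 44100
t = np.linspace(0.0, 1.0, sr, endpoint=False)
dry = np.exp(-3.0 * t) * np.sin(2.0 * np.pi * 440.0 * t)
out = parallel_compress(dry, mix=0.5, threshold_db=-24.0, ratio=8.0)
```

A multiband processor such as DynOne effectively repeats this blend per frequency band, with far more sophisticated detectors and filters.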

              -

              DynOne v2.4 allows you to control the dynamics of each frequency band independently, using transparent and adjustable filters that preserve the nuances of your sound. You can also adjust the attack and release settings for each band, which are intelligent and responsive within pre-set values.

              -

              Some of the features that make DynOne v2.4 stand out from other multiband compressors are:

              -
                -
              • A unique weighting algorithm that lets you control the sidechain signal of each band
              • -
              • A center-side mode that lets you process the mid and side signals separately
              • -
              • A beautiful retina design with most common controls on one screen
              • -
              • A wide compatibility with Mac OSX (10.13 +) (M1-Native), Windows 8, 10 (64-bit only) in AAX-Native, VST, VST3 and AU formats
              • -
              • A dedicated support team that listens to the community and provides regular updates
              • -
              -

              DynOne v2.4 has received great feedback from some of the industry's top producers and engineers, such as Bob Ludwig, Joe Chiccarelli, Dave Pensado, Warren Huart, Ricky Damian, Marco Antonio Spaventi, Maria Elisa Ayerbe, and many more.

              -

              To get DynOne v2.4, you can visit Leapwing Audio's website and purchase it for €199 or download a free trial for 30 days. You will need an iLok account (free) to activate it.

              -

              How to use DynOne v2.4

              -

Using DynOne v2.4 is very simple and intuitive. Here are the main steps to follow:

How to set up the bands and crossovers

To set up the bands, you can use the sliders at the bottom of the interface to adjust the frequency range of each band. You can also use the buttons on the left to solo or mute each band, or use the knobs on the right to adjust the gain or bypass each band.

To set up the crossovers, you can use the buttons on the top to choose between linear phase or minimum phase filters, which affect the phase response and latency of the plugin. You can also use the buttons on the bottom to choose between 6 dB/octave or 12 dB/octave slopes, which affect the steepness of the filters.

How to adjust the compression settings for each band

To adjust the compression settings, you can use the knobs on the center of the interface to control the threshold, ratio, attack, and release of each band. You can also use the weighting knob to control how much of the sidechain signal is fed into each band's compressor. This allows you to fine-tune how much compression is applied to each frequency range.

To monitor the compression settings, you can use the meters on the center of the interface to see the input, output, and gain reduction levels of each band. You can also use the graph on the top to see how each band's compressor is reacting to the input signal. You can switch between RMS and peak modes by clicking on the graph.

How to use the sidechain, center-side mode, and output controls

To use the sidechain, you can click on the sidechain button on the top right corner of the interface to open a new window where you can adjust the sidechain settings. You can choose between internal or external sidechain sources, apply a high-pass or low-pass filter to the sidechain signal, or invert its polarity. You can also solo or mute the sidechain signal by clicking on the buttons below.

To use the center-side mode, you can click on the center-side button on the top right corner of the interface to switch between stereo and center-side modes. In center-side mode, you can process the mid and side signals separately using different compression settings for each band. This allows you to enhance or reduce the stereo width and depth of your sound.

To use the output controls, you can use the knobs on the bottom right corner of the interface to control the output gain, dry/wet mix, and auto gain compensation. You can also use the meters on the bottom right corner of the interface to see the input and output levels of the plugin. You can switch between VU and LUFS modes by clicking on the meters.

              Tips and tricks for using DynOne v2.4

              -

              DynOne v2.4 is a versatile plugin that can be used for various purposes and genres. Here are some tips and tricks for using DynOne v2.4 effectively:

How to use DynOne v2.4 for sculpting sound, tightening up busses, enhancing vocals, and manipulating full mixes

To use DynOne v2.4 for sculpting sound, you can use the weighting knob to emphasize or de-emphasize certain frequency ranges in your sound. For example, you can increase the weighting of the low band to add more punch and weight to your drums, or decrease the weighting of the high band to reduce harshness and sibilance in your vocals.

To use DynOne v2.4 for tightening up busses, you can use the center-side mode to glue together the elements in your bus and create a more coherent and balanced sound. For example, you can use DynOne v2.4 on your drum bus to enhance the transients and stereo image of your drums, or on your synth bus to create more depth and width in your synths.

To use DynOne v2.4 for enhancing vocals, you can use the compression settings to smooth out the dynamics and add more presence and clarity to your vocals. For example, you can use a low threshold and a high ratio to compress the vocals evenly, or a high threshold and a low ratio to compress only the peaks and add more punch and energy to your vocals.

To use DynOne v2.4 for manipulating full mixes, you can use the dry/wet mix knob to blend in some parallel compression and add more loudness and density to your mix. For example, you can use a high dry/wet mix ratio to create a more aggressive and punchy mix, or a low dry/wet mix ratio to create a more subtle and natural mix.

How to use DynOne v2.4 with other plugins from Leapwing Audio and R2R

To use DynOne v2.4 with other plugins from Leapwing Audio, you can take advantage of their synergy and compatibility to create a powerful audio processing chain. For example, you can use StageOne to add more depth and width to your sound before or after using DynOne v2.4, or use RootOne to add more subharmonic content and low-end punch to your sound after using DynOne v2.4.

To use DynOne v2.4 with other plugins from R2R, you can benefit from their quality and variety to enhance your sound further. For example, you can use FabFilter Pro-Q 3 to apply some surgical EQ before or after using DynOne v2.4, or use Soundtoys Decapitator to add some saturation and distortion after using DynOne v2.4.

How to use the presets and customize them for your needs

To use the presets, you can click on the preset button on the top left corner of the interface to open a new window where you can browse through different categories and styles of presets. You can also save your own presets by clicking on the save button next to the preset button.

To customize the presets for your needs, you can tweak any of the parameters as you wish, or use the randomize button on the top left corner of the interface to generate some new settings based on the current preset. You can also compare different settings by using the A/B button on the top left corner of the interface.

              Conclusion

              -

              DynOne v2.4 is a powerful multiband compressor that can help you achieve unrivalled dynamics control with ease and flexibility. It offers five bands of quality parallel compression with transparent and adjustable filters, intelligent and responsive attack and release settings, a unique weighting algorithm, a center-side mode, a beautiful retina design, a wide compatibility, and a dedicated support team.

              -

              -

              DynOne v2.4 can be used for various purposes and genres, such as sculpting sound, tightening up busses, enhancing vocals, and manipulating full mixes. It can also be used with other plugins from Leapwing Audio and R2R to create a powerful audio processing chain.

              -

DynOne v2.4 has received great feedback from some of the industry's top producers and engineers, who have used it across many songs and genres. For example, you can hear it on the electronic music productions of Marco Antonio Spaventi, who has a tutorial video on how to use DynOne v2.4 for mastering electronic music, and on the rock and pop mixes of Dave Pensado, who explores DynOne v2.4 in an episode of his show Pensado's Place. We hope this article has given you a good overview of what DynOne v2.4 is and how to use it effectively. If you are interested in trying it out for yourself, you can download a free trial from Leapwing Audio's website and see how it can improve your sound.

              FAQs

              -

              Here are some frequently asked questions about DynOne v2.4:

What is R2R and how does it relate to DynOne v2.4?

R2R is a group of hackers who crack and release software without authorization from the developers. They have released a cracked version of DynOne v2.4 that includes a patch and a keygen to bypass the iLok protection. However, we strongly advise against using the cracked version, as it may contain malware, viruses, or bugs that can harm your computer or your audio files. It is also illegal and unethical to use the cracked version, as it violates the intellectual property rights of Leapwing Audio and deprives them of their deserved income.

What is the difference between DynOne v2.4 and other multiband compressors?

DynOne v2.4 is different from other multiband compressors in several ways. First, it uses parallel compression instead of serial compression, which means that it blends the uncompressed signal with the compressed one, resulting in a more natural and balanced sound. Second, it uses transparent and adjustable filters that preserve the nuances of your sound, instead of using fixed or colored filters that may alter your sound. Third, it uses intelligent and responsive attack and release settings that work within pre-set values, instead of using manual or fixed settings that may not suit your sound. Fourth, it uses a unique weighting algorithm that lets you control the sidechain signal of each band, instead of using a fixed or global sidechain signal that may not be optimal for your sound. Fifth, it uses a center-side mode that lets you process the mid and side signals separately, instead of using a stereo mode that may not give you enough control over your stereo field.

How can I get support or feedback for using DynOne v2.4?

If you need support or feedback for using DynOne v2.4, you can contact Leapwing Audio through their website or their social media channels. You can also join their online community on Facebook or Discord, where you can interact with other users and share your experiences and tips. You can also check out their blog or their YouTube channel, where they post regular updates, tutorials, and reviews on their products.

What are some alternative plugins to DynOne v2.4?

Some alternative plugins to DynOne v2.4 are FabFilter Pro-MB, iZotope Ozone 9, Waves C6, Slate Digital FG-X, and Sonnox Oxford Dynamics. These plugins are also multiband compressors that offer different features and options for controlling your dynamics across multiple frequency bands. However, they may not have the same quality, transparency, flexibility, or ease of use as DynOne v2.4.

How can I learn more about multiband compression and audio production?

If you want to learn more about multiband compression and audio production, you can check out some online courses, books, blogs, podcasts, or videos that cover these topics in depth. Some examples are Mixing with Multiband Compression by Matthew Weiss, Mastering Audio by Bob Katz, The Pro Audio Files, The Mastering Show, and Produce Like A Pro. These resources can help you understand the theory and practice of multiband compression and audio production better.

              b2dd77e56b
              -
              -
              \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Florabella Collection ? Color And Light Photoshop Actions And Overlays NEW!.md b/spaces/tioseFevbu/cartoon-converter/scripts/Florabella Collection ? Color And Light Photoshop Actions And Overlays NEW!.md deleted file mode 100644 index 4b850b8afb312b4eece25deb86ace3946a880294..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Florabella Collection ? Color And Light Photoshop Actions And Overlays NEW!.md +++ /dev/null @@ -1,17 +0,0 @@ - -

              How to Enhance Your Photos with Florabella Collection – Color and Light Photoshop Actions and Overlays

              -

              If you are looking for a way to add some magic and sparkle to your photos, you might want to check out Florabella Collection – Color and Light Photoshop Actions and Overlays. This is a set of Photoshop tools that can help you create beautiful effects in both color and black and white, such as sun flares, light leaks, hazy light, retro sun, warm vintage, and more.

              -

              Florabella Collection – Color and Light Photoshop Actions and Overlays


              Download Zip 🗸 https://urlcod.com/2uHyvk



              -

              Florabella Collection – Color and Light Photoshop Actions and Overlays is compatible with English versions of Photoshop CS2, CS3, CS4, CS5, CS6, CC or Photoshop Elements 6-12[^1^]. It includes base actions, add-on actions, haze actions, enhancers, brushes, movable sun glows, movable sun streams, light leaks, hazy light infusions, and 20 new overlays[^1^]. You can also access exclusive editing video tutorials and a product guide/look book to learn how to use the actions and overlays effectively.

              -

              With Florabella Collection – Color and Light Photoshop Actions and Overlays, you can quickly transform your photos into works of art with just a few clicks. You can adjust the intensity and opacity of each action and overlay to suit your taste and style. You can also mix and match different effects to create your own unique look. Whether you want to achieve a crisp, clean, colorful look or an earthy, organic, natural look[^2^], you can find the right tools in this set.

              -

              Florabella Collection – Color and Light Photoshop Actions and Overlays is perfect for photographers who want to add some flair and drama to their portraits, landscapes, florals, or any other type of photos. You can use them for albums, cards, digital scrapbooking, or online display. You can also combine them with other Florabella products, such as textures, frames, papers, or templates[^2^], to create even more stunning results.

              -

              -

              If you are interested in purchasing Florabella Collection – Color and Light Photoshop Actions and Overlays, you can visit their website at http://www.florabellacollection.com/florabella-color-and-light-actions-and-overlays.html. The intro price is $99 for a limited time only[^1^], so don't miss this opportunity to get your hands on this amazing set of Photoshop tools.

              Here are some examples of photos edited with Florabella Collection – Color and Light Photoshop Actions and Overlays. You can see how the actions and overlays can enhance the mood, atmosphere, and quality of the photos. You can also see how different combinations of effects can create different styles and tones.

              -

              Photo edited with Florabella Collection – Color and Light Photoshop Actions and Overlays

              -

              This photo was edited with the Summer Love base action, the Retro Sun add-on action, the Hazy Sunflare haze action, the Quick Fill Flash enhancer, the Clarity Brush, the Backlit Skin Brightener brush, the Backlit Contrast Boost brush, the Hazy Light Brush, the Hazy Light Blur Brush, the Rich Color Pop Brush, the Diffused Light Glow movable sun glow, the Pearl Sun Stream movable sun stream, the Spring light leak, and the Soft Light hazy light infusion. You can see how the photo has a warm, sunny, and dreamy feel to it.

              -

              Photo edited with Florabella Collection – Color and Light Photoshop Actions and Overlays

              -

              This photo was edited with the Light Bright Pop base action, the Hazy Blue add-on action, the Top Light Haze haze action, the De-haze Boost enhancer, the Clarity Brush, the Backlit Skin Brightener brush, the Backlit Contrast Boost brush, the Hazy Light Brush, the Hazy Light Blur Brush, the Rich Color Pop Brush, the Pearl Sun Glow movable sun glow, the Pearl Sun Stream movable sun stream, the Winter light leak, and the Ruby hazy light infusion. You can see how the photo has a cool, crisp, and vibrant feel to it.

              -

              Photo edited with Florabella Collection – Color and Light Photoshop Actions and Overlays

              -

              This photo was edited with the Light Bright Matte base action, the Warm Vintage add-on action, the Center Light Haze haze action, the Brighten & Tone Yellows enhancer, the Clarity Brush, the Backlit Skin Brightener brush, the Backlit Contrast Boost brush, the Hazy Light Brush, the Hazy Light Blur Brush, the Rich Color Pop Brush, the Soft Warm Sun Glow movable sun glow, the Warm Sun Stream movable sun stream, the Fall light leak, and the Sorbet hazy light infusion. You can see how the photo has a soft, romantic, and nostalgic feel to it.

              7b8c122e87
              -
              -
              \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/utils/_log.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/utils/_log.py deleted file mode 100644 index 92c4c6a193873ce09629f6cfaa2dabc4f14ecb03..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/utils/_log.py +++ /dev/null @@ -1,38 +0,0 @@ -"""Customize logging - -Defines custom logger class for the `logger.verbose(...)` method. - -init_logging() must be called before any other modules that call logging.getLogger. -""" - -import logging -from typing import Any, cast - -# custom log level for `--verbose` output -# between DEBUG and INFO -VERBOSE = 15 - - -class VerboseLogger(logging.Logger): - """Custom Logger, defining a verbose log-level - - VERBOSE is between INFO and DEBUG. - """ - - def verbose(self, msg: str, *args: Any, **kwargs: Any) -> None: - return self.log(VERBOSE, msg, *args, **kwargs) - - -def getLogger(name: str) -> VerboseLogger: - """logging.getLogger, but ensures our VerboseLogger class is returned""" - return cast(VerboseLogger, logging.getLogger(name)) - - -def init_logging() -> None: - """Register our VerboseLogger and VERBOSE log level. - - Should be called before any calls to getLogger(), - i.e. in pip._internal.__init__ - """ - logging.setLoggerClass(VerboseLogger) - logging.addLevelName(VERBOSE, "VERBOSE") diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/_base_/models/faster_rcnn_r50_caffe_c4.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/_base_/models/faster_rcnn_r50_caffe_c4.py deleted file mode 100644 index 6e18f71b31b9fb85a6ca7a6b05ff4d2313951750..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/_base_/models/faster_rcnn_r50_caffe_c4.py +++ /dev/null @@ -1,112 +0,0 @@ -# model settings -norm_cfg = dict(type='BN', requires_grad=False) -model = dict( - type='FasterRCNN', - pretrained='open-mmlab://detectron2/resnet50_caffe', - backbone=dict( - type='ResNet', - depth=50, - num_stages=3, - strides=(1, 2, 2), - dilations=(1, 1, 1), - out_indices=(2, ), - frozen_stages=1, - norm_cfg=norm_cfg, - norm_eval=True, - style='caffe'), - rpn_head=dict( - type='RPNHead', - in_channels=1024, - feat_channels=1024, - anchor_generator=dict( - type='AnchorGenerator', - scales=[2, 4, 8, 16, 32], - ratios=[0.5, 1.0, 2.0], - strides=[16]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - roi_head=dict( - type='StandardRoIHead', - shared_head=dict( - type='ResLayer', - depth=50, - stage=3, - stride=2, - dilation=1, - style='caffe', - norm_cfg=norm_cfg, - norm_eval=True), - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), - out_channels=1024, - featmap_strides=[16]), - bbox_head=dict( - type='BBoxHead', - with_avg_pool=True, - roi_feat_size=7, - in_channels=2048, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0))), - # model training 
and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=0, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=12000, - max_per_img=2000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - pos_weight=-1, - debug=False)), - test_cfg=dict( - rpn=dict( - nms_pre=6000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100))) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/cascade_rcnn/cascade_rcnn_r50_caffe_fpn_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/cascade_rcnn/cascade_rcnn_r50_caffe_fpn_1x_coco.py deleted file mode 100644 index c576c7496928eed58400ba11d71af8f4edc1c4b5..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/cascade_rcnn/cascade_rcnn_r50_caffe_fpn_1x_coco.py +++ /dev/null @@ -1,38 +0,0 @@ -_base_ = './cascade_rcnn_r50_fpn_1x_coco.py' - -model = dict( - pretrained='open-mmlab://detectron2/resnet50_caffe', - backbone=dict(norm_cfg=dict(requires_grad=False), style='caffe')) - -# use caffe img_norm -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/retinanet/retinanet_r50_fpn_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/retinanet/retinanet_r50_fpn_1x_coco.py deleted file mode 100644 index 04bd696b9589e37ad34c9fdd035b97e271d3b214..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/retinanet/retinanet_r50_fpn_1x_coco.py +++ /dev/null @@ -1,7 +0,0 @@ -_base_ = [ - '../_base_/models/retinanet_r50_fpn.py', - '../_base_/datasets/coco_detection.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/backbones/resnet.py 
b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/backbones/resnet.py deleted file mode 100644 index 27d7b2c9400afe81d716c5d7fee74fe60191d55a..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/backbones/resnet.py +++ /dev/null @@ -1,671 +0,0 @@ -import warnings - -import torch.nn as nn -import torch.utils.checkpoint as cp -from mmcv.cnn import build_conv_layer, build_norm_layer, build_plugin_layer -from mmcv.runner import BaseModule -from torch.nn.modules.batchnorm import _BatchNorm - -from ..builder import BACKBONES -from ..utils import ResLayer - - -class BasicBlock(BaseModule): - expansion = 1 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None, - init_cfg=None): - super(BasicBlock, self).__init__(init_cfg) - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - - self.norm1_name, norm1 = build_norm_layer(norm_cfg, planes, postfix=1) - self.norm2_name, norm2 = build_norm_layer(norm_cfg, planes, postfix=2) - - self.conv1 = build_conv_layer( - conv_cfg, - inplanes, - planes, - 3, - stride=stride, - padding=dilation, - dilation=dilation, - bias=False) - self.add_module(self.norm1_name, norm1) - self.conv2 = build_conv_layer( - conv_cfg, planes, planes, 3, padding=1, bias=False) - self.add_module(self.norm2_name, norm2) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - self.with_cp = with_cp - - @property - def norm1(self): - """nn.Module: normalization layer after the first convolution layer""" - return getattr(self, self.norm1_name) - - @property - def norm2(self): - """nn.Module: normalization layer after the second convolution layer""" - return getattr(self, self.norm2_name) - - def forward(self, x): - """Forward function.""" - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.norm2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -class Bottleneck(BaseModule): - expansion = 4 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None, - init_cfg=None): - """Bottleneck block for ResNet. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if - it is "caffe", the stride-two layer is the first 1x1 conv layer. 
- """ - super(Bottleneck, self).__init__(init_cfg) - assert style in ['pytorch', 'caffe'] - assert dcn is None or isinstance(dcn, dict) - assert plugins is None or isinstance(plugins, list) - if plugins is not None: - allowed_position = ['after_conv1', 'after_conv2', 'after_conv3'] - assert all(p['position'] in allowed_position for p in plugins) - - self.inplanes = inplanes - self.planes = planes - self.stride = stride - self.dilation = dilation - self.style = style - self.with_cp = with_cp - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.dcn = dcn - self.with_dcn = dcn is not None - self.plugins = plugins - self.with_plugins = plugins is not None - - if self.with_plugins: - # collect plugins for conv1/conv2/conv3 - self.after_conv1_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv1' - ] - self.after_conv2_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv2' - ] - self.after_conv3_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv3' - ] - - if self.style == 'pytorch': - self.conv1_stride = 1 - self.conv2_stride = stride - else: - self.conv1_stride = stride - self.conv2_stride = 1 - - self.norm1_name, norm1 = build_norm_layer(norm_cfg, planes, postfix=1) - self.norm2_name, norm2 = build_norm_layer(norm_cfg, planes, postfix=2) - self.norm3_name, norm3 = build_norm_layer( - norm_cfg, planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - conv_cfg, - inplanes, - planes, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - fallback_on_stride = False - if self.with_dcn: - fallback_on_stride = dcn.pop('fallback_on_stride', False) - if not self.with_dcn or fallback_on_stride: - self.conv2 = build_conv_layer( - conv_cfg, - planes, - planes, - kernel_size=3, - stride=self.conv2_stride, - padding=dilation, - dilation=dilation, - bias=False) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - self.conv2 = build_conv_layer( - dcn, - planes, - planes, - kernel_size=3, - stride=self.conv2_stride, - padding=dilation, - dilation=dilation, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.conv3 = build_conv_layer( - conv_cfg, - planes, - planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - - if self.with_plugins: - self.after_conv1_plugin_names = self.make_block_plugins( - planes, self.after_conv1_plugins) - self.after_conv2_plugin_names = self.make_block_plugins( - planes, self.after_conv2_plugins) - self.after_conv3_plugin_names = self.make_block_plugins( - planes * self.expansion, self.after_conv3_plugins) - - def make_block_plugins(self, in_channels, plugins): - """make plugins for block. - - Args: - in_channels (int): Input channels of plugin. - plugins (list[dict]): List of plugins cfg to build. - - Returns: - list[str]: List of the names of plugin. 
- """ - assert isinstance(plugins, list) - plugin_names = [] - for plugin in plugins: - plugin = plugin.copy() - name, layer = build_plugin_layer( - plugin, - in_channels=in_channels, - postfix=plugin.pop('postfix', '')) - assert not hasattr(self, name), f'duplicate plugin {name}' - self.add_module(name, layer) - plugin_names.append(name) - return plugin_names - - def forward_plugin(self, x, plugin_names): - out = x - for name in plugin_names: - out = getattr(self, name)(x) - return out - - @property - def norm1(self): - """nn.Module: normalization layer after the first convolution layer""" - return getattr(self, self.norm1_name) - - @property - def norm2(self): - """nn.Module: normalization layer after the second convolution layer""" - return getattr(self, self.norm2_name) - - @property - def norm3(self): - """nn.Module: normalization layer after the third convolution layer""" - return getattr(self, self.norm3_name) - - def forward(self, x): - """Forward function.""" - - def _inner_forward(x): - identity = x - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv1_plugin_names) - - out = self.conv2(out) - out = self.norm2(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv2_plugin_names) - - out = self.conv3(out) - out = self.norm3(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv3_plugin_names) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -@BACKBONES.register_module() -class ResNet(BaseModule): - """ResNet backbone. - - Args: - depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. - stem_channels (int | None): Number of stem channels. If not specified, - it will be the same as `base_channels`. Default: None. - base_channels (int): Number of base channels of res layer. Default: 64. - in_channels (int): Number of input image channels. Default: 3. - num_stages (int): Resnet stages. Default: 4. - strides (Sequence[int]): Strides of the first block of each stage. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - deep_stem (bool): Replace 7x7 conv in input stem with 3 3x3 conv - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottleneck. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - norm_cfg (dict): Dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - plugins (list[dict]): List of plugins for stages, each dict contains: - - - cfg (dict, required): Cfg dict to build plugin. - - position (str, required): Position inside block to insert - plugin, options are 'after_conv1', 'after_conv2', 'after_conv3'. - - stages (tuple[bool], optional): Stages to apply plugin, length - should be same as 'num_stages'. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. 
- zero_init_residual (bool): Whether to use zero init for last norm layer - in resblocks to let them behave as identity. - pretrained (str, optional): model pretrained path. Default: None - init_cfg (dict or list[dict], optional): Initialization config dict. - Default: None - - Example: - >>> from mmdet.models import ResNet - >>> import torch - >>> self = ResNet(depth=18) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - (1, 64, 8, 8) - (1, 128, 4, 4) - (1, 256, 2, 2) - (1, 512, 1, 1) - """ - - arch_settings = { - 18: (BasicBlock, (2, 2, 2, 2)), - 34: (BasicBlock, (3, 4, 6, 3)), - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, - depth, - in_channels=3, - stem_channels=None, - base_channels=64, - num_stages=4, - strides=(1, 2, 2, 2), - dilations=(1, 1, 1, 1), - out_indices=(0, 1, 2, 3), - style='pytorch', - deep_stem=False, - avg_down=False, - frozen_stages=-1, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - dcn=None, - stage_with_dcn=(False, False, False, False), - plugins=None, - with_cp=False, - zero_init_residual=True, - pretrained=None, - init_cfg=None): - super(ResNet, self).__init__(init_cfg) - self.zero_init_residual = zero_init_residual - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for resnet') - - block_init_cfg = None - assert not (init_cfg and pretrained), \ - 'init_cfg and pretrained cannot be setting at the same time' - if isinstance(pretrained, str): - warnings.warn('DeprecationWarning: pretrained is a deprecated, ' - 'please use "init_cfg" instead') - self.init_cfg = dict(type='Pretrained', checkpoint=pretrained) - elif pretrained is None: - if init_cfg is None: - self.init_cfg = [ - dict(type='Kaiming', layer='Conv2d'), - dict( - type='Constant', - val=1, - layer=['_BatchNorm', 'GroupNorm']) - ] - block = self.arch_settings[depth][0] - if self.zero_init_residual: - if block is BasicBlock: - block_init_cfg = dict( - type='Constant', - val=0, - override=dict(name='norm2')) - elif block is Bottleneck: - block_init_cfg = dict( - type='Constant', - val=0, - override=dict(name='norm3')) - else: - raise TypeError('pretrained must be a str or None') - - self.depth = depth - if stem_channels is None: - stem_channels = base_channels - self.stem_channels = stem_channels - self.base_channels = base_channels - self.num_stages = num_stages - assert num_stages >= 1 and num_stages <= 4 - self.strides = strides - self.dilations = dilations - assert len(strides) == len(dilations) == num_stages - self.out_indices = out_indices - assert max(out_indices) < num_stages - self.style = style - self.deep_stem = deep_stem - self.avg_down = avg_down - self.frozen_stages = frozen_stages - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.with_cp = with_cp - self.norm_eval = norm_eval - self.dcn = dcn - self.stage_with_dcn = stage_with_dcn - if dcn is not None: - assert len(stage_with_dcn) == num_stages - self.plugins = plugins - self.block, stage_blocks = self.arch_settings[depth] - self.stage_blocks = stage_blocks[:num_stages] - self.inplanes = stem_channels - - self._make_stem_layer(in_channels, stem_channels) - - self.res_layers = [] - for i, num_blocks in enumerate(self.stage_blocks): - stride = strides[i] - dilation = dilations[i] - dcn = self.dcn if self.stage_with_dcn[i] else None - if plugins is not None: - 
stage_plugins = self.make_stage_plugins(plugins, i) - else: - stage_plugins = None - planes = base_channels * 2**i - res_layer = self.make_res_layer( - block=self.block, - inplanes=self.inplanes, - planes=planes, - num_blocks=num_blocks, - stride=stride, - dilation=dilation, - style=self.style, - avg_down=self.avg_down, - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - dcn=dcn, - plugins=stage_plugins, - init_cfg=block_init_cfg) - self.inplanes = planes * self.block.expansion - layer_name = f'layer{i + 1}' - self.add_module(layer_name, res_layer) - self.res_layers.append(layer_name) - - self._freeze_stages() - - self.feat_dim = self.block.expansion * base_channels * 2**( - len(self.stage_blocks) - 1) - - def make_stage_plugins(self, plugins, stage_idx): - """Make plugins for ResNet ``stage_idx`` th stage. - - Currently we support to insert ``context_block``, - ``empirical_attention_block``, ``nonlocal_block`` into the backbone - like ResNet/ResNeXt. They could be inserted after conv1/conv2/conv3 of - Bottleneck. - - An example of plugins format could be: - - Examples: - >>> plugins=[ - ... dict(cfg=dict(type='xxx', arg1='xxx'), - ... stages=(False, True, True, True), - ... position='after_conv2'), - ... dict(cfg=dict(type='yyy'), - ... stages=(True, True, True, True), - ... position='after_conv3'), - ... dict(cfg=dict(type='zzz', postfix='1'), - ... stages=(True, True, True, True), - ... position='after_conv3'), - ... dict(cfg=dict(type='zzz', postfix='2'), - ... stages=(True, True, True, True), - ... position='after_conv3') - ... ] - >>> self = ResNet(depth=18) - >>> stage_plugins = self.make_stage_plugins(plugins, 0) - >>> assert len(stage_plugins) == 3 - - Suppose ``stage_idx=0``, the structure of blocks in the stage would be: - - .. code-block:: none - - conv1-> conv2->conv3->yyy->zzz1->zzz2 - - Suppose 'stage_idx=1', the structure of blocks in the stage would be: - - .. code-block:: none - - conv1-> conv2->xxx->conv3->yyy->zzz1->zzz2 - - If stages is missing, the plugin would be applied to all stages. - - Args: - plugins (list[dict]): List of plugins cfg to build. The postfix is - required if multiple same type plugins are inserted. 
- stage_idx (int): Index of stage to build - - Returns: - list[dict]: Plugins for current stage - """ - stage_plugins = [] - for plugin in plugins: - plugin = plugin.copy() - stages = plugin.pop('stages', None) - assert stages is None or len(stages) == self.num_stages - # whether to insert plugin into current stage - if stages is None or stages[stage_idx]: - stage_plugins.append(plugin) - - return stage_plugins - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer``.""" - return ResLayer(**kwargs) - - @property - def norm1(self): - """nn.Module: the normalization layer named "norm1" """ - return getattr(self, self.norm1_name) - - def _make_stem_layer(self, in_channels, stem_channels): - if self.deep_stem: - self.stem = nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels, - stem_channels // 2, - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels // 2)[1], - nn.ReLU(inplace=True), - build_conv_layer( - self.conv_cfg, - stem_channels // 2, - stem_channels // 2, - kernel_size=3, - stride=1, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels // 2)[1], - nn.ReLU(inplace=True), - build_conv_layer( - self.conv_cfg, - stem_channels // 2, - stem_channels, - kernel_size=3, - stride=1, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels)[1], - nn.ReLU(inplace=True)) - else: - self.conv1 = build_conv_layer( - self.conv_cfg, - in_channels, - stem_channels, - kernel_size=7, - stride=2, - padding=3, - bias=False) - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, stem_channels, postfix=1) - self.add_module(self.norm1_name, norm1) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - - def _freeze_stages(self): - if self.frozen_stages >= 0: - if self.deep_stem: - self.stem.eval() - for param in self.stem.parameters(): - param.requires_grad = False - else: - self.norm1.eval() - for m in [self.conv1, self.norm1]: - for param in m.parameters(): - param.requires_grad = False - - for i in range(1, self.frozen_stages + 1): - m = getattr(self, f'layer{i}') - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def forward(self, x): - """Forward function.""" - if self.deep_stem: - x = self.stem(x) - else: - x = self.conv1(x) - x = self.norm1(x) - x = self.relu(x) - x = self.maxpool(x) - outs = [] - for i, layer_name in enumerate(self.res_layers): - res_layer = getattr(self, layer_name) - x = res_layer(x) - if i in self.out_indices: - outs.append(x) - return tuple(outs) - - def train(self, mode=True): - """Convert the model into training mode while keep normalization layer - freezed.""" - super(ResNet, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() - - -@BACKBONES.register_module() -class ResNetV1d(ResNet): - r"""ResNetV1d variant described in `Bag of Tricks - `_. - - Compared with default ResNet(ResNetV1b), ResNetV1d replaces the 7x7 conv in - the input stem with three 3x3 convs. And in the downsampling block, a 2x2 - avg_pool with stride 2 is added before conv, whose stride is changed to 1. 
- """ - - def __init__(self, **kwargs): - super(ResNetV1d, self).__init__( - deep_stem=True, avg_down=True, **kwargs) diff --git a/spaces/toonist/DualStyleGAN/style.css b/spaces/toonist/DualStyleGAN/style.css deleted file mode 100644 index 3db1282909d7cb6d1c286f8fd31e029225ed5e3a..0000000000000000000000000000000000000000 --- a/spaces/toonist/DualStyleGAN/style.css +++ /dev/null @@ -1,19 +0,0 @@ -h1 { - text-align: center; -} -img#overview { - max-width: 1000px; - max-height: 600px; - display: block; - margin: auto; -} -img#style-image { - max-width: 1000px; - max-height: 600px; - display: block; - margin: auto; -} -img#visitor-badge { - display: block; - margin: auto; -} diff --git a/spaces/trysem/parrot-paraphraser/app.py b/spaces/trysem/parrot-paraphraser/app.py deleted file mode 100644 index 245ccc3635a2352dd087321ff6d1f5da37068de7..0000000000000000000000000000000000000000 --- a/spaces/trysem/parrot-paraphraser/app.py +++ /dev/null @@ -1,42 +0,0 @@ -import gradio as gr - -from parrot import Parrot -import warnings - -warnings.filterwarnings("ignore") - -""" -uncomment to get reproducable paraphrase generations -def random_state(seed): - torch.manual_seed(seed) - if torch.cuda.is_available(): - torch.cuda.manual_seed_all(seed) - -random_state(1234) -""" - -# Init models (make sure you init ONLY once if you integrate this to your code) -parrot = Parrot(model_tag="prithivida/parrot_paraphraser_on_T5") - - -def generate_paraphases(phrase): - para_phrases = parrot.augment( - input_phrase=phrase, use_gpu=False, max_return_phrases=10 - ) - return "\n".join(["- " + item[0] for item in para_phrases]) - - -input_textbox = gr.Textbox(label="Type your sentence here", lines=5) -output_textbox = gr.Textbox(label="Paraphrases", lines=10) - -demo = gr.Interface( - fn=generate_paraphases, - inputs=input_textbox, - outputs=output_textbox, - examples=[ - "Can you recommed some upscale restaurants in Newyork?", - "What are the famous places we should not miss in Russia?", - ], -) - -demo.launch() diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Color Separation Software Free Download How to Screen Print with Multiple Colors.md b/spaces/usbethFlerru/sovits-modelsV2/example/Color Separation Software Free Download How to Screen Print with Multiple Colors.md deleted file mode 100644 index 68b2afbb29ce97238d2d23b0dd0f5237b33e3959..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Color Separation Software Free Download How to Screen Print with Multiple Colors.md +++ /dev/null @@ -1,33 +0,0 @@ - -

Color separation is the process of separating the various colors within a design. In the screen-printing process, color separations are essential: before a T-shirt or any other product can be printed properly, the colors that make up the design must first be separated.

              -

              Color Separation Software Free Download


              Download File --->>> https://urlcod.com/2uyX85



              -

The general purpose of the color separation process is to prepare the art file so that a film positive can be printed properly. Color separation software breaks an image down into the constituent parts required to reproduce it. Those parts may be the four process colors: cyan, magenta, yellow, and black (CMYK). Or, perhaps more commonly in screen printing, the image is separated into individual spot colors such as pink, red, light blue, and dark blue.
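
As a rough illustration of what process-color separation produces, the sketch below converts an RGB design into four grayscale plates using the standard naive RGB-to-CMYK formula. It is a simplified example, not how any of the commercial packages discussed here work internally; the file name is hypothetical, and NumPy and Pillow are assumed to be available.

```python
import numpy as np
from PIL import Image

# Naive RGB -> CMYK split with simple black (K) extraction.
rgb = np.asarray(Image.open("design.png").convert("RGB"), dtype=np.float32) / 255.0

k = 1.0 - rgb.max(axis=2)                 # black covers what no CMY mix can
denom = np.clip(1.0 - k, 1e-6, None)      # avoid dividing by zero on pure black
c = (1.0 - rgb[..., 0] - k) / denom
m = (1.0 - rgb[..., 1] - k) / denom
y = (1.0 - rgb[..., 2] - k) / denom

# Each plate is one ink's coverage map: 0 = no ink, 255 = full ink.
for name, plate in zip(("cyan", "magenta", "yellow", "black"), (c, m, y, k)):
    Image.fromarray((plate * 255).astype("uint8")).save(f"plate_{name}.png")
```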

              -

In addition to being purpose-built for the task, Separation Studio NXT has a strong user experience. The drag-and-drop functions and toggle buttons make the product easy to navigate, and the layout feels welcoming, which ultimately improves productivity. The software also automatically converts process colors to spot colors when files are opened. Another benefit of Separation Studio is how easy it makes creating and editing the underbase, including slightly reducing its width (i.e., choking the underbase).

              -

As you decide whether to buy paid separation software, calculate your total costs. Screen printing and color separation require several software (and other) products to complete the job, so consider all of the expenses along the way before making your choice; looking at the cost of Separation Studio alone will not do your budget justice.

              -

              Also, ask yourself how fast you can churn out print-ready designs with separation software. Will it be significantly faster than your current rate, and will you be gaining more than you have spent overall? These are vital considerations before opting for paid software.

              -

              -

In any case, color separation remains a two-step process: separating the colors and creating halftones (if the design requires them), as established earlier. You can do both without dedicated separation software using these two methods:

              -

The first step, color separation, can be done in design software you probably already own, such as Photoshop. It may not be automated, but separating a few solid colors in Photoshop takes about 10-15 minutes, depending on the number of colors; there is barely any difference between this and dedicated color separation software, which takes about 5-10 minutes. The second step, creating halftones manually, can also be done in Photoshop but takes longer, roughly 30 minutes to an hour if the image is complex or photorealistic. However, there may be a way to reduce your overall time while still saving some money.
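
To make the "creating halftones" step concrete, here is a minimal sketch of an amplitude-modulated halftone: the image is divided into small cells, and each cell receives a round dot whose area matches that cell's darkness. This is a toy illustration that assumes Pillow and NumPy are installed; real RIPs and Photoshop's bitmap/halftone-screen conversion handle screen angles, dot shapes, and mesh counts far more carefully.

```python
import numpy as np
from PIL import Image, ImageDraw

def halftone(gray_img, cell=8):
    """Crude amplitude-modulated halftone: one dot per cell, with the
    dot's area proportional to the average darkness of that cell."""
    gray = np.asarray(gray_img.convert("L"), dtype=np.float32) / 255.0
    h, w = gray.shape
    out = Image.new("L", (w, h), 255)                 # start from white "paper"
    draw = ImageDraw.Draw(out)
    for y in range(0, h, cell):
        for x in range(0, w, cell):
            darkness = 1.0 - gray[y:y + cell, x:x + cell].mean()
            radius = (cell / 2) * np.sqrt(darkness)    # area-proportional dot
            if radius > 0.3:
                cx, cy = x + cell / 2, y + cell / 2
                draw.ellipse([cx - radius, cy - radius, cx + radius, cy + radius], fill=0)
    return out

# Hypothetical usage on one separation plate:
# halftone(Image.open("plate_black.png"), cell=10).save("plate_black_halftone.png")
```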

              -

However, if your budget allows you to pay for both separation software and RIP software, you can use Separation Studio for both color separation and halftones. While it costs more, it saves time: Separation Studio takes only about 10-20 minutes for both color separation and halftones, even for complex images.

              -

Regarding paid software, our recommendation is to weigh the factors that matter to you: the eventual costs, the time spent, and how they affect each other. Compared with free or lower-cost options, paid separation software is only worthwhile in certain circumstances, for example if your jobs are time-bound and high-volume, or if you often print complex images with many colors. If your business model does not regularly involve such situations, free separation software or design tools like Photoshop are more than enough.

              -

QuikSeps Professional is an entry-level color separation program owned by UltraSeps and is essentially the foundation of UltraSeps. Originally released in 1999, it is currently available in its fifth version.

              -

QuikSeps is an ideal choice for those looking to save a few dollars or who are new to the screen printing business. It supports almost all color separation types, including spot, index, simulated process, and grayscale.

              -

Xerio Sep, a spot color separation program from the Italy-based company Grafco Ast, optimizes multi-color screen printing by simplifying the color separation process for complex images, saving both time and money. Besides color separation, it offers white backprint creation, white and black color separation, four-color process printing, and support for speeding up digital printing.

              -

SimpleSeps offers easy, accurate, automatic, high-end simulated process color separations for CorelDraw and PhotoPaint. In addition, you get features like halftone processing and preview, RIP color management, color reduction, an automated white base, and more. You can even install it on more than one computer and operate it from virtually anywhere.

              -

UltraSeps claims to be one of the most advanced color separation programs ever made for screen printing. Its latest version, 3.0, provides multiple color separation methods, including spot, grayscale, duotone, index, and almost any other you can think of. With cutting-edge features like fleshtones, JPEG Repair, UltraSketch, underbase generation, and more, it is likely to become your first choice if your artwork uses a wide range of color separation techniques.

              -

A color separation plugin for Adobe Photoshop, T-Seps is a go-to tool for upgrading the art department and the quality of printing. It supports a wide range of color separation techniques, including spot color, index, simulated process, CMYK, and black-and-white monochrome. With productive features like a built-in RIP, job proofs, mesh counts, and standard plastisol settings, it can be a perfect fit for Adobe Photoshop.

              -

A plugin for Adobe Photoshop CS3-CC, it is suited to multi-color printing because it auto-generates multiple colors from a simple RGB file. Customizable settings, including color density, saturation, and black and white levels, automate the repetitive parts of the color separation process and give the print workflow much-needed flexibility.

              -

With over 15,000 color separations produced for 45,00+ users so far, InkSeps is one of the more popular color separation programs. With an easy-to-use web browser app and importers that automatically import and RIP simulated process color separations, it can be a good choice for high-quality simulated process separations.

              -

Ensure that the color separation software you pick is compatible with your computer's operating system and your design software. In general, most color separation software comes as an add-on for Illustrator, CorelDraw, or Photoshop, apart from the few standalone variants.

              -

In general, every type of color separation software can handle every color separation method. Still, you should pick the software that best fits the type of color separation you use most often in your artwork.

              -

Screen printing separation software is integral to saving time and energy while producing high-quality output. Keeping the points above in mind and referring to our list of the top 10 color separation programs can help you pick the right screen printing separation software and improve your customers' experience.

              -

The Magic Buttons plugin was created to make all of these tasks easier. It contains the essential color separation tools for screen printing; with their help you can simplify a number of routine tasks and focus on what matters.

              -

When preparing a color separation file, you very often need to edit an individual color channel. Because separation software generates channels through a system of filters, channels that are close to each other in the color spectrum can overlap, and the filters cannot fully eliminate these parasitic overlaps between related parts of the spectrum on their own. The Manual Color tool exists precisely for editing the color composition of a single channel independently.

              -

With this tool, you can edit gray, for example, so that it matches the exact shade you want. The plugin generates many types of gray for comfortable work, and you can edit each of them depending on your task, which gives huge scope for solving creative and professional problems in color separation. The accompanying video shows an example of how the Manual Color tool works with shades of gray.
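
As a rough analogue of editing one channel's composition independently, the sketch below adjusts a single separation plate with a gain and a gamma curve while leaving every other channel untouched. This is only an illustration of the general idea, not the Magic Buttons Manual Color tool itself; the file name and parameter values are hypothetical.

```python
import numpy as np
from PIL import Image

def adjust_channel(plate_path, gain=1.0, gamma=1.0):
    """Edit one separation channel on its own: scale its ink coverage
    (gain) and reshape its tonal response (gamma)."""
    plate = np.asarray(Image.open(plate_path).convert("L"), dtype=np.float32) / 255.0
    plate = np.clip(gain * np.power(plate, gamma), 0.0, 1.0)
    return Image.fromarray((plate * 255).astype("uint8"))

# e.g. pull a warm gray back by 15% and reduce its midtone coverage slightly:
# adjust_channel("plate_gray.png", gain=0.85, gamma=1.1).save("plate_gray_edited.png")
```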

              -

When separating monochrome images for screen printing, tonal accuracy is very important, since attention is focused less on color and more on tone. To convey all the tonal nuances of the image, you have several types of gray generation, manual black generation, and several types of highlights.

              -

              Color separation for screen printing is the process where a program separates the different colors present in an image into individual images. Essentially, the process gives you a separate image for each color in the design.

              -

Other processes, like spot color and index color, use more than four ink colors. The spot process uses a separate ink for every color in your image, so color separation splits a spot process design into as many images as there are colors.
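
The sketch below shows that idea at its simplest: for flat, non-antialiased artwork, every unique color becomes its own plate, black where that ink prints and white elsewhere. The file name is hypothetical, NumPy and Pillow are assumed, and real separation software additionally handles antialiasing, trapping, and underbase generation.

```python
import numpy as np
from PIL import Image

# One plate per ink: works for flat-color artwork with no antialiasing.
art = np.asarray(Image.open("spot_design.png").convert("RGB"))
colors = np.unique(art.reshape(-1, 3), axis=0)

for i, color in enumerate(colors):
    mask = np.all(art == color, axis=2)                # pixels printed with this ink
    plate = np.where(mask, 0, 255).astype("uint8")     # black = ink, white = no ink
    Image.fromarray(plate).save(f"spot_plate_{i}.png")

print(f"{len(colors)} plates written, one per spot color")
```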

              -

              Good color separation software can also help you reduce the number of screens you need per image. As part of the color separation process, these programs can figure out which colors to layer to create other colors. For multi-ink processes like index or spot color, this can save you even more time, energy, and supplies.

              -

Many paid options have a free trial available, so you can find the one you like best before committing. If you want truly free color separation software, the best choices are below.

              aaccfb2cb3
              -
              -
              \ No newline at end of file diff --git a/spaces/victor/autotrain-advanced-dreambooth/Dockerfile b/spaces/victor/autotrain-advanced-dreambooth/Dockerfile deleted file mode 100644 index 46dbc3221f848100b112fff3089e0a846994c4eb..0000000000000000000000000000000000000000 --- a/spaces/victor/autotrain-advanced-dreambooth/Dockerfile +++ /dev/null @@ -1,2 +0,0 @@ -FROM huggingface/autotrain-advanced:latest -CMD autotrain app --task dreambooth --port 7860 \ No newline at end of file diff --git a/spaces/videfikri/aicover/extract_feature_print.py b/spaces/videfikri/aicover/extract_feature_print.py deleted file mode 100644 index 987daabb9cf8a3259f673dc9bd7d24a15dadfde6..0000000000000000000000000000000000000000 --- a/spaces/videfikri/aicover/extract_feature_print.py +++ /dev/null @@ -1,104 +0,0 @@ -import os, sys, traceback - -# device=sys.argv[1] -n_part = int(sys.argv[2]) -i_part = int(sys.argv[3]) -if len(sys.argv) == 5: - exp_dir = sys.argv[4] -else: - i_gpu = sys.argv[4] - exp_dir = sys.argv[5] - os.environ["CUDA_VISIBLE_DEVICES"] = str(i_gpu) - -import torch -import torch.nn.functional as F -import soundfile as sf -import numpy as np -from fairseq import checkpoint_utils - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -f = open("%s/extract_f0_feature.log" % exp_dir, "a+") - - -def printt(strr): - print(strr) - f.write("%s\n" % strr) - f.flush() - - -printt(sys.argv) -model_path = "hubert_base.pt" - -printt(exp_dir) -wavPath = "%s/1_16k_wavs" % exp_dir -outPath = "%s/3_feature256" % exp_dir -os.makedirs(outPath, exist_ok=True) - - -# wave must be 16k, hop_size=320 -def readwave(wav_path, normalize=False): - wav, sr = sf.read(wav_path) - assert sr == 16000 - feats = torch.from_numpy(wav).float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - if normalize: - with torch.no_grad(): - feats = F.layer_norm(feats, feats.shape) - feats = feats.view(1, -1) - return feats - - -# HuBERT model -printt("load model(s) from {}".format(model_path)) -models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [model_path], - suffix="", -) -model = models[0] -model = model.to(device) -printt("move model to %s" % device) -if device != "cpu": - model = model.half() -model.eval() - -todo = sorted(list(os.listdir(wavPath)))[i_part::n_part] -n = max(1, len(todo) // 10) # 最多打印十条 -if len(todo) == 0: - printt("no-feature-todo") -else: - printt("all-feature-%s" % len(todo)) - for idx, file in enumerate(todo): - try: - if file.endswith(".wav"): - wav_path = "%s/%s" % (wavPath, file) - out_path = "%s/%s" % (outPath, file.replace("wav", "npy")) - - if os.path.exists(out_path): - continue - - feats = readwave(wav_path, normalize=saved_cfg.task.normalize) - padding_mask = torch.BoolTensor(feats.shape).fill_(False) - inputs = { - "source": feats.half().to(device) - if device != "cpu" - else feats.to(device), - "padding_mask": padding_mask.to(device), - "output_layer": 9, # layer 9 - } - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) - - feats = feats.squeeze(0).float().cpu().numpy() - if np.isnan(feats).sum() == 0: - np.save(out_path, feats, allow_pickle=False) - else: - printt("%s-contains nan" % file) - if idx % n == 0: - printt("now-%s,all-%s,%s,%s" % (len(todo), idx, file, feats.shape)) - except: - printt(traceback.format_exc()) - printt("all-feature-done") diff --git a/spaces/videfikri/aicover/infer/trans_weights.py 
b/spaces/videfikri/aicover/infer/trans_weights.py deleted file mode 100644 index da0759627d3fee175a2311a5ac50ccb7f8db8ded..0000000000000000000000000000000000000000 --- a/spaces/videfikri/aicover/infer/trans_weights.py +++ /dev/null @@ -1,16 +0,0 @@ -import torch, pdb - -# a=torch.load(r"E:\codes\py39\vits_vc_gpu_train\logs\ft-mi-suc\G_1000.pth")["model"]#sim_nsf# -# a=torch.load(r"E:\codes\py39\vits_vc_gpu_train\logs\ft-mi-freeze-vocoder-flow-enc_q\G_1000.pth")["model"]#sim_nsf# -# a=torch.load(r"E:\codes\py39\vits_vc_gpu_train\logs\ft-mi-freeze-vocoder\G_1000.pth")["model"]#sim_nsf# -# a=torch.load(r"E:\codes\py39\vits_vc_gpu_train\logs\ft-mi-test\G_1000.pth")["model"]#sim_nsf# -a = torch.load( - r"E:\codes\py39\vits_vc_gpu_train\logs\ft-mi-no_opt-no_dropout\G_1000.pth" -)[ - "model" -] # sim_nsf# -for key in a.keys(): - a[key] = a[key].half() -# torch.save(a,"ft-mi-freeze-vocoder_true_1k.pt")# -# torch.save(a,"ft-mi-sim1k.pt")# -torch.save(a, "ft-mi-no_opt-no_dropout.pt") # diff --git a/spaces/vumichien/Generate_human_motion/pyrender/examples/duck.py b/spaces/vumichien/Generate_human_motion/pyrender/examples/duck.py deleted file mode 100644 index 9a94bad5bfb30493f7364f2e52cbb4badbccb2c7..0000000000000000000000000000000000000000 --- a/spaces/vumichien/Generate_human_motion/pyrender/examples/duck.py +++ /dev/null @@ -1,13 +0,0 @@ -from pyrender import Mesh, Scene, Viewer -from io import BytesIO -import numpy as np -import trimesh -import requests - -duck_source = "https://github.com/KhronosGroup/glTF-Sample-Models/raw/master/2.0/Duck/glTF-Binary/Duck.glb" - -duck = trimesh.load(BytesIO(requests.get(duck_source).content), file_type='glb') -duckmesh = Mesh.from_trimesh(list(duck.geometry.values())[0]) -scene = Scene(ambient_light=np.array([1.0, 1.0, 1.0, 1.0])) -scene.add(duckmesh) -Viewer(scene) diff --git a/spaces/weidacn/deepdanbooru/deepdanbooru/data/dataset_wrapper.py b/spaces/weidacn/deepdanbooru/deepdanbooru/data/dataset_wrapper.py deleted file mode 100644 index f7f4ee1252d9bc3d0f773e2d1d69c68d0327037d..0000000000000000000000000000000000000000 --- a/spaces/weidacn/deepdanbooru/deepdanbooru/data/dataset_wrapper.py +++ /dev/null @@ -1,117 +0,0 @@ -import random - -import numpy as np -import tensorflow as tf -import tensorflow_io as tfio - -import deepdanbooru as dd - - -class DatasetWrapper: - """ - Wrapper class for data pipelining/augmentation. 
- """ - - def __init__( - self, inputs, tags, width, height, scale_range, rotation_range, shift_range - ): - self.inputs = inputs - self.width = width - self.height = height - self.scale_range = scale_range - self.rotation_range = rotation_range - self.shift_range = shift_range - self.tag_all_array = np.array(tags) - - def get_dataset(self, minibatch_size): - dataset = tf.data.Dataset.from_tensor_slices(self.inputs) - dataset = dataset.map( - self.map_load_image, num_parallel_calls=tf.data.experimental.AUTOTUNE - ) - dataset = dataset.apply(tf.data.experimental.ignore_errors()) - dataset = dataset.map( - self.map_transform_image_and_label, - num_parallel_calls=tf.data.experimental.AUTOTUNE, - ) - dataset = dataset.batch(minibatch_size) - dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE) - # dataset = dataset.apply( - # tf.data.experimental.prefetch_to_device('/device:GPU:0')) - - return dataset - - def map_load_image(self, image_path, tag_string): - image_raw = tf.io.read_file(image_path) - try: - image = tf.io.decode_png(image_raw, channels=3) - except: - image = tfio.image.decode_webp(image_raw) - image = tfio.experimental.color.rgba_to_rgb(image) - - if self.scale_range: - pre_scale = self.scale_range[1] - else: - pre_scale = 1.0 - - size = (int(self.height * pre_scale), int(self.width * pre_scale)) - - image = tf.image.resize( - image, - size=size, - method=tf.image.ResizeMethod.AREA, - preserve_aspect_ratio=True, - ) - - return (image, tag_string) - - def map_transform_image_and_label(self, image, tag_string): - return tf.py_function( - self.map_transform_image_and_label_py, - (image, tag_string), - (tf.float32, tf.float32), - ) - - def map_transform_image_and_label_py(self, image, tag_string): - # transform image - image = image.numpy() - - if self.scale_range: - scale = random.uniform(self.scale_range[0], self.scale_range[1]) * ( - 1.0 / self.scale_range[1] - ) - else: - scale = None - - if self.rotation_range: - rotation = random.uniform(self.rotation_range[0], self.rotation_range[1]) - else: - rotation = None - - if self.shift_range: - shift_x = random.uniform(self.shift_range[0], self.shift_range[1]) - shift_y = random.uniform(self.shift_range[0], self.shift_range[1]) - shift = (shift_x, shift_y) - else: - shift = None - - image = dd.image.transform_and_pad_image( - image=image, - target_width=self.width, - target_height=self.height, - rotation=rotation, - scale=scale, - shift=shift, - ) - - image = image / 255.0 # normalize to 0~1 - # image = image.astype(np.float32) - - # transform tag - tag_string = tag_string.numpy().decode() - tag_array = np.array(tag_string.split(" ")) - - labels = np.where(np.isin(self.tag_all_array, tag_array), 1, 0).astype( - np.float32 - ) - - return (image, labels) diff --git a/spaces/weishao2019/ChuanhuChatGPT/README.md b/spaces/weishao2019/ChuanhuChatGPT/README.md deleted file mode 100644 index e480de7b25ab44894a247cf70e9954fd1b15f934..0000000000000000000000000000000000000000 --- a/spaces/weishao2019/ChuanhuChatGPT/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: ChuanhuChatGPT -emoji: 🐯 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.22.1 -app_file: app.py -pinned: false -license: gpl-3.0 -duplicated_from: JohnSmith9982/ChuanhuChatGPT ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/wendys-llc/panoptic-segment-anything/segment_anything/CONTRIBUTING.md 
b/spaces/wendys-llc/panoptic-segment-anything/segment_anything/CONTRIBUTING.md deleted file mode 100644 index 263991c9496cf29ed4b99e03a9fb9a38e6bfaf86..0000000000000000000000000000000000000000 --- a/spaces/wendys-llc/panoptic-segment-anything/segment_anything/CONTRIBUTING.md +++ /dev/null @@ -1,31 +0,0 @@ -# Contributing to segment-anything -We want to make contributing to this project as easy and transparent as -possible. - -## Pull Requests -We actively welcome your pull requests. - -1. Fork the repo and create your branch from `main`. -2. If you've added code that should be tested, add tests. -3. If you've changed APIs, update the documentation. -4. Ensure the test suite passes. -5. Make sure your code lints, using the `linter.sh` script in the project's root directory. Linting requires `black==23.*`, `isort==5.12.0`, `flake8`, and `mypy`. -6. If you haven't already, complete the Contributor License Agreement ("CLA"). - -## Contributor License Agreement ("CLA") -In order to accept your pull request, we need you to submit a CLA. You only need -to do this once to work on any of Facebook's open source projects. - -Complete your CLA here: - -## Issues -We use GitHub issues to track public bugs. Please ensure your description is -clear and has sufficient instructions to be able to reproduce the issue. - -Facebook has a [bounty program](https://www.facebook.com/whitehat/) for the safe -disclosure of security bugs. In those cases, please go through the process -outlined on that page and do not file a public issue. - -## License -By contributing to segment-anything, you agree that your contributions will be licensed -under the LICENSE file in the root directory of this source tree. diff --git a/spaces/whitphx/gradio-static-test/dist/assets/ModifyUpload-ee7ccefb.js b/spaces/whitphx/gradio-static-test/dist/assets/ModifyUpload-ee7ccefb.js deleted file mode 100644 index 082e3bf2e03c737fedcc8f46a6d08a4e40d04cd2..0000000000000000000000000000000000000000 --- a/spaces/whitphx/gradio-static-test/dist/assets/ModifyUpload-ee7ccefb.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as g,i as b,s as _,H as k,e as I,D as r,h as v,F as p,m as x,K as E,q as d,t as h,r as w,o as C,z as L,C as m,E as c,G as f,I as B,N as $,n as j,p as q,u as D}from"../lite.js";import"./Button-0391b19a.js";/* empty css */import"./ModifyUpload.svelte_svelte_type_style_lang-ba6baa96.js";function S(a){let e,l,t,s,n,o;return t=new a[0]({}),{c(){e=k("button"),l=k("div"),I(t.$$.fragment),r(l,"class","svelte-1p4r00v"),r(e,"aria-label",a[1]),r(e,"class","svelte-1p4r00v")},m(i,u){v(i,e,u),p(e,l),x(t,l,null),s=!0,n||(o=E(e,"click",a[2]),n=!0)},p(i,[u]){(!s||u&2)&&r(e,"aria-label",i[1])},i(i){s||(d(t.$$.fragment,i),s=!0)},o(i){h(t.$$.fragment,i),s=!1},d(i){i&&w(e),C(t),n=!1,o()}}}function F(a,e,l){let{Icon:t}=e,{label:s=""}=e;function n(o){L.call(this,a,o)}return a.$$set=o=>{"Icon"in o&&l(0,t=o.Icon),"label"in o&&l(1,s=o.label)},[t,s,n]}class z extends g{constructor(e){super(),b(this,e,F,S,_,{Icon:0,label:1})}}function G(a){let e,l,t,s;return{c(){e=m("svg"),l=m("g"),t=m("path"),s=m("path"),r(t,"d","M18,6L6.087,17.913"),c(t,"fill","none"),c(t,"fill-rule","nonzero"),c(t,"stroke-width","2px"),r(l,"transform","matrix(1.14096,-0.140958,-0.140958,1.14096,-0.0559523,0.0559523)"),r(s,"d","M4.364,4.364L19.636,19.636"),c(s,"fill","none"),c(s,"fill-rule","nonzero"),c(s,"stroke-width","2px"),r(e,"width","100%"),r(e,"height","100%"),r(e,"viewBox","0 0 24 
24"),r(e,"version","1.1"),r(e,"xmlns","http://www.w3.org/2000/svg"),r(e,"xmlns:xlink","http://www.w3.org/1999/xlink"),r(e,"xml:space","preserve"),r(e,"stroke","currentColor"),c(e,"fill-rule","evenodd"),c(e,"clip-rule","evenodd"),c(e,"stroke-linecap","round"),c(e,"stroke-linejoin","round")},m(n,o){v(n,e,o),p(e,l),p(l,t),p(e,s)},p:f,i:f,o:f,d(n){n&&w(e)}}}class H extends g{constructor(e){super(),b(this,e,null,G,_,{})}}function K(a){let e,l;return{c(){e=m("svg"),l=m("path"),r(l,"d","M17 3a2.828 2.828 0 1 1 4 4L7.5 20.5 2 22l1.5-5.5L17 3z"),r(e,"xmlns","http://www.w3.org/2000/svg"),r(e,"width","100%"),r(e,"height","100%"),r(e,"viewBox","0 0 24 24"),r(e,"fill","none"),r(e,"stroke","currentColor"),r(e,"stroke-width","1.5"),r(e,"stroke-linecap","round"),r(e,"stroke-linejoin","round"),r(e,"class","feather feather-edit-2")},m(t,s){v(t,e,s),p(e,l)},p:f,i:f,o:f,d(t){t&&w(e)}}}class N extends g{constructor(e){super(),b(this,e,null,K,_,{})}}function M(a){let e,l;return e=new z({props:{Icon:N,label:"Edit"}}),e.$on("click",a[3]),{c(){I(e.$$.fragment)},m(t,s){x(e,t,s),l=!0},p:f,i(t){l||(d(e.$$.fragment,t),l=!0)},o(t){h(e.$$.fragment,t),l=!1},d(t){C(e,t)}}}function P(a){let e,l,t,s,n=a[0]&&M(a);return t=new z({props:{Icon:H,label:"Clear"}}),t.$on("click",a[4]),{c(){e=k("div"),n&&n.c(),l=B(),I(t.$$.fragment),r(e,"class","svelte-19sk1im"),$(e,"not-absolute",!a[1]),c(e,"position",a[1]?"absolute":"static")},m(o,i){v(o,e,i),n&&n.m(e,null),p(e,l),x(t,e,null),s=!0},p(o,[i]){o[0]?n?(n.p(o,i),i&1&&d(n,1)):(n=M(o),n.c(),d(n,1),n.m(e,l)):n&&(j(),h(n,1,1,()=>{n=null}),q()),(!s||i&2)&&$(e,"not-absolute",!o[1]),i&2&&c(e,"position",o[1]?"absolute":"static")},i(o){s||(d(n),d(t.$$.fragment,o),s=!0)},o(o){h(n),h(t.$$.fragment,o),s=!1},d(o){o&&w(e),n&&n.d(),C(t)}}}function U(a,e,l){let{editable:t=!1}=e,{absolute:s=!0}=e;const n=D(),o=()=>n("edit"),i=u=>{n("clear"),u.stopPropagation()};return a.$$set=u=>{"editable"in u&&l(0,t=u.editable),"absolute"in u&&l(1,s=u.absolute)},[t,s,n,o,i]}class Q extends g{constructor(e){super(),b(this,e,U,P,_,{editable:0,absolute:1})}}export{H as C,z as I,Q as M}; -//# sourceMappingURL=ModifyUpload-ee7ccefb.js.map diff --git a/spaces/wilson1/bingo/tests/kblob.ts b/spaces/wilson1/bingo/tests/kblob.ts deleted file mode 100644 index 9e15b41c1c94a690beb61b23cdb42fc78767ccd2..0000000000000000000000000000000000000000 --- a/spaces/wilson1/bingo/tests/kblob.ts +++ /dev/null @@ -1,27 +0,0 @@ -import FormData from 'form-data' - -import { fetch } from '@/lib/isomorphic' - -const formData = new FormData() - -const knowledgeRequest = {"imageInfo":{"url":"https://www.baidu.com/img/PCfb_5bf082d29588c07f842ccde3f97243ea.png"},"knowledgeRequest":{"invokedSkills":["ImageById"],"subscriptionId":"Bing.Chat.Multimodal","invokedSkillsRequestData":{"enableFaceBlur":true},"convoData":{"convoid":"51D|BingProdUnAuthenticatedUsers|E3DCA904FF236C67C3450163BCEC64CFF3F618CC8A4AFD75FD518F5ED0ADA080","convotone":"Creative"}}} - -formData.append('knowledgeRequest', JSON.stringify(knowledgeRequest)) - - -fetch('https://bing.vcanbb.top/images/kblob', - { - method: 'POST', - body: formData.getBuffer(), - headers: { - "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"", - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": "\"Windows\"", - "Referer": "https://bing.vcanbb.top/web/index.html", - "Referrer-Policy": "origin-when-cross-origin", - ...formData.getHeaders() - } - - } -).then(res => res.text()) -.then(res => console.log('res', res)) diff --git a/spaces/xelu3banh/dpt-depth02/README.md 
b/spaces/xelu3banh/dpt-depth02/README.md deleted file mode 100644 index b122fb808b972ab92e4b79249db039fd2b076933..0000000000000000000000000000000000000000 --- a/spaces/xelu3banh/dpt-depth02/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Dpt Depth Estimation -emoji: ⚡ -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 2.8.13 -app_file: app.py -pinned: false -duplicated_from: xelu3banh/dpt-depth01 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/models/inceptionresnetv2.py b/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/models/inceptionresnetv2.py deleted file mode 100644 index 03e40348425a2b1bc73e6f336efae8e5525cc45c..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/models/inceptionresnetv2.py +++ /dev/null @@ -1,361 +0,0 @@ -""" -Code imported from https://github.com/Cadene/pretrained-models.pytorch -""" -from __future__ import division, absolute_import -import torch -import torch.nn as nn -import torch.utils.model_zoo as model_zoo - -__all__ = ['inceptionresnetv2'] - -pretrained_settings = { - 'inceptionresnetv2': { - 'imagenet': { - 'url': - 'http://data.lip6.fr/cadene/pretrainedmodels/inceptionresnetv2-520b38e4.pth', - 'input_space': 'RGB', - 'input_size': [3, 299, 299], - 'input_range': [0, 1], - 'mean': [0.5, 0.5, 0.5], - 'std': [0.5, 0.5, 0.5], - 'num_classes': 1000 - }, - 'imagenet+background': { - 'url': - 'http://data.lip6.fr/cadene/pretrainedmodels/inceptionresnetv2-520b38e4.pth', - 'input_space': 'RGB', - 'input_size': [3, 299, 299], - 'input_range': [0, 1], - 'mean': [0.5, 0.5, 0.5], - 'std': [0.5, 0.5, 0.5], - 'num_classes': 1001 - } - } -} - - -class BasicConv2d(nn.Module): - - def __init__(self, in_planes, out_planes, kernel_size, stride, padding=0): - super(BasicConv2d, self).__init__() - self.conv = nn.Conv2d( - in_planes, - out_planes, - kernel_size=kernel_size, - stride=stride, - padding=padding, - bias=False - ) # verify bias false - self.bn = nn.BatchNorm2d( - out_planes, - eps=0.001, # value found in tensorflow - momentum=0.1, # default pytorch value - affine=True - ) - self.relu = nn.ReLU(inplace=False) - - def forward(self, x): - x = self.conv(x) - x = self.bn(x) - x = self.relu(x) - return x - - -class Mixed_5b(nn.Module): - - def __init__(self): - super(Mixed_5b, self).__init__() - - self.branch0 = BasicConv2d(192, 96, kernel_size=1, stride=1) - - self.branch1 = nn.Sequential( - BasicConv2d(192, 48, kernel_size=1, stride=1), - BasicConv2d(48, 64, kernel_size=5, stride=1, padding=2) - ) - - self.branch2 = nn.Sequential( - BasicConv2d(192, 64, kernel_size=1, stride=1), - BasicConv2d(64, 96, kernel_size=3, stride=1, padding=1), - BasicConv2d(96, 96, kernel_size=3, stride=1, padding=1) - ) - - self.branch3 = nn.Sequential( - nn.AvgPool2d(3, stride=1, padding=1, count_include_pad=False), - BasicConv2d(192, 64, kernel_size=1, stride=1) - ) - - def forward(self, x): - x0 = self.branch0(x) - x1 = self.branch1(x) - x2 = self.branch2(x) - x3 = self.branch3(x) - out = torch.cat((x0, x1, x2, x3), 1) - return out - - -class Block35(nn.Module): - - def __init__(self, scale=1.0): - super(Block35, self).__init__() - - self.scale = scale - - self.branch0 = BasicConv2d(320, 32, kernel_size=1, stride=1) - - self.branch1 = nn.Sequential( - BasicConv2d(320, 32, kernel_size=1, stride=1), - BasicConv2d(32, 32, kernel_size=3, stride=1, padding=1) - ) - - self.branch2 = nn.Sequential( - 
BasicConv2d(320, 32, kernel_size=1, stride=1), - BasicConv2d(32, 48, kernel_size=3, stride=1, padding=1), - BasicConv2d(48, 64, kernel_size=3, stride=1, padding=1) - ) - - self.conv2d = nn.Conv2d(128, 320, kernel_size=1, stride=1) - self.relu = nn.ReLU(inplace=False) - - def forward(self, x): - x0 = self.branch0(x) - x1 = self.branch1(x) - x2 = self.branch2(x) - out = torch.cat((x0, x1, x2), 1) - out = self.conv2d(out) - out = out * self.scale + x - out = self.relu(out) - return out - - -class Mixed_6a(nn.Module): - - def __init__(self): - super(Mixed_6a, self).__init__() - - self.branch0 = BasicConv2d(320, 384, kernel_size=3, stride=2) - - self.branch1 = nn.Sequential( - BasicConv2d(320, 256, kernel_size=1, stride=1), - BasicConv2d(256, 256, kernel_size=3, stride=1, padding=1), - BasicConv2d(256, 384, kernel_size=3, stride=2) - ) - - self.branch2 = nn.MaxPool2d(3, stride=2) - - def forward(self, x): - x0 = self.branch0(x) - x1 = self.branch1(x) - x2 = self.branch2(x) - out = torch.cat((x0, x1, x2), 1) - return out - - -class Block17(nn.Module): - - def __init__(self, scale=1.0): - super(Block17, self).__init__() - - self.scale = scale - - self.branch0 = BasicConv2d(1088, 192, kernel_size=1, stride=1) - - self.branch1 = nn.Sequential( - BasicConv2d(1088, 128, kernel_size=1, stride=1), - BasicConv2d( - 128, 160, kernel_size=(1, 7), stride=1, padding=(0, 3) - ), - BasicConv2d( - 160, 192, kernel_size=(7, 1), stride=1, padding=(3, 0) - ) - ) - - self.conv2d = nn.Conv2d(384, 1088, kernel_size=1, stride=1) - self.relu = nn.ReLU(inplace=False) - - def forward(self, x): - x0 = self.branch0(x) - x1 = self.branch1(x) - out = torch.cat((x0, x1), 1) - out = self.conv2d(out) - out = out * self.scale + x - out = self.relu(out) - return out - - -class Mixed_7a(nn.Module): - - def __init__(self): - super(Mixed_7a, self).__init__() - - self.branch0 = nn.Sequential( - BasicConv2d(1088, 256, kernel_size=1, stride=1), - BasicConv2d(256, 384, kernel_size=3, stride=2) - ) - - self.branch1 = nn.Sequential( - BasicConv2d(1088, 256, kernel_size=1, stride=1), - BasicConv2d(256, 288, kernel_size=3, stride=2) - ) - - self.branch2 = nn.Sequential( - BasicConv2d(1088, 256, kernel_size=1, stride=1), - BasicConv2d(256, 288, kernel_size=3, stride=1, padding=1), - BasicConv2d(288, 320, kernel_size=3, stride=2) - ) - - self.branch3 = nn.MaxPool2d(3, stride=2) - - def forward(self, x): - x0 = self.branch0(x) - x1 = self.branch1(x) - x2 = self.branch2(x) - x3 = self.branch3(x) - out = torch.cat((x0, x1, x2, x3), 1) - return out - - -class Block8(nn.Module): - - def __init__(self, scale=1.0, noReLU=False): - super(Block8, self).__init__() - - self.scale = scale - self.noReLU = noReLU - - self.branch0 = BasicConv2d(2080, 192, kernel_size=1, stride=1) - - self.branch1 = nn.Sequential( - BasicConv2d(2080, 192, kernel_size=1, stride=1), - BasicConv2d( - 192, 224, kernel_size=(1, 3), stride=1, padding=(0, 1) - ), - BasicConv2d( - 224, 256, kernel_size=(3, 1), stride=1, padding=(1, 0) - ) - ) - - self.conv2d = nn.Conv2d(448, 2080, kernel_size=1, stride=1) - if not self.noReLU: - self.relu = nn.ReLU(inplace=False) - - def forward(self, x): - x0 = self.branch0(x) - x1 = self.branch1(x) - out = torch.cat((x0, x1), 1) - out = self.conv2d(out) - out = out * self.scale + x - if not self.noReLU: - out = self.relu(out) - return out - - -# ---------------- -# Model Definition -# ---------------- -class InceptionResNetV2(nn.Module): - """Inception-ResNet-V2. - - Reference: - Szegedy et al. 
Inception-v4, Inception-ResNet and the Impact of Residual - Connections on Learning. AAAI 2017. - - Public keys: - - ``inceptionresnetv2``: Inception-ResNet-V2. - """ - - def __init__(self, num_classes, loss='softmax', **kwargs): - super(InceptionResNetV2, self).__init__() - self.loss = loss - - # Modules - self.conv2d_1a = BasicConv2d(3, 32, kernel_size=3, stride=2) - self.conv2d_2a = BasicConv2d(32, 32, kernel_size=3, stride=1) - self.conv2d_2b = BasicConv2d( - 32, 64, kernel_size=3, stride=1, padding=1 - ) - self.maxpool_3a = nn.MaxPool2d(3, stride=2) - self.conv2d_3b = BasicConv2d(64, 80, kernel_size=1, stride=1) - self.conv2d_4a = BasicConv2d(80, 192, kernel_size=3, stride=1) - self.maxpool_5a = nn.MaxPool2d(3, stride=2) - self.mixed_5b = Mixed_5b() - self.repeat = nn.Sequential( - Block35(scale=0.17), Block35(scale=0.17), Block35(scale=0.17), - Block35(scale=0.17), Block35(scale=0.17), Block35(scale=0.17), - Block35(scale=0.17), Block35(scale=0.17), Block35(scale=0.17), - Block35(scale=0.17) - ) - self.mixed_6a = Mixed_6a() - self.repeat_1 = nn.Sequential( - Block17(scale=0.10), Block17(scale=0.10), Block17(scale=0.10), - Block17(scale=0.10), Block17(scale=0.10), Block17(scale=0.10), - Block17(scale=0.10), Block17(scale=0.10), Block17(scale=0.10), - Block17(scale=0.10), Block17(scale=0.10), Block17(scale=0.10), - Block17(scale=0.10), Block17(scale=0.10), Block17(scale=0.10), - Block17(scale=0.10), Block17(scale=0.10), Block17(scale=0.10), - Block17(scale=0.10), Block17(scale=0.10) - ) - self.mixed_7a = Mixed_7a() - self.repeat_2 = nn.Sequential( - Block8(scale=0.20), Block8(scale=0.20), Block8(scale=0.20), - Block8(scale=0.20), Block8(scale=0.20), Block8(scale=0.20), - Block8(scale=0.20), Block8(scale=0.20), Block8(scale=0.20) - ) - - self.block8 = Block8(noReLU=True) - self.conv2d_7b = BasicConv2d(2080, 1536, kernel_size=1, stride=1) - self.global_avgpool = nn.AdaptiveAvgPool2d(1) - self.classifier = nn.Linear(1536, num_classes) - - def load_imagenet_weights(self): - settings = pretrained_settings['inceptionresnetv2']['imagenet'] - pretrain_dict = model_zoo.load_url(settings['url']) - model_dict = self.state_dict() - pretrain_dict = { - k: v - for k, v in pretrain_dict.items() - if k in model_dict and model_dict[k].size() == v.size() - } - model_dict.update(pretrain_dict) - self.load_state_dict(model_dict) - - def featuremaps(self, x): - x = self.conv2d_1a(x) - x = self.conv2d_2a(x) - x = self.conv2d_2b(x) - x = self.maxpool_3a(x) - x = self.conv2d_3b(x) - x = self.conv2d_4a(x) - x = self.maxpool_5a(x) - x = self.mixed_5b(x) - x = self.repeat(x) - x = self.mixed_6a(x) - x = self.repeat_1(x) - x = self.mixed_7a(x) - x = self.repeat_2(x) - x = self.block8(x) - x = self.conv2d_7b(x) - return x - - def forward(self, x): - f = self.featuremaps(x) - v = self.global_avgpool(f) - v = v.view(v.size(0), -1) - - if not self.training: - return v - - y = self.classifier(v) - - if self.loss == 'softmax': - return y - elif self.loss == 'triplet': - return y, v - else: - raise KeyError('Unsupported loss: {}'.format(self.loss)) - - -def inceptionresnetv2(num_classes, loss='softmax', pretrained=True, **kwargs): - model = InceptionResNetV2(num_classes=num_classes, loss=loss, **kwargs) - if pretrained: - model.load_imagenet_weights() - return model diff --git a/spaces/xiaoti/Real-CUGAN/app.py b/spaces/xiaoti/Real-CUGAN/app.py deleted file mode 100644 index 2439c5cec6b61e8a517f957daf710cbb6b5c3cf6..0000000000000000000000000000000000000000 --- a/spaces/xiaoti/Real-CUGAN/app.py +++ /dev/null @@ -1,62 +0,0 
@@ -from upcunet_v3 import RealWaifuUpScaler -import gradio as gr -import time -import logging -import os -from PIL import ImageOps -import numpy as np -import math - - -def greet(input_img, input_model_name, input_tile_mode): - # if input_img.size[0] * input_img.size[1] > 256 * 256: - # y = int(math.sqrt(256*256/input_img.size[0]*input_img.size[1])) - # x = int(input_img.size[0]/input_img.size[1]*y) - # input_img = ImageOps.fit(input_img, (x, y)) - input_img = np.array(input_img) - if input_model_name not in model_cache: - t1 = time.time() - upscaler = RealWaifuUpScaler(input_model_name[2], ModelPath + input_model_name, half=False, device="cpu") - t2 = time.time() - logger.info(f'load model time, {t2 - t1}') - model_cache[input_model_name] = upscaler - else: - upscaler = model_cache[input_model_name] - logger.info(f'load model from cache') - - start = time.time() - result = upscaler(input_img, tile_mode=input_tile_mode) - end = time.time() - logger.info(f'input_model_name, {input_model_name}') - logger.info(f'input_tile_mode, {input_tile_mode}') - logger.info(f'input shape, {input_img.shape}') - logger.info(f'output shape, {result.shape}') - logger.info(f'speed time, {end - start}') - return result - - -if __name__ == '__main__': - logging.basicConfig(level=logging.INFO, format="[%(asctime)s] [%(process)d] [%(levelname)s] %(message)s") - logger = logging.getLogger() - - ModelPath = "weights_v3/" - model_cache = {} - - input_model_name = gr.inputs.Dropdown(os.listdir(ModelPath), default="up2x-latest-denoise2x.pth", label='选择model') - input_tile_mode = gr.inputs.Dropdown([0, 1, 2, 3, 4], default=2, label='选择tile_mode') - input_img = gr.inputs.Image(label='image', type='pil') - - inputs = [input_img, input_model_name, input_tile_mode] - outputs = "image" - iface = gr.Interface(fn=greet, - inputs=inputs, - outputs=outputs, - allow_screenshot=False, - allow_flagging='never', - examples=[['test-img.jpg', "up2x-latest-denoise2x.pth", 2]], - article='[https://github.com/bilibili/ailab/tree/main/Real-CUGAN](https://github.com/bilibili/ailab/tree/main/Real-CUGAN)
' - 'Thanks to the open-source project from bilibili. Oversized images exceed the memory limit, so the input image is cropped and downscaled; to try the full-size effect, please use the link above.&lt;br&gt;
              ' - '修改bbb' - 'The large image will lead to memory limit exceeded. So I crop and resize image. ' - 'If you want to experience the large image, please go to the link above.') - iface.launch() diff --git a/spaces/xiaoxuezi/spleeter/app.py b/spaces/xiaoxuezi/spleeter/app.py deleted file mode 100644 index 11e7d12f5ffe4493e21cd1d96de66a75886b74b3..0000000000000000000000000000000000000000 --- a/spaces/xiaoxuezi/spleeter/app.py +++ /dev/null @@ -1,28 +0,0 @@ -import os -import gradio as gr - - -def function(audio_path, output_dir="example/output"): - filename = os.path.basename(audio_path).split('.')[0] - - cmd_result = os.popen(f'python spleeter separate -p spleeter:2stems -o {output_dir} {audio_path}').read() - print(cmd_result) - return f"example/output/{filename}/accompaniment.wav", f"example/output/{filename}/vocals.wav" - -inputs = [ - gr.inputs.Audio(source="upload", type="filepath", label=None, optional=False) -] -outputs = [ - gr.outputs.Audio(type="file", label='accompaniment'), - gr.outputs.Audio(type="file", label='vocals') -] - -examples = ["example/audio_example.mp3"] - - -iface = gr.Interface(fn=function, inputs=inputs, outputs=outputs, examples=examples) -iface.launch(share=True) - -if __name__ == "__main__": - function("example/audio_example.mp3", "example/output") - pass \ No newline at end of file diff --git a/spaces/xnetba/Chat_advance/assets/html/appearance_switcher.html b/spaces/xnetba/Chat_advance/assets/html/appearance_switcher.html deleted file mode 100644 index 9375071fbdfda7bfd622d7f7bd2dfdd0c494341b..0000000000000000000000000000000000000000 --- a/spaces/xnetba/Chat_advance/assets/html/appearance_switcher.html +++ /dev/null @@ -1,11 +0,0 @@ -
              - - {label} - - - - -
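Only the `{label}` placeholder of the deleted `appearance_switcher.html` snippet is legible above. As a rough illustration only, the sketch below shows one hypothetical way such an HTML snippet could be loaded and rendered inside a Gradio Blocks app; the file path, label text, and wiring are assumptions for illustration, not the original code.

```python
# Hypothetical sketch: injecting a custom HTML appearance switcher into a Gradio app.
# The "{label}" substitution mirrors the placeholder seen in the deleted file; the
# path, label text, and layout are assumptions, not the original implementation.
import gradio as gr


def load_switcher(path: str = "assets/html/appearance_switcher.html",
                  label: str = "Dark mode") -> str:
    """Read the HTML snippet and fill in its {label} placeholder."""
    with open(path, encoding="utf-8") as f:
        return f.read().replace("{label}", label)


with gr.Blocks() as demo:
    gr.HTML(load_switcher())        # render the switcher markup at the top of the page
    gr.Markdown("rest of the UI")   # placeholder for the actual application layout

if __name__ == "__main__":
    demo.launch()
```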
              diff --git a/spaces/xuqinyang/Baichuan-13B-Chat-Int8-Cpp/README.md b/spaces/xuqinyang/Baichuan-13B-Chat-Int8-Cpp/README.md deleted file mode 100644 index 31cb817714118414a0f49010055e2dfa360af010..0000000000000000000000000000000000000000 --- a/spaces/xuqinyang/Baichuan-13B-Chat-Int8-Cpp/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Baichuan 13B Chat -emoji: 💻 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.38.0 -app_file: app.py -pinned: false -models: -- baichuan-inc/Baichuan-13B-Chat -duplicated_from: xqy2006/Baichuan-13B-Chat ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/xuxw98/TAPA/scripts/prepare_dolly.py b/spaces/xuxw98/TAPA/scripts/prepare_dolly.py deleted file mode 100644 index a40fa8ddc11aab95490107e419ed21603e3e2d9e..0000000000000000000000000000000000000000 --- a/spaces/xuxw98/TAPA/scripts/prepare_dolly.py +++ /dev/null @@ -1,133 +0,0 @@ -"""Implementation derived from https://github.com/tloen/alpaca-lora""" -import sys -from pathlib import Path - -# support running without installing as a package -wd = Path(__file__).parent.parent.resolve() -sys.path.append(str(wd)) - -import torch -import requests -import json -from torch.utils.data import random_split -from lit_llama.tokenizer import Tokenizer -from tqdm import tqdm - - -DATA_FILE = "https://huggingface.co/datasets/databricks/databricks-dolly-15k/resolve/main/databricks-dolly-15k.jsonl" -DATA_FILE_NAME = "dolly_data_cleaned.json" -IGNORE_INDEX = -1 - - -def prepare( - destination_path: Path = Path("data/dolly"), - tokenizer_path: Path = Path("checkpoints/lit-llama/tokenizer.model"), - test_split_size: int = 2000, - max_seq_length: int = 1024, - seed: int = 42, - mask_inputs: bool = False, # as in alpaca-lora -) -> None: - """Prepare the Dolly dataset for instruction tuning. - - The output is a training and validation dataset saved as `train.pt` and `val.pt`, - which stores the preprocessed and tokenized prompts and labels. - """ - - destination_path.mkdir(parents=True, exist_ok=True) - file_path = destination_path / DATA_FILE_NAME - download(file_path) - - # TODO: If we don't have the Meta weights, where do we get the tokenizer from? 
- tokenizer = Tokenizer(tokenizer_path) - - with open(file_path, "r") as file: - data = file.readlines() - data = [json.loads(line) for line in data] - for item in data: - item["input"] = item.pop("context") - item["output"] = item.pop("response") - - # Partition the dataset into train and test - train_split_size = len(data) - test_split_size - train_set, test_set = random_split( - data, - lengths=(train_split_size, test_split_size), - generator=torch.Generator().manual_seed(seed), - ) - train_set, test_set = list(train_set), list(test_set) - - print(f"train has {len(train_set):,} samples") - print(f"val has {len(test_set):,} samples") - - print("Processing train split ...") - train_set = [prepare_sample(sample, tokenizer, max_seq_length, mask_inputs) for sample in tqdm(train_set)] - torch.save(train_set, file_path.parent / "train.pt") - - print("Processing test split ...") - test_set = [prepare_sample(sample, tokenizer, max_seq_length, mask_inputs) for sample in tqdm(test_set)] - torch.save(test_set, file_path.parent / "test.pt") - - -def download(file_path: Path): - """Downloads the raw json data file and saves it in the given destination.""" - if file_path.exists(): - return - with open(file_path, "w") as f: - f.write(requests.get(DATA_FILE).text) - - -def prepare_sample(example: dict, tokenizer: Tokenizer, max_length: int, mask_inputs: bool = True): - """Processes a single sample. - - Each sample in the dataset consists of: - - instruction: A string describing the task - - input: A string holding a special input value for the instruction. - This only applies to some samples, and in others this is empty. - - output: The response string - - This function processes this data to produce a prompt text and a label for - supervised training. The prompt text is formed as a single message including both - the instruction and the input. The label/target is the same message but with the - response attached. - - Finally, both the prompt and the label get tokenized. If desired, all tokens - in the label that correspond to the original input prompt get masked out (default). - """ - full_prompt = generate_prompt(example) - full_prompt_and_response = full_prompt + example["output"] - encoded_full_prompt = tokenize(tokenizer, full_prompt, max_length=max_length, eos=False) - encoded_full_prompt_and_response = tokenize(tokenizer, full_prompt_and_response, eos=True, max_length=max_length) - - # The labels are the full prompt with response, but with the prompt masked out - labels = encoded_full_prompt_and_response.clone() - if mask_inputs: - labels[:len(encoded_full_prompt)] = IGNORE_INDEX - - return {**example, "input_ids": encoded_full_prompt_and_response, "input_ids_no_response": encoded_full_prompt, "labels": labels} - - -def tokenize(tokenizer: Tokenizer, string: str, max_length: int, eos=True) -> torch.Tensor: - return tokenizer.encode(string, bos=True, eos=eos, max_length=max_length) - - -def generate_prompt(example): - """Generates a standardized message to prompt the model with an instruction, optional input and a - 'response' field.""" - - if example["input"]: - return ( - f"Below is an instruction that describes a task, paired with an input that provides further context. " - "Write a response that appropriately completes the request.\n\n" - f"### Instruction:\n{example['instruction']}\n\n### Input:\n{example['input']}\n\n### Response:" - ) - return ( - f"Below is an instruction that describes a task. 
" - "Write a response that appropriately completes the request.\n\n" - f"### Instruction:\n{example['instruction']}\n\n### Response:" - ) - - -if __name__ == "__main__": - from jsonargparse import CLI - - CLI(prepare) diff --git a/spaces/yamashiro3/Whisper-gpt-voicescribe/README.md b/spaces/yamashiro3/Whisper-gpt-voicescribe/README.md deleted file mode 100644 index 6662e2d77003f102ad8b9640a7a8f461aadc2025..0000000000000000000000000000000000000000 --- a/spaces/yamashiro3/Whisper-gpt-voicescribe/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Whisper Gpt Voicescribe -emoji: 🏃 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.28.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/data/processors/glue.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/data/processors/glue.py deleted file mode 100644 index 3d22968c9d06323c7c1cd4b00e5fcd2e6cf3f35d..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/data/processors/glue.py +++ /dev/null @@ -1,643 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" GLUE processors and helpers""" - -import os -import warnings -from dataclasses import asdict -from enum import Enum -from typing import List, Optional, Union - -from ...tokenization_utils import PreTrainedTokenizer -from ...utils import is_tf_available, logging -from .utils import DataProcessor, InputExample, InputFeatures - - -if is_tf_available(): - import tensorflow as tf - -logger = logging.get_logger(__name__) - -DEPRECATION_WARNING = ( - "This {0} will be removed from the library soon, preprocessing should be handled with the 🤗 Datasets " - "library. You can have a look at this example script for pointers: " - "https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py" -) - - -def glue_convert_examples_to_features( - examples: Union[List[InputExample], "tf.data.Dataset"], - tokenizer: PreTrainedTokenizer, - max_length: Optional[int] = None, - task=None, - label_list=None, - output_mode=None, -): - """ - Loads a data file into a list of `InputFeatures` - - Args: - examples: List of `InputExamples` or `tf.data.Dataset` containing the examples. - tokenizer: Instance of a tokenizer that will tokenize the examples - max_length: Maximum example length. Defaults to the tokenizer's max_len - task: GLUE task - label_list: List of labels. Can be obtained from the processor using the `processor.get_labels()` method - output_mode: String indicating the output mode. 
Either `regression` or `classification` - - Returns: - If the `examples` input is a `tf.data.Dataset`, will return a `tf.data.Dataset` containing the task-specific - features. If the input is a list of `InputExamples`, will return a list of task-specific `InputFeatures` which - can be fed to the model. - - """ - warnings.warn(DEPRECATION_WARNING.format("function"), FutureWarning) - if is_tf_available() and isinstance(examples, tf.data.Dataset): - if task is None: - raise ValueError("When calling glue_convert_examples_to_features from TF, the task parameter is required.") - return _tf_glue_convert_examples_to_features(examples, tokenizer, max_length=max_length, task=task) - return _glue_convert_examples_to_features( - examples, tokenizer, max_length=max_length, task=task, label_list=label_list, output_mode=output_mode - ) - - -if is_tf_available(): - - def _tf_glue_convert_examples_to_features( - examples: tf.data.Dataset, - tokenizer: PreTrainedTokenizer, - task=str, - max_length: Optional[int] = None, - ) -> tf.data.Dataset: - """ - Returns: - A `tf.data.Dataset` containing the task-specific features. - - """ - processor = glue_processors[task]() - examples = [processor.tfds_map(processor.get_example_from_tensor_dict(example)) for example in examples] - features = glue_convert_examples_to_features(examples, tokenizer, max_length=max_length, task=task) - label_type = tf.float32 if task == "sts-b" else tf.int64 - - def gen(): - for ex in features: - d = {k: v for k, v in asdict(ex).items() if v is not None} - label = d.pop("label") - yield (d, label) - - input_names = tokenizer.model_input_names - - return tf.data.Dataset.from_generator( - gen, - ({k: tf.int32 for k in input_names}, label_type), - ({k: tf.TensorShape([None]) for k in input_names}, tf.TensorShape([])), - ) - - -def _glue_convert_examples_to_features( - examples: List[InputExample], - tokenizer: PreTrainedTokenizer, - max_length: Optional[int] = None, - task=None, - label_list=None, - output_mode=None, -): - if max_length is None: - max_length = tokenizer.model_max_length - - if task is not None: - processor = glue_processors[task]() - if label_list is None: - label_list = processor.get_labels() - logger.info(f"Using label list {label_list} for task {task}") - if output_mode is None: - output_mode = glue_output_modes[task] - logger.info(f"Using output mode {output_mode} for task {task}") - - label_map = {label: i for i, label in enumerate(label_list)} - - def label_from_example(example: InputExample) -> Union[int, float, None]: - if example.label is None: - return None - if output_mode == "classification": - return label_map[example.label] - elif output_mode == "regression": - return float(example.label) - raise KeyError(output_mode) - - labels = [label_from_example(example) for example in examples] - - batch_encoding = tokenizer( - [(example.text_a, example.text_b) for example in examples], - max_length=max_length, - padding="max_length", - truncation=True, - ) - - features = [] - for i in range(len(examples)): - inputs = {k: batch_encoding[k][i] for k in batch_encoding} - - feature = InputFeatures(**inputs, label=labels[i]) - features.append(feature) - - for i, example in enumerate(examples[:5]): - logger.info("*** Example ***") - logger.info(f"guid: {example.guid}") - logger.info(f"features: {features[i]}") - - return features - - -class OutputMode(Enum): - classification = "classification" - regression = "regression" - - -class MrpcProcessor(DataProcessor): - """Processor for the MRPC data set (GLUE version).""" - - def 
__init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warnings.warn(DEPRECATION_WARNING.format("processor"), FutureWarning) - - def get_example_from_tensor_dict(self, tensor_dict): - """See base class.""" - return InputExample( - tensor_dict["idx"].numpy(), - tensor_dict["sentence1"].numpy().decode("utf-8"), - tensor_dict["sentence2"].numpy().decode("utf-8"), - str(tensor_dict["label"].numpy()), - ) - - def get_train_examples(self, data_dir): - """See base class.""" - logger.info(f"LOOKING AT {os.path.join(data_dir, 'train.tsv')}") - return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train") - - def get_dev_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev") - - def get_test_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "test.tsv")), "test") - - def get_labels(self): - """See base class.""" - return ["0", "1"] - - def _create_examples(self, lines, set_type): - """Creates examples for the training, dev and test sets.""" - examples = [] - for i, line in enumerate(lines): - if i == 0: - continue - guid = f"{set_type}-{i}" - text_a = line[3] - text_b = line[4] - label = None if set_type == "test" else line[0] - examples.append(InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label)) - return examples - - -class MnliProcessor(DataProcessor): - """Processor for the MultiNLI data set (GLUE version).""" - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warnings.warn(DEPRECATION_WARNING.format("processor"), FutureWarning) - - def get_example_from_tensor_dict(self, tensor_dict): - """See base class.""" - return InputExample( - tensor_dict["idx"].numpy(), - tensor_dict["premise"].numpy().decode("utf-8"), - tensor_dict["hypothesis"].numpy().decode("utf-8"), - str(tensor_dict["label"].numpy()), - ) - - def get_train_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train") - - def get_dev_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev_matched.tsv")), "dev_matched") - - def get_test_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "test_matched.tsv")), "test_matched") - - def get_labels(self): - """See base class.""" - return ["contradiction", "entailment", "neutral"] - - def _create_examples(self, lines, set_type): - """Creates examples for the training, dev and test sets.""" - examples = [] - for i, line in enumerate(lines): - if i == 0: - continue - guid = f"{set_type}-{line[0]}" - text_a = line[8] - text_b = line[9] - label = None if set_type.startswith("test") else line[-1] - examples.append(InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label)) - return examples - - -class MnliMismatchedProcessor(MnliProcessor): - """Processor for the MultiNLI Mismatched data set (GLUE version).""" - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warnings.warn(DEPRECATION_WARNING.format("processor"), FutureWarning) - - def get_dev_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev_mismatched.tsv")), "dev_mismatched") - - def get_test_examples(self, data_dir): - """See base class.""" - return 
self._create_examples(self._read_tsv(os.path.join(data_dir, "test_mismatched.tsv")), "test_mismatched") - - -class ColaProcessor(DataProcessor): - """Processor for the CoLA data set (GLUE version).""" - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warnings.warn(DEPRECATION_WARNING.format("processor"), FutureWarning) - - def get_example_from_tensor_dict(self, tensor_dict): - """See base class.""" - return InputExample( - tensor_dict["idx"].numpy(), - tensor_dict["sentence"].numpy().decode("utf-8"), - None, - str(tensor_dict["label"].numpy()), - ) - - def get_train_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train") - - def get_dev_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev") - - def get_test_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "test.tsv")), "test") - - def get_labels(self): - """See base class.""" - return ["0", "1"] - - def _create_examples(self, lines, set_type): - """Creates examples for the training, dev and test sets.""" - test_mode = set_type == "test" - if test_mode: - lines = lines[1:] - text_index = 1 if test_mode else 3 - examples = [] - for i, line in enumerate(lines): - guid = f"{set_type}-{i}" - text_a = line[text_index] - label = None if test_mode else line[1] - examples.append(InputExample(guid=guid, text_a=text_a, text_b=None, label=label)) - return examples - - -class Sst2Processor(DataProcessor): - """Processor for the SST-2 data set (GLUE version).""" - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warnings.warn(DEPRECATION_WARNING.format("processor"), FutureWarning) - - def get_example_from_tensor_dict(self, tensor_dict): - """See base class.""" - return InputExample( - tensor_dict["idx"].numpy(), - tensor_dict["sentence"].numpy().decode("utf-8"), - None, - str(tensor_dict["label"].numpy()), - ) - - def get_train_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train") - - def get_dev_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev") - - def get_test_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "test.tsv")), "test") - - def get_labels(self): - """See base class.""" - return ["0", "1"] - - def _create_examples(self, lines, set_type): - """Creates examples for the training, dev and test sets.""" - examples = [] - text_index = 1 if set_type == "test" else 0 - for i, line in enumerate(lines): - if i == 0: - continue - guid = f"{set_type}-{i}" - text_a = line[text_index] - label = None if set_type == "test" else line[1] - examples.append(InputExample(guid=guid, text_a=text_a, text_b=None, label=label)) - return examples - - -class StsbProcessor(DataProcessor): - """Processor for the STS-B data set (GLUE version).""" - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warnings.warn(DEPRECATION_WARNING.format("processor"), FutureWarning) - - def get_example_from_tensor_dict(self, tensor_dict): - """See base class.""" - return InputExample( - tensor_dict["idx"].numpy(), - tensor_dict["sentence1"].numpy().decode("utf-8"), - tensor_dict["sentence2"].numpy().decode("utf-8"), - 
str(tensor_dict["label"].numpy()), - ) - - def get_train_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train") - - def get_dev_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev") - - def get_test_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "test.tsv")), "test") - - def get_labels(self): - """See base class.""" - return [None] - - def _create_examples(self, lines, set_type): - """Creates examples for the training, dev and test sets.""" - examples = [] - for i, line in enumerate(lines): - if i == 0: - continue - guid = f"{set_type}-{line[0]}" - text_a = line[7] - text_b = line[8] - label = None if set_type == "test" else line[-1] - examples.append(InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label)) - return examples - - -class QqpProcessor(DataProcessor): - """Processor for the QQP data set (GLUE version).""" - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warnings.warn(DEPRECATION_WARNING.format("processor"), FutureWarning) - - def get_example_from_tensor_dict(self, tensor_dict): - """See base class.""" - return InputExample( - tensor_dict["idx"].numpy(), - tensor_dict["question1"].numpy().decode("utf-8"), - tensor_dict["question2"].numpy().decode("utf-8"), - str(tensor_dict["label"].numpy()), - ) - - def get_train_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train") - - def get_dev_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev") - - def get_test_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "test.tsv")), "test") - - def get_labels(self): - """See base class.""" - return ["0", "1"] - - def _create_examples(self, lines, set_type): - """Creates examples for the training, dev and test sets.""" - test_mode = set_type == "test" - q1_index = 1 if test_mode else 3 - q2_index = 2 if test_mode else 4 - examples = [] - for i, line in enumerate(lines): - if i == 0: - continue - guid = f"{set_type}-{line[0]}" - try: - text_a = line[q1_index] - text_b = line[q2_index] - label = None if test_mode else line[5] - except IndexError: - continue - examples.append(InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label)) - return examples - - -class QnliProcessor(DataProcessor): - """Processor for the QNLI data set (GLUE version).""" - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warnings.warn(DEPRECATION_WARNING.format("processor"), FutureWarning) - - def get_example_from_tensor_dict(self, tensor_dict): - """See base class.""" - return InputExample( - tensor_dict["idx"].numpy(), - tensor_dict["question"].numpy().decode("utf-8"), - tensor_dict["sentence"].numpy().decode("utf-8"), - str(tensor_dict["label"].numpy()), - ) - - def get_train_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train") - - def get_dev_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev") - - def get_test_examples(self, data_dir): - """See base class.""" - return 
self._create_examples(self._read_tsv(os.path.join(data_dir, "test.tsv")), "test") - - def get_labels(self): - """See base class.""" - return ["entailment", "not_entailment"] - - def _create_examples(self, lines, set_type): - """Creates examples for the training, dev and test sets.""" - examples = [] - for i, line in enumerate(lines): - if i == 0: - continue - guid = f"{set_type}-{line[0]}" - text_a = line[1] - text_b = line[2] - label = None if set_type == "test" else line[-1] - examples.append(InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label)) - return examples - - -class RteProcessor(DataProcessor): - """Processor for the RTE data set (GLUE version).""" - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warnings.warn(DEPRECATION_WARNING.format("processor"), FutureWarning) - - def get_example_from_tensor_dict(self, tensor_dict): - """See base class.""" - return InputExample( - tensor_dict["idx"].numpy(), - tensor_dict["sentence1"].numpy().decode("utf-8"), - tensor_dict["sentence2"].numpy().decode("utf-8"), - str(tensor_dict["label"].numpy()), - ) - - def get_train_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train") - - def get_dev_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev") - - def get_test_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "test.tsv")), "test") - - def get_labels(self): - """See base class.""" - return ["entailment", "not_entailment"] - - def _create_examples(self, lines, set_type): - """Creates examples for the training, dev and test sets.""" - examples = [] - for i, line in enumerate(lines): - if i == 0: - continue - guid = f"{set_type}-{line[0]}" - text_a = line[1] - text_b = line[2] - label = None if set_type == "test" else line[-1] - examples.append(InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label)) - return examples - - -class WnliProcessor(DataProcessor): - """Processor for the WNLI data set (GLUE version).""" - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - warnings.warn(DEPRECATION_WARNING.format("processor"), FutureWarning) - - def get_example_from_tensor_dict(self, tensor_dict): - """See base class.""" - return InputExample( - tensor_dict["idx"].numpy(), - tensor_dict["sentence1"].numpy().decode("utf-8"), - tensor_dict["sentence2"].numpy().decode("utf-8"), - str(tensor_dict["label"].numpy()), - ) - - def get_train_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "train.tsv")), "train") - - def get_dev_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev") - - def get_test_examples(self, data_dir): - """See base class.""" - return self._create_examples(self._read_tsv(os.path.join(data_dir, "test.tsv")), "test") - - def get_labels(self): - """See base class.""" - return ["0", "1"] - - def _create_examples(self, lines, set_type): - """Creates examples for the training, dev and test sets.""" - examples = [] - for i, line in enumerate(lines): - if i == 0: - continue - guid = f"{set_type}-{line[0]}" - text_a = line[1] - text_b = line[2] - label = None if set_type == "test" else line[-1] - examples.append(InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label)) - 
return examples - - -glue_tasks_num_labels = { - "cola": 2, - "mnli": 3, - "mrpc": 2, - "sst-2": 2, - "sts-b": 1, - "qqp": 2, - "qnli": 2, - "rte": 2, - "wnli": 2, -} - -glue_processors = { - "cola": ColaProcessor, - "mnli": MnliProcessor, - "mnli-mm": MnliMismatchedProcessor, - "mrpc": MrpcProcessor, - "sst-2": Sst2Processor, - "sts-b": StsbProcessor, - "qqp": QqpProcessor, - "qnli": QnliProcessor, - "rte": RteProcessor, - "wnli": WnliProcessor, -} - -glue_output_modes = { - "cola": "classification", - "mnli": "classification", - "mnli-mm": "classification", - "mrpc": "classification", - "sst-2": "classification", - "sts-b": "regression", - "qqp": "classification", - "qnli": "classification", - "rte": "classification", - "wnli": "classification", -} diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/ctrl/modeling_tf_ctrl.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/ctrl/modeling_tf_ctrl.py deleted file mode 100644 index 70a5c17462595a195d4099d34899c0e7b1f58cb8..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/ctrl/modeling_tf_ctrl.py +++ /dev/null @@ -1,838 +0,0 @@ -# coding=utf-8 -# Copyright 2018 Salesforce and HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-""" TF 2.0 CTRL model.""" - -from __future__ import annotations - -from typing import Optional, Tuple, Union - -import numpy as np -import tensorflow as tf - -from ...modeling_tf_outputs import TFBaseModelOutputWithPast, TFCausalLMOutputWithPast, TFSequenceClassifierOutput -from ...modeling_tf_utils import ( - TFCausalLanguageModelingLoss, - TFModelInputType, - TFPreTrainedModel, - TFSequenceClassificationLoss, - get_initializer, - keras_serializable, - unpack_inputs, -) -from ...tf_utils import check_embeddings_within_bounds, shape_list, stable_softmax -from ...utils import add_code_sample_docstrings, add_start_docstrings, add_start_docstrings_to_model_forward, logging -from .configuration_ctrl import CTRLConfig - - -logger = logging.get_logger(__name__) - -_CHECKPOINT_FOR_DOC = "Salesforce/ctrl" -_CONFIG_FOR_DOC = "CTRLConfig" - -TF_CTRL_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "Salesforce/ctrl" - # See all CTRL models at https://huggingface.co/models?filter=ctrl -] - - -def angle_defn(pos, i, d_model_size): - angle_rates = 1 / np.power(10000, (2 * (i // 2)) / d_model_size) - return pos * angle_rates - - -def positional_encoding(position, d_model_size): - # create the sinusoidal pattern for the positional encoding - angle_rads = angle_defn(np.arange(position)[:, np.newaxis], np.arange(d_model_size)[np.newaxis, :], d_model_size) - - sines = np.sin(angle_rads[:, 0::2]) - cosines = np.cos(angle_rads[:, 1::2]) - pos_encoding = tf.convert_to_tensor(np.concatenate([sines, cosines], axis=-1)) - - return pos_encoding - - -def scaled_dot_product_attention(q, k, v, mask, attention_mask=None, head_mask=None): - # calculate attention - matmul_qk = tf.matmul(q, k, transpose_b=True) - - dk = tf.cast(shape_list(k)[-1], dtype=matmul_qk.dtype) - scaled_attention_logits = matmul_qk / tf.math.sqrt(dk) - - if mask is not None: - scaled_attention_logits += tf.cast(mask * -1e4, dtype=scaled_attention_logits.dtype) - - if attention_mask is not None: - # Apply the attention mask - attention_mask = tf.cast(attention_mask, dtype=scaled_attention_logits.dtype) - scaled_attention_logits = scaled_attention_logits + attention_mask - - attention_weights = stable_softmax(scaled_attention_logits, axis=-1) - - # Mask heads if we want to - if head_mask is not None: - attention_weights = attention_weights * head_mask - - output = tf.matmul(attention_weights, v) - - return output, attention_weights - - -class TFMultiHeadAttention(tf.keras.layers.Layer): - def __init__(self, d_model_size, num_heads, output_attentions=False, **kwargs): - super().__init__(**kwargs) - self.num_heads = num_heads - self.d_model_size = d_model_size - self.output_attentions = output_attentions - - self.depth = int(d_model_size / self.num_heads) - - self.Wq = tf.keras.layers.Dense(d_model_size, name="Wq") - self.Wk = tf.keras.layers.Dense(d_model_size, name="Wk") - self.Wv = tf.keras.layers.Dense(d_model_size, name="Wv") - - self.dense = tf.keras.layers.Dense(d_model_size, name="dense") - - def split_into_heads(self, x, batch_size): - x = tf.reshape(x, (batch_size, -1, self.num_heads, self.depth)) - return tf.transpose(x, perm=[0, 2, 1, 3]) - - def call(self, v, k, q, mask, layer_past, attention_mask, head_mask, use_cache, output_attentions, training=False): - batch_size = shape_list(q)[0] - - q = self.Wq(q) - k = self.Wk(k) - v = self.Wv(v) - - q = self.split_into_heads(q, batch_size) - k = self.split_into_heads(k, batch_size) - v = self.split_into_heads(v, batch_size) - - if layer_past is not None: - past_key, past_value = tf.unstack(layer_past, 
axis=0) - k = tf.concat((past_key, k), axis=-2) - v = tf.concat((past_value, v), axis=-2) - - if use_cache: - present = tf.stack((k, v), axis=0) - else: - present = (None,) - - output = scaled_dot_product_attention(q, k, v, mask, attention_mask, head_mask) - scaled_attention = tf.transpose(output[0], perm=[0, 2, 1, 3]) - attn = output[1] - original_size_attention = tf.reshape(scaled_attention, (batch_size, -1, self.d_model_size)) - output = self.dense(original_size_attention) - outputs = (output, present) - - if output_attentions: - outputs = outputs + (attn,) - - return outputs - - -class TFPointWiseFeedForwardLayer(tf.keras.layers.Layer): - def __init__(self, d_model_size, dff, **kwargs): - super().__init__(**kwargs) - - self.dense_0 = tf.keras.layers.Dense(dff, activation="relu", name="0") - self.dense_2 = tf.keras.layers.Dense(d_model_size, name="2") - - def call(self, inputs, trainable=False): - dense_0_output = self.dense_0(inputs) - dense_2_output = self.dense_2(dense_0_output) - - return dense_2_output - - -class TFEncoderLayer(tf.keras.layers.Layer): - def __init__( - self, d_model_size, num_heads, dff, rate=0.1, layer_norm_epsilon=1e-6, output_attentions=False, **kwargs - ): - super().__init__(**kwargs) - - self.output_attentions = output_attentions - - self.multi_head_attention = TFMultiHeadAttention( - d_model_size, num_heads, output_attentions=self.output_attentions, name="multi_head_attention" - ) - self.ffn = TFPointWiseFeedForwardLayer(d_model_size, dff, name="ffn") - - self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=layer_norm_epsilon, name="layernorm1") - self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=layer_norm_epsilon, name="layernorm2") - - self.dropout1 = tf.keras.layers.Dropout(rate) - self.dropout2 = tf.keras.layers.Dropout(rate) - - def call(self, x, mask, layer_past, attention_mask, head_mask, use_cache, output_attentions, training=False): - normed = self.layernorm1(x) - attn_outputs = self.multi_head_attention( - normed, - normed, - normed, - mask, - layer_past, - attention_mask, - head_mask, - use_cache, - output_attentions, - training=training, - ) - attn_output = attn_outputs[0] - attn_output = self.dropout1(attn_output, training=training) - out1 = x + attn_output - - out2 = self.layernorm2(out1) - ffn_output = self.ffn(out2) - ffn_output = self.dropout2(ffn_output, training=training) - out2 = out1 + ffn_output - - outputs = (out2,) + attn_outputs[1:] - return outputs - - -@keras_serializable -class TFCTRLMainLayer(tf.keras.layers.Layer): - config_class = CTRLConfig - - def __init__(self, config, **kwargs): - super().__init__(**kwargs) - - self.config = config - self.output_hidden_states = config.output_hidden_states - self.output_attentions = config.output_attentions - self.use_cache = config.use_cache - self.return_dict = config.use_return_dict - - self.d_model_size = config.n_embd - self.num_layers = config.n_layer - - self.pos_encoding = positional_encoding(config.n_positions, self.d_model_size) - - self.w = tf.keras.layers.Embedding( - input_dim=config.vocab_size, - output_dim=config.n_embd, - embeddings_initializer=get_initializer(config.initializer_range), - name="w", - ) - - self.dropout = tf.keras.layers.Dropout(config.embd_pdrop) - self.h = [ - TFEncoderLayer( - config.n_embd, - config.n_head, - config.dff, - config.resid_pdrop, - config.layer_norm_epsilon, - self.output_attentions, - name=f"h_._{i}", - ) - for i in range(config.n_layer) - ] - self.layernorm = 
tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_epsilon, name="layernorm") - - def get_input_embeddings(self): - return self.w - - def set_input_embeddings(self, new_embeddings): - self.w = new_embeddings - - def _prune_heads(self, heads_to_prune): - """ - Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} - """ - raise NotImplementedError - - @unpack_inputs - def call( - self, - input_ids: TFModelInputType | None = None, - past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - inputs_embeds: np.ndarray | tf.Tensor | None = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: Optional[bool] = False, - ) -> Union[Tuple, TFBaseModelOutputWithPast]: - # If using past key value states, only the last tokens - # should be given as an input - if past_key_values is not None: - if input_ids is not None: - input_ids = input_ids[:, -1:] - if inputs_embeds is not None: - inputs_embeds = inputs_embeds[:, -1:] - if token_type_ids is not None: - token_type_ids = token_type_ids[:, -1:] - - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - input_shape = shape_list(input_ids) - input_ids = tf.reshape(input_ids, [-1, input_shape[-1]]) - elif inputs_embeds is not None: - input_shape = shape_list(inputs_embeds)[:-1] - else: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - if past_key_values is None: - past_length = 0 - past_key_values = [None] * len(self.h) - else: - past_length = shape_list(past_key_values[0][0])[-2] - if position_ids is None: - position_ids = tf.expand_dims(tf.range(past_length, input_shape[-1] + past_length, dtype=tf.int32), axis=0) - position_ids = tf.tile(position_ids, [input_shape[0], 1]) - - # Attention mask. - if attention_mask is not None: - # We create a 3D attention mask from a 2D tensor mask. - # Sizes are [batch_size, 1, 1, to_seq_length] - # So we can broadcast to [batch_size, num_heads, from_seq_length, to_seq_length] - # this attention mask is more simple than the triangular masking of causal attention - # used in OpenAI GPT, we just need to prepare the broadcast dimension here. - attention_mask = tf.reshape(attention_mask, (input_shape[0], 1, 1, input_shape[1] + past_length)) - - # Since attention_mask is 1.0 for positions we want to attend and 0.0 for - # masked positions, this operation will create a tensor which is 0.0 for - # positions we want to attend and -10000.0 for masked positions. - # Since we are adding it to the raw scores before the softmax, this is - # effectively the same as removing these entirely. 
- - one_cst = tf.constant(1.0) - ten_thousand_cst = tf.constant(-10000.0) - attention_mask = tf.cast(attention_mask, dtype=one_cst.dtype) - attention_mask = tf.multiply(tf.subtract(one_cst, attention_mask), ten_thousand_cst) - - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # head_mask has shape n_layer x batch x n_heads x N x N - if head_mask is not None: - raise NotImplementedError - else: - head_mask = [None] * self.num_layers - - if token_type_ids is not None: - token_type_ids = tf.reshape(token_type_ids, [-1, shape_list(token_type_ids)[-1]]) - token_type_embeds = self.w(token_type_ids) - token_type_embeds *= tf.math.sqrt(tf.cast(self.d_model_size, dtype=token_type_embeds.dtype)) - else: - token_type_embeds = tf.constant(0.0) - position_ids = tf.reshape(position_ids, [-1, shape_list(position_ids)[-1]]) - - if inputs_embeds is None: - check_embeddings_within_bounds(input_ids, self.w.input_dim) - inputs_embeds = self.w(input_ids) - seq_len = input_shape[-1] - mask = 1 - tf.linalg.band_part(tf.ones((seq_len, seq_len)), -1, 0) - - inputs_embeds *= tf.math.sqrt(tf.cast(self.d_model_size, inputs_embeds.dtype)) - - pos_embeds = tf.gather(self.pos_encoding, position_ids) - pos_embeds = tf.cast(pos_embeds, dtype=token_type_embeds.dtype) - hidden_states = inputs_embeds + pos_embeds + token_type_embeds - - hidden_states = self.dropout(hidden_states, training=training) - - output_shape = input_shape + [shape_list(hidden_states)[-1]] - presents = () if use_cache else None - all_hidden_states = () if output_hidden_states else None - all_attentions = () if output_attentions else None - for i, (h, layer_past) in enumerate(zip(self.h, past_key_values)): - if output_hidden_states: - all_hidden_states = all_hidden_states + (tf.reshape(hidden_states, output_shape),) - outputs = h( - hidden_states, - mask, - layer_past, - attention_mask, - head_mask[i], - use_cache, - output_attentions, - training=training, - ) - hidden_states, present = outputs[:2] - - if use_cache: - presents = presents + (present,) - - if output_attentions: - all_attentions = all_attentions + (outputs[2],) - - hidden_states = self.layernorm(hidden_states) - hidden_states = tf.reshape(hidden_states, output_shape) - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if output_attentions: - # let the number of heads free (-1) so we can extract attention even after head pruning - attention_output_shape = input_shape[:-1] + [-1] + shape_list(all_attentions[0])[-2:] - all_attentions = tuple(tf.reshape(t, attention_output_shape) for t in all_attentions) - - if not return_dict: - return tuple(v for v in [hidden_states, presents, all_hidden_states, all_attentions] if v is not None) - - return TFBaseModelOutputWithPast( - last_hidden_state=hidden_states, - past_key_values=presents, - hidden_states=all_hidden_states, - attentions=all_attentions, - ) - - -class TFCTRLPreTrainedModel(TFPreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = CTRLConfig - base_model_prefix = "transformer" - - -CTRL_START_DOCSTRING = r""" - - This model inherits from [`TFPreTrainedModel`]. Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) 
- - This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it - as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and - behavior. - - - - TensorFlow models and layers in `transformers` accept two formats as input: - - - having all inputs as keyword arguments (like PyTorch models), or - - having all inputs as a list, tuple or dict in the first positional argument. - - The reason the second format is supported is that Keras methods prefer this format when passing inputs to models - and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just - pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second - format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with - the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first - positional argument: - - - a single Tensor with `input_ids` only and nothing else: `model(input_ids)` - - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: - `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` - - a dictionary with one or several input Tensors associated to the input names given in the docstring: - `model({"input_ids": input_ids, "token_type_ids": token_type_ids})` - - Note that when creating models and layers with - [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry - about any of this, as you can just pass inputs like you would to any other Python function! - - - - Parameters: - config ([`CTRLConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -CTRL_INPUTS_DOCSTRING = r""" - Args: - input_ids (`Numpy array` or `tf.Tensor` of shape `(batch_size, input_ids_length)`): - `input_ids_length` = `sequence_length` if `past` is `None` else `past[0].shape[-2]` (`sequence_length` of - input past key value states). - - Indices of input sequence tokens in the vocabulary. - - If `past` is used, only input IDs that do not have their past calculated should be passed as `input_ids`. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.__call__`] and - [`PreTrainedTokenizer.encode`] for details. - - [What are input IDs?](../glossary#input-ids) - past (`List[tf.Tensor]` of length `config.n_layers`): - Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see - `past` output below). Can be used to speed up sequential decoding. The token ids which have their past - given to this model should not be passed as input ids as they have already been computed. - attention_mask (`tf.Tensor` or `Numpy array` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. 
- - [What are attention masks?](../glossary#attention-mask) - token_type_ids (`tf.Tensor` or `Numpy array` of shape `(batch_size, sequence_length)`, *optional*): - Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, - 1]`: - - - 0 corresponds to a *sentence A* token, - - 1 corresponds to a *sentence B* token. - - [What are token type IDs?](../glossary#token-type-ids) - position_ids (`tf.Tensor` or `Numpy array` of shape `(batch_size, sequence_length)`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.max_position_embeddings - 1]`. - - [What are position IDs?](../glossary#position-ids) - head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - inputs_embeds (`tf.Tensor` or `Numpy array` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This - is useful if you want more control over how to convert `input_ids` indices into associated vectors than the - model's internal embedding lookup matrix. - use_cache (`bool`, *optional*): - If set to `True`, `past` key value states are returned and can be used to speed up decoding (see `past`). - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the - config will be used instead. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. This argument can be used only in eager mode, in graph mode the value in the config will be - used instead. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. This argument can be used in - eager mode, in graph mode the value will always be set to True. - training (`bool`, *optional*, defaults to `False`): - Whether or not to use the model in training mode (some modules like dropout modules have different - behaviors between training and evaluation). 
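-
-    Example (a minimal usage sketch of the two accepted input formats; the `Salesforce/ctrl` checkpoint name is
-    only illustrative):
-
-    ```python
-    >>> from transformers import AutoTokenizer, TFCTRLModel
-
-    >>> tokenizer = AutoTokenizer.from_pretrained("Salesforce/ctrl")
-    >>> model = TFCTRLModel.from_pretrained("Salesforce/ctrl")
-
-    >>> inputs = tokenizer("Opinion My dog is cute", return_tensors="tf")
-
-    >>> # all inputs as keyword arguments
-    >>> outputs = model(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
-
-    >>> # all inputs as a single dictionary in the first positional argument
-    >>> outputs = model({"input_ids": inputs["input_ids"], "attention_mask": inputs["attention_mask"]})
-
-    >>> last_hidden_state = outputs.last_hidden_state
-    ```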
-""" - - -@add_start_docstrings( - "The bare CTRL Model transformer outputting raw hidden-states without any specific head on top.", - CTRL_START_DOCSTRING, -) -class TFCTRLModel(TFCTRLPreTrainedModel): - def __init__(self, config, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - self.transformer = TFCTRLMainLayer(config, name="transformer") - - @unpack_inputs - @add_start_docstrings_to_model_forward(CTRL_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TFBaseModelOutputWithPast, - config_class=_CONFIG_FOR_DOC, - ) - def call( - self, - input_ids: TFModelInputType | None = None, - past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - inputs_embeds: np.ndarray | tf.Tensor | None = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: Optional[bool] = False, - ) -> Union[Tuple, TFBaseModelOutputWithPast]: - outputs = self.transformer( - input_ids=input_ids, - past_key_values=past_key_values, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - return outputs - - -class TFCTRLBiasLayer(tf.keras.layers.Layer): - """ - Bias as a layer. It is used for serialization purposes: `tf.keras.Model.save_weights` stores on a per-layer basis, - so all weights have to be registered in a layer. - """ - - def __init__(self, shape, initializer, trainable, name, **kwargs): - super().__init__(name=name, **kwargs) - self.shape = shape - self.initializer = initializer - self.trainable = trainable - - def build(self, input_shape): - self.bias = self.add_weight( - name="bias", shape=self.shape, initializer=self.initializer, trainable=self.trainable - ) - super().build(input_shape) - - def call(self, x): - return x + self.bias - - -@add_start_docstrings( - """ - The CTRL Model transformer with a language modeling head on top (linear layer with weights tied to the input - embeddings). - """, - CTRL_START_DOCSTRING, -) -class TFCTRLLMHeadModel(TFCTRLPreTrainedModel, TFCausalLanguageModelingLoss): - def __init__(self, config, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - self.transformer = TFCTRLMainLayer(config, name="transformer") - self.bias_layer = TFCTRLBiasLayer( - name="lm_head", shape=[1, config.vocab_size], initializer="zeros", trainable=True - ) - - def get_output_embeddings(self): - return self.get_input_embeddings() - - def set_output_embeddings(self, value): - self.set_input_embeddings(value) - - def get_bias(self): - return {"lm_head.bias": self.bias_layer.bias} - - def set_bias(self, value): - # Replaces the existing layers containing bias for correct (de)serialization. 
- vocab_size = value["lm_head.bias"].shape[-1] - self.bias_layer = TFCTRLBiasLayer( - name="final_logits_bias", shape=[1, vocab_size], initializer="zeros", trainable=True - ) - self.bias_layer.build(None) - self.bias_layer.bias.assign(value["lm_head.bias"]) - - # Copied from transformers.models.gpt2.modeling_tf_gpt2.TFGPT2LMHeadModel.prepare_inputs_for_generation - def prepare_inputs_for_generation(self, inputs, past_key_values=None, use_cache=None, **kwargs): - token_type_ids = kwargs.get("token_type_ids", None) - # only last token for inputs_ids if past is defined in kwargs - if past_key_values: - inputs = tf.expand_dims(inputs[:, -1], -1) - if token_type_ids is not None: - token_type_ids = tf.expand_dims(token_type_ids[:, -1], -1) - - position_ids = kwargs.get("position_ids", None) - attention_mask = kwargs.get("attention_mask", None) - - if attention_mask is not None and position_ids is None: - position_ids = tf.math.cumsum(attention_mask, axis=-1, exclusive=True) - if past_key_values: - position_ids = tf.expand_dims(position_ids[:, -1], -1) - - return { - "input_ids": inputs, - "attention_mask": attention_mask, - "position_ids": position_ids, - "past_key_values": past_key_values, - "use_cache": use_cache, - "token_type_ids": token_type_ids, - } - - @unpack_inputs - @add_start_docstrings_to_model_forward(CTRL_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TFCausalLMOutputWithPast, - config_class=_CONFIG_FOR_DOC, - ) - def call( - self, - input_ids: TFModelInputType | None = None, - past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - inputs_embeds: np.ndarray | tf.Tensor | None = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: np.ndarray | tf.Tensor | None = None, - training: Optional[bool] = False, - ) -> Union[Tuple, TFCausalLMOutputWithPast]: - r""" - labels (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the cross entropy classification loss. Indices should be in `[0, ..., - config.vocab_size - 1]`. 
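-
-        Example (a minimal sketch of computing the language modeling loss; the `Salesforce/ctrl` checkpoint
-        name is only illustrative):
-
-        ```python
-        >>> from transformers import AutoTokenizer, TFCTRLLMHeadModel
-
-        >>> tokenizer = AutoTokenizer.from_pretrained("Salesforce/ctrl")
-        >>> model = TFCTRLLMHeadModel.from_pretrained("Salesforce/ctrl")
-
-        >>> inputs = tokenizer("Links Hello, my dog is cute", return_tensors="tf")
-        >>> # the labels are shifted inside the model, so the input ids can be reused as-is
-        >>> outputs = model(**inputs, labels=inputs["input_ids"])
-        >>> loss, logits = outputs.loss, outputs.logits
-        ```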
- """ - transformer_outputs = self.transformer( - input_ids=input_ids, - past_key_values=past_key_values, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - hidden_states = transformer_outputs[0] - logits = tf.matmul(hidden_states, self.transformer.w.weights, transpose_b=True) - logits = self.bias_layer(logits) - - loss = None - if labels is not None: - # shift labels to the left and cut last logit token - shifted_logits = logits[:, :-1] - labels = labels[:, 1:] - loss = self.hf_compute_loss(labels, shifted_logits) - - if not return_dict: - output = (logits,) + transformer_outputs[1:] - return ((loss,) + output) if loss is not None else output - - return TFCausalLMOutputWithPast( - loss=loss, - logits=logits, - past_key_values=transformer_outputs.past_key_values, - hidden_states=transformer_outputs.hidden_states, - attentions=transformer_outputs.attentions, - ) - - -@add_start_docstrings( - """ - The CTRL Model transformer with a sequence classification head on top (linear layer). - - [`TFCTRLForSequenceClassification`] uses the last token in order to do the classification, as other causal models - (e.g. GPT-1, GPT-2) do. - - Since it does classification on the last token, it requires to know the position of the last token. If a - `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If - no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the - padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in - each row of the batch). - """, - CTRL_START_DOCSTRING, -) -class TFCTRLForSequenceClassification(TFCTRLPreTrainedModel, TFSequenceClassificationLoss): - def __init__(self, config, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - self.num_labels = config.num_labels - self.classifier = tf.keras.layers.Dense( - config.num_labels, - kernel_initializer=get_initializer(config.initializer_range), - name="classifier", - use_bias=False, - ) - self.transformer = TFCTRLMainLayer(config, name="transformer") - - def get_output_embeddings(self): - # Remove after transformers v4.32. Fix this model's `test_model_common_attributes` test too. - logger.warning( - "Sequence classification models do not have output embeddings. `.get_output_embeddings` will be removed " - "in transformers v4.32." 
- ) - return self.transformer.w - - @unpack_inputs - @add_start_docstrings_to_model_forward(CTRL_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TFSequenceClassifierOutput, - config_class=_CONFIG_FOR_DOC, - ) - def call( - self, - input_ids: TFModelInputType | None = None, - past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None, - attention_mask: np.ndarray | tf.Tensor | None = None, - token_type_ids: np.ndarray | tf.Tensor | None = None, - position_ids: np.ndarray | tf.Tensor | None = None, - head_mask: np.ndarray | tf.Tensor | None = None, - inputs_embeds: np.ndarray | tf.Tensor | None = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: np.ndarray | tf.Tensor | None = None, - training: Optional[bool] = False, - ) -> Union[Tuple, TFSequenceClassifierOutput]: - r""" - labels (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the cross entropy classification loss. Indices should be in `[0, ..., - config.vocab_size - 1]`. - """ - - transformer_outputs = self.transformer( - input_ids=input_ids, - past_key_values=past_key_values, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - hidden_states = transformer_outputs[0] - logits = self.classifier(hidden_states) - in_logits = None - if self.config.pad_token_id is None: - sequence_lengths = -1 - else: - if input_ids is not None: - sequence_lengths = ( - tf.argmax(tf.cast(tf.math.equal(input_ids, self.config.pad_token_id), input_ids.dtype), axis=-1) - - 1 - ) - sequence_lengths = tf.where(sequence_lengths >= 0, sequence_lengths, input_ids.shape[-1] - 1) - in_logits = tf.gather(logits, sequence_lengths, batch_dims=1, axis=1) - else: - sequence_lengths = -1 - logger.warning( - f"{self.__class__.__name__} will not detect padding tokens in `inputs_embeds`. 
Results may be " - "unexpected if using padding tokens in conjunction with `inputs_embeds.`" - ) - loss = None - - if labels is not None: - if input_ids is not None: - batch_size, sequence_length = shape_list(input_ids)[:2] - else: - batch_size, sequence_length = shape_list(inputs_embeds)[:2] - if self.config.pad_token_id is None and batch_size != 1: - raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.") - - if not tf.is_tensor(sequence_lengths): - in_logits = logits[0:batch_size, sequence_lengths] - - loss = self.hf_compute_loss(tf.reshape(labels, [-1, 1]), tf.reshape(in_logits, [-1, self.num_labels])) - - pooled_logits = in_logits if in_logits is not None else logits - - if not return_dict: - output = (pooled_logits,) + transformer_outputs[1:] - return ((loss,) + output) if loss is not None else output - - return TFSequenceClassifierOutput( - loss=loss, - logits=pooled_logits, - hidden_states=transformer_outputs.hidden_states, - attentions=transformer_outputs.attentions, - ) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/dpt/convert_dpt_hybrid_to_pytorch.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/dpt/convert_dpt_hybrid_to_pytorch.py deleted file mode 100644 index 0fa69adfaf39d54a8417c21328a30a6f5993eac4..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/dpt/convert_dpt_hybrid_to_pytorch.py +++ /dev/null @@ -1,316 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Convert DPT checkpoints from the original repository. 
URL: https://github.com/isl-org/DPT""" - - -import argparse -import json -from pathlib import Path - -import requests -import torch -from huggingface_hub import cached_download, hf_hub_url -from PIL import Image - -from transformers import DPTConfig, DPTForDepthEstimation, DPTForSemanticSegmentation, DPTImageProcessor -from transformers.utils import logging - - -logging.set_verbosity_info() -logger = logging.get_logger(__name__) - - -def get_dpt_config(checkpoint_url): - config = DPTConfig(embedding_type="hybrid") - - if "large" in checkpoint_url: - config.hidden_size = 1024 - config.intermediate_size = 4096 - config.num_hidden_layers = 24 - config.num_attention_heads = 16 - config.backbone_out_indices = [5, 11, 17, 23] - config.neck_hidden_sizes = [256, 512, 1024, 1024] - expected_shape = (1, 384, 384) - - if "nyu" or "midas" in checkpoint_url: - config.hidden_size = 768 - config.reassemble_factors = [1, 1, 1, 0.5] - config.neck_hidden_sizes = [256, 512, 768, 768] - config.num_labels = 150 - config.patch_size = 16 - expected_shape = (1, 384, 384) - config.use_batch_norm_in_fusion_residual = False - config.readout_type = "project" - - if "ade" in checkpoint_url: - config.use_batch_norm_in_fusion_residual = True - config.hidden_size = 768 - config.reassemble_stage = [1, 1, 1, 0.5] - config.num_labels = 150 - config.patch_size = 16 - repo_id = "huggingface/label-files" - filename = "ade20k-id2label.json" - id2label = json.load(open(cached_download(hf_hub_url(repo_id, filename, repo_type="dataset")), "r")) - id2label = {int(k): v for k, v in id2label.items()} - config.id2label = id2label - config.label2id = {v: k for k, v in id2label.items()} - expected_shape = [1, 150, 480, 480] - - return config, expected_shape - - -def remove_ignore_keys_(state_dict): - ignore_keys = ["pretrained.model.head.weight", "pretrained.model.head.bias"] - for k in ignore_keys: - state_dict.pop(k, None) - - -def rename_key(name): - if ( - "pretrained.model" in name - and "cls_token" not in name - and "pos_embed" not in name - and "patch_embed" not in name - ): - name = name.replace("pretrained.model", "dpt.encoder") - if "pretrained.model" in name: - name = name.replace("pretrained.model", "dpt.embeddings") - if "patch_embed" in name: - name = name.replace("patch_embed", "") - if "pos_embed" in name: - name = name.replace("pos_embed", "position_embeddings") - if "attn.proj" in name: - name = name.replace("attn.proj", "attention.output.dense") - if "proj" in name and "project" not in name: - name = name.replace("proj", "projection") - if "blocks" in name: - name = name.replace("blocks", "layer") - if "mlp.fc1" in name: - name = name.replace("mlp.fc1", "intermediate.dense") - if "mlp.fc2" in name: - name = name.replace("mlp.fc2", "output.dense") - if "norm1" in name and "backbone" not in name: - name = name.replace("norm1", "layernorm_before") - if "norm2" in name and "backbone" not in name: - name = name.replace("norm2", "layernorm_after") - if "scratch.output_conv" in name: - name = name.replace("scratch.output_conv", "head") - if "scratch" in name: - name = name.replace("scratch", "neck") - if "layer1_rn" in name: - name = name.replace("layer1_rn", "convs.0") - if "layer2_rn" in name: - name = name.replace("layer2_rn", "convs.1") - if "layer3_rn" in name: - name = name.replace("layer3_rn", "convs.2") - if "layer4_rn" in name: - name = name.replace("layer4_rn", "convs.3") - if "refinenet" in name: - layer_idx = int(name[len("neck.refinenet") : len("neck.refinenet") + 1]) - # tricky here: we need to map 4 to 0, 3 to 
1, 2 to 2 and 1 to 3 - name = name.replace(f"refinenet{layer_idx}", f"fusion_stage.layers.{abs(layer_idx-4)}") - if "out_conv" in name: - name = name.replace("out_conv", "projection") - if "resConfUnit1" in name: - name = name.replace("resConfUnit1", "residual_layer1") - if "resConfUnit2" in name: - name = name.replace("resConfUnit2", "residual_layer2") - if "conv1" in name: - name = name.replace("conv1", "convolution1") - if "conv2" in name: - name = name.replace("conv2", "convolution2") - # readout blocks - if "pretrained.act_postprocess1.0.project.0" in name: - name = name.replace("pretrained.act_postprocess1.0.project.0", "neck.reassemble_stage.readout_projects.0.0") - if "pretrained.act_postprocess2.0.project.0" in name: - name = name.replace("pretrained.act_postprocess2.0.project.0", "neck.reassemble_stage.readout_projects.1.0") - if "pretrained.act_postprocess3.0.project.0" in name: - name = name.replace("pretrained.act_postprocess3.0.project.0", "neck.reassemble_stage.readout_projects.2.0") - if "pretrained.act_postprocess4.0.project.0" in name: - name = name.replace("pretrained.act_postprocess4.0.project.0", "neck.reassemble_stage.readout_projects.3.0") - - # resize blocks - if "pretrained.act_postprocess1.3" in name: - name = name.replace("pretrained.act_postprocess1.3", "neck.reassemble_stage.layers.0.projection") - if "pretrained.act_postprocess1.4" in name: - name = name.replace("pretrained.act_postprocess1.4", "neck.reassemble_stage.layers.0.resize") - if "pretrained.act_postprocess2.3" in name: - name = name.replace("pretrained.act_postprocess2.3", "neck.reassemble_stage.layers.1.projection") - if "pretrained.act_postprocess2.4" in name: - name = name.replace("pretrained.act_postprocess2.4", "neck.reassemble_stage.layers.1.resize") - if "pretrained.act_postprocess3.3" in name: - name = name.replace("pretrained.act_postprocess3.3", "neck.reassemble_stage.layers.2.projection") - if "pretrained.act_postprocess4.3" in name: - name = name.replace("pretrained.act_postprocess4.3", "neck.reassemble_stage.layers.3.projection") - if "pretrained.act_postprocess4.4" in name: - name = name.replace("pretrained.act_postprocess4.4", "neck.reassemble_stage.layers.3.resize") - if "pretrained" in name: - name = name.replace("pretrained", "dpt") - if "bn" in name: - name = name.replace("bn", "batch_norm") - if "head" in name: - name = name.replace("head", "head.head") - if "encoder.norm" in name: - name = name.replace("encoder.norm", "layernorm") - if "auxlayer" in name: - name = name.replace("auxlayer", "auxiliary_head.head") - if "backbone" in name: - name = name.replace("backbone", "backbone.bit.encoder") - - if ".." 
in name: - name = name.replace("..", ".") - - if "stem.conv" in name: - name = name.replace("stem.conv", "bit.embedder.convolution") - if "blocks" in name: - name = name.replace("blocks", "layers") - if "convolution" in name and "backbone" in name: - name = name.replace("convolution", "conv") - if "layer" in name and "backbone" in name: - name = name.replace("layer", "layers") - if "backbone.bit.encoder.bit" in name: - name = name.replace("backbone.bit.encoder.bit", "backbone.bit") - if "embedder.conv" in name: - name = name.replace("embedder.conv", "embedder.convolution") - if "backbone.bit.encoder.stem.norm" in name: - name = name.replace("backbone.bit.encoder.stem.norm", "backbone.bit.embedder.norm") - return name - - -# we split up the matrix of each encoder layer into queries, keys and values -def read_in_q_k_v(state_dict, config): - for i in range(config.num_hidden_layers): - # read in weights + bias of input projection layer (in timm, this is a single matrix + bias) - in_proj_weight = state_dict.pop(f"dpt.encoder.layer.{i}.attn.qkv.weight") - in_proj_bias = state_dict.pop(f"dpt.encoder.layer.{i}.attn.qkv.bias") - # next, add query, keys and values (in that order) to the state dict - state_dict[f"dpt.encoder.layer.{i}.attention.attention.query.weight"] = in_proj_weight[: config.hidden_size, :] - state_dict[f"dpt.encoder.layer.{i}.attention.attention.query.bias"] = in_proj_bias[: config.hidden_size] - state_dict[f"dpt.encoder.layer.{i}.attention.attention.key.weight"] = in_proj_weight[ - config.hidden_size : config.hidden_size * 2, : - ] - state_dict[f"dpt.encoder.layer.{i}.attention.attention.key.bias"] = in_proj_bias[ - config.hidden_size : config.hidden_size * 2 - ] - state_dict[f"dpt.encoder.layer.{i}.attention.attention.value.weight"] = in_proj_weight[ - -config.hidden_size :, : - ] - state_dict[f"dpt.encoder.layer.{i}.attention.attention.value.bias"] = in_proj_bias[-config.hidden_size :] - - -# We will verify our results on an image of cute cats -def prepare_img(): - url = "http://images.cocodataset.org/val2017/000000039769.jpg" - im = Image.open(requests.get(url, stream=True).raw) - return im - - -@torch.no_grad() -def convert_dpt_checkpoint(checkpoint_url, pytorch_dump_folder_path, push_to_hub, model_name, show_prediction): - """ - Copy/paste/tweak model's weights to our DPT structure. 
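-
-    Typical invocation (a sketch; the checkpoint path and the output folder are illustrative, and
-    `--checkpoint_url` is passed to `torch.load`, so it should point to a local copy of the original checkpoint):
-
-        python convert_dpt_hybrid_to_pytorch.py \
-            --checkpoint_url ./dpt_hybrid_checkpoint.pt \
-            --pytorch_dump_folder_path ./dpt-hybrid-midas \
-            --show_prediction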
- """ - - # define DPT configuration based on URL - config, expected_shape = get_dpt_config(checkpoint_url) - # load original state_dict from URL - # state_dict = torch.hub.load_state_dict_from_url(checkpoint_url, map_location="cpu") - state_dict = torch.load(checkpoint_url, map_location="cpu") - # remove certain keys - remove_ignore_keys_(state_dict) - # rename keys - for key in state_dict.copy().keys(): - val = state_dict.pop(key) - state_dict[rename_key(key)] = val - # read in qkv matrices - read_in_q_k_v(state_dict, config) - - # load HuggingFace model - model = DPTForSemanticSegmentation(config) if "ade" in checkpoint_url else DPTForDepthEstimation(config) - model.load_state_dict(state_dict) - model.eval() - - # Check outputs on an image - size = 480 if "ade" in checkpoint_url else 384 - image_processor = DPTImageProcessor(size=size) - - image = prepare_img() - encoding = image_processor(image, return_tensors="pt") - - # forward pass - outputs = model(**encoding).logits if "ade" in checkpoint_url else model(**encoding).predicted_depth - - if show_prediction: - prediction = ( - torch.nn.functional.interpolate( - outputs.unsqueeze(1), - size=(image.size[1], image.size[0]), - mode="bicubic", - align_corners=False, - ) - .squeeze() - .cpu() - .numpy() - ) - - Image.fromarray((prediction / prediction.max()) * 255).show() - - if pytorch_dump_folder_path is not None: - Path(pytorch_dump_folder_path).mkdir(exist_ok=True) - print(f"Saving model to {pytorch_dump_folder_path}") - model.save_pretrained(pytorch_dump_folder_path) - print(f"Saving image processor to {pytorch_dump_folder_path}") - image_processor.save_pretrained(pytorch_dump_folder_path) - - if push_to_hub: - model.push_to_hub("ybelkada/dpt-hybrid-midas") - image_processor.push_to_hub("ybelkada/dpt-hybrid-midas") - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - # Required parameters - parser.add_argument( - "--checkpoint_url", - default="https://github.com/intel-isl/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt", - type=str, - help="URL of the original DPT checkpoint you'd like to convert.", - ) - parser.add_argument( - "--pytorch_dump_folder_path", - default=None, - type=str, - required=False, - help="Path to the output PyTorch model directory.", - ) - parser.add_argument( - "--push_to_hub", - action="store_true", - ) - parser.add_argument( - "--model_name", - default="dpt-large", - type=str, - help="Name of the model, in case you're pushing to the hub.", - ) - parser.add_argument( - "--show_prediction", - action="store_true", - ) - - args = parser.parse_args() - convert_dpt_checkpoint( - args.checkpoint_url, args.pytorch_dump_folder_path, args.push_to_hub, args.model_name, args.show_prediction - ) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/flava/configuration_flava.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/flava/configuration_flava.py deleted file mode 100644 index 4125d91262200662a6d9e52f5f1802af901ce74a..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/flava/configuration_flava.py +++ /dev/null @@ -1,764 +0,0 @@ -# coding=utf-8 -# Copyright 2022 Meta Platforms authors and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" FLAVA model configurations""" - -import os -from typing import Any, Dict, Union - -from ...configuration_utils import PretrainedConfig -from ...utils import logging - - -logger = logging.get_logger(__name__) - -FLAVA_PRETRAINED_CONFIG_ARCHIVE_MAP = { - "facebook/flava-full": "https://huggingface.co/facebook/flava-full/resolve/main/config.json", -} - - -class FlavaImageConfig(PretrainedConfig): - r""" - This is the configuration class to store the configuration of a [`FlavaImageModel`]. It is used to instantiate an - FLAVA model according to the specified arguments, defining the model architecture. - - Instantiating a configuration with the defaults will yield a similar configuration to that of the FLAVA - [facebook/flava-full](https://huggingface.co/facebook/flava-full) architecture. - - Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the - documentation from [`PretrainedConfig`] for more information. - - - Args: - hidden_size (`int`, *optional*, defaults to 768): - Dimensionality of the encoder layers and the pooler layer. - num_hidden_layers (`int`, *optional*, defaults to 12): - Number of hidden layers in the Transformer encoder. - num_attention_heads (`int`, *optional*, defaults to 12): - Number of attention heads for each attention layer in the Transformer encoder. - intermediate_size (`int`, *optional*, defaults to 3072): - Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. - hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): - The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, - `"relu"`, `"selu"` and `"gelu_new"` are supported. - hidden_dropout_prob (`float`, *optional*, defaults to 0.0): - The dropout probabilitiy for all fully connected layers in the embeddings, encoder, and pooler. - attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): - The dropout ratio for the attention probabilities. - initializer_range (`float`, *optional*, defaults to 0.02): - The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - layer_norm_eps (`float`, *optional*, defaults to 1e-12): - The epsilon used by the layer normalization layers. - image_size (`int`, *optional*, defaults to 224): - The size (resolution) of each image. - patch_size (`int`, *optional*, defaults to 16): - The size (resolution) of each patch. - num_channels (`int`, *optional*, defaults to 3): - The number of input channels. - qkv_bias (`bool`, *optional*, defaults to `True`): - Whether to add a bias to the queries, keys and values. - mask_token (`bool`, *optional*, defaults to `True`): - Whether to use a mask token or not. Used in MIM (Masked Image Modeling) loss for FLAVA. - vocab_size (`int`, *optional*, defaults to 8192): - Vocabulary size of the [`FlavaImageCodebook`] used in conjunction with [`FlavaImageModel`] for MIM (Masked - Image Modeling) loss for FLAVA. 
- - Example: - - ```python - >>> from transformers import FlavaImageConfig, FlavaImageModel - - >>> # Initializing a FlavaImageModel with style configuration - >>> configuration = FlavaImageConfig() - - >>> # Initializing a FlavaImageModel model (with random weights) from the style configuration - >>> model = FlavaImageModel(configuration) - - >>> # Accessing the model configuration - >>> configuration = model.config - ```""" - - model_type = "flava_image_model" - - def __init__( - self, - hidden_size: int = 768, - num_hidden_layers: int = 12, - num_attention_heads: int = 12, - intermediate_size: int = 3072, - hidden_act: int = "gelu", - hidden_dropout_prob: float = 0.0, - attention_probs_dropout_prob: float = 0.0, - initializer_range: float = 0.02, - layer_norm_eps: float = 1e-12, - image_size: int = 224, - patch_size: int = 16, - num_channels: int = 3, - qkv_bias: bool = True, - mask_token: bool = True, - vocab_size: int = 8192, - **kwargs, - ): - super().__init__(**kwargs) - - self.hidden_size = hidden_size - self.num_hidden_layers = num_hidden_layers - self.num_attention_heads = num_attention_heads - self.intermediate_size = intermediate_size - self.hidden_act = hidden_act - self.hidden_dropout_prob = hidden_dropout_prob - self.attention_probs_dropout_prob = attention_probs_dropout_prob - self.initializer_range = initializer_range - self.layer_norm_eps = layer_norm_eps - self.image_size = image_size - self.patch_size = patch_size - self.num_channels = num_channels - self.qkv_bias = qkv_bias - self.mask_token = mask_token - self.vocab_size = vocab_size - - @classmethod - def from_pretrained(cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs) -> "PretrainedConfig": - cls._set_token_in_kwargs(kwargs) - - config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs) - - # get the image config dict if we are loading from FlavaConfig - if config_dict.get("model_type") == "flava": - config_dict = config_dict["image_config"] - - if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["model_type"] != cls.model_type: - logger.warning( - f"You are using a model of type {config_dict['model_type']} to instantiate a model of type " - f"{cls.model_type}. This is not supported for all configurations of models and can yield errors." - ) - - return cls.from_dict(config_dict, **kwargs) - - -class FlavaTextConfig(PretrainedConfig): - r""" - This is the configuration class to store the configuration of a [`FlavaTextModel`]. It is used to instantiate an - FLAVA model according to the specified arguments, defining the model architecture. - - Instantiating a configuration with the defaults will yield a similar configuration to that of the FLAVA - [facebook/flava-full](https://huggingface.co/facebook/flava-full) architecture. - - Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the - documentation from [`PretrainedConfig`] for more information. - - - Args: - vocab_size (`int`, *optional*, defaults to 30522): - Vocabulary size of the BERT model. Defines the number of different tokens that can be represented by the - `inputs_ids` passed when calling [`FlavaTextModel`]. - type_vocab_size (`int`, *optional*, defaults to 2): - The vocabulary size of the `token_type_ids` passed when calling [`FlavaTextModel`]. Note that even though - text encoder allows `token_type_ids`'s value as 2, for text-only pretraining and fine-tuning, only 1 is - used similar to RoBERTa. 
- max_position_embeddings (`int`, *optional*, defaults to 512): - The maximum sequence length that this model might ever be used with. Typically set this to something large - just in case (e.g., 512 or 1024 or 2048). For VL, max_length passed to model is 77. - position_embedding_type (`str`, *optional*, defaults to `"absolute"`): - Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For - positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to - [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155). - For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models - with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658). - hidden_size (`int`, *optional*, defaults to 768): - Dimensionality of the encoder layers and the pooler layer. - num_hidden_layers (`int`, *optional*, defaults to 12): - Number of hidden layers in the Transformer encoder. - num_attention_heads (`int`, *optional*, defaults to 12): - Number of attention heads for each attention layer in the Transformer encoder. - intermediate_size (`int`, *optional*, defaults to 3072): - Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. - hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): - The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, - `"relu"`, `"selu"` and `"gelu_new"` are supported. - hidden_dropout_prob (`float`, *optional*, defaults to 0.1): - The dropout probabilitiy for all fully connected layers in the embeddings, encoder, and pooler. - attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): - The dropout ratio for the attention probabilities. - initializer_range (`float`, *optional*, defaults to 0.02): - The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - layer_norm_eps (`float`, *optional*, defaults to 1e-12): - The epsilon used by the layer normalization layers. - image_size (`int`, *optional*, defaults to 224): - The size (resolution) of each image. - patch_size (`int`, *optional*, defaults to 16): - The size (resolution) of each patch. - num_channels (`int`, *optional*, defaults to 3): - The number of input channels. - qkv_bias (`bool`, *optional*, defaults to `True`): - Whether to add a bias to the queries, keys and values. 
- - Example: - - ```python - >>> from transformers import FlavaTextConfig, FlavaTextModel - - >>> # Initializing a FlavaTextModel with style configuration - >>> configuration = FlavaTextConfig() - - >>> # Initializing a FlavaTextModel model (with random weights) from the style configuration - >>> model = FlavaTextModel(configuration) - - >>> # Accessing the model configuration - >>> configuration = model.config - ```""" - model_type = "flava_text_model" - - def __init__( - self, - vocab_size: int = 30522, - type_vocab_size: int = 2, - max_position_embeddings: int = 512, - position_embedding_type: str = "absolute", - hidden_size: int = 768, - num_hidden_layers: int = 12, - num_attention_heads: int = 12, - intermediate_size: int = 3072, - hidden_act: str = "gelu", - hidden_dropout_prob: float = 0.0, - attention_probs_dropout_prob: float = 0.0, - initializer_range: float = 0.02, - layer_norm_eps: float = 1e-12, - pad_token_id: int = 0, - qkv_bias: bool = True, - **kwargs, - ): - super().__init__(**kwargs) - - self.vocab_size = vocab_size - self.type_vocab_size = type_vocab_size - self.max_position_embeddings = max_position_embeddings - self.position_embedding_type = position_embedding_type - self.hidden_size = hidden_size - self.num_hidden_layers = num_hidden_layers - self.num_attention_heads = num_attention_heads - self.intermediate_size = intermediate_size - self.hidden_act = hidden_act - self.hidden_dropout_prob = hidden_dropout_prob - self.attention_probs_dropout_prob = attention_probs_dropout_prob - self.initializer_range = initializer_range - self.layer_norm_eps = layer_norm_eps - self.qkv_bias = qkv_bias - self.pad_token_id = pad_token_id - - @classmethod - def from_pretrained(cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs) -> "PretrainedConfig": - cls._set_token_in_kwargs(kwargs) - - config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs) - - # get the text config dict if we are loading from FlavaConfig - if config_dict.get("model_type") == "flava": - config_dict = config_dict["text_config"] - - if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["model_type"] != cls.model_type: - logger.warning( - f"You are using a model of type {config_dict['model_type']} to instantiate a model of type " - f"{cls.model_type}. This is not supported for all configurations of models and can yield errors." - ) - - return cls.from_dict(config_dict, **kwargs) - - -class FlavaMultimodalConfig(PretrainedConfig): - r""" - This is the configuration class to store the configuration of a [`FlavaMultimodalModel`]. It is used to instantiate - an FLAVA model according to the specified arguments, defining the model architecture. - - Instantiating a configuration with the defaults will yield a similar configuration to that of the FLAVA - [facebook/flava-full](https://huggingface.co/facebook/flava-full) architecture. - - Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the - documentation from [`PretrainedConfig`] for more information. - - - Args: - hidden_size (`int`, *optional*, defaults to 768): - Dimensionality of the encoder layers and the pooler layer. - num_hidden_layers (`int`, *optional*, defaults to 6): - Number of hidden layers in the Transformer encoder. - num_attention_heads (`int`, *optional*, defaults to 12): - Number of attention heads for each attention layer in the Transformer encoder. 
- intermediate_size (`int`, *optional*, defaults to 3072): - Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. - hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): - The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, - `"relu"`, `"selu"` and `"gelu_new"` are supported. - hidden_dropout_prob (`float`, *optional*, defaults to 0.0): - The dropout probabilitiy for all fully connected layers in the embeddings, encoder, and pooler. - attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): - The dropout ratio for the attention probabilities. - initializer_range (`float`, *optional*, defaults to 0.02): - The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - layer_norm_eps (`float`, *optional*, defaults to 1e-12): - The epsilon used by the layer normalization layers. - qkv_bias (`bool`, *optional*, defaults to `True`): - Whether to add a bias to the queries, keys and values. - use_cls_token (`bool`, *optional*, defaults to `True`): - Whether to use an extra CLS token for multimodal settings. Usually needed by the FLAVA model. - - - Example: - - ```python - >>> from transformers import FlavaMultimodalConfig, FlavaMultimodalModel - - >>> # Initializing a FlavaMultimodalModel with style configuration - >>> configuration = FlavaMultimodalConfig() - - >>> # Initializing a FlavaMultimodalModel model (with random weights) from the style configuration - >>> model = FlavaMultimodalModel(configuration) - - >>> # Accessing the model configuration - >>> configuration = model.config - ```""" - - model_type = "flava_multimodal_model" - - def __init__( - self, - hidden_size: int = 768, - num_hidden_layers: int = 6, - num_attention_heads: int = 12, - intermediate_size: int = 3072, - hidden_act: int = "gelu", - hidden_dropout_prob: int = 0.0, - attention_probs_dropout_prob: int = 0.0, - initializer_range: float = 0.02, - layer_norm_eps: float = 1e-12, - qkv_bias: bool = True, - use_cls_token: bool = True, - **kwargs, - ): - super().__init__(**kwargs) - - self.hidden_size = hidden_size - self.num_hidden_layers = num_hidden_layers - self.num_attention_heads = num_attention_heads - self.intermediate_size = intermediate_size - self.hidden_act = hidden_act - self.hidden_dropout_prob = hidden_dropout_prob - self.attention_probs_dropout_prob = attention_probs_dropout_prob - self.initializer_range = initializer_range - self.layer_norm_eps = layer_norm_eps - self.qkv_bias = qkv_bias - self.use_cls_token = use_cls_token - - @classmethod - def from_pretrained(cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs) -> "PretrainedConfig": - cls._set_token_in_kwargs(kwargs) - - config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs) - - # get the multimodal config dict if we are loading from FlavaConfig - if config_dict.get("model_type") == "flava": - config_dict = config_dict["multimodal_config"] - - if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["model_type"] != cls.model_type: - logger.warning( - f"You are using a model of type {config_dict['model_type']} to instantiate a model of type " - f"{cls.model_type}. This is not supported for all configurations of models and can yield errors." 
- ) - - return cls.from_dict(config_dict, **kwargs) - - -class FlavaImageCodebookConfig(PretrainedConfig): - model_type = "flava_image_codebook" - - r""" - [`FlavaImageCodebookConfig`] is the configuration class to store the configuration of a [`FlavaImageCodebook`]. It - is used to instantiate an FLAVA model according to the specified arguments, defining the model architecture. - Instantiating a configuration with the defaults will yield a similar configuration to that of the FLAVA - [facebook/flava-image-codebook](https://huggingface.co/facebook/flava-image-codebook) architecture. - - Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the - documentation from [`PretrainedConfig`] for more information. - - Args: - num_groups (`int`, defaults to 4): - Number of groups to be created. This parameter as of now doesn't affect the model and is used for some - internal calculation and estimations. - input_channels (`int`, defaults to 3): - Number of channels in the image to be passed. - num_blocks_per_group (`int`, defaults to 2): - Number of conv-based blocks per group. - hidden_size (`int`, defaults to 256): - Size of hidden dim for the blocks. - vocab_size (`int`, defaults to 8192): - Size of the output vocabulary for the codebook. - freeze (`bool`, defaults to `True`): - Whether to freeze the weights of the model. - initializer_range (`float`, *optional*, defaults to 0.02): - The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - kwargs (*optional*): - Dictionary of keyword arguments. - - Example: - - ```python - >>> from transformers import FlavaImageCodebookConfig, FlavaImageCodebook - - >>> # Initializing a FlavaImageCodebook with style configuration - >>> configuration = FlavaImageCodebookConfig() - - >>> # Initializing a FlavaImageCodebook model (with random weights) from the style configuration - >>> model = FlavaImageCodebook(configuration) - >>> # Accessing the model configuration - >>> configuration = model.config - ``` - """ - - def __init__( - self, - num_groups: int = 4, - input_channels: int = 3, - num_blocks_per_group: int = 2, - hidden_size: int = 256, - vocab_size: int = 8192, - freeze: int = True, - initializer_range: float = 0.02, - **kwargs, - ): - super().__init__(**kwargs) - self.num_groups = num_groups - self.input_channels = input_channels - self.num_blocks_per_group = num_blocks_per_group - self.hidden_size = hidden_size - self.vocab_size = vocab_size - self.freeze = freeze - self.initializer_range = initializer_range - - @classmethod - def from_pretrained(cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs) -> "PretrainedConfig": - cls._set_token_in_kwargs(kwargs) - - config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs) - - # get the image codebook config dict if we are loading from FlavaConfig - if config_dict.get("model_type") == "flava": - config_dict = config_dict["image_codebook_config"] - - if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["model_type"] != cls.model_type: - logger.warning( - f"You are using a model of type {config_dict['model_type']} to instantiate a model of type " - f"{cls.model_type}. This is not supported for all configurations of models and can yield errors." - ) - - return cls.from_dict(config_dict, **kwargs) - - -class FlavaConfig(PretrainedConfig): - r""" - [`FlavaConfig`] is the configuration class to store the configuration of a [`FlavaModel`]. 
It is used to - instantiate FLAVA model according to the specified arguments, defining the text model, image model, image codebook - and multimodal model configs. Instantiating a configuration with the defaults will yield a similar configuration to - that of the FLAVA [facebook/flava-full](https://huggingface.co/facebook/flava-full) architecture. - - Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the - documentation from [`PretrainedConfig`] for more information. - - Args: - text_config (`dict`, *optional*): - Dictionary of configuration options used to initialize [`FlavaTextConfig`]. - image_config (`dict`, *optional*): - Dictionary of configuration options used to initialize [`FlavaImageConfig`]. - multimodal_config (`dict`, *optional*): - Dictionary of configuration options used to initialize [`FlavaMultimodalConfig`]. - hidden_size (`int`, *optional*, defaults to 768): - Dimensionality of the encoder layers and the pooler layer. - layer_norm_eps (`float`, *optional*, defaults to 1e-12): - The epsilon used by the layer normalization layers. - projection_dim (`int`, *optional*, defaults to 512): - Dimentionality of text and image projection layers. - logit_scale_init_value (`float`, *optional*, defaults to 2.6592): - The inital value of the *logit_scale* paramter. Default is used as per the original FLAVA/CLIP - implementation. - initializer_range (`float`, *optional*, defaults to 0.02): - The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - ce_ignore_index (`int`, *optional*, defaults to -100): - Cross entropy index to ignore. - mim_weight (`float`, *optional*, defaults to 1.0): - Weight to be assigned to MIM (Masked Image Modeling) unimodal loss - mlm_weight (`float`, *optional*, defaults to 1.0): - Weight to be assigned to MLM (Masked Language Modeling) unimodal loss - global_contrastive_weight (`float`, *optional*, defaults to 1.0): - Weight to be assigned to global contrastive cross-alignment loss. - itm_weight (`float`, *optional*, defaults to 1.0): - Weight to be assigned to image-text matching multimodal loss. - mmm_image_weight (`float`, *optional*, defaults to 1.0): - Weight to be assigned to MMM loss's image part. - mmm_text_weight (`float`, *optional*, defaults to 1.0): - Weight to be assigned to MMM loss's text part. - global_backprop_contrastive (`bool`, *optional*, defaults to `True`): - Whether to use global backpropgation through all workers in contrastive loss. - skip_unmasked_multimodal_encoder (`bool`, *optional*, defaults to `True`): - Whether to skip running unmasked multimodal encoder whose outputs are not used by FLAVA losses. - return_loss (`bool`, *optional*, defaults to `True`): - Whether to return loss or not - - kwargs (*optional*): - Dictionary of keyword arguments. 
- - Example: - - ```python - >>> from transformers import FlavaConfig, FlavaModel, FlavaForPreTraining - - >>> # Initializing a FlavaConfig with style configuration - >>> configuration = FlavaConfig() - - >>> # Initializing a FlavaModel and FlavaForPreTraining model (with random weights) from the style configuration - >>> model = FlavaModel(configuration) - >>> model_pre = FlavaForPreTraining(configuration) - - >>> # Accessing the model configuration - >>> configuration = model.config - >>> configuration_pre = model_pre.config - ``` - """ - - model_type = "flava" - - def __init__( - self, - image_config: Dict[str, Any] = None, - text_config: Dict[str, Any] = None, - multimodal_config: Dict[str, Any] = None, - image_codebook_config: Dict[str, Any] = None, - hidden_size: int = 768, - layer_norm_eps: float = 1e-12, - projection_dim: int = 768, - init_codebook: bool = True, - logit_scale_init_value: float = 2.6592, - initializer_range: float = 0.02, - ce_ignore_index: int = -100, - mim_weight: float = 1.0, - mlm_weight: float = 1.0, - global_contrastive_weight: float = 1.0, - itm_weight: float = 1.0, - mmm_image_weight: float = 1.0, - mmm_text_weight: float = 1.0, - global_backprop_contrastive: bool = True, - skip_unmasked_multimodal_encoder: bool = True, - return_loss: bool = True, - **kwargs, - ): - # If `_config_dict` exist, we use them for the backward compatibility. - # We pop out these 2 attributes before calling `super().__init__` to avoid them being saved (which causes a lot - # of confusion!). - text_config_dict = kwargs.pop("text_config_dict", None) - image_config_dict = kwargs.pop("image_config_dict", None) - multimodal_config_dict = kwargs.pop("multimodal_config_dict", None) - image_codebook_config_dict = kwargs.pop("image_codebook_config_dict", None) - - super().__init__(**kwargs) - - # Instead of simply assigning `[text|vision]_config_dict` to `[text|vision]_config`, we use the values in - # `[text|vision]_config_dict` to update the values in `[text|vision]_config`. The values should be same in most - # cases, but we don't want to break anything regarding `_config_dict` that existed before commit `8827e1b2`. - if text_config_dict is not None: - if text_config is None: - text_config = {} - - # This is the complete result when using `text_config_dict`. - _text_config_dict = FlavaTextConfig(**text_config_dict).to_dict() - - # Give a warning if the values exist in both `_text_config_dict` and `text_config` but being different. - for key, value in _text_config_dict.items(): - if key in text_config and value != text_config[key] and key not in ["transformers_version"]: - # If specified in `text_config_dict` - if key in text_config_dict: - message = ( - f"`{key}` is found in both `text_config_dict` and `text_config` but with different values. " - f'The value `text_config_dict["{key}"]` will be used instead.' - ) - # If inferred from default argument values (just to be super careful) - else: - message = ( - f"`text_config_dict` is provided which will be used to initialize `FlavaTextConfig`. The " - f'value `text_config["{key}"]` will be overriden.' - ) - logger.warning(message) - - # Update all values in `text_config` with the ones in `_text_config_dict`. - text_config.update(_text_config_dict) - - if image_config_dict is not None: - if image_config is None: - image_config = {} - - # This is the complete result when using `image_config_dict`. 
- _image_config_dict = FlavaImageConfig(**image_config_dict).to_dict() - # convert keys to string instead of integer - if "id2label" in _image_config_dict: - _image_config_dict["id2label"] = { - str(key): value for key, value in _image_config_dict["id2label"].items() - } - - # Give a warning if the values exist in both `_image_config_dict` and `image_config` but being different. - for key, value in _image_config_dict.items(): - if key in image_config and value != image_config[key] and key not in ["transformers_version"]: - # If specified in `image_config_dict` - if key in image_config_dict: - message = ( - f"`{key}` is found in both `image_config_dict` and `image_config` but with different " - f'values. The value `image_config_dict["{key}"]` will be used instead.' - ) - # If inferred from default argument values (just to be super careful) - else: - message = ( - f"`image_config_dict` is provided which will be used to initialize `FlavaImageConfig`. " - f'The value `image_config["{key}"]` will be overriden.' - ) - logger.warning(message) - - # Update all values in `image_config` with the ones in `_image_config_dict`. - image_config.update(_image_config_dict) - - if multimodal_config_dict is not None: - if multimodal_config is None: - multimodal_config = {} - - # This is the complete result when using `multimodal_config_dict`. - _multimodal_config_dict = FlavaMultimodalConfig(**multimodal_config_dict).to_dict() - - # Give a warning if the values exist in both `_multimodal_config_dict` and `multimodal_config` but being - # different. - for key, value in _multimodal_config_dict.items(): - if ( - key in multimodal_config - and value != multimodal_config[key] - and key not in ["transformers_version"] - ): - # If specified in `multimodal_config_dict` - if key in multimodal_config_dict: - message = ( - f"`{key}` is found in both `multimodal_config_dict` and `multimodal_config` but with " - f'different values. The value `multimodal_config_dict["{key}"]` will be used instead.' - ) - # If inferred from default argument values (just to be super careful) - else: - message = ( - f"`multimodal_config_dict` is provided which will be used to initialize " - f'`FlavaMultimodalConfig`. The value `multimodal_config["{key}"]` will be overriden.' - ) - logger.warning(message) - - # Update all values in `multimodal_config` with the ones in `_multimodal_config_dict`. - multimodal_config.update(_multimodal_config_dict) - - if image_codebook_config_dict is not None: - if image_codebook_config is None: - image_codebook_config = {} - - # This is the complete result when using `image_codebook_config_dict`. - _image_codebook_config_dict = FlavaImageCodebookConfig(**image_codebook_config_dict).to_dict() - - # Give a warning if the values exist in both `_image_codebook_config_dict` and `image_codebook_config` but - # being different. - for key, value in _image_codebook_config_dict.items(): - if ( - key in image_codebook_config - and value != image_codebook_config[key] - and key not in ["transformers_version"] - ): - # If specified in `image_codebook_config_dict` - if key in image_codebook_config_dict: - message = ( - f"`{key}` is found in both `image_codebook_config_dict` and `image_codebook_config` but " - f'with different values. The value `image_codebook_config_dict["{key}"]` will be used ' - "instead." - ) - # If inferred from default argument values (just to be super careful) - else: - message = ( - f"`image_codebook_config_dict` is provided which will be used to initialize " - f'`FlavaImageCodebookConfig`. 
The value `image_codebook_config["{key}"]` will be overriden.' - ) - logger.warning(message) - - # Update all values in `image_codebook_config` with the ones in `_image_codebook_config_dict`. - image_codebook_config.update(_image_codebook_config_dict) - - if image_config is None: - image_config = {} - logger.info("`image_config` is `None`. initializing the `FlavaImageConfig` with default values.") - - if text_config is None: - text_config = {} - logger.info("`text_config` is `None`. Initializing the `FlavaTextConfig` with default values.") - - if multimodal_config is None: - multimodal_config = {} - logger.info("`multimodal_config` is `None`. initializing the `FlavaMultimodalConfig` with default values.") - - if image_codebook_config is None: - image_codebook_config = {} - logger.info( - "`image_codebook_config` is `None`. initializing the `FlavaImageCodebookConfig` with default values." - ) - - self.image_config = FlavaImageConfig(**image_config) - self.text_config = FlavaTextConfig(**text_config) - self.multimodal_config = FlavaMultimodalConfig(**multimodal_config) - self.image_codebook_config = FlavaImageCodebookConfig(**image_codebook_config) - self.projection_dim = projection_dim - self.init_codebook = init_codebook - - self.hidden_size = hidden_size - self.layer_norm_eps = layer_norm_eps - self.initializer_range = initializer_range - self.logit_scale_init_value = logit_scale_init_value - self.initializer_factor = 1.0 - self.ce_ignore_index = ce_ignore_index - self.mim_weight = mim_weight - self.mlm_weight = mlm_weight - self.global_contrastive_weight = global_contrastive_weight - self.itm_weight = itm_weight - self.mmm_image_weight = mmm_image_weight - self.mmm_text_weight = mmm_text_weight - self.global_backprop_contrastive = global_backprop_contrastive - self.skip_unmasked_multimodal_encoder = skip_unmasked_multimodal_encoder - self.return_loss = return_loss - - @classmethod - def from_configs( - cls, - image_config: FlavaImageConfig, - text_config: FlavaTextConfig, - multimodal_config: FlavaMultimodalConfig, - image_codebook_config: FlavaImageCodebookConfig, - **kwargs, - ): - r""" - Instantiate a [`FlavaConfig`] (or a derived class) from flava text model configuration, flava image model - configuration, flava multimodal model and flava codebook model configuration. - - Returns: - [`FlavaConfig`]: An instance of a configuration object - """ - - return cls( - image_config=image_config.to_dict(), - text_config=text_config.to_dict(), - multimodal_config=multimodal_config.to_dict(), - image_codebook_config=image_codebook_config.to_dict(), - **kwargs, - ) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mluke/convert_mluke_original_pytorch_checkpoint_to_pytorch.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mluke/convert_mluke_original_pytorch_checkpoint_to_pytorch.py deleted file mode 100644 index f361082fb3c5162bed9d6364ac3dd3a7bdf92104..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mluke/convert_mluke_original_pytorch_checkpoint_to_pytorch.py +++ /dev/null @@ -1,229 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Convert mLUKE checkpoint.""" - -import argparse -import json -import os -from collections import OrderedDict - -import torch - -from transformers import LukeConfig, LukeForMaskedLM, MLukeTokenizer, XLMRobertaTokenizer -from transformers.tokenization_utils_base import AddedToken - - -@torch.no_grad() -def convert_luke_checkpoint(checkpoint_path, metadata_path, entity_vocab_path, pytorch_dump_folder_path, model_size): - # Load configuration defined in the metadata file - with open(metadata_path) as metadata_file: - metadata = json.load(metadata_file) - config = LukeConfig(use_entity_aware_attention=True, **metadata["model_config"]) - - # Load in the weights from the checkpoint_path - state_dict = torch.load(checkpoint_path, map_location="cpu")["module"] - - # Load the entity vocab file - entity_vocab = load_original_entity_vocab(entity_vocab_path) - # add an entry for [MASK2] - entity_vocab["[MASK2]"] = max(entity_vocab.values()) + 1 - config.entity_vocab_size += 1 - - tokenizer = XLMRobertaTokenizer.from_pretrained(metadata["model_config"]["bert_model_name"]) - - # Add special tokens to the token vocabulary for downstream tasks - entity_token_1 = AddedToken("", lstrip=False, rstrip=False) - entity_token_2 = AddedToken("", lstrip=False, rstrip=False) - tokenizer.add_special_tokens({"additional_special_tokens": [entity_token_1, entity_token_2]}) - config.vocab_size += 2 - - print(f"Saving tokenizer to {pytorch_dump_folder_path}") - tokenizer.save_pretrained(pytorch_dump_folder_path) - with open(os.path.join(pytorch_dump_folder_path, "tokenizer_config.json"), "r") as f: - tokenizer_config = json.load(f) - tokenizer_config["tokenizer_class"] = "MLukeTokenizer" - with open(os.path.join(pytorch_dump_folder_path, "tokenizer_config.json"), "w") as f: - json.dump(tokenizer_config, f) - - with open(os.path.join(pytorch_dump_folder_path, MLukeTokenizer.vocab_files_names["entity_vocab_file"]), "w") as f: - json.dump(entity_vocab, f) - - tokenizer = MLukeTokenizer.from_pretrained(pytorch_dump_folder_path) - - # Initialize the embeddings of the special tokens - ent_init_index = tokenizer.convert_tokens_to_ids(["@"])[0] - ent2_init_index = tokenizer.convert_tokens_to_ids(["#"])[0] - - word_emb = state_dict["embeddings.word_embeddings.weight"] - ent_emb = word_emb[ent_init_index].unsqueeze(0) - ent2_emb = word_emb[ent2_init_index].unsqueeze(0) - state_dict["embeddings.word_embeddings.weight"] = torch.cat([word_emb, ent_emb, ent2_emb]) - # add special tokens for 'entity_predictions.bias' - for bias_name in ["lm_head.decoder.bias", "lm_head.bias"]: - decoder_bias = state_dict[bias_name] - ent_decoder_bias = decoder_bias[ent_init_index].unsqueeze(0) - ent2_decoder_bias = decoder_bias[ent2_init_index].unsqueeze(0) - state_dict[bias_name] = torch.cat([decoder_bias, ent_decoder_bias, ent2_decoder_bias]) - - # Initialize the query layers of the entity-aware self-attention mechanism - for layer_index in range(config.num_hidden_layers): - for matrix_name in ["query.weight", "query.bias"]: - prefix = f"encoder.layer.{layer_index}.attention.self." 
- state_dict[prefix + "w2e_" + matrix_name] = state_dict[prefix + matrix_name] - state_dict[prefix + "e2w_" + matrix_name] = state_dict[prefix + matrix_name] - state_dict[prefix + "e2e_" + matrix_name] = state_dict[prefix + matrix_name] - - # Initialize the embedding of the [MASK2] entity using that of the [MASK] entity for downstream tasks - entity_emb = state_dict["entity_embeddings.entity_embeddings.weight"] - entity_mask_emb = entity_emb[entity_vocab["[MASK]"]].unsqueeze(0) - state_dict["entity_embeddings.entity_embeddings.weight"] = torch.cat([entity_emb, entity_mask_emb]) - # add [MASK2] for 'entity_predictions.bias' - entity_prediction_bias = state_dict["entity_predictions.bias"] - entity_mask_bias = entity_prediction_bias[entity_vocab["[MASK]"]].unsqueeze(0) - state_dict["entity_predictions.bias"] = torch.cat([entity_prediction_bias, entity_mask_bias]) - - model = LukeForMaskedLM(config=config).eval() - - state_dict.pop("entity_predictions.decoder.weight") - state_dict.pop("lm_head.decoder.weight") - state_dict.pop("lm_head.decoder.bias") - state_dict_for_hugging_face = OrderedDict() - for key, value in state_dict.items(): - if not (key.startswith("lm_head") or key.startswith("entity_predictions")): - state_dict_for_hugging_face[f"luke.{key}"] = state_dict[key] - else: - state_dict_for_hugging_face[key] = state_dict[key] - - missing_keys, unexpected_keys = model.load_state_dict(state_dict_for_hugging_face, strict=False) - - if set(unexpected_keys) != {"luke.embeddings.position_ids"}: - raise ValueError(f"Unexpected unexpected_keys: {unexpected_keys}") - if set(missing_keys) != { - "lm_head.decoder.weight", - "lm_head.decoder.bias", - "entity_predictions.decoder.weight", - }: - raise ValueError(f"Unexpected missing_keys: {missing_keys}") - - model.tie_weights() - assert (model.luke.embeddings.word_embeddings.weight == model.lm_head.decoder.weight).all() - assert (model.luke.entity_embeddings.entity_embeddings.weight == model.entity_predictions.decoder.weight).all() - - # Check outputs - tokenizer = MLukeTokenizer.from_pretrained(pytorch_dump_folder_path, task="entity_classification") - - text = "ISO 639-3 uses the code fas for the dialects spoken across Iran and アフガニスタン (Afghanistan)." 
- span = (0, 9) - encoding = tokenizer(text, entity_spans=[span], return_tensors="pt") - - outputs = model(**encoding) - - # Verify word hidden states - if model_size == "large": - raise NotImplementedError - else: # base - expected_shape = torch.Size((1, 33, 768)) - expected_slice = torch.tensor([[0.0892, 0.0596, -0.2819], [0.0134, 0.1199, 0.0573], [-0.0169, 0.0927, 0.0644]]) - - if not (outputs.last_hidden_state.shape == expected_shape): - raise ValueError( - f"Outputs.last_hidden_state.shape is {outputs.last_hidden_state.shape}, Expected shape is {expected_shape}" - ) - if not torch.allclose(outputs.last_hidden_state[0, :3, :3], expected_slice, atol=1e-4): - raise ValueError - - # Verify entity hidden states - if model_size == "large": - raise NotImplementedError - else: # base - expected_shape = torch.Size((1, 1, 768)) - expected_slice = torch.tensor([[-0.1482, 0.0609, 0.0322]]) - - if not (outputs.entity_last_hidden_state.shape == expected_shape): - raise ValueError( - f"Outputs.entity_last_hidden_state.shape is {outputs.entity_last_hidden_state.shape}, Expected shape is" - f" {expected_shape}" - ) - if not torch.allclose(outputs.entity_last_hidden_state[0, :3, :3], expected_slice, atol=1e-4): - raise ValueError - - # Verify masked word/entity prediction - tokenizer = MLukeTokenizer.from_pretrained(pytorch_dump_folder_path) - text = "Tokyo is the capital of ." - span = (24, 30) - encoding = tokenizer(text, entity_spans=[span], return_tensors="pt") - - outputs = model(**encoding) - - input_ids = encoding["input_ids"][0].tolist() - mask_position_id = input_ids.index(tokenizer.convert_tokens_to_ids("")) - predicted_id = outputs.logits[0][mask_position_id].argmax(dim=-1) - assert "Japan" == tokenizer.decode(predicted_id) - - predicted_entity_id = outputs.entity_logits[0][0].argmax().item() - multilingual_predicted_entities = [ - entity for entity, entity_id in tokenizer.entity_vocab.items() if entity_id == predicted_entity_id - ] - assert [e for e in multilingual_predicted_entities if e.startswith("en:")][0] == "en:Japan" - - # Finally, save our PyTorch model and tokenizer - print("Saving PyTorch model to {}".format(pytorch_dump_folder_path)) - model.save_pretrained(pytorch_dump_folder_path) - - -def load_original_entity_vocab(entity_vocab_path): - SPECIAL_TOKENS = ["[MASK]", "[PAD]", "[UNK]"] - - data = [json.loads(line) for line in open(entity_vocab_path)] - - new_mapping = {} - for entry in data: - entity_id = entry["id"] - for entity_name, language in entry["entities"]: - if entity_name in SPECIAL_TOKENS: - new_mapping[entity_name] = entity_id - break - new_entity_name = f"{language}:{entity_name}" - new_mapping[new_entity_name] = entity_id - return new_mapping - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - # Required parameters - parser.add_argument("--checkpoint_path", type=str, help="Path to a pytorch_model.bin file.") - parser.add_argument( - "--metadata_path", default=None, type=str, help="Path to a metadata.json file, defining the configuration." - ) - parser.add_argument( - "--entity_vocab_path", - default=None, - type=str, - help="Path to an entity_vocab.tsv file, containing the entity vocabulary.", - ) - parser.add_argument( - "--pytorch_dump_folder_path", default=None, type=str, help="Path to where to dump the output PyTorch model." - ) - parser.add_argument( - "--model_size", default="base", type=str, choices=["base", "large"], help="Size of the model to be converted." 
- ) - args = parser.parse_args() - convert_luke_checkpoint( - args.checkpoint_path, - args.metadata_path, - args.entity_vocab_path, - args.pytorch_dump_folder_path, - args.model_size, - ) diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/modeling/proposal_generator/rpn.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/modeling/proposal_generator/rpn.py deleted file mode 100644 index 99cd536d2f9880d2049390c45f73eb22335e1b82..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/modeling/proposal_generator/rpn.py +++ /dev/null @@ -1,533 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from typing import Dict, List, Optional, Tuple, Union -import torch -import torch.nn.functional as F -from torch import nn - -from detectron2.config import configurable -from detectron2.layers import Conv2d, ShapeSpec, cat -from detectron2.structures import Boxes, ImageList, Instances, pairwise_iou -from detectron2.utils.events import get_event_storage -from detectron2.utils.memory import retry_if_cuda_oom -from detectron2.utils.registry import Registry - -from ..anchor_generator import build_anchor_generator -from ..box_regression import Box2BoxTransform, _dense_box_regression_loss -from ..matcher import Matcher -from ..sampling import subsample_labels -from .build import PROPOSAL_GENERATOR_REGISTRY -from .proposal_utils import find_top_rpn_proposals - -RPN_HEAD_REGISTRY = Registry("RPN_HEAD") -RPN_HEAD_REGISTRY.__doc__ = """ -Registry for RPN heads, which take feature maps and perform -objectness classification and bounding box regression for anchors. - -The registered object will be called with `obj(cfg, input_shape)`. -The call should return a `nn.Module` object. -""" - - -""" -Shape shorthand in this module: - - N: number of images in the minibatch - L: number of feature maps per image on which RPN is run - A: number of cell anchors (must be the same for all feature maps) - Hi, Wi: height and width of the i-th feature map - B: size of the box parameterization - -Naming convention: - - objectness: refers to the binary classification of an anchor as object vs. not object. - - deltas: refers to the 4-d (dx, dy, dw, dh) deltas that parameterize the box2box - transform (see :class:`box_regression.Box2BoxTransform`), or 5d for rotated boxes. - - pred_objectness_logits: predicted objectness scores in [-inf, +inf]; use - sigmoid(pred_objectness_logits) to estimate P(object). - - gt_labels: ground-truth binary classification labels for objectness - - pred_anchor_deltas: predicted box2box transform deltas - - gt_anchor_deltas: ground-truth box2box transform deltas -""" - - -def build_rpn_head(cfg, input_shape): - """ - Build an RPN head defined by `cfg.MODEL.RPN.HEAD_NAME`. - """ - name = cfg.MODEL.RPN.HEAD_NAME - return RPN_HEAD_REGISTRY.get(name)(cfg, input_shape) - - -@RPN_HEAD_REGISTRY.register() -class StandardRPNHead(nn.Module): - """ - Standard RPN classification and regression heads described in :paper:`Faster R-CNN`. - Uses a 3x3 conv to produce a shared hidden state from which one 1x1 conv predicts - objectness logits for each anchor and a second 1x1 conv predicts bounding-box deltas - specifying how to deform each anchor into an object proposal. - """ - - @configurable - def __init__( - self, *, in_channels: int, num_anchors: int, box_dim: int = 4, conv_dims: List[int] = (-1,) - ): - """ - NOTE: this interface is experimental. 
- - Args: - in_channels (int): number of input feature channels. When using multiple - input features, they must have the same number of channels. - num_anchors (int): number of anchors to predict for *each spatial position* - on the feature map. The total number of anchors for each - feature map will be `num_anchors * H * W`. - box_dim (int): dimension of a box, which is also the number of box regression - predictions to make for each anchor. An axis aligned box has - box_dim=4, while a rotated box has box_dim=5. - conv_dims (list[int]): a list of integers representing the output channels - of N conv layers. Set it to -1 to use the same number of output channels - as input channels. - """ - super().__init__() - cur_channels = in_channels - # Keeping the old variable names and structure for backwards compatiblity. - # Otherwise the old checkpoints will fail to load. - if len(conv_dims) == 1: - out_channels = cur_channels if conv_dims[0] == -1 else conv_dims[0] - # 3x3 conv for the hidden representation - self.conv = self._get_rpn_conv(cur_channels, out_channels) - cur_channels = out_channels - else: - self.conv = nn.Sequential() - for k, conv_dim in enumerate(conv_dims): - out_channels = cur_channels if conv_dim == -1 else conv_dim - if out_channels <= 0: - raise ValueError( - f"Conv output channels should be greater than 0. Got {out_channels}" - ) - conv = self._get_rpn_conv(cur_channels, out_channels) - self.conv.add_module(f"conv{k}", conv) - cur_channels = out_channels - # 1x1 conv for predicting objectness logits - self.objectness_logits = nn.Conv2d(cur_channels, num_anchors, kernel_size=1, stride=1) - # 1x1 conv for predicting box2box transform deltas - self.anchor_deltas = nn.Conv2d(cur_channels, num_anchors * box_dim, kernel_size=1, stride=1) - - # Keeping the order of weights initialization same for backwards compatiblility. - for layer in self.modules(): - if isinstance(layer, nn.Conv2d): - nn.init.normal_(layer.weight, std=0.01) - nn.init.constant_(layer.bias, 0) - - def _get_rpn_conv(self, in_channels, out_channels): - return Conv2d( - in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1, - activation=nn.ReLU(), - ) - - @classmethod - def from_config(cls, cfg, input_shape): - # Standard RPN is shared across levels: - in_channels = [s.channels for s in input_shape] - assert len(set(in_channels)) == 1, "Each level must have the same channel!" - in_channels = in_channels[0] - - # RPNHead should take the same input as anchor generator - # NOTE: it assumes that creating an anchor generator does not have unwanted side effect. - anchor_generator = build_anchor_generator(cfg, input_shape) - num_anchors = anchor_generator.num_anchors - box_dim = anchor_generator.box_dim - assert ( - len(set(num_anchors)) == 1 - ), "Each level must have the same number of anchors per spatial position" - return { - "in_channels": in_channels, - "num_anchors": num_anchors[0], - "box_dim": box_dim, - "conv_dims": cfg.MODEL.RPN.CONV_DIMS, - } - - def forward(self, features: List[torch.Tensor]): - """ - Args: - features (list[Tensor]): list of feature maps - - Returns: - list[Tensor]: A list of L elements. - Element i is a tensor of shape (N, A, Hi, Wi) representing - the predicted objectness logits for all anchors. A is the number of cell anchors. - list[Tensor]: A list of L elements. Element i is a tensor of shape - (N, A*box_dim, Hi, Wi) representing the predicted "deltas" used to transform anchors - to proposals. 
- """ - pred_objectness_logits = [] - pred_anchor_deltas = [] - for x in features: - t = self.conv(x) - pred_objectness_logits.append(self.objectness_logits(t)) - pred_anchor_deltas.append(self.anchor_deltas(t)) - return pred_objectness_logits, pred_anchor_deltas - - -@PROPOSAL_GENERATOR_REGISTRY.register() -class RPN(nn.Module): - """ - Region Proposal Network, introduced by :paper:`Faster R-CNN`. - """ - - @configurable - def __init__( - self, - *, - in_features: List[str], - head: nn.Module, - anchor_generator: nn.Module, - anchor_matcher: Matcher, - box2box_transform: Box2BoxTransform, - batch_size_per_image: int, - positive_fraction: float, - pre_nms_topk: Tuple[float, float], - post_nms_topk: Tuple[float, float], - nms_thresh: float = 0.7, - min_box_size: float = 0.0, - anchor_boundary_thresh: float = -1.0, - loss_weight: Union[float, Dict[str, float]] = 1.0, - box_reg_loss_type: str = "smooth_l1", - smooth_l1_beta: float = 0.0, - ): - """ - NOTE: this interface is experimental. - - Args: - in_features (list[str]): list of names of input features to use - head (nn.Module): a module that predicts logits and regression deltas - for each level from a list of per-level features - anchor_generator (nn.Module): a module that creates anchors from a - list of features. Usually an instance of :class:`AnchorGenerator` - anchor_matcher (Matcher): label the anchors by matching them with ground truth. - box2box_transform (Box2BoxTransform): defines the transform from anchors boxes to - instance boxes - batch_size_per_image (int): number of anchors per image to sample for training - positive_fraction (float): fraction of foreground anchors to sample for training - pre_nms_topk (tuple[float]): (train, test) that represents the - number of top k proposals to select before NMS, in - training and testing. - post_nms_topk (tuple[float]): (train, test) that represents the - number of top k proposals to select after NMS, in - training and testing. - nms_thresh (float): NMS threshold used to de-duplicate the predicted proposals - min_box_size (float): remove proposal boxes with any side smaller than this threshold, - in the unit of input image pixels - anchor_boundary_thresh (float): legacy option - loss_weight (float|dict): weights to use for losses. Can be single float for weighting - all rpn losses together, or a dict of individual weightings. Valid dict keys are: - "loss_rpn_cls" - applied to classification loss - "loss_rpn_loc" - applied to box regression loss - box_reg_loss_type (str): Loss type to use. Supported losses: "smooth_l1", "giou". - smooth_l1_beta (float): beta parameter for the smooth L1 regression loss. Default to - use L1 loss. 
Only used when `box_reg_loss_type` is "smooth_l1" - """ - super().__init__() - self.in_features = in_features - self.rpn_head = head - self.anchor_generator = anchor_generator - self.anchor_matcher = anchor_matcher - self.box2box_transform = box2box_transform - self.batch_size_per_image = batch_size_per_image - self.positive_fraction = positive_fraction - # Map from self.training state to train/test settings - self.pre_nms_topk = {True: pre_nms_topk[0], False: pre_nms_topk[1]} - self.post_nms_topk = {True: post_nms_topk[0], False: post_nms_topk[1]} - self.nms_thresh = nms_thresh - self.min_box_size = float(min_box_size) - self.anchor_boundary_thresh = anchor_boundary_thresh - if isinstance(loss_weight, float): - loss_weight = {"loss_rpn_cls": loss_weight, "loss_rpn_loc": loss_weight} - self.loss_weight = loss_weight - self.box_reg_loss_type = box_reg_loss_type - self.smooth_l1_beta = smooth_l1_beta - - @classmethod - def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]): - in_features = cfg.MODEL.RPN.IN_FEATURES - ret = { - "in_features": in_features, - "min_box_size": cfg.MODEL.PROPOSAL_GENERATOR.MIN_SIZE, - "nms_thresh": cfg.MODEL.RPN.NMS_THRESH, - "batch_size_per_image": cfg.MODEL.RPN.BATCH_SIZE_PER_IMAGE, - "positive_fraction": cfg.MODEL.RPN.POSITIVE_FRACTION, - "loss_weight": { - "loss_rpn_cls": cfg.MODEL.RPN.LOSS_WEIGHT, - "loss_rpn_loc": cfg.MODEL.RPN.BBOX_REG_LOSS_WEIGHT * cfg.MODEL.RPN.LOSS_WEIGHT, - }, - "anchor_boundary_thresh": cfg.MODEL.RPN.BOUNDARY_THRESH, - "box2box_transform": Box2BoxTransform(weights=cfg.MODEL.RPN.BBOX_REG_WEIGHTS), - "box_reg_loss_type": cfg.MODEL.RPN.BBOX_REG_LOSS_TYPE, - "smooth_l1_beta": cfg.MODEL.RPN.SMOOTH_L1_BETA, - } - - ret["pre_nms_topk"] = (cfg.MODEL.RPN.PRE_NMS_TOPK_TRAIN, cfg.MODEL.RPN.PRE_NMS_TOPK_TEST) - ret["post_nms_topk"] = (cfg.MODEL.RPN.POST_NMS_TOPK_TRAIN, cfg.MODEL.RPN.POST_NMS_TOPK_TEST) - - ret["anchor_generator"] = build_anchor_generator(cfg, [input_shape[f] for f in in_features]) - ret["anchor_matcher"] = Matcher( - cfg.MODEL.RPN.IOU_THRESHOLDS, cfg.MODEL.RPN.IOU_LABELS, allow_low_quality_matches=True - ) - ret["head"] = build_rpn_head(cfg, [input_shape[f] for f in in_features]) - return ret - - def _subsample_labels(self, label): - """ - Randomly sample a subset of positive and negative examples, and overwrite - the label vector to the ignore value (-1) for all elements that are not - included in the sample. - - Args: - labels (Tensor): a vector of -1, 0, 1. Will be modified in-place and returned. - """ - pos_idx, neg_idx = subsample_labels( - label, self.batch_size_per_image, self.positive_fraction, 0 - ) - # Fill with the ignore label (-1), then set positive and negative labels - label.fill_(-1) - label.scatter_(0, pos_idx, 1) - label.scatter_(0, neg_idx, 0) - return label - - @torch.jit.unused - @torch.no_grad() - def label_and_sample_anchors( - self, anchors: List[Boxes], gt_instances: List[Instances] - ) -> Tuple[List[torch.Tensor], List[torch.Tensor]]: - """ - Args: - anchors (list[Boxes]): anchors for each feature map. - gt_instances: the ground-truth instances for each image. - - Returns: - list[Tensor]: - List of #img tensors. i-th element is a vector of labels whose length is - the total number of anchors across all feature maps R = sum(Hi * Wi * A). - Label values are in {-1, 0, 1}, with meanings: -1 = ignore; 0 = negative - class; 1 = positive class. - list[Tensor]: - i-th element is a Rx4 tensor. The values are the matched gt boxes for each - anchor. Values are undefined for those anchors not labeled as 1. 
- """ - anchors = Boxes.cat(anchors) - - gt_boxes = [x.gt_boxes for x in gt_instances] - image_sizes = [x.image_size for x in gt_instances] - del gt_instances - - gt_labels = [] - matched_gt_boxes = [] - for image_size_i, gt_boxes_i in zip(image_sizes, gt_boxes): - """ - image_size_i: (h, w) for the i-th image - gt_boxes_i: ground-truth boxes for i-th image - """ - - match_quality_matrix = retry_if_cuda_oom(pairwise_iou)(gt_boxes_i, anchors) - matched_idxs, gt_labels_i = retry_if_cuda_oom(self.anchor_matcher)(match_quality_matrix) - # Matching is memory-expensive and may result in CPU tensors. But the result is small - gt_labels_i = gt_labels_i.to(device=gt_boxes_i.device) - del match_quality_matrix - - if self.anchor_boundary_thresh >= 0: - # Discard anchors that go out of the boundaries of the image - # NOTE: This is legacy functionality that is turned off by default in Detectron2 - anchors_inside_image = anchors.inside_box(image_size_i, self.anchor_boundary_thresh) - gt_labels_i[~anchors_inside_image] = -1 - - # A vector of labels (-1, 0, 1) for each anchor - gt_labels_i = self._subsample_labels(gt_labels_i) - - if len(gt_boxes_i) == 0: - # These values won't be used anyway since the anchor is labeled as background - matched_gt_boxes_i = torch.zeros_like(anchors.tensor) - else: - # TODO wasted indexing computation for ignored boxes - matched_gt_boxes_i = gt_boxes_i[matched_idxs].tensor - - gt_labels.append(gt_labels_i) # N,AHW - matched_gt_boxes.append(matched_gt_boxes_i) - return gt_labels, matched_gt_boxes - - @torch.jit.unused - def losses( - self, - anchors: List[Boxes], - pred_objectness_logits: List[torch.Tensor], - gt_labels: List[torch.Tensor], - pred_anchor_deltas: List[torch.Tensor], - gt_boxes: List[torch.Tensor], - ) -> Dict[str, torch.Tensor]: - """ - Return the losses from a set of RPN predictions and their associated ground-truth. - - Args: - anchors (list[Boxes or RotatedBoxes]): anchors for each feature map, each - has shape (Hi*Wi*A, B), where B is box dimension (4 or 5). - pred_objectness_logits (list[Tensor]): A list of L elements. - Element i is a tensor of shape (N, Hi*Wi*A) representing - the predicted objectness logits for all anchors. - gt_labels (list[Tensor]): Output of :meth:`label_and_sample_anchors`. - pred_anchor_deltas (list[Tensor]): A list of L elements. Element i is a tensor of shape - (N, Hi*Wi*A, 4 or 5) representing the predicted "deltas" used to transform anchors - to proposals. - gt_boxes (list[Tensor]): Output of :meth:`label_and_sample_anchors`. - - Returns: - dict[loss name -> loss value]: A dict mapping from loss name to loss value. - Loss names are: `loss_rpn_cls` for objectness classification and - `loss_rpn_loc` for proposal localization. 
- """ - num_images = len(gt_labels) - gt_labels = torch.stack(gt_labels) # (N, sum(Hi*Wi*Ai)) - - # Log the number of positive/negative anchors per-image that's used in training - pos_mask = gt_labels == 1 - num_pos_anchors = pos_mask.sum().item() - num_neg_anchors = (gt_labels == 0).sum().item() - storage = get_event_storage() - storage.put_scalar("rpn/num_pos_anchors", num_pos_anchors / num_images) - storage.put_scalar("rpn/num_neg_anchors", num_neg_anchors / num_images) - - localization_loss = _dense_box_regression_loss( - anchors, - self.box2box_transform, - pred_anchor_deltas, - gt_boxes, - pos_mask, - box_reg_loss_type=self.box_reg_loss_type, - smooth_l1_beta=self.smooth_l1_beta, - ) - - valid_mask = gt_labels >= 0 - objectness_loss = F.binary_cross_entropy_with_logits( - cat(pred_objectness_logits, dim=1)[valid_mask], - gt_labels[valid_mask].to(torch.float32), - reduction="sum", - ) - normalizer = self.batch_size_per_image * num_images - losses = { - "loss_rpn_cls": objectness_loss / normalizer, - # The original Faster R-CNN paper uses a slightly different normalizer - # for loc loss. But it doesn't matter in practice - "loss_rpn_loc": localization_loss / normalizer, - } - losses = {k: v * self.loss_weight.get(k, 1.0) for k, v in losses.items()} - return losses - - def forward( - self, - images: ImageList, - features: Dict[str, torch.Tensor], - gt_instances: Optional[List[Instances]] = None, - ): - """ - Args: - images (ImageList): input images of length `N` - features (dict[str, Tensor]): input data as a mapping from feature - map name to tensor. Axis 0 represents the number of images `N` in - the input data; axes 1-3 are channels, height, and width, which may - vary between feature maps (e.g., if a feature pyramid is used). - gt_instances (list[Instances], optional): a length `N` list of `Instances`s. - Each `Instances` stores ground-truth instances for the corresponding image. - - Returns: - proposals: list[Instances]: contains fields "proposal_boxes", "objectness_logits" - loss: dict[Tensor] or None - """ - features = [features[f] for f in self.in_features] - anchors = self.anchor_generator(features) - - pred_objectness_logits, pred_anchor_deltas = self.rpn_head(features) - # Transpose the Hi*Wi*A dimension to the middle: - pred_objectness_logits = [ - # (N, A, Hi, Wi) -> (N, Hi, Wi, A) -> (N, Hi*Wi*A) - score.permute(0, 2, 3, 1).flatten(1) - for score in pred_objectness_logits - ] - pred_anchor_deltas = [ - # (N, A*B, Hi, Wi) -> (N, A, B, Hi, Wi) -> (N, Hi, Wi, A, B) -> (N, Hi*Wi*A, B) - x.view(x.shape[0], -1, self.anchor_generator.box_dim, x.shape[-2], x.shape[-1]) - .permute(0, 3, 4, 1, 2) - .flatten(1, -2) - for x in pred_anchor_deltas - ] - - if self.training: - assert gt_instances is not None, "RPN requires gt_instances in training!" - gt_labels, gt_boxes = self.label_and_sample_anchors(anchors, gt_instances) - losses = self.losses( - anchors, pred_objectness_logits, gt_labels, pred_anchor_deltas, gt_boxes - ) - else: - losses = {} - proposals = self.predict_proposals( - anchors, pred_objectness_logits, pred_anchor_deltas, images.image_sizes - ) - return proposals, losses - - def predict_proposals( - self, - anchors: List[Boxes], - pred_objectness_logits: List[torch.Tensor], - pred_anchor_deltas: List[torch.Tensor], - image_sizes: List[Tuple[int, int]], - ): - """ - Decode all the predicted box regression deltas to proposals. Find the top proposals - by applying NMS and removing boxes that are too small. - - Returns: - proposals (list[Instances]): list of N Instances. 
The i-th Instances - stores post_nms_topk object proposals for image i, sorted by their - objectness score in descending order. - """ - # The proposals are treated as fixed for joint training with roi heads. - # This approach ignores the derivative w.r.t. the proposal boxes’ coordinates that - # are also network responses. - with torch.no_grad(): - pred_proposals = self._decode_proposals(anchors, pred_anchor_deltas) - return find_top_rpn_proposals( - pred_proposals, - pred_objectness_logits, - image_sizes, - self.nms_thresh, - self.pre_nms_topk[self.training], - self.post_nms_topk[self.training], - self.min_box_size, - self.training, - ) - - def _decode_proposals(self, anchors: List[Boxes], pred_anchor_deltas: List[torch.Tensor]): - """ - Transform anchors into proposals by applying the predicted anchor deltas. - - Returns: - proposals (list[Tensor]): A list of L tensors. Tensor i has shape - (N, Hi*Wi*A, B) - """ - N = pred_anchor_deltas[0].shape[0] - proposals = [] - # For each feature map - for anchors_i, pred_anchor_deltas_i in zip(anchors, pred_anchor_deltas): - B = anchors_i.tensor.size(1) - pred_anchor_deltas_i = pred_anchor_deltas_i.reshape(-1, B) - # Expand anchors to shape (N*Hi*Wi*A, B) - anchors_i = anchors_i.tensor.unsqueeze(0).expand(N, -1, -1).reshape(-1, B) - proposals_i = self.box2box_transform.apply_deltas(pred_anchor_deltas_i, anchors_i) - # Append feature map proposals with shape (N, Hi*Wi*A, B) - proposals.append(proposals_i.view(N, -1, B)) - return proposals diff --git a/spaces/ysharma/LLaVA_v1/llava/model/multimodal_projector/builder.py b/spaces/ysharma/LLaVA_v1/llava/model/multimodal_projector/builder.py deleted file mode 100644 index 31cd4f48e6055cd6d00a162af30b1c8139e26b57..0000000000000000000000000000000000000000 --- a/spaces/ysharma/LLaVA_v1/llava/model/multimodal_projector/builder.py +++ /dev/null @@ -1,51 +0,0 @@ -import torch -import torch.nn as nn -import re - - -class IdentityMap(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, x, *args, **kwargs): - return x - - @property - def config(self): - return {"mm_projector_type": 'identity'} - - -class SimpleResBlock(nn.Module): - def __init__(self, channels): - super().__init__() - self.pre_norm = nn.LayerNorm(channels) - - self.proj = nn.Sequential( - nn.Linear(channels, channels), - nn.GELU(), - nn.Linear(channels, channels) - ) - def forward(self, x): - x = self.pre_norm(x) - return x + self.proj(x) - - -def build_vision_projector(config, delay_load=False, **kwargs): - projector_type = getattr(config, 'mm_projector_type', 'linear') - - if projector_type == 'linear': - return nn.Linear(config.mm_hidden_size, config.hidden_size) - - mlp_gelu_match = re.match(r'^mlp(\d+)x_gelu$', projector_type) - if mlp_gelu_match: - mlp_depth = int(mlp_gelu_match.group(1)) - modules = [nn.Linear(config.mm_hidden_size, config.hidden_size)] - for _ in range(1, mlp_depth): - modules.append(nn.GELU()) - modules.append(nn.Linear(config.hidden_size, config.hidden_size)) - return nn.Sequential(*modules) - - if projector_type == 'identity': - return IdentityMap() - - raise ValueError(f'Unknown projector type: {projector_type}') diff --git a/spaces/yukie/yukie-sovits3/vdecoder/__init__.py b/spaces/yukie/yukie-sovits3/vdecoder/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/zdxiaoda/sovits-4.0-V1-anime-character-model/so-vits-svc/train.py b/spaces/zdxiaoda/sovits-4.0-V1-anime-character-model/so-vits-svc/train.py 
deleted file mode 100644 index 410f19213866f388763f0c9ac21c24c09dd5dfea..0000000000000000000000000000000000000000 --- a/spaces/zdxiaoda/sovits-4.0-V1-anime-character-model/so-vits-svc/train.py +++ /dev/null @@ -1,330 +0,0 @@ -import logging -import multiprocessing -import time - -logging.getLogger('matplotlib').setLevel(logging.WARNING) -logging.getLogger('numba').setLevel(logging.WARNING) - -import os -import json -import argparse -import itertools -import math -import torch -from torch import nn, optim -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.multiprocessing as mp -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler - -import modules.commons as commons -import utils -from data_utils import TextAudioSpeakerLoader, TextAudioCollate -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, -) -from modules.losses import ( - kl_loss, - generator_loss, discriminator_loss, feature_loss -) - -from modules.mel_processing import mel_spectrogram_torch, spec_to_mel_torch - -torch.backends.cudnn.benchmark = True -global_step = 0 -start_time = time.time() - -# os.environ['TORCH_DISTRIBUTED_DEBUG'] = 'INFO' - - -def main(): - """Assume Single Node Multi GPUs Training Only""" - assert torch.cuda.is_available(), "CPU training is not allowed." - hps = utils.get_hparams() - - n_gpus = torch.cuda.device_count() - os.environ['MASTER_ADDR'] = 'localhost' - os.environ['MASTER_PORT'] = hps.train.port - - mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,)) - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - - # for pytorch on win, backend use gloo - dist.init_process_group(backend= 'gloo' if os.name == 'nt' else 'nccl', init_method='env://', world_size=n_gpus, rank=rank) - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - collate_fn = TextAudioCollate() - all_in_mem = hps.train.all_in_mem # If you have enough memory, turn on this option to avoid disk IO and speed up training. 
- train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps, all_in_mem=all_in_mem) - num_workers = 5 if multiprocessing.cpu_count() > 4 else multiprocessing.cpu_count() - if all_in_mem: - num_workers = 0 - train_loader = DataLoader(train_dataset, num_workers=num_workers, shuffle=False, pin_memory=True, - batch_size=hps.train.batch_size, collate_fn=collate_fn) - if rank == 0: - eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps, all_in_mem=all_in_mem) - eval_loader = DataLoader(eval_dataset, num_workers=1, shuffle=False, - batch_size=1, pin_memory=False, - drop_last=False, collate_fn=collate_fn) - - net_g = SynthesizerTrn( - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model).cuda(rank) - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW( - net_g.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - net_g = DDP(net_g, device_ids=[rank]) # , find_unused_parameters=True) - net_d = DDP(net_d, device_ids=[rank]) - - skip_optimizer = False - try: - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, - optim_g, skip_optimizer) - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, - optim_d, skip_optimizer) - epoch_str = max(epoch_str, 1) - name=utils.latest_checkpoint_path(hps.model_dir, "D_*.pth") - global_step=int(name[name.rfind("_")+1:name.rfind(".")])+1 - #global_step = (epoch_str - 1) * len(train_loader) - except: - print("load old checkpoint failed...") - epoch_str = 1 - global_step = 0 - if skip_optimizer: - epoch_str = 1 - global_step = 0 - - warmup_epoch = hps.train.warmup_epochs - scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - # update learning rate - if epoch > 1: - scheduler_g.step() - scheduler_d.step() - # set up warm-up learning rate - if epoch <= warmup_epoch: - for param_group in optim_g.param_groups: - param_group['lr'] = hps.train.learning_rate / warmup_epoch * epoch - for param_group in optim_d.param_groups: - param_group['lr'] = hps.train.learning_rate / warmup_epoch * epoch - # training - if rank == 0: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, - [train_loader, eval_loader], logger, [writer, writer_eval]) - else: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, - [train_loader, None], None, None) - - -def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers): - net_g, net_d = nets - optim_g, optim_d = optims - scheduler_g, scheduler_d = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - # train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - for batch_idx, items in enumerate(train_loader): - c, f0, spec, y, spk, lengths, uv = items - g = spk.cuda(rank, non_blocking=True) - spec, y = spec.cuda(rank, non_blocking=True), 
y.cuda(rank, non_blocking=True) - c = c.cuda(rank, non_blocking=True) - f0 = f0.cuda(rank, non_blocking=True) - uv = uv.cuda(rank, non_blocking=True) - lengths = lengths.cuda(rank, non_blocking=True) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - - with autocast(enabled=hps.train.fp16_run): - y_hat, ids_slice, z_mask, \ - (z, z_p, m_p, logs_p, m_q, logs_q), pred_lf0, norm_lf0, lf0 = net_g(c, f0, uv, spec, g=g, c_lengths=lengths, - spec_lengths=lengths) - - y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g) - loss_disc_all = loss_disc - - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - with autocast(enabled=False): - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_lf0 = F.mse_loss(pred_lf0, lf0) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_kl + loss_lf0 - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank == 0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]['lr'] - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_kl] - reference_loss=0 - for i in losses: - reference_loss += i - logger.info('Train Epoch: {} [{:.0f}%]'.format( - epoch, - 100. 
* batch_idx / len(train_loader))) - logger.info(f"Losses: {[x.item() for x in losses]}, step: {global_step}, lr: {lr}, reference_loss: {reference_loss}") - - scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, - "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g} - scalar_dict.update({"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/kl": loss_kl, - "loss/g/lf0": loss_lf0}) - - # scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}) - # scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}) - # scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}) - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()), - "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()), - "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()), - "all/lf0": utils.plot_data_to_numpy(lf0[0, 0, :].cpu().numpy(), - pred_lf0[0, 0, :].detach().cpu().numpy()), - "all/norm_lf0": utils.plot_data_to_numpy(lf0[0, 0, :].cpu().numpy(), - norm_lf0[0, 0, :].detach().cpu().numpy()) - } - - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict - ) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(global_step))) - utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "D_{}.pth".format(global_step))) - keep_ckpts = getattr(hps.train, 'keep_ckpts', 0) - if keep_ckpts > 0: - utils.clean_checkpoints(path_to_models=hps.model_dir, n_ckpts_to_keep=keep_ckpts, sort_by_time=True) - - global_step += 1 - - if rank == 0: - global start_time - now = time.time() - durtaion = format(now - start_time, '.2f') - logger.info(f'====> Epoch: {epoch}, cost {durtaion} s') - start_time = now - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - image_dict = {} - audio_dict = {} - with torch.no_grad(): - for batch_idx, items in enumerate(eval_loader): - c, f0, spec, y, spk, _, uv = items - g = spk[:1].cuda(0) - spec, y = spec[:1].cuda(0), y[:1].cuda(0) - c = c[:1].cuda(0) - f0 = f0[:1].cuda(0) - uv= uv[:1].cuda(0) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_hat = generator.module.infer(c, f0, uv, g=g) - - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - - audio_dict.update({ - f"gen/audio_{batch_idx}": y_hat[0], - f"gt/audio_{batch_idx}": y[0] - }) - image_dict.update({ - f"gen/mel": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy()), - "gt/mel": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy()) - }) - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate - ) - generator.train() - - -if __name__ == "__main__": - main() diff --git a/spaces/zhang-wei-jian/docker/node_modules/glob-parent/README.md b/spaces/zhang-wei-jian/docker/node_modules/glob-parent/README.md deleted file mode 100644 index 
36a279384b14a8ea5b723b1d952c283a109f690e..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/glob-parent/README.md +++ /dev/null @@ -1,137 +0,0 @@ -


              - -# glob-parent - -[![NPM version][npm-image]][npm-url] [![Downloads][downloads-image]][npm-url] [![Azure Pipelines Build Status][azure-pipelines-image]][azure-pipelines-url] [![Travis Build Status][travis-image]][travis-url] [![AppVeyor Build Status][appveyor-image]][appveyor-url] [![Coveralls Status][coveralls-image]][coveralls-url] [![Gitter chat][gitter-image]][gitter-url] - -Extract the non-magic parent path from a glob string. - -## Usage - -```js -var globParent = require('glob-parent'); - -globParent('path/to/*.js'); // 'path/to' -globParent('/root/path/to/*.js'); // '/root/path/to' -globParent('/*.js'); // '/' -globParent('*.js'); // '.' -globParent('**/*.js'); // '.' -globParent('path/{to,from}'); // 'path' -globParent('path/!(to|from)'); // 'path' -globParent('path/?(to|from)'); // 'path' -globParent('path/+(to|from)'); // 'path' -globParent('path/*(to|from)'); // 'path' -globParent('path/@(to|from)'); // 'path' -globParent('path/**/*'); // 'path' - -// if provided a non-glob path, returns the nearest dir -globParent('path/foo/bar.js'); // 'path/foo' -globParent('path/foo/'); // 'path/foo' -globParent('path/foo'); // 'path' (see issue #3 for details) -``` - -## API - -### `globParent(maybeGlobString, [options])` - -Takes a string and returns the part of the path before the glob begins. Be aware of Escaping rules and Limitations below. - -#### options - -```js -{ - // Disables the automatic conversion of slashes for Windows - flipBackslashes: true -} -``` - -## Escaping - -The following characters have special significance in glob patterns and must be escaped if you want them to be treated as regular path characters: - -- `?` (question mark) unless used as a path segment alone -- `*` (asterisk) -- `|` (pipe) -- `(` (opening parenthesis) -- `)` (closing parenthesis) -- `{` (opening curly brace) -- `}` (closing curly brace) -- `[` (opening bracket) -- `]` (closing bracket) - -**Example** - -```js -globParent('foo/[bar]/') // 'foo' -globParent('foo/\\[bar]/') // 'foo/[bar]' -``` - -## Limitations - -### Braces & Brackets -This library attempts a quick and imperfect method of determining which path -parts have glob magic without fully parsing/lexing the pattern. There are some -advanced use cases that can trip it up, such as nested braces where the outer -pair is escaped and the inner one contains a path separator. If you find -yourself in the unlikely circumstance of being affected by this or need to -ensure higher-fidelity glob handling in your library, it is recommended that you -pre-process your input with [expand-braces] and/or [expand-brackets]. - -### Windows -Backslashes are not valid path separators for globs. If a path with backslashes -is provided anyway, for simple cases, glob-parent will replace the path -separator for you and return the non-glob parent path (now with -forward-slashes, which are still valid as Windows path separators). - -This cannot be used in conjunction with escape characters. - -```js -// BAD -globParent('C:\\Program Files \\(x86\\)\\*.ext') // 'C:/Program Files /(x86/)' - -// GOOD -globParent('C:/Program Files\\(x86\\)/*.ext') // 'C:/Program Files (x86)' -``` - -If you are using escape characters for a pattern without path parts (i.e. -relative to `cwd`), prefix with `./` to avoid confusing glob-parent. - -```js -// BAD -globParent('foo \\[bar]') // 'foo ' -globParent('foo \\[bar]*') // 'foo ' - -// GOOD -globParent('./foo \\[bar]') // 'foo [bar]' -globParent('./foo \\[bar]*') // '.' 
-``` - -## License - -ISC - -[expand-braces]: https://github.com/jonschlinkert/expand-braces -[expand-brackets]: https://github.com/jonschlinkert/expand-brackets - -[downloads-image]: https://img.shields.io/npm/dm/glob-parent.svg -[npm-url]: https://www.npmjs.com/package/glob-parent -[npm-image]: https://img.shields.io/npm/v/glob-parent.svg - -[azure-pipelines-url]: https://dev.azure.com/gulpjs/gulp/_build/latest?definitionId=2&branchName=master -[azure-pipelines-image]: https://dev.azure.com/gulpjs/gulp/_apis/build/status/glob-parent?branchName=master - -[travis-url]: https://travis-ci.org/gulpjs/glob-parent -[travis-image]: https://img.shields.io/travis/gulpjs/glob-parent.svg?label=travis-ci - -[appveyor-url]: https://ci.appveyor.com/project/gulpjs/glob-parent -[appveyor-image]: https://img.shields.io/appveyor/ci/gulpjs/glob-parent.svg?label=appveyor - -[coveralls-url]: https://coveralls.io/r/gulpjs/glob-parent -[coveralls-image]: https://img.shields.io/coveralls/gulpjs/glob-parent/master.svg - -[gitter-url]: https://gitter.im/gulpjs/gulp -[gitter-image]: https://badges.gitter.im/gulpjs/gulp.svg diff --git a/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/src/hasNewVersion.ts b/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/src/hasNewVersion.ts deleted file mode 100644 index 31d5069f97d3abf90d2c105374c21c44bde35b82..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/src/hasNewVersion.ts +++ /dev/null @@ -1,40 +0,0 @@ -import semver from 'semver'; -import { createConfigDir, getLastUpdate, saveLastUpdate } from './cache'; -import getDistVersion from './getDistVersion'; -import { IUpdate } from './types'; - -const hasNewVersion = async ({ - pkg, - updateCheckInterval = 1000 * 60 * 60 * 24, - distTag = 'latest', - alwaysRun, - debug, -}: IUpdate) => { - createConfigDir(); - const lastUpdateCheck = getLastUpdate(pkg.name); - if ( - alwaysRun || - !lastUpdateCheck || - lastUpdateCheck < new Date().getTime() - updateCheckInterval - ) { - const latestVersion = await getDistVersion(pkg.name, distTag); - saveLastUpdate(pkg.name); - if (semver.gt(latestVersion, pkg.version)) { - return latestVersion; - } else if (debug) { - console.error( - `Latest version (${latestVersion}) not newer than current version (${pkg.version})` - ); - } - } else if (debug) { - console.error( - `Too recent to check for a new update. simpleUpdateNotifier() interval set to ${updateCheckInterval}ms but only ${ - new Date().getTime() - lastUpdateCheck - }ms since last check.` - ); - } - - return false; -}; - -export default hasNewVersion; diff --git a/spaces/zhiqwang/assets/README.md b/spaces/zhiqwang/assets/README.md deleted file mode 100644 index 25945126dbb175d274dc8f550cd794d2d47e0033..0000000000000000000000000000000000000000 --- a/spaces/zhiqwang/assets/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Yolort -emoji: 🔥 -colorFrom: purple -colorTo: green -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. 
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/zhuyuheng/IMossGPT/assets/Kelpy-Codos.js b/spaces/zhuyuheng/IMossGPT/assets/Kelpy-Codos.js deleted file mode 100644 index cfbaeedb4f371dfb5fe157db545b364046fca3e1..0000000000000000000000000000000000000000 --- a/spaces/zhuyuheng/IMossGPT/assets/Kelpy-Codos.js +++ /dev/null @@ -1,76 +0,0 @@ -// ==UserScript== -// @name Kelpy Codos -// @namespace https://github.com/Keldos-Li/Kelpy-Codos -// @version 1.0.5 -// @author Keldos; https://keldos.me/ -// @description Add copy button to PRE tags before CODE tag, for Chuanhu ChatGPT especially. -// Based on Chuanhu ChatGPT version: ac04408 (2023-3-22) -// @license GPL-3.0 -// @grant none -// ==/UserScript== - -(function () { - 'use strict'; - - function addCopyButton(pre) { - var code = pre.querySelector('code'); - if (!code) { - return; // 如果没有找到 元素,则不添加按钮 - } - var firstChild = code.firstChild; - if (!firstChild) { - return; // 如果 元素没有子节点,则不添加按钮 - } - var button = document.createElement('button'); - button.textContent = '\uD83D\uDCCE'; // 使用 📎 符号作为“复制”按钮的文本 - button.style.position = 'relative'; - button.style.float = 'right'; - button.style.fontSize = '1em'; // 可选:调整按钮大小 - button.style.background = 'none'; // 可选:去掉背景颜色 - button.style.border = 'none'; // 可选:去掉边框 - button.style.cursor = 'pointer'; // 可选:显示指针样式 - button.addEventListener('click', function () { - var range = document.createRange(); - range.selectNodeContents(code); - range.setStartBefore(firstChild); // 将范围设置为第一个子节点之前 - var selection = window.getSelection(); - selection.removeAllRanges(); - selection.addRange(range); - - try { - var success = document.execCommand('copy'); - if (success) { - button.textContent = '\u2714'; - setTimeout(function () { - button.textContent = '\uD83D\uDCCE'; // 恢复按钮为“复制” - }, 2000); - } else { - button.textContent = '\u2716'; - } - } catch (e) { - console.error(e); - button.textContent = '\u2716'; - } - - selection.removeAllRanges(); - }); - code.insertBefore(button, firstChild); // 将按钮插入到第一个子元素之前 - } - - function handleNewElements(mutationsList, observer) { - for (var mutation of mutationsList) { - if (mutation.type === 'childList') { - for (var node of mutation.addedNodes) { - if (node.nodeName === 'PRE') { - addCopyButton(node); - } - } - } - } - } - - var observer = new MutationObserver(handleNewElements); - observer.observe(document.documentElement, { childList: true, subtree: true }); - - document.querySelectorAll('pre').forEach(addCopyButton); -})(); diff --git a/spaces/zzz666/ChuanhuChatGPT/run_Linux.sh b/spaces/zzz666/ChuanhuChatGPT/run_Linux.sh deleted file mode 100644 index 62af07283093d8e580763d7acfe493c3d88e7b08..0000000000000000000000000000000000000000 --- a/spaces/zzz666/ChuanhuChatGPT/run_Linux.sh +++ /dev/null @@ -1,25 +0,0 @@ -#!/bin/bash - -# 获取脚本所在目录 -script_dir=$(dirname "$0") - -# 将工作目录更改为脚本所在目录 -cd "$script_dir" - -# 检查Git仓库是否有更新 -git remote update -pwd - -if ! 
git status -uno | grep 'up to date' > /dev/null; then - # 如果有更新,关闭当前运行的服务器 - pkill -f ChuanhuChatbot.py - - # 拉取最新更改 - git pull - - # 安装依赖 - pip3 install -r requirements.txt - - # 重新启动服务器 - nohup python3 ChuanhuChatbot.py & -fi diff --git a/spaces/zzzzzz/text2image/app.py b/spaces/zzzzzz/text2image/app.py deleted file mode 100644 index 9b1d95bcbc4765f84916c9a4a84e334e274ab53e..0000000000000000000000000000000000000000 --- a/spaces/zzzzzz/text2image/app.py +++ /dev/null @@ -1,734 +0,0 @@ -import numpy as np -import gradio as gr -import paddlehub as hub - - -model = hub.Module(name='ernie_vilg') -language_translation_model = hub.Module(name='baidu_translate') -language_recognition_model = hub.Module(name='baidu_language_recognition') - -style_list = ['水彩','油画', '粉笔画', '卡通', '蜡笔画', '儿童画', '探索无限'] - -tips = {"en": "Tips: The input text will be translated into Chinese for generation", - "jp": "ヒント: 入力テキストは生成のために中国語に翻訳されます", - "kor": "힌트: 입력 텍스트는 생성을 위해 중국어로 번역됩니다"} - -count = 0 - -def translate_language(text_prompts): - global count - try: - count += 1 - tips_text = None - language_code = language_recognition_model.recognize(text_prompts) - if language_code != 'zh': - text_prompts = language_translation_model.translate(text_prompts, language_code, 'zh') - except Exception as e: - error_text = str(e) - return {status_text:error_text, language_tips_text:gr.update(visible=False)} - if language_code in tips: - tips_text = tips[language_code] - else: - tips_text = tips['en'] - if language_code == 'zh': - return {language_tips_text:gr.update(visible=False), translated_language:text_prompts, trigger_component: gr.update(value=count, visible=False)} - else: - return {language_tips_text:gr.update(visible=True, value=tips_text), translated_language:text_prompts, trigger_component: gr.update(value=count, visible=False)} - - -def inference(text_prompts, style_indx): - try: - style = style_list[style_indx] - results = model.generate_image( - text_prompts=text_prompts, style=style, visualization=False) - except Exception as e: - error_text = str(e) - return {status_text:error_text, gallery:None} - return {status_text:'Success', gallery:results[:6]} - - -title="ERNIE-ViLG" - -description="ERNIE-ViLG model, which supports text-to-image task." 
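The two handlers above chain three PaddleHub modules: language recognition, translation into Chinese, and ERNIE-ViLG image generation. Stripped of the Gradio plumbing, the same flow amounts to the sketch below, which reuses the module handles created at the top of this file and only the calls that already appear above:

```python
def text_to_images(prompt: str, style: str = "探索无限"):
    """Detect the prompt language, translate to Chinese if needed, then generate images."""
    lang = language_recognition_model.recognize(prompt)
    if lang != "zh":
        prompt = language_translation_model.translate(prompt, lang, "zh")
    return model.generate_image(text_prompts=prompt, style=style, visualization=False)
```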
- -css = """ - .gradio-container { - font-family: 'IBM Plex Sans', sans-serif; - } - .gr-button { - color: white; - border-color: black; - background: black; - } - input[type='range'] { - accent-color: black; - } - .dark input[type='range'] { - accent-color: #dfdfdf; - } - .container { - max-width: 730px; - margin: auto; - padding-top: 1.5rem; - } - #gallery { - min-height: 22rem; - margin-bottom: 15px; - margin-left: auto; - margin-right: auto; - border-bottom-right-radius: .5rem !important; - border-bottom-left-radius: .5rem !important; - } - #gallery>div>.h-full { - min-height: 20rem; - } - .details:hover { - text-decoration: underline; - } - .gr-button { - white-space: nowrap; - } - .gr-button:focus { - border-color: rgb(147 197 253 / var(--tw-border-opacity)); - outline: none; - box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); - --tw-border-opacity: 1; - --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color); - --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color); - --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity)); - --tw-ring-opacity: .5; - } - .footer { - margin-bottom: 45px; - margin-top: 35px; - text-align: center; - border-bottom: 1px solid #e5e5e5; - } - .footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; - } - .dark .footer { - border-color: #303030; - } - .dark .footer>p { - background: #0b0f19; - } - .prompt h4{ - margin: 1.25em 0 .25em 0; - font-weight: bold; - font-size: 115%; - } -""" - -block = gr.Blocks(css=css) - -examples = [ - [ - '戴着眼镜的猫', - '油画(Oil painting)' - ], - [ - 'A cat with glasses', - '油画(Oil painting)' - ], - [ - '眼鏡をかけた猫', - '油画(Oil painting)' - ], - [ - '안경을 쓴 고양이', - '油画(Oil painting)' - ], - [ - '日落时的城市天际线,史前遗迹风格', - '油画(Oil painting)' - ], - [ - '一只猫坐在椅子上,戴着一副墨镜, low poly 风格', - '卡通(Cartoon)' - ], - [ - 'A cat sitting on a chair, wearing a pair of sunglasses, low poly style', - '油画(Oil painting)' - ], - [ - '猫が椅子に座ってサングラスをかけている、low polyスタイル', - '油画(Oil painting)' - ], - [ - '고양이 한 마리가 의자에 앉아 선글라스를 끼고 low poly 스타일을 하고 있다', - '油画(Oil painting)' - ], - [ - '一只猫坐在椅子上,戴着一副墨镜,秋天风格', - '探索无限(Explore infinity)' - ], - [ - '蒙娜丽莎,赛博朋克,宝丽来,33毫米,蒸汽波艺术', - '探索无限(Explore infinity)' - ], - [ - '一只猫坐在椅子上,戴着一副墨镜,海盗风格', - '探索无限(Explore infinity)' - ], - [ - '一条由闪电制成的令人敬畏的龙,概念艺术', - '探索无限(Explore infinity)' - ], - [ - 'An awesome dragon made of lightning, conceptual art', - '油画(Oil painting)' - ], - [ - '稲妻で作られた畏敬の念を抱かせる竜、コンセプトアート', - '油画(Oil painting)' - ], - [ - '번개로 만든 경외스러운 용, 개념 예술', - '油画(Oil painting)' - ], - [ - '梵高猫头鹰,蒸汽波艺术', - '探索无限(Explore infinity)' - ], - [ - '萨尔瓦多·达利描绘古代文明的超现实主义梦幻油画,写实风格', - '探索无限(Explore infinity)' - ], - [ - '夕阳日落时,阳光落在云层上,海面波涛汹涌,风景,胶片感', - '探索无限(Explore infinity)' - ], - [ - 'Sunset, the sun falls on the clouds, the sea is rough, the scenery is filmy', - '油画(Oil painting)' - ], - [ - '夕日が沈むと、雲の上に太陽の光が落ち、海面は波が荒く、風景、フィルム感', - '油画(Oil painting)' - ], - [ - '석양이 질 때 햇빛이 구름 위에 떨어지고, 해수면의 파도가 용솟음치며, 풍경, 필름감', - '油画(Oil painting)' - ], -] - -with block as demo: - gr.HTML( - """ -
              - <!-- Banner markup lost in extraction. It contained a Paddlehub badge, the "ERNIE-ViLG Demo" heading,
              -      and the tagline "ERNIE-ViLG is a state-of-the-art text-to-image model that generates images from Chinese text." -->
-        """
-    )
-    with gr.Group():
-        with gr.Box():
-            with gr.Row().style(mobile_collapse=False, equal_height=True):
-                text = gr.Textbox(
-                    label="Prompt",
-                    show_label=False,
-                    max_lines=1,
-                    placeholder="Enter your prompt, multiple languages are supported now.",
-                ).style(
-                    border=(True, False, True, True),
-                    rounded=(True, False, False, True),
-                    container=False,
-                )
-
-                btn = gr.Button("Generate image").style(
-                    margin=False,
-                    rounded=(False, True, True, False),
-                )
-        language_tips_text = gr.Textbox(label="language tips", show_label=False, visible=False, max_lines=1)
-        styles = gr.Dropdown(label="风格(style)", choices=['水彩(Watercolor)','油画(Oil painting)', '粉笔画(Chalk drawing)', '卡通(Cartoon)', '蜡笔画(Crayon drawing)', '儿童画(Children\'s drawing)', '探索无限(Explore infinity)'], value='探索无限(Explore infinity)', type="index")
-        gallery = gr.Gallery(
-            label="Generated images", show_label=False, elem_id="gallery"
-        ).style(grid=[2, 3], height="auto")
-        status_text = gr.Textbox(
-            label="处理状态(Process status)",
-            show_label=True,
-            max_lines=1,
-            interactive=False
-        )
-        trigger_component = gr.Textbox(value="", visible=False)  # This component is used to trigger the inference function.
-        translated_language = gr.Textbox(value="", visible=False)
-
-        # ex = gr.Examples(examples=examples, fn=translate_language, inputs=[text], outputs=[language_tips_text, status_text, trigger_component, translated_language], cache_examples=False)
-        # ex.dataset.headers = [""]
-
-
-        text.submit(translate_language, inputs=[text], outputs=[language_tips_text, status_text, trigger_component, translated_language])
-        btn.click(translate_language, inputs=[text], outputs=[language_tips_text, status_text, trigger_component, translated_language])
-        trigger_component.change(fn=inference, inputs=[translated_language, styles], outputs=[status_text, gallery])
-    gr.HTML(
-        """
              - <!-- Footer markup lost in extraction; the recoverable text follows. -->
              - Prompt公式
              - Prompt = [形容词] [主语],[细节设定],[修饰语或者艺术家]。
              - 关于各部分的构造方式和效果,可以参考 YouPromptMe 指南。
              - 更多的模型,请关注 PaddleHub 官方 Repo,如果你觉得不错,请 star 收藏吧。
              - Stars 8.4k
              - 同时,可以在 aistudio 上使用免费的 GPU 体验更多案例。
              - Prompt format
              - Prompt = [adjective] [object], [details], [styles or artists] (a short prompt-building sketch follows at the end of this section).
              - For more details, please refer to the YouPromptMe Guide.
              - There are more interesting models in PaddleHub; if you find them useful, please star PaddleHub.
              - Stars 8.4k
              - Besides, you can use free GPU resources in aistudio to try more examples and have fun.
              - star Paddlehub - - """ - ) - gr.Markdown( - """ -在"探索无限"的风格模式下,画作的真实风格完全可以由你的prompt来决定。下面是一些参考案例: - -In "Explore infinity" style mode, how the image looks like is totally up to your prompt. Below are some cases: - -### 复古未来主义风格 - -| ![00472_000_一只猫坐在椅子上,戴着一副墨镜,复古未来主义风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00472_000_一只猫坐在椅子上,戴着一副墨镜,复古未来主义风格.jpg) | ![00472_000_日落时的城市天际线,复古未来主义风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00472_000_日落时的城市天际线,复古未来主义风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,复古未来主义风格 | 日落时的城市天际线,复古未来主义风格 | - - - -### 粉彩朋克风格 - -| ![00017_004_一只猫坐在椅子上,戴着一副墨镜,粉彩朋克风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00017_004_一只猫坐在椅子上,戴着一副墨镜,粉彩朋克风格.jpg) | ![00029_001_日落时的城市天际线,粉彩朋克风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00029_001_日落时的城市天际线,粉彩朋克风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,粉彩朋克风格 | 日落时的城市天际线,粉彩朋克风格 | - -### 史前遗迹风格 - -| ![00443_005_一只猫坐在椅子上,戴着一副墨镜,史前遗迹风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00443_005_一只猫坐在椅子上,戴着一副墨镜,史前遗迹风格.jpg) | ![00443_005_日落时的城市天际线,史前遗迹风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00443_005_日落时的城市天际线,史前遗迹风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,史前遗迹风格 | 日落时的城市天际线,史前遗迹风格 | - - - - -### 波普艺术风格 - -| ![00434_005_一只猫坐在椅子上,戴着一副墨镜,波普艺术风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00434_005_一只猫坐在椅子上,戴着一副墨镜,波普艺术风格.jpg) | ![00434_002_日落时的城市天际线,波普艺术风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00434_002_日落时的城市天际线,波普艺术风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,波普艺术风格 | 日落时的城市天际线,后世界末日风格 | - - - -### 迷幻风格 - -| ![00451_000_一只猫坐在椅子上,戴着一副墨镜,迷幻药风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00451_000_一只猫坐在椅子上,戴着一副墨镜,迷幻药风格.jpg) | ![00451_001_日落时的城市天际线,迷幻药风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00451_001_日落时的城市天际线,迷幻药风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,迷幻风格 | 日落时的城市天际线,迷幻风格 | - - -### 赛博朋克风格 - -| ![00142_003_一只猫坐在椅子上,戴着一副墨镜,赛博朋克风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00142_003_一只猫坐在椅子上,戴着一副墨镜,赛博朋克风格.jpg) | ![00142_000_日落时的城市天际线,赛博朋克风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00142_000_日落时的城市天际线,赛博朋克风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,赛博朋克风格 | 日落时的城市天际线,赛博朋克风格 | - - -### 纸箱风格 - - -| 
![00081_000_一只猫坐在椅子上,戴着一副墨镜,纸箱风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00081_000_一只猫坐在椅子上,戴着一副墨镜,纸箱风格.jpg) | ![00081_000_日落时的城市天际线,纸箱风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00081_000_日落时的城市天际线,纸箱风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,纸箱风格 | 日落时的城市天际线,纸箱风格 | - -### 未来主义风格 - -| ![00083_000_一只猫坐在椅子上,戴着一副墨镜,未来主义风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00083_000_一只猫坐在椅子上,戴着一副墨镜,未来主义风格.jpg) | ![00083_002_日落时的城市天际线,未来主义风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00083_002_日落时的城市天际线,未来主义风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,未来主义风格 | 一只猫坐在椅子上,戴着一副墨镜,未来主义风格 | - - - -### 抽象技术风格 - -| ![00000_003_一只猫坐在椅子上,戴着一副墨镜, 抽象技术风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00000_003_一只猫坐在椅子上,戴着一副墨镜,抽象技术风格.jpg) | ![00000_004_日落时的城市天际线,抽象技术风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00000_004_日落时的城市天际线,抽象技术风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,抽象技术风格 | 日落时的城市天际线,抽象技术风格 | - - - - -### 海滩兔风格 - - -| ![00049_001_一只猫坐在椅子上,戴着一副墨镜,海滩兔风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00049_001_一只猫坐在椅子上,戴着一副墨镜,海滩兔风格.jpg) | ![00049_003_日落时的城市天际线,海滩兔风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00049_003_日落时的城市天际线,海滩兔风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,海滩兔风格 | 日落时的城市天际线,海滩兔风格 | - - -### 粉红公主风格 - -| ![00038_004_一只猫坐在椅子上,戴着一副墨镜,粉红公主风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00038_004_一只猫坐在椅子上,戴着一副墨镜,粉红公主风格.jpg) | ![00046_004_日落时的城市天际线,粉红公主风格-1](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00046_004_日落时的城市天际线,粉红公主风格-1.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,粉红公主风格 | 日落时的城市天际线,粉红公主风格 | - - -### 嬉皮士风格 - -| ![00275_002_一只猫坐在椅子上,戴着一副墨镜,嬉皮士风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00275_002_一只猫坐在椅子上,戴着一副墨镜,嬉皮士风格.jpg) | ![00275_001_日落时的城市天际线,嬉皮士风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00275_001_日落时的城市天际线,嬉皮士风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,嬉皮士风格 | 日落时的城市天际线,嬉皮士风格 | - -### 幻象之城风格 - -| ![00288_000_一只猫坐在椅子上,戴着一副墨镜,幻象之城风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00288_000_一只猫坐在椅子上,戴着一副墨镜,幻象之城风格.jpg) | ![00288_004_日落时的城市天际线,幻象之城风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00288_004_日落时的城市天际线,幻象之城风格.jpg) | -| 
------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,幻象之城风格 | 日落时的城市天际线,幻象之城风格 | - - -### 美人鱼风格 - -| ![00351_002_一只猫坐在椅子上,戴着一副墨镜,美人鱼风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00351_002_一只猫坐在椅子上,戴着一副墨镜,美人鱼风格.jpg) | ![00351_000_日落时的城市天际线,美人鱼风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00351_000_日落时的城市天际线,美人鱼风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,美人鱼风格 | 日落时的城市天际线,美人鱼风格 | - - -### 迷宫物语风格 - - -| ![00382_005_一只猫坐在椅子上,戴着一副墨镜,迷宫物语风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00382_005_一只猫坐在椅子上,戴着一副墨镜,迷宫物语风格.jpg) | ![00382_000_日落时的城市天际线,迷宫物语风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00382_000_日落时的城市天际线,迷宫物语风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,迷宫物语风格 | 日落时的城市天际线,迷宫物语风格 | - -### 仙女风格 - - -| ![00397_003_一只猫坐在椅子上,戴着一副墨镜,仙女风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00397_003_一只猫坐在椅子上,戴着一副墨镜,仙女风格.jpg) | ![00397_004_日落时的城市天际线,仙女风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00397_004_日落时的城市天际线,仙女风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,仙女风格 | 日落时的城市天际线,仙女风格 | - - - - - -### Low Poly 风格 - -| ![猫low-poly风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/猫low-poly风格.jpg) | ![sky-line-low-poly](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/sky-line-low-poly.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜, low poly 风格 | 日落时的城市天际线, low-poly | - - - - -### 浮世绘风格 - -| ![00564_001_一只猫坐在椅子上,戴着一副墨镜,浮世绘风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00564_001_一只猫坐在椅子上,戴着一副墨镜,浮世绘风格.jpg) | ![00564_002_日落时的城市天际线,浮世绘风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00564_002_日落时的城市天际线,浮世绘风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,浮世绘风格 | 日落时的城市天际线,浮世绘风格 | - -### 矢量心风格 - -| ![00573_001_一只猫坐在椅子上,戴着一副墨镜,矢量心风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00573_001_一只猫坐在椅子上,戴着一副墨镜,矢量心风格.jpg) | ![00573_005_日落时的城市天际线,矢量心风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00573_005_日落时的城市天际线,矢量心风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,矢量心风格 | 日落时的城市天际线,矢量心风格 | - - -### 摩托车手风格 - - -| ![00051_000_一只猫坐在椅子上,戴着一副墨镜,摩托车手风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00051_000_一只猫坐在椅子上,戴着一副墨镜,摩托车手风格.jpg) | 
![日落时的城市天际线,摩托车手风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/日落时的城市天际线,摩托车手风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,摩托车手风格 | 日落时的城市天际线,摩托车手风格 | - - - -### 孟菲斯公司风格 - - -| ![00114_001_一只猫坐在椅子上,戴着一副墨镜,孟菲斯公司风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00114_001_一只猫坐在椅子上,戴着一副墨镜,孟菲斯公司风格.jpg) | ![00114_002_日落时的城市天际线,孟菲斯公司风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00114_002_日落时的城市天际线,孟菲斯公司风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,孟菲斯公司风格 | 日落时的城市天际线,孟菲斯公司风格 | - - -### 泥塑风格 - - -| ![一只猫坐在椅子上,戴着一副墨镜, 泥塑风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/一只猫坐在椅子上戴着一副墨镜泥塑风格.jpg) | ![00013_002_日落时的城市天际线, 泥塑](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00013_002_日落时的城市天际线,泥塑.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜, 泥塑风格 | 日落时的城市天际线, 泥塑风格 | - - - - -### 苔藓风格 - -| ![00006_001_一只猫坐在椅子上,戴着一副墨镜,苔藓风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00006_001_一只猫坐在椅子上,戴着一副墨镜,苔藓风格.jpg) | ![00004_004_日落时的城市天际线,苔藓风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00004_004_日落时的城市天际线,苔藓风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,苔藓风格 | 日落时的城市天际线,苔藓风格 | - - - -### 新浪潮风格 - -| ![00389_000_一只猫坐在椅子上,戴着一副墨镜,新浪潮风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00389_000_一只猫坐在椅子上,戴着一副墨镜,新浪潮风格.jpg) | ![00389_005_日落时的城市天际线,新浪潮风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00389_005_日落时的城市天际线,新浪潮风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,新浪潮风格 | 日落时的城市天际线,新浪潮风格 | - -### 嘻哈风格 - -| ![00274_000_一只猫坐在椅子上,戴着一副墨镜,嘻哈风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00274_000_一只猫坐在椅子上,戴着一副墨镜,嘻哈风格.jpg) | ![00274_005_日落时的城市天际线,嘻哈风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00274_005_日落时的城市天际线,嘻哈风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,嘻哈风格 | 日落时的城市天际线,嘻哈风格 | - -### 矢量图 - -| ![00177_001_一只猫坐在椅子上,戴着一副墨镜, 矢量图](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00177_001_一只猫坐在椅子上戴着一副墨镜矢量图.jpg) | ![00020_002_日落时的城市天际线, 矢量图](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00020_002_日落时的城市天际线矢量图.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜, 矢量图 | 日落时的城市天际线, 矢量图 | - -### 铅笔艺术 - - -| ![00203_000_一只猫坐在椅子上,戴着一副墨镜, 
铅笔艺术](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00203_000_一只猫坐在椅子上戴着一副墨镜铅笔艺术.jpg) | ![00053_000_日落时的城市天际线, 铅笔艺术](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00053_000_日落时的城市天际线铅笔艺术.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜, 铅笔艺术 | 日落时的城市天际线, 铅笔艺术 | - - -### 女巫店风格 - -| ![00606_001_一只猫坐在椅子上,戴着一副墨镜,女巫店风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00606_001_一只猫坐在椅子上,戴着一副墨镜,女巫店风格.jpg) | ![00606_000_日落时的城市天际线,女巫店风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00606_000_日落时的城市天际线,女巫店风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,女巫店风格 | 日落时的城市天际线,女巫店风格 | - - - -### 4D 建模 - - -| ![00230_000_一只猫坐在椅子上,戴着一副墨镜, 4D 建模](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00230_000_一只猫坐在椅子上戴着一副墨镜4D建模.jpg) | ![00082_001_日落时的城市天际线, 4D 建模](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00082_001_日落时的城市天际线4D建模.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜, 4D 建模 | 日落时的城市天际线, 4D 建模 | - - - -### 水彩墨风格 - - -| ![00280_004_一只猫坐在椅子上,戴着一副墨镜, 水彩墨风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00280_004_一只猫坐在椅子上,戴着一副墨镜,水彩墨风格.jpg) | ![00130_004_日落时的城市天际线, 水彩墨风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00130_004_日落时的城市天际线,水彩墨风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜, 水彩墨风格 | 日落时的城市天际线, 水彩墨风格 | - - - -### 酸性精灵风格 - -| ![00001_004_一只猫坐在椅子上,戴着一副墨镜,酸性精灵风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00001_004_一只猫坐在椅子上,戴着一副墨镜,酸性精灵风格.jpg) | ![00001_004_日落时的城市天际线,酸性精灵风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00001_004_日落时的城市天际线,酸性精灵风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,酸性精灵风格 | 日落时的城市天际线,酸性精灵风格 | - - -### 海盗风格 - -| ![00427_002_一只猫坐在椅子上,戴着一副墨镜,海盗风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00427_002_一只猫坐在椅子上,戴着一副墨镜,海盗风格.jpg) | ![00427_000_日落时的城市天际线,海盗风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00427_000_日落时的城市天际线,海盗风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 日落时的城市天际线,海盗风格 | 一只猫坐在椅子上,戴着一副墨镜,海盗风格 | - - - -### 古埃及风格 - - -| ![00017_005_一只猫坐在椅子上,戴着一副墨镜,古埃及风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00017_005_一只猫坐在椅子上,戴着一副墨镜,古埃及风格.jpg) | ![00017_003_日落时的城市天际线,古埃及风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00017_003_日落时的城市天际线,古埃及风格.jpg) | -| ------------------------------------------------------------ | 
------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,古埃及风格 | 日落时的城市天际线,古埃及风格 | - -### 风帽风格 - - -| ![戴着帽子的猫](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/戴着帽子的猫.jpg) | ![戴着帽子的城市](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/戴着帽子的城市.jpg) | -| --------------------------------------------------------- | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,风帽风格 | 日落时的城市天际线,风帽风格 | - -### 装饰艺术风格 - - -| ![00029_000_一只猫坐在椅子上,戴着一副墨镜,装饰艺术风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00029_000_一只猫坐在椅子上,戴着一副墨镜,装饰艺术风格.jpg) | ![00029_005_日落时的城市天际线,装饰艺术风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00029_005_日落时的城市天际线,装饰艺术风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,装饰艺术风格 | 日落时的城市天际线,装饰艺术风格 | - -### 极光风格 - - -| ![00035_004_一只猫坐在椅子上,戴着一副墨镜,极光风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00035_004_一只猫坐在椅子上,戴着一副墨镜,极光风格.jpg) | ![00035_003_日落时的城市天际线,极光风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00035_003_日落时的城市天际线,极光风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,极光风格 | 日落时的城市天际线,极光风格 | - -### 秋天风格 - - -| ![00036_005_一只猫坐在椅子上,戴着一副墨镜,秋天风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00036_005_一只猫坐在椅子上,戴着一副墨镜,秋天风格.jpg) | ![00036_003_日落时的城市天际线,秋天风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00036_003_日落时的城市天际线,秋天风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 日落时的城市天际线,秋天风格 | 一只猫坐在椅子上,戴着一副墨镜,秋天风格 | - -### 巴洛克风格 - - -| ![00046_002_一只猫坐在椅子上,戴着一副墨镜,巴洛克风格风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00046_002_一只猫坐在椅子上,戴着一副墨镜,巴洛克风格风格.jpg) | ![00046_003_日落时的城市天际线,巴洛克风格风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00046_003_日落时的城市天际线,巴洛克风格风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,巴洛克风格 | 日落时的城市天际线,巴洛克风格 | - -### 立体主义风格 - -| ![00128_002_一只猫坐在椅子上,戴着一副墨镜,立体主义风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00128_002_一只猫坐在椅子上,戴着一副墨镜,立体主义风格.jpg) | ![00128_004_日落时的城市天际线,立体主义风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00128_004_日落时的城市天际线,立体主义风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,立体主义风格 | 日落时的城市天际线,立体主义风格 | - - -### 黑暗自然主义风格 - -| ![00147_002_一只猫坐在椅子上,戴着一副墨镜,黑暗自然主义风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00147_002_一只猫坐在椅子上,戴着一副墨镜,黑暗自然主义风格.jpg) | ![00147_004_日落时的城市天际线,黑暗自然主义风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00147_004_日落时的城市天际线,黑暗自然主义风格.jpg) | -| 
------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,黑暗自然主义风格 | 日落时的城市天际线,黑暗自然主义风格 | - -### 表现主义风格 - -| ![00190_001_一只猫坐在椅子上,戴着一副墨镜,表现主义风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00190_001_一只猫坐在椅子上,戴着一副墨镜,表现主义风格.jpg) | ![00190_000_日落时的城市天际线,表现主义风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00190_000_日落时的城市天际线,表现主义风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,表现主义风格 | 日落时的城市天际线,表现主义风格 | - -### 野兽派风格 - -| ![00200_000_一只猫坐在椅子上,戴着一副墨镜,野兽派风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00200_000_一只猫坐在椅子上,戴着一副墨镜,野兽派风格.jpg) | ![00200_002_日落时的城市天际线,野兽派风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00200_002_日落时的城市天际线,野兽派风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,野兽派风格 | 日落时的城市天际线,野兽派风格 | - -### 鬼魂风格 - -| ![00226_001_一只猫坐在椅子上,戴着一副墨镜,鬼魂风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00226_001_一只猫坐在椅子上,戴着一副墨镜,鬼魂风格.jpg) | ![00226_002_日落时的城市天际线,鬼魂风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00226_002_日落时的城市天际线,鬼魂风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,鬼魂风格 | 日落时的城市天际线,鬼魂风格 | - -### 印象主义风格 - -| ![00289_000_一只猫坐在椅子上,戴着一副墨镜,印象主义风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00289_000_一只猫坐在椅子上,戴着一副墨镜,印象主义风格.jpg) | ![00289_001_日落时的城市天际线,印象主义风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00289_001_日落时的城市天际线,印象主义风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,印象主义风格 | 日落时的城市天际线,印象主义风格 | - -### 卡瓦伊风格 - -| ![00305_001_一只猫坐在椅子上,戴着一副墨镜,卡瓦伊风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00305_001_一只猫坐在椅子上,戴着一副墨镜,卡瓦伊风格.jpg) | ![00305_000_日落时的城市天际线,卡瓦伊风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00305_000_日落时的城市天际线,卡瓦伊风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,卡瓦伊风格 | 日落时的城市天际线,卡瓦伊风格 | - -### 极简主义风格 - -| ![00362_004_一只猫坐在椅子上,戴着一副墨镜,极简主义风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00362_004_一只猫坐在椅子上,戴着一副墨镜,极简主义风格.jpg) | ![00362_002_日落时的城市天际线,极简主义风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00362_002_日落时的城市天际线,极简主义风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,极简主义风格 | 日落时的城市天际线,极简主义风格 | - -### 水井惠郎风格 - -| ![00364_000_一只猫坐在椅子上,戴着一副墨镜,水井惠郎风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00364_000_一只猫坐在椅子上,戴着一副墨镜,水井惠郎风格.jpg) | 
![00364_000_日落时的城市天际线,水井惠郎风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00364_000_日落时的城市天际线,水井惠郎风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,水井惠郎风格 | 日落时的城市天际线,水井惠郎风格 | - -### 照片写实风格 - -| ![00423_000_一只猫坐在椅子上,戴着一副墨镜,照片写实风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00423_000_一只猫坐在椅子上,戴着一副墨镜,照片写实风格.jpg) | ![00423_002_日落时的城市天际线,照片写实风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00423_002_日落时的城市天际线,照片写实风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,照片写实风格 | 日落时的城市天际线,照片写实风格 | - - -### 像素可爱风格 - -| ![00428_005_一只猫坐在椅子上,戴着一副墨镜,像素可爱风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00428_005_一只猫坐在椅子上,戴着一副墨镜,像素可爱风格.jpg) | ![00428_005_日落时的城市天际线,像素可爱风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00428_005_日落时的城市天际线,像素可爱风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,像素可爱风格 | 日落时的城市天际线,像素可爱风格 | - - - -### 雨天风格 - -| ![00067_002_一只猫坐在椅子上,戴着一副墨镜,雨天风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00067_002_一只猫坐在椅子上,戴着一副墨镜,雨天风格.jpg) | ![00050_003_日落时的城市天际线,雨天风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00050_003_日落时的城市天际线,雨天风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 日落时的城市天际线,雨天风格 | 一只猫坐在椅子上,戴着一副墨镜,雨天风格 | - -### 湿漉漉的风格 - -| ![00523_005_一只猫坐在椅子上,戴着一副墨镜,湿漉漉的风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00523_005_一只猫坐在椅子上,戴着一副墨镜,湿漉漉的风格.jpg) | ![00523_001_日落时的城市天际线,湿漉漉的风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00523_001_日落时的城市天际线,湿漉漉的风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,湿漉漉的风格 | 日落时的城市天际线,湿漉漉的风格 | - - -### 维京人风格 - -| ![00577_004_一只猫坐在椅子上,戴着一副墨镜,维京人风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00577_004_一只猫坐在椅子上,戴着一副墨镜,维京人风格.jpg) | ![00577_005_日落时的城市天际线,维京人风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00577_005_日落时的城市天际线,维京人风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,维京人风格 | 日落时的城市天际线,维京人风格 | - -### 后印象主义 - - -| ![一只猫坐在椅子上,戴着一副墨镜,风格:后印象主义](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style/一只猫坐在椅子上,戴着一副墨镜,风格:后印象主义.jpg) | ![日落时的城市天际线, 风格:后印象主义-v2](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style/日落时的城市天际线,风格:后印象主义-v2.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,风格:后印象主义 | 日落时的城市天际线, 风格:后印象主义-v2 | - -### 素人主义 - - -| 
![一只猫坐在椅子上,戴着一副墨镜,风格:素人主义](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style/一只猫坐在椅子上,戴着一副墨镜,风格:素人主义.jpg) | ![日落时的城市天际线,风格:素人艺术](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style/日落时的城市天际线,风格:素人艺术.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,风格:素人主义 | 日落时的城市天际线, 风格:素人艺术 | - - - -### 碎核风格 - - -| ![00064_000_一只猫坐在椅子上,戴着一副墨镜,碎核风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00064_000_一只猫坐在椅子上,戴着一副墨镜,碎核风格.jpg) | ![00064_002_日落时的城市天际线,碎核风格](https://raw.githubusercontent.com/OleNet/YouPromptMe/gh-pages/you-prompt-me/images/art-style-1024/00064_002_日落时的城市天际线,碎核风格.jpg) | -| ------------------------------------------------------------ | ------------------------------------------------------------ | -| 一只猫坐在椅子上,戴着一副墨镜,碎核风格 | 日落时的城市天际线,碎核风格 | - - """ - ) - gr.HTML(''' - - ''') - -demo.queue(concurrency_count=80).launch() \ No newline at end of file
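
The prompt recipe from the footer above ("Prompt = [adjective] [object], [details], [styles or artists]") can be captured in a tiny helper. This is purely illustrative and not part of the deleted app; the function name and the sample values are made up for the example.

```python
# Illustrative helper for the "[adjective] [object], [details], [styles or artists]"
# prompt recipe described in the footer above. Not part of the original app.
def build_prompt(adjective, subject, details, modifier):
    # The app's own examples separate the parts with full-width Chinese commas.
    return f"{adjective}{subject},{details},{modifier}"

print(build_prompt("可爱的", "猫", "戴着一副墨镜", "赛博朋克风格"))
# -> 可爱的猫,戴着一副墨镜,赛博朋克风格
```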