diff --git a/spaces/1-13-am/neural-style-transfer/README.md b/spaces/1-13-am/neural-style-transfer/README.md deleted file mode 100644 index 9cb3af6cfdc4eb9efcfc0ad6e916ef546e4629ce..0000000000000000000000000000000000000000 --- a/spaces/1-13-am/neural-style-transfer/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Neural Style Transfer -emoji: 🦀 -colorFrom: pink -colorTo: gray -sdk: gradio -sdk_version: 3.46.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/CASIO Classpad 3.0 [Emulator Crack] Serial Key Troubleshooting and Support for the Emulator.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/CASIO Classpad 3.0 [Emulator Crack] Serial Key Troubleshooting and Support for the Emulator.md deleted file mode 100644 index 7401d81c399f131606fd338afeb0e0328ce7522c..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/CASIO Classpad 3.0 [Emulator Crack] Serial Key Troubleshooting and Support for the Emulator.md +++ /dev/null @@ -1,83 +0,0 @@ - -

CASIO Classpad 3.0 [Emulator Crack] Serial Key

-

Are you looking for a way to use CASIO Classpad 3.0 on your PC without buying the original calculator? Do you want to enjoy the features and benefits of CASIO Classpad 3.0 without spending a lot of money? If yes, then you might be interested in using an emulator with a crack and serial key.

-

CASIO Classpad 3.0 [Emulator Crack] Serial Key


Download File ——— https://byltly.com/2uKvRP



-

An emulator is software that simulates the functions and features of another device or system on your PC. A crack is a file that modifies or bypasses the security features of a program so that it works without limitations or restrictions. A serial key is a code that activates or registers software so that it is treated as valid or authentic.

-

In this article, we will explain what CASIO Classpad 3.0 is and what an emulator is; why you need an emulator for CASIO Classpad 3.0, and how to get and use one; how to get and use a crack and serial key for the CASIO Classpad 3.0 emulator; the risks of using a crack and serial key, and how to avoid or solve the problems they can cause; and finally, the alternatives to using a crack and serial key at all.

-

By the end of this article, you will have a clear understanding of how to use CASIO Classpad 3.0 [Emulator Crack] Serial Key on your PC.

-

How to get CASIO Classpad 3.0 emulator crack for free
-CASIO Classpad 3.0 emulator crack download link
-CASIO Classpad 3.0 emulator crack activation code
-CASIO Classpad 3.0 emulator crack full version
-CASIO Classpad 3.0 emulator crack license key
-CASIO Classpad 3.0 emulator crack torrent
-CASIO Classpad 3.0 emulator crack patch
-CASIO Classpad 3.0 emulator crack keygen
-CASIO Classpad 3.0 emulator crack registration key
-CASIO Classpad 3.0 emulator crack product key
-CASIO Classpad 3.0 emulator crack software
-CASIO Classpad 3.0 emulator crack online
-CASIO Classpad 3.0 emulator crack generator
-CASIO Classpad 3.0 emulator crack no survey
-CASIO Classpad 3.0 emulator crack working
-CASIO Classpad 3.0 emulator crack latest
-CASIO Classpad 3.0 emulator crack updated
-CASIO Classpad 3.0 emulator crack review
-CASIO Classpad 3.0 emulator crack tutorial
-CASIO Classpad 3.0 emulator crack guide
-CASIO Classpad 3.0 emulator crack instructions
-CASIO Classpad 3.0 emulator crack tips
-CASIO Classpad 3.0 emulator crack tricks
-CASIO Classpad 3.0 emulator crack hacks
-CASIO Classpad 3.0 emulator crack cheats
-CASIO Classpad 3.0 emulator crack features
-CASIO Classpad 3.0 emulator crack benefits
-CASIO Classpad 3.0 emulator crack advantages
-CASIO Classpad 3.0 emulator crack disadvantages
-CASIO Classpad 3.0 emulator crack pros and cons
-CASIO Classpad 3.0 emulator crack comparison
-CASIO Classpad 3.0 emulator crack alternatives
-CASIO Classpad 3.0 emulator crack best practices
-CASIO Classpad 3.0 emulator crack requirements
-CASIO Classpad 3.0 emulator crack specifications
-CASIO Classpad 3.0 emulator crack system requirements
-CASIO Classpad 3.0 emulator crack compatibility
-CASIO Classpad 3.0 emulator crack support
-CASIO Classpad 3.0 emulator crack customer service
-CASIO Classpad 3.0 emulator crack feedback
-CASIO Classpad 3.0 emulator crack testimonials
-CASIO Classpad 3.0 emulator crack ratings
-CASIO Classpad 3.0 emulator crack quality
-CASIO Classpad 3.0 emulator crack performance
-CASIO Classpad 3.0 emulator crack reliability
-CASIO Classpad 3.0 emulator crack security
-CASIO Classpad 3.0 emulator crack privacy
-CASIO Classpad 3.0 emulator crack warranty
-CASIO Classpad 3.0 emulator crack refund policy
-CASIO Classpad 3.0 emulator crack discount code

-

What is CASIO Classpad 3.0?

-

CASIO Classpad 3.0 is a powerful piece of software that simulates the functions and features of the CASIO ClassPad 330 calculator on your PC. You can use it for learning, teaching, or doing complex calculations with ease.
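For a sense of the symbolic ("CAS") calculations this class of calculator handles, here is a rough Python analogue using the sympy library — the same style of computation, purely for illustration; it is not the CASIO software itself:

```python
# Illustrative only: sympy mimics the kind of CAS operations (symbolic
# algebra and calculus) that a ClassPad-class calculator performs.
import sympy as sp

x = sp.symbols("x")

print(sp.solve(sp.Eq(x**2 - 5*x + 6, 0), x))    # [2, 3]
print(sp.diff(x * sp.sin(x), x))                # x*cos(x) + sin(x)
print(sp.integrate(sp.exp(-x), (x, 0, sp.oo)))  # 1
```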

-

Some of the features and benefits of CASIO Classpad 3.0 are:

- -

What is an emulator?

-

An emulator is software that simulates the functions and features of another device or system on your PC. For example, you can use an emulator to play games designed for consoles such as PlayStation or Nintendo on your PC.
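To make "simulates another device in software" concrete, here is a deliberately tiny sketch of the core of any emulator — a loop that reads instructions meant for some other machine and reproduces their effect. A real console or calculator emulator is vastly more complex, but the principle is the same:

```python
# A toy emulator: interprets a made-up 3-instruction machine in software.
# Real emulators do the same thing for a real CPU's instruction set.
def run(program):
    acc, pc = 0, 0                       # accumulator and program counter
    while pc < len(program):
        op, arg = program[pc]
        if op == "LOAD":                 # put a constant in the accumulator
            acc = arg
        elif op == "ADD":                # add a constant to the accumulator
            acc += arg
        elif op == "PRINT":              # emulate the device's output
            print(acc)
        pc += 1

run([("LOAD", 40), ("ADD", 2), ("PRINT", None)])  # prints 42
```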

-

There are different types of emulators depending on the device or system they emulate. Some examples are:

- -

Why do you need an emulator for CASIO Classpad 3.0?

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Activehome Pro LINK Download.md b/spaces/1gistliPinn/ChatGPT4/Examples/Activehome Pro LINK Download.md deleted file mode 100644 index 346ac8eb662c55445d74f5460cd9cf97087a116f..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Activehome Pro LINK Download.md +++ /dev/null @@ -1,8 +0,0 @@ -
-

ActiveHome Pro is a powerful home automation system that lets you control all sorts of devices, such as lights, locks, audio systems, and other appliances. It also lets you interact with your home through the web.

-

ActiveHome Pro has a versatile application programming interface (API) that lets you integrate it with other systems. API support for ActiveHome Pro includes:
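For a sense of what driving ActiveHome Pro programmatically can look like: the old ActiveHome Pro SDK exposed a COM automation object on Windows, so a Python integration might look roughly like the sketch below. The ProgID "X10.ActiveHome", the SendAction method, and the "A1" house-code addressing are assumptions based on that SDK, not guarantees about any particular installation:

```python
# Hypothetical sketch: driving ActiveHome Pro through a Windows COM
# automation object, as the old ActiveHome Pro SDK exposed. The ProgID
# "X10.ActiveHome" and the SendAction method are assumptions and may
# differ on your installation.
import win32com.client  # from the pywin32 package

active_home = win32com.client.Dispatch("X10.ActiveHome")

# Ask the X10 module at house code A, unit 1 to switch on, then off.
active_home.SendAction("sendplc", "A1 On")
active_home.SendAction("sendplc", "A1 Off")
```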

-

ActiveHome Pro download


Download 🆗 https://imgfil.com/2uxZ0O



-

ActiveHome acts as a central monitoring station for your home. It monitors the status of your lights and appliances and sends you alerts when it detects activity, and that activity and status history helps you find and resolve service calls. In addition to monitoring device status, ActiveHome also reports the power consumption of each device to help you manage your energy use.

-

ActiveHome Pro will make sure your lights and appliances are off when they should be, but you can also set a schedule so that lights and appliances turn on when no one is home. In addition, ActiveHome Pro keeps track of any malfunctions, so if a light or appliance is not working you will know exactly where to find the problem. You can schedule ActiveHome to turn lights and appliances on while you are away from home, and to turn off any that are still on when you leave.
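Concretely, the "away" scheduling described above boils down to a table of (time, device, action) entries that a controller replays. The following minimal Python sketch is purely illustrative — ActiveHome Pro stores schedules in its own format, which is not shown here:

```python
# Illustrative only: an "away mode" schedule as plain data, replayed by a
# controller loop. ActiveHome Pro keeps schedules in its own format; this
# sketch just shows the underlying idea.
from datetime import time

away_schedule = [
    (time(18, 30), "living-room-lamp", "on"),   # make the house look occupied
    (time(23, 0),  "living-room-lamp", "off"),
    (time(7, 0),   "coffee-maker",     "off"),  # never leave it running
]

def actions_due(now, schedule):
    """Return (device, action) pairs whose trigger time matches `now`."""
    return [(device, action)
            for trigger, device, action in schedule
            if (trigger.hour, trigger.minute) == (now.hour, now.minute)]

print(actions_due(time(18, 30), away_schedule))  # [('living-room-lamp', 'on')]
```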

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Como Eliminar Archivos Duplicados En Tu PC [2020].md b/spaces/1gistliPinn/ChatGPT4/Examples/Como Eliminar Archivos Duplicados En Tu PC [2020].md deleted file mode 100644 index df2e389fb6d5c49af5e57c477271a59c5fe282f9..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Como Eliminar Archivos Duplicados En Tu PC [2020].md +++ /dev/null @@ -1,6 +0,0 @@ -

How to Delete Duplicate Files on Your PC [2020]


Download File 🌟 https://imgfil.com/2uy16g



- -The change applied in version 16 of tu comes from a deprecation introduced so that its code would not have to change, meaning existing code no longer needs to be updated to work with the new version.[2020] In every release from version 17 through version 29 of tu, the same change would fail with an error because it did not delete the duplicate files.
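A minimal, standard way to find duplicate files is to group them by content hash. The sketch below uses only the Python standard library and is an illustration of the technique, not the tool the original article promoted:

```python
# Minimal duplicate-file finder: files with identical SHA-256 digests have
# identical contents. Scans a directory tree and reports duplicate groups.
# (For very large trees you would hash in chunks and pre-filter by size.)
import hashlib
from pathlib import Path
from collections import defaultdict

def find_duplicates(root):
    by_hash = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            by_hash[digest].append(path)
    return [paths for paths in by_hash.values() if len(paths) > 1]

for group in find_duplicates("."):
    print("duplicates:", *group)
```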
-
-
-

diff --git a/spaces/801artistry/RVC801/infer/modules/vc/modules.py b/spaces/801artistry/RVC801/infer/modules/vc/modules.py deleted file mode 100644 index 458cfbe860b23bdd8f07abc2934443e6b8b01c3a..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/infer/modules/vc/modules.py +++ /dev/null @@ -1,526 +0,0 @@ -import os, sys -import traceback -import logging -now_dir = os.getcwd() -sys.path.append(now_dir) -logger = logging.getLogger(__name__) -import lib.globals.globals as rvc_globals -import numpy as np -import soundfile as sf -import torch -from io import BytesIO -from infer.lib.audio import load_audio -from infer.lib.audio import wav2 -from infer.lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) -from infer.modules.vc.pipeline import Pipeline -from infer.modules.vc.utils import * -import time -import scipy.io.wavfile as wavfile - -def note_to_hz(note_name): - SEMITONES = {'C': -9, 'C#': -8, 'D': -7, 'D#': -6, 'E': -5, 'F': -4, 'F#': -3, 'G': -2, 'G#': -1, 'A': 0, 'A#': 1, 'B': 2} - pitch_class, octave = note_name[:-1], int(note_name[-1]) - semitone = SEMITONES[pitch_class] - note_number = 12 * (octave - 4) + semitone - frequency = 440.0 * (2.0 ** (1.0/12)) ** note_number - return frequency - -class VC: - def __init__(self, config): - self.n_spk = None - self.tgt_sr = None - self.net_g = None - self.pipeline = None - self.cpt = None - self.version = None - self.if_f0 = None - self.version = None - self.hubert_model = None - - self.config = config - - def get_vc(self, sid, *to_return_protect): - logger.info("Get sid: " + sid) - - to_return_protect0 = { - "visible": self.if_f0 != 0, - "value": to_return_protect[0] - if self.if_f0 != 0 and to_return_protect - else 0.5, - "__type__": "update", - } - to_return_protect1 = { - "visible": self.if_f0 != 0, - "value": to_return_protect[1] - if self.if_f0 != 0 and to_return_protect - else 0.33, - "__type__": "update", - } - - if not sid: - if self.hubert_model is not None: # 考虑到轮询, 需要加个判断看是否 sid 是由有模型切换到无模型的 - logger.info("Clean model cache") - del ( - self.net_g, - self.n_spk, - self.vc, - self.hubert_model, - self.tgt_sr, - ) # ,cpt - self.hubert_model = ( - self.net_g - ) = self.n_spk = self.vc = self.hubert_model = self.tgt_sr = None - if torch.cuda.is_available(): - torch.cuda.empty_cache() - ###楼下不这么折腾清理不干净 - self.if_f0 = self.cpt.get("f0", 1) - self.version = self.cpt.get("version", "v1") - if self.version == "v1": - if self.if_f0 == 1: - self.net_g = SynthesizerTrnMs256NSFsid( - *self.cpt["config"], is_half=self.config.is_half - ) - else: - self.net_g = SynthesizerTrnMs256NSFsid_nono(*self.cpt["config"]) - elif self.version == "v2": - if self.if_f0 == 1: - self.net_g = SynthesizerTrnMs768NSFsid( - *self.cpt["config"], is_half=self.config.is_half - ) - else: - self.net_g = SynthesizerTrnMs768NSFsid_nono(*self.cpt["config"]) - del self.net_g, self.cpt - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return ( - {"visible": False, "__type__": "update"}, - { - "visible": True, - "value": to_return_protect0, - "__type__": "update", - }, - { - "visible": True, - "value": to_return_protect1, - "__type__": "update", - }, - "", - "", - ) - #person = f'{os.getenv("weight_root")}/{sid}' - person = f'{sid}' - #logger.info(f"Loading: {person}") - logger.info(f"Loading...") - self.cpt = torch.load(person, map_location="cpu") - self.tgt_sr = self.cpt["config"][-1] - self.cpt["config"][-3] = 
self.cpt["weight"]["emb_g.weight"].shape[0] # n_spk - self.if_f0 = self.cpt.get("f0", 1) - self.version = self.cpt.get("version", "v1") - - synthesizer_class = { - ("v1", 1): SynthesizerTrnMs256NSFsid, - ("v1", 0): SynthesizerTrnMs256NSFsid_nono, - ("v2", 1): SynthesizerTrnMs768NSFsid, - ("v2", 0): SynthesizerTrnMs768NSFsid_nono, - } - - self.net_g = synthesizer_class.get( - (self.version, self.if_f0), SynthesizerTrnMs256NSFsid - )(*self.cpt["config"], is_half=self.config.is_half) - - del self.net_g.enc_q - - self.net_g.load_state_dict(self.cpt["weight"], strict=False) - self.net_g.eval().to(self.config.device) - if self.config.is_half: - self.net_g = self.net_g.half() - else: - self.net_g = self.net_g.float() - - self.pipeline = Pipeline(self.tgt_sr, self.config) - n_spk = self.cpt["config"][-3] - index = {"value": get_index_path_from_model(sid), "__type__": "update"} - logger.info("Select index: " + index["value"]) - - return ( - ( - {"visible": False, "maximum": n_spk, "__type__": "update"}, - to_return_protect0, - to_return_protect1 - ) - if to_return_protect - else {"visible": False, "maximum": n_spk, "__type__": "update"} - ) - - - def vc_single( - self, - sid, - input_audio_path0, - input_audio_path1, - f0_up_key, - f0_file, - f0_method, - file_index, - file_index2, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - crepe_hop_length, - f0_min, - note_min, - f0_max, - note_max, - f0_autotune, - ): - global total_time - total_time = 0 - start_time = time.time() - if not input_audio_path0 and not input_audio_path1: - return "You need to upload an audio", None - - if (not os.path.exists(input_audio_path0)) and (not os.path.exists(os.path.join(now_dir, input_audio_path0))): - return "Audio was not properly selected or doesn't exist", None - - input_audio_path1 = input_audio_path1 or input_audio_path0 - print(f"\nStarting inference for '{os.path.basename(input_audio_path1)}'") - print("-------------------") - f0_up_key = int(f0_up_key) - if rvc_globals.NotesOrHertz and f0_method != 'rmvpe': - f0_min = note_to_hz(note_min) if note_min else 50 - f0_max = note_to_hz(note_max) if note_max else 1100 - print(f"Converted Min pitch: freq - {f0_min}\n" - f"Converted Max pitch: freq - {f0_max}") - else: - f0_min = f0_min or 50 - f0_max = f0_max or 1100 - try: - input_audio_path1 = input_audio_path1 or input_audio_path0 - print(f"Attempting to load {input_audio_path1}....") - audio = load_audio(file=input_audio_path1, - sr=16000, - DoFormant=rvc_globals.DoFormant, - Quefrency=rvc_globals.Quefrency, - Timbre=rvc_globals.Timbre) - - audio_max = np.abs(audio).max() / 0.95 - if audio_max > 1: - audio /= audio_max - times = [0, 0, 0] - - if self.hubert_model is None: - self.hubert_model = load_hubert(self.config) - - try: - self.if_f0 = self.cpt.get("f0", 1) - except NameError: - message = "Model was not properly selected" - print(message) - return message, None - - file_index = ( - ( - file_index.strip(" ") - .strip('"') - .strip("\n") - .strip('"') - .strip(" ") - .replace("trained", "added") - ) - if file_index != "" - else file_index2 - ) # 防止小白写错,自动帮他替换掉 - - try: - audio_opt = self.pipeline.pipeline( - self.hubert_model, - self.net_g, - sid, - audio, - input_audio_path1, - times, - f0_up_key, - f0_method, - file_index, - index_rate, - self.if_f0, - filter_radius, - self.tgt_sr, - resample_sr, - rms_mix_rate, - self.version, - protect, - crepe_hop_length, - f0_autotune, - f0_file=f0_file, - f0_min=f0_min, - f0_max=f0_max - ) - except AssertionError: - message = "Mismatching 
index version detected (v1 with v2, or v2 with v1)." - print(message) - return message, None - except NameError: - message = "RVC libraries are still loading. Please try again in a few seconds." - print(message) - return message, None - - if self.tgt_sr != resample_sr >= 16000: - self.tgt_sr = resample_sr - index_info = ( - "Index:\n%s." % file_index - if os.path.exists(file_index) - else "Index not used." - ) - end_time = time.time() - total_time = end_time - start_time - - output_folder = "audio-outputs" - os.makedirs(output_folder, exist_ok=True) - output_filename = "generated_audio_{}.wav" - output_count = 1 - while True: - current_output_path = os.path.join(output_folder, output_filename.format(output_count)) - if not os.path.exists(current_output_path): - break - output_count += 1 - - wavfile.write(current_output_path, self.tgt_sr, audio_opt) - print(f"Generated audio saved to: {current_output_path}") - return f"Success.\n {index_info}\nTime:\n npy:{times[0]}, f0:{times[1]}, infer:{times[2]}\nTotal Time: {total_time} seconds", (self.tgt_sr, audio_opt) - except: - info = traceback.format_exc() - logger.warn(info) - return info, (None, None) - - def vc_single_dont_save( - self, - sid, - input_audio_path0, - input_audio_path1, - f0_up_key, - f0_file, - f0_method, - file_index, - file_index2, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - crepe_hop_length, - f0_min, - note_min, - f0_max, - note_max, - f0_autotune, - ): - global total_time - total_time = 0 - start_time = time.time() - if not input_audio_path0 and not input_audio_path1: - return "You need to upload an audio", None - - if (not os.path.exists(input_audio_path0)) and (not os.path.exists(os.path.join(now_dir, input_audio_path0))): - return "Audio was not properly selected or doesn't exist", None - - input_audio_path1 = input_audio_path1 or input_audio_path0 - print(f"\nStarting inference for '{os.path.basename(input_audio_path1)}'") - print("-------------------") - f0_up_key = int(f0_up_key) - if rvc_globals.NotesOrHertz and f0_method != 'rmvpe': - f0_min = note_to_hz(note_min) if note_min else 50 - f0_max = note_to_hz(note_max) if note_max else 1100 - print(f"Converted Min pitch: freq - {f0_min}\n" - f"Converted Max pitch: freq - {f0_max}") - else: - f0_min = f0_min or 50 - f0_max = f0_max or 1100 - try: - input_audio_path1 = input_audio_path1 or input_audio_path0 - print(f"Attempting to load {input_audio_path1}....") - audio = load_audio(file=input_audio_path1, - sr=16000, - DoFormant=rvc_globals.DoFormant, - Quefrency=rvc_globals.Quefrency, - Timbre=rvc_globals.Timbre) - - audio_max = np.abs(audio).max() / 0.95 - if audio_max > 1: - audio /= audio_max - times = [0, 0, 0] - - if self.hubert_model is None: - self.hubert_model = load_hubert(self.config) - - try: - self.if_f0 = self.cpt.get("f0", 1) - except NameError: - message = "Model was not properly selected" - print(message) - return message, None - - file_index = ( - ( - file_index.strip(" ") - .strip('"') - .strip("\n") - .strip('"') - .strip(" ") - .replace("trained", "added") - ) - if file_index != "" - else file_index2 - ) # 防止小白写错,自动帮他替换掉 - - try: - audio_opt = self.pipeline.pipeline( - self.hubert_model, - self.net_g, - sid, - audio, - input_audio_path1, - times, - f0_up_key, - f0_method, - file_index, - index_rate, - self.if_f0, - filter_radius, - self.tgt_sr, - resample_sr, - rms_mix_rate, - self.version, - protect, - crepe_hop_length, - f0_autotune, - f0_file=f0_file, - f0_min=f0_min, - f0_max=f0_max - ) - except AssertionError: - 
message = "Mismatching index version detected (v1 with v2, or v2 with v1)." - print(message) - return message, None - except NameError: - message = "RVC libraries are still loading. Please try again in a few seconds." - print(message) - return message, None - - if self.tgt_sr != resample_sr >= 16000: - self.tgt_sr = resample_sr - index_info = ( - "Index:\n%s." % file_index - if os.path.exists(file_index) - else "Index not used." - ) - end_time = time.time() - total_time = end_time - start_time - - return f"Success.\n {index_info}\nTime:\n npy:{times[0]}, f0:{times[1]}, infer:{times[2]}\nTotal Time: {total_time} seconds", (self.tgt_sr, audio_opt) - except: - info = traceback.format_exc() - logger.warn(info) - return info, (None, None) - - - def vc_multi( - self, - sid, - dir_path, - opt_root, - paths, - f0_up_key, - f0_method, - file_index, - file_index2, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - format1, - crepe_hop_length, - f0_min, - note_min, - f0_max, - note_max, - f0_autotune, - ): - if rvc_globals.NotesOrHertz and f0_method != 'rmvpe': - f0_min = note_to_hz(note_min) if note_min else 50 - f0_max = note_to_hz(note_max) if note_max else 1100 - print(f"Converted Min pitch: freq - {f0_min}\n" - f"Converted Max pitch: freq - {f0_max}") - else: - f0_min = f0_min or 50 - f0_max = f0_max or 1100 - try: - dir_path = ( - dir_path.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) # 防止小白拷路径头尾带了空格和"和回车 - opt_root = opt_root.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - os.makedirs(opt_root, exist_ok=True) - try: - if dir_path != "": - paths = [ - os.path.join(dir_path, name) for name in os.listdir(dir_path) - ] - else: - paths = [path.name for path in paths] - except: - traceback.print_exc() - paths = [path.name for path in paths] - infos = [] - for path in paths: - info, opt = self.vc_single( - sid, - path, - f0_up_key, - None, - f0_method, - file_index, - file_index2, - # file_big_npy, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - ) - if "Success" in info: - try: - tgt_sr, audio_opt = opt - if format1 in ["wav", "flac"]: - sf.write( - "%s/%s.%s" - % (opt_root, os.path.basename(path), format1), - audio_opt, - tgt_sr, - ) - else: - path = "%s/%s.%s" % (opt_root, os.path.basename(path), format1) - with BytesIO() as wavf: - sf.write( - wavf, - audio_opt, - tgt_sr, - format="wav" - ) - wavf.seek(0, 0) - with open(path, "wb") as outf: - wav2(wavf, outf, format1) - except: - info += traceback.format_exc() - infos.append("%s->%s" % (os.path.basename(path), info)) - yield "\n".join(infos) - yield "\n".join(infos) - except: - yield traceback.format_exc() diff --git a/spaces/AIConsultant/MusicGen/tests/common_utils/__init__.py b/spaces/AIConsultant/MusicGen/tests/common_utils/__init__.py deleted file mode 100644 index 74ffcfef96fec35c99b2a1a053a61f44f7a8bbe9..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/tests/common_utils/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -# flake8: noqa -from .temp_utils import TempDirMixin -from .wav_utils import get_batch_white_noise, get_white_noise, save_wav diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/GenerSpeech/model/generspeech.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/GenerSpeech/model/generspeech.py deleted file mode 100644 index d6ee09c417ae8ac8ce6e3f02a60eea36f4f4ba05..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/GenerSpeech/model/generspeech.py +++ /dev/null @@ -1,260 +0,0 @@ -import torch -from modules.GenerSpeech.model.glow_modules import Glow -from modules.fastspeech.tts_modules import PitchPredictor -import random -from modules.GenerSpeech.model.prosody_util import ProsodyAligner, LocalStyleAdaptor -from utils.pitch_utils import f0_to_coarse, denorm_f0 -from modules.commons.common_layers import * -import torch.distributions as dist -from utils.hparams import hparams -from modules.GenerSpeech.model.mixstyle import MixStyle -from modules.fastspeech.fs2 import FastSpeech2 -import json -from modules.fastspeech.tts_modules import DEFAULT_MAX_SOURCE_POSITIONS, DEFAULT_MAX_TARGET_POSITIONS - -class GenerSpeech(FastSpeech2): - ''' - GenerSpeech: Towards Style Transfer for Generalizable Out-Of-Domain Text-to-Speech - https://arxiv.org/abs/2205.07211 - ''' - def __init__(self, dictionary, out_dims=None): - super().__init__(dictionary, out_dims) - - # Mixstyle - self.norm = MixStyle(p=0.5, alpha=0.1, eps=1e-6, hidden_size=self.hidden_size) - - # emotion embedding - self.emo_embed_proj = Linear(256, self.hidden_size, bias=True) - - # build prosody extractor - ## frame level - self.prosody_extractor_utter = LocalStyleAdaptor(self.hidden_size, hparams['nVQ'], self.padding_idx) - self.l1_utter = nn.Linear(self.hidden_size * 2, self.hidden_size) - self.align_utter = ProsodyAligner(num_layers=2) - - ## phoneme level - self.prosody_extractor_ph = LocalStyleAdaptor(self.hidden_size, hparams['nVQ'], self.padding_idx) - self.l1_ph = nn.Linear(self.hidden_size * 2, self.hidden_size) - self.align_ph = ProsodyAligner(num_layers=2) - - ## word level - self.prosody_extractor_word = LocalStyleAdaptor(self.hidden_size, hparams['nVQ'], self.padding_idx) - self.l1_word = nn.Linear(self.hidden_size * 2, self.hidden_size) - self.align_word = ProsodyAligner(num_layers=2) - - self.pitch_inpainter_predictor = PitchPredictor( - self.hidden_size, n_chans=self.hidden_size, - n_layers=3, dropout_rate=0.1, odim=2, - padding=hparams['ffn_padding'], kernel_size=hparams['predictor_kernel']) - - # build attention layer - self.max_source_positions = DEFAULT_MAX_SOURCE_POSITIONS - self.embed_positions = SinusoidalPositionalEmbedding( - self.hidden_size, self.padding_idx, - init_size=self.max_source_positions + self.padding_idx + 1, - ) - - # build post flow - cond_hs = 80 - if hparams.get('use_txt_cond', True): - cond_hs = cond_hs + hparams['hidden_size'] - - cond_hs = cond_hs + hparams['hidden_size'] * 3 # for emo, spk embedding and prosody embedding - self.post_flow = Glow( - 80, hparams['post_glow_hidden'], hparams['post_glow_kernel_size'], 1, - hparams['post_glow_n_blocks'], hparams['post_glow_n_block_layers'], - n_split=4, n_sqz=2, - gin_channels=cond_hs, - share_cond_layers=hparams['post_share_cond_layers'], - share_wn_layers=hparams['share_wn_layers'], - sigmoid_scale=hparams['sigmoid_scale'] - ) - self.prior_dist = dist.Normal(0, 1) - - - def forward(self, txt_tokens, mel2ph=None, ref_mel2ph=None, ref_mel2word=None, spk_embed=None, emo_embed=None, ref_mels=None, - f0=None, 
uv=None, skip_decoder=False, global_steps=0, infer=False, **kwargs): - ret = {} - encoder_out = self.encoder(txt_tokens) # [B, T, C] - src_nonpadding = (txt_tokens > 0).float()[:, :, None] - - # add spk/emo embed - spk_embed = self.spk_embed_proj(spk_embed)[:, None, :] - emo_embed = self.emo_embed_proj(emo_embed)[:, None, :] - - - # add dur - dur_inp = (encoder_out + spk_embed + emo_embed) * src_nonpadding - mel2ph = self.add_dur(dur_inp, mel2ph, txt_tokens, ret) - tgt_nonpadding = (mel2ph > 0).float()[:, :, None] - decoder_inp = self.expand_states(encoder_out, mel2ph) - decoder_inp = self.norm(decoder_inp, spk_embed + emo_embed) - - # add prosody VQ - ret['ref_mel2ph'] = ref_mel2ph - ret['ref_mel2word'] = ref_mel2word - prosody_utter_mel = self.get_prosody_utter(decoder_inp, ref_mels, ret, infer, global_steps) - prosody_ph_mel = self.get_prosody_ph(decoder_inp, ref_mels, ret, infer, global_steps) - prosody_word_mel = self.get_prosody_word(decoder_inp, ref_mels, ret, infer, global_steps) - - # add pitch embed - pitch_inp_domain_agnostic = decoder_inp * tgt_nonpadding - pitch_inp_domain_specific = (decoder_inp + spk_embed + emo_embed + prosody_utter_mel + prosody_ph_mel + prosody_word_mel) * tgt_nonpadding - predicted_pitch = self.inpaint_pitch(pitch_inp_domain_agnostic, pitch_inp_domain_specific, f0, uv, mel2ph, ret) - - # decode - decoder_inp = decoder_inp + spk_embed + emo_embed + predicted_pitch + prosody_utter_mel + prosody_ph_mel + prosody_word_mel - ret['decoder_inp'] = decoder_inp = decoder_inp * tgt_nonpadding - if skip_decoder: - return ret - ret['mel_out'] = self.run_decoder(decoder_inp, tgt_nonpadding, ret, infer=infer, **kwargs) - - # postflow - is_training = self.training - ret['x_mask'] = tgt_nonpadding - ret['spk_embed'] = spk_embed - ret['emo_embed'] = emo_embed - ret['ref_prosody'] = prosody_utter_mel + prosody_ph_mel + prosody_word_mel - self.run_post_glow(ref_mels, infer, is_training, ret) - return ret - - def get_prosody_ph(self, encoder_out, ref_mels, ret, infer=False, global_steps=0): - # get VQ prosody - if global_steps > hparams['vq_start'] or infer: - prosody_embedding, loss, ppl = self.prosody_extractor_ph(ref_mels, ret['ref_mel2ph'], no_vq=False) - ret['vq_loss_ph'] = loss - ret['ppl_ph'] = ppl - else: - prosody_embedding = self.prosody_extractor_ph(ref_mels, ret['ref_mel2ph'], no_vq=True) - - # add positional embedding - positions = self.embed_positions(prosody_embedding[:, :, 0]) - prosody_embedding = self.l1_ph(torch.cat([prosody_embedding, positions], dim=-1)) - - - # style-to-content attention - src_key_padding_mask = encoder_out[:, :, 0].eq(self.padding_idx).data - prosody_key_padding_mask = prosody_embedding[:, :, 0].eq(self.padding_idx).data - if global_steps < hparams['forcing']: - output, guided_loss, attn_emo = self.align_ph(encoder_out.transpose(0, 1), prosody_embedding.transpose(0, 1), - src_key_padding_mask, prosody_key_padding_mask, forcing=True) - else: - output, guided_loss, attn_emo = self.align_ph(encoder_out.transpose(0, 1), prosody_embedding.transpose(0, 1), - src_key_padding_mask, prosody_key_padding_mask, forcing=False) - - ret['gloss_ph'] = guided_loss - ret['attn_ph'] = attn_emo - return output.transpose(0, 1) - - def get_prosody_word(self, encoder_out, ref_mels, ret, infer=False, global_steps=0): - # get VQ prosody - if global_steps > hparams['vq_start'] or infer: - prosody_embedding, loss, ppl = self.prosody_extractor_word(ref_mels, ret['ref_mel2word'], no_vq=False) - ret['vq_loss_word'] = loss - ret['ppl_word'] = ppl - else: - 
prosody_embedding = self.prosody_extractor_word(ref_mels, ret['ref_mel2word'], no_vq=True) - - # add positional embedding - positions = self.embed_positions(prosody_embedding[:, :, 0]) - prosody_embedding = self.l1_word(torch.cat([prosody_embedding, positions], dim=-1)) - - - # style-to-content attention - src_key_padding_mask = encoder_out[:, :, 0].eq(self.padding_idx).data - prosody_key_padding_mask = prosody_embedding[:, :, 0].eq(self.padding_idx).data - if global_steps < hparams['forcing']: - output, guided_loss, attn_emo = self.align_word(encoder_out.transpose(0, 1), prosody_embedding.transpose(0, 1), - src_key_padding_mask, prosody_key_padding_mask, forcing=True) - else: - output, guided_loss, attn_emo = self.align_word(encoder_out.transpose(0, 1), prosody_embedding.transpose(0, 1), - src_key_padding_mask, prosody_key_padding_mask, forcing=False) - ret['gloss_word'] = guided_loss - ret['attn_word'] = attn_emo - return output.transpose(0, 1) - - def get_prosody_utter(self, encoder_out, ref_mels, ret, infer=False, global_steps=0): - # get VQ prosody - if global_steps > hparams['vq_start'] or infer: - prosody_embedding, loss, ppl = self.prosody_extractor_utter(ref_mels, no_vq=False) - ret['vq_loss_utter'] = loss - ret['ppl_utter'] = ppl - else: - prosody_embedding = self.prosody_extractor_utter(ref_mels, no_vq=True) - - # add positional embedding - positions = self.embed_positions(prosody_embedding[:, :, 0]) - prosody_embedding = self.l1_utter(torch.cat([prosody_embedding, positions], dim=-1)) - - - # style-to-content attention - src_key_padding_mask = encoder_out[:, :, 0].eq(self.padding_idx).data - prosody_key_padding_mask = prosody_embedding[:, :, 0].eq(self.padding_idx).data - if global_steps < hparams['forcing']: - output, guided_loss, attn_emo = self.align_utter(encoder_out.transpose(0, 1), prosody_embedding.transpose(0, 1), - src_key_padding_mask, prosody_key_padding_mask, forcing=True) - else: - output, guided_loss, attn_emo = self.align_utter(encoder_out.transpose(0, 1), prosody_embedding.transpose(0, 1), - src_key_padding_mask, prosody_key_padding_mask, forcing=False) - ret['gloss_utter'] = guided_loss - ret['attn_utter'] = attn_emo - return output.transpose(0, 1) - - - - def inpaint_pitch(self, pitch_inp_domain_agnostic, pitch_inp_domain_specific, f0, uv, mel2ph, ret): - if hparams['pitch_type'] == 'frame': - pitch_padding = mel2ph == 0 - if hparams['predictor_grad'] != 1: - pitch_inp_domain_agnostic = pitch_inp_domain_agnostic.detach() + hparams['predictor_grad'] * (pitch_inp_domain_agnostic - pitch_inp_domain_agnostic.detach()) - pitch_inp_domain_specific = pitch_inp_domain_specific.detach() + hparams['predictor_grad'] * (pitch_inp_domain_specific - pitch_inp_domain_specific.detach()) - - pitch_domain_agnostic = self.pitch_predictor(pitch_inp_domain_agnostic) - pitch_domain_specific = self.pitch_inpainter_predictor(pitch_inp_domain_specific) - pitch_pred = pitch_domain_agnostic + pitch_domain_specific - ret['pitch_pred'] = pitch_pred - - use_uv = hparams['pitch_type'] == 'frame' and hparams['use_uv'] - if f0 is None: - f0 = pitch_pred[:, :, 0] # [B, T] - if use_uv: - uv = pitch_pred[:, :, 1] > 0 # [B, T] - f0_denorm = denorm_f0(f0, uv if use_uv else None, hparams, pitch_padding=pitch_padding) - pitch = f0_to_coarse(f0_denorm) # start from 0 [B, T_txt] - ret['f0_denorm'] = f0_denorm - ret['f0_denorm_pred'] = denorm_f0(pitch_pred[:, :, 0], (pitch_pred[:, :, 1] > 0) if use_uv else None, hparams, pitch_padding=pitch_padding) - if hparams['pitch_type'] == 'ph': - pitch = 
torch.gather(F.pad(pitch, [1, 0]), 1, mel2ph) - ret['f0_denorm'] = torch.gather(F.pad(ret['f0_denorm'], [1, 0]), 1, mel2ph) - ret['f0_denorm_pred'] = torch.gather(F.pad(ret['f0_denorm_pred'], [1, 0]), 1, mel2ph) - pitch_embed = self.pitch_embed(pitch) - return pitch_embed - - def run_post_glow(self, tgt_mels, infer, is_training, ret): - x_recon = ret['mel_out'].transpose(1, 2) - g = x_recon - B, _, T = g.shape - if hparams.get('use_txt_cond', True): - g = torch.cat([g, ret['decoder_inp'].transpose(1, 2)], 1) - g_spk_embed = ret['spk_embed'].repeat(1, T, 1).transpose(1, 2) - g_emo_embed = ret['emo_embed'].repeat(1, T, 1).transpose(1, 2) - l_ref_prosody = ret['ref_prosody'].transpose(1, 2) - g = torch.cat([g, g_spk_embed, g_emo_embed, l_ref_prosody], dim=1) - prior_dist = self.prior_dist - if not infer: - if is_training: - self.train() - x_mask = ret['x_mask'].transpose(1, 2) - y_lengths = x_mask.sum(-1) - g = g.detach() - tgt_mels = tgt_mels.transpose(1, 2) - z_postflow, ldj = self.post_flow(tgt_mels, x_mask, g=g) - ldj = ldj / y_lengths / 80 - ret['z_pf'], ret['ldj_pf'] = z_postflow, ldj - ret['postflow'] = -prior_dist.log_prob(z_postflow).mean() - ldj.mean() - else: - x_mask = torch.ones_like(x_recon[:, :1, :]) - z_post = prior_dist.sample(x_recon.shape).to(g.device) * hparams['noise_scale'] - x_recon_, _ = self.post_flow(z_post, x_mask, g, reverse=True) - x_recon = x_recon_ - ret['mel_out'] = x_recon.transpose(1, 2) \ No newline at end of file diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/diff/candidate_decoder.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/diff/candidate_decoder.py deleted file mode 100644 index 133a51a61942027c255841e2638e296238c07a30..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/diff/candidate_decoder.py +++ /dev/null @@ -1,96 +0,0 @@ -from modules.fastspeech.tts_modules import FastspeechDecoder -# from modules.fastspeech.fast_tacotron import DecoderRNN -# from modules.fastspeech.speedy_speech.speedy_speech import ConvBlocks -# from modules.fastspeech.conformer.conformer import ConformerDecoder -import torch -from torch.nn import functional as F -import torch.nn as nn -import math -from utils.hparams import hparams -from .diffusion import Mish -Linear = nn.Linear - - -class SinusoidalPosEmb(nn.Module): - def __init__(self, dim): - super().__init__() - self.dim = dim - - def forward(self, x): - device = x.device - half_dim = self.dim // 2 - emb = math.log(10000) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, device=device) * -emb) - emb = x[:, None] * emb[None, :] - emb = torch.cat((emb.sin(), emb.cos()), dim=-1) - return emb - - -def Conv1d(*args, **kwargs): - layer = nn.Conv1d(*args, **kwargs) - nn.init.kaiming_normal_(layer.weight) - return layer - - -class FFT(FastspeechDecoder): - def __init__(self, hidden_size=None, num_layers=None, kernel_size=None, num_heads=None): - super().__init__(hidden_size, num_layers, kernel_size, num_heads=num_heads) - dim = hparams['residual_channels'] - self.input_projection = Conv1d(hparams['audio_num_mel_bins'], dim, 1) - self.diffusion_embedding = SinusoidalPosEmb(dim) - self.mlp = nn.Sequential( - nn.Linear(dim, dim * 4), - Mish(), - nn.Linear(dim * 4, dim) - ) - self.get_mel_out = Linear(hparams['hidden_size'], 80, bias=True) - self.get_decode_inp = Linear(hparams['hidden_size'] + dim + dim, - hparams['hidden_size']) # hs + dim + 80 -> hs - - def forward(self, spec, diffusion_step, cond, padding_mask=None, attn_mask=None, return_hiddens=False): - """ - 
:param spec: [B, 1, 80, T] - :param diffusion_step: [B, 1] - :param cond: [B, M, T] - :return: - """ - x = spec[:, 0] - x = self.input_projection(x).permute([0, 2, 1]) # [B, T, residual_channel] - diffusion_step = self.diffusion_embedding(diffusion_step) - diffusion_step = self.mlp(diffusion_step) # [B, dim] - cond = cond.permute([0, 2, 1]) # [B, T, M] - - seq_len = cond.shape[1] # [T_mel] - time_embed = diffusion_step[:, None, :] # [B, 1, dim] - time_embed = time_embed.repeat([1, seq_len, 1]) # # [B, T, dim] - - decoder_inp = torch.cat([x, cond, time_embed], dim=-1) # [B, T, dim + H + dim] - decoder_inp = self.get_decode_inp(decoder_inp) # [B, T, H] - x = decoder_inp - - ''' - Required x: [B, T, C] - :return: [B, T, C] or [L, B, T, C] - ''' - padding_mask = x.abs().sum(-1).eq(0).data if padding_mask is None else padding_mask - nonpadding_mask_TB = 1 - padding_mask.transpose(0, 1).float()[:, :, None] # [T, B, 1] - if self.use_pos_embed: - positions = self.pos_embed_alpha * self.embed_positions(x[..., 0]) - x = x + positions - x = F.dropout(x, p=self.dropout, training=self.training) - # B x T x C -> T x B x C - x = x.transpose(0, 1) * nonpadding_mask_TB - hiddens = [] - for layer in self.layers: - x = layer(x, encoder_padding_mask=padding_mask, attn_mask=attn_mask) * nonpadding_mask_TB - hiddens.append(x) - if self.use_last_norm: - x = self.layer_norm(x) * nonpadding_mask_TB - if return_hiddens: - x = torch.stack(hiddens, 0) # [L, T, B, C] - x = x.transpose(1, 2) # [L, B, T, C] - else: - x = x.transpose(0, 1) # [B, T, C] - - x = self.get_mel_out(x).permute([0, 2, 1]) # [B, 80, T] - return x[:, None, :, :] \ No newline at end of file diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/parallel_wavegan/layers/upsample.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/parallel_wavegan/layers/upsample.py deleted file mode 100644 index 18c6397c420a81fadc5320e3a48f3249534decd8..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/parallel_wavegan/layers/upsample.py +++ /dev/null @@ -1,183 +0,0 @@ -# -*- coding: utf-8 -*- - -"""Upsampling module. - -This code is modified from https://github.com/r9y9/wavenet_vocoder. - -""" - -import numpy as np -import torch -import torch.nn.functional as F - -from . import Conv1d - - -class Stretch2d(torch.nn.Module): - """Stretch2d module.""" - - def __init__(self, x_scale, y_scale, mode="nearest"): - """Initialize Stretch2d module. - - Args: - x_scale (int): X scaling factor (Time axis in spectrogram). - y_scale (int): Y scaling factor (Frequency axis in spectrogram). - mode (str): Interpolation mode. - - """ - super(Stretch2d, self).__init__() - self.x_scale = x_scale - self.y_scale = y_scale - self.mode = mode - - def forward(self, x): - """Calculate forward propagation. - - Args: - x (Tensor): Input tensor (B, C, F, T). - - Returns: - Tensor: Interpolated tensor (B, C, F * y_scale, T * x_scale), - - """ - return F.interpolate( - x, scale_factor=(self.y_scale, self.x_scale), mode=self.mode) - - -class Conv2d(torch.nn.Conv2d): - """Conv2d module with customized initialization.""" - - def __init__(self, *args, **kwargs): - """Initialize Conv2d module.""" - super(Conv2d, self).__init__(*args, **kwargs) - - def reset_parameters(self): - """Reset parameters.""" - self.weight.data.fill_(1. 
/ np.prod(self.kernel_size)) - if self.bias is not None: - torch.nn.init.constant_(self.bias, 0.0) - - -class UpsampleNetwork(torch.nn.Module): - """Upsampling network module.""" - - def __init__(self, - upsample_scales, - nonlinear_activation=None, - nonlinear_activation_params={}, - interpolate_mode="nearest", - freq_axis_kernel_size=1, - use_causal_conv=False, - ): - """Initialize upsampling network module. - - Args: - upsample_scales (list): List of upsampling scales. - nonlinear_activation (str): Activation function name. - nonlinear_activation_params (dict): Arguments for specified activation function. - interpolate_mode (str): Interpolation mode. - freq_axis_kernel_size (int): Kernel size in the direction of frequency axis. - - """ - super(UpsampleNetwork, self).__init__() - self.use_causal_conv = use_causal_conv - self.up_layers = torch.nn.ModuleList() - for scale in upsample_scales: - # interpolation layer - stretch = Stretch2d(scale, 1, interpolate_mode) - self.up_layers += [stretch] - - # conv layer - assert (freq_axis_kernel_size - 1) % 2 == 0, "Not support even number freq axis kernel size." - freq_axis_padding = (freq_axis_kernel_size - 1) // 2 - kernel_size = (freq_axis_kernel_size, scale * 2 + 1) - if use_causal_conv: - padding = (freq_axis_padding, scale * 2) - else: - padding = (freq_axis_padding, scale) - conv = Conv2d(1, 1, kernel_size=kernel_size, padding=padding, bias=False) - self.up_layers += [conv] - - # nonlinear - if nonlinear_activation is not None: - nonlinear = getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params) - self.up_layers += [nonlinear] - - def forward(self, c): - """Calculate forward propagation. - - Args: - c : Input tensor (B, C, T). - - Returns: - Tensor: Upsampled tensor (B, C, T'), where T' = T * prod(upsample_scales). - - """ - c = c.unsqueeze(1) # (B, 1, C, T) - for f in self.up_layers: - if self.use_causal_conv and isinstance(f, Conv2d): - c = f(c)[..., :c.size(-1)] - else: - c = f(c) - return c.squeeze(1) # (B, C, T') - - -class ConvInUpsampleNetwork(torch.nn.Module): - """Convolution + upsampling network module.""" - - def __init__(self, - upsample_scales, - nonlinear_activation=None, - nonlinear_activation_params={}, - interpolate_mode="nearest", - freq_axis_kernel_size=1, - aux_channels=80, - aux_context_window=0, - use_causal_conv=False - ): - """Initialize convolution + upsampling network module. - - Args: - upsample_scales (list): List of upsampling scales. - nonlinear_activation (str): Activation function name. - nonlinear_activation_params (dict): Arguments for specified activation function. - mode (str): Interpolation mode. - freq_axis_kernel_size (int): Kernel size in the direction of frequency axis. - aux_channels (int): Number of channels of pre-convolutional layer. - aux_context_window (int): Context window size of the pre-convolutional layer. - use_causal_conv (bool): Whether to use causal structure. 
- - """ - super(ConvInUpsampleNetwork, self).__init__() - self.aux_context_window = aux_context_window - self.use_causal_conv = use_causal_conv and aux_context_window > 0 - # To capture wide-context information in conditional features - kernel_size = aux_context_window + 1 if use_causal_conv else 2 * aux_context_window + 1 - # NOTE(kan-bayashi): Here do not use padding because the input is already padded - self.conv_in = Conv1d(aux_channels, aux_channels, kernel_size=kernel_size, bias=False) - self.upsample = UpsampleNetwork( - upsample_scales=upsample_scales, - nonlinear_activation=nonlinear_activation, - nonlinear_activation_params=nonlinear_activation_params, - interpolate_mode=interpolate_mode, - freq_axis_kernel_size=freq_axis_kernel_size, - use_causal_conv=use_causal_conv, - ) - - def forward(self, c): - """Calculate forward propagation. - - Args: - c : Input tensor (B, C, T'). - - Returns: - Tensor: Upsampled tensor (B, C, T), - where T = (T' - aux_context_window * 2) * prod(upsample_scales). - - Note: - The length of inputs considers the context window size. - - """ - c_ = self.conv_in(c) - c = c_[:, :, :-self.aux_context_window] if self.use_causal_conv else c_ - return self.upsample(c) diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/svs/task.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/svs/task.py deleted file mode 100644 index 896970e5318071d0d406f3d2378462cb77356925..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/svs/task.py +++ /dev/null @@ -1,84 +0,0 @@ -import torch - -import utils -from modules.diff.diffusion import GaussianDiffusion -from modules.diff.net import DiffNet -from tasks.tts.fs2 import FastSpeech2Task -from utils.hparams import hparams - - -DIFF_DECODERS = { - 'wavenet': lambda hp: DiffNet(hp['audio_num_mel_bins']), -} - - -class DiffFsTask(FastSpeech2Task): - def build_tts_model(self): - mel_bins = hparams['audio_num_mel_bins'] - self.model = GaussianDiffusion( - phone_encoder=self.phone_encoder, - out_dims=mel_bins, denoise_fn=DIFF_DECODERS[hparams['diff_decoder_type']](hparams), - timesteps=hparams['timesteps'], - loss_type=hparams['diff_loss_type'], - spec_min=hparams['spec_min'], spec_max=hparams['spec_max'], - ) - - def run_model(self, model, sample, return_output=False, infer=False): - txt_tokens = sample['txt_tokens'] # [B, T_t] - target = sample['mels'] # [B, T_s, 80] - mel2ph = sample['mel2ph'] # [B, T_s] - f0 = sample['f0'] - uv = sample['uv'] - energy = sample['energy'] - spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids') - if hparams['pitch_type'] == 'cwt': - cwt_spec = sample[f'cwt_spec'] - f0_mean = sample['f0_mean'] - f0_std = sample['f0_std'] - sample['f0_cwt'] = f0 = model.cwt2f0_norm(cwt_spec, f0_mean, f0_std, mel2ph) - - output = model(txt_tokens, mel2ph=mel2ph, spk_embed=spk_embed, - ref_mels=target, f0=f0, uv=uv, energy=energy, infer=infer) - - losses = {} - if 'diff_loss' in output: - losses['mel'] = output['diff_loss'] - self.add_dur_loss(output['dur'], mel2ph, txt_tokens, losses=losses) - if hparams['use_pitch_embed']: - self.add_pitch_loss(output, sample, losses) - if hparams['use_energy_embed']: - self.add_energy_loss(output['energy_pred'], energy, losses) - if not return_output: - return losses - else: - return losses, output - - def _training_step(self, sample, batch_idx, _): - log_outputs = self.run_model(self.model, sample) - total_loss = sum([v for v in log_outputs.values() if isinstance(v, torch.Tensor) and v.requires_grad]) - 
log_outputs['batch_size'] = sample['txt_tokens'].size()[0] - log_outputs['lr'] = self.scheduler.get_lr()[0] - return total_loss, log_outputs - - def validation_step(self, sample, batch_idx): - outputs = {} - outputs['losses'] = {} - outputs['losses'], model_out = self.run_model(self.model, sample, return_output=True, infer=False) - outputs['total_loss'] = sum(outputs['losses'].values()) - outputs['nsamples'] = sample['nsamples'] - outputs = utils.tensors_to_scalars(outputs) - if batch_idx < hparams['num_valid_plots']: - _, model_out = self.run_model(self.model, sample, return_output=True, infer=True) - self.plot_mel(batch_idx, sample['mels'], model_out['mel_out']) - return outputs - - def build_scheduler(self, optimizer): - return torch.optim.lr_scheduler.StepLR(optimizer, hparams['decay_steps'], gamma=0.5) - - def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx): - if optimizer is None: - return - optimizer.step() - optimizer.zero_grad() - if self.scheduler is not None: - self.scheduler.step(self.global_step // hparams['accumulate_grad_batches']) diff --git a/spaces/AIWaves/SOP_Generation-single/Action/__init__.py b/spaces/AIWaves/SOP_Generation-single/Action/__init__.py deleted file mode 100644 index bb85ebbfc6ae1d83770263a1744fe14cb687931d..0000000000000000000000000000000000000000 --- a/spaces/AIWaves/SOP_Generation-single/Action/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .base_action import Action \ No newline at end of file diff --git a/spaces/AIWaves/SOP_Generation-single/Component/ToolComponent.py b/spaces/AIWaves/SOP_Generation-single/Component/ToolComponent.py deleted file mode 100644 index 95da2abdb7e8b7b5283763587f23ecc29e8ec35f..0000000000000000000000000000000000000000 --- a/spaces/AIWaves/SOP_Generation-single/Component/ToolComponent.py +++ /dev/null @@ -1,887 +0,0 @@ -from abc import abstractmethod -import uuid -from text2vec import semantic_search -from utils import ( - get_relevant_history, - load_knowledge_base_qa, - load_knowledge_base_UnstructuredFile, - get_embedding, - extract, -) -import json -from typing import Dict, List -import os -from googleapiclient.discovery import build -import requests -from selenium import webdriver -from selenium.webdriver.common.by import By -from selenium.webdriver.support.ui import WebDriverWait -from selenium.webdriver.support import expected_conditions as EC -from bs4 import BeautifulSoup -import base64 -import re -from datetime import datetime, timedelta -from typing import Tuple, List, Any, Dict -from email.mime.text import MIMEText -from email.mime.multipart import MIMEMultipart -from google.auth.transport.requests import Request -from google.oauth2.credentials import Credentials -from google_auth_oauthlib.flow import InstalledAppFlow -from googleapiclient.discovery import build -from googleapiclient.errors import HttpError -from tqdm import tqdm - -class ToolComponent: - def __init__(self): - pass - - @abstractmethod - def func(self): - pass - -class KnowledgeBaseComponent(ToolComponent): - """ - Inject knowledge base - top_k : Top_k with the highest matching degree - type : "QA" or others - knowledge_base(json_path) : knowledge_base_path - """ - def __init__(self, top_k, type, knowledge_base): - super().__init__() - self.top_k = top_k - self.type = type - self.knowledge_base = knowledge_base - - if self.type == "QA": - ( - self.kb_embeddings, - self.kb_questions, - self.kb_answers, - self.kb_chunks, - ) = load_knowledge_base_qa(self.knowledge_base) - else: - self.kb_embeddings, self.kb_chunks = 
load_knowledge_base_UnstructuredFile( - self.knowledge_base - ) - - def func(self, agent): - query = ( - agent.long_term_memory[-1]["content"] - if len(agent.long_term_memory) > 0 - else "" - ) - knowledge = "" - query = extract(query, "query") - query_embedding = get_embedding(query) - hits = semantic_search(query_embedding, self.kb_embeddings, top_k=50) - hits = hits[0] - temp = [] - if self.type == "QA": - for hit in hits: - matching_idx = hit["corpus_id"] - if self.kb_chunks[matching_idx] in temp: - pass - else: - knowledge = ( - knowledge - + f"question:{self.kb_questions[matching_idx]},answer:{self.kb_answers[matching_idx]}\n\n" - ) - temp.append(self.kb_answers[matching_idx]) - if len(temp) == 1: - break - print(hits[0]["score"]) - score = hits[0]["score"] - if score < 0.5: - return {"prompt": "No matching knowledge base"} - else: - return {"prompt": "The relevant content is: " + knowledge + "\n"} - else: - for hit in hits: - matching_idx = hit["corpus_id"] - if self.kb_chunks[matching_idx] in temp: - pass - else: - knowledge = knowledge + f"{self.kb_answers[matching_idx]}\n\n" - temp.append(self.kb_answers[matching_idx]) - if len(temp) == self.top_k: - break - print(hits[0]["score"]) - score = hits[0]["score"] - if score < 0.5: - return {"prompt": "No matching knowledge base"} - else: - print(knowledge) - return {"prompt": "The relevant content is: " + knowledge + "\n"} - - -class StaticComponent(ToolComponent): - "Return static response" - def __init__(self, output): - super().__init__() - self.output = output - - def func(self, agent): - outputdict = {"response": self.output} - return outputdict - - -class ExtractComponent(ToolComponent): - """ - Extract keywords based on the current scene and store them in the environment - extract_words(list) : Keywords to be extracted - system_prompt & last_prompt : Prompt to extract keywords - """ - def __init__( - self, - extract_words, - system_prompt, - last_prompt=None, - ): - super().__init__() - self.extract_words = extract_words - self.system_prompt = system_prompt - self.default_prompt = ( - "Please strictly adhere to the following format for outputting:\n" - ) - for extract_word in extract_words: - self.default_prompt += ( - f"<{extract_word}> the content you need to extract " - ) - self.last_prompt = last_prompt if last_prompt else self.default_prompt - - def func(self, agent): - response = agent.LLM.get_response( - agent.long_term_memory, - self.system_prompt, - self.last_prompt, - stream=False, - ) - for extract_word in self.extract_words: - key = extract(response, extract_word) - key = key if key else response - agent.environment.shared_memory[extract_word] = key - - return {} - - -"""Search sources: chatgpt/search engines/specific search sources/can even be multimodal (if it comes to clothing)""" - - -class WebSearchComponent(ToolComponent): - """search engines""" - - __ENGINE_NAME__: List = ["google", "bing"] - - def __init__(self, engine_name: str, api: Dict): - """ - :param engine_name: The name of the search engine used - :param api: Pass in a dictionary, such as {"bing":"key1", "google":"key2", ...}, of course each value can also be a list, or more complicated - """ - super(WebSearchComponent, self).__init__() - """Determine whether the key and engine_name of the api are legal""" - - assert engine_name in WebSearchComponent.__ENGINE_NAME__ - for api_name in api: - assert api_name in WebSearchComponent.__ENGINE_NAME__ - - self.api = api - self.engine_name = engine_name - - self.search: Dict = {"bing": self._bing_search, 
"google": self._google_search} - - def _bing_search(self, query: str, **kwargs): - """Initialize search hyperparameters""" - subscription_key = self.api["bing"] - search_url = "https://api.bing.microsoft.com/v7.0/search" - headers = {"Ocp-Apim-Subscription-Key": subscription_key} - params = { - "q": query, - "textDecorations": True, - "textFormat": "HTML", - "count": 10, - } - """start searching""" - response = requests.get(search_url, headers=headers, params=params) - response.raise_for_status() - results = response.json()["webPages"]["value"] - """execute""" - metadata_results = [] - for result in results: - metadata_result = { - "snippet": result["snippet"], - "title": result["name"], - "link": result["url"], - } - metadata_results.append(metadata_result) - return {"meta data": metadata_results} - - def _google_search(self, query: str, **kwargs): - """Initialize search hyperparameters""" - api_key = self.api[self.engine_name]["api_key"] - cse_id = self.api[self.engine_name]["cse_id"] - service = build("customsearch", "v1", developerKey=api_key) - """start searching""" - results = ( - service.cse().list(q=query, cx=cse_id, num=10, **kwargs).execute()["items"] - ) - """execute""" - metadata_results = [] - for result in results: - metadata_result = { - "snippet": result["snippet"], - "title": result["title"], - "link": result["link"], - } - metadata_results.append(metadata_result) - return {"meta data": metadata_results} - - def func(self, agent, **kwargs) -> Dict: - query = ( - agent.long_term_memory[-1]["content"] - if len(agent.long_term_memory) > 0 - else " " - ) - response = agent.LLM.get_response( - None, - system_prompt=f"Please analyze the provided conversation and identify keywords that can be used for a search engine query. Format the output as extracted keywords:\nConversation:\n{query}", - stream=False, - ) - response = extract(response, "keywords") - query = response if response else query - - search_results = self.search[self.engine_name](query=query, **kwargs) - information = "" - for i in search_results["meta data"][:5]: - information += i["snippet"] - return { - "prompt": "You can refer to the following information to reply:\n" - + information - } - - def convert_search_engine_to(self, engine_name): - assert engine_name in WebSearchComponent.__ENGINE_NAME__ - self.engine_name = engine_name - - -class WebCrawlComponent(ToolComponent): - """Open a single web page for crawling""" - - def __init__(self): - super(WebCrawlComponent, self).__init__() - - def func(self, agent_dict) -> Dict: - url = agent_dict["url"] - print(f"crawling {url} ......") - content = "" - """Crawling content from url may need to be carried out according to different websites, such as wiki, baidu, zhihu, etc.""" - driver = webdriver.Chrome() - try: - """open url""" - driver.get(url) - - """wait 20 second""" - wait = WebDriverWait(driver, 20) - wait.until(EC.presence_of_element_located((By.TAG_NAME, "body"))) - - """crawl code""" - page_source = driver.page_source - - """parse""" - soup = BeautifulSoup(page_source, "html.parser") - - """concatenate""" - for paragraph in soup.find_all("p"): - content = f"{content}\n{paragraph.get_text()}" - except Exception as e: - print("Error:", e) - finally: - """quit""" - driver.quit() - return {"content": content.strip()} - - -class MailComponent(ToolComponent): - __VALID_ACTION__ = ["read", "send"] - - def __init__( - self, cfg_file: str, default_action: str = "read", name: str = "e-mail" - ): - """'../config/google_mail.json'""" - super(MailComponent, 
self).__init__(name) - self.name = name - assert ( - default_action.lower() in self.__VALID_ACTION__ - ), f"Action `{default_action}` is not allowed! The valid action is in `{self.__VALID_ACTION__}`" - self.action = default_action.lower() - self.credential = self._login(cfg_file) - - def _login(self, cfg_file: str): - SCOPES = [ - "https://www.googleapis.com/auth/gmail.readonly", - "https://www.googleapis.com/auth/gmail.send", - ] - creds = None - if os.path.exists("token.json"): - print("Login Successfully!") - creds = Credentials.from_authorized_user_file("token.json", SCOPES) - if not creds or not creds.valid: - print("Please authorize in an open browser.") - if creds and creds.expired and creds.refresh_token: - creds.refresh(Request()) - else: - flow = InstalledAppFlow.from_client_secrets_file(cfg_file, SCOPES) - creds = flow.run_local_server(port=0) - # Save the credentials for the next run - with open("token.json", "w") as token: - token.write(creds.to_json()) - return creds - - def _read(self, mail_dict: dict): - credential = self.credential - state = mail_dict["state"] if "state" in mail_dict else None - time_between = ( - mail_dict["time_between"] if "time_between" in mail_dict else None - ) - sender_mail = mail_dict["sender_mail"] if "sender_mail" in mail_dict else None - only_both = mail_dict["only_both"] if "only_both" in mail_dict else False - order_by_time = ( - mail_dict["order_by_time"] if "order_by_time" in mail_dict else "descend" - ) - include_word = ( - mail_dict["include_word"] if "include_word" in mail_dict else None - ) - exclude_word = ( - mail_dict["exclude_word"] if "exclude_word" in mail_dict else None - ) - MAX_SEARCH_CNT = ( - mail_dict["MAX_SEARCH_CNT"] if "MAX_SEARCH_CNT" in mail_dict else 50 - ) - number = mail_dict["number"] if "number" in mail_dict else 10 - if state is None: - state = "all" - if time_between is not None: - assert isinstance(time_between, tuple) - assert len(time_between) == 2 - assert state in ["all", "unread", "read", "sent"] - if only_both: - assert sender_mail is not None - if sender_mail is not None: - assert isinstance(sender_mail, str) - assert credential - assert order_by_time in ["descend", "ascend"] - - def generate_query(): - query = "" - if state in ["unread", "read"]: - query = f"is:{state}" - if state in ["sent"]: - query = f"in:{state}" - if only_both: - query = f"{query} from:{sender_mail} OR to:{sender_mail}" - if sender_mail is not None and not only_both: - query = f"{query} from:({sender_mail})" - if include_word is not None: - query = f"{query} {include_word}" - if exclude_word is not None: - query = f"{query} -{exclude_word}" - if time_between is not None: - TIME_FORMAT = "%Y/%m/%d" - t1, t2 = time_between - if t1 == "now": - t1 = datetime.now().strftime(TIME_FORMAT) - if t2 == "now": - t2 = datetime.now().strftime(TIME_FORMAT) - if isinstance(t1, str) and isinstance(t2, str): - t1 = datetime.strptime(t1, TIME_FORMAT) - t2 = datetime.strptime(t2, TIME_FORMAT) - elif isinstance(t1, str) and isinstance(t2, int): - t1 = datetime.strptime(t1, TIME_FORMAT) - t2 = t1 + timedelta(days=t2) - elif isinstance(t1, int) and isinstance(t2, str): - t2 = datetime.strptime(t2, TIME_FORMAT) - t1 = t2 + timedelta(days=t1) - else: - assert False, "invalid time" - if t1 > t2: - t1, t2 = t2, t1 - query = f"{query} after:{t1.strftime(TIME_FORMAT)} before:{t2.strftime(TIME_FORMAT)}" - return query.strip() - - def sort_by_time(data: List[Dict]): - if order_by_time == "descend": - reverse = True - else: - reverse = False - sorted_data = 
sorted( - data, - key=lambda x: datetime.strptime(x["time"], "%Y-%m-%d %H:%M:%S"), - reverse=reverse, - ) - return sorted_data - - try: - service = build("gmail", "v1", credentials=credential) - results = ( - service.users() - .messages() - .list(userId="me", labelIds=["INBOX"], q=generate_query()) - .execute() - ) - - messages = results.get("messages", []) - email_data = list() - - if not messages: - print("No eligible emails.") - return None - else: - pbar = tqdm(total=min(MAX_SEARCH_CNT, len(messages))) - for cnt, message in enumerate(messages): - pbar.update(1) - if cnt >= MAX_SEARCH_CNT: - break - msg = ( - service.users() - .messages() - .get( - userId="me", - id=message["id"], - format="full", - metadataHeaders=None, - ) - .execute() - ) - - subject = "" - for header in msg["payload"]["headers"]: - if header["name"] == "Subject": - subject = header["value"] - break - - sender = "" - for header in msg["payload"]["headers"]: - if header["name"] == "From": - sender = re.findall( - r"\b[\w\.-]+@[\w\.-]+\.\w+\b", header["value"] - )[0] - break - body = "" - if "parts" in msg["payload"]: - for part in msg["payload"]["parts"]: - if part["mimeType"] == "text/plain": - data = part["body"]["data"] - body = base64.urlsafe_b64decode(data).decode("utf-8") - break - - email_info = { - "sender": sender, - "time": datetime.fromtimestamp( - int(msg["internalDate"]) / 1000 - ).strftime("%Y-%m-%d %H:%M:%S"), - "subject": subject, - "body": body, - } - email_data.append(email_info) - pbar.close() - email_data = sort_by_time(email_data)[0:number] - return {"results": email_data} - except Exception as e: - print(e) - return None - - def _send(self, mail_dict: dict): - recipient_mail = mail_dict["recipient_mail"] - subject = mail_dict["subject"] - body = mail_dict["body"] - credential = self.credential - service = build("gmail", "v1", credentials=credential) - - message = MIMEMultipart() - message["to"] = recipient_mail - message["subject"] = subject - - message.attach(MIMEText(body, "plain")) - - raw_message = base64.urlsafe_b64encode(message.as_bytes()).decode("utf-8") - try: - message = ( - service.users() - .messages() - .send(userId="me", body={"raw": raw_message}) - .execute() - ) - return {"state": True} - except HttpError as error: - print(error) - return {"state": False} - - def func(self, mail_dict: dict): - if "action" in mail_dict: - assert mail_dict["action"].lower() in self.__VALID_ACTION__ - self.action = mail_dict["action"] - functions = {"read": self._read, "send": self._send} - return functions[self.action](mail_dict) - - def convert_action_to(self, action_name: str): - assert ( - action_name.lower() in self.__VALID_ACTION__ - ), f"Action `{action_name}` is not allowed! 
The valid action is in `{self.__VALID_ACTION__}`" - self.action = action_name.lower() - - -class WeatherComponet(ToolComponent): - def __init__(self, api_key, name="weather", TIME_FORMAT="%Y-%m-%d"): - super(WeatherComponet, self).__init__(name) - self.name = name - self.TIME_FORMAT = TIME_FORMAT - self.api_key = api_key - - def _parse(self, data): - dict_data: dict = {} - for item in data["data"]: - date = item["datetime"] - dict_data[date] = {} - if "weather" in item: - dict_data[date]["description"] = item["weather"]["description"] - mapping = { - "temp": "temperature", - "max_temp": "max_temperature", - "min_temp": "min_temperature", - "precip": "accumulated_precipitation", - } - for key in ["temp", "max_temp", "min_temp", "precip"]: - if key in item: - dict_data[date][mapping[key]] = item[key] - return dict_data - - def _query(self, city_name, country_code, start_date, end_date): - """https://www.weatherbit.io/api/historical-weather-daily""" - # print(datetime.strftime(start_date, self.TIME_FORMAT), datetime.strftime(datetime.now(), self.TIME_FORMAT), end_date, datetime.strftime(datetime.now()+timedelta(days=1), self.TIME_FORMAT)) - if start_date == datetime.strftime( - datetime.now(), self.TIME_FORMAT - ) and end_date == datetime.strftime( - datetime.now() + timedelta(days=1), self.TIME_FORMAT - ): - """today""" - url = f"https://api.weatherbit.io/v2.0/current?city={city_name}&country={country_code}&key={self.api_key}" - else: - url = f"https://api.weatherbit.io/v2.0/history/daily?&city={city_name}&country={country_code}&start_date={start_date}&end_date={end_date}&key={self.api_key}" - response = requests.get(url) - data = response.json() - return self._parse(data) - - def func(self, weather_dict: Dict) -> Dict: - TIME_FORMAT = self.TIME_FORMAT - # Beijing, Shanghai - city_name = weather_dict["city_name"] - # CN, US - country_code = weather_dict["country_code"] - # 2020-02-02 - start_date = datetime.strftime( - datetime.strptime(weather_dict["start_date"], self.TIME_FORMAT), - self.TIME_FORMAT, - ) - end_date = weather_dict["end_date"] if "end_date" in weather_dict else None - if end_date is None: - end_date = datetime.strftime( - datetime.strptime(start_date, TIME_FORMAT) + timedelta(days=-1), - TIME_FORMAT, - ) - else: - end_date = datetime.strftime( - datetime.strptime(weather_dict["end_date"], self.TIME_FORMAT), - self.TIME_FORMAT, - ) - if datetime.strptime(start_date, TIME_FORMAT) > datetime.strptime( - end_date, TIME_FORMAT - ): - start_date, end_date = end_date, start_date - assert start_date != end_date - return self._query(city_name, country_code, start_date, end_date) - - -class TranslateComponent(ToolComponent): - __SUPPORT_LANGUAGE__ = [ - "af", - "am", - "ar", - "as", - "az", - "ba", - "bg", - "bn", - "bo", - "bs", - "ca", - "cs", - "cy", - "da", - "de", - "dsb", - "dv", - "el", - "en", - "es", - "et", - "eu", - "fa", - "fi", - "fil", - "fj", - "fo", - "fr", - "fr-CA", - "ga", - "gl", - "gom", - "gu", - "ha", - "he", - "hi", - "hr", - "hsb", - "ht", - "hu", - "hy", - "id", - "ig", - "ikt", - "is", - "it", - "iu", - "iu-Latn", - "ja", - "ka", - "kk", - "km", - "kmr", - "kn", - "ko", - "ku", - "ky", - "ln", - "lo", - "lt", - "lug", - "lv", - "lzh", - "mai", - "mg", - "mi", - "mk", - "ml", - "mn-Cyrl", - "mn-Mong", - "mr", - "ms", - "mt", - "mww", - "my", - "nb", - "ne", - "nl", - "nso", - "nya", - "or", - "otq", - "pa", - "pl", - "prs", - "ps", - "pt", - "pt-PT", - "ro", - "ru", - "run", - "rw", - "sd", - "si", - "sk", - "sl", - "sm", - "sn", - "so", - "sq", - "sr-Cyrl", 
- "sr-Latn", - "st", - "sv", - "sw", - "ta", - "te", - "th", - "ti", - "tk", - "tlh-Latn", - "tlh-Piqd", - "tn", - "to", - "tr", - "tt", - "ty", - "ug", - "uk", - "ur", - "uz", - "vi", - "xh", - "yo", - "yua", - "yue", - "zh-Hans", - "zh-Hant", - "zu", - ] - - def __init__( - self, api_key, location, default_target_language="zh-cn", name="translate" - ): - super(TranslateComponent, self).__init__(name) - self.name = name - self.api_key = api_key - self.location = location - self.default_target_language = default_target_language - - def func(self, translate_dict: Dict) -> Dict: - content = translate_dict["content"] - target_language = self.default_target_language - if "target_language" in translate_dict: - target_language = translate_dict["target_language"] - assert ( - target_language in self.__SUPPORT_LANGUAGE__ - ), f"language `{target_language}` is not supported." - - endpoint = "https://api.cognitive.microsofttranslator.com" - - path = "/translate" - constructed_url = endpoint + path - - params = {"api-version": "3.0", "to": target_language} - - headers = { - "Ocp-Apim-Subscription-Key": self.api_key, - "Ocp-Apim-Subscription-Region": self.location, - "Content-type": "application/json", - "X-ClientTraceId": str(uuid.uuid4()), - } - - body = [{"text": content}] - - request = requests.post( - constructed_url, params=params, headers=headers, json=body - ) - response = request.json() - response = json.dumps( - response, - sort_keys=True, - ensure_ascii=False, - indent=4, - separators=(",", ": "), - ) - response = eval(response) - return {"result": response[0]["translations"][0]["text"]} - - -class APIComponent(ToolComponent): - def __init__(self): - super(APIComponent, self).__init__() - - def func(self, agent) -> Dict: - pass - - -class FunctionComponent(ToolComponent): - def __init__( - self, - functions, - function_call="auto", - response_type="response", - your_function=None, - ): - super().__init__() - self.functions = functions - self.function_call = function_call - self.parameters = {} - self.available_functions = {} - self.response_type = response_type - if your_function: - function_name = your_function["name"] - function_content = your_function["content"] - exec(function_content) - self.available_functions[function_name] = eval(function_name) - - for function in self.functions: - self.parameters[function["name"]] = list( - function["parameters"]["properties"].keys() - ) - self.available_functions[function["name"]] = eval(function["name"]) - - def func(self, agent): - messages = agent.long_term_memory - outputdict = {} - query = agent.long_term_memory[-1].content if len(agent.long_term_memory) > 0 else " " - relevant_history = get_relevant_history( - query, - agent.long_term_memory[:-1], - agent.chat_embeddings[:-1], - ) - response = agent.LLM.get_response( - messages, - None, - functions=self.functions, - stream=False, - function_call=self.function_call, - relevant_history=relevant_history, - ) - response_message = response - if response_message.get("function_call"): - function_name = response_message["function_call"]["name"] - fuction_to_call = self.available_functions[function_name] - function_args = json.loads(response_message["function_call"]["arguments"]) - input_args = {} - for args_name in self.parameters[function_name]: - input_args[args_name] = function_args.get(args_name) - function_response = fuction_to_call(**input_args) - if self.response_type == "response": - outputdict["response"] = function_response - elif self.response_type == "prompt": - outputdict["prompt"] = 
function_response - - return outputdict - - -class CodeComponent(ToolComponent): - def __init__(self, file_name, keyword) -> None: - super().__init__() - self.file_name = file_name - self.keyword = keyword - self.system_prompt = ( - "you need to extract the modified code as completely as possible." - ) - self.last_prompt = ( - f"Please strictly adhere to the following format for outputting: \n" - ) - self.last_prompt += ( - f"<{self.keyword}> the content you need to extract " - ) - - def func(self, agent): - response = agent.LLM.get_response( - agent.long_term_memory, - self.system_prompt, - self.last_prompt, - stream=False, - ) - code = extract(response, self.keyword) - code = code if code else response - os.makedirs("output_code", exist_ok=True) - file_name = "output_code/" + self.file_name - codes = code.split("\n") - if codes[0] == "```python": - codes.remove(codes[0]) - if codes[-1] == "```": - codes.remove(codes[-1]) - code = "\n".join(codes) - with open(file_name, "w", encoding="utf-8") as f: - f.write(code) - return {} diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/__init__.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Abhilashvj/planogram-compliance/utils/loggers/wandb/__init__.py b/spaces/Abhilashvj/planogram-compliance/utils/loggers/wandb/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/AgentVerse/agentVerse/README.md b/spaces/AgentVerse/agentVerse/README.md deleted file mode 100644 index 29a3ee2289b0a52ae3b91f4d3be9cb77417b149e..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/README.md +++ /dev/null @@ -1,429 +0,0 @@ ---- -title: AgentVerse -sdk: gradio -license: apache-2.0 -emoji: 🤖 -colorFrom: indigo -colorTo: indigo ---- - -

-# 🤖 AgentVerse 🪐
-
-### A Framework for Multi-LLM Environment Simulation
-
-License: Apache2 | Python Version | Build | Code Style: Black | Contributions: Welcome
-
-【English | Chinese】

-
-**AgentVerse** offers a versatile framework that streamlines the process of creating custom multi-agent environments for large language models (LLMs). Designed to facilitate swift development and customization with minimal effort, our framework empowers researchers to concentrate on their research rather than being bogged down by implementation details.
-
-⚠️⚠️⚠️ We're refactoring the code, with the goal of providing the flexibility to construct both simulation environments (without a predefined goal) and task-solving environments (with a specific goal). Please note that this README is slightly outdated; we will update it soon. If you require a stable version that exclusively supports simulation environments, you can use the [`release-0.1`](https://github.com/OpenBMB/AgentVerse/tree/release-0.1) branch.
-
----
-
-## ✨ Features
-
-- 🥳 **Efficient Environment Building:** Our framework provides a collection of essential building blocks for effortlessly creating a multi-agent environment. With only a few lines in a configuration file, you can easily construct basic environments such as a chat room for LLMs. This process entails defining the environment's settings and prompts for LLMs, enabling researchers like you to concentrate on experimentation and analysis.
-
-- ⚙️ **Customizable Components**: AgentVerse simplifies the multi-agent environment by dividing it into five functional modules and defining their respective interfaces. For complex environments that cannot be constructed directly from the basic modules offered in AgentVerse, you can customize one or more of the interfaces within these five functional modules to efficiently create your own multi-agent environment according to your requirements.
-
-- 🛠 **Tools (Plugins) Utilization**: AgentVerse supports multi-agent environments with tools. Currently, AgentVerse supports the tools provided in [BMTools](https://github.com/OpenBMB/BMTools).
-
-## 📰 What's New
-- [2023/10/5] 💡 We release the code of our paper [AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents](https://arxiv.org/abs/2308.10848), and refactor our codebase to enable the creation of both simulation and task-solving environments! We have placed the code for the Minecraft example in the paper at the [`minecraft`](https://github.com/OpenBMB/AgentVerse/tree/minecraft) branch. Our tool-using example will soon be updated to the `main` branch. Stay tuned!
-
-- [2023/8/22] 📝 We're excited to share our work-in-progress paper [AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents](https://arxiv.org/abs/2308.10848) related to this repository.


-
-- [2023/6/5] 🎉 We are thrilled to present an array of [demos](#-simple-demo-video), including [NLP Classroom](#nlp-classroom), [Prisoner Dilemma](#prisoner-dilemma), [Software Design](#software-design), [Database Administrator](#database-administrator-dba), and a simple [H5 Pokemon Game](#pokemon) that enables interaction with the characters in Pokemon! Try out these demos and have fun!
-- [2023/5/1] 🚀 [AgentVerse](https://github.com/OpenBMB/AgentVerse) is officially launched!
-
-## 🌟 Join Us!
-AgentVerse is on a mission to revolutionize the multi-agent environment for large language models, and we're eagerly looking for passionate collaborators to join us on this exciting journey.
-### How Can You Contribute?
-- **Code Development**: If you're an engineer, help us refine, optimize, and expand the current framework. We're always looking for talented developers to enhance our existing features and develop new modules.
-
-- **Documentation and Tutorials**: If you have a knack for writing, help us improve our documentation, create tutorials, or write blog posts to make AgentVerse more accessible to the broader community.
-
-- **Application Exploration**: If you're intrigued by multi-agent applications and are eager to experiment using AgentVerse, we'd be thrilled to support your journey and see what you create!
-
-- **Feedback and Suggestions**: Use AgentVerse and provide us with feedback. Your insights can lead to potential improvements and ensure that our framework remains top-notch.
-
-Also, if you're passionate about advancing the frontiers of multi-agent environments and are eager to dive deeper into research, we invite you to join our team at THUNLP. To explore this exciting opportunity and embark on a collaborative journey with us, please reach out to [chenweize1998@gmail.com](chenweize1998@gmail.com) and [yushengsu.thu@gmail.com](yushengsu.thu@gmail.com) and express your interest. We're keen to welcome motivated individuals like you to our lab!
-
-👉 Also, check out our Discord: https://discord.gg/cnutfCtC.
-
-## 🗓 Coming Soon
-- [x] Code release of our [paper](https://arxiv.org/abs/2308.10848)
-- [ ] Add documentation
-- [ ] Support more sophisticated memory for conversation history
-- [ ] Add support for local LLM
-
-
-## 👾 Simple Demo Video
-
-We demonstrate the following cases, all expertly crafted with AgentVerse.
-
-
-#### NLP Classroom
-In the NLP class, the professor and students engage in interactive communication. When students have a question, they raise their hands and patiently wait for the professor to call on them. Only after being called on by the professor can students speak and ask their questions.
-
-Use the following command to launch the NLP Classroom example:
-```bash
-python agentverse_command/main_simulation_gui.py --task simulation/nlp_classroom_9players
-```
-
-[Watch the NLP Classroom Video](https://github.com/OpenBMB/AgentVerse/assets/11704492/6ea07850-595e-4a28-a82e-f863011353c2)
-
-
-#### Prisoner Dilemma
-The Prisoner's Dilemma is a thought experiment that presents two completely rational agents with a dilemma: each can cooperate with its partner for mutual benefit or betray its partner ("defect") for individual reward.
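-
-For the unfamiliar, the canonical payoff structure is sketched below (the numbers are the textbook convention, not values taken from the demo): mutual cooperation beats mutual defection, yet each agent is individually tempted to defect.
-
-| | B cooperates | B defects |
-| --- | --- | --- |
-| **A cooperates** | 3, 3 | 0, 5 |
-| **A defects** | 5, 0 | 1, 1 |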
-
-Use the following command to launch the Prisoner Dilemma example:
-```bash
-python agentverse_command/main_simulation_gui.py --task simulation/prisoner_dilemma
-```
-
-[Watch the Prisoner's Dilemma Video](https://github.com/OpenBMB/AgentVerse/assets/11704492/017c46e5-c738-4fca-9352-b008e2d518bd)
-
-
-#### Software Design
-In the Software Design example, a code writer, a code tester and a code reviewer collaborate on the code generation problem. Given a problem, the code writer first composes the code implementation. The code tester runs the unit tests and provides feedback. The code reviewer then generates a review. After collecting the test feedback and the review, the code writer iteratively refines the code.
-
-Use the following command to launch the Software Design example:
-```bash
-python agentverse_command/main_simulation_gui.py --task simulation/sde_team/sde_team_2players
-```
-
-[Watch the Software Design Video](https://github.com/OpenBMB/AgentVerse/assets/11704492/5058066a-abee-490d-8659-b4e54661626a)
-
-
-#### [Database Administrator (DBA)](https://github.com/TsinghuaDatabaseGroup/DB-GPT)
-
-In the database diagnosis scenario, the Chief DBA monitors the system for anomalies (e.g., slow queries, locks, crashes). If any are detected, the domain experts are alerted to analyze root causes, share insights, and suggest optimization solutions together. The Chief DBA then provides a summarized report to the user.
-
-```bash
-python agentverse_command/main_simulation_gui.py --task simulation/db_diag
-```
-
-[Watch the DBA Video](https://github.com/OpenBMB/AgentVerse/assets/11704492/c633419d-afbb-47d4-bb12-6bb512e7af3a)
-
-#### [Text Evaluation (ChatEval)](https://github.com/chanchimin/ChatEval)
-In the text evaluation scenario, we recommend users explore the [ChatEval](https://github.com/chanchimin/ChatEval) repo. They've implemented a multi-agent referee team on AgentVerse to assess the quality of text generated by different models. Given two distinct pieces of text, roles within ChatEval can autonomously debate the nuances and disparities, drawing upon their assigned personas, and subsequently provide their judgments. Experiments indicate that their referee team, enriched with the diverse roles specified in [config.yaml](#2-configuring-the-agents), aligns more closely with human evaluations. This demo is built upon the [Fastchat](https://github.com/lm-sys/FastChat) repo, and we'd like to express our appreciation for their foundational work.
-
-
-[Watch the ChatEval Video](https://github.com/OpenBMB/AgentVerse/assets/75533759/58f33468-f15b-4bac-ae01-8d0780019f85)
-
-#### Pokemon
-**Currently available only in [`release-0.1`](https://github.com/OpenBMB/AgentVerse/tree/release-0.1)**. In the game, agents can walk around the game world and interact with one another. As a player, you take on the role of an agent and can engage with others at any time. There are 6 characters in the Pokémon environment who appeared in Pokemon Emerald: [May](https://bulbapedia.bulbagarden.net/wiki/May_(game)), [Professor Birch](https://bulbapedia.bulbagarden.net/wiki/Professor_Birch), [Steven Stone](https://bulbapedia.bulbagarden.net/wiki/Steven_Stone), [Maxie](https://bulbapedia.bulbagarden.net/wiki/Maxie), [Archie](https://bulbapedia.bulbagarden.net/wiki/Archie) and [Joseph](https://bulbapedia.bulbagarden.net/wiki/Mr._Stone).
-
-To launch the Pokemon game, first launch a local server with the following command:
-```bash
-uvicorn pokemon_server:app --reload --port 10002
-```
-Then open another terminal in the project's root path and run the following commands:
-```bash
-cd ui
-# If you do not have npm installed, you need to install it before running the following commands
-# https://docs.npmjs.com/downloading-and-installing-node-js-and-npm
-# We have tested on npm@9.6.4, node@20.0.0
-npm install
-npm run watch
-```
-Wait for the compilation to complete, and have fun! (WASD for moving around, and SPACE for launching a conversation.)
-
-[Watch the Pokemon Video](https://github.com/OpenBMB/AgentVerse/assets/11704492/4d07da68-f942-4205-b558-f155e95782e7)
-
-
-
-## Contents
-
-- [✨ Features](#-features)
-- [📰 What's New](#-whats-new)
-- [🌟 Join Us!](#-join-us)
-  - [How Can You Contribute?](#how-can-you-contribute)
-- [🗓 Coming Soon](#-coming-soon)
-- [👾 Simple Demo Video](#-simple-demo-video)
-  - [NLP Classroom](#nlp-classroom)
-  - [Prisoner Dilemma](#prisoner-dilemma)
-  - [Software Design](#software-design)
-  - [Database Administrator (DBA)](#database-administrator-dba)
-  - [Text Evaluation (ChatEval)](#text-evaluation-chateval)
-  - [Pokemon](#pokemon)
-- [Contents](#contents)
-- [🚀 Getting Started](#-getting-started)
-  - [Installation](#installation)
-  - [Simulation CLI Example](#simulation-cli-example)
-  - [Simulation Local Website Demo](#simulation-local-website-demo)
-  - [Task-Solving CLI Example](#task-solving-cli-example)
-- [💡 Philosophy](#-philosophy)
-  - [Environment](#environment)
-  - [Agent](#agent)
-- [✍️ Customize Your Own Environment](#️-customize-your-own-environment)
-  - [A Simple Example: Building a Classroom Environment](#a-simple-example-building-a-classroom-environment)
-    - [1. Creating a Task Directory and Configuring the Environment](#1-creating-a-task-directory-and-configuring-the-environment)
-    - [2. Configuring the Agents](#2-configuring-the-agents)
-    - [3. Writing an Output Parser](#3-writing-an-output-parser)
-  - [Customization Guide for More Complex Environments](#customization-guide-for-more-complex-environments)
-- [🔎 Examples](#-examples)
-- [Star History](#star-history)
-- [Citation](#citation)
-- [Contact](#contact)
-
-
-
-## 🚀 Getting Started
-
-### Installation
-
-```bash
-pip install -U agentverse
-```
-Or you can install the package by manually cloning the latest repository:
-```bash
-git clone https://github.com/OpenBMB/AgentVerse.git --depth 1
-cd AgentVerse
-pip install -r requirements.txt
-```
-Some users have reported problems installing the `orjson` required by `gradio`. One simple workaround is to install it with Anaconda: `conda install -c conda-forge orjson`.
-
-You also need to export your OpenAI API key as follows:
-```bash
-# Export your OpenAI API key
-export OPENAI_API_KEY="your_api_key_here"
-# Or if you are using Azure
-export AZURE_OPENAI_API_KEY="your_api_key_here"
-export AZURE_OPENAI_API_BASE="your_api_base_here"
-```
-
-If you want to use Azure OpenAI services, please export your Azure OpenAI key and OpenAI API base as follows:
-```bash
-export AZURE_OPENAI_API_KEY="your_api_key_here"
-export AZURE_OPENAI_API_BASE="your_api_base_here"
-```
-
-If you want to use the tools provided by BMTools, you need to install BMTools as follows:
-```bash
-git clone https://github.com/OpenBMB/BMTools.git
-cd BMTools
-pip install -r requirements.txt
-python setup.py develop
-```
-
-
-
-
-### Simulation CLI Example
-
-You can create one of the multi-agent environments provided by us.
Using the classroom scenario as an example. In this scenario, there are nine agents, one playing the role of a professor and the other eight as students. - -```shell -python3 agentverse_command/main_simulation_cli.py --task simulation/nlp_classroom_9players -# or if you have installed AgentVerse via pip -agentverse-simulation --task simulation/nlp_classroom_9players -``` - -### Simulation Local Website Demo - -We also provide a local website demo for this environment. You can launch it with - -```shell -python3 agentverse_command/main_simulation_gui.py --task simulation/nlp_classroom_9players -# or if you have installed AgentVerse via pip -agentverse-simulation-gui --task simulation/nlp_classroom_9players -``` -After successfully launching the local server, you can visit [http://127.0.0.1:7860/](http://127.0.0.1:7860/) to view the classroom environment. - -### Task-Solving CLI Example - -To run the experiments with the task-solving environment proposed in our [paper](https://arxiv.org/abs/2308.10848), you can use the following command: - -```shell -# Run the Humaneval benchmark using gpt-3.5-turbo -python3 agentverse_command/main_tasksolving_cli.py --task tasksolving/humaneval/gpt-3.5 --dataset_path data/humaneval/test.jsonl --overwrite -# or if you have installed AgentVerse via pip -agentverse-tasksolving --task tasksolving/humaneval/gpt-3.5 --dataset_path data/humaneval/test.jsonl --overwrite -``` - -You can take a look at `agentverse/tasks/tasksolving` for more experiments we have done in our paper. - - -## 💡 Philosophy - -### Environment - -At the core of our framework is the environment, which plays a crucial role in enabling researchers to study the behavior of agents under different conditions. We believe that the environment should be flexible and extensible, allowing researchers to easily customize it to fit their needs. To achieve this, we have abstracted the environment into five rule components, and implementing different environments is actually implementing different rules: - -- **Describer**: This component provides a description of the environment at each turn for each agent. You can customize the describer to define the specific requirements of their environment, such as the agents with whom an agent can interact. -- **Order**: This component defines the order in which agents take actions within the environment. You can customize the order to reflect the desired interaction between agents. We provide several basic order options, including `random`, `sequential`, and `concurrent` (in which all agents take an action in each turn). -- **Selector**: This component selects the valid messages generated by agents. Sometimes agents may generate invalid responses, and the selector is used to filter out unexpected results. -- **Updater**: This component updates the memory of each agent. In certain cases, the response generated by one agent should not be seen by all agents (e.g., if agents are in different rooms). For each response, the updater updates only the agents who can see it. -- **Visibility**: This component maintains the list of agents that each agent can see throughout the environment's changes. For example, when an agent moves from one room to another, the list of visible agents of each agent should be updated by `visibility`. - -By abstracting the environment into these five components, we have created a highly flexible and extensible framework that enables researchers to easily build and customize their own multi-agent environments. 
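-
-To make this division of labor concrete, here is a rough sketch of how a single turn could thread through the five components. The method names are illustrative assumptions for exposition, not AgentVerse's actual API:
-
-```python
-# Illustrative sketch only: how the five rule components could compose in one turn.
-def run_turn(environment, agents):
-    rule = environment.rule
-    # Describer: tell each agent what it can currently see
-    descriptions = [rule.describer.describe(environment, agent) for agent in agents]
-    # Order: decide which agents act in this turn
-    acting_ids = rule.order.get_next_agent_idx(environment)
-    messages = [agents[i].step(descriptions[i]) for i in acting_ids]
-    # Selector: filter out malformed or invalid responses
-    messages = rule.selector.select(environment, messages)
-    # Updater: write each message only into the memories of agents allowed to see it
-    rule.updater.update_memory(environment, messages)
-    # Visibility: refresh who can see whom after this turn's actions
-    rule.visibility.update_visible_agents(environment)
-    return messages
-```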
- -### Agent - -Another fundamental component is the agent. Currently we provide two types of agents: **ConversationAgent** and **ToolAgent**. You can also customize your own agent by inheriting BaseAgent class (tutorial coming soon). - -## ✍️ Customize Your Own Environment - -We have provided several examples in the `agentverse/tasks` directory. To customize your environment, you should - -1. Create a task directory in `agentverse/tasks` -2. Write the configuration file -3. Write the output parser that parses the response of your agents. -4. Add your parser in `agentverse/tasks/__init__.py` - -We will use a simple example in `agentverse/tasks/nlp_classroom_3players` to illustrate the procedure. - -### A Simple Example: Building a Classroom Environment - -To illustrate how to customize your environment, we'll use a simple example of building a classroom environment where one agent is the professor, one is the student, and one is the teaching assistant. - -##### 1. Creating a Task Directory and Configuring the Environment - -First, we need to create a task directory and write our configuration file for the environment. In the `agentverse/tasks` directory, create a new directory called `nlp_classroom_3players`. Inside this directory, create a `config.yaml` file and write the following configuration: - -```yaml -# config.yaml -environment: - env_type: basic # Use the basic environment provided in AgentVerse - max_turns: 10 # Specify the maximum number of dialogue turns - rule: - order: - type: sequential # Use the sequential order - visibility: - type: all # Each message can be seen by all agents - selector: - type: basic # Basic selector (do not select) - updater: - type: basic # Basic updater (update the message to all agents) - describer: - type: basic # Basic describer (no description) -``` - -This configuration specifies that we will use the basic environment provided in AgentVerse, with a maximum of 10 dialogue turns. We'll use the sequential order, with all messages visible to all agents. We won't be using any selectors, our updater will update the messages to all the agents and our describer will provide no description. - -##### 2. Configuring the Agents - -Next, we'll configure the agents. In the `config.yaml` file, we'll add the configuration for each agent. Here's an example configuration for the professor: - -```yaml -# config.yaml -agents: - - - agent_type: conversation - name: Professor Micheal # Name of the agent - role_description: You are Prof. Micheal, ... # Description of the agent - memory: - memory_type: chat_history # Will store all the chat history - prompt_template: *professor_prompt - llm: - llm_type: text-davinci-003 # Will use OpenAICompletion LLM - model: text-davinci-003 # The arguments passed to the api call - temperature: 0.7 - max_tokens: 250 -``` - -In this example, we'll use the `conversation` agent type. We've given the agent a name and a description, and we'll store the chat history in memory. We've also provided a prompt template with placeholders marked as ${placeholder}. These will be instantiated by the `_fill_prompt_template` method of the agent. - -##### 3. Writing an Output Parser - -The next step is to write a simple parser for your agent's response. Because you may have specified the output format in your prompt template, you need to provide a corresponding parser. 
In this example, we inform the model to output in the following format in our prompt template - -``` -Action: Speak -Action Input: (the content) -``` - -We'll write a parser to extract the content from the agent's response. Refer to the code for more details. We've decorated our parser function with `@output_parser_registry.register('classroom_parser')` to register it with our framework. Finally, we import our parser in `agentverse/tasks/__init__.py`. - -With these steps, we've successfully built a simple classroom environment and customized it for our needs. - -### Customization Guide for More Complex Environments - -While we provide a basic framework for building environments with our five rule components, more complex environments may require further customization. A detailed documentation and tutorial is coming soon. Here we briefly introduce some steps you can take to customize your environment: - -1. **Customize the five rule components**. Each rule component has an interface, allowing you to customize its behavior to suit your specific needs. It's important to note that these components are not necessarily independent and can interact through the `rule_params` dictionary in the environment. You can create your own rule components and integrate them with the existing ones to build more complex interactions between agents. -2. **Customize the environment itself**. Our `basic` environment provides a default execution order for the five rule components that is suitable for most cases, but you can inherit the `BaseEnvironment` class and write your own `run` method to implement a more sophisticated execution order. -3. **Customize the agent**. Depending on your specific use case, you may also need to inherit the `BaseAgent` class. For example, you may want to use your local LLM as your agents or create agents with specialized knowledge or skills. - - - -## 🔎 Examples - -Currently, we offer some simple examples in the `agentverse/tasks` directory, each demonstrating different possibilities of our framework. While the performance of these examples may not be optimal due to limited prompt engineering, they are intended to showcase the capabilities of our framework, such as allowing the use of tools. - -Here's a brief overview of each example: - -1. `nlp_classroom_3players`: This example illustrates a simple case in which agents will speak in sequential order. -2. `nlp_classroom_9players`: This is an NLP class example. Here, students can raise their hand when they have a question, and the professor can call on the students to let them ask. Students are only allowed to speak after they are called on. -3. `nlp_classroom_9players_group`: This example showcases group discussions. The professor may initiate a group discussion when needed, and students can exclusively interact with fellow students within the same group during the discussion. -4. `nlp_classroom_3players_withtool`: Students in this classroom can use Bing search API when listening to the class. -5. `math_problem_2players_tools`: A simple example demonstrating how two agents can use the WolframAlpha API to play an arithmetic game. -6. `prisoner_dilema`: The Prisoner's Dilemma is a thought experiment involving two rational agents facing a choice between cooperating for mutual benefit or betraying their partner for individual gain. -7. `db_diag`: The Chief DBA monitors (agents) the database system for anomalies and alerts memory and CPU agents if any are detected. They (agents) analyze root causes and suggest optimization solutions. 
The Chief DBA (agent) provides a diagnosis summary to the user, who can give instructions or evaluate the proposed solutions' effectiveness. -8. `sde_team`: In the SDE team, code writer, code tester and code reviewer collaborate on the code generation problem. -9. `pokemon`: This example intimates Pokemon game. - - -## Star History - -[![Star History Chart](https://api.star-history.com/svg?repos=OpenBMB/AgentVerse&type=Date)](https://star-history.com/#OpenBMB/AgentVerse&Date) - - -## Citation -If you find this repo helpful, feel free to cite us. -``` -@article{chen2023agentverse, - title={Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors in agents}, - author={Chen, Weize and Su, Yusheng and Zuo, Jingwei and Yang, Cheng and Yuan, Chenfei and Qian, Chen and Chan, Chi-Min and Qin, Yujia and Lu, Yaxi and Xie, Ruobing and others}, - journal={arXiv preprint arXiv:2308.10848}, - year={2023} -} -``` - -## Contact - -Weize Chen: chenweize1998@gmail.com - -[Yusheng Su](https://yushengsu-thu.github.io/): yushengsu.thu@gmail.com \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorinput/methods/OpenColorPicker.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorinput/methods/OpenColorPicker.js deleted file mode 100644 index 123c72353323009575fede993304ae21d56ce377..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorinput/methods/OpenColorPicker.js +++ /dev/null @@ -1,53 +0,0 @@ -import CreateColorPicker from './CreateColorPicker.js'; -import DropDown from '../../../dropdown/DropDown.js'; - -var OpenColorPicker = function () { - if (this.colorPicker) { - return; - } - - // Layout it to get full height - var colorPicker = CreateColorPicker.call(this).layout(); - - var dropDownBehavior = new DropDown(colorPicker, { - // Transition - duration: { - in: this.colorPickerEaseInDuration, - out: this.colorPickerEaseOutDuration - }, - transitIn: this.colorPickerTransitInCallback, - transitOut: this.colorPickerTransitOutCallback, - - // Position - expandDirection: this.colorPickerExpandDirection, - - alignTargetX: this, - alignTargetY: this, - - bounds: this.colorPickerBounds, - - // Close condition - touchOutsideClose: true, - }) - .on('open', function () { - // After popping up - // Can click - colorPicker.on('valuechange', function (value) { - this.setValue(value); - }, this); - }, this) - - .on('close', function () { - this.colorPicker = undefined; - this.dropDownBehavior = undefined; - }, this) - - this.colorPicker = colorPicker; - this.dropDownBehavior = dropDownBehavior; - - this.pin(colorPicker); - - return this; -} - -export default OpenColorPicker; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sides/childbehaviors/Visible.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sides/childbehaviors/Visible.js deleted file mode 100644 index b0c1659608c980c21d85e60701b15d4acade3984..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sides/childbehaviors/Visible.js +++ /dev/null @@ -1,21 +0,0 @@ -import IndexOf from '../../../../plugins/utils/object/IndexOf.js'; -import Container from '../../container/Container.js'; - -const ContainerSetChildVisible = Container.prototype.setChildVisible; - -export default { - setChildVisible(child, visible) 
{ - var key; - if (typeof (child) === 'string') { - var key = child; - child = this.sizerChildren[key]; - } else { - key = IndexOf(this.sizerChildren, child); - } - if (visible === undefined) { - visible = (this.currentChildKey === key) ? true : false; - } - ContainerSetChildVisible.call(this, child, visible); - return this; - } -} \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/optimization/tome.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/optimization/tome.md deleted file mode 100644 index c2158f539a65d87a9a394298f22c20fa87898d8b..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/optimization/tome.md +++ /dev/null @@ -1,116 +0,0 @@ - - -# Token Merging - -Token Merging (introduced in [Token Merging: Your ViT But Faster](https://arxiv.org/abs/2210.09461)) works by merging the redundant tokens / patches progressively in the forward pass of a Transformer-based network. It can speed up the inference latency of the underlying network. - -After Token Merging (ToMe) was released, the authors released [Token Merging for Fast Stable Diffusion](https://arxiv.org/abs/2303.17604), which introduced a version of ToMe which is more compatible with Stable Diffusion. We can use ToMe to gracefully speed up the inference latency of a [`DiffusionPipeline`]. This doc discusses how to apply ToMe to the [`StableDiffusionPipeline`], the expected speedups, and the qualitative aspects of using ToMe on the [`StableDiffusionPipeline`]. - -## Using ToMe - -The authors of ToMe released a convenient Python library called [`tomesd`](https://github.com/dbolya/tomesd) that lets us apply ToMe to a [`DiffusionPipeline`] like so: - -```diff -from diffusers import StableDiffusionPipeline -import tomesd - -pipeline = StableDiffusionPipeline.from_pretrained( - "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 -).to("cuda") -+ tomesd.apply_patch(pipeline, ratio=0.5) - -image = pipeline("a photo of an astronaut riding a horse on mars").images[0] -``` - -And that’s it! - -`tomesd.apply_patch()` exposes [a number of arguments](https://github.com/dbolya/tomesd#usage) to let us strike a balance between the pipeline inference speed and the quality of the generated tokens. Amongst those arguments, the most important one is `ratio`. `ratio` controls the number of tokens that will be merged during the forward pass. For more details on `tomesd`, please refer to the original repository https://github.com/dbolya/tomesd and [the paper](https://arxiv.org/abs/2303.17604). - -## Benchmarking `tomesd` with `StableDiffusionPipeline` - -We benchmarked the impact of using `tomesd` on [`StableDiffusionPipeline`] along with [xformers](https://huggingface.co/docs/diffusers/optimization/xformers) across different image resolutions. We used A100 and V100 as our test GPU devices with the following development environment (with Python 3.8.5): - -```bash -- `diffusers` version: 0.15.1 -- Python version: 3.8.16 -- PyTorch version (GPU?): 1.13.1+cu116 (True) -- Huggingface_hub version: 0.13.2 -- Transformers version: 4.27.2 -- Accelerate version: 0.18.0 -- xFormers version: 0.0.16 -- tomesd version: 0.1.2 -``` - -We used this script for benchmarking: [https://gist.github.com/sayakpaul/27aec6bca7eb7b0e0aa4112205850335](https://gist.github.com/sayakpaul/27aec6bca7eb7b0e0aa4112205850335). 
Following are our findings: - -### A100 - -| Resolution | Batch size | Vanilla | ToMe | ToMe + xFormers | ToMe speedup (%) | ToMe + xFormers speedup (%) | -| --- | --- | --- | --- | --- | --- | --- | -| 512 | 10 | 6.88 | 5.26 | 4.69 | 23.54651163 | 31.83139535 | -| | | | | | | | -| 768 | 10 | OOM | 14.71 | 11 | | | -| | 8 | OOM | 11.56 | 8.84 | | | -| | 4 | OOM | 5.98 | 4.66 | | | -| | 2 | 4.99 | 3.24 | 3.1 | 35.07014028 | 37.8757515 | -| | 1 | 3.29 | 2.24 | 2.03 | 31.91489362 | 38.29787234 | -| | | | | | | | -| 1024 | 10 | OOM | OOM | OOM | | | -| | 8 | OOM | OOM | OOM | | | -| | 4 | OOM | 12.51 | 9.09 | | | -| | 2 | OOM | 6.52 | 4.96 | | | -| | 1 | 6.4 | 3.61 | 2.81 | 43.59375 | 56.09375 | - -***The timings reported here are in seconds. Speedups are calculated over the `Vanilla` timings.*** - -### V100 - -| Resolution | Batch size | Vanilla | ToMe | ToMe + xFormers | ToMe speedup (%) | ToMe + xFormers speedup (%) | -| --- | --- | --- | --- | --- | --- | --- | -| 512 | 10 | OOM | 10.03 | 9.29 | | | -| | 8 | OOM | 8.05 | 7.47 | | | -| | 4 | 5.7 | 4.3 | 3.98 | 24.56140351 | 30.1754386 | -| | 2 | 3.14 | 2.43 | 2.27 | 22.61146497 | 27.70700637 | -| | 1 | 1.88 | 1.57 | 1.57 | 16.4893617 | 16.4893617 | -| | | | | | | | -| 768 | 10 | OOM | OOM | 23.67 | | | -| | 8 | OOM | OOM | 18.81 | | | -| | 4 | OOM | 11.81 | 9.7 | | | -| | 2 | OOM | 6.27 | 5.2 | | | -| | 1 | 5.43 | 3.38 | 2.82 | 37.75322284 | 48.06629834 | -| | | | | | | | -| 1024 | 10 | OOM | OOM | OOM | | | -| | 8 | OOM | OOM | OOM | | | -| | 4 | OOM | OOM | 19.35 | | | -| | 2 | OOM | 13 | 10.78 | | | -| | 1 | OOM | 6.66 | 5.54 | | | - -As seen in the tables above, the speedup with `tomesd` becomes more pronounced for larger image resolutions. It is also interesting to note that with `tomesd`, it becomes possible to run the pipeline on a higher resolution, like 1024x1024. - -It might be possible to speed up inference even further with [`torch.compile()`](https://huggingface.co/docs/diffusers/optimization/torch2.0). - -## Quality - -As reported in [the paper](https://arxiv.org/abs/2303.17604), ToMe can preserve the quality of the generated images to a great extent while speeding up inference. By increasing the `ratio`, it is possible to further speed up inference, but that might come at the cost of a deterioration in the image quality. - -To test the quality of the generated samples using our setup, we sampled a few prompts from the “Parti Prompts” (introduced in [Parti](https://parti.research.google/)) and performed inference with the [`StableDiffusionPipeline`] in the following settings: - -- Vanilla [`StableDiffusionPipeline`] -- [`StableDiffusionPipeline`] + ToMe -- [`StableDiffusionPipeline`] + ToMe + xformers - -We didn’t notice any significant decrease in the quality of the generated samples. Here are samples: - -![tome-samples](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/tome/tome_samples.png) - -You can check out the generated samples [here](https://wandb.ai/sayakpaul/tomesd-results/runs/23j4bj3i?workspace=). We used [this script](https://gist.github.com/sayakpaul/8cac98d7f22399085a060992f411ecbd) for conducting this experiment. 
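-
-For completeness, combining ToMe with xFormers (the "ToMe + xFormers" setting in the tables above) looks roughly like the snippet below; this is a minimal sketch, and the exact benchmarking code lives in the linked gists:
-
-```python
-import torch
-import tomesd
-from diffusers import StableDiffusionPipeline
-
-pipeline = StableDiffusionPipeline.from_pretrained(
-    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
-).to("cuda")
-
-# Enable xformers memory-efficient attention, then apply ToMe with a 50% merging ratio.
-pipeline.enable_xformers_memory_efficient_attention()
-tomesd.apply_patch(pipeline, ratio=0.5)
-
-image = pipeline("a photo of an astronaut riding a horse on mars").images[0]
-```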
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/optimization/onnx.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/optimization/onnx.md
deleted file mode 100644
index d52110b8c1fbd4b09614ce5b76e79e136b71e959..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/optimization/onnx.md
+++ /dev/null
@@ -1,65 +0,0 @@
-
-
-# How to use ONNX Runtime for inference
-
-🤗 Diffusers provides a Stable Diffusion pipeline compatible with ONNX Runtime. This lets you run Stable Diffusion on any hardware that supports ONNX (including CPUs), even where an accelerated build of PyTorch is unavailable.
-
-## Installation
-
-Install 🤗 Optimum with ONNX Runtime support using the following command:
-
-```
-pip install optimum["onnxruntime"]
-```
-
-## Stable Diffusion inference
-
-The code below shows how to use ONNX Runtime. Use `ORTStableDiffusionPipeline` in place of `StableDiffusionPipeline`.
-To load a PyTorch model and convert it to ONNX format on the fly, set `export=True`.
-
-```python
-from optimum.onnxruntime import ORTStableDiffusionPipeline
-
-model_id = "runwayml/stable-diffusion-v1-5"
-pipe = ORTStableDiffusionPipeline.from_pretrained(model_id, export=True)
-prompt = "a photo of an astronaut riding a horse on mars"
-images = pipe(prompt).images[0]
-pipe.save_pretrained("./onnx-stable-diffusion-v1-5")
-```
-
-If you want to export the pipeline to ONNX format offline and use it later for inference,
-you can use the [`optimum-cli export`](https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#exporting-a-model-to-onnx-using-the-cli) command:
-
-```bash
-optimum-cli export onnx --model runwayml/stable-diffusion-v1-5 sd_v15_onnx/
-```
-
-Then run inference:
-
-```python
-from optimum.onnxruntime import ORTStableDiffusionPipeline
-
-model_id = "sd_v15_onnx"
-pipe = ORTStableDiffusionPipeline.from_pretrained(model_id)
-prompt = "a photo of an astronaut riding a horse on mars"
-images = pipe(prompt).images[0]
-```
-
-Notice that we didn't have to specify `export=True` above.
-
-You can find more examples in the [Optimum documentation](https://huggingface.co/docs/optimum/).
-
-## Known issues
-
-- Generating multiple prompts in a batch seems to use too much memory. While we investigate this, you may need to iterate instead of batching.
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/outputs.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/outputs.py
deleted file mode 100644
index 37b11561d1e1ee5d5cb40c7630b132e1f451c5b0..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/outputs.py
+++ /dev/null
@@ -1,108 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-Generic utilities
-"""
-
-from collections import OrderedDict
-from dataclasses import fields
-from typing import Any, Tuple
-
-import numpy as np
-
-from .import_utils import is_torch_available
-
-
-def is_tensor(x):
-    """
-    Tests if `x` is a `torch.Tensor` or `np.ndarray`.
- """ - if is_torch_available(): - import torch - - if isinstance(x, torch.Tensor): - return True - - return isinstance(x, np.ndarray) - - -class BaseOutput(OrderedDict): - """ - Base class for all model outputs as dataclass. Has a `__getitem__` that allows indexing by integer or slice (like a - tuple) or strings (like a dictionary) that will ignore the `None` attributes. Otherwise behaves like a regular - Python dictionary. - - - - You can't unpack a [`BaseOutput`] directly. Use the [`~utils.BaseOutput.to_tuple`] method to convert it to a tuple - first. - - - """ - - def __post_init__(self): - class_fields = fields(self) - - # Safety and consistency checks - if not len(class_fields): - raise ValueError(f"{self.__class__.__name__} has no fields.") - - first_field = getattr(self, class_fields[0].name) - other_fields_are_none = all(getattr(self, field.name) is None for field in class_fields[1:]) - - if other_fields_are_none and isinstance(first_field, dict): - for key, value in first_field.items(): - self[key] = value - else: - for field in class_fields: - v = getattr(self, field.name) - if v is not None: - self[field.name] = v - - def __delitem__(self, *args, **kwargs): - raise Exception(f"You cannot use ``__delitem__`` on a {self.__class__.__name__} instance.") - - def setdefault(self, *args, **kwargs): - raise Exception(f"You cannot use ``setdefault`` on a {self.__class__.__name__} instance.") - - def pop(self, *args, **kwargs): - raise Exception(f"You cannot use ``pop`` on a {self.__class__.__name__} instance.") - - def update(self, *args, **kwargs): - raise Exception(f"You cannot use ``update`` on a {self.__class__.__name__} instance.") - - def __getitem__(self, k): - if isinstance(k, str): - inner_dict = dict(self.items()) - return inner_dict[k] - else: - return self.to_tuple()[k] - - def __setattr__(self, name, value): - if name in self.keys() and value is not None: - # Don't call self.__setitem__ to avoid recursion errors - super().__setitem__(name, value) - super().__setattr__(name, value) - - def __setitem__(self, key, value): - # Will raise a KeyException if needed - super().__setitem__(key, value) - # Don't call self.__setattr__ to avoid recursion errors - super().__setattr__(key, value) - - def to_tuple(self) -> Tuple[Any]: - """ - Convert self to a tuple containing all the attributes/keys that are not `None`. - """ - return tuple(self[k] for k in self.keys()) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/fcos/fcos_r50_caffe_fpn_gn-head_4x4_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/fcos/fcos_r50_caffe_fpn_gn-head_4x4_1x_coco.py deleted file mode 100644 index 2816b16f64dbcbfecd779650aaae0ca6cee0d810..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/fcos/fcos_r50_caffe_fpn_gn-head_4x4_1x_coco.py +++ /dev/null @@ -1,4 +0,0 @@ -# TODO: Remove this config after benchmarking all related configs -_base_ = 'fcos_r50_caffe_fpn_gn-head_1x_coco.py' - -data = dict(samples_per_gpu=4, workers_per_gpu=4) diff --git a/spaces/AquaSuisei/ChatGPTXE/chatgpt - macOS.command b/spaces/AquaSuisei/ChatGPTXE/chatgpt - macOS.command deleted file mode 100644 index fa015edca9e6916f24394813ce8ba77d2072e296..0000000000000000000000000000000000000000 --- a/spaces/AquaSuisei/ChatGPTXE/chatgpt - macOS.command +++ /dev/null @@ -1,7 +0,0 @@ -#!/bin/bash -echo Opening ChuanhuChatGPT... 
-cd "$(dirname "${BASH_SOURCE[0]}")" -nohup python3 ChuanhuChatbot.py >/dev/null 2>&1 & -sleep 5 -open http://127.0.0.1:7860 -echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). If you kill ChuanhuChatbot, Use "pkill -f 'ChuanhuChatbot'" command in terminal. \ No newline at end of file diff --git a/spaces/ArcanAlt/arcanDream/Dockerfile b/spaces/ArcanAlt/arcanDream/Dockerfile deleted file mode 100644 index 6953fc05439efb70991552cf56f28365b5b6c15b..0000000000000000000000000000000000000000 --- a/spaces/ArcanAlt/arcanDream/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18 - -WORKDIR /app - -RUN npm install express express-http-proxy - -COPY . . - -EXPOSE 7860 - -CMD [ "node", "server.js" ] \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/zoneinfo/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/zoneinfo/__init__.py deleted file mode 100644 index 34f11ad66c88047f2c049a4cdcc937b4b78ea6d6..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/zoneinfo/__init__.py +++ /dev/null @@ -1,167 +0,0 @@ -# -*- coding: utf-8 -*- -import warnings -import json - -from tarfile import TarFile -from pkgutil import get_data -from io import BytesIO - -from dateutil.tz import tzfile as _tzfile - -__all__ = ["get_zonefile_instance", "gettz", "gettz_db_metadata"] - -ZONEFILENAME = "dateutil-zoneinfo.tar.gz" -METADATA_FN = 'METADATA' - - -class tzfile(_tzfile): - def __reduce__(self): - return (gettz, (self._filename,)) - - -def getzoneinfofile_stream(): - try: - return BytesIO(get_data(__name__, ZONEFILENAME)) - except IOError as e: # TODO switch to FileNotFoundError? - warnings.warn("I/O error({0}): {1}".format(e.errno, e.strerror)) - return None - - -class ZoneInfoFile(object): - def __init__(self, zonefile_stream=None): - if zonefile_stream is not None: - with TarFile.open(fileobj=zonefile_stream) as tf: - self.zones = {zf.name: tzfile(tf.extractfile(zf), filename=zf.name) - for zf in tf.getmembers() - if zf.isfile() and zf.name != METADATA_FN} - # deal with links: They'll point to their parent object. Less - # waste of memory - links = {zl.name: self.zones[zl.linkname] - for zl in tf.getmembers() if - zl.islnk() or zl.issym()} - self.zones.update(links) - try: - metadata_json = tf.extractfile(tf.getmember(METADATA_FN)) - metadata_str = metadata_json.read().decode('UTF-8') - self.metadata = json.loads(metadata_str) - except KeyError: - # no metadata in tar file - self.metadata = None - else: - self.zones = {} - self.metadata = None - - def get(self, name, default=None): - """ - Wrapper for :func:`ZoneInfoFile.zones.get`. This is a convenience method - for retrieving zones from the zone dictionary. - - :param name: - The name of the zone to retrieve. (Generally IANA zone names) - - :param default: - The value to return in the event of a missing key. - - .. versionadded:: 2.6.0 - - """ - return self.zones.get(name, default) - - -# The current API has gettz as a module function, although in fact it taps into -# a stateful class. So as a workaround for now, without changing the API, we -# will create a new "global" class instance the first time a user requests a -# timezone. Ugly, but adheres to the api. -# -# TODO: Remove after deprecation period. -_CLASS_ZONE_INSTANCE = [] - - -def get_zonefile_instance(new_instance=False): - """ - This is a convenience function which provides a :class:`ZoneInfoFile` - instance using the data provided by the ``dateutil`` package. 
By default, it - caches a single instance of the ZoneInfoFile object and returns that. - - :param new_instance: - If ``True``, a new instance of :class:`ZoneInfoFile` is instantiated and - used as the cached instance for the next call. Otherwise, new instances - are created only as necessary. - - :return: - Returns a :class:`ZoneInfoFile` object. - - .. versionadded:: 2.6 - """ - if new_instance: - zif = None - else: - zif = getattr(get_zonefile_instance, '_cached_instance', None) - - if zif is None: - zif = ZoneInfoFile(getzoneinfofile_stream()) - - get_zonefile_instance._cached_instance = zif - - return zif - - -def gettz(name): - """ - This retrieves a time zone from the local zoneinfo tarball that is packaged - with dateutil. - - :param name: - An IANA-style time zone name, as found in the zoneinfo file. - - :return: - Returns a :class:`dateutil.tz.tzfile` time zone object. - - .. warning:: - It is generally inadvisable to use this function, and it is only - provided for API compatibility with earlier versions. This is *not* - equivalent to ``dateutil.tz.gettz()``, which selects an appropriate - time zone based on the inputs, favoring system zoneinfo. This is ONLY - for accessing the dateutil-specific zoneinfo (which may be out of - date compared to the system zoneinfo). - - .. deprecated:: 2.6 - If you need to use a specific zoneinfofile over the system zoneinfo, - instantiate a :class:`dateutil.zoneinfo.ZoneInfoFile` object and call - :func:`dateutil.zoneinfo.ZoneInfoFile.get(name)` instead. - - Use :func:`get_zonefile_instance` to retrieve an instance of the - dateutil-provided zoneinfo. - """ - warnings.warn("zoneinfo.gettz() will be removed in future versions, " - "to use the dateutil-provided zoneinfo files, instantiate a " - "ZoneInfoFile object and use ZoneInfoFile.zones.get() " - "instead. See the documentation for details.", - DeprecationWarning) - - if len(_CLASS_ZONE_INSTANCE) == 0: - _CLASS_ZONE_INSTANCE.append(ZoneInfoFile(getzoneinfofile_stream())) - return _CLASS_ZONE_INSTANCE[0].zones.get(name) - - -def gettz_db_metadata(): - """ Get the zonefile metadata - - See `zonefile_metadata`_ - - :returns: - A dictionary with the database metadata - - .. deprecated:: 2.6 - See deprecation warning in :func:`zoneinfo.gettz`. To get metadata, - query the attribute ``zoneinfo.ZoneInfoFile.metadata``. - """ - warnings.warn("zoneinfo.gettz_db_metadata() will be removed in future " - "versions, to use the dateutil-provided zoneinfo files, " - "ZoneInfoFile object and query the 'metadata' attribute " - "instead. 
See the documentation for details.", - DeprecationWarning) - - if len(_CLASS_ZONE_INSTANCE) == 0: - _CLASS_ZONE_INSTANCE.append(ZoneInfoFile(getzoneinfofile_stream())) - return _CLASS_ZONE_INSTANCE[0].metadata diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/setopt.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/setopt.py deleted file mode 100644 index 6358c0451b2d0036e3821d897fb6f7ab436ee4a9..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/setopt.py +++ /dev/null @@ -1,149 +0,0 @@ -from distutils.util import convert_path -from distutils import log -from distutils.errors import DistutilsOptionError -import distutils -import os -import configparser - -from setuptools import Command - -__all__ = ['config_file', 'edit_config', 'option_base', 'setopt'] - - -def config_file(kind="local"): - """Get the filename of the distutils, local, global, or per-user config - - `kind` must be one of "local", "global", or "user" - """ - if kind == 'local': - return 'setup.cfg' - if kind == 'global': - return os.path.join( - os.path.dirname(distutils.__file__), 'distutils.cfg' - ) - if kind == 'user': - dot = os.name == 'posix' and '.' or '' - return os.path.expanduser(convert_path("~/%spydistutils.cfg" % dot)) - raise ValueError( - "config_file() type must be 'local', 'global', or 'user'", kind - ) - - -def edit_config(filename, settings, dry_run=False): - """Edit a configuration file to include `settings` - - `settings` is a dictionary of dictionaries or ``None`` values, keyed by - command/section name. A ``None`` value means to delete the entire section, - while a dictionary lists settings to be changed or deleted in that section. - A setting of ``None`` means to delete that setting. 
- """ - log.debug("Reading configuration from %s", filename) - opts = configparser.RawConfigParser() - opts.optionxform = lambda x: x - opts.read([filename]) - for section, options in settings.items(): - if options is None: - log.info("Deleting section [%s] from %s", section, filename) - opts.remove_section(section) - else: - if not opts.has_section(section): - log.debug("Adding new section [%s] to %s", section, filename) - opts.add_section(section) - for option, value in options.items(): - if value is None: - log.debug( - "Deleting %s.%s from %s", - section, option, filename - ) - opts.remove_option(section, option) - if not opts.options(section): - log.info("Deleting empty [%s] section from %s", - section, filename) - opts.remove_section(section) - else: - log.debug( - "Setting %s.%s to %r in %s", - section, option, value, filename - ) - opts.set(section, option, value) - - log.info("Writing %s", filename) - if not dry_run: - with open(filename, 'w') as f: - opts.write(f) - - -class option_base(Command): - """Abstract base class for commands that mess with config files""" - - user_options = [ - ('global-config', 'g', - "save options to the site-wide distutils.cfg file"), - ('user-config', 'u', - "save options to the current user's pydistutils.cfg file"), - ('filename=', 'f', - "configuration file to use (default=setup.cfg)"), - ] - - boolean_options = [ - 'global-config', 'user-config', - ] - - def initialize_options(self): - self.global_config = None - self.user_config = None - self.filename = None - - def finalize_options(self): - filenames = [] - if self.global_config: - filenames.append(config_file('global')) - if self.user_config: - filenames.append(config_file('user')) - if self.filename is not None: - filenames.append(self.filename) - if not filenames: - filenames.append(config_file('local')) - if len(filenames) > 1: - raise DistutilsOptionError( - "Must specify only one configuration file option", - filenames - ) - self.filename, = filenames - - -class setopt(option_base): - """Save command-line options to a file""" - - description = "set an option in setup.cfg or another config file" - - user_options = [ - ('command=', 'c', 'command to set an option for'), - ('option=', 'o', 'option to set'), - ('set-value=', 's', 'value of the option'), - ('remove', 'r', 'remove (unset) the value'), - ] + option_base.user_options - - boolean_options = option_base.boolean_options + ['remove'] - - def initialize_options(self): - option_base.initialize_options(self) - self.command = None - self.option = None - self.set_value = None - self.remove = None - - def finalize_options(self): - option_base.finalize_options(self) - if self.command is None or self.option is None: - raise DistutilsOptionError("Must specify --command *and* --option") - if self.set_value is None and not self.remove: - raise DistutilsOptionError("Must specify --set-value or --remove") - - def run(self): - edit_config( - self.filename, { - self.command: {self.option.replace('-', '_'): self.set_value} - }, - self.dry_run - ) diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/reverse.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/reverse.h deleted file mode 100644 index 955825217d0857720bccfe0241704b679f80504f..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/reverse.h +++ /dev/null @@ -1,98 +0,0 @@ -/****************************************************************************** - * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. 
- * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * * Neither the name of the NVIDIA CORPORATION nor the - * names of its contributors may be used to endorse or promote products - * derived from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - ******************************************************************************/ -#pragma once - - -#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC -#include <thrust/system/cuda/config.h> - -namespace thrust -{ -namespace cuda_cub { - -template <class Derived, class ItemsIt, class ResultIt> -ResultIt __host__ __device__ -reverse_copy(execution_policy<Derived> &policy, - ItemsIt first, - ItemsIt last, - ResultIt result); - -template <class Derived, class ItemsIt> -void __host__ __device__ -reverse(execution_policy<Derived> &policy, - ItemsIt first, - ItemsIt last); - -} // namespace cuda_cub -} // end namespace thrust - -#include <thrust/system/cuda/detail/copy.h> -#include <thrust/system/cuda/detail/swap_ranges.h> -#include <thrust/iterator/reverse_iterator.h> -#include <thrust/advance.h> -#include <thrust/distance.h> - -namespace thrust -{ -namespace cuda_cub { - -template <class Derived, class ItemsIt, class ResultIt> -ResultIt __host__ __device__ -reverse_copy(execution_policy<Derived> &policy, - ItemsIt first, - ItemsIt last, - ResultIt result) -{ - return cuda_cub::copy(policy, - make_reverse_iterator(last), - make_reverse_iterator(first), - result); -} - -template <class Derived, class ItemsIt> -void __host__ __device__ -reverse(execution_policy<Derived> &policy, - ItemsIt first, - ItemsIt last) -{ - typedef typename thrust::iterator_difference<ItemsIt>::type difference_type; - - // find the midpoint of [first,last) - difference_type N = thrust::distance(first, last); - ItemsIt mid(first); - thrust::advance(mid, N / 2); - - cuda_cub::swap_ranges(policy, first, mid, make_reverse_iterator(last)); -} - - -} // namespace cuda_cub -} // end namespace thrust -#endif diff --git a/spaces/CVPR/WALT/mmdet/models/detectors/cornernet.py b/spaces/CVPR/WALT/mmdet/models/detectors/cornernet.py deleted file mode 100644 index bb8ccc1465ab66d1615ca16701a533a22b156295..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/detectors/cornernet.py +++ /dev/null @@ -1,95 +0,0 @@ -import torch - -from mmdet.core import bbox2result, bbox_mapping_back -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class CornerNet(SingleStageDetector): - """CornerNet. - - This detector is the implementation of the paper `CornerNet: Detecting - Objects as Paired Keypoints <https://arxiv.org/abs/1808.01244>`_ .
- """ - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(CornerNet, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) - - def merge_aug_results(self, aug_results, img_metas): - """Merge augmented detection bboxes and score. - - Args: - aug_results (list[list[Tensor]]): Det_bboxes and det_labels of each - image. - img_metas (list[list[dict]]): Meta information of each image, e.g., - image size, scaling factor, etc. - - Returns: - tuple: (bboxes, labels) - """ - recovered_bboxes, aug_labels = [], [] - for bboxes_labels, img_info in zip(aug_results, img_metas): - img_shape = img_info[0]['img_shape'] # using shape before padding - scale_factor = img_info[0]['scale_factor'] - flip = img_info[0]['flip'] - bboxes, labels = bboxes_labels - bboxes, scores = bboxes[:, :4], bboxes[:, -1:] - bboxes = bbox_mapping_back(bboxes, img_shape, scale_factor, flip) - recovered_bboxes.append(torch.cat([bboxes, scores], dim=-1)) - aug_labels.append(labels) - - bboxes = torch.cat(recovered_bboxes, dim=0) - labels = torch.cat(aug_labels) - - if bboxes.shape[0] > 0: - out_bboxes, out_labels = self.bbox_head._bboxes_nms( - bboxes, labels, self.bbox_head.test_cfg) - else: - out_bboxes, out_labels = bboxes, labels - - return out_bboxes, out_labels - - def aug_test(self, imgs, img_metas, rescale=False): - """Augment testing of CornerNet. - - Args: - imgs (list[Tensor]): Augmented images. - img_metas (list[list[dict]]): Meta information of each image, e.g., - image size, scaling factor, etc. - rescale (bool): If True, return boxes in original image space. - Default: False. - - Note: - ``imgs`` must including flipped image pairs. - - Returns: - list[list[np.ndarray]]: BBox results of each image and classes. - The outer list corresponds to each image. The inner list - corresponds to each class. - """ - img_inds = list(range(len(imgs))) - - assert img_metas[0][0]['flip'] + img_metas[1][0]['flip'], ( - 'aug test must have flipped image pair') - aug_results = [] - for ind, flip_ind in zip(img_inds[0::2], img_inds[1::2]): - img_pair = torch.cat([imgs[ind], imgs[flip_ind]]) - x = self.extract_feat(img_pair) - outs = self.bbox_head(x) - bbox_list = self.bbox_head.get_bboxes( - *outs, [img_metas[ind], img_metas[flip_ind]], False, False) - aug_results.append(bbox_list[0]) - aug_results.append(bbox_list[1]) - - bboxes, labels = self.merge_aug_results(aug_results, img_metas) - bbox_results = bbox2result(bboxes, labels, self.bbox_head.num_classes) - - return [bbox_results] diff --git a/spaces/CVPR/WALT/mmdet/models/losses/iou_loss.py b/spaces/CVPR/WALT/mmdet/models/losses/iou_loss.py deleted file mode 100644 index eba6f18b80981ca891c1add37007e6bf478c651f..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/losses/iou_loss.py +++ /dev/null @@ -1,436 +0,0 @@ -import math - -import mmcv -import torch -import torch.nn as nn - -from mmdet.core import bbox_overlaps -from ..builder import LOSSES -from .utils import weighted_loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def iou_loss(pred, target, linear=False, eps=1e-6): - """IoU loss. - - Computing the IoU loss between a set of predicted bboxes and target bboxes. - The loss is calculated as negative log of IoU. - - Args: - pred (torch.Tensor): Predicted bboxes of format (x1, y1, x2, y2), - shape (n, 4). - target (torch.Tensor): Corresponding gt bboxes, shape (n, 4). - linear (bool, optional): If True, use linear scale of loss instead of - log scale. 
Default: False. - eps (float): Eps to avoid log(0). - - Return: - torch.Tensor: Loss tensor. - """ - ious = bbox_overlaps(pred, target, is_aligned=True).clamp(min=eps) - if linear: - loss = 1 - ious - else: - loss = -ious.log() - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def bounded_iou_loss(pred, target, beta=0.2, eps=1e-3): - """BIoULoss. - - This is an implementation of paper - `Improving Object Localization with Fitness NMS and Bounded IoU Loss. - <https://arxiv.org/abs/1711.00164>`_. - - Args: - pred (torch.Tensor): Predicted bboxes. - target (torch.Tensor): Target bboxes. - beta (float): beta parameter in smoothl1. - eps (float): eps to avoid NaN. - """ - pred_ctrx = (pred[:, 0] + pred[:, 2]) * 0.5 - pred_ctry = (pred[:, 1] + pred[:, 3]) * 0.5 - pred_w = pred[:, 2] - pred[:, 0] - pred_h = pred[:, 3] - pred[:, 1] - with torch.no_grad(): - target_ctrx = (target[:, 0] + target[:, 2]) * 0.5 - target_ctry = (target[:, 1] + target[:, 3]) * 0.5 - target_w = target[:, 2] - target[:, 0] - target_h = target[:, 3] - target[:, 1] - - dx = target_ctrx - pred_ctrx - dy = target_ctry - pred_ctry - - loss_dx = 1 - torch.max( - (target_w - 2 * dx.abs()) / - (target_w + 2 * dx.abs() + eps), torch.zeros_like(dx)) - loss_dy = 1 - torch.max( - (target_h - 2 * dy.abs()) / - (target_h + 2 * dy.abs() + eps), torch.zeros_like(dy)) - loss_dw = 1 - torch.min(target_w / (pred_w + eps), pred_w / - (target_w + eps)) - loss_dh = 1 - torch.min(target_h / (pred_h + eps), pred_h / - (target_h + eps)) - loss_comb = torch.stack([loss_dx, loss_dy, loss_dw, loss_dh], - dim=-1).view(loss_dx.size(0), -1) - - loss = torch.where(loss_comb < beta, 0.5 * loss_comb * loss_comb / beta, - loss_comb - 0.5 * beta) - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def giou_loss(pred, target, eps=1e-7): - r"""`Generalized Intersection over Union: A Metric and A Loss for Bounding - Box Regression <https://arxiv.org/abs/1902.09630>`_. - - Args: - pred (torch.Tensor): Predicted bboxes of format (x1, y1, x2, y2), - shape (n, 4). - target (torch.Tensor): Corresponding gt bboxes, shape (n, 4). - eps (float): Eps to avoid log(0). - - Return: - Tensor: Loss tensor. - """ - gious = bbox_overlaps(pred, target, mode='giou', is_aligned=True, eps=eps) - loss = 1 - gious - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def diou_loss(pred, target, eps=1e-7): - r"""`Implementation of Distance-IoU Loss: Faster and Better - Learning for Bounding Box Regression, https://arxiv.org/abs/1911.08287`_. - - Code is modified from https://github.com/Zzh-tju/DIoU. - - Args: - pred (Tensor): Predicted bboxes of format (x1, y1, x2, y2), - shape (n, 4). - target (Tensor): Corresponding gt bboxes, shape (n, 4). - eps (float): Eps to avoid log(0). - Return: - Tensor: Loss tensor.
- """ - # overlap - lt = torch.max(pred[:, :2], target[:, :2]) - rb = torch.min(pred[:, 2:], target[:, 2:]) - wh = (rb - lt).clamp(min=0) - overlap = wh[:, 0] * wh[:, 1] - - # union - ap = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1]) - ag = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1]) - union = ap + ag - overlap + eps - - # IoU - ious = overlap / union - - # enclose area - enclose_x1y1 = torch.min(pred[:, :2], target[:, :2]) - enclose_x2y2 = torch.max(pred[:, 2:], target[:, 2:]) - enclose_wh = (enclose_x2y2 - enclose_x1y1).clamp(min=0) - - cw = enclose_wh[:, 0] - ch = enclose_wh[:, 1] - - c2 = cw**2 + ch**2 + eps - - b1_x1, b1_y1 = pred[:, 0], pred[:, 1] - b1_x2, b1_y2 = pred[:, 2], pred[:, 3] - b2_x1, b2_y1 = target[:, 0], target[:, 1] - b2_x2, b2_y2 = target[:, 2], target[:, 3] - - left = ((b2_x1 + b2_x2) - (b1_x1 + b1_x2))**2 / 4 - right = ((b2_y1 + b2_y2) - (b1_y1 + b1_y2))**2 / 4 - rho2 = left + right - - # DIoU - dious = ious - rho2 / c2 - loss = 1 - dious - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def ciou_loss(pred, target, eps=1e-7): - r"""`Implementation of paper `Enhancing Geometric Factors into - Model Learning and Inference for Object Detection and Instance - Segmentation `_. - - Code is modified from https://github.com/Zzh-tju/CIoU. - - Args: - pred (Tensor): Predicted bboxes of format (x1, y1, x2, y2), - shape (n, 4). - target (Tensor): Corresponding gt bboxes, shape (n, 4). - eps (float): Eps to avoid log(0). - Return: - Tensor: Loss tensor. - """ - # overlap - lt = torch.max(pred[:, :2], target[:, :2]) - rb = torch.min(pred[:, 2:], target[:, 2:]) - wh = (rb - lt).clamp(min=0) - overlap = wh[:, 0] * wh[:, 1] - - # union - ap = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1]) - ag = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1]) - union = ap + ag - overlap + eps - - # IoU - ious = overlap / union - - # enclose area - enclose_x1y1 = torch.min(pred[:, :2], target[:, :2]) - enclose_x2y2 = torch.max(pred[:, 2:], target[:, 2:]) - enclose_wh = (enclose_x2y2 - enclose_x1y1).clamp(min=0) - - cw = enclose_wh[:, 0] - ch = enclose_wh[:, 1] - - c2 = cw**2 + ch**2 + eps - - b1_x1, b1_y1 = pred[:, 0], pred[:, 1] - b1_x2, b1_y2 = pred[:, 2], pred[:, 3] - b2_x1, b2_y1 = target[:, 0], target[:, 1] - b2_x2, b2_y2 = target[:, 2], target[:, 3] - - w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps - w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps - - left = ((b2_x1 + b2_x2) - (b1_x1 + b1_x2))**2 / 4 - right = ((b2_y1 + b2_y2) - (b1_y1 + b1_y2))**2 / 4 - rho2 = left + right - - factor = 4 / math.pi**2 - v = factor * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2) - - # CIoU - cious = ious - (rho2 / c2 + v**2 / (1 - ious + v)) - loss = 1 - cious - return loss - - -@LOSSES.register_module() -class IoULoss(nn.Module): - """IoULoss. - - Computing the IoU loss between a set of predicted bboxes and target bboxes. - - Args: - linear (bool): If True, use linear scale of loss instead of log scale. - Default: False. - eps (float): Eps to avoid log(0). - reduction (str): Options are "none", "mean" and "sum". - loss_weight (float): Weight of loss. - """ - - def __init__(self, - linear=False, - eps=1e-6, - reduction='mean', - loss_weight=1.0): - super(IoULoss, self).__init__() - self.linear = linear - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function. 
- - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. Options are "none", "mean" and "sum". - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if (weight is not None) and (not torch.any(weight > 0)) and ( - reduction != 'none'): - return (pred * weight).sum() # 0 - if weight is not None and weight.dim() > 1: - # TODO: remove this in the future - # reduce the weight of shape (n, 4) to (n,) to match the - # iou_loss of shape (n,) - assert weight.shape == pred.shape - weight = weight.mean(-1) - loss = self.loss_weight * iou_loss( - pred, - target, - weight, - linear=self.linear, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss - - -@LOSSES.register_module() -class BoundedIoULoss(nn.Module): - - def __init__(self, beta=0.2, eps=1e-3, reduction='mean', loss_weight=1.0): - super(BoundedIoULoss, self).__init__() - self.beta = beta - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - if weight is not None and not torch.any(weight > 0): - return (pred * weight).sum() # 0 - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss = self.loss_weight * bounded_iou_loss( - pred, - target, - weight, - beta=self.beta, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss - - -@LOSSES.register_module() -class GIoULoss(nn.Module): - - def __init__(self, eps=1e-6, reduction='mean', loss_weight=1.0): - super(GIoULoss, self).__init__() - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - if weight is not None and not torch.any(weight > 0): - return (pred * weight).sum() # 0 - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if weight is not None and weight.dim() > 1: - # TODO: remove this in the future - # reduce the weight of shape (n, 4) to (n,) to match the - # giou_loss of shape (n,) - assert weight.shape == pred.shape - weight = weight.mean(-1) - loss = self.loss_weight * giou_loss( - pred, - target, - weight, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss - - -@LOSSES.register_module() -class DIoULoss(nn.Module): - - def __init__(self, eps=1e-6, reduction='mean', loss_weight=1.0): - super(DIoULoss, self).__init__() - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - if weight is not None and not torch.any(weight > 0): - return (pred * weight).sum() # 0 - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if weight is 
not None and weight.dim() > 1: - # TODO: remove this in the future - # reduce the weight of shape (n, 4) to (n,) to match the - # giou_loss of shape (n,) - assert weight.shape == pred.shape - weight = weight.mean(-1) - loss = self.loss_weight * diou_loss( - pred, - target, - weight, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss - - -@LOSSES.register_module() -class CIoULoss(nn.Module): - - def __init__(self, eps=1e-6, reduction='mean', loss_weight=1.0): - super(CIoULoss, self).__init__() - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - if weight is not None and not torch.any(weight > 0): - return (pred * weight).sum() # 0 - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if weight is not None and weight.dim() > 1: - # TODO: remove this in the future - # reduce the weight of shape (n, 4) to (n,) to match the - # giou_loss of shape (n,) - assert weight.shape == pred.shape - weight = weight.mean(-1) - loss = self.loss_weight * ciou_loss( - pred, - target, - weight, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss diff --git a/spaces/CVPR/WALT/mmdet/models/roi_heads/cascade_roi_head.py b/spaces/CVPR/WALT/mmdet/models/roi_heads/cascade_roi_head.py deleted file mode 100644 index 45b6f36a386cd37c50cc43666fcc516f2e14d868..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/roi_heads/cascade_roi_head.py +++ /dev/null @@ -1,507 +0,0 @@ -import torch -import torch.nn as nn - -from mmdet.core import (bbox2result, bbox2roi, bbox_mapping, build_assigner, - build_sampler, merge_aug_bboxes, merge_aug_masks, - multiclass_nms) -from ..builder import HEADS, build_head, build_roi_extractor -from .base_roi_head import BaseRoIHead -from .test_mixins import BBoxTestMixin, MaskTestMixin - - -@HEADS.register_module() -class CascadeRoIHead(BaseRoIHead, BBoxTestMixin, MaskTestMixin): - """Cascade roi head including one bbox head and one mask head. - - https://arxiv.org/abs/1712.00726 - """ - - def __init__(self, - num_stages, - stage_loss_weights, - bbox_roi_extractor=None, - bbox_head=None, - mask_roi_extractor=None, - mask_head=None, - shared_head=None, - train_cfg=None, - test_cfg=None): - assert bbox_roi_extractor is not None - assert bbox_head is not None - assert shared_head is None, \ - 'Shared head is not supported in Cascade RCNN anymore' - self.num_stages = num_stages - self.stage_loss_weights = stage_loss_weights - super(CascadeRoIHead, self).__init__( - bbox_roi_extractor=bbox_roi_extractor, - bbox_head=bbox_head, - mask_roi_extractor=mask_roi_extractor, - mask_head=mask_head, - shared_head=shared_head, - train_cfg=train_cfg, - test_cfg=test_cfg) - - def init_bbox_head(self, bbox_roi_extractor, bbox_head): - """Initialize box head and box roi extractor. - - Args: - bbox_roi_extractor (dict): Config of box roi extractor. - bbox_head (dict): Config of box in box head. 
- """ - self.bbox_roi_extractor = nn.ModuleList() - self.bbox_head = nn.ModuleList() - if not isinstance(bbox_roi_extractor, list): - bbox_roi_extractor = [ - bbox_roi_extractor for _ in range(self.num_stages) - ] - if not isinstance(bbox_head, list): - bbox_head = [bbox_head for _ in range(self.num_stages)] - assert len(bbox_roi_extractor) == len(bbox_head) == self.num_stages - for roi_extractor, head in zip(bbox_roi_extractor, bbox_head): - self.bbox_roi_extractor.append(build_roi_extractor(roi_extractor)) - self.bbox_head.append(build_head(head)) - - def init_mask_head(self, mask_roi_extractor, mask_head): - """Initialize mask head and mask roi extractor. - - Args: - mask_roi_extractor (dict): Config of mask roi extractor. - mask_head (dict): Config of mask in mask head. - """ - self.mask_head = nn.ModuleList() - if not isinstance(mask_head, list): - mask_head = [mask_head for _ in range(self.num_stages)] - assert len(mask_head) == self.num_stages - for head in mask_head: - self.mask_head.append(build_head(head)) - if mask_roi_extractor is not None: - self.share_roi_extractor = False - self.mask_roi_extractor = nn.ModuleList() - if not isinstance(mask_roi_extractor, list): - mask_roi_extractor = [ - mask_roi_extractor for _ in range(self.num_stages) - ] - assert len(mask_roi_extractor) == self.num_stages - for roi_extractor in mask_roi_extractor: - self.mask_roi_extractor.append( - build_roi_extractor(roi_extractor)) - else: - self.share_roi_extractor = True - self.mask_roi_extractor = self.bbox_roi_extractor - - def init_assigner_sampler(self): - """Initialize assigner and sampler for each stage.""" - self.bbox_assigner = [] - self.bbox_sampler = [] - if self.train_cfg is not None: - for idx, rcnn_train_cfg in enumerate(self.train_cfg): - self.bbox_assigner.append( - build_assigner(rcnn_train_cfg.assigner)) - self.current_stage = idx - self.bbox_sampler.append( - build_sampler(rcnn_train_cfg.sampler, context=self)) - - def init_weights(self, pretrained): - """Initialize the weights in head. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. 
- """ - if self.with_shared_head: - self.shared_head.init_weights(pretrained=pretrained) - for i in range(self.num_stages): - if self.with_bbox: - self.bbox_roi_extractor[i].init_weights() - self.bbox_head[i].init_weights() - if self.with_mask: - if not self.share_roi_extractor: - self.mask_roi_extractor[i].init_weights() - self.mask_head[i].init_weights() - - def forward_dummy(self, x, proposals): - """Dummy forward function.""" - # bbox head - outs = () - rois = bbox2roi([proposals]) - if self.with_bbox: - for i in range(self.num_stages): - bbox_results = self._bbox_forward(i, x, rois) - outs = outs + (bbox_results['cls_score'], - bbox_results['bbox_pred']) - # mask heads - if self.with_mask: - mask_rois = rois[:100] - for i in range(self.num_stages): - mask_results = self._mask_forward(i, x, mask_rois) - outs = outs + (mask_results['mask_pred'], ) - return outs - - def _bbox_forward(self, stage, x, rois): - """Box head forward function used in both training and testing.""" - bbox_roi_extractor = self.bbox_roi_extractor[stage] - bbox_head = self.bbox_head[stage] - bbox_feats = bbox_roi_extractor(x[:bbox_roi_extractor.num_inputs], - rois) - # do not support caffe_c4 model anymore - cls_score, bbox_pred = bbox_head(bbox_feats) - - bbox_results = dict( - cls_score=cls_score, bbox_pred=bbox_pred, bbox_feats=bbox_feats) - return bbox_results - - def _bbox_forward_train(self, stage, x, sampling_results, gt_bboxes, - gt_labels, rcnn_train_cfg): - """Run forward function and calculate loss for box head in training.""" - rois = bbox2roi([res.bboxes for res in sampling_results]) - bbox_results = self._bbox_forward(stage, x, rois) - bbox_targets = self.bbox_head[stage].get_targets( - sampling_results, gt_bboxes, gt_labels, rcnn_train_cfg) - loss_bbox = self.bbox_head[stage].loss(bbox_results['cls_score'], - bbox_results['bbox_pred'], rois, - *bbox_targets) - - bbox_results.update( - loss_bbox=loss_bbox, rois=rois, bbox_targets=bbox_targets) - return bbox_results - - def _mask_forward(self, stage, x, rois): - """Mask head forward function used in both training and testing.""" - mask_roi_extractor = self.mask_roi_extractor[stage] - mask_head = self.mask_head[stage] - mask_feats = mask_roi_extractor(x[:mask_roi_extractor.num_inputs], - rois) - # do not support caffe_c4 model anymore - mask_pred = mask_head(mask_feats) - - mask_results = dict(mask_pred=mask_pred) - return mask_results - - def _mask_forward_train(self, - stage, - x, - sampling_results, - gt_masks, - rcnn_train_cfg, - bbox_feats=None): - """Run forward function and calculate loss for mask head in - training.""" - pos_rois = bbox2roi([res.pos_bboxes for res in sampling_results]) - mask_results = self._mask_forward(stage, x, pos_rois) - - mask_targets = self.mask_head[stage].get_targets( - sampling_results, gt_masks, rcnn_train_cfg) - pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results]) - loss_mask = self.mask_head[stage].loss(mask_results['mask_pred'], - mask_targets, pos_labels) - - mask_results.update(loss_mask=loss_mask) - return mask_results - - def forward_train(self, - x, - img_metas, - proposal_list, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None): - """ - Args: - x (list[Tensor]): list of multi-level img features. - img_metas (list[dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. 
- For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - proposals (list[Tensors]): list of region proposals. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - gt_masks (None | Tensor) : true segmentation masks for each box - used if the architecture supports a segmentation task. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - losses = dict() - for i in range(self.num_stages): - self.current_stage = i - rcnn_train_cfg = self.train_cfg[i] - lw = self.stage_loss_weights[i] - - # assign gts and sample proposals - sampling_results = [] - if self.with_bbox or self.with_mask: - bbox_assigner = self.bbox_assigner[i] - bbox_sampler = self.bbox_sampler[i] - num_imgs = len(img_metas) - if gt_bboxes_ignore is None: - gt_bboxes_ignore = [None for _ in range(num_imgs)] - - for j in range(num_imgs): - assign_result = bbox_assigner.assign( - proposal_list[j], gt_bboxes[j], gt_bboxes_ignore[j], - gt_labels[j]) - sampling_result = bbox_sampler.sample( - assign_result, - proposal_list[j], - gt_bboxes[j], - gt_labels[j], - feats=[lvl_feat[j][None] for lvl_feat in x]) - sampling_results.append(sampling_result) - - # bbox head forward and loss - bbox_results = self._bbox_forward_train(i, x, sampling_results, - gt_bboxes, gt_labels, - rcnn_train_cfg) - - for name, value in bbox_results['loss_bbox'].items(): - losses[f's{i}.{name}'] = ( - value * lw if 'loss' in name else value) - - # mask head forward and loss - if self.with_mask: - mask_results = self._mask_forward_train( - i, x, sampling_results, gt_masks, rcnn_train_cfg, - bbox_results['bbox_feats']) - for name, value in mask_results['loss_mask'].items(): - losses[f's{i}.{name}'] = ( - value * lw if 'loss' in name else value) - - # refine bboxes - if i < self.num_stages - 1: - pos_is_gts = [res.pos_is_gt for res in sampling_results] - # bbox_targets is a tuple - roi_labels = bbox_results['bbox_targets'][0] - with torch.no_grad(): - roi_labels = torch.where( - roi_labels == self.bbox_head[i].num_classes, - bbox_results['cls_score'][:, :-1].argmax(1), - roi_labels) - proposal_list = self.bbox_head[i].refine_bboxes( - bbox_results['rois'], roi_labels, - bbox_results['bbox_pred'], pos_is_gts, img_metas) - - return losses - - def simple_test(self, x, proposal_list, img_metas, rescale=False): - """Test without augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' 
- num_imgs = len(proposal_list) - img_shapes = tuple(meta['img_shape'] for meta in img_metas) - ori_shapes = tuple(meta['ori_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - # "ms" in variable names means multi-stage - ms_bbox_result = {} - ms_segm_result = {} - ms_scores = [] - rcnn_test_cfg = self.test_cfg - - rois = bbox2roi(proposal_list) - for i in range(self.num_stages): - bbox_results = self._bbox_forward(i, x, rois) - - # split batch bbox prediction back to each image - cls_score = bbox_results['cls_score'] - bbox_pred = bbox_results['bbox_pred'] - num_proposals_per_img = tuple( - len(proposals) for proposals in proposal_list) - rois = rois.split(num_proposals_per_img, 0) - cls_score = cls_score.split(num_proposals_per_img, 0) - if isinstance(bbox_pred, torch.Tensor): - bbox_pred = bbox_pred.split(num_proposals_per_img, 0) - else: - bbox_pred = self.bbox_head[i].bbox_pred_split( - bbox_pred, num_proposals_per_img) - ms_scores.append(cls_score) - - if i < self.num_stages - 1: - bbox_label = [s[:, :-1].argmax(dim=1) for s in cls_score] - rois = torch.cat([ - self.bbox_head[i].regress_by_class(rois[j], bbox_label[j], - bbox_pred[j], - img_metas[j]) - for j in range(num_imgs) - ]) - - # average scores of each image by stages - cls_score = [ - sum([score[i] for score in ms_scores]) / float(len(ms_scores)) - for i in range(num_imgs) - ] - - # apply bbox post-processing to each image individually - det_bboxes = [] - det_labels = [] - for i in range(num_imgs): - det_bbox, det_label = self.bbox_head[-1].get_bboxes( - rois[i], - cls_score[i], - bbox_pred[i], - img_shapes[i], - scale_factors[i], - rescale=rescale, - cfg=rcnn_test_cfg) - det_bboxes.append(det_bbox) - det_labels.append(det_label) - - if torch.onnx.is_in_onnx_export(): - return det_bboxes, det_labels - bbox_results = [ - bbox2result(det_bboxes[i], det_labels[i], - self.bbox_head[-1].num_classes) - for i in range(num_imgs) - ] - ms_bbox_result['ensemble'] = bbox_results - - if self.with_mask: - if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes): - mask_classes = self.mask_head[-1].num_classes - segm_results = [[[] for _ in range(mask_classes)] - for _ in range(num_imgs)] - else: - if rescale and not isinstance(scale_factors[0], float): - scale_factors = [ - torch.from_numpy(scale_factor).to(det_bboxes[0].device) - for scale_factor in scale_factors - ] - _bboxes = [ - det_bboxes[i][:, :4] * - scale_factors[i] if rescale else det_bboxes[i][:, :4] - for i in range(len(det_bboxes)) - ] - mask_rois = bbox2roi(_bboxes) - num_mask_rois_per_img = tuple( - _bbox.size(0) for _bbox in _bboxes) - aug_masks = [] - for i in range(self.num_stages): - mask_results = self._mask_forward(i, x, mask_rois) - mask_pred = mask_results['mask_pred'] - # split batch mask prediction back to each image - mask_pred = mask_pred.split(num_mask_rois_per_img, 0) - aug_masks.append( - [m.sigmoid().cpu().numpy() for m in mask_pred]) - - # apply mask post-processing to each image individually - segm_results = [] - for i in range(num_imgs): - if det_bboxes[i].shape[0] == 0: - segm_results.append( - [[] - for _ in range(self.mask_head[-1].num_classes)]) - else: - aug_mask = [mask[i] for mask in aug_masks] - merged_masks = merge_aug_masks( - aug_mask, [[img_metas[i]]] * self.num_stages, - rcnn_test_cfg) - segm_result = self.mask_head[-1].get_seg_masks( - merged_masks, _bboxes[i], det_labels[i], - rcnn_test_cfg, ori_shapes[i], scale_factors[i], - rescale) - segm_results.append(segm_result) - 
ms_segm_result['ensemble'] = segm_results - - if self.with_mask: - results = list( - zip(ms_bbox_result['ensemble'], ms_segm_result['ensemble'])) - else: - results = ms_bbox_result['ensemble'] - - return results - - def aug_test(self, features, proposal_list, img_metas, rescale=False): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. - """ - rcnn_test_cfg = self.test_cfg - aug_bboxes = [] - aug_scores = [] - for x, img_meta in zip(features, img_metas): - # only one image in the batch - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - - proposals = bbox_mapping(proposal_list[0][:, :4], img_shape, - scale_factor, flip, flip_direction) - # "ms" in variable names means multi-stage - ms_scores = [] - - rois = bbox2roi([proposals]) - for i in range(self.num_stages): - bbox_results = self._bbox_forward(i, x, rois) - ms_scores.append(bbox_results['cls_score']) - - if i < self.num_stages - 1: - bbox_label = bbox_results['cls_score'][:, :-1].argmax( - dim=1) - rois = self.bbox_head[i].regress_by_class( - rois, bbox_label, bbox_results['bbox_pred'], - img_meta[0]) - - cls_score = sum(ms_scores) / float(len(ms_scores)) - bboxes, scores = self.bbox_head[-1].get_bboxes( - rois, - cls_score, - bbox_results['bbox_pred'], - img_shape, - scale_factor, - rescale=False, - cfg=None) - aug_bboxes.append(bboxes) - aug_scores.append(scores) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes, merged_scores = merge_aug_bboxes( - aug_bboxes, aug_scores, img_metas, rcnn_test_cfg) - det_bboxes, det_labels = multiclass_nms(merged_bboxes, merged_scores, - rcnn_test_cfg.score_thr, - rcnn_test_cfg.nms, - rcnn_test_cfg.max_per_img) - - bbox_result = bbox2result(det_bboxes, det_labels, - self.bbox_head[-1].num_classes) - - if self.with_mask: - if det_bboxes.shape[0] == 0: - segm_result = [[[] - for _ in range(self.mask_head[-1].num_classes)] - ] - else: - aug_masks = [] - aug_img_metas = [] - for x, img_meta in zip(features, img_metas): - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - _bboxes = bbox_mapping(det_bboxes[:, :4], img_shape, - scale_factor, flip, flip_direction) - mask_rois = bbox2roi([_bboxes]) - for i in range(self.num_stages): - mask_results = self._mask_forward(i, x, mask_rois) - aug_masks.append( - mask_results['mask_pred'].sigmoid().cpu().numpy()) - aug_img_metas.append(img_meta) - merged_masks = merge_aug_masks(aug_masks, aug_img_metas, - self.test_cfg) - - ori_shape = img_metas[0][0]['ori_shape'] - segm_result = self.mask_head[-1].get_seg_masks( - merged_masks, - det_bboxes, - det_labels, - rcnn_test_cfg, - ori_shape, - scale_factor=1.0, - rescale=False) - return [(bbox_result, segm_result)] - else: - return [bbox_result] diff --git a/spaces/CVPR/WALT/mmdet/models/roi_heads/test_mixins.py b/spaces/CVPR/WALT/mmdet/models/roi_heads/test_mixins.py deleted file mode 100644 index c28ed61deb946f0ffca70733fb7ddf84d1aec885..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/roi_heads/test_mixins.py +++ /dev/null @@ -1,368 +0,0 @@ -import logging -import sys - -import torch - -from mmdet.core import (bbox2roi, bbox_mapping, merge_aug_bboxes, - merge_aug_masks, multiclass_nms) - -logger = logging.getLogger(__name__) - -if sys.version_info >= (3, 7): - from 
mmdet.utils.contextmanagers import completed - - -class BBoxTestMixin(object): - - if sys.version_info >= (3, 7): - - async def async_test_bboxes(self, - x, - img_metas, - proposals, - rcnn_test_cfg, - rescale=False, - bbox_semaphore=None, - global_lock=None): - """Asynchronized test for box head without augmentation.""" - rois = bbox2roi(proposals) - roi_feats = self.bbox_roi_extractor( - x[:len(self.bbox_roi_extractor.featmap_strides)], rois) - if self.with_shared_head: - roi_feats = self.shared_head(roi_feats) - sleep_interval = rcnn_test_cfg.get('async_sleep_interval', 0.017) - - async with completed( - __name__, 'bbox_head_forward', - sleep_interval=sleep_interval): - cls_score, bbox_pred = self.bbox_head(roi_feats) - - img_shape = img_metas[0]['img_shape'] - scale_factor = img_metas[0]['scale_factor'] - det_bboxes, det_labels = self.bbox_head.get_bboxes( - rois, - cls_score, - bbox_pred, - img_shape, - scale_factor, - rescale=rescale, - cfg=rcnn_test_cfg) - return det_bboxes, det_labels - - def simple_test_bboxes(self, - x, - img_metas, - proposals, - rcnn_test_cfg, - rescale=False): - """Test only det bboxes without augmentation. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - img_metas (list[dict]): Image meta info. - proposals (Tensor or List[Tensor]): Region proposals. - rcnn_test_cfg (obj:`ConfigDict`): `test_cfg` of R-CNN. - rescale (bool): If True, return boxes in original image space. - Default: False. - - Returns: - tuple[list[Tensor], list[Tensor]]: The first list contains - the boxes of the corresponding image in a batch, each - tensor has the shape (num_boxes, 5) and last dimension - 5 represent (tl_x, tl_y, br_x, br_y, score). Each Tensor - in the second list is the labels with shape (num_boxes, ). - The length of both lists should be equal to batch_size. - """ - # get origin input shape to support onnx dynamic input shape - if torch.onnx.is_in_onnx_export(): - assert len( - img_metas - ) == 1, 'Only support one input image while in exporting to ONNX' - img_shapes = img_metas[0]['img_shape_for_onnx'] - else: - img_shapes = tuple(meta['img_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - # The length of proposals of different batches may be different. - # In order to form a batch, a padding operation is required. - if isinstance(proposals, list): - # padding to form a batch - max_size = max([proposal.size(0) for proposal in proposals]) - for i, proposal in enumerate(proposals): - supplement = proposal.new_full( - (max_size - proposal.size(0), proposal.size(1)), 0) - proposals[i] = torch.cat((supplement, proposal), dim=0) - rois = torch.stack(proposals, dim=0) - else: - rois = proposals - - batch_index = torch.arange( - rois.size(0), device=rois.device).float().view(-1, 1, 1).expand( - rois.size(0), rois.size(1), 1) - rois = torch.cat([batch_index, rois[..., :4]], dim=-1) - batch_size = rois.shape[0] - num_proposals_per_img = rois.shape[1] - - # Eliminate the batch dimension - rois = rois.view(-1, 5) - bbox_results = self._bbox_forward(x, rois) - cls_score = bbox_results['cls_score'] - bbox_pred = bbox_results['bbox_pred'] - - # Recover the batch dimension - rois = rois.reshape(batch_size, num_proposals_per_img, -1) - cls_score = cls_score.reshape(batch_size, num_proposals_per_img, -1) - - if not torch.onnx.is_in_onnx_export(): - # remove padding - supplement_mask = rois[..., -1] == 0 - cls_score[supplement_mask, :] = 0 - - # bbox_pred would be None in some detector when with_reg is False, - # e.g. 
Grid R-CNN. - if bbox_pred is not None: - # the bbox prediction of some detectors like SABL is not Tensor - if isinstance(bbox_pred, torch.Tensor): - bbox_pred = bbox_pred.reshape(batch_size, - num_proposals_per_img, -1) - if not torch.onnx.is_in_onnx_export(): - bbox_pred[supplement_mask, :] = 0 - else: - # TODO: Looking forward to a better way - # For SABL - bbox_preds = self.bbox_head.bbox_pred_split( - bbox_pred, num_proposals_per_img) - # apply bbox post-processing to each image individually - det_bboxes = [] - det_labels = [] - for i in range(len(proposals)): - # remove padding - supplement_mask = proposals[i][..., -1] == 0 - for bbox in bbox_preds[i]: - bbox[supplement_mask] = 0 - det_bbox, det_label = self.bbox_head.get_bboxes( - rois[i], - cls_score[i], - bbox_preds[i], - img_shapes[i], - scale_factors[i], - rescale=rescale, - cfg=rcnn_test_cfg) - det_bboxes.append(det_bbox) - det_labels.append(det_label) - return det_bboxes, det_labels - else: - bbox_pred = None - - return self.bbox_head.get_bboxes( - rois, - cls_score, - bbox_pred, - img_shapes, - scale_factors, - rescale=rescale, - cfg=rcnn_test_cfg) - - def aug_test_bboxes(self, feats, img_metas, proposal_list, rcnn_test_cfg): - """Test det bboxes with test time augmentation.""" - aug_bboxes = [] - aug_scores = [] - for x, img_meta in zip(feats, img_metas): - # only one image in the batch - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - # TODO more flexible - proposals = bbox_mapping(proposal_list[0][:, :4], img_shape, - scale_factor, flip, flip_direction) - rois = bbox2roi([proposals]) - bbox_results = self._bbox_forward(x, rois) - bboxes, scores = self.bbox_head.get_bboxes( - rois, - bbox_results['cls_score'], - bbox_results['bbox_pred'], - img_shape, - scale_factor, - rescale=False, - cfg=None) - aug_bboxes.append(bboxes) - aug_scores.append(scores) - # after merging, bboxes will be rescaled to the original image size - merged_bboxes, merged_scores = merge_aug_bboxes( - aug_bboxes, aug_scores, img_metas, rcnn_test_cfg) - det_bboxes, det_labels = multiclass_nms(merged_bboxes, merged_scores, - rcnn_test_cfg.score_thr, - rcnn_test_cfg.nms, - rcnn_test_cfg.max_per_img) - return det_bboxes, det_labels - - -class MaskTestMixin(object): - - if sys.version_info >= (3, 7): - - async def async_test_mask(self, - x, - img_metas, - det_bboxes, - det_labels, - rescale=False, - mask_test_cfg=None): - """Asynchronized test for mask head without augmentation.""" - # image shape of the first image in the batch (only one) - ori_shape = img_metas[0]['ori_shape'] - scale_factor = img_metas[0]['scale_factor'] - if det_bboxes.shape[0] == 0: - segm_result = [[] for _ in range(self.mask_head.num_classes)] - else: - if rescale and not isinstance(scale_factor, - (float, torch.Tensor)): - scale_factor = det_bboxes.new_tensor(scale_factor) - _bboxes = ( - det_bboxes[:, :4] * - scale_factor if rescale else det_bboxes) - mask_rois = bbox2roi([_bboxes]) - mask_feats = self.mask_roi_extractor( - x[:len(self.mask_roi_extractor.featmap_strides)], - mask_rois) - - if self.with_shared_head: - mask_feats = self.shared_head(mask_feats) - if mask_test_cfg and mask_test_cfg.get('async_sleep_interval'): - sleep_interval = mask_test_cfg['async_sleep_interval'] - else: - sleep_interval = 0.035 - async with completed( - __name__, - 'mask_head_forward', - sleep_interval=sleep_interval): - mask_pred = self.mask_head(mask_feats) - segm_result = 
self.mask_head.get_seg_masks( - mask_pred, _bboxes, det_labels, self.test_cfg, ori_shape, - scale_factor, rescale) - return segm_result - - def simple_test_mask(self, - x, - img_metas, - det_bboxes, - det_labels, - rescale=False): - """Simple test for mask head without augmentation.""" - # image shapes of images in the batch - ori_shapes = tuple(meta['ori_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - # The length of proposals of different batches may be different. - # In order to form a batch, a padding operation is required. - if isinstance(det_bboxes, list): - # padding to form a batch - max_size = max([bboxes.size(0) for bboxes in det_bboxes]) - for i, (bbox, label) in enumerate(zip(det_bboxes, det_labels)): - supplement_bbox = bbox.new_full( - (max_size - bbox.size(0), bbox.size(1)), 0) - supplement_label = label.new_full((max_size - label.size(0), ), - 0) - det_bboxes[i] = torch.cat((supplement_bbox, bbox), dim=0) - det_labels[i] = torch.cat((supplement_label, label), dim=0) - det_bboxes = torch.stack(det_bboxes, dim=0) - det_labels = torch.stack(det_labels, dim=0) - - batch_size = det_bboxes.size(0) - num_proposals_per_img = det_bboxes.shape[1] - - # if det_bboxes is rescaled to the original image size, we need to - # rescale it back to the testing scale to obtain RoIs. - det_bboxes = det_bboxes[..., :4] - if rescale: - if not isinstance(scale_factors[0], float): - scale_factors = det_bboxes.new_tensor(scale_factors) - det_bboxes = det_bboxes * scale_factors.unsqueeze(1) - - batch_index = torch.arange( - det_bboxes.size(0), device=det_bboxes.device).float().view( - -1, 1, 1).expand(det_bboxes.size(0), det_bboxes.size(1), 1) - mask_rois = torch.cat([batch_index, det_bboxes], dim=-1) - mask_rois = mask_rois.view(-1, 5) - mask_results = self._mask_forward(x, mask_rois) - mask_pred = mask_results['mask_pred'] - try: - mask_full_pred, mask_occ_pred = mask_pred - except: - mask_full_pred = mask_pred - mask_occ_pred = mask_pred - - - # Recover the batch dimension - mask_full_preds = mask_full_pred.reshape(batch_size, num_proposals_per_img, - *mask_full_pred.shape[1:]) - - mask_occ_preds = mask_occ_pred.reshape(batch_size, num_proposals_per_img, - *mask_occ_pred.shape[1:]) - - - # apply mask post-processing to each image individually - segm_results = [] - for i in range(batch_size): - mask_full_pred = mask_full_preds[i] - mask_occ_pred = mask_occ_preds[i] - det_bbox = det_bboxes[i] - det_label = det_labels[i] - - # remove padding - supplement_mask = det_bbox[..., -1] != 0 - mask_full_pred = mask_full_pred[supplement_mask] - mask_occ_pred = mask_occ_pred[supplement_mask] - det_bbox = det_bbox[supplement_mask] - det_label = det_label[supplement_mask] - - if det_label.shape[0] == 0: - segm_results.append([[] - for _ in range(self.mask_head.num_classes) - ]) - else: - segm_result_vis = self.mask_head.get_seg_masks( - mask_full_pred[:,0:1], det_bbox, det_label, self.test_cfg, - ori_shapes[i], scale_factors[i], rescale) - - segm_result_occ = self.mask_head.get_seg_masks( - mask_occ_pred[:,0:1], det_bbox, det_label, self.test_cfg, - ori_shapes[i], scale_factors[i], rescale) - - segm_result = segm_result_vis - segm_result[1] = segm_result_occ[0] - - segm_results.append(segm_result) - return segm_results - - def aug_test_mask(self, feats, img_metas, det_bboxes, det_labels): - """Test for mask head with test time augmentation.""" - if det_bboxes.shape[0] == 0: - segm_result = [[] for _ in range(self.mask_head.num_classes)] - else: - aug_masks = 
[] - for x, img_meta in zip(feats, img_metas): - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - flip_direction = img_meta[0]['flip_direction'] - _bboxes = bbox_mapping(det_bboxes[:, :4], img_shape, - scale_factor, flip, flip_direction) - mask_rois = bbox2roi([_bboxes]) - mask_results = self._mask_forward(x, mask_rois) - # convert to numpy array to save memory - aug_masks.append( - mask_results['mask_pred'].sigmoid().cpu().numpy()) - merged_masks = merge_aug_masks(aug_masks, img_metas, self.test_cfg) - - ori_shape = img_metas[0][0]['ori_shape'] - segm_result = self.mask_head.get_seg_masks( - merged_masks, - det_bboxes, - det_labels, - self.test_cfg, - ori_shape, - scale_factor=1.0, - rescale=False) - return segm_result diff --git a/spaces/ChallengeHub/Chinese-LangChain/create_knowledge.py b/spaces/ChallengeHub/Chinese-LangChain/create_knowledge.py deleted file mode 100644 index ee36198a3e110a637c415f17a4938f2eab2d3faa..0000000000000000000000000000000000000000 --- a/spaces/ChallengeHub/Chinese-LangChain/create_knowledge.py +++ /dev/null @@ -1,79 +0,0 @@ -#!/usr/bin/env python -# -*- coding:utf-8 _*- -""" -@author:quincy qiang -@license: Apache Licence -@file: create_knowledge.py -@time: 2023/04/18 -@contact: yanqiangmiffy@gamil.com -@software: PyCharm -@description: - emoji:https://emojixd.com/pocket/science -""" -import os -import pandas as pd -from langchain.schema import Document -from langchain.document_loaders import UnstructuredFileLoader -from langchain.embeddings.huggingface import HuggingFaceEmbeddings -from langchain.vectorstores import FAISS -from tqdm import tqdm -# Example: importing Chinese Wikipedia data -embedding_model_name = '/root/pretrained_models/text2vec-large-chinese' -docs_path = '/root/GoMall/Knowledge-ChatGLM/cache/financial_research_reports' -embeddings = HuggingFaceEmbeddings(model_name=embedding_model_name) - - -# Wikipedia data processing - -# docs = [] - -# with open('docs/zh_wikipedia/zhwiki.sim.utf8', 'r', encoding='utf-8') as f: -# for idx, line in tqdm(enumerate(f.readlines())): -# metadata = {"source": f'doc_id_{idx}'} -# docs.append(Document(page_content=line.strip(), metadata=metadata)) -# -# vector_store = FAISS.from_documents(docs, embeddings) -# vector_store.save_local('cache/zh_wikipedia/') - - - -docs = [] - -with open('cache/zh_wikipedia/wiki.zh-sim-cleaned.txt', 'r', encoding='utf-8') as f: - for idx, line in tqdm(enumerate(f.readlines())): - metadata = {"source": f'doc_id_{idx}'} - docs.append(Document(page_content=line.strip(), metadata=metadata)) - -vector_store = FAISS.from_documents(docs, embeddings) -vector_store.save_local('cache/zh_wikipedia/') - - -# Financial research report data processing -# docs = [] -# -# for doc in tqdm(os.listdir(docs_path)): -# if doc.endswith('.txt'): -# # print(doc) -# loader = UnstructuredFileLoader(f'{docs_path}/{doc}', mode="elements") -# doc = loader.load() -# docs.extend(doc) -# vector_store = FAISS.from_documents(docs, embeddings) -# vector_store.save_local('cache/financial_research_reports') - - -# League of Legends - -docs = [] - -lol_df = pd.read_csv('cache/lol/champions.csv') -# lol_df.columns = ['id', '英雄简称', '英雄全称', '出生地', '人物属性', '英雄类别', '英雄故事'] -print(lol_df) - -for idx, row in lol_df.iterrows(): - metadata = {"source": f'doc_id_{idx}'} - text = ' '.join(row.values) - # for col in ['英雄简称', '英雄全称', '出生地', '人物属性', '英雄类别', '英雄故事']: - # text += row[col] - docs.append(Document(page_content=text, metadata=metadata)) - -vector_store = FAISS.from_documents(docs, embeddings) -vector_store.save_local('cache/lol/') diff
--git a/spaces/Cvandi/remake/realesrgan/data/realesrgan_paired_dataset.py b/spaces/Cvandi/remake/realesrgan/data/realesrgan_paired_dataset.py deleted file mode 100644 index 386c8d72496245dae8df033c2ebbd76b41ff45f1..0000000000000000000000000000000000000000 --- a/spaces/Cvandi/remake/realesrgan/data/realesrgan_paired_dataset.py +++ /dev/null @@ -1,108 +0,0 @@ -import os -from basicsr.data.data_util import paired_paths_from_folder, paired_paths_from_lmdb -from basicsr.data.transforms import augment, paired_random_crop -from basicsr.utils import FileClient, imfrombytes, img2tensor -from basicsr.utils.registry import DATASET_REGISTRY -from torch.utils import data as data -from torchvision.transforms.functional import normalize - - -@DATASET_REGISTRY.register() -class RealESRGANPairedDataset(data.Dataset): - """Paired image dataset for image restoration. - - Read LQ (Low Quality, e.g. LR (Low Resolution), blurry, noisy, etc) and GT image pairs. - - There are three modes: - 1. 'lmdb': Use lmdb files. - If opt['io_backend'] == lmdb. - 2. 'meta_info': Use meta information file to generate paths. - If opt['io_backend'] != lmdb and opt['meta_info'] is not None. - 3. 'folder': Scan folders to generate paths. - The rest. - - Args: - opt (dict): Config for train datasets. It contains the following keys: - dataroot_gt (str): Data root path for gt. - dataroot_lq (str): Data root path for lq. - meta_info (str): Path for meta information file. - io_backend (dict): IO backend type and other kwarg. - filename_tmpl (str): Template for each filename. Note that the template excludes the file extension. - Default: '{}'. - gt_size (int): Cropped patched size for gt patches. - use_hflip (bool): Use horizontal flips. - use_rot (bool): Use rotation (use vertical flip and transposing h - and w for implementation). - - scale (bool): Scale, which will be added automatically. - phase (str): 'train' or 'val'. - """ - - def __init__(self, opt): - super(RealESRGANPairedDataset, self).__init__() - self.opt = opt - self.file_client = None - self.io_backend_opt = opt['io_backend'] - # mean and std for normalizing the input images - self.mean = opt['mean'] if 'mean' in opt else None - self.std = opt['std'] if 'std' in opt else None - - self.gt_folder, self.lq_folder = opt['dataroot_gt'], opt['dataroot_lq'] - self.filename_tmpl = opt['filename_tmpl'] if 'filename_tmpl' in opt else '{}' - - # file client (lmdb io backend) - if self.io_backend_opt['type'] == 'lmdb': - self.io_backend_opt['db_paths'] = [self.lq_folder, self.gt_folder] - self.io_backend_opt['client_keys'] = ['lq', 'gt'] - self.paths = paired_paths_from_lmdb([self.lq_folder, self.gt_folder], ['lq', 'gt']) - elif 'meta_info' in self.opt and self.opt['meta_info'] is not None: - # disk backend with meta_info - # Each line in the meta_info describes the relative path to an image - with open(self.opt['meta_info']) as fin: - paths = [line.strip() for line in fin] - self.paths = [] - for path in paths: - gt_path, lq_path = path.split(', ') - gt_path = os.path.join(self.gt_folder, gt_path) - lq_path = os.path.join(self.lq_folder, lq_path) - self.paths.append(dict([('gt_path', gt_path), ('lq_path', lq_path)])) - else: - # disk backend - # it will scan the whole folder to get meta info - # it will be time-consuming for folders with too many files. 
It is recommended using an extra meta txt file - self.paths = paired_paths_from_folder([self.lq_folder, self.gt_folder], ['lq', 'gt'], self.filename_tmpl) - - def __getitem__(self, index): - if self.file_client is None: - self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt) - - scale = self.opt['scale'] - - # Load gt and lq images. Dimension order: HWC; channel order: BGR; - # image range: [0, 1], float32. - gt_path = self.paths[index]['gt_path'] - img_bytes = self.file_client.get(gt_path, 'gt') - img_gt = imfrombytes(img_bytes, float32=True) - lq_path = self.paths[index]['lq_path'] - img_bytes = self.file_client.get(lq_path, 'lq') - img_lq = imfrombytes(img_bytes, float32=True) - - # augmentation for training - if self.opt['phase'] == 'train': - gt_size = self.opt['gt_size'] - # random crop - img_gt, img_lq = paired_random_crop(img_gt, img_lq, gt_size, scale, gt_path) - # flip, rotation - img_gt, img_lq = augment([img_gt, img_lq], self.opt['use_hflip'], self.opt['use_rot']) - - # BGR to RGB, HWC to CHW, numpy to tensor - img_gt, img_lq = img2tensor([img_gt, img_lq], bgr2rgb=True, float32=True) - # normalize - if self.mean is not None or self.std is not None: - normalize(img_lq, self.mean, self.std, inplace=True) - normalize(img_gt, self.mean, self.std, inplace=True) - - return {'lq': img_lq, 'gt': img_gt, 'lq_path': lq_path, 'gt_path': gt_path} - - def __len__(self): - return len(self.paths) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/testTools.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/testTools.py deleted file mode 100644 index be6116132d93a6a5f692f5b8465be346aad7ca5c..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/testTools.py +++ /dev/null @@ -1,229 +0,0 @@ -"""Helpers for writing unit tests.""" - -from collections.abc import Iterable -from io import BytesIO -import os -import re -import shutil -import sys -import tempfile -from unittest import TestCase as _TestCase -from fontTools.config import Config -from fontTools.misc.textTools import tobytes -from fontTools.misc.xmlWriter import XMLWriter - - -def parseXML(xmlSnippet): - """Parses a snippet of XML. - - Input can be either a single string (unicode or UTF-8 bytes), or a - a sequence of strings. - - The result is in the same format that would be returned by - XMLReader, but the parser imposes no constraints on the root - element so it can be called on small snippets of TTX files. - """ - # To support snippets with multiple elements, we add a fake root. 
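To see concretely why the fake root matters (a standalone sketch using only the standard library; the snippet value is a made-up example, not taken from this file): expat refuses a document with two top-level elements, but accepts the same bytes once wrapped.

from xml.parsers.expat import ParserCreate

snippet = b'<a x="1"/><b/>'  # two sibling elements: not a well-formed document on their own
seen = []
parser = ParserCreate()
parser.StartElementHandler = lambda name, attrs: seen.append((name, attrs))
parser.Parse(b"<root>" + snippet + b"</root>", True)  # the fake root makes it well-formed
print(seen)  # [('root', {}), ('a', {'x': '1'}), ('b', {})]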
- reader = TestXMLReader_() - xml = b"<root>" - if isinstance(xmlSnippet, bytes): - xml += xmlSnippet - elif isinstance(xmlSnippet, str): - xml += tobytes(xmlSnippet, "utf-8") - elif isinstance(xmlSnippet, Iterable): - xml += b"".join(tobytes(s, "utf-8") for s in xmlSnippet) - else: - raise TypeError( - "expected string or sequence of strings; found %r" - % type(xmlSnippet).__name__ - ) - xml += b"</root>" - reader.parser.Parse(xml, 0) - return reader.root[2] - - -def parseXmlInto(font, parseInto, xmlSnippet): - parsed_xml = [e for e in parseXML(xmlSnippet.strip()) if not isinstance(e, str)] - for name, attrs, content in parsed_xml: - parseInto.fromXML(name, attrs, content, font) - parseInto.populateDefaults() - return parseInto - - -class FakeFont: - def __init__(self, glyphs): - self.glyphOrder_ = glyphs - self.reverseGlyphOrderDict_ = {g: i for i, g in enumerate(glyphs)} - self.lazy = False - self.tables = {} - self.cfg = Config() - - def __getitem__(self, tag): - return self.tables[tag] - - def __setitem__(self, tag, table): - self.tables[tag] = table - - def get(self, tag, default=None): - return self.tables.get(tag, default) - - def getGlyphID(self, name): - return self.reverseGlyphOrderDict_[name] - - def getGlyphIDMany(self, lst): - return [self.getGlyphID(gid) for gid in lst] - - def getGlyphName(self, glyphID): - if glyphID < len(self.glyphOrder_): - return self.glyphOrder_[glyphID] - else: - return "glyph%.5d" % glyphID - - def getGlyphNameMany(self, lst): - return [self.getGlyphName(gid) for gid in lst] - - def getGlyphOrder(self): - return self.glyphOrder_ - - def getReverseGlyphMap(self): - return self.reverseGlyphOrderDict_ - - def getGlyphNames(self): - return sorted(self.getGlyphOrder()) - - -class TestXMLReader_(object): - def __init__(self): - from xml.parsers.expat import ParserCreate - - self.parser = ParserCreate() - self.parser.StartElementHandler = self.startElement_ - self.parser.EndElementHandler = self.endElement_ - self.parser.CharacterDataHandler = self.addCharacterData_ - self.root = None - self.stack = [] - - def startElement_(self, name, attrs): - element = (name, attrs, []) - if self.stack: - self.stack[-1][2].append(element) - else: - self.root = element - self.stack.append(element) - - def endElement_(self, name): - self.stack.pop() - - def addCharacterData_(self, data): - self.stack[-1][2].append(data) - - -def makeXMLWriter(newlinestr="\n"): - # don't write OS-specific new lines - writer = XMLWriter(BytesIO(), newlinestr=newlinestr) - # erase XML declaration - writer.file.seek(0) - writer.file.truncate() - return writer - - -def getXML(func, ttFont=None): - """Call the passed toXML function and return the written content as a - list of lines (unicode strings). - Result is stripped of XML declaration and OS-specific newline characters. - """ - writer = makeXMLWriter() - func(writer, ttFont) - xml = writer.file.getvalue().decode("utf-8") - # toXML methods must always end with a writer.newline() - assert xml.endswith("\n") - return xml.splitlines() - - -def stripVariableItemsFromTTX( - string: str, - ttLibVersion: bool = True, - checkSumAdjustment: bool = True, - modified: bool = True, - created: bool = True, - sfntVersion: bool = False, # opt-in only -) -> str: - """Strip stuff like ttLibVersion, checksums, timestamps, etc. from TTX dumps.""" - # ttLibVersion changes with the fontTools version - if ttLibVersion: - string = re.sub(' ttLibVersion="[^"]+"', "", string) - # sometimes (e.g.
some subsetter tests) we don't care whether it's OTF or TTF - if sfntVersion: - string = re.sub(' sfntVersion="[^"]+"', "", string) - # head table checksum and creation and mod date change with each save. - if checkSumAdjustment: - string = re.sub('<checkSumAdjustment value="[^"]+"/>', "", string) - if modified: - string = re.sub('<modified value="[^"]+"/>', "", string) - if created: - string = re.sub('<created value="[^"]+"/>', "", string) - return string - - -class MockFont(object): - """A font-like object that automatically adds any looked up glyphname - to its glyphOrder.""" - - def __init__(self): - self._glyphOrder = [".notdef"] - - class AllocatingDict(dict): - def __missing__(reverseDict, key): - self._glyphOrder.append(key) - gid = len(reverseDict) - reverseDict[key] = gid - return gid - - self._reverseGlyphOrder = AllocatingDict({".notdef": 0}) - self.lazy = False - - def getGlyphID(self, glyph): - gid = self._reverseGlyphOrder[glyph] - return gid - - def getReverseGlyphMap(self): - return self._reverseGlyphOrder - - def getGlyphName(self, gid): - return self._glyphOrder[gid] - - def getGlyphOrder(self): - return self._glyphOrder - - -class TestCase(_TestCase): - def __init__(self, methodName): - _TestCase.__init__(self, methodName) - # Python 3 renamed assertRaisesRegexp to assertRaisesRegex, - # and fires deprecation warnings if a program uses the old name. - if not hasattr(self, "assertRaisesRegex"): - self.assertRaisesRegex = self.assertRaisesRegexp - - -class DataFilesHandler(TestCase): - def setUp(self): - self.tempdir = None - self.num_tempfiles = 0 - - def tearDown(self): - if self.tempdir: - shutil.rmtree(self.tempdir) - - def getpath(self, testfile): - folder = os.path.dirname(sys.modules[self.__module__].__file__) - return os.path.join(folder, "data", testfile) - - def temp_dir(self): - if not self.tempdir: - self.tempdir = tempfile.mkdtemp() - - def temp_font(self, font_path, file_name): - self.temp_dir() - temppath = os.path.join(self.tempdir, file_name) - shutil.copy2(font_path, temppath) - return temppath diff --git a/spaces/EuroPython2022/pulsar-clip/README.md b/spaces/EuroPython2022/pulsar-clip/README.md deleted file mode 100644 index bf7cd8333378ebe4bd874633d4398c0d1ba5e60f..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/pulsar-clip/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Pulsar Clip -emoji: 😻 -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 3.1.4b5 -app_file: app.py -pinned: false -license: agpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/infer_gt_mel.py b/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/infer_gt_mel.py deleted file mode 100644 index 033b821a5d21a1232f1786bce5616b12e01488ad..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/infer_gt_mel.py +++ /dev/null @@ -1,74 +0,0 @@ -import numpy as np -import torch -import torch.nn.functional as F -from diffusion.unit2mel import load_model_vocoder - - -class DiffGtMel: - def __init__(self, project_path=None, device=None): - self.project_path = project_path - if device is not None: - self.device = device - else: - self.device = 'cuda' if torch.cuda.is_available() else 'cpu' - self.model = None - self.vocoder = None - self.args = None - - def flush_model(self, project_path, ddsp_config=None): - if (self.model is None) or (project_path != self.project_path): - model, vocoder, args = load_model_vocoder(project_path, device=self.device) - if self.check_args(ddsp_config,
args): - self.model = model - self.vocoder = vocoder - self.args = args - - def check_args(self, args1, args2): - if args1.data.block_size != args2.data.block_size: - raise ValueError("block_size of the DDSP and DIFF models do not match") - if args1.data.sampling_rate != args2.data.sampling_rate: - raise ValueError("sampling_rate of the DDSP and DIFF models do not match") - if args1.data.encoder != args2.data.encoder: - raise ValueError("encoder of the DDSP and DIFF models do not match") - return True - - def __call__(self, audio, f0, hubert, volume, acc=1, spk_id=1, k_step=0, method='pndm', - spk_mix_dict=None, start_frame=0): - input_mel = self.vocoder.extract(audio, self.args.data.sampling_rate) - out_mel = self.model( - hubert, - f0, - volume, - spk_id=spk_id, - spk_mix_dict=spk_mix_dict, - gt_spec=input_mel, - infer=True, - infer_speedup=acc, - method=method, - k_step=k_step, - use_tqdm=False) - if start_frame > 0: - out_mel = out_mel[:, start_frame:, :] - f0 = f0[:, start_frame:, :] - output = self.vocoder.infer(out_mel, f0) - if start_frame > 0: - output = F.pad(output, (start_frame * self.vocoder.vocoder_hop_size, 0)) - return output - - def infer(self, audio, f0, hubert, volume, acc=1, spk_id=1, k_step=0, method='pndm', silence_front=0, - use_silence=False, spk_mix_dict=None): - start_frame = int(silence_front * self.vocoder.vocoder_sample_rate / self.vocoder.vocoder_hop_size) - if use_silence: - audio = audio[:, start_frame * self.vocoder.vocoder_hop_size:] - f0 = f0[:, start_frame:, :] - hubert = hubert[:, start_frame:, :] - volume = volume[:, start_frame:, :] - _start_frame = 0 - else: - _start_frame = start_frame - audio = self.__call__(audio, f0, hubert, volume, acc=acc, spk_id=spk_id, k_step=k_step, - method=method, spk_mix_dict=spk_mix_dict, start_frame=_start_frame) - if use_silence: - if start_frame > 0: - audio = F.pad(audio, (start_frame * self.vocoder.vocoder_hop_size, 0)) - return audio diff --git a/spaces/GMFTBY/PandaGPT/model/ImageBind/__init__.py b/spaces/GMFTBY/PandaGPT/model/ImageBind/__init__.py deleted file mode 100644 index d872d0725710d6dde3af3b6e05382922f074338b..0000000000000000000000000000000000000000 --- a/spaces/GMFTBY/PandaGPT/model/ImageBind/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .models import imagebind_model -from .models.imagebind_model import ModalityType diff --git a/spaces/GaenKoki/voicevox/speaker_info/388f246b-8c41-4ac1-8e2d-5d79f3ff56d9/policy.md b/spaces/GaenKoki/voicevox/speaker_info/388f246b-8c41-4ac1-8e2d-5d79f3ff56d9/policy.md deleted file mode 100644 index 0328c63112a40f44145440562c8fe2d56ac86e38..0000000000000000000000000000000000000000 --- a/spaces/GaenKoki/voicevox/speaker_info/388f246b-8c41-4ac1-8e2d-5d79f3ff56d9/policy.md +++ /dev/null @@ -1,3 +0,0 @@ -dummy2 policy - -https://voicevox.hiroshiba.jp/ diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/block_on_cylinder_on_pallet.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/block_on_cylinder_on_pallet.py deleted file mode 100644 index d29f6d6de4c60bd0e6a5a30261ea09bc6ed05b9d..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/block_on_cylinder_on_pallet.py +++ /dev/null @@ -1,58 +0,0 @@ -import numpy as np -from cliport.tasks.task import Task -from cliport.utils import utils - -class BlockOnCylinderOnPallet(Task): - """Pick up each block and place it on the corresponding colored cylinder, which are located in specific positions on a pallet.""" - - def __init__(self): - super().__init__() - self.max_steps = 15 - self.lang_template = "place the {} cylinder on the pallet" - self.lang_template_2 = 
"place the {} block on the {} cylinder" - - self.task_completed_desc = "done placing blocks on cylinders and cylinder on pallet." - self.additional_reset() - - def reset(self, env): - super().reset(env) - - # Add pallet. - pallet_size = (0.35, 0.35, 0.01) - pallet_pose = self.get_random_pose(env, pallet_size) - pallet_urdf = 'pallet/pallet.urdf' - env.add_object(pallet_urdf, pallet_pose, 'fixed') - - # Define colors. - block_colors = ['red'] - cylinder_colors = ['blue'] - - # Add cylinders. - cylinder_size = (0.04, 0.04, 0.06) - cylinder_template = 'cylinder/cylinder-template.urdf' - cylinders = [] - - - replace = {'DIM': cylinder_size, 'HALF': (cylinder_size[0] / 2, cylinder_size[1] / 2, cylinder_size[2] / 2), 'COLOR': block_colors[0]} - cylinder_urdf = self.fill_template(cylinder_template, replace) - cylinder_pose = self.get_random_pose(env, cylinder_size) - cylinder_id = env.add_object(cylinder_urdf, cylinder_pose) - cylinders.append(cylinder_id) - - # Add blocks. - block_size = (0.04, 0.04, 0.04) - block_urdf = 'block/block.urdf' - blocks = [] - block_pose = self.get_random_pose(env, block_size) - block_id = env.add_object(block_urdf, block_pose, color=cylinder_colors[0]) - blocks.append(block_id) - - # Goal: place the cylinder on top of the pallet - self.add_goal(objs=[cylinders[0]], matches=np.ones((1, 1)), targ_poses=[pallet_pose], replace=False, - rotations=True, metric='pose', params=None, step_max_reward=1/2, language_goal=self.lang_template.format(cylinder_colors[0])) - - - # Goal: place the block on top of the cylinder - language_goal = self.lang_template_2.format(block_colors[0], cylinder_colors[0]) - self.add_goal(objs=[blocks[0]], matches=np.ones((1, 1)), targ_poses=[pallet_pose], replace=False, - rotations=True, metric='pose', params=None, step_max_reward=1/2, language_goal=language_goal) diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/color_cued_ball_corner_sorting.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/color_cued_ball_corner_sorting.py deleted file mode 100644 index b24285254c772733bbdfb70ca226c0c618a208c0..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/color_cued_ball_corner_sorting.py +++ /dev/null @@ -1,62 +0,0 @@ -import numpy as np -import os -import pybullet as p -import random -from cliport.tasks import primitives -from cliport.tasks.grippers import Spatula -from cliport.tasks.task import Task -from cliport.utils import utils -import numpy as np -from cliport.tasks.task import Task -from cliport.utils import utils - -class ColorCuedBallCornerSorting(Task): - """Pick up each colored ball and place it in the corner of the same color while avoiding a zone marked by small blocks.""" - - def __init__(self): - super().__init__() - self.max_steps = 20 - self.lang_template = "place the {color} ball in the {color} corner" - self.task_completed_desc = "done sorting balls." - self.additional_reset() - - def reset(self, env): - super().reset(env) - - # Add corners. - corner_size = (0.05, 0.05, 0.05) - corner_urdf = 'corner/corner-template.urdf' - corner_colors = ['red', 'blue', 'green', 'yellow'] - corner_poses = [] - for color in corner_colors: - corner_pose = self.get_random_pose(env, corner_size) - env.add_object(corner_urdf, corner_pose, color=color, category='fixed') - corner_poses.append(corner_pose) - - # Add balls. 
- balls = [] - ball_size = (0.04, 0.04, 0.04) - ball_urdf = 'ball/ball-template.urdf' - for color in corner_colors: - ball_pose = self.get_random_pose(env, ball_size) - ball_id = env.add_object(ball_urdf, ball_pose, color=color) - balls.append(ball_id) - - # Add zone. - zone_size = (0.2, 0.2, 0.05) - zone_pose = self.get_random_pose(env, zone_size) - zone_urdf = 'zone/zone.urdf' - env.add_object(zone_urdf, zone_pose, 'fixed') - - # Add blocks. - block_size = (0.04, 0.04, 0.04) - block_urdf = 'block/block_for_anchors.urdf' - for _ in range(4): - block_pose = self.get_random_pose(env, block_size) - env.add_object(block_urdf, block_pose) - - # Goal: each ball is in the corner of the same color. - for i in range(4): - self.add_goal(objs=[balls[i]], matches=np.ones((1, 1)), targ_poses=[corner_poses[i]], replace=False, - rotations=True, metric='pose', params=None, step_max_reward=1/4, - language_goal=self.lang_template.format(color=corner_colors[i])) \ No newline at end of file diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/color_ordered_insertion_new.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/color_ordered_insertion_new.py deleted file mode 100644 index 72cc3f4f34d8822ba14e7a7e9c73b1e995304a8f..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/color_ordered_insertion_new.py +++ /dev/null @@ -1,52 +0,0 @@ -import numpy as np -import os -import pybullet as p -import random -from cliport.tasks import primitives -from cliport.tasks.grippers import Spatula -from cliport.tasks.task import Task -from cliport.utils import utils -import numpy as np -from cliport.tasks.task import Task -from cliport.utils import utils - -class ColorOrderedInsertionNew(Task): - """Insert differently-colored ell objects into the matching color fixture in a specific order.""" - - def __init__(self): - super().__init__() - self.max_steps = 20 - self.lang_template = "put the {color} L shape block in the L shape hole" - self.task_completed_desc = "done with insertion." - self.additional_reset() - - def reset(self, env): - super().reset(env) - - # Define colors and their order - colors = ['red', 'blue', 'green', 'yellow'] - color_order = {color: i for i, color in enumerate(colors)} - - # Add fixtures. - fixture_size = (0.12, 0.12, 0.02) - fixture_urdf = 'insertion/fixture.urdf' - fixtures = [] - for color in colors: - fixture_pose = self.get_random_pose(env, fixture_size) - fixture_id = env.add_object(fixture_urdf, fixture_pose, color=utils.COLORS[color], category='fixed') - fixtures.append(fixture_id) - - # Add ell objects. - ell_size = (0.04, 0.04, 0.04) - ell_urdf = 'insertion/ell.urdf' - ells = [] - for color in colors: - ell_pose = self.get_random_pose(env, ell_size) - ell_id = env.add_object(ell_urdf, ell_pose, color=utils.COLORS[color]) - ells.append(ell_id) - - # Goal: each ell is inserted into the matching color fixture in the correct order. 
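A note on the matches argument used throughout these goals: it is an objects-by-targets indicator matrix, which is why the loop below can pass np.ones((1, 1)) when each goal pairs exactly one object with one pose. A minimal sketch of that assumed convention (simplified for illustration; cliport's real reward code also compares poses against thresholds):

import numpy as np

# rows = objects, cols = candidate target poses; a 1 marks an allowed pairing
matches = np.zeros((2, 2))
matches[0, 0] = 1  # red ell may only occupy the red fixture
matches[1, 1] = 1  # blue ell may only occupy the blue fixture

def allowed_targets(obj_idx):
    return np.flatnonzero(matches[obj_idx])

print(allowed_targets(0))  # [0]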
- for i, ell in enumerate(ells): - self.add_goal(objs=[ell], matches=np.ones((1, 1)), targ_poses=[p.getBasePositionAndOrientation(fixtures[i])], replace=False, - rotations=True, metric='pose', params=None, step_max_reward=1 / len(ells), - language_goal=self.lang_template.format(color=colors[i])) \ No newline at end of file diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/move_piles_along_line.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/move_piles_along_line.py deleted file mode 100644 index b3963dfaa5d7551149c72ce8fe759393424fbd66..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/move_piles_along_line.py +++ /dev/null @@ -1,70 +0,0 @@ -import numpy as np -import os -import pybullet as p -import random -from cliport.tasks import primitives -from cliport.tasks.grippers import Spatula -from cliport.tasks.task import Task -from cliport.utils import utils -import numpy as np -from cliport.tasks import primitives -from cliport.tasks.grippers import Spatula -from cliport.tasks.task import Task -from cliport.utils import utils - -class MovePilesAlongLine(Task): - """Move three piles of small blocks, each pile a different color (red, blue, green), - along three matching colored lines to three separate zones of the same color using a spatula.""" - - def __init__(self): - super().__init__() - self.max_steps = 20 - self.lang_template = "move the piles of blocks along the lines to the matching colored zones" - self.task_completed_desc = "done moving piles." - self.primitive = primitives.push - self.ee = Spatula - self.additional_reset() - - def reset(self, env): - super().reset(env) - - # Add three colored lines. - line_template = 'line/line-template.urdf' - line_colors = ['red', 'blue', 'green'] - line_poses = [] - for color in line_colors: - line_size = self.get_random_size(0.1, 0.15, 0.1, 0.15, 0.05, 0.05) - line_pose = self.get_random_pose(env, line_size) - replace = {'DIM': line_size, 'HALF': (line_size[0] / 2, line_size[1] / 2, line_size[2] / 2), 'COLOR': color} - line_urdf = self.fill_template(line_template, replace) - env.add_object(line_urdf, line_pose, 'fixed') - line_poses.append(line_pose) - - # Add three colored zones. - zone_template = 'zone/zone.urdf' - zone_poses = [] - for color in line_colors: - zone_size = self.get_random_size(0.1, 0.15, 0.1, 0.15, 0.05, 0.05) - zone_pose = self.get_random_pose(env, zone_size) - replace = {'DIM': zone_size, 'HALF': (zone_size[0] / 2, zone_size[1] / 2, zone_size[2] / 2), 'COLOR': color} - zone_urdf = self.fill_template(zone_template, replace) - env.add_object(zone_urdf, zone_pose, 'fixed') - zone_poses.append(zone_pose) - - # Add three piles of small blocks. - block_template = 'block/small.urdf' - block_colors = ['red', 'blue', 'green'] - block_ids = [] - for color in block_colors: - block_size = self.get_random_size(0.1, 0.15, 0.1, 0.15, 0.05, 0.05) - block_pose = self.get_random_pose(env, block_size) - replace = {'DIM': block_size, 'HALF': (block_size[0] / 2, block_size[1] / 2, block_size[2] / 2), 'COLOR': color} - block_urdf = self.fill_template(block_template, replace) - block_id = env.add_object(block_urdf, block_pose) - block_ids.append(block_id) - - # Add goals. 
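The resets above repeatedly stamp DIM, HALF, and COLOR into a URDF template through fill_template; a minimal sketch of that substitution idea (assumed behavior: cliport's actual helper also writes the filled text to a temporary .urdf file and returns its path), before the goal loop below assigns each pile a zone goal worth one third of the reward:

def fill_template_sketch(template_text: str, replace: dict) -> str:
    # naive keyword substitution: every occurrence of each key is replaced
    for key, value in replace.items():
        template_text = template_text.replace(key, str(value))
    return template_text

template = '<box size="DIM"/> <material color="COLOR"/>'
print(fill_template_sketch(template, {'DIM': (0.1, 0.15, 0.05), 'COLOR': 'red'}))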
- for i in range(3): - self.add_goal(objs=[block_ids[i]], matches=np.ones((1, 1)), targ_poses=[zone_poses[i]], replace=False, - rotations=False, metric='zone', params=[(zone_poses[i], zone_size)], step_max_reward=1/3, - language_goal=self.lang_template) \ No newline at end of file diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/coder/tblr_bbox_coder.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/coder/tblr_bbox_coder.py deleted file mode 100644 index edaffaf1fa252857e1a660ea14a613e2466fb52c..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/coder/tblr_bbox_coder.py +++ /dev/null @@ -1,198 +0,0 @@ -import mmcv -import torch - -from ..builder import BBOX_CODERS -from .base_bbox_coder import BaseBBoxCoder - - -@BBOX_CODERS.register_module() -class TBLRBBoxCoder(BaseBBoxCoder): - """TBLR BBox coder. - - Following the practice in `FSAF `_, - this coder encodes gt bboxes (x1, y1, x2, y2) into (top, bottom, left, - right) and decode it back to the original. - - Args: - normalizer (list | float): Normalization factor to be - divided with when coding the coordinates. If it is a list, it should - have length of 4 indicating normalization factor in tblr dims. - Otherwise it is a unified float factor for all dims. Default: 4.0 - clip_border (bool, optional): Whether clip the objects outside the - border of the image. Defaults to True. - """ - - def __init__(self, normalizer=4.0, clip_border=True): - super(BaseBBoxCoder, self).__init__() - self.normalizer = normalizer - self.clip_border = clip_border - - def encode(self, bboxes, gt_bboxes): - """Get box regression transformation deltas that can be used to - transform the ``bboxes`` into the ``gt_bboxes`` in the (top, left, - bottom, right) order. - - Args: - bboxes (torch.Tensor): source boxes, e.g., object proposals. - gt_bboxes (torch.Tensor): target of the transformation, e.g., - ground truth boxes. - - Returns: - torch.Tensor: Box transformation deltas - """ - assert bboxes.size(0) == gt_bboxes.size(0) - assert bboxes.size(-1) == gt_bboxes.size(-1) == 4 - encoded_bboxes = bboxes2tblr( - bboxes, gt_bboxes, normalizer=self.normalizer) - return encoded_bboxes - - def decode(self, bboxes, pred_bboxes, max_shape=None): - """Apply transformation `pred_bboxes` to `boxes`. - - Args: - bboxes (torch.Tensor): Basic boxes.Shape (B, N, 4) or (N, 4) - pred_bboxes (torch.Tensor): Encoded boxes with shape - (B, N, 4) or (N, 4) - max_shape (Sequence[int] or torch.Tensor or Sequence[ - Sequence[int]],optional): Maximum bounds for boxes, specifies - (H, W, C) or (H, W). If bboxes shape is (B, N, 4), then - the max_shape should be a Sequence[Sequence[int]] - and the length of max_shape should also be B. - - Returns: - torch.Tensor: Decoded boxes. - """ - decoded_bboxes = tblr2bboxes( - bboxes, - pred_bboxes, - normalizer=self.normalizer, - max_shape=max_shape, - clip_border=self.clip_border) - - return decoded_bboxes - - -@mmcv.jit(coderize=True) -def bboxes2tblr(priors, gts, normalizer=4.0, normalize_by_wh=True): - """Encode ground truth boxes to tblr coordinate. - - It first convert the gt coordinate to tblr format, - (top, bottom, left, right), relative to prior box centers. - The tblr coordinate may be normalized by the side length of prior bboxes - if `normalize_by_wh` is specified as True, and it is then normalized by - the `normalizer` factor. - - Args: - priors (Tensor): Prior boxes in point form - Shape: (num_proposals,4). 
- gts (Tensor): Coords of ground truth for each prior in point-form - Shape: (num_proposals, 4). - normalizer (Sequence[float] | float): normalization parameter of - encoded boxes. If it is a list, it has to have length = 4. - Default: 4.0 - normalize_by_wh (bool): Whether to normalize tblr coordinate by the - side length (wh) of prior bboxes. - - Return: - encoded boxes (Tensor), Shape: (num_proposals, 4) - """ - - # dist b/t match center and prior's center - if not isinstance(normalizer, float): - normalizer = torch.tensor(normalizer, device=priors.device) - assert len(normalizer) == 4, 'Normalizer must have length = 4' - assert priors.size(0) == gts.size(0) - prior_centers = (priors[:, 0:2] + priors[:, 2:4]) / 2 - xmin, ymin, xmax, ymax = gts.split(1, dim=1) - top = prior_centers[:, 1].unsqueeze(1) - ymin - bottom = ymax - prior_centers[:, 1].unsqueeze(1) - left = prior_centers[:, 0].unsqueeze(1) - xmin - right = xmax - prior_centers[:, 0].unsqueeze(1) - loc = torch.cat((top, bottom, left, right), dim=1) - if normalize_by_wh: - # Normalize tblr by anchor width and height - wh = priors[:, 2:4] - priors[:, 0:2] - w, h = torch.split(wh, 1, dim=1) - loc[:, :2] /= h # tb is normalized by h - loc[:, 2:] /= w # lr is normalized by w - # Normalize tblr by the given normalization factor - return loc / normalizer - - -@mmcv.jit(coderize=True) -def tblr2bboxes(priors, - tblr, - normalizer=4.0, - normalize_by_wh=True, - max_shape=None, - clip_border=True): - """Decode tblr outputs to prediction boxes. - - The process includes 3 steps: 1) De-normalize tblr coordinates by - multiplying it with `normalizer`; 2) De-normalize tblr coordinates by the - prior bbox width and height if `normalize_by_wh` is `True`; 3) Convert - tblr (top, bottom, left, right) pair relative to the center of priors back - to (xmin, ymin, xmax, ymax) coordinate. - - Args: - priors (Tensor): Prior boxes in point form (x0, y0, x1, y1) - Shape: (N,4) or (B, N, 4). - tblr (Tensor): Coords of network output in tblr form - Shape: (N, 4) or (B, N, 4). - normalizer (Sequence[float] | float): Normalization parameter of - encoded boxes. By list, it represents the normalization factors at - tblr dims. By float, it is the unified normalization factor at all - dims. Default: 4.0 - normalize_by_wh (bool): Whether the tblr coordinates have been - normalized by the side length (wh) of prior bboxes. - max_shape (Sequence[int] or torch.Tensor or Sequence[ - Sequence[int]],optional): Maximum bounds for boxes, specifies - (H, W, C) or (H, W). If priors shape is (B, N, 4), then - the max_shape should be a Sequence[Sequence[int]] - and the length of max_shape should also be B. - clip_border (bool, optional): Whether clip the objects outside the - border of the image. Defaults to True. 
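To make these conventions concrete, here is a self-contained re-derivation of the encoding for one prior/ground-truth pair (a sketch that mirrors bboxes2tblr with normalize_by_wh=True; it does not call into mmdet):

import torch

def encode_tblr(prior, gt, normalizer=4.0):
    # prior, gt: (N, 4) boxes as (x1, y1, x2, y2)
    center = (prior[:, :2] + prior[:, 2:]) / 2
    wh = prior[:, 2:] - prior[:, :2]
    top = center[:, 1] - gt[:, 1]
    bottom = gt[:, 3] - center[:, 1]
    left = center[:, 0] - gt[:, 0]
    right = gt[:, 2] - center[:, 0]
    loc = torch.stack([top, bottom, left, right], dim=1)
    loc[:, :2] /= wh[:, 1:2]  # tb normalized by prior height
    loc[:, 2:] /= wh[:, 0:1]  # lr normalized by prior width
    return loc / normalizer

prior = torch.tensor([[0., 0., 4., 4.]])  # center (2, 2), w = h = 4
gt = torch.tensor([[1., 0., 5., 4.]])     # same box shifted right by 1
print(encode_tblr(prior, gt))  # tensor([[0.1250, 0.1250, 0.0625, 0.1875]])

Decoding reverses each step: multiply by the normalizer, rescale by (h, h, w, w), and add or subtract the offsets from the prior center to recover (1, 0, 5, 4).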
- - Return: - encoded boxes (Tensor): Boxes with shape (N, 4) or (B, N, 4) - """ - if not isinstance(normalizer, float): - normalizer = torch.tensor(normalizer, device=priors.device) - assert len(normalizer) == 4, 'Normalizer must have length = 4' - assert priors.size(0) == tblr.size(0) - if priors.ndim == 3: - assert priors.size(1) == tblr.size(1) - - loc_decode = tblr * normalizer - prior_centers = (priors[..., 0:2] + priors[..., 2:4]) / 2 - if normalize_by_wh: - wh = priors[..., 2:4] - priors[..., 0:2] - w, h = torch.split(wh, 1, dim=-1) - # Inplace operation with slice would failed for exporting to ONNX - th = h * loc_decode[..., :2] # tb - tw = w * loc_decode[..., 2:] # lr - loc_decode = torch.cat([th, tw], dim=-1) - # Cannot be exported using onnx when loc_decode.split(1, dim=-1) - top, bottom, left, right = loc_decode.split((1, 1, 1, 1), dim=-1) - xmin = prior_centers[..., 0].unsqueeze(-1) - left - xmax = prior_centers[..., 0].unsqueeze(-1) + right - ymin = prior_centers[..., 1].unsqueeze(-1) - top - ymax = prior_centers[..., 1].unsqueeze(-1) + bottom - - bboxes = torch.cat((xmin, ymin, xmax, ymax), dim=-1) - - if clip_border and max_shape is not None: - if not isinstance(max_shape, torch.Tensor): - max_shape = priors.new_tensor(max_shape) - max_shape = max_shape[..., :2].type_as(priors) - if max_shape.ndim == 2: - assert bboxes.ndim == 3 - assert max_shape.size(0) == bboxes.size(0) - - min_xy = priors.new_tensor(0) - max_xy = torch.cat([max_shape, max_shape], - dim=-1).flip(-1).unsqueeze(-2) - bboxes = torch.where(bboxes < min_xy, min_xy, bboxes) - bboxes = torch.where(bboxes > max_xy, max_xy, bboxes) - - return bboxes diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/losses/__init__.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/losses/__init__.py deleted file mode 100644 index 297aa228277768eb0ba0e8a377f19704d1feeca8..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/losses/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -from .accuracy import Accuracy, accuracy -from .ae_loss import AssociativeEmbeddingLoss -from .balanced_l1_loss import BalancedL1Loss, balanced_l1_loss -from .cross_entropy_loss import (CrossEntropyLoss, binary_cross_entropy, - cross_entropy, mask_cross_entropy) -from .focal_loss import FocalLoss, sigmoid_focal_loss -from .gaussian_focal_loss import GaussianFocalLoss -from .gfocal_loss import DistributionFocalLoss, QualityFocalLoss -from .ghm_loss import GHMC, GHMR -from .iou_loss import (BoundedIoULoss, CIoULoss, DIoULoss, GIoULoss, IoULoss, - bounded_iou_loss, iou_loss) -from .kd_loss import KnowledgeDistillationKLDivLoss -from .mse_loss import MSELoss, mse_loss -from .pisa_loss import carl_loss, isr_p -from .smooth_l1_loss import L1Loss, SmoothL1Loss, l1_loss, smooth_l1_loss -from .utils import reduce_loss, weight_reduce_loss, weighted_loss -from .varifocal_loss import VarifocalLoss - -__all__ = [ - 'accuracy', 'Accuracy', 'cross_entropy', 'binary_cross_entropy', - 'mask_cross_entropy', 'CrossEntropyLoss', 'sigmoid_focal_loss', - 'FocalLoss', 'smooth_l1_loss', 'SmoothL1Loss', 'balanced_l1_loss', - 'BalancedL1Loss', 'mse_loss', 'MSELoss', 'iou_loss', 'bounded_iou_loss', - 'IoULoss', 'BoundedIoULoss', 'GIoULoss', 'DIoULoss', 'CIoULoss', 'GHMC', - 'GHMR', 'reduce_loss', 'weight_reduce_loss', 'weighted_loss', 'L1Loss', - 'l1_loss', 'isr_p', 'carl_loss', 'AssociativeEmbeddingLoss', - 'GaussianFocalLoss', 'QualityFocalLoss', 'DistributionFocalLoss', - 'VarifocalLoss', 
'KnowledgeDistillationKLDivLoss' -] diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_480x480_40k_pascal_context.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_480x480_40k_pascal_context.py deleted file mode 100644 index 9d493ef527bb161be98d0e4ea433104b3bb9ff48..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_480x480_40k_pascal_context.py +++ /dev/null @@ -1,10 +0,0 @@ -_base_ = [ - '../_base_/models/deeplabv3_r50-d8.py', - '../_base_/datasets/pascal_context.py', '../_base_/default_runtime.py', - '../_base_/schedules/schedule_40k.py' -] -model = dict( - decode_head=dict(num_classes=60), - auxiliary_head=dict(num_classes=60), - test_cfg=dict(mode='slide', crop_size=(480, 480), stride=(320, 320))) -optimizer = dict(type='SGD', lr=0.004, momentum=0.9, weight_decay=0.0001) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_d6_r50-d16_512x1024_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_d6_r50-d16_512x1024_40k_cityscapes.py deleted file mode 100644 index f30646ede7b036e6c82c335729b19f92293efb35..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_d6_r50-d16_512x1024_40k_cityscapes.py +++ /dev/null @@ -1,8 +0,0 @@ -_base_ = [ - '../_base_/models/fcn_r50-d8.py', '../_base_/datasets/cityscapes.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py' -] -model = dict( - backbone=dict(dilations=(1, 1, 1, 2), strides=(1, 2, 2, 1)), - decode_head=dict(dilation=6), - auxiliary_head=dict(dilation=6)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/nonlocal_net/nonlocal_r101-d8_512x512_40k_voc12aug.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/nonlocal_net/nonlocal_r101-d8_512x512_40k_voc12aug.py deleted file mode 100644 index 40d9190fba223251b794c105b036e4794865f785..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/nonlocal_net/nonlocal_r101-d8_512x512_40k_voc12aug.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './nonlocal_r50-d8_512x512_40k_voc12aug.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/resnest/fcn_s101-d8_512x512_160k_ade20k.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/resnest/fcn_s101-d8_512x512_160k_ade20k.py deleted file mode 100644 index dcee8c280e833825f84b944c6db21e9a43125e06..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/resnest/fcn_s101-d8_512x512_160k_ade20k.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = '../fcn/fcn_r101-d8_512x512_160k_ade20k.py' -model = dict( - pretrained='open-mmlab://resnest101', - backbone=dict( - type='ResNeSt', - stem_channels=128, - radix=2, - reduction_factor=4, - avg_down_stride=True)) diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/losses/sisnr.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/losses/sisnr.py deleted file mode 100644 index 30f1fa1de9aca22758b6665609a1eacc0bd992ca..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/losses/sisnr.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. 
-# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import math -import typing as tp - -import torch -from torch import nn -from torch.nn import functional as F - - -def _unfold(a: torch.Tensor, kernel_size: int, stride: int) -> torch.Tensor: - """Given input of size [*OT, T], output Tensor of size [*OT, F, K] - with K the kernel size, by extracting frames with the given stride. - This will pad the input so that `F = ceil(T / K)`. - see https://github.com/pytorch/pytorch/issues/60466 - """ - *shape, length = a.shape - n_frames = math.ceil(length / stride) - tgt_length = (n_frames - 1) * stride + kernel_size - a = F.pad(a, (0, tgt_length - length)) - strides = list(a.stride()) - assert strides[-1] == 1, "data should be contiguous" - strides = strides[:-1] + [stride, 1] - return a.as_strided([*shape, n_frames, kernel_size], strides) - - -def _center(x: torch.Tensor) -> torch.Tensor: - return x - x.mean(-1, True) - - -def _norm2(x: torch.Tensor) -> torch.Tensor: - return x.pow(2).sum(-1, True) - - -class SISNR(nn.Module): - """SISNR loss. - - Input should be [B, C, T], output is scalar. - - Args: - sample_rate (int): Sample rate. - segment (float or None): Evaluate on chunks of that many seconds. If None, evaluate on - entire audio only. - overlap (float): Overlap between chunks, i.e. 0.5 = 50 % overlap. - epsilon (float): Epsilon value for numerical stability. - """ - def __init__( - self, - sample_rate: int = 16000, - segment: tp.Optional[float] = 20, - overlap: float = 0.5, - epsilon: float = torch.finfo(torch.float32).eps, - ): - super().__init__() - self.sample_rate = sample_rate - self.segment = segment - self.overlap = overlap - self.epsilon = epsilon - - def forward(self, out_sig: torch.Tensor, ref_sig: torch.Tensor) -> torch.Tensor: - B, C, T = ref_sig.shape - assert ref_sig.shape == out_sig.shape - - if self.segment is None: - frame = T - stride = T - else: - frame = int(self.segment * self.sample_rate) - stride = int(frame * (1 - self.overlap)) - - epsilon = self.epsilon * frame # make epsilon prop to frame size. - - gt = _unfold(ref_sig, frame, stride) - est = _unfold(out_sig, frame, stride) - if self.segment is None: - assert gt.shape[-1] == 1 - - gt = _center(gt) - est = _center(est) - dot = torch.einsum("bcft,bcft->bcf", gt, est) - - proj = dot[:, :, :, None] * gt / (epsilon + _norm2(gt)) - noise = est - proj - - sisnr = 10 * ( - torch.log10(epsilon + _norm2(proj)) - torch.log10(epsilon + _norm2(noise)) - ) - return -1 * sisnr[..., 0].mean() diff --git a/spaces/HLasse/textdescriptives/data_viewer.py b/spaces/HLasse/textdescriptives/data_viewer.py deleted file mode 100644 index ae191efa34573e87b8ab9505cf8b1521ffa8ff24..0000000000000000000000000000000000000000 --- a/spaces/HLasse/textdescriptives/data_viewer.py +++ /dev/null @@ -1,26 +0,0 @@ -""" -Class for showing header and download button in the same row. 
-""" - -import streamlit as st - - -class DataViewer: - def _convert_df_to_csv(self, data, **kwargs): - return data.to_csv(**kwargs).encode("utf-8") - - def _header_and_download( - self, header, data, file_name, key=None, label="Download", help="Download data" - ): - col1, col2 = st.columns([9, 2]) - with col1: - st.subheader(header) - with col2: - st.write("") - st.download_button( - label=label, - data=self._convert_df_to_csv(data, index=False), - file_name=file_name, - key=key, - help=help, - ) diff --git a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/dissect.html b/spaces/HaHaBill/LandShapes-Antarctica/netdissect/dissect.html deleted file mode 100644 index e6bf4e9a418abdfef5ba09c4182bd71cf1420e52..0000000000000000000000000000000000000000 --- a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/dissect.html +++ /dev/null @@ -1,399 +0,0 @@ - - - - - - - - - - - - -
[dissect.html: ~399 lines of report markup whose tags were lost in extraction. The surviving template fragments show a per-layer summary ("{{lrec.interpretable}}/{{lrec.units.length}} units covering {{lrec.labels.length}} concepts with IoU ≥ {{dissect.iou_threshold}}"), sort-by-{{rank.name}} controls, and per-unit captions ("{{urec[lk+'_label']}} {{lrec.layer}} unit {{urec.unit}} ({{urec[lk+'_cat']}}) iou {{urec[lk + '_iou'] | fixed(2)}} {{lk}} {{urec[lk] | fixed(2)}}").]
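The headline counts in that template hinge on a unit-versus-concept IoU; a standalone sketch of the standard network-dissection overlap between binarized masks (an assumed definition for illustration, not code from this repo):

import numpy as np

def mask_iou(unit_mask, concept_mask):
    # both arguments are boolean arrays over the same spatial grid
    inter = np.logical_and(unit_mask, concept_mask).sum()
    union = np.logical_or(unit_mask, concept_mask).sum()
    return inter / union if union else 0.0

unit = np.random.rand(8, 8) > 0.96      # top-activation mask for one unit
concept = np.zeros((8, 8), dtype=bool)
concept[:, :4] = True                   # pixels labeled with the concept
print(mask_iou(unit, concept))  # a unit counts as interpretable when this clears the threshold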
- - - - - - - diff --git a/spaces/Hallucinate/demo/midas/midas_net_custom.py b/spaces/Hallucinate/demo/midas/midas_net_custom.py deleted file mode 100644 index 50e4acb5e53d5fabefe3dde16ab49c33c2b7797c..0000000000000000000000000000000000000000 --- a/spaces/Hallucinate/demo/midas/midas_net_custom.py +++ /dev/null @@ -1,128 +0,0 @@ -"""MidashNet: Network for monocular depth estimation trained by mixing several datasets. -This file contains code that is adapted from -https://github.com/thomasjpfan/pytorch_refinenet/blob/master/pytorch_refinenet/refinenet/refinenet_4cascade.py -""" -import torch -import torch.nn as nn - -from .base_model import BaseModel -from .blocks import FeatureFusionBlock, FeatureFusionBlock_custom, Interpolate, _make_encoder - - -class MidasNet_small(BaseModel): - """Network for monocular depth estimation. - """ - - def __init__(self, path=None, features=64, backbone="efficientnet_lite3", non_negative=True, exportable=True, channels_last=False, align_corners=True, - blocks={'expand': True}): - """Init. - - Args: - path (str, optional): Path to saved model. Defaults to None. - features (int, optional): Number of features. Defaults to 256. - backbone (str, optional): Backbone network for encoder. Defaults to resnet50 - """ - print("Loading weights: ", path) - - super(MidasNet_small, self).__init__() - - use_pretrained = False if path else True - - self.channels_last = channels_last - self.blocks = blocks - self.backbone = backbone - - self.groups = 1 - - features1=features - features2=features - features3=features - features4=features - self.expand = False - if "expand" in self.blocks and self.blocks['expand'] == True: - self.expand = True - features1=features - features2=features*2 - features3=features*4 - features4=features*8 - - self.pretrained, self.scratch = _make_encoder(self.backbone, features, use_pretrained, groups=self.groups, expand=self.expand, exportable=exportable) - - self.scratch.activation = nn.ReLU(False) - - self.scratch.refinenet4 = FeatureFusionBlock_custom(features4, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners) - self.scratch.refinenet3 = FeatureFusionBlock_custom(features3, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners) - self.scratch.refinenet2 = FeatureFusionBlock_custom(features2, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners) - self.scratch.refinenet1 = FeatureFusionBlock_custom(features1, self.scratch.activation, deconv=False, bn=False, align_corners=align_corners) - - - self.scratch.output_conv = nn.Sequential( - nn.Conv2d(features, features//2, kernel_size=3, stride=1, padding=1, groups=self.groups), - Interpolate(scale_factor=2, mode="bilinear"), - nn.Conv2d(features//2, 32, kernel_size=3, stride=1, padding=1), - self.scratch.activation, - nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0), - nn.ReLU(True) if non_negative else nn.Identity(), - nn.Identity(), - ) - - if path: - self.load(path) - - - def forward(self, x): - """Forward pass. 
- - Args: - x (tensor): input data (image) - - Returns: - tensor: depth - """ - if self.channels_last==True: - print("self.channels_last = ", self.channels_last) - x.contiguous(memory_format=torch.channels_last) - - - layer_1 = self.pretrained.layer1(x) - layer_2 = self.pretrained.layer2(layer_1) - layer_3 = self.pretrained.layer3(layer_2) - layer_4 = self.pretrained.layer4(layer_3) - - layer_1_rn = self.scratch.layer1_rn(layer_1) - layer_2_rn = self.scratch.layer2_rn(layer_2) - layer_3_rn = self.scratch.layer3_rn(layer_3) - layer_4_rn = self.scratch.layer4_rn(layer_4) - - - path_4 = self.scratch.refinenet4(layer_4_rn) - path_3 = self.scratch.refinenet3(path_4, layer_3_rn) - path_2 = self.scratch.refinenet2(path_3, layer_2_rn) - path_1 = self.scratch.refinenet1(path_2, layer_1_rn) - - out = self.scratch.output_conv(path_1) - - return torch.squeeze(out, dim=1) - - - -def fuse_model(m): - prev_previous_type = nn.Identity() - prev_previous_name = '' - previous_type = nn.Identity() - previous_name = '' - for name, module in m.named_modules(): - if prev_previous_type == nn.Conv2d and previous_type == nn.BatchNorm2d and type(module) == nn.ReLU: - # print("FUSED ", prev_previous_name, previous_name, name) - torch.quantization.fuse_modules(m, [prev_previous_name, previous_name, name], inplace=True) - elif prev_previous_type == nn.Conv2d and previous_type == nn.BatchNorm2d: - # print("FUSED ", prev_previous_name, previous_name) - torch.quantization.fuse_modules(m, [prev_previous_name, previous_name], inplace=True) - # elif previous_type == nn.Conv2d and type(module) == nn.ReLU: - # print("FUSED ", previous_name, name) - # torch.quantization.fuse_modules(m, [previous_name, name], inplace=True) - - prev_previous_type = previous_type - prev_previous_name = previous_name - previous_type = type(module) - previous_name = name \ No newline at end of file diff --git a/spaces/Hallucinate/demo/taming/data/custom.py b/spaces/Hallucinate/demo/taming/data/custom.py deleted file mode 100644 index 33f302a4b55ba1e8ec282ec3292b6263c06dfb91..0000000000000000000000000000000000000000 --- a/spaces/Hallucinate/demo/taming/data/custom.py +++ /dev/null @@ -1,38 +0,0 @@ -import os -import numpy as np -import albumentations -from torch.utils.data import Dataset - -from taming.data.base import ImagePaths, NumpyPaths, ConcatDatasetWithIndex - - -class CustomBase(Dataset): - def __init__(self, *args, **kwargs): - super().__init__() - self.data = None - - def __len__(self): - return len(self.data) - - def __getitem__(self, i): - example = self.data[i] - return example - - - -class CustomTrain(CustomBase): - def __init__(self, size, training_images_list_file): - super().__init__() - with open(training_images_list_file, "r") as f: - paths = f.read().splitlines() - self.data = ImagePaths(paths=paths, size=size, random_crop=False) - - -class CustomTest(CustomBase): - def __init__(self, size, test_images_list_file): - super().__init__() - with open(test_images_list_file, "r") as f: - paths = f.read().splitlines() - self.data = ImagePaths(paths=paths, size=size, random_crop=False) - - diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/multilingual/data_scripts/download_iitb.sh b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/multilingual/data_scripts/download_iitb.sh deleted file mode 100644 index a884e20839e2a41a57405cb6af362e37bd16ab6f..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/multilingual/data_scripts/download_iitb.sh +++ /dev/null @@ 
-1,35 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - - -if [ -z $WORKDIR_ROOT ] ; -then - echo "please specify your working directory root in environment variable WORKDIR_ROOT. Exitting..." - exit -fi - -IITB=$WORKDIR_ROOT/IITB -mkdir -p $IITB -pushd $IITB - -wget http://www.cfilt.iitb.ac.in/~moses/iitb_en_hi_parallel/iitb_corpus_download/parallel.tgz -tar -xvzf parallel.tgz - -wget http://www.cfilt.iitb.ac.in/~moses/iitb_en_hi_parallel/iitb_corpus_download/dev_test.tgz -tar -xvzf dev_test.tgz - -DESTDIR=${WORKDIR_ROOT}/ML50/raw/ - -cp parallel/IITB.en-hi.en $DESTDIR/train.hi_IN-en_XX.en_XX -cp parallel/IITB.en-hi.hi $DESTDIR/train.hi_IN-en_XX.hi_IN - -cp dev_test/dev.en $DESTDIR/valid.hi_IN-en_XX.en_XX -cp dev_test/dev.hi $DESTDIR/valid.hi_IN-en_XX.hi_IN - -cp dev_test/test.en $DESTDIR/test.hi_IN-en_XX.en_XX -cp dev_test/test.hi $DESTDIR/test.hi_IN-en_XX.hi_IN -popd \ No newline at end of file diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_text_joint_to_text/criterions/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_text_joint_to_text/criterions/__init__.py deleted file mode 100644 index 7faae73119321af0b34fe8e26499a2ef5577291a..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_text_joint_to_text/criterions/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import importlib -import os - - -for file in os.listdir(os.path.dirname(__file__)): - if file.endswith(".py") and not file.startswith("_"): - criterion_name = file[: file.find(".py")] - importlib.import_module( - "examples.speech_text_joint_to_text.criterions." + criterion_name - ) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/audio/frm_text_to_speech_dataset.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/audio/frm_text_to_speech_dataset.py deleted file mode 100644 index 125b1fc0c0a67190e6d9ba4866664cbc9006a142..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/audio/frm_text_to_speech_dataset.py +++ /dev/null @@ -1,207 +0,0 @@ -# Copyright (c) 2017-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the LICENSE file in -# the root directory of this source tree. 
An additional grant of patent rights -# can be found in the PATENTS file in the same directory.abs - -import csv -import logging -import os.path as op -from typing import List, Optional - -import numpy as np -import torch -from fairseq.data import Dictionary -from fairseq.data.audio.speech_to_text_dataset import ( - S2TDataConfig -) -from fairseq.data.audio.text_to_speech_dataset import ( - TextToSpeechDataset, TextToSpeechDatasetCreator -) - -logger = logging.getLogger(__name__) - - -class FrmTextToSpeechDataset(TextToSpeechDataset): - def __init__( - self, - split: str, - is_train_split: bool, - data_cfg: S2TDataConfig, - audio_paths: List[str], - n_frames: List[int], - src_texts: Optional[List[str]] = None, - tgt_texts: Optional[List[str]] = None, - speakers: Optional[List[str]] = None, - src_langs: Optional[List[str]] = None, - tgt_langs: Optional[List[str]] = None, - ids: Optional[List[str]] = None, - tgt_dict: Optional[Dictionary] = None, - pre_tokenizer=None, - bpe_tokenizer=None, - n_frames_per_step=1, - speaker_to_id=None, - do_chunk=False, - chunk_bound=-1, - chunk_init=50, - chunk_incr=5, - add_eos=True, - dedup=True, - ref_fpu=-1 - ): - # It assumes texts are encoded at a fixed frame-rate - super().__init__( - split=split, - is_train_split=is_train_split, - data_cfg=data_cfg, - audio_paths=audio_paths, - n_frames=n_frames, - src_texts=src_texts, - tgt_texts=tgt_texts, - speakers=speakers, - src_langs=src_langs, - tgt_langs=tgt_langs, - ids=ids, - tgt_dict=tgt_dict, - pre_tokenizer=pre_tokenizer, - bpe_tokenizer=bpe_tokenizer, - n_frames_per_step=n_frames_per_step, - speaker_to_id=speaker_to_id - ) - - self.do_chunk = do_chunk - self.chunk_bound = chunk_bound - self.chunk_init = chunk_init - self.chunk_incr = chunk_incr - self.add_eos = add_eos - self.dedup = dedup - self.ref_fpu = ref_fpu - - self.chunk_size = -1 - - if do_chunk: - assert self.chunk_incr >= 0 - assert self.pre_tokenizer is None - - def __getitem__(self, index): - index, source, target, speaker_id, _, _, _ = super().__getitem__(index) - if target[-1].item() == self.tgt_dict.eos_index: - target = target[:-1] - - fpu = source.size(0) / target.size(0) # frame-per-unit - fps = self.n_frames_per_step - assert ( - self.ref_fpu == -1 or - abs((fpu * fps - self.ref_fpu) / self.ref_fpu) < 0.1 - ), f"{fpu*fps} != {self.ref_fpu}" - - # only chunk training split - if self.is_train_split and self.do_chunk and self.chunk_size > 0: - lang = target[:int(self.data_cfg.prepend_tgt_lang_tag)] - text = target[int(self.data_cfg.prepend_tgt_lang_tag):] - size = len(text) - chunk_size = min(self.chunk_size, size) - chunk_start = np.random.randint(size - chunk_size + 1) - text = text[chunk_start:chunk_start+chunk_size] - target = torch.cat((lang, text), 0) - - f_size = int(np.floor(chunk_size * fpu)) - f_start = int(np.floor(chunk_start * fpu)) - assert(f_size > 0) - source = source[f_start:f_start+f_size, :] - - if self.dedup: - target = torch.unique_consecutive(target) - - if self.add_eos: - eos_idx = self.tgt_dict.eos_index - target = torch.cat((target, torch.LongTensor([eos_idx])), 0) - - return index, source, target, speaker_id - - def set_epoch(self, epoch): - if self.is_train_split and self.do_chunk: - old = self.chunk_size - self.chunk_size = self.chunk_init + epoch * self.chunk_incr - if self.chunk_bound > 0: - self.chunk_size = min(self.chunk_size, self.chunk_bound) - logger.info(( - f"{self.split}: setting chunk size " - f"from {old} to {self.chunk_size}" - )) - - -class 
FrmTextToSpeechDatasetCreator(TextToSpeechDatasetCreator): - # inherit for key names - @classmethod - def from_tsv( - cls, - root: str, - data_cfg: S2TDataConfig, - split: str, - tgt_dict, - pre_tokenizer, - bpe_tokenizer, - is_train_split: bool, - n_frames_per_step: int, - speaker_to_id, - do_chunk: bool = False, - chunk_bound: int = -1, - chunk_init: int = 50, - chunk_incr: int = 5, - add_eos: bool = True, - dedup: bool = True, - ref_fpu: float = -1 - ) -> FrmTextToSpeechDataset: - tsv_path = op.join(root, f"{split}.tsv") - if not op.isfile(tsv_path): - raise FileNotFoundError(f"Dataset not found: {tsv_path}") - with open(tsv_path) as f: - reader = csv.DictReader( - f, - delimiter="\t", - quotechar=None, - doublequote=False, - lineterminator="\n", - quoting=csv.QUOTE_NONE, - ) - s = [dict(e) for e in reader] - assert len(s) > 0 - - ids = [ss[cls.KEY_ID] for ss in s] - audio_paths = [ - op.join(data_cfg.audio_root, ss[cls.KEY_AUDIO]) for ss in s - ] - n_frames = [int(ss[cls.KEY_N_FRAMES]) for ss in s] - tgt_texts = [ss[cls.KEY_TGT_TEXT] for ss in s] - src_texts = [ss.get(cls.KEY_SRC_TEXT, cls.DEFAULT_SRC_TEXT) for ss in s] - speakers = [ss.get(cls.KEY_SPEAKER, cls.DEFAULT_SPEAKER) for ss in s] - src_langs = [ss.get(cls.KEY_SRC_LANG, cls.DEFAULT_LANG) for ss in s] - tgt_langs = [ss.get(cls.KEY_TGT_LANG, cls.DEFAULT_LANG) for ss in s] - - return FrmTextToSpeechDataset( - split=split, - is_train_split=is_train_split, - data_cfg=data_cfg, - audio_paths=audio_paths, - n_frames=n_frames, - src_texts=src_texts, - tgt_texts=tgt_texts, - speakers=speakers, - src_langs=src_langs, - tgt_langs=tgt_langs, - ids=ids, - tgt_dict=tgt_dict, - pre_tokenizer=pre_tokenizer, - bpe_tokenizer=bpe_tokenizer, - n_frames_per_step=n_frames_per_step, - speaker_to_id=speaker_to_id, - do_chunk=do_chunk, - chunk_bound=chunk_bound, - chunk_init=chunk_init, - chunk_incr=chunk_incr, - add_eos=add_eos, - dedup=dedup, - ref_fpu=ref_fpu - ) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/lightconv_layer/cuda_function_gen.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/lightconv_layer/cuda_function_gen.py deleted file mode 100644 index a25433dd8edae2f0b52d7d0eeeb829cabc6b4b89..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/lightconv_layer/cuda_function_gen.py +++ /dev/null @@ -1,289 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -def gen_forward(): - - kernels = [3, 5, 7, 15, 31, 63, 127, 255] - seqs = [32 * x for x in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]] - - head = """ -/** - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. 
- */
-
-#include "lightconv_cuda.cuh"
-
-std::vector<at::Tensor> lightconv_cuda_forward(at::Tensor input, at::Tensor filters, int padding_l) {
-
-    at::DeviceGuard g(input.device());
-    const auto minibatch = input.size(0);
-    const auto numFeatures = input.size(1);
-    const auto sequenceLength = input.size(2);
-
-    const auto numHeads = filters.size(0);
-    const auto filterSize = filters.size(1);
-
-    const auto numFiltersInBlock = numFeatures / numHeads;
-
-    const dim3 blocks(minibatch, numFeatures);
-
-    auto output = at::zeros_like(input);
-    auto stream = at::cuda::getCurrentCUDAStream();
-"""
-
-    sequence_if = """
-    if (sequenceLength <= {seq}) {{
-        switch(filterSize) {{
-"""
-
-    case_k = """
-            case {k}:
-"""
-
-    main_block = """
-                if (padding_l == {pad}) {{
-                    AT_DISPATCH_FLOATING_TYPES_AND_HALF(input.scalar_type(), "lightconv_forward", ([&] {{
-                        lightconv_forward_kernel<{k}, {b_size}, {pad}, scalar_t>
-                        <<<blocks, {b_size}, 0, stream>>>(
-                                input.data<scalar_t>(),
-                                filters.data<scalar_t>(),
-                                minibatch,
-                                sequenceLength,
-                                numFeatures,
-                                numFiltersInBlock,
-                                output.data<scalar_t>());
-                    }}));
-                }} else
-"""
-
-    bad_padding = """
-                {
-                    std::cout << "WARNING: Unsupported padding size - skipping forward pass" << std::endl;
-                }
-                break;
-"""
-
-    bad_filter = """
-            default:
-                std::cout << "WARNING: Unsupported filter length passed - skipping forward pass" << std::endl;
-        }
-"""
-
-    con_else = """
-    } else
-"""
-
-    final_else = """
-    {
-        switch(filterSize) {
-"""
-
-    final_return = """
-    }
-
-    return {output};
-}
-"""
-
-    with open("lightconv_cuda_forward.cu", "w") as forward:
-        forward.write(head)
-        for seq in seqs:
-            forward.write(sequence_if.format(seq=seq))
-            for k in kernels:
-                forward.write(case_k.format(k=k))
-                for pad in [k // 2, k - 1]:
-                    forward.write(main_block.format(k=k, b_size=seq, pad=pad))
-                forward.write(bad_padding)
-            forward.write(bad_filter)
-            forward.write(con_else)
-
-        forward.write(final_else)
-        for k in kernels:
-            forward.write(case_k.format(k=k))
-            for pad in [k // 2, k - 1]:
-                forward.write(main_block.format(k=k, b_size=seq, pad=pad))
-            forward.write(bad_padding)
-        forward.write(bad_filter)
-        forward.write(final_return)
-
-
-def gen_backward():
-
-    head = """
-/**
- * Copyright (c) Facebook, Inc. and its affiliates.
- *
- * This source code is licensed under the MIT license found in the
- * LICENSE file in the root directory of this source tree.
- */
-
-#include "lightconv_cuda.cuh"
-
-std::vector<at::Tensor> lightconv_cuda_backward(
-        at::Tensor gradOutput,
-        int padding_l,
-        at::Tensor input,
-        at::Tensor filters) {
-
-    // gradWrtInput
-    const int minibatch = input.size(0);
-    const int numFeatures = input.size(1);
-    const int sequenceLength = input.size(2);
-
-    const int numHeads = filters.size(0);
-    const int filterSize = filters.size(1);
-
-    const dim3 gradBlocks(minibatch, numFeatures);
-    const dim3 weightGradFirstpassShortBlocks(minibatch, numHeads);
-    const dim3 weightGradSecondpassBlocks(numHeads, filterSize);
-
-    const int numFiltersInBlock = numFeatures / numHeads;
-
-    auto gradInput = at::zeros_like(input);
-    auto gradFilters = at::zeros_like(filters);
-
-    at::DeviceGuard g(input.device());
-    auto stream = at::cuda::getCurrentCUDAStream();
-
-    switch(filterSize) {
-"""
-
-    sequence_if = """
-        if (sequenceLength <= {seq}) {{
-"""
-
-    case_k = """
-        case {k}:
-"""
-
-    main_block = """
-            if (padding_l == {p}) {{
-                AT_DISPATCH_FLOATING_TYPES_AND_HALF(input.scalar_type(), "lightconv_backward", ([&] {{
-                    lightconv_grad_wrt_input_kernel<{k}, {b_size}, {p}, scalar_t>
-                    <<<gradBlocks, {b_size}, 0, stream>>>(
-                                gradOutput.data<scalar_t>(),
-                                filters.data<scalar_t>(),
-                                minibatch,
-                                sequenceLength,
-                                numFeatures,
-                                numFiltersInBlock,
-                                gradInput.data<scalar_t>());
-
-"""
-
-    weight_grad_short = """
-                    at::Tensor tempSumGradFilters = at::zeros({{minibatch, numHeads, filterSize}}, input.options().dtype(at::kFloat));
-                    lightconv_grad_wrt_weights_firstpass_short_kernel<{k}, {b_size}, {p}, scalar_t>
-                    <<<weightGradFirstpassShortBlocks, {b_size}, 0, stream>>>(
-                                input.data<scalar_t>(),
-                                gradOutput.data<scalar_t>(),
-                                minibatch,
-                                sequenceLength,
-                                numFeatures,
-                                numFiltersInBlock,
-                                numHeads,
-                                tempSumGradFilters.data<float>()
-                    );
-
-                    lightconv_grad_wrt_weights_secondpass_short_kernel<{k}, {b_size}, scalar_t>
-                    <<<weightGradSecondpassBlocks, {b_size}, 0, stream>>>(
-                                tempSumGradFilters.data<float>(),
-                                minibatch,
-                                numFiltersInBlock,
-                                gradFilters.data<scalar_t>()
-                    );
-                }}));
-            }} else
-"""
-
-    weight_grad = """
-                    at::Tensor tempSumGradFilters = at::zeros({{minibatch, numFeatures, filterSize}}, input.options().dtype(at::kFloat));
-                    lightconv_grad_wrt_weights_firstpass_kernel<{k}, {b_size}, {p}, scalar_t>
-                    <<<gradBlocks, {b_size}, 0, stream>>>(
-                                input.data<scalar_t>(),
-                                gradOutput.data<scalar_t>(),
-                                minibatch,
-                                sequenceLength,
-                                numFeatures,
-                                numFiltersInBlock,
-                                tempSumGradFilters.data<float>()
-                    );
-
-                    lightconv_grad_wrt_weights_secondpass_kernel<{k}, {b_size}, scalar_t>
-                    <<<weightGradSecondpassBlocks, {b_size}, 0, stream>>>(
-                                tempSumGradFilters.data<float>(),
-                                minibatch,
-                                numFiltersInBlock,
-                                gradFilters.data<scalar_t>()
-                    );
-                }}));
-            }} else
-"""
-
-    bad_padding = """
-            {
-                std::cout << "WARNING: Unsupported padding size - skipping backward pass" << std::endl;
-            }
-"""
-
-    breakout = """
-            break;
-"""
-
-    bad_filter = """
-        default:
-            std::cout << "WARNING: Unsupported filter length passed - skipping backward pass" << std::endl;
-"""
-
-    con_else = """
-        } else
-"""
-
-    final_else = """
-        {
-            switch(filterSize) {
-"""
-
-    last_return = """
-    }
-    return {gradInput, gradFilters};
-}
-"""
-
-    kernels = [3, 5, 7, 15, 31, 63, 127, 255]
-    seqs = [32 * x for x in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]]
-    thresh = [32, 32, 64, 128, 256, -1, -1, -1]
-    max_mem = [-1, -1, -1, -1, -1, 192, 96, 64]
-
-    with open("lightconv_cuda_backward.cu", "w") as backward:
-        backward.write(head)
-        for (k, t, mem) in zip(kernels, thresh, max_mem):
-            backward.write(case_k.format(k=k))
-            for seq in seqs:
-                if (t == -1 or seq <= t) and (mem == -1 or seq < mem):
-                    backward.write(sequence_if.format(seq=seq))
-                    for p in [k // 2, k - 1]:
-                        backward.write(main_block.format(k=k, b_size=seq, p=p))
-                        backward.write(weight_grad_short.format(k=k, b_size=seq, p=p))
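-                        # (added commentary, not in the original generator) the branch
-                        # above emits the "short" two-pass weight-gradient kernels and
-                        # is taken only while seq fits this kernel size's thresh /
-                        # max_mem limits; the else branch below falls back to the
-                        # generic kernels with a fixed 32-thread block size and then
-                        # breaks out of the sequence-length loop.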
-                    backward.write(bad_padding)
-                else:
-                    for p in [k // 2, k - 1]:
-                        backward.write(main_block.format(k=k, b_size=32, p=p))
-                        backward.write(weight_grad.format(k=k, b_size=32, p=p))
-                    backward.write(bad_padding)
-                    backward.write(breakout)
-                    break
-            backward.write(con_else)
-            backward.write(bad_filter)
-        backward.write(last_return)
-
-
-if __name__ == "__main__":
-    gen_forward()
-    gen_backward()
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/ngram_repeat_block.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/ngram_repeat_block.py
deleted file mode 100644
index 854125149448a2d37ad2773cd1e6d614e73e0e79..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/ngram_repeat_block.py
+++ /dev/null
@@ -1,150 +0,0 @@
-# Originally from Microsoft Corporation.
-# Licensed under the MIT License.
-
-""" Wrapper for ngram_repeat_block cuda extension """
-import torch
-from torch import nn
-
-import math
-from typing import Dict, List, Optional
-import warnings
-
-try:
-    from fairseq import ngram_repeat_block_cuda
-
-    EXTENSION_BUILT = True
-except ImportError:
-    EXTENSION_BUILT = False
-
-
-def is_cuda_extension_usable() -> bool:
-    """Check whether ngram_repeat_block_cuda is built properly"""
-    if not EXTENSION_BUILT or not torch.cuda.is_available():
-        return False
-    bsz = 2
-    tokens = torch.tensor([[4, 4, 3, 2], [1, 2, 3, 4]], dtype=torch.long, device="cuda")
-    lprobs = torch.rand((8, 12), device="cuda")
-    try:
-        outputs = ngram_repeat_block_cuda.forward(tokens, lprobs, bsz, 3, 4, 3)
-        outputs = outputs + 4  # This line breaks if the extension is built incorrectly.
-        return True
-    except RuntimeError:
-        warnings.warn(
-            "NGramRepeatBlock extension must be rebuilt. "
-            'Run TORCH_CUDA_ARCH_LIST="6.0;6.1;7.0" python setup.py build_ext --inplace'
-        )
-        return False
-
-
-class NGramRepeatBlock(nn.Module):
-    """ Wrapper class for calling ngram_repeat_block cuda extension """
-
-    def __init__(self, no_repeat_ngram_size: int, use_extension: bool = True):
-        super().__init__()
-        self.use_extension = is_cuda_extension_usable() if use_extension else False
-        self.no_repeat_ngram_size = no_repeat_ngram_size
-
-    def reset_parameters(self):
-        pass
-
-    @torch.jit.unused
-    def call_cuda_extension(
-        self,
-        tokens,
-        lprobs,
-        bsz: int,
-        beam_size: int,
-        step: int,
-    ):
-        return ngram_repeat_block_cuda.forward(
-            tokens, lprobs, bsz, step, beam_size, self.no_repeat_ngram_size
-        )
-
-    def forward(
-        self,
-        tokens,
-        lprobs,
-        bsz: int,
-        beam_size: int,
-        step: int,
-    ):
-        """
-        Args:
-            tokens(Tensor): Input tokens(Bsz*beam, seq_len)
-            lprobs(Tensor): likelihood probability,
-                Expected to be updated in place.(Bsz*beam, vocab_size)
-            bsz(int): batch size
-            step(int): current step
-            beam_size(int): beam size
-            no_repeat_ngram_size(int): Ngram size
-        """
-        msg = f"expected {bsz * beam_size} got"
-        assert tokens.size(0) == bsz * beam_size, f"{msg} {tokens.size(0)}"
-        assert lprobs.size(0) == bsz * beam_size, f"{msg} {lprobs.size(0)}"
-        if self.use_extension:
-            return self.call_cuda_extension(tokens, lprobs, bsz, beam_size, step)
-
-        else:
-            return self._no_repeat_ngram(
-                tokens,
-                lprobs,
-                bsz,
-                beam_size,
-                step,
-            )
-
-    def _no_repeat_ngram(self, tokens, lprobs, bsz: int, beam_size: int, step: int):
-        """For each hypothesis generate a list of previous ngrams and set associated lprobs to -inf"""
-        gen_ngrams: List[Dict[str, List[int]]] = [
-            torch.jit.annotate(Dict[str, List[int]], {})
-            for bbsz_idx in range(bsz * beam_size)
-        ]
-
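-        # (added commentary, not in the original) bookkeeping sketch: with
-        # no_repeat_ngram_size=2 and a hypothesis [5, 6, 7, 6], the loop below
-        # fills gen_ngrams for that hypothesis with {"5": [6], "6": [7], "7": [6]},
-        # i.e. each (n-1)-gram key maps to every token that has followed it.
-        # At scoring time the last n-1 generated tokens form the lookup key and
-        # the stored continuations get their lprobs set to -inf.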
cpu_tokens = tokens.cpu() - for bbsz_idx in range(bsz * beam_size): - gen_tokens: List[int] = cpu_tokens[bbsz_idx].tolist() - for ngram in self.transpose_list( - [gen_tokens[i:] for i in range(self.no_repeat_ngram_size)] - ): - key = ",".join([str(x) for x in ngram[:-1]]) - gen_ngrams[bbsz_idx][key] = gen_ngrams[bbsz_idx].get( - key, torch.jit.annotate(List[int], []) - ) + [ngram[-1]] - if step + 2 - self.no_repeat_ngram_size >= 0: - # no banned tokens if we haven't generated no_repeat_ngram_size tokens yet - banned_tokens = [ - self.calculate_banned_tokens( - tokens, step, gen_ngrams, self.no_repeat_ngram_size, bbsz_idx - ) - for bbsz_idx in range(bsz * beam_size) - ] - else: - banned_tokens = [ - torch.jit.annotate(List[int], []) for bbsz_idx in range(bsz * beam_size) - ] - for bbsz_idx in range(bsz * beam_size): - lprobs[bbsz_idx][ - torch.tensor(banned_tokens[bbsz_idx], dtype=torch.int64) - ] = torch.tensor(-math.inf).to(lprobs) - return lprobs - - @staticmethod - def calculate_banned_tokens( - tokens, - step: int, - gen_ngrams: List[Dict[str, List[int]]], - no_repeat_ngram_size: int, - bbsz_idx: int, - ): - tokens_list: List[int] = tokens[ - bbsz_idx, step + 2 - no_repeat_ngram_size : step + 1 - ].tolist() - # before decoding the next token, prevent decoding of ngrams that have already appeared - ngram_index = ",".join([str(x) for x in tokens_list]) - return gen_ngrams[bbsz_idx].get(ngram_index, torch.jit.annotate(List[int], [])) - - @staticmethod - def transpose_list(l: List[List[int]]): - # GeneratorExp aren't supported in TS so ignoring the lint - min_len = min([len(x) for x in l]) # noqa - l2 = [[row[i] for row in l] for i in range(min_len)] - return l2 diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/hifi_gan/utils.py b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/hifi_gan/utils.py deleted file mode 100644 index 71e9b2c99e053e2d4239074a67d64b834898c348..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/hifi_gan/utils.py +++ /dev/null @@ -1,57 +0,0 @@ -import glob -import os -import matplotlib -import torch -from torch.nn.utils import weight_norm - -matplotlib.use("Agg") -import matplotlib.pylab as plt - - -def plot_spectrogram(spectrogram): - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", interpolation="none") - plt.colorbar(im, ax=ax) - - fig.canvas.draw() - plt.close() - - return fig - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def apply_weight_norm(m): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - weight_norm(m) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def load_checkpoint(filepath, device): - assert os.path.isfile(filepath) - print("Loading '{}'".format(filepath)) - checkpoint_dict = torch.load(filepath, map_location=device) - print("Complete.") - return checkpoint_dict - - -def save_checkpoint(filepath, obj): - print("Saving checkpoint to {}".format(filepath)) - torch.save(obj, filepath) - print("Complete.") - - -def scan_checkpoint(cp_dir, prefix): - pattern = os.path.join(cp_dir, prefix + "????????") - cp_list = glob.glob(pattern) - if len(cp_list) == 0: - return None - return sorted(cp_list)[-1] diff --git a/spaces/Harveenchadha/oiTrans/scripts/postprocess_translate.py b/spaces/Harveenchadha/oiTrans/scripts/postprocess_translate.py deleted file mode 100644 
index 9334aaadb21168cb42ac3ff5e34ded386f00e95c..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/oiTrans/scripts/postprocess_translate.py +++ /dev/null @@ -1,110 +0,0 @@ -INDIC_NLP_LIB_HOME = "indic_nlp_library" -INDIC_NLP_RESOURCES = "indic_nlp_resources" -import sys - -from indicnlp import transliterate - -sys.path.append(r"{}".format(INDIC_NLP_LIB_HOME)) -from indicnlp import common - -common.set_resources_path(INDIC_NLP_RESOURCES) -from indicnlp import loader - -loader.load() -from sacremoses import MosesPunctNormalizer -from sacremoses import MosesTokenizer -from sacremoses import MosesDetokenizer -from collections import defaultdict - -import indicnlp -from indicnlp.tokenize import indic_tokenize -from indicnlp.tokenize import indic_detokenize -from indicnlp.normalize import indic_normalize -from indicnlp.transliterate import unicode_transliterate - - -def postprocess( - infname, outfname, input_size, lang, common_lang="hi", transliterate=False -): - """ - parse fairseq interactive output, convert script back to native Indic script (in case of Indic languages) and detokenize. - - infname: fairseq log file - outfname: output file of translation (sentences not translated contain the dummy string 'DUMMY_OUTPUT' - input_size: expected number of output sentences - lang: language - """ - - consolidated_testoutput = [] - # with open(infname,'r',encoding='utf-8') as infile: - # consolidated_testoutput= list(map(lambda x: x.strip(), filter(lambda x: x.startswith('H-'),infile) )) - # consolidated_testoutput.sort(key=lambda x: int(x.split('\t')[0].split('-')[1])) - # consolidated_testoutput=[ x.split('\t')[2] for x in consolidated_testoutput ] - - consolidated_testoutput = [(x, 0.0, "") for x in range(input_size)] - temp_testoutput = [] - with open(infname, "r", encoding="utf-8") as infile: - temp_testoutput = list( - map( - lambda x: x.strip().split("\t"), - filter(lambda x: x.startswith("H-"), infile), - ) - ) - temp_testoutput = list( - map(lambda x: (int(x[0].split("-")[1]), float(x[1]), x[2]), temp_testoutput) - ) - for sid, score, hyp in temp_testoutput: - consolidated_testoutput[sid] = (sid, score, hyp) - consolidated_testoutput = [x[2] for x in consolidated_testoutput] - - if lang == "en": - en_detok = MosesDetokenizer(lang="en") - with open(outfname, "w", encoding="utf-8") as outfile: - for sent in consolidated_testoutput: - outfile.write(en_detok.detokenize(sent.split(" ")) + "\n") - else: - xliterator = unicode_transliterate.UnicodeIndicTransliterator() - with open(outfname, "w", encoding="utf-8") as outfile: - for sent in consolidated_testoutput: - if transliterate: - outstr = indic_detokenize.trivial_detokenize( - xliterator.transliterate(sent, common_lang, lang), lang - ) - else: - outstr = indic_detokenize.trivial_detokenize(sent, lang) - outfile.write(outstr + "\n") - - -if __name__ == "__main__": - # # The path to the local git repo for Indic NLP library - # INDIC_NLP_LIB_HOME="indic_nlp_library" - # INDIC_NLP_RESOURCES = "indic_nlp_resources" - # sys.path.append('{}'.format(INDIC_NLP_LIB_HOME)) - # common.set_resources_path(INDIC_NLP_RESOURCES) - # # The path to the local git repo for Indic NLP Resources - # INDIC_NLP_RESOURCES="" - - # sys.path.append('{}'.format(INDIC_NLP_LIB_HOME)) - # common.set_resources_path(INDIC_NLP_RESOURCES) - - # loader.load() - - infname = sys.argv[1] - outfname = sys.argv[2] - input_size = int(sys.argv[3]) - lang = sys.argv[4] - if len(sys.argv) == 5: - transliterate = False - elif len(sys.argv) == 6: - transliterate = 
sys.argv[5] - if transliterate.lower() == "true": - transliterate = True - else: - transliterate = False - else: - print(f"Invalid arguments: {sys.argv}") - exit() - - postprocess( - infname, outfname, input_size, lang, common_lang="hi", transliterate=transliterate - ) diff --git a/spaces/Harveenchadha/oiTrans/subword-nmt/apply_bpe.py b/spaces/Harveenchadha/oiTrans/subword-nmt/apply_bpe.py deleted file mode 100644 index 25996c808d02643c45d0ee0a837b5b291f8aa4f8..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/oiTrans/subword-nmt/apply_bpe.py +++ /dev/null @@ -1,448 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -# Author: Rico Sennrich - -"""Use operations learned with learn_bpe.py to encode a new text. -The text will not be smaller, but use only a fixed vocabulary, with rare words -encoded as variable-length sequences of subword units. - -Reference: -Rico Sennrich, Barry Haddow and Alexandra Birch (2015). Neural Machine Translation of Rare Words with Subword Units. -Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016). Berlin, Germany. -""" - -from __future__ import unicode_literals, division - -import sys -import os -import inspect -import codecs -import io -import argparse -import re -import warnings -import random -import tempfile -from multiprocessing import Pool, cpu_count - -# hack for python2/3 compatibility -from io import open -argparse.open = open - -class BPE(object): - - def __init__(self, codes, merges=-1, separator='@@', vocab=None, glossaries=None): - - codes.seek(0) - offset=1 - - # check version information - firstline = codes.readline() - if firstline.startswith('#version:'): - self.version = tuple([int(x) for x in re.sub(r'(\.0+)*$','', firstline.split()[-1]).split(".")]) - offset += 1 - else: - self.version = (0, 1) - codes.seek(0) - - self.bpe_codes = [tuple(item.strip('\r\n ').split(' ')) for (n, item) in enumerate(codes.read().rstrip('\n').split('\n')) if (n < merges or merges == -1)] - - for i, item in enumerate(self.bpe_codes): - if len(item) != 2: - sys.stderr.write('Error: invalid line {0} in BPE codes file: {1}\n'.format(i+offset, ' '.join(item))) - sys.stderr.write('The line should exist of exactly two subword units, separated by whitespace\n') - sys.exit(1) - - # some hacking to deal with duplicates (only consider first instance) - self.bpe_codes = dict([(code,i) for (i,code) in reversed(list(enumerate(self.bpe_codes)))]) - - self.bpe_codes_reverse = dict([(pair[0] + pair[1], pair) for pair,i in self.bpe_codes.items()]) - - self.separator = separator - - self.vocab = vocab - - self.glossaries = glossaries if glossaries else [] - - self.glossaries_regex = re.compile('^({})$'.format('|'.join(glossaries))) if glossaries else None - - self.cache = {} - - def process_lines(self, filename, outfile, dropout=0, num_workers=1): - - if sys.version_info < (3, 0): - print("Parallel mode is only supported in Python3.") - sys.exit(1) - - if num_workers == 1: - _process_lines(self, filename, outfile, dropout, 0, 0) - elif num_workers > 1: - with open(filename, encoding="utf-8") as f: - size = os.fstat(f.fileno()).st_size - chunk_size = int(size / num_workers) - offsets = [0 for _ in range(num_workers + 1)] - for i in range(1, num_workers): - f.seek(chunk_size * i) - pos = f.tell() - while True: - try: - line = f.readline() - break - except UnicodeDecodeError: - pos -= 1 - f.seek(pos) - offsets[i] = f.tell() - assert 0 <= offsets[i] < 1e20, "Bad new line separator, e.g. 
'\\r'" - res_files = [] - pool = Pool(processes=num_workers) - for i in range(num_workers): - tmp = tempfile.NamedTemporaryFile(delete=False) - tmp.close() - res_files.append(tmp) - pool.apply_async(_process_lines, (self, filename, tmp.name, dropout, offsets[i], offsets[i + 1])) - pool.close() - pool.join() - for i in range(num_workers): - with open(res_files[i].name, encoding="utf-8") as fi: - for line in fi: - outfile.write(line) - os.remove(res_files[i].name) - else: - raise ValueError('`num_workers` is expected to be a positive number, but got {}.'.format(num_workers)) - - def process_line(self, line, dropout=0): - """segment line, dealing with leading and trailing whitespace""" - - out = "" - - leading_whitespace = len(line)-len(line.lstrip('\r\n ')) - if leading_whitespace: - out += line[:leading_whitespace] - - out += self.segment(line, dropout) - - trailing_whitespace = len(line)-len(line.rstrip('\r\n ')) - if trailing_whitespace and trailing_whitespace != len(line): - out += line[-trailing_whitespace:] - - return out - - def segment(self, sentence, dropout=0): - """segment single sentence (whitespace-tokenized string) with BPE encoding""" - segments = self.segment_tokens(sentence.strip('\r\n ').split(' '), dropout) - return ' '.join(segments) - - def segment_tokens(self, tokens, dropout=0): - """segment a sequence of tokens with BPE encoding""" - output = [] - for word in tokens: - # eliminate double spaces - if not word: - continue - new_word = [out for segment in self._isolate_glossaries(word) - for out in encode(segment, - self.bpe_codes, - self.bpe_codes_reverse, - self.vocab, - self.separator, - self.version, - self.cache, - self.glossaries_regex, - dropout)] - - for item in new_word[:-1]: - output.append(item + self.separator) - output.append(new_word[-1]) - - return output - - def _isolate_glossaries(self, word): - word_segments = [word] - for gloss in self.glossaries: - word_segments = [out_segments for segment in word_segments - for out_segments in isolate_glossary(segment, gloss)] - return word_segments - -def _process_lines(bpe, filename, outfile, dropout, begin, end): - if isinstance(outfile, str): - fo = open(outfile, "w", encoding="utf-8") - else: - fo = outfile - with open(filename, encoding="utf-8") as f: - f.seek(begin) - line = f.readline() - while line: - pos = f.tell() - assert 0 <= pos < 1e20, "Bad new line separator, e.g. 
'\\r'" - if end > 0 and pos > end: - break - fo.write(bpe.process_line(line, dropout)) - line = f.readline() - if isinstance(outfile, str): - fo.close() - -def create_parser(subparsers=None): - - if subparsers: - parser = subparsers.add_parser('apply-bpe', - formatter_class=argparse.RawDescriptionHelpFormatter, - description="learn BPE-based word segmentation") - else: - parser = argparse.ArgumentParser( - formatter_class=argparse.RawDescriptionHelpFormatter, - description="learn BPE-based word segmentation") - - parser.add_argument( - '--input', '-i', type=argparse.FileType('r'), default=sys.stdin, - metavar='PATH', - help="Input file (default: standard input).") - parser.add_argument( - '--codes', '-c', type=argparse.FileType('r'), metavar='PATH', - required=True, - help="File with BPE codes (created by learn_bpe.py).") - parser.add_argument( - '--merges', '-m', type=int, default=-1, - metavar='INT', - help="Use this many BPE operations (<= number of learned symbols)"+ - "default: Apply all the learned merge operations") - parser.add_argument( - '--output', '-o', type=argparse.FileType('w'), default=sys.stdout, - metavar='PATH', - help="Output file (default: standard output)") - parser.add_argument( - '--separator', '-s', type=str, default='@@', metavar='STR', - help="Separator between non-final subword units (default: '%(default)s'))") - parser.add_argument( - '--vocabulary', type=argparse.FileType('r'), default=None, - metavar="PATH", - help="Vocabulary file (built with get_vocab.py). If provided, this script reverts any merge operations that produce an OOV.") - parser.add_argument( - '--vocabulary-threshold', type=int, default=None, - metavar="INT", - help="Vocabulary threshold. If vocabulary is provided, any word with frequency < threshold will be treated as OOV") - parser.add_argument( - '--dropout', type=float, default=0, - metavar="P", - help="Dropout BPE merge operations with probability P (Provilkov et al., 2019). Use this on training data only.") - parser.add_argument( - '--glossaries', type=str, nargs='+', default=None, - metavar="STR", - help="Glossaries. Words matching any of the words/regex provided in glossaries will not be affected "+ - "by the BPE (i.e. they will neither be broken into subwords, nor concatenated with other subwords. "+ - "Can be provided as a list of words/regex after the --glossaries argument. Enclose each regex in quotes.") - parser.add_argument( - '--seed', type=int, default=None, - metavar="S", - help="Random seed for the random number generators (e.g. for BPE dropout with --dropout).") - parser.add_argument( - '--num-workers', type=int, default=1, - help="Number of processors to process texts, only supported in Python3. If -1, set `multiprocessing.cpu_count()`. 
(default: %(default)s)")
-
-    return parser
-
-def encode(orig, bpe_codes, bpe_codes_reverse, vocab, separator, version, cache, glossaries_regex=None, dropout=0):
-    """Encode word based on list of BPE merge operations, which are applied consecutively
-    """
-
-    if not dropout and orig in cache:
-        return cache[orig]
-
-    if glossaries_regex and glossaries_regex.match(orig):
-        cache[orig] = (orig,)
-        return (orig,)
-
-    if len(orig) == 1:
-        return orig
-
-    if version == (0, 1):
-        word = list(orig) + ['</w>']
-    elif version == (0, 2): # more consistent handling of word-final segments
-        word = list(orig[:-1]) + [orig[-1] + '</w>']
-    else:
-        raise NotImplementedError
-
-    while len(word) > 1:
-
-        # get list of symbol pairs; optionally apply dropout
-        pairs = [(bpe_codes[pair],i,pair) for (i,pair) in enumerate(zip(word, word[1:])) if (not dropout or random.random() > dropout) and pair in bpe_codes]
-
-        if not pairs:
-            break
-
-        #get first merge operation in list of BPE codes
-        bigram = min(pairs)[2]
-
-        # find start position of all pairs that we want to merge
-        positions = [i for (rank,i,pair) in pairs if pair == bigram]
-
-        i = 0
-        new_word = []
-        bigram = ''.join(bigram)
-        for j in positions:
-            # merges are invalid if they start before current position. This can happen if there are overlapping pairs: (x x x -> xx x)
-            if j < i:
-                continue
-            new_word.extend(word[i:j]) # all symbols before merged pair
-            new_word.append(bigram) # merged pair
-            i = j+2 # continue after merged pair
-        new_word.extend(word[i:]) # add all symbols until end of word
-        word = new_word
-
-    # don't print end-of-word symbols
-    if word[-1] == '</w>':
-        word = word[:-1]
-    elif word[-1].endswith('</w>'):
-        word[-1] = word[-1][:-4]
-
-    word = tuple(word)
-    if vocab:
-        word = check_vocab_and_split(word, bpe_codes_reverse, vocab, separator)
-
-    cache[orig] = word
-    return word
-
-def recursive_split(segment, bpe_codes, vocab, separator, final=False):
-    """Recursively split segment into smaller units (by reversing BPE merges)
-    until all units are either in-vocabulary, or cannot be split further."""
-
-    try:
-        if final:
-            left, right = bpe_codes[segment + '</w>']
-            right = right[:-4]
-        else:
-            left, right = bpe_codes[segment]
-    except:
-        #sys.stderr.write('cannot split {0} further.\n'.format(segment))
-        yield segment
-        return
-
-    if left + separator in vocab:
-        yield left
-    else:
-        for item in recursive_split(left, bpe_codes, vocab, separator, False):
-            yield item
-
-    if (final and right in vocab) or (not final and right + separator in vocab):
-        yield right
-    else:
-        for item in recursive_split(right, bpe_codes, vocab, separator, final):
-            yield item
-
-def check_vocab_and_split(orig, bpe_codes, vocab, separator):
-    """Check for each segment in word if it is in-vocabulary,
-    and segment OOV segments into smaller units by reversing the BPE merge operations"""
-
-    out = []
-
-    for segment in orig[:-1]:
-        if segment + separator in vocab:
-            out.append(segment)
-        else:
-            #sys.stderr.write('OOV: {0}\n'.format(segment))
-            for item in recursive_split(segment, bpe_codes, vocab, separator, False):
-                out.append(item)
-
-    segment = orig[-1]
-    if segment in vocab:
-        out.append(segment)
-    else:
-        #sys.stderr.write('OOV: {0}\n'.format(segment))
-        for item in recursive_split(segment, bpe_codes, vocab, separator, True):
-            out.append(item)
-
-    return out
-
-
-def read_vocabulary(vocab_file, threshold):
-    """read vocabulary file produced by get_vocab.py, and filter according to frequency threshold.
- """ - - vocabulary = set() - - for line in vocab_file: - word, freq = line.strip('\r\n ').split(' ') - freq = int(freq) - if threshold == None or freq >= threshold: - vocabulary.add(word) - - return vocabulary - -def isolate_glossary(word, glossary): - """ - Isolate a glossary present inside a word. - - Returns a list of subwords. In which all 'glossary' glossaries are isolated - - For example, if 'USA' is the glossary and '1934USABUSA' the word, the return value is: - ['1934', 'USA', 'B', 'USA'] - """ - # regex equivalent of (if word == glossary or glossary not in word) - if re.match('^'+glossary+'$', word) or not re.search(glossary, word): - return [word] - else: - segments = re.split(r'({})'.format(glossary), word) - segments, ending = segments[:-1], segments[-1] - segments = list(filter(None, segments)) # Remove empty strings in regex group. - return segments + [ending.strip('\r\n ')] if ending != '' else segments - -if __name__ == '__main__': - - currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe()))) - newdir = os.path.join(currentdir, 'subword_nmt') - if os.path.isdir(newdir): - warnings.simplefilter('default') - warnings.warn( - "this script's location has moved to {0}. This symbolic link will be removed in a future version. Please point to the new location, or install the package and use the command 'subword-nmt'".format(newdir), - DeprecationWarning - ) - - # python 2/3 compatibility - if sys.version_info < (3, 0): - sys.stderr = codecs.getwriter('UTF-8')(sys.stderr) - sys.stdout = codecs.getwriter('UTF-8')(sys.stdout) - sys.stdin = codecs.getreader('UTF-8')(sys.stdin) - else: - sys.stdin = io.TextIOWrapper(sys.stdin.buffer, encoding='utf-8') - sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding='utf-8') - sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8', write_through=True, line_buffering=True) - - parser = create_parser() - args = parser.parse_args() - - if args.num_workers <= 0: - args.num_workers = cpu_count() - - # read/write files as UTF-8 - args.codes = codecs.open(args.codes.name, encoding='utf-8') - if args.input.name != '': - args.input = codecs.open(args.input.name, encoding='utf-8') - if args.output.name != '': - args.output = codecs.open(args.output.name, 'w', encoding='utf-8') - if args.vocabulary: - args.vocabulary = codecs.open(args.vocabulary.name, encoding='utf-8') - - if args.vocabulary: - vocabulary = read_vocabulary(args.vocabulary, args.vocabulary_threshold) - else: - vocabulary = None - - if sys.version_info < (3, 0): - args.separator = args.separator.decode('UTF-8') - if args.glossaries: - args.glossaries = [g.decode('UTF-8') for g in args.glossaries] - if args.num_workers > 1: - args.num_workers = 1 - warnings.warn("Parallel mode is only supported in Python3. Using 1 processor instead.") - - if args.seed is not None: - random.seed(args.seed) - - bpe = BPE(args.codes, args.merges, args.separator, vocabulary, args.glossaries) - - if args.input.name == '' or args.num_workers == 1: - if args.num_workers > 1: - warnings.warn("In parallel mode, the input cannot be STDIN. 
-        for line in args.input:
-            args.output.write(bpe.process_line(line, args.dropout))
-    else:
-        bpe.process_lines(args.input.name, args.output, args.dropout, args.num_workers)
diff --git a/spaces/HiepPhuocSS/TimeSFormer/capture_picture.py b/spaces/HiepPhuocSS/TimeSFormer/capture_picture.py
deleted file mode 100644
index ae19a7a4e98104436edaf878e3474d2c2b39f9d8..0000000000000000000000000000000000000000
--- a/spaces/HiepPhuocSS/TimeSFormer/capture_picture.py
+++ /dev/null
@@ -1,20 +0,0 @@
-import streamlit as st
-import cv2
-import numpy as np
-
-img_file_buffer = st.camera_input("Take a picture")
-
-if img_file_buffer is not None:
-    # To read image file buffer with OpenCV:
-    bytes_data = img_file_buffer.getvalue()
-    cv2_img: np.ndarray = cv2.imdecode(
-        np.frombuffer(bytes_data, np.uint8), cv2.IMREAD_COLOR
-    )
-
-    # Check the type of cv2_img:
-    # Should output: <class 'numpy.ndarray'>
-    st.write(type(cv2_img))
-
-    # Check the shape of cv2_img:
-    # Should output shape: (height, width, channels)
-    st.write(cv2_img.shape)
diff --git a/spaces/ICML2022/OFA/fairseq/examples/multilingual/data_scripts/download_wmt20.sh b/spaces/ICML2022/OFA/fairseq/examples/multilingual/data_scripts/download_wmt20.sh
deleted file mode 100644
index 31cd5c76b75081331ae03c5ea70ea7ddebaa06e1..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/multilingual/data_scripts/download_wmt20.sh
+++ /dev/null
@@ -1,547 +0,0 @@
-#!/bin/bash
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-if [ -z $WORKDIR_ROOT ] ;
-then
-        echo "please specify your working directory root in environment variable WORKDIR_ROOT. Exiting..."
-        exit
-fi
-
-
-
-set -x -e
-
-# TODO update the workdir and dest dir name
-# put fasttext model
-WORKDIR=$WORKDIR_ROOT
-# put intermediate files
-TMP_DIR=$WORKDIR_ROOT/tmp/tmp_wmt20_lowres_download
-# output {train,valid,test} files to dest
-DEST=$WORKDIR_ROOT/ML50/raw
-
-UTILS=$PWD/utils
-
-# per dataset locations
-COMMONCRAWL_DIR=$TMP_DIR/commoncrawl
-YANDEX_CORPUS=$WORKDIR_ROOT/wmt20/official/ru/yandex/1mcorpus.zip
-# unzipped
-CZENG_CORPUS=$WORKDIR_ROOT/wmt20/official/cs/czeng/czeng20-train
-CCMT_DIR=$WORKDIR_ROOT/wmt20/official/zh/ccmt/parallel
-
-download_and_select() {
-    SUBFOLDER=$1
-    URL=$2
-    UNCOMPRESS_CMD=$3
-    LANG=$4
-    INPUT_FILEPATH=$5
-    if [[ $# -gt 5 ]]; then
-        LANG_COL=$6
-        EN_COL=$7
-    fi
-
-    mkdir -p $SUBFOLDER
-    cd $SUBFOLDER
-    wget -nc --content-disposition $URL
-    $UNCOMPRESS_CMD
-
-    if [[ $# -gt 5 ]]; then
-        cut -f$LANG_COL $INPUT_FILEPATH > $INPUT_FILEPATH.$LANG
-        cut -f$EN_COL $INPUT_FILEPATH > $INPUT_FILEPATH.en
-    fi
-    cd ..
-
-    ln -sf $SUBFOLDER/$INPUT_FILEPATH.$LANG $SUBFOLDER.$LANG
-    ln -sf $SUBFOLDER/$INPUT_FILEPATH.en $SUBFOLDER.en
-}
-
-prepare_lid() {
-    pip install fasttext
-
-    # TODO specify global workdir
-    MODEL=$WORKDIR/fasttext/lid.176.bin
-    LID_MULTI=$UTILS/fasttext_multi_filter.py
-
-    if [ ! -f "$MODEL" ]; then
-        echo "downloading fasttext lid model..."
-        mkdir -p $WORKDIR/fasttext
-        wget -nc https://dl.fbaipublicfiles.com/fasttext/supervised-models/lid.176.bin -O $MODEL
-    fi
-}
-
-prepare_moses() {
-    pushd $UTILS
-    echo 'Cloning Moses github repository (for tokenization scripts)...'
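-    # (added note) mosesdecoder is cloned only so its tokenizer/detokenizer
-    # scripts are available under $UTILS; nothing is built or installed here.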
- git clone https://github.com/moses-smt/mosesdecoder.git - popd -} - -lid_filter() { - # TODO specify global workdir - MODEL=$WORKDIR/fasttext/lid.176.bin - LID_MULTI=$UTILS/fasttext_multi_filter.py - - prepare_lid - - SRC=$1 - SRC_FILE=$2 - SRC_OUTPUT=$3 - TGT=$4 - TGT_FILE=$5 - TGT_OUTPUT=$6 - python $LID_MULTI --model $MODEL --inputs $SRC_FILE $TGT_FILE --langs $SRC $TGT --outputs $SRC_OUTPUT $TGT_OUTPUT -} - -prepare_ja_ted() { - mkdir -p ted - cd ted - - wget -nc https://wit3.fbk.eu/archive/2017-01-trnted//texts/en/ja/en-ja.tgz - tar -zxvf en-ja.tgz - cat en-ja/train.tags.en-ja.en | grep -v -P "^[ ]*\<" | sed 's/^[ \t]*//g' | sed 's/[ \t]*$//g' > en-ja/train.en-ja.en - cat en-ja/train.tags.en-ja.ja | grep -v -P "^[ ]*\<" | sed 's/^[ \t]*//g' | sed 's/[ \t]*$//g' > en-ja/train.en-ja.ja - - cd .. - ln -sf ted/en-ja/train.en-ja.ja ted.ja - ln -sf ted/en-ja/train.en-ja.en ted.en -} - -prepare_ja() { - OUTPUT_DIR=$TMP_DIR/ja - mkdir -p $OUTPUT_DIR - cd $OUTPUT_DIR - - download_and_select paracrawl "http://www.kecl.ntt.co.jp/icl/lirg/jparacrawl/release/2.0/bitext/en-ja.tar.gz" "tar -zxvf en-ja.tar.gz" ja en-ja/en-ja.bicleaner05.txt 4 3 & - download_and_select newscommentary "http://data.statmt.org/news-commentary/v15/training/news-commentary-v15.en-ja.tsv.gz" "gunzip -f news-commentary-v15.en-ja.tsv.gz" ja news-commentary-v15.en-ja.tsv 2 1 & - download_and_select wikititles "http://data.statmt.org/wikititles/v2/wikititles-v2.ja-en.tsv.gz" "gunzip -f wikititles-v2.ja-en.tsv.gz" ja wikititles-v2.ja-en.tsv 1 2 & - download_and_select wikimatrix "http://data.statmt.org/wmt20/translation-task/WikiMatrix/WikiMatrix.v1.en-ja.langid.tsv.gz" "gunzip -f WikiMatrix.v1.en-ja.langid.tsv.gz" ja WikiMatrix.v1.en-ja.langid.tsv 3 2 & - download_and_select subtitle "https://nlp.stanford.edu/projects/jesc/data/split.tar.gz" "tar -zxvf split.tar.gz" ja split/train 2 1 & - download_and_select kftt "http://www.phontron.com/kftt/download/kftt-data-1.0.tar.gz" "tar -zxvf kftt-data-1.0.tar.gz" ja kftt-data-1.0/data/orig/kyoto-train & - - prepare_ja_ted & - - # ted data needs to - - wait - - # remove previous results - rm -f all.?? 
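-    # (added note) the paired find|sort|cat pipelines below rely on every
-    # corpus providing both a .ja and a .en file under the same basename;
-    # `sort -V` orders both listings identically so line i of all.ja stays
-    # aligned with line i of all.en before LID filtering.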
- find ./ -maxdepth 1 -name "*.ja" | sort -V | xargs cat > all.ja - find ./ -maxdepth 1 -name "*.en" | sort -V | xargs cat > all.en - lid_filter ja all.ja $DEST/train.ja_XX-en_XX.ja_XX en all.en $DEST/train.ja_XX-en_XX.en_XX -} - -prepare_ta() { - OUTPUT_DIR=$TMP_DIR/ta - mkdir -p $OUTPUT_DIR - cd $OUTPUT_DIR - - download_and_select wikititles "http://data.statmt.org/wikititles/v2/wikititles-v2.ta-en.tsv.gz" "gunzip -f wikititles-v2.ta-en.tsv.gz" ta wikititles-v2.ta-en.tsv 1 2 & - download_and_select wikimatrix "http://data.statmt.org/wmt20/translation-task/WikiMatrix/WikiMatrix.v1.en-ta.langid.tsv.gz" "gunzip -f WikiMatrix.v1.en-ta.langid.tsv.gz" ta WikiMatrix.v1.en-ta.langid.tsv 3 2 & - download_and_select pmindia "http://data.statmt.org/pmindia/v1/parallel/pmindia.v1.ta-en.tsv" "" ta pmindia.v1.ta-en.tsv 2 1 & - download_and_select tanzil "https://object.pouta.csc.fi/OPUS-Tanzil/v1/moses/en-ta.txt.zip" "unzip en-ta.txt.zip" ta Tanzil.en-ta & - download_and_select pib "http://preon.iiit.ac.in/~jerin/resources/datasets/pib-v0.tar" "tar -xvf pib-v0.tar" ta pib/en-ta/train & - download_and_select mkb "http://preon.iiit.ac.in/~jerin/resources/datasets/mkb-v0.tar" "tar -xvf mkb-v0.tar" ta mkb/en-ta/mkb & - download_and_select ufal "http://ufal.mff.cuni.cz/~ramasamy/parallel/data/v2/en-ta-parallel-v2.tar.gz" "tar -zxvf en-ta-parallel-v2.tar.gz" ta en-ta-parallel-v2/corpus.bcn.train & - - wait - - # need special handling for nlpc - mkdir -p nlpc - cd nlpc - wget -nc https://raw.githubusercontent.com/nlpc-uom/English-Tamil-Parallel-Corpus/master/En-Ta%20Corpus/En-Ta%20English.txt - wget -nc https://github.com/nlpc-uom/English-Tamil-Parallel-Corpus/raw/master/En-Ta%20Corpus/En-Ta%20Tamil.txt - tail -n +4 "En-Ta English.txt" > en-ta.en - tail -n +4 "En-Ta Tamil.txt" > en-ta.ta - cd .. - ln -sf nlpc/en-ta.en nlpc.en - ln -sf nlpc/en-ta.ta nlpc.ta - - # remove previous results - rm -f all.?? - find ./ -maxdepth 1 -name "*.ta" | sort -V | xargs cat > all.ta - find ./ -maxdepth 1 -name "*.en" | sort -V | xargs cat > all.en - lid_filter ta all.ta $DEST/train.ta_IN-en_XX.ta_IN en all.en $DEST/train.ta_IN-en_XX.en_XX -} - -prepare_iu() { - OUTPUT_DIR=$TMP_DIR/iu - mkdir -p $OUTPUT_DIR - cd $OUTPUT_DIR - - download_and_select nh "https://nrc-digital-repository.canada.ca/eng/view/dataset/?id=c7e34fa7-7629-43c2-bd6d-19b32bf64f60" "tar -zxvf Nunavut-Hansard-Inuktitut-English-Parallel-Corpus-3.0.1.tgz" iu Nunavut-Hansard-Inuktitut-English-Parallel-Corpus-3.0/NunavutHansard > /dev/null & - download_and_select wikititles "http://data.statmt.org/wikititles/v2/wikititles-v2.iu-en.tsv.gz" "gunzip -f wikititles-v2.iu-en.tsv.gz" iu wikititles-v2.iu-en.tsv 1 2 & - - wait - - # remove previous results - rm -f all.?? 
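-    # (added note) Inuktitut is handled without lid_filter: after spelling
-    # normalization, paste/awk below keeps only lines where both the .iu and
-    # .en columns are non-empty, then cut splits the columns back out into
-    # the train files.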
-    find ./ -maxdepth 1 -name "*.iu" | sort -V | xargs cat | nh/Nunavut-Hansard-Inuktitut-English-Parallel-Corpus-3.0/scripts/normalize-iu-spelling.pl > all.iu
-    find ./ -maxdepth 1 -name "*.en" | sort -V | xargs cat > all.en
-    paste all.iu all.en | awk -F $'\t' '$1!=""&&$2!=""' > all.iuen
-    cut -f1 all.iuen > $DEST/train.iu_CA-en_XX.iu_CA
-    cut -f2 all.iuen > $DEST/train.iu_CA-en_XX.en_XX
-}
-
-prepare_km() {
-    OUTPUT_DIR=$TMP_DIR/km
-    mkdir -p $OUTPUT_DIR
-    cd $OUTPUT_DIR
-
-    download_and_select paracrawl "http://data.statmt.org/wmt20/translation-task/ps-km/wmt20-sent.en-km.xz" "unxz wmt20-sent.en-km.xz" km wmt20-sent.en-km 2 1 &
-
-    # km-parallel has multiple sets, concat all of them together
-    mkdir -p opus
-    cd opus
-    wget -nc "http://data.statmt.org/wmt20/translation-task/ps-km/km-parallel.tgz"
-    tar -zxvf km-parallel.tgz
-    find ./km-parallel -maxdepth 1 -name "*.km" | sort -V | xargs cat > opus.km
-    find ./km-parallel -maxdepth 1 -name "*.en" | sort -V | xargs cat > opus.en
-    cd ..
-    ln -sf opus/opus.km .
-    ln -sf opus/opus.en .
-
-    wait
-
-    # remove previous results
-    rm -f all.??
-    find ./ -maxdepth 1 -name "*.km" | sort -V | xargs cat > all.km
-    find ./ -maxdepth 1 -name "*.en" | sort -V | xargs cat > all.en
-    lid_filter km all.km $DEST/train.km_KH-en_XX.km_KH en all.en $DEST/train.km_KH-en_XX.en_XX
-}
-
-prepare_ps() {
-    OUTPUT_DIR=$TMP_DIR/ps
-    mkdir -p $OUTPUT_DIR
-    cd $OUTPUT_DIR
-
-    download_and_select paracrawl "http://data.statmt.org/wmt20/translation-task/ps-km/wmt20-sent.en-ps.xz" "unxz wmt20-sent.en-ps.xz" ps wmt20-sent.en-ps 2 1 &
-    download_and_select wikititles "http://data.statmt.org/wikititles/v2/wikititles-v2.ps-en.tsv.gz" "gunzip -f wikititles-v2.ps-en.tsv.gz" ps wikititles-v2.ps-en.tsv 1 2 &
-    # ps-parallel has multiple sets, concat all of them together
-    mkdir -p opus
-    cd opus
-    wget -nc "http://data.statmt.org/wmt20/translation-task/ps-km/ps-parallel.tgz"
-    tar -zxvf ps-parallel.tgz
-    find ./ps-parallel -maxdepth 1 -name "*.ps" | sort -V | xargs cat > opus.ps
-    find ./ps-parallel -maxdepth 1 -name "*.en" | sort -V | xargs cat > opus.en
-    cd ..
-    ln -sf opus/opus.ps opus.ps
-    ln -sf opus/opus.en opus.en
-
-    wait
-
-    # remove previous results
-    rm -f all.??
-    find ./ -maxdepth 1 -name "*.ps" | sort -V | xargs cat > all.ps
-    find ./ -maxdepth 1 -name "*.en" | sort -V | xargs cat > all.en
-    lid_filter ps all.ps $DEST/train.ps_AF-en_XX.ps_AF en all.en $DEST/train.ps_AF-en_XX.en_XX
-}
-
-download_commoncrawl() {
-    mkdir -p $COMMONCRAWL_DIR
-    cd $COMMONCRAWL_DIR
-
-    wget -nc "http://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz"
-    tar -zxvf training-parallel-commoncrawl.tgz
-}
-link_commoncrawl() {
-    LANG=$1
-    ln -sf $COMMONCRAWL_DIR/commoncrawl.$LANG-en.en commoncrawl.en
-    ln -sf $COMMONCRAWL_DIR/commoncrawl.$LANG-en.$LANG commoncrawl.$LANG
-}
-
-strip_xlf() {
-    INPUT_FILE=$1
-    SRC=$2
-    TGT=$3
-    grep '<source xml:lang=' $INPUT_FILE | sed 's/^<[^<>]*>//g' | sed 's/<[^<>]*>$//g' > $INPUT_FILE.$SRC
-    grep '<target xml:lang=' $INPUT_FILE | sed 's/^<[^<>]*>//g' | sed 's/<[^<>]*>$//g' > $INPUT_FILE.$TGT
-}
-
-download_and_process_tilde() {
-    URL=$1
-    UNCOMPRESS_CMD=$2
-    FILENAME=$3
-    LANG=$4
-    PROCESS_CMD=$5
-
-    mkdir -p tilde
-    cd tilde
-    wget -nc $URL
-    $UNCOMPRESS_CMD
-    echo "executing cmd"
-    echo $PROCESS_CMD
-    $PROCESS_CMD
-    cd ..
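-    # (added note) $PROCESS_CMD is executed verbatim, so callers pass e.g. a
-    # "strip_xlf ..." or "prepare_tmx ..." invocation as the fifth argument;
-    # the symlinks below then expose its output under uniform tilde.* names.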
-    ln -sf tilde/$FILENAME.$LANG tilde.$LANG
-    ln -sf tilde/$FILENAME.en tilde.en
-}
-
-prepare_cs() {
-    OUTPUT_DIR=$TMP_DIR/cs
-    mkdir -p $OUTPUT_DIR
-    cd $OUTPUT_DIR
-
-    #download_and_select europarl "http://www.statmt.org/europarl/v10/training/europarl-v10.cs-en.tsv.gz" "gunzip europarl-v10.cs-en.tsv.gz" cs europarl-v10.cs-en.tsv 1 2 &
-    #download_and_select paracrawl "https://s3.amazonaws.com/web-language-models/paracrawl/release5.1/en-cs.txt.gz" "gunzip en-cs.txt.gz" cs en-cs.txt 2 1 &
-    #link_commoncrawl cs
-    #download_and_select newscommentary "http://data.statmt.org/news-commentary/v15/training/news-commentary-v15.cs-en.tsv.gz" "gunzip news-commentary-v15.cs-en.tsv.gz" cs news-commentary-v15.cs-en.tsv 1 2 &
-    #download_and_select wikititles "http://data.statmt.org/wikititles/v2/wikititles-v2.cs-en.tsv.gz" "gunzip wikititles-v2.cs-en.tsv.gz" cs wikititles-v2.cs-en.tsv 1 2 &
-    #download_and_process_tilde "http://data.statmt.org/wmt20/translation-task/rapid/RAPID_2019.cs-en.xlf.gz" "gunzip RAPID_2019.cs-en.xlf.gz" RAPID_2019.cs-en.xlf cs "strip_xlf RAPID_2019.cs-en.xlf cs en" &
-    #download_and_select wikimatrix "http://data.statmt.org/wmt20/translation-task/WikiMatrix/WikiMatrix.v1.cs-en.langid.tsv.gz" "gunzip WikiMatrix.v1.cs-en.langid.tsv.gz" cs WikiMatrix.v1.cs-en.langid.tsv 2 3 &
-
-    #wait
-
-    # remove previous results
-    #rm -f all.??
-    #find ./ -maxdepth 1 -name "*.cs" | sort -V | xargs cat > all.cs
-    #find ./ -maxdepth 1 -name "*.en" | sort -V | xargs cat > all.en
-    if [ -z $CZENG_CORPUS ] ;
-    then
-        echo "Please download CZENG_CORPUS manually and place it at $CZENG_CORPUS. Exiting..."
-        exit
-    fi
-    cat $CZENG_CORPUS | sed '/^$/d' | cut -f5 > all.cs
-    cat $CZENG_CORPUS | sed '/^$/d' | cut -f6 > all.en
-
-    lid_filter cs all.cs $DEST/train.cs_CZ-en_XX.cs_CZ en all.en $DEST/train.cs_CZ-en_XX.en_XX
-}
-
-prepare_de() {
-    OUTPUT_DIR=$TMP_DIR/de
-    mkdir -p $OUTPUT_DIR
-    cd $OUTPUT_DIR
-
-    download_and_select europarl "http://www.statmt.org/europarl/v10/training/europarl-v10.de-en.tsv.gz" "gunzip europarl-v10.de-en.tsv.gz" de europarl-v10.de-en.tsv 1 2 &
-    download_and_select paracrawl "https://s3.amazonaws.com/web-language-models/paracrawl/release5.1/en-de.txt.gz" "gunzip en-de.txt.gz" de en-de.txt 2 1 &
-    link_commoncrawl de
-    download_and_select newscommentary "http://data.statmt.org/news-commentary/v15/training/news-commentary-v15.de-en.tsv.gz" "gunzip news-commentary-v15.de-en.tsv.gz" de news-commentary-v15.de-en.tsv 1 2 &
-    download_and_select wikititles "http://data.statmt.org/wikititles/v2/wikititles-v2.de-en.tsv.gz" "gunzip wikititles-v2.de-en.tsv.gz" de wikititles-v2.de-en.tsv 1 2 &
-    download_and_process_tilde "http://data.statmt.org/wmt20/translation-task/rapid/RAPID_2019.de-en.xlf.gz" "gunzip RAPID_2019.de-en.xlf.gz" RAPID_2019.de-en.xlf de "strip_xlf RAPID_2019.de-en.xlf de en" &
-    download_and_select wikimatrix "http://data.statmt.org/wmt20/translation-task/WikiMatrix/WikiMatrix.v1.de-en.langid.tsv.gz" "gunzip WikiMatrix.v1.de-en.langid.tsv.gz" de WikiMatrix.v1.de-en.langid.tsv 2 3 &
-
-    wait
-
-    # remove previous results
-    rm -f all.??
-    find ./ -maxdepth 1 -name "*.de" | sort -V | xargs cat > all.de
-    find ./ -maxdepth 1 -name "*.en" | sort -V | xargs cat > all.en
-    lid_filter de all.de $DEST/train.de_DE-en_XX.de_DE en all.en $DEST/train.de_DE-en_XX.en_XX
-}
-
-prepare_tmx() {
-    TMX_FILE=$1
-    git clone https://github.com/amake/TMX2Corpus $UTILS/tmx2corpus
-    pip install tinysegmenter
-
-    python $UTILS/tmx2corpus/tmx2corpus.py $TMX_FILE
-}
-
-prepare_pl() {
-    OUTPUT_DIR=$TMP_DIR/pl
-    mkdir -p $OUTPUT_DIR
-    cd $OUTPUT_DIR
-
-    # download_and_select europarl "http://www.statmt.org/europarl/v10/training/europarl-v10.pl-en.tsv.gz" "gunzip europarl-v10.pl-en.tsv.gz" pl europarl-v10.pl-en.tsv 1 2 &
-    # download_and_select paracrawl "https://s3.amazonaws.com/web-language-models/paracrawl/release5.1/en-pl.txt.gz" "gunzip en-pl.txt.gz" pl en-pl.txt 2 1 &
-    # download_and_select wikititles "http://data.statmt.org/wikititles/v2/wikititles-v2.pl-en.tsv.gz" "gunzip wikititles-v2.pl-en.tsv.gz" pl wikititles-v2.pl-en.tsv 1 2 &
-    download_and_select tilde "https://tilde-model.s3-eu-west-1.amazonaws.com/rapid2019.en-pl.tmx.zip" "gunzip rapid2019.en-pl.tmx.zip" bitext pl "prepare_tmx RAPID_2019.UNIQUE.en-pl.tmx" &
-    # download_and_select wikimatrix "http://data.statmt.org/wmt20/translation-task/WikiMatrix/WikiMatrix.v1.en-pl.langid.tsv.gz" "gunzip WikiMatrix.v1.en-pl.langid.tsv.gz" pl WikiMatrix.v1.en-pl.langid.tsv 3 2 &
-
-    wait
-
-    # remove previous results
-    rm -f all.??
-    find ./ -maxdepth 1 -name "*.pl" | sort -V | xargs cat > all.pl
-    find ./ -maxdepth 1 -name "*.en" | sort -V | xargs cat > all.en
-    lid_filter pl all.pl $DEST/train.pl_PL-en_XX.pl_PL en all.en $DEST/train.pl_PL-en_XX.en_XX
-}
-
-prepare_uncorpus() {
-    URLS=$1
-    FILES=$2
-
-    mkdir -p uncorpus
-    cd uncorpus
-
-    for URL in $URLS; do
-        wget -nc $URL
-    done
-    cat $FILES > uncorpus.tar.gz
-    tar -zxvf uncorpus.tar.gz
-
-    cd ..
-    ln -sf uncorpus/en-$LANG/UNv1.0.en-$LANG.$LANG uncorpus.$LANG
-    ln -sf uncorpus/en-$LANG/UNv1.0.en-$LANG.en uncorpus.en
-}
-
-prepare_yandex() {
-    mkdir -p yandex
-    cd yandex
-    unzip $YANDEX_CORPUS ./
-    cd ..
-    ln -s yandex/corpus.en_ru.1m.en yandex.en
-    ln -s yandex/corpus.en_ru.1m.ru yandex.ru
-}
-
-prepare_ru() {
-    OUTPUT_DIR=$TMP_DIR/ru
-    mkdir -p $OUTPUT_DIR
-    cd $OUTPUT_DIR
-
-    download_and_select paracrawl "https://s3.amazonaws.com/web-language-models/paracrawl/release1/paracrawl-release1.en-ru.zipporah0-dedup-clean.tgz" "tar -zxvf paracrawl-release1.en-ru.zipporah0-dedup-clean.tgz" ru paracrawl-release1.en-ru.zipporah0-dedup-clean &
-    link_commoncrawl ru
-    download_and_select newscommentary "http://data.statmt.org/news-commentary/v15/training/news-commentary-v15.en-ru.tsv.gz" "gunzip news-commentary-v15.en-ru.tsv.gz" ru news-commentary-v15.en-ru.tsv 2 1 &
-    prepare_yandex &
-    download_and_select wikititles "http://data.statmt.org/wikititles/v2/wikititles-v2.ru-en.tsv.gz" "gunzip wikititles-v2.ru-en.tsv.gz" ru wikititles-v2.ru-en.tsv 1 2 &
-    prepare_uncorpus "https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.00 https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.01 https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.02" "UNv1.0.en-ru.tar.gz.00 UNv1.0.en-ru.tar.gz.01 UNv1.0.en-ru.tar.gz.02" &
-    download_and_select wikimatrix "http://data.statmt.org/wmt20/translation-task/WikiMatrix/WikiMatrix.v1.en-ru.langid.tsv.gz" "gunzip WikiMatrix.v1.en-ru.langid.tsv.gz" ru WikiMatrix.v1.en-ru.langid.tsv 3 2 &
-
-    wait
-
-    # remove previous results
-    rm -f all.??
- find ./ -maxdepth 1 -name "*.ru" | sort -V | xargs cat > all.ru - find ./ -maxdepth 1 -name "*.en" | sort -V | xargs cat > all.en - lid_filter ru all.ru $DEST/train.ru_RU-en_XX.ru_RU en all.en $DEST/train.ru_RU-en_XX.en_XX -} - -prepare_ccmt() { - mkdir -p ccmt - cd ccmt - # assume ccmt data is already unzipped under CCMT_DIR folder - cat $CCMT_DIR/datum2017/Book*_cn.txt | sed 's/ //g' > datum2017.detok.zh - cat $CCMT_DIR/datum2017/Book*_en.txt > datum2017.detok.en - cat $CCMT_DIR/casict2011/casict-A_ch.txt $CCMT_DIR/casict2011/casict-B_ch.txt $CCMT_DIR/casict2015/casict2015_ch.txt $CCMT_DIR/datum2015/datum_ch.txt $CCMT_DIR/neu2017/NEU_cn.txt datum2017.detok.zh > ccmt.zh - cat $CCMT_DIR/casict2011/casict-A_en.txt $CCMT_DIR/casict2011/casict-B_en.txt $CCMT_DIR/casict2015/casict2015_en.txt $CCMT_DIR/datum2015/datum_en.txt $CCMT_DIR/neu2017/NEU_en.txt datum2017.detok.en > ccmt.en - cd .. - ln -sf ccmt/ccmt.zh ccmt.zh - ln -sf ccmt/ccmt.en ccmt.en -} - -prepare_zh() { - OUTPUT_DIR=$TMP_DIR/zh - mkdir -p $OUTPUT_DIR - cd $OUTPUT_DIR - - download_and_select newscommentary "http://data.statmt.org/news-commentary/v15/training/news-commentary-v15.en-zh.tsv.gz" "gunzip news-commentary-v15.en-zh.tsv.gz" zh news-commentary-v15.en-zh.tsv 2 1 & - download_and_select wikititles "http://data.statmt.org/wikititles/v2/wikititles-v2.zh-en.tsv.gz" "gunzip wikititles-v2.zh-en.tsv.gz" zh wikititles-v2.zh-en.tsv 1 2 & - prepare_uncorpus "https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-zh.tar.gz.00 https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-zh.tar.gz.01" "UNv1.0.en-zh.tar.gz.00 UNv1.0.en-zh.tar.gz.01" & - prepare_ccmt & - download_and_select wikimatrix "http://data.statmt.org/wmt20/translation-task/WikiMatrix/WikiMatrix.v1.en-zh.langid.tsv.gz" "gunzip WikiMatrix.v1.en-zh.langid.tsv.gz" zh WikiMatrix.v1.en-zh.langid.tsv 3 2 & - - wait - - # remove previous results - rm -f all.?? 
- find ./ -maxdepth 1 -name "*.zh" | sort -V | xargs cat > all.zh - find ./ -maxdepth 1 -name "*.en" | sort -V | xargs cat > all.en - lid_filter zh all.zh $DEST/train.zh_CN-en_XX.zh_CN en all.en $DEST/train.zh_CN-en_XX.en_XX -} - -prepare_tests() { - OUTPUT_DIR=$TMP_DIR - mkdir -p $OUTPUT_DIR - cd $OUTPUT_DIR - wget -nc http://data.statmt.org/wmt20/translation-task/dev.tgz - tar -zxvf dev.tgz - cd dev - - cat newsdev2020-jaen-src.ja.sgm | $UTILS/strip_sgm.sh > newsdev2020-jaen.ja - cat newsdev2020-jaen-ref.en.sgm | $UTILS/strip_sgm.sh > newsdev2020-jaen.en - split newsdev2020-jaen.ja -a 0 -n r/1/2 > $DEST/valid.ja_XX-en_XX.ja_XX - split newsdev2020-jaen.en -a 0 -n r/1/2 > $DEST/valid.ja_XX-en_XX.en_XX - split newsdev2020-jaen.ja -a 0 -n r/2/2 > $DEST/test.ja_XX-en_XX.ja_XX - split newsdev2020-jaen.en -a 0 -n r/2/2 > $DEST/test.ja_XX-en_XX.en_XX - - cat newsdev2020-iuen-src.iu.sgm | strip_sgm.sh > newsdev2020-iuen.iu - cat newsdev2020-iuen-ref.en.sgm | strip_sgm.sh > newsdev2020-iuen.en - split newsdev2020-iuen.iu -a 0 -n r/1/2 > $DEST/valid.iu_CA-en_XX.iu_CA - split newsdev2020-iuen.en -a 0 -n r/1/2 > $DEST/valid.iu_CA-en_XX.en_XX - split newsdev2020-iuen.iu -a 0 -n r/2/2 > $DEST/test.iu_CA-en_XX.iu_CA - split newsdev2020-iuen.en -a 0 -n r/2/2 > $DEST/test.iu_CA-en_XX.en_XX - - cat newsdev2020-taen-src.ta.sgm | strip_sgm.sh > newsdev2020-taen.ta - cat newsdev2020-taen-ref.en.sgm | strip_sgm.sh > newsdev2020-taen.en - split newsdev2020-taen.ta -a 0 -n r/1/2 > $DEST/valid.ta_IN-en_XX.ta_IN - split newsdev2020-taen.en -a 0 -n r/1/2 > $DEST/valid.ta_IN-en_XX.en_XX - split newsdev2020-taen.ta -a 0 -n r/2/2 > $DEST/test.ta_IN-en_XX.ta_IN - split newsdev2020-taen.en -a 0 -n r/2/2 > $DEST/test.ta_IN-en_XX.en_XX - - cp wikipedia.dev.km-en.km $DEST/valid.km_KH-en_XX.km_KH - cp wikipedia.dev.km-en.en $DEST/valid.km_KH-en_XX.en_XX - cp wikipedia.devtest.km-en.km $DEST/test.km_KH-en_XX.km_KH - cp wikipedia.devtest.km-en.en $DEST/test.km_KH-en_XX.en_XX - - cp wikipedia.dev.ps-en.ps $DEST/valid.ps_AF-en_XX.ps_AF - cp wikipedia.dev.ps-en.en $DEST/valid.ps_AF-en_XX.en_XX - cp wikipedia.devtest.ps-en.ps $DEST/test.ps_AF-en_XX.ps_AF - cp wikipedia.devtest.ps-en.en $DEST/test.ps_AF-en_XX.en_XX - - cat newsdev2020-plen-src.pl.sgm | strip_sgm.sh > newsdev2020-plen.pl - cat newsdev2020-plen-ref.en.sgm | strip_sgm.sh > newsdev2020-plen.en - split newsdev2020-plen.pl -a 0 -n r/1/2 > $DEST/valid.pl_PL-en_XX.pl_PL - split newsdev2020-plen.en -a 0 -n r/1/2 > $DEST/valid.pl_PL-en_XX.en_XX - split newsdev2020-plen.pl -a 0 -n r/2/2 > $DEST/test.pl_PL-en_XX.pl_PL - split newsdev2020-plen.en -a 0 -n r/2/2 > $DEST/test.pl_PL-en_XX.en_XX - - cat newstest2018-encs-src.en.sgm | strip_sgm.sh > $DEST/valid.en_XX-cs_CZ.en_XX - cat newstest2018-encs-ref.cs.sgm | strip_sgm.sh > $DEST/valid.en_XX-cs_CZ.cs_CZ - cat newstest2019-encs-src.en.sgm | strip_sgm.sh > $DEST/test.en_XX-cs_CZ.en_XX - cat newstest2019-encs-ref.cs.sgm | strip_sgm.sh > $DEST/test.en_XX-cs_CZ.cs_CZ - - cat newstest2018-deen-src.de.sgm | strip_sgm.sh > $DEST/valid.de_DE-en_XX.de_DE - cat newstest2018-deen-ref.en.sgm | strip_sgm.sh > $DEST/valid.de_DE-en_XX.en_XX - cat newstest2018-ende-src.en.sgm | strip_sgm.sh > $DEST/valid.en_XX-de_DE.en_XX - cat newstest2018-ende-ref.de.sgm | strip_sgm.sh > $DEST/valid.en_XX-de_DE.de_DE - cat newstest2019-deen-src.de.sgm | strip_sgm.sh > $DEST/test.de_DE-en_XX.de_DE - cat newstest2019-deen-ref.en.sgm | strip_sgm.sh > $DEST/test.de_DE-en_XX.en_XX - cat newstest2019-ende-src.en.sgm | strip_sgm.sh > $DEST/test.en_XX-de_DE.en_XX - 
cat newstest2019-ende-ref.de.sgm | strip_sgm.sh > $DEST/test.en_XX-de_DE.de_DE - - cat newstest2018-ruen-src.ru.sgm | strip_sgm.sh > $DEST/valid.ru_RU-en_XX.ru_RU - cat newstest2018-ruen-ref.en.sgm | strip_sgm.sh > $DEST/valid.ru_RU-en_XX.en_XX - cat newstest2018-enru-src.en.sgm | strip_sgm.sh > $DEST/valid.en_XX-ru_RU.en_XX - cat newstest2018-enru-ref.ru.sgm | strip_sgm.sh > $DEST/valid.en_XX-ru_RU.ru_RU - cat newstest2019-ruen-src.ru.sgm | strip_sgm.sh > $DEST/test.ru_RU-en_XX.ru_RU - cat newstest2019-ruen-ref.en.sgm | strip_sgm.sh > $DEST/test.ru_RU-en_XX.en_XX - cat newstest2019-enru-src.en.sgm | strip_sgm.sh > $DEST/test.en_XX-ru_RU.en_XX - cat newstest2019-enru-ref.ru.sgm | strip_sgm.sh > $DEST/test.en_XX-ru_RU.ru_RU - - cat newstest2018-zhen-src.zh.sgm | strip_sgm.sh > $DEST/valid.zh_CN-en_XX.zh_CN - cat newstest2018-zhen-ref.en.sgm | strip_sgm.sh > $DEST/valid.zh_CN-en_XX.en_XX - cat newstest2018-enzh-src.en.sgm | strip_sgm.sh > $DEST/valid.en_XX-zh_CN.en_XX - cat newstest2018-enzh-ref.zh.sgm | strip_sgm.sh > $DEST/valid.en_XX-zh_CN.zh_CN - cat newstest2019-zhen-src.zh.sgm | strip_sgm.sh > $DEST/test.zh_CN-en_XX.zh_CN - cat newstest2019-zhen-ref.en.sgm | strip_sgm.sh > $DEST/test.zh_CN-en_XX.en_XX - cat newstest2019-enzh-src.en.sgm | strip_sgm.sh > $DEST/test.en_XX-zh_CN.en_XX - cat newstest2019-enzh-ref.zh.sgm | strip_sgm.sh > $DEST/test.en_XX-zh_CN.zh_CN -} - -mkdir -p $DEST - -prepare_lid -prepare_moses -download_commoncrawl - -prepare_ja & -prepare_ta & -prepare_km & -prepare_ps & -prepare_iu & -prepare_cs & -prepare_de & -prepare_pl & -prepare_ru & -prepare_zh & - -# prepare valid/test set -prepare_tests & - -# wait - -# TODO remove intermediate files -# rm -rf $TMP_DIR diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/append_token_dataset.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/append_token_dataset.py deleted file mode 100644 index 87695bd0f5fcb6b10247e3b743340623e6438cc1..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/data/append_token_dataset.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch - -from . import BaseWrapperDataset - - -class AppendTokenDataset(BaseWrapperDataset): - def __init__(self, dataset, token=None): - super().__init__(dataset) - self.token = token - if token is not None: - self._sizes = np.array(dataset.sizes) + 1 - else: - self._sizes = dataset.sizes - - def __getitem__(self, idx): - item = self.dataset[idx] - if self.token is not None: - item = torch.cat([item, item.new([self.token])]) - return item - - @property - def sizes(self): - return self._sizes - - def num_tokens(self, index): - n = self.dataset.num_tokens(index) - if self.token is not None: - n += 1 - return n - - def size(self, index): - n = self.dataset.size(index) - if self.token is not None: - n += 1 - return n diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/huffman/__init__.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/huffman/__init__.py deleted file mode 100644 index 9b61fafadba28f65fe78a28b2099368b83cfcf41..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/data/huffman/__init__.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-
-from .huffman_coder import HuffmanCodeBuilder, HuffmanCoder
-from .huffman_mmap_indexed_dataset import (
-    HuffmanMMapIndex,
-    HuffmanMMapIndexedDataset,
-    HuffmanMMapIndexedDatasetBuilder,
-    vocab_file_path,
-)
-
-__all__ = [
-    "HuffmanCoder",
-    "HuffmanCodeBuilder",
-    "HuffmanMMapIndexedDatasetBuilder",
-    "HuffmanMMapIndexedDataset",
-    "HuffmanMMapIndex",
-    "vocab_file_path",
-]
diff --git a/spaces/IPN/FirstSpaceTEST_Gradio/README.md b/spaces/IPN/FirstSpaceTEST_Gradio/README.md
deleted file mode 100644
index fb2c79230b5333798f1cff48d26e348dcee87bee..0000000000000000000000000000000000000000
--- a/spaces/IPN/FirstSpaceTEST_Gradio/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: FirstSpaceTEST_Gradio
-emoji: 📉
-colorFrom: red
-colorTo: pink
-sdk: gradio
-sdk_version: 2.9.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/ITESM/streamlit_graphs/README.md b/spaces/ITESM/streamlit_graphs/README.md
deleted file mode 100644
index 3126a83e8cd266cdfc1ad57adbcf13e56656639f..0000000000000000000000000000000000000000
--- a/spaces/ITESM/streamlit_graphs/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Streamlit_graphs
-emoji: 📊
-colorFrom: pink
-colorTo: purple
-sdk: streamlit
-sdk_version: 1.2.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/Imran1/Yelp-reviews/app.py b/spaces/Imran1/Yelp-reviews/app.py
deleted file mode 100644
index 22440200a0e54138b24928a31082696377b82c9f..0000000000000000000000000000000000000000
--- a/spaces/Imran1/Yelp-reviews/app.py
+++ /dev/null
@@ -1,26 +0,0 @@
-import gradio as gr
-from transformers import (
-    AutoModelForSequenceClassification,
-    AutoTokenizer,
-    pipeline,
-)
-
-tokenizer = AutoTokenizer.from_pretrained("Imran1/sentimen_analysis_yelp")
-model = AutoModelForSequenceClassification.from_pretrained("Imran1/sentimen_analysis_yelp")
-
-classifier = pipeline("text-classification", model=model, tokenizer=tokenizer, top_k=5)
-
-
-def sentiment(text):
-    # build the label/score lists inside the function; module-level lists
-    # would keep growing across requests
-    predictions = classifier(text)[0]
-    labels = [p["label"] for p in predictions]
-    scores = [p["score"] for p in predictions]
-    return dict(zip(labels, scores))
-
-
-exmp = ["the food is not good.", "oh I really love this food "]
-
-gr.Interface(fn=sentiment, inputs="text", outputs="label", examples=exmp, title="Yelp reviews").launch()
\ No newline at end of file
diff --git a/spaces/JKLUCY99/voice-cloning/README.md b/spaces/JKLUCY99/voice-cloning/README.md
deleted file mode 100644
index cd3a4b1967f6e541fffac5e8e5e51e49f6677ca7..0000000000000000000000000000000000000000
--- a/spaces/JKLUCY99/voice-cloning/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Voice Cloning
-emoji: 😻
-colorFrom: blue
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.27.0
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: nateraw/voice-cloning
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/data/data_sampler.py b/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/data/data_sampler.py
deleted file mode 100644
index 575452d9f844a928f7f42296c81635cfbadec7c2..0000000000000000000000000000000000000000
--- a/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/data/data_sampler.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import math
-import torch
-from torch.utils.data.sampler import Sampler
-
-
-class EnlargedSampler(Sampler):
-    """Sampler that restricts data loading to a subset of the dataset.
-
-    Modified from torch.utils.data.distributed.DistributedSampler.
-    Supports enlarging the dataset for iteration-based training, saving
-    time when restarting the dataloader after each epoch.
-
-    Args:
-        dataset (torch.utils.data.Dataset): Dataset used for sampling.
-        num_replicas (int | None): Number of processes participating in
-            the training. It is usually the world_size.
-        rank (int | None): Rank of the current process within num_replicas.
-        ratio (int): Enlarging ratio. Default: 1.
-    """
-
-    def __init__(self, dataset, num_replicas, rank, ratio=1):
-        self.dataset = dataset
-        self.num_replicas = num_replicas
-        self.rank = rank
-        self.epoch = 0
-        self.num_samples = math.ceil(len(self.dataset) * ratio / self.num_replicas)
-        self.total_size = self.num_samples * self.num_replicas
-
-    def __iter__(self):
-        # deterministically shuffle based on epoch
-        g = torch.Generator()
-        g.manual_seed(self.epoch)
-        indices = torch.randperm(self.total_size, generator=g).tolist()
-
-        dataset_size = len(self.dataset)
-        indices = [v % dataset_size for v in indices]
-
-        # subsample
-        indices = indices[self.rank:self.total_size:self.num_replicas]
-        assert len(indices) == self.num_samples
-
-        return iter(indices)
-
-    def __len__(self):
-        return self.num_samples
-
-    def set_epoch(self, epoch):
-        self.epoch = epoch
diff --git a/spaces/JeffJing/ZookChatBot/steamship/utils/kv_store.py b/spaces/JeffJing/ZookChatBot/steamship/utils/kv_store.py
deleted file mode 100644
index 06ebe75d6720ef024e0ef3b3db168e82d9f78cc6..0000000000000000000000000000000000000000
--- a/spaces/JeffJing/ZookChatBot/steamship/utils/kv_store.py
+++ /dev/null
@@ -1,115 +0,0 @@
-"""A simple key-value store implemented atop Files and Tags."""
-
-from typing import Any, Dict, List, Optional, Tuple
-
-from steamship import Block, File, Steamship, Tag
-
-KV_STORE_MARKER = "__init__"
-
-
-class KeyValueStore:
-    """A simple key value store implemented in Steamship.
-
-    Each instance of the KeyValueStore is identified by its `store_identifier`,
-    which corresponds to a File that will be created with a special tag identifying it.
-
-    Entries of the KeyValueStore are saved as `Tag` objects with:
-      * Kind = "KeyValueStore"
-      * Name = the key of the (kv) pair
-      * Value = a dict set to the value
-
-    Note that the value is always saved as a dict object. To save a string or int, wrap it in a dict.
-
-    WARNING:
-
-    This is essentially a clever hack atop Steamship's tag system to provide mutable key-value storage. It is in the
-    steamship.utils package because it's proven useful once or twice. But in general, if you find yourself heavily
-    relying upon it, consider reaching out to us at hello@steamship.com to let us know, and we'll up-prioritize
-    adding a proper key-value API.
-    """
-
-    client: Steamship
-    store_identifier: str
-
-    def __init__(self, client: Steamship, store_identifier: str = "KeyValueStore"):
-        """Create a new KeyValueStore instance.
-
-        Args:
-            client (Steamship): The Steamship client.
-            store_identifier (str): The store_identifier which identifies this KeyValueStore instance. You can have multiple, separate KeyValueStore instances in a workspace using this implementation.
- """ - self.client = client - self.store_identifier = f"kv-store-{store_identifier}" - - def _get_file(self, or_create: bool = False) -> Optional[File]: - status_files = File.query(self.client, f'filetag and kind "{self.store_identifier}"').files - if len(status_files) == 0: - if not or_create: - return None - return File.create( - self.client, - blocks=[Block(text="")], - tags=[Tag(kind=self.store_identifier, name=KV_STORE_MARKER)], - ) - else: - return status_files[0] - - def get(self, key: str) -> Optional[Dict]: - """Get the value represented by `key`.""" - file = self._get_file() - - if file is None: - return None - - for tag in file.tags: - if tag.kind == self.store_identifier and tag.name == key: - return tag.value - - def delete(self, key: str) -> bool: - """Delete the entry represented by `key`""" - file = self._get_file() - - if file is None: - return False - - deleted = False - for tag in file.tags: - if tag.kind == self.store_identifier and tag.name == key: - tag.delete() - deleted = True - - return deleted - - def set(self, key: str, value: Dict[str, Any]): - """Set the entry (key, value).""" - - # First delete it if it exists to avoid duplicate tags. - self.delete(key) - - # Now get/create the file - file = self._get_file(or_create=True) - - req = Tag(file_id=file.id, kind=self.store_identifier, name=key, value=value) - return self.client.post("tag/create", req, expect=Tag) - - def items(self, filter_keys: Optional[List[str]] = None) -> List[Tuple[str, Dict[str, Any]]]: - """Return all key-value entries as a list of (key, value) tuples. - - If `filter_keys` is provided, only returns keys within that list.""" - - file = self._get_file(or_create=True) - return [ - (tag.name, tag.value) - for tag in file.tags - if ( - tag.kind == self.store_identifier - and tag.name != KV_STORE_MARKER - and (filter_keys is None or tag.name in filter_keys) - ) - ] - - def reset(self): - """Delete all key-values.""" - file = self._get_file() - if file is not None: - file.delete() diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT/web_assets/javascript/sliders.js b/spaces/JohnSmith9982/ChuanhuChatGPT/web_assets/javascript/sliders.js deleted file mode 100644 index 1351f3ae3902c374b3f5f73b2787c5ec1989bafd..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/ChuanhuChatGPT/web_assets/javascript/sliders.js +++ /dev/null @@ -1,22 +0,0 @@ - -var rangeInputs = null; -var numberInputs = null; - - -function setSlider() { - function setSliderRange() { - var range = document.querySelectorAll('input[type="range"]'); - range.forEach(range => { - range.style.backgroundSize = (range.value - range.min) / (range.max - range.min) * 100 + '% 100%'; - }); - } - rangeInputs = document.querySelectorAll('input[type="range"]'); - numberInputs = document.querySelectorAll('input[type="number"]') - setSliderRange(); - rangeInputs.forEach(rangeInput => { - rangeInput.addEventListener('input', setSliderRange); - }); - numberInputs.forEach(numberInput => { - numberInput.addEventListener('input', setSliderRange); - }) -} diff --git a/spaces/Juno360219/stabilityai-stable-diffusion-2-1/index.html b/spaces/Juno360219/stabilityai-stable-diffusion-2-1/index.html deleted file mode 100644 index 58275de3b1c343a98420342baa076b9baaafa157..0000000000000000000000000000000000000000 --- a/spaces/Juno360219/stabilityai-stable-diffusion-2-1/index.html +++ /dev/null @@ -1,19 +0,0 @@ - - - - - - My static Space - - - -
-    <div class="card">
-      <h1>Welcome to your static Space!</h1>
-      <p>
-        You can modify this app directly by editing <i>index.html</i> in the
-        <b>Files and versions</b> tab.
-      </p>
-      <p>
-        Also don't forget to check the
-        <a href="https://huggingface.co/docs/hub/spaces" target="_blank">Spaces documentation</a>.
-      </p>
-    </div>
- - diff --git a/spaces/Kevin676/Clone-Your-Voice/synthesizer/utils/text.py b/spaces/Kevin676/Clone-Your-Voice/synthesizer/utils/text.py deleted file mode 100644 index 7a56876b6b38f28abeedc0553f61e1e2a659e522..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/Clone-Your-Voice/synthesizer/utils/text.py +++ /dev/null @@ -1,75 +0,0 @@ -from synthesizer.utils.symbols import symbols -from synthesizer.utils import cleaners -import re - - -# Mappings from symbol to numeric ID and vice versa: -_symbol_to_id = {s: i for i, s in enumerate(symbols)} -_id_to_symbol = {i: s for i, s in enumerate(symbols)} - -# Regular expression matching text enclosed in curly braces: -_curly_re = re.compile(r"(.*?)\{(.+?)\}(.*)") - - -def text_to_sequence(text, cleaner_names): - """Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - - The text can optionally have ARPAbet sequences enclosed in curly braces embedded - in it. For example, "Turn left on {HH AW1 S S T AH0 N} Street." - - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - - Returns: - List of integers corresponding to the symbols in the text - """ - sequence = [] - - # Check for curly braces and treat their contents as ARPAbet: - while len(text): - m = _curly_re.match(text) - if not m: - sequence += _symbols_to_sequence(_clean_text(text, cleaner_names)) - break - sequence += _symbols_to_sequence(_clean_text(m.group(1), cleaner_names)) - sequence += _arpabet_to_sequence(m.group(2)) - text = m.group(3) - - # Append EOS token - sequence.append(_symbol_to_id["~"]) - return sequence - - -def sequence_to_text(sequence): - """Converts a sequence of IDs back to a string""" - result = "" - for symbol_id in sequence: - if symbol_id in _id_to_symbol: - s = _id_to_symbol[symbol_id] - # Enclose ARPAbet back in curly braces: - if len(s) > 1 and s[0] == "@": - s = "{%s}" % s[1:] - result += s - return result.replace("}{", " ") - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception("Unknown cleaner: %s" % name) - text = cleaner(text) - return text - - -def _symbols_to_sequence(symbols): - return [_symbol_to_id[s] for s in symbols if _should_keep_symbol(s)] - - -def _arpabet_to_sequence(text): - return _symbols_to_sequence(["@" + s for s in text.split()]) - - -def _should_keep_symbol(s): - return s in _symbol_to_id and s not in ("_", "~") diff --git a/spaces/Kimata/multimodal_deepfake_detection/utils/logger.py b/spaces/Kimata/multimodal_deepfake_detection/utils/logger.py deleted file mode 100644 index 869ce74a0758449a37dfce6d607aec773a268eb2..0000000000000000000000000000000000000000 --- a/spaces/Kimata/multimodal_deepfake_detection/utils/logger.py +++ /dev/null @@ -1,58 +0,0 @@ -import logging -import time -from datetime import timedelta - - -class LogFormatter: - def __init__(self): - self.start_time = time.time() - - def format(self, record): - elapsed_seconds = round(record.created - self.start_time) - - prefix = "%s - %s - %s" % ( - record.levelname, - time.strftime("%x %X"), - timedelta(seconds=elapsed_seconds), - ) - message = record.getMessage() - message = message.replace("\n", "\n" + " " * (len(prefix) + 3)) - return "%s - %s" % (prefix, message) - - -def create_logger(filepath, args): - # create log formatter - log_formatter = LogFormatter() - - # create file handler and set level to debug - file_handler = logging.FileHandler(filepath, "a") - 
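-    # the file handler records the full DEBUG stream; the console handler
-    # created below stays at INFO to keep terminal output readable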
file_handler.setLevel(logging.DEBUG) - file_handler.setFormatter(log_formatter) - - # create console handler and set level to info - console_handler = logging.StreamHandler() - console_handler.setLevel(logging.INFO) - console_handler.setFormatter(log_formatter) - - # create logger and set level to debug - logger = logging.getLogger() - logger.handlers = [] - logger.setLevel(logging.DEBUG) - logger.propagate = False - logger.addHandler(file_handler) - logger.addHandler(console_handler) - - # reset logger elapsed time - def reset_time(): - log_formatter.start_time = time.time() - - logger.reset_time = reset_time - - logger.info( - "\n".join( - "%s: %s" % (k, str(v)) - for k, v in sorted(dict(vars(args)).items(), key=lambda x: x[0]) - ) - ) - - return logger diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/layers/transformer/mask2former_layers.py b/spaces/KyanChen/RSPrompter/mmdet/models/layers/transformer/mask2former_layers.py deleted file mode 100644 index dcc604e277d91151334ed520d78e6a5a8f388036..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/layers/transformer/mask2former_layers.py +++ /dev/null @@ -1,135 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmcv.cnn import build_norm_layer -from mmengine.model import ModuleList -from torch import Tensor - -from .deformable_detr_layers import DeformableDetrTransformerEncoder -from .detr_layers import DetrTransformerDecoder, DetrTransformerDecoderLayer - - -class Mask2FormerTransformerEncoder(DeformableDetrTransformerEncoder): - """Encoder in PixelDecoder of Mask2Former.""" - - def forward(self, query: Tensor, query_pos: Tensor, - key_padding_mask: Tensor, spatial_shapes: Tensor, - level_start_index: Tensor, valid_ratios: Tensor, - reference_points: Tensor, **kwargs) -> Tensor: - """Forward function of Transformer encoder. - - Args: - query (Tensor): The input query, has shape (bs, num_queries, dim). - query_pos (Tensor): The positional encoding for query, has shape - (bs, num_queries, dim). If not None, it will be added to the - `query` before forward function. Defaults to None. - key_padding_mask (Tensor): The `key_padding_mask` of `self_attn` - input. ByteTensor, has shape (bs, num_queries). - spatial_shapes (Tensor): Spatial shapes of features in all levels, - has shape (num_levels, 2), last dimension represents (h, w). - level_start_index (Tensor): The start index of each level. - A tensor has shape (num_levels, ) and can be represented - as [0, h_0*w_0, h_0*w_0+h_1*w_1, ...]. - valid_ratios (Tensor): The ratios of the valid width and the valid - height relative to the width and the height of features in all - levels, has shape (bs, num_levels, 2). - reference_points (Tensor): The initial reference, has shape - (bs, num_queries, 2) with the last dimension arranged - as (cx, cy). 
- - Returns: - Tensor: Output queries of Transformer encoder, which is also - called 'encoder output embeddings' or 'memory', has shape - (bs, num_queries, dim) - """ - for layer in self.layers: - query = layer( - query=query, - query_pos=query_pos, - key_padding_mask=key_padding_mask, - spatial_shapes=spatial_shapes, - level_start_index=level_start_index, - valid_ratios=valid_ratios, - reference_points=reference_points, - **kwargs) - return query - - -class Mask2FormerTransformerDecoder(DetrTransformerDecoder): - """Decoder of Mask2Former.""" - - def _init_layers(self) -> None: - """Initialize decoder layers.""" - self.layers = ModuleList([ - Mask2FormerTransformerDecoderLayer(**self.layer_cfg) - for _ in range(self.num_layers) - ]) - self.embed_dims = self.layers[0].embed_dims - self.post_norm = build_norm_layer(self.post_norm_cfg, - self.embed_dims)[1] - - -class Mask2FormerTransformerDecoderLayer(DetrTransformerDecoderLayer): - """Implements decoder layer in Mask2Former transformer.""" - - def forward(self, - query: Tensor, - key: Tensor = None, - value: Tensor = None, - query_pos: Tensor = None, - key_pos: Tensor = None, - self_attn_mask: Tensor = None, - cross_attn_mask: Tensor = None, - key_padding_mask: Tensor = None, - **kwargs) -> Tensor: - """ - Args: - query (Tensor): The input query, has shape (bs, num_queries, dim). - key (Tensor, optional): The input key, has shape (bs, num_keys, - dim). If `None`, the `query` will be used. Defaults to `None`. - value (Tensor, optional): The input value, has the same shape as - `key`, as in `nn.MultiheadAttention.forward`. If `None`, the - `key` will be used. Defaults to `None`. - query_pos (Tensor, optional): The positional encoding for `query`, - has the same shape as `query`. If not `None`, it will be added - to `query` before forward function. Defaults to `None`. - key_pos (Tensor, optional): The positional encoding for `key`, has - the same shape as `key`. If not `None`, it will be added to - `key` before forward function. If None, and `query_pos` has the - same shape as `key`, then `query_pos` will be used for - `key_pos`. Defaults to None. - self_attn_mask (Tensor, optional): ByteTensor mask, has shape - (num_queries, num_keys), as in `nn.MultiheadAttention.forward`. - Defaults to None. - cross_attn_mask (Tensor, optional): ByteTensor mask, has shape - (num_queries, num_keys), as in `nn.MultiheadAttention.forward`. - Defaults to None. - key_padding_mask (Tensor, optional): The `key_padding_mask` of - `self_attn` input. ByteTensor, has shape (bs, num_value). - Defaults to None. - - Returns: - Tensor: forwarded results, has shape (bs, num_queries, dim). - """ - - query = self.cross_attn( - query=query, - key=key, - value=value, - query_pos=query_pos, - key_pos=key_pos, - attn_mask=cross_attn_mask, - key_padding_mask=key_padding_mask, - **kwargs) - query = self.norms[0](query) - query = self.self_attn( - query=query, - key=query, - value=query, - query_pos=query_pos, - key_pos=query_pos, - attn_mask=self_attn_mask, - **kwargs) - query = self.norms[1](query) - query = self.ffn(query) - query = self.norms[2](query) - - return query diff --git a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/scienceqa.py b/spaces/KyanChen/RSPrompter/mmpretrain/datasets/scienceqa.py deleted file mode 100644 index f0205a2cdd923dbc985f4990043a9d5c16ca125c..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/scienceqa.py +++ /dev/null @@ -1,104 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
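-# Loader for the multimodal ScienceQA benchmark: each sample pairs an optional
-# image with a question, its answer choices, hint, and lecture/solution text,
-# and split membership is resolved from the ids listed in `split_file`.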
-import os -from typing import Callable, List, Sequence - -import mmengine -from mmengine.dataset import BaseDataset -from mmengine.fileio import get_file_backend - -from mmpretrain.registry import DATASETS - - -@DATASETS.register_module() -class ScienceQA(BaseDataset): - """ScienceQA dataset. - - This dataset is used to load the multimodal data of ScienceQA dataset. - - Args: - data_root (str): The root directory for ``data_prefix`` and - ``ann_file``. - split (str): The split of dataset. Options: ``train``, ``val``, - ``test``, ``trainval``, ``minival``, and ``minitest``. - split_file (str): The split file of dataset, which contains the - ids of data samples in the split. - ann_file (str): Annotation file path. - data_prefix (dict): Prefix for data field. Defaults to - ``dict(img_path='')``. - pipeline (Sequence): Processing pipeline. Defaults to an empty tuple. - **kwargs: Other keyword arguments in :class:`BaseDataset`. - """ - - def __init__(self, - data_root: str, - split: str, - split_file: str, - ann_file: str, - data_prefix: dict = dict(img_path=''), - pipeline: Sequence[Callable] = (), - **kwargs): - - assert split in [ - 'train', 'val', 'test', 'trainval', 'minival', 'minitest' - ], f'Invalid split {split}' - self.split = split - self.split_file = os.path.join(data_root, split_file) - - super().__init__( - data_root=data_root, - ann_file=ann_file, - data_prefix=data_prefix, - pipeline=pipeline, - **kwargs) - - def load_data_list(self) -> List[dict]: - """Load data list.""" - img_prefix = self.data_prefix['img_path'] - annotations = mmengine.load(self.ann_file) - current_data_split = mmengine.load(self.split_file)[self.split] # noqa - - file_backend = get_file_backend(img_prefix) - - data_list = [] - for data_id in current_data_split: - ann = annotations[data_id] - data_info = { - 'image_id': - data_id, - 'question': - ann['question'], - 'choices': - ann['choices'], - 'gt_answer': - ann['answer'], - 'hint': - ann['hint'], - 'image_name': - ann['image'], - 'task': - ann['task'], - 'grade': - ann['grade'], - 'subject': - ann['subject'], - 'topic': - ann['topic'], - 'category': - ann['category'], - 'skill': - ann['skill'], - 'lecture': - ann['lecture'], - 'solution': - ann['solution'], - 'split': - ann['split'], - 'img_path': - file_backend.join_path(img_prefix, data_id, ann['image']) - if ann['image'] is not None else None, - 'has_image': - True if ann['image'] is not None else False, - } - data_list.append(data_info) - - return data_list diff --git a/spaces/LanguageBind/LanguageBind/data/process_depth.py b/spaces/LanguageBind/LanguageBind/data/process_depth.py deleted file mode 100644 index bd33584022802f6e59c94185543ec3347c655f99..0000000000000000000000000000000000000000 --- a/spaces/LanguageBind/LanguageBind/data/process_depth.py +++ /dev/null @@ -1,55 +0,0 @@ -import PIL -import cv2 -import numpy as np -import torch -from PIL import Image -from torch import nn -from torchvision import transforms -from open_clip.constants import OPENAI_DATASET_MEAN, OPENAI_DATASET_STD - - -def opencv_loader(path): - return cv2.imread(path, cv2.IMREAD_UNCHANGED).astype('float32') - - -class DepthNorm(nn.Module): - def __init__( - self, - max_depth=0, - min_depth=0.01, - ): - super().__init__() - self.max_depth = max_depth - self.min_depth = min_depth - self.scale = 1000.0 # nyuv2 abs.depth - - def forward(self, image): - # image = np.array(image) - depth_img = image / self.scale # (H, W) in meters - depth_img = depth_img.clip(min=self.min_depth) - if self.max_depth != 0: - depth_img = 
depth_img.clip(max=self.max_depth) - depth_img /= self.max_depth # 0-1 - else: - depth_img /= depth_img.max() - depth_img = torch.from_numpy(depth_img).unsqueeze(0).repeat(3, 1, 1) # assume image - return depth_img.to(torch.get_default_dtype()) - -def get_depth_transform(args): - transform = transforms.Compose( - [ - DepthNorm(max_depth=args.max_depth), - transforms.Resize(224, interpolation=transforms.InterpolationMode.BICUBIC), - transforms.CenterCrop(224), - transforms.Normalize(OPENAI_DATASET_MEAN, OPENAI_DATASET_STD), # assume image - # transforms.Normalize((0.5, ), (0.5, )) # 0-1 to norm distribution - # transforms.Normalize((0.0418, ), (0.0295, )) # sun rgb-d imagebind - # transforms.Normalize((0.02, ), (0.00295, )) # nyuv2 - ] - ) - return transform - -def load_and_transform_depth(depth_path, transform): - depth = opencv_loader(depth_path) - depth_outputs = transform(depth) - return {'pixel_values': depth_outputs} diff --git a/spaces/Layer6/TR0N/bad_words.py b/spaces/Layer6/TR0N/bad_words.py deleted file mode 100644 index 7c18410967d678138df803c469a3ce837b6d7a8e..0000000000000000000000000000000000000000 --- a/spaces/Layer6/TR0N/bad_words.py +++ /dev/null @@ -1 +0,0 @@ -bad_words = ["4r5e", "5h1t", "5hit", "a55", "anal", "anus", "ar5e", "arrse", "arse", "ass", "ass-fucker", "asses", "assfucker", "assfukka", "asshole", "assholes", "asswhole", "a_s_s", "b!tch", "b00bs", "b17ch", "b1tch", "ballbag", "balls", "ballsack", "bastard", "beastial", "beastiality", "bellend", "bestial", "bestiality", "bi+ch", "biatch", "bitch", "bitcher", "bitchers", "bitches", "bitchin", "bitching", "bloody", "blow job", "blowjob", "blowjobs", "boiolas", "bollock", "bollok", "boner", "boob", "boobs", "booobs", "boooobs", "booooobs", "booooooobs", "breasts", "buceta", "bugger", "bum", "bunny fucker", "butt", "butthole", "buttmuch", "buttplug", "c0ck", "c0cksucker", "carpet muncher", "cawk", "chink", "cipa", "cl1t", "clit", "clitoris", "clits", "cnut", "cock", "cock-sucker", "cockface", "cockhead", "cockmunch", "cockmuncher", "cocks", "cocksuck", "cocksucked", "cocksucker", "cocksucking", "cocksucks", "cocksuka", "cocksukka", "cok", "cokmuncher", "coksucka", "coon", "cox", "crap", "cum", "cummer", "cumming", "cums", "cumshot", "cunilingus", "cunillingus", "cunnilingus", "cunt", "cuntlick", "cuntlicker", "cuntlicking", "cunts", "cyalis", "cyberfuc", "cyberfuck", "cyberfucked", "cyberfucker", "cyberfuckers", "cyberfucking", "d1ck", "damn", "dick", "dickhead", "dildo", "dildos", "dink", "dinks", "dirsa", "dlck", "dog-fucker", "doggin", "dogging", "donkeyribber", "doosh", "duche", "dyke", "ejaculate", "ejaculated", "ejaculates", "ejaculating", "ejaculatings", "ejaculation", "ejakulate", "f u c k", "f u c k e r", "f4nny", "fag", "fagging", "faggitt", "faggot", "faggs", "fagot", "fagots", "fags", "fanny", "fannyflaps", "fannyfucker", "fanyy", "fatass", "fcuk", "fcuker", "fcuking", "feck", "fecker", "felching", "fellate", "fellatio", "fingerfuck", "fingerfucked", "fingerfucker", "fingerfuckers", "fingerfucking", "fingerfucks", "fistfuck", "fistfucked", "fistfucker", "fistfuckers", "fistfucking", "fistfuckings", "fistfucks", "flange", "fook", "fooker", "fuck", "fucka", "fucked", "fucker", "fuckers", "fuckhead", "fuckheads", "fuckin", "fucking", "fuckings", "fuckingshitmotherfucker", "fuckme", "fucks", "fuckwhit", "fuckwit", "fudge packer", "fudgepacker", "fuk", "fuker", "fukker", "fukkin", "fuks", "fukwhit", "fukwit", "fux", "fux0r", "f_u_c_k", "gangbang", "gangbanged", "gangbangs", "gaylord", "gaysex", "goatse", 
"God", "god-dam", "god-damned", "goddamn", "goddamned", "hardcoresex", "hell", "heshe", "hoar", "hoare", "hoer", "homo", "hore", "horniest", "horny", "hotsex", "jack-off", "jackoff", "jap", "jerk-off", "jism", "jiz", "jizm", "jizz", "kawk", "knob", "knobead", "knobed", "knobend", "knobhead", "knobjocky", "knobjokey", "kock", "kondum", "kondums", "kum", "kummer", "kumming", "kums", "kunilingus", "l3i+ch", "l3itch", "labia", "lust", "lusting", "m0f0", "m0fo", "m45terbate", "ma5terb8", "ma5terbate", "masochist", "master-bate", "masterb8", "masterbat*", "masterbat3", "masterbate", "masterbation", "masterbations", "masturbate", "mo-fo", "mof0", "mofo", "mothafuck", "mothafucka", "mothafuckas", "mothafuckaz", "mothafucked", "mothafucker", "mothafuckers", "mothafuckin", "mothafucking", "mothafuckings", "mothafucks", "mother fucker", "motherfuck", "motherfucked", "motherfucker", "motherfuckers", "motherfuckin", "motherfucking", "motherfuckings", "motherfuckka", "motherfucks", "muff", "mutha", "muthafecker", "muthafuckker", "muther", "mutherfucker", "n1gga", "n1gger", "nazi", "nigg3r", "nigg4h", "nigga", "niggah", "niggas", "niggaz", "nigger", "niggers", "nob", "nob jokey", "nobhead", "nobjocky", "nobjokey", "numbnuts", "nutsack", "orgasim", "orgasims", "orgasm", "orgasms", "p0rn", "pawn", "pecker", "penis", "penisfucker", "phonesex", "phuck", "phuk", "phuked", "phuking", "phukked", "phukking", "phuks", "phuq", "pigfucker", "pimpis", "piss", "pissed", "pisser", "pissers", "pisses", "pissflaps", "pissin", "pissing", "pissoff", "poop", "porn", "porno", "pornography", "pornos", "prick", "pricks", "pron", "pube", "pusse", "pussi", "pussies", "pussy", "pussys", "rectum", "retard", "rimjaw", "rimming", "s hit", "s.o.b.", "sadist", "schlong", "screwing", "scroat", "scrote", "scrotum", "semen", "sex", "sh!+", "sh!t", "sh1t", "shag", "shagger", "shaggin", "shagging", "shemale", "shi+", "shit", "shitdick", "shite", "shited", "shitey", "shitfuck", "shitfull", "shithead", "shiting", "shitings", "shits", "shitted", "shitter", "shitters", "shitting", "shittings", "shitty", "skank", "slut", "sluts", "smegma", "smut", "snatch", "son-of-a-bitch", "spac", "spunk", "s_h_i_t", "t1tt1e5", "t1tties", "teets", "teez", "testical", "testicle", "tit", "titfuck", "tits", "titt", "tittie5", "tittiefucker", "titties", "tittyfuck", "tittywank", "titwank", "tosser", "turd", "tw4t", "twat", "twathead", "twatty", "twunt", "twunter", "v14gra", "v1gra", "vagina", "viagra", "vulva", "w00se", "wang", "wank", "wanker", "wanky", "whoar", "whore", "willies", "willy", "xrated", "xxx"] diff --git a/spaces/Lihuchen/AcroBERT/constant.py b/spaces/Lihuchen/AcroBERT/constant.py deleted file mode 100644 index 6f2176e36f2092c93461490a07c61512282f1397..0000000000000000000000000000000000000000 --- a/spaces/Lihuchen/AcroBERT/constant.py +++ /dev/null @@ -1,59 +0,0 @@ -""" -from https://github.com/amirveyseh/MadDog under CC BY-NC-SA 4.0 -Define constants. 
-""" -EMB_INIT_RANGE = 1.0 - -# vocab -PAD_TOKEN = '' -PAD_ID = 0 -UNK_TOKEN = '' -UNK_ID = 1 - -VOCAB_PREFIX = [PAD_TOKEN, UNK_TOKEN] - -# hard-coded mappings from fields to ids -SUBJ_NER_TO_ID = {PAD_TOKEN: 0, UNK_TOKEN: 1, 'ORGANIZATION': 2, 'PERSON': 3} - -OBJ_NER_TO_ID = {PAD_TOKEN: 0, UNK_TOKEN: 1, 'PERSON': 2, 'ORGANIZATION': 3, 'DATE': 4, 'NUMBER': 5, 'TITLE': 6, 'COUNTRY': 7, 'LOCATION': 8, 'CITY': 9, 'MISC': 10, 'STATE_OR_PROVINCE': 11, 'DURATION': 12, 'NATIONALITY': 13, 'CAUSE_OF_DEATH': 14, 'CRIMINAL_CHARGE': 15, 'RELIGION': 16, 'URL': 17, 'IDEOLOGY': 18} - -NER_TO_ID = {PAD_TOKEN: 0, UNK_TOKEN: 1, 'O': 2, 'PERSON': 3, 'ORGANIZATION': 4, 'LOCATION': 5, 'DATE': 6, 'NUMBER': 7, 'MISC': 8, 'DURATION': 9, 'MONEY': 10, 'PERCENT': 11, 'ORDINAL': 12, 'TIME': 13, 'SET': 14} - -POS_TO_ID = {PAD_TOKEN: 0, UNK_TOKEN: 1, 'NNP': 2, 'NN': 3, 'IN': 4, 'DT': 5, ',': 6, 'JJ': 7, 'NNS': 8, 'VBD': 9, 'CD': 10, 'CC': 11, '.': 12, 'RB': 13, 'VBN': 14, 'PRP': 15, 'TO': 16, 'VB': 17, 'VBG': 18, 'VBZ': 19, 'PRP$': 20, ':': 21, 'POS': 22, '\'\'': 23, '``': 24, '-RRB-': 25, '-LRB-': 26, 'VBP': 27, 'MD': 28, 'NNPS': 29, 'WP': 30, 'WDT': 31, 'WRB': 32, 'RP': 33, 'JJR': 34, 'JJS': 35, '$': 36, 'FW': 37, 'RBR': 38, 'SYM': 39, 'EX': 40, 'RBS': 41, 'WP$': 42, 'PDT': 43, 'LS': 44, 'UH': 45, '#': 46} - -DEPREL_TO_ID = {PAD_TOKEN: 0, UNK_TOKEN: 1, 'punct': 2, 'compound': 3, 'case': 4, 'nmod': 5, 'det': 6, 'nsubj': 7, 'amod': 8, 'conj': 9, 'dobj': 10, 'ROOT': 11, 'cc': 12, 'nmod:poss': 13, 'mark': 14, 'advmod': 15, 'appos': 16, 'nummod': 17, 'dep': 18, 'ccomp': 19, 'aux': 20, 'advcl': 21, 'acl:relcl': 22, 'xcomp': 23, 'cop': 24, 'acl': 25, 'auxpass': 26, 'nsubjpass': 27, 'nmod:tmod': 28, 'neg': 29, 'compound:prt': 30, 'mwe': 31, 'parataxis': 32, 'root': 33, 'nmod:npmod': 34, 'expl': 35, 'csubj': 36, 'cc:preconj': 37, 'iobj': 38, 'det:predet': 39, 'discourse': 40, 'csubjpass': 41} - -RULES = { - 'schwartz': True, - 'character': True, - 'roman': False, - 'low_short_threshold': False, - 'bounded_schwartz': True, - 'remove_punctuation': False, - 'no_parentheses': False, - 'high_recall_character_match': False, - 'initial_capitals': True, - 'hyphen_in_acronym': False, - 'starting_lower_case': False, - 'check_all_capitals': True, - 'merge_hyphened_acronyms': False, - 'ignore_punc_in_parentheses': True, - 'capture_embedded_acronym': True, - 'extend_punc': False, - 'small_window': True, - 'no_beginning_stop_word': True, - 'ignore_right_hand': True, - 'ignore_dot': True, - 'template': False, - 'map_chars': False, - 'high_recall_schwartz': False, - "default_diction": False - } - -NEGATIVE_LABEL = 'no_relation' - -# LABEL_TO_ID = {'metering data collector': 0, 'mobile data challenge': 1, 'multiple description coding': 2, 'support vector machine': 3, 'state vector machine': 4, 'principle component analysis': 5, 'maximum entropy regularizer': 6, 'music emotion research': 7, 'music emotion recognition': 8, 'support vector classification': 9, 'support vector classifier': 10, 'scalable video coding': 11, 'convolutional neural network': 12, 'condensed nearest neighbor': 13, 'network is called pingyin': 14, 'convolutional networks': 15, 'complicated neural networks': 16, 'citation nearest neighbour': 17, 'forward error correction': 18, 'forward erasure correction': 19, 'federal election candidate': 20, 'finite state machine': 21, 'fast sweeping method': 22, 'latent dirichlet allocation': 23, 'linear discriminant analysis': 24, 'labeled data': 25, 'area under curve': 26, 'area under the roc curve': 27, 'area under the curve': 28, 
'area under curve - receiver operating characteristic': 29, 'area under the receiver operating characteristic curve': 30, 'curve': 31, 'area under roc curve': 32, 'area - under - curve': 33, 'frame recall': 34, 'faster r - cnn': 35, 'fooling rate': 36, 'fails regarding': 37, 'term frequency': 38, 'trend filtering': 39, 'tensor factorization': 40, 'transcription factor': 41, 'speech synthesis': 42, 'single stage': 43, 'stochastic search': 44, 'social status': 45, 'spectrum sensing': 46, 'severe sepsis': 47, 'scheduled sampling': 48, 'secondary structure': 49, 'simple sum': 50, 'markov chain monte carlo': 51, 'monte carlo markov chain': 52, 'markov chain monte carlo method': 53, 'global positioning system': 54, 'general pattern search': 55, 'global positioning sensor': 56, 'generalized propensity score': 57, 'principal component analysis': 58, 'primary component analysis': 59, 'probabilistic principal component analysis': 60, 'posterior cortical atrophy': 61, 'mean squared error': 62, 'model selection eqn': 63, 'minimum square error': 64, 'extreme learning machinean': 65, 'singular value decomposition': 66, 'inductive logic programming': 67, 'integer linear programming': 68, 'integer linear program': 69, 'adaptive multi': 70, 'adaptive multi - view feature selection': 71, 'mixture time invariant': 72, 'medical text indexer': 73, 'siamese neural network': 74, 'spiking neural networks': 75, 'bidirectional long short term memory': 76, 'bidirectional lstm': 77, 'sentence order prediction': 78, 'secrecy outage probability': 79, 'singing voice detection': 80, 'singular vector decomposition': 81, 'maximum likelihood estimation': 82, 'maximum likelihood': 83, 'maximum log - likelihood estimation': 84, 'cosine distance': 85, 'contrastive divergence': 86, 'consecutive disks': 87, 'critical difference': 88, 'chamfer distance': 89, 'contact distance': 90, 'cover difference': 91, 'chemical diagram': 92, "crohn 's disease": 93, 'optical character recognition': 94, 'one - to - one character replacements': 95, 'graph neural network': 96, 'graph neural networks': 97, 'machine reading comprehension': 98, 'maximal ratio combining': 99, 'magnetic resonance coupling': 100, 'maximum ratio combination': 101, 'stable abstraction principle': 102, 'simple amplitude presitortion': 103, 'determinantal point process': 104, 'disjoint paths problem': 105, 'quantile random forest': 106, 'quantile regression forest': 107, 'google cloud messaging': 108, 'generalized cell - to - cell mapping': 109, 'general circulation model': 110, 'galois / counter mode': 111, 'global circulation model': 112, 'poisson point process': 113, 'palm point process': 114, 'fully convolutional neural network': 115, 'fully convolutional network': 116, 'fully connected network': 117, '3d fully convolutional neural': 118, 'recurrent neural network': 119, 'random neural networks': 120, 'recursive neural network': 121, 'recurrent neural net': 122, 'reverse nearest neighbour': 123, 'machine learning': 124, 'model logic': 125, 'malware landscape': 126, 'mortar luminance': 127, 'bandwidth constraint': 128, 'betweenness centrality': 129, 'between class': 130, 'broadcast channel': 131, 'blockchain': 132, 'bayesian network': 133, 'batch normalization': 134, 'dynamic deterministic effects propagation networks': 135, 'telephone conversations': 136, 'time - continuous waveforms': 137, 'town crier': 138, 'tumor core': 139, 'time - continuous': 140, 'target country': 141, 'total cover': 142, 'traffic class': 143, 'total correlation': 144, 'tree structures': 145, 
'two state': 146, 'terminal stance': 147, 'temperature scaling': 148, 'temperature - based sampling': 149, 'tabu search': 150, 'triadic simmelian backbone': 151, 'thompson sampling': 152, 'time series': 153, 'time switching': 154, 'target syntactic': 155, 'text summarization': 156, 'triadic simmelian': 157, 'tessellation shader': 158, 'convolutional neural networks': 159, 'convolutional': 160, 'connected neural networks': 161, 'high performance computing': 162, 'heterogeneous platforms che2009rodinia': 163, 'hardware performance counters': 164, 'low density parity check': 165, 'low - rate structured': 166, 'orthogonal spectrum sharing': 167, 'open source software': 168, 'median absolute difference': 169, 'median absolute deviations': 170, 'map attention decision': 171, 'partial optimal slacking': 172, 'part of speech': 173, 'part - of - speech': 174, 'recurrent neural networks': 175, 'recurrent neural networks(rnns': 176, 'stochastic gradient descent': 177, 'simple gradient descent': 178, 'you look only once': 179, 'you only look once': 180, 'you only look onceyolo2016': 181, 'shot multibox detector': 182, 'solid state disk': 183, 'single shot multi - box detector': 184, 'single shot detection': 185, 'feature alignment': 186, 'foundry also': 187, 'fractional anisotropy': 188, 'feedback alignment': 189, 'fault analysis': 190, 'failure analysis': 191, 'firefly algorithm': 192, 'false alarm': 193, 'fault attack': 194, 'gaussian process adaptation': 195, 'generalized procrustes analysis': 196, 'graph partition algorithm': 197, 'adaptive patch selection': 198, 'adaptive patch search': 199, 'american physical society': 200, 'augmented path schema': 201, 'finite element method': 202, 'finite element methodgm97': 203, 'computed tomography': 204, 'constraint theory': 205, 'contributor trust': 206, 'conditional training': 207, 'crowd trust': 208, 'considered the task': 209, 'confidential transactions': 210, 'coordinated turn': 211, 'class table': 212, 'computer aided diagnosi': 213, 'coronary artery disease': 214, 'computer aided design': 215, 'computer assisted design': 216, 'computer aided diagnosis': 217, 'imbalance ratio': 218, 'individual rationality': 219, 'information retrieval': 220, 'interference range': 221, 'immediate regret': 222, 'influence rank': 223, 'influence ratio': 224, 'integrates results': 225, 'incremental relaying': 226, 'image resolution': 227, 'inactive region': 228, 'satisfiability modulo theory': 229, 'statistical machine translation': 230, 'statistical mt': 231, 'semantic mask transfer': 232, 'entity relationship': 233, 'error rate': 234, 'erdos - renyi': 235, 'experience replay': 236, 'error ratio': 237, 'estrogen receptor': 238, 'entity recognition': 239, 'encoder rnn': 240, 'journal citation report': 241, 'jointly convex representation': 242, 'region of interest': 243, 'requirement of increasing': 244, 'region outlined in green indicates': 245, 'stochastic topic block model': 246, 'stochastic tensor block model': 247, 'long short term memory': 248, 'long short - term memory models': 249, 'long - term short - term memory recurrent neural network': 250, 'long short - term memory networks': 251, 'long short - term memory neural network': 252, 'long short - term memory': 253, 'eyes closed': 254, 'end component': 255, 'evolutionary computation': 256, 'equivalence class': 257, 'eigenvector centrality': 258, 'effective concentration': 259, 'empty categories': 260, 'emergent configuration': 261, 'non - crossing edge pairs': 262, 'user equilibrium': 263, 'unreal engine': 264, 
'user equipment': 265, 'o - d demand estimation': 266, 'ordinary differential equation': 267, 'o - d estimation': 268, 'vector initialization': 269, 'variable importance': 270, 'variational inference': 271, 'variational information': 272, 'vegetation indices': 273, 'random indexing': 274, 'rand index': 275, 'distribution matching': 276, 'discovery of models': 277, 'dialog management': 278, 'directional modulation': 279, 'dynamic multi': 280, 'data management': 281, 'dialog manager': 282, 'amplified spontaneous emission': 283, 'average scale error': 284, 'achievable information rates': 285, 'application instance role': 286, 'bit error rate': 287, 'better bit error rate': 288, 'tracking logic': 289, 'transfer learning': 290, 'request to send': 291, 'real time strategy': 292, 'clear to send': 293, 'constrained topological sort': 294, 'medium access control': 295, 'multiple access channels': 296, 'mandatory access control': 297, 'message authentication code': 298, 'metropolitan airports commission': 299, 'multiply accumulate': 300, 'multiple access control': 301, 'distributed coordination function': 302, 'discriminative correlation filter': 303, 'weisfeiler - lehman kernel': 304, 'weisfeiler - lehman': 305, 'graphlet kernel': 306, 'greedy knapsack': 307, 'deep convolutional neural network': 308, 'dynamic convolutional neural network': 309, 'power spectral density': 310, 'phase shift difference': 311, 'corresponding arcs': 312, 'current account': 313, 'contention adaptions': 314, 'combinatorial auction': 315, 'cumulative activation': 316, 'cellular automata': 317, 'coordinate ascent': 318, 'classification accuracy': 319, 'context adaptation': 320, 'cardiac amyloidosis': 321, 'center for applied internet data analysis': 322, 'contextual attention': 323, 'conversational analysis': 324, 'certificate authority': 325, 'community animator': 326, 'conditioning augmentation': 327, 'character - level accuracy': 328, 'coded aperture': 329, 'iterative closest point': 330, 'inductive conformal prediction': 331, 'iterative cache placement': 332, 'mean absolute error': 333, 'maximum absolute error': 334, 'mean maximum absolute error': 335, 'maquinas de aprendizaje extremo': 336, 'mean average error': 337, 'peak signal - to - noise ratio': 338, 'peak signal noise ratio': 339, 'generative adversarial network': 340, 'manifold geometry': 341, 'general adversarial net': 342, 'generative adversarial neural network': 343, 'deep q - learning networks': 344, 'deep q networks': 345, 'deep q - network': 346, 'deep q learning': 347, 'double deep q network(ddqn': 348, 'duelling deep q - learning networks': 349, 'duelling deep q networks': 350, 'dulling deep q - network': 351, 'deep recurrent q - learning network': 352, 'deep recurrent q networks': 353, 'policy gradient deep neural networks': 354, 'policy gradient neural network': 355, 'deep deterministic policy gradient': 356, 'deep deterministic policy gradientlillicrap2015continuous': 357, 'reinforcement learning': 358, 'representation learning': 359, 'robot learning': 360, 'relative location': 361, 'restrained lloyd': 362, 'resource limitations': 363, 'robust locomotion': 364, 'retore logicit': 365, 'redefine linear': 366, 'conditional random field': 367, 'constant rate factor': 368, 'correlation robust function': 369, 'noun phrase': 370, 'non - emptiness problem': 371, 'neural pooling': 372, 'no peepholes': 373, 'not present': 374, 'natural problem': 375, 'new persian': 376, 'neural processes': 377, 'named entity recognition': 378, 'named entity recognitionnamed': 
379, 'named entity recognizer': 380, 'deep neural network': 381, 'deep artificial neural networks': 382, 'deep neural network(dnn': 383, 'dense neural network': 384, 'domain - specific language': 385, 'distributed spectrum ledger': 386, 'cross validation': 387, 'constant velocity': 388, 'computer vision': 389, 'crowd votes': 390, 'oblivious transfer': 391, 'optimal transport': 392, 'orthogonal training': 393, 'optimality theory': 394, 'multi - party computation': 395, 'model predictive control': 396, 'massively parallel computation': 397, 'audio commons': 398, 'attack criteria': 399, 'auto - correlation': 400, 'actor - critic': 401, 'atrous convolution': 402, 'access category': 403, 'autonomic computing': 404, 'activation clustering': 405, 'admission control': 406, 'alternating current': 407, 'access categories': 408, 'avoid congestion': 409, 'afterward - confirm': 410, 'astronomy and astrophysics': 411, 'astronomy astrophysics': 412, 'authorship attribution': 413, 'affine arithmetic': 414, 'adamic adar': 415, 'temporal interactions': 416, 'threshold initialization': 417, 'tone injection': 418, 'temporal information': 419, 'confidentiality , integrity , and availability': 420, 'central intelligence agency': 421, 'preimage resistant': 422, 'preference ratio': 423, 'patient record': 424, 'pilot reuse': 425, 'precision - recall': 426, 'perfect reconstruction': 427, 'passage retrieval': 428, 'pagerank': 429, 'perfectly reconstructible': 430, 'collision resistant': 431, 'cognitive radio': 432, 'communication region': 433, 'containment relations': 434, 'collective rationality': 435, 'coreference resolution': 436, 'code rate': 437, 'carriage returns': 438, 'contention resolution': 439, 'contains relatively': 440, 'interference factor': 441, 'instantaneous frequency': 442, 'intermediate frequency': 443, 'indian and foreign': 444, 'isolation forest': 445, 'simple power analysis': 446, 'saturation peak analysis': 447, 'spatial preferential attachment': 448, 'scalar multiplication': 449, 'spatial modulation': 450, 'scattering modulation': 451, 'streaming multiprocessors': 452, 'synthesis module': 453, 'stream multiprocessor': 454, 'speaker model': 455, 'supplementary material': 456, 'spectral matching': 457, 'social media': 458, 'single service manager': 459, 'service manager': 460, 'shared memory': 461, 'state machine': 462, 'system model': 463, 'point multiplication': 464, 'polarization - multiplexed': 465, 'probabilistic model': 466, 'physical machines': 467, 'prediction model': 468, 'randomized projective coordinate': 469, 'remote procedure calls': 470, 'fixed point multiplication': 471, 'face prediction model': 472, 'message passing interface': 473, 'multiple parallel instances': 474, 'global arrays': 475, 'genetic algorithm': 476, 'graduated assignment': 477, 'greater applications': 478, 'louisiana state university': 479, 'load - store unit': 480, 'pittsburgh supercomputing center': 481, 'partial set cover': 482, 'paper sentence classification': 483, 'machine translation': 484, 'microsoft translator:(http://www.microsofttranslator.com': 485, 'into english': 486, 'merkle tree': 487, 'computer systems': 488, 'computer science': 489, 'clonal selection': 490, 'connection size': 491, 'computational science': 492, 'centralized solution': 493, 'compressive sensing': 494, 'core semantics': 495, 'coordinated scheduling': 496, 'charging station': 497, 'constraint solver': 498, 'conventional sparsity': 499, 'compressed sensing': 500, 'critical section': 501, 'common subset': 502, 'content store': 503, 
'case - sensitive': 504, 'consensus score': 505, 'code - switching': 506, 'cluster - specific': 507, 'length of stay': 508, 'line of sight': 509, 'random forest': 510, 'radio frequency': 511, 'random forest classifier': 512, 'regression function': 513, 'regression forest': 514, 'register file': 515, 'hourly - similarity': 516, 'horn and schunck': 517, 'hierarchical softmax': 518, 'vector space model': 519, 'vacationing server model': 520, 'charging current': 521, 'corpus callosum': 522, 'collision cone': 523, 'cross - correlation': 524, 'creative commons': 525, 'central cloud': 526, 'classifier chain': 527, 'closeness centrality': 528, 'constant charging': 529, 'corresponding charging': 530, 'cover complexity': 531, 'connected caveman': 532, 'constant current': 533, 'collaboration coefficient': 534, 'covert channels': 535, 'correlation constraints': 536, 'core connected': 537, 'deep belief network': 538, 'dynamic bayesian network': 539, 'deep belief network models': 540, 'directed belief net': 541, 'dimension reduction': 542, 'demand response': 543, 'diagnosis record': 544, 'detecting repetitions': 545, 'dispersion reduction': 546, 'digit reversal': 547, 'differential rectifier': 548, 'deprived rejected': 549, 'dimensionality reduction': 550, 'document retrieval': 551, 'decoder rnn': 552, 'expectation maximization': 553, 'exact match': 554, 'equivalently maximizes': 555, 'electron microscopy': 556, 'feature selection': 557, 'frame semantic': 558, 'fraudulent services': 559, 'fully sampled': 560, 'fragment shader': 561, 'moving average': 562, 'multiple assignment': 563, 'merlin - arthur': 564, 'mobile agent': 565, 'method a': 566, 'particle swarm optimization': 567, 'orthogonal least - square': 568, 'power system operations': 569, 'artificial bee colony': 570, 'atlas of biosynthetic gene clusters': 571, 'absorbing boundary condition': 572, 'message passing': 573, 'matching pursuit': 574, 'meets performance': 575, 'most popular': 576, 'max pooling': 577, 'mean precision': 578, 'mask pyramid': 579, 'bare metal': 580, 'black males': 581, 'virtual machine': 582, 'visual module': 583, 'von mises': 584, 'building educational applications': 585, 'bond energy algorithm': 586, 'maximum mean discrepancy': 587, 'minimizes marginal distribution': 588, 'fully connected': 589, 'fusion center': 590, 'fixed confidence': 591, 'filter controls': 592, 'fusion centre': 593, 'frame content': 594, 'fashion compatibility': 595, 'fiscal code': 596, 'function processor': 597, 'false positive': 598, 'frequency partitioning': 599, 'floating point': 600, 'failure prediction': 601, 'fixed point': 602, 'software defined radio': 603, 'semidefine relaxation': 604, 'structured domain randomization': 605, 'backoff': 606, 'bayesian optimisation': 607, 'boiler on': 608, 'euler - lagrange': 609, 'entity linking': 610, 'edge length': 611, 'episode length': 612, 'external links': 613, 'least squares boosting': 614, 'least significant bit': 615, 'fisher information matrix': 616, 'fragment identifier messaging': 617, 'fast iterative method': 618, 'direction facilities': 619, 'dominating frequencies': 620, 'agent communication language': 621, 'access control list': 622, 'australian research council': 623, 'adaptive - robust control': 624, 'synthetic aperture radar': 625, 'socially assistive robots': 626, 'search and rescue': 627, 'sensing application recently': 628, 'open systems interconnection': 629, 'open source initiative': 630, 'random linear coding': 631, 'random linear codes': 632, 'radio link control': 633, 'quality of 
experience': 634, 'quality of user experience': 635, 'running sum': 636, 'residual splash': 637, 'rate - selective': 638, 'relay station': 639, 'random search': 640, 'remote sensing': 641, 'recommender systems': 642, 'rate splitting': 643, 'randomly sampled': 644, 'results show': 645, 'random split': 646, 'rate saturation': 647, 'real satellite': 648, 'neural network': 649, 'nearest neighbor': 650, 'nearest neighboring': 651, 'model efficiently': 652, 'mixture - of - experts': 653, 'mixture of experts': 654, 'sequential model - based algorithm configuration': 655, 'sequential model - based optimization for general algorithm configuration': 656, 'negative binomial': 657, 'naive bayes': 658, 'new brunswick': 659, 'higher - order spectra': 660, 'higher order statistics': 661, 'structural accuracy': 662, 'single architecture': 663, 'stacked autoencoders': 664, 'simulated annealing': 665, 'signal analysis': 666, 'sensitivity analysis': 667, 'scheme as': 668, 'strongly adaptive': 669, 'sensing antennas': 670, 'significance and accuracy': 671, 'satisfies aass': 672, 'situational awareness': 673, 'subspace alignment': 674, 'steepest ascent': 675, 'scores anatomical': 676, 'string analysis': 677, 'expected improvement': 678, 'epidemic intelligence': 679, 'event interaction': 680, 'matthews correlation coefficient': 681, 'minimum coefficient correlation': 682, 'mobile cloud computing': 683, 'mesoscale cellular convection': 684, 'maximal connected component': 685, 'receiver operating characteristic': 686, 'receiver operating curve': 687, "receiver operating characteristic 's": 688, 'restricted orthogonal constants': 689, 'naive bayes classifier': 690, 'non - parametric bayesian classification': 691, 'string kernel': 692, 'septic shock': 693, 'global vectors for word representation': 694, 'global word vectors': 695, 'medial temporal lobe': 696, 'methods that learn': 697, 'multi - task learning': 698, 'mobile edge computing': 699, 'multi - access edge computing': 700, 'imitation learning': 701, 'intermediate level': 702, 'dynamic movement primitives': 703, 'digital motion processor': 704, 'graph compression problem': 705, 'grid connection point': 706, 'intellectual property': 707, 'internet protocol': 708, 'inductive programming': 709, 'inverse proportion': 710, 'intercept probability': 711, 'image preprocessing': 712, 'integer programming': 713, 'integer program': 714, 'transport layer security': 715, 'terrestrial laser scanning': 716, 'probe attempt detector': 717, 'presentation attack detection': 718, 'active shape model': 719, 'alphabet set multiplier': 720, 'dynamic mirror descent': 721, 'digital micro - mirror device': 722, 'deficient mapping dissolution': 723, 'exponential moving average': 724, 'ecological momentary assessment': 725, 'mean absolute percentage error': 726, 'mean average percent error': 727, 'maximum - a - posteriori': 728, 'maximum a posteriori': 729, 'mean average precision': 730, 'max a posterior': 731, 'measures average precision': 732, 'maximum a posteriori probability': 733, 'objects(maximum a posteriori': 734, 'distinguished name': 735, 'destination node': 736, 'pre - activation convolutional cell(the': 737, 'pearson correlation coefficient': 738, 'adversarial loss': 739, 'active learning': 740, 'ultrasound': 741, 'united states': 742, 'uncertainty sampling': 743, 'magnetic resonance': 744, 'minimum read': 745, 'majority rule': 746, 'model risk': 747, 'middle resolution': 748, 'mean recall': 749, 'machine reading': 750, 'meaning representation': 751, 'morphological 
richness': 752, 'mixed reality': 753, 'high - resolution': 754, 'heart rate': 755, 'inception score': 756, 'importance sampling': 757, 'information systems': 758, 'large deviation principle': 759, 'low degeneracy partition': 760, 'local differential privacy': 761, 'general data protection regulation': 762, 'general data protection rule': 763, 'average causal effect': 764, 'advanced combined encoder': 765, 'average coverage error': 766, 'orthogonal pilot sequences': 767, 'one posterior sample': 768, 'code division multiple access': 769, 'code division multiple access)(cdma': 770, 'maximum distance separable': 771, 'minimum dominating set': 772, 'multi - dimensional scaling': 773, 'intrusion prevention system': 774, 'inverse propensity scaling': 775, 'interactive proof systems': 776, 'building management system': 777, 'battery management system': 778, 'prepositional phrase': 779, 'point process': 780, 'pairwise product': 781, 'pairwise perturbation': 782, 'privacy preferences': 783, 'present and predominant': 784, 'promise problems': 785, 'particle filter': 786, 'pareto - fair': 787, 'propagation fusion': 788, 'power flow': 789, 'poloidal field': 790, 'naive fusion': 791, 'noise figure': 792, 'normalizing flows': 793, 'new foundations': 794, 'technology acceptance model': 795, 'transparent attention model': 796, 'discrete fourier transform': 797, 'density functional theory': 798, 'discrete - time fourier transform': 799, 'discrete fourier transformation': 800, 'disk failure tolerant': 801, 'design - for - test': 802, 'conditional kernel density': 803, 'child key derivation': 804, 'chronic kidney disease': 805, 'auto - regression': 806, 'average recall': 807, 'anaphora resolution': 808, 'augmented reality': 809, 'accumulated reward': 810, 'cloud service providers': 811, 'constraint satisfaction problems': 812, 'consistency availability partition': 813, 'cumulative accuracy profit': 814, 'carrier - less amplitude and phase': 815, 'data store module': 816, 'data stream manager': 817, 'demand side management': 818, 'distributional semantic model': 819, 'digital surface model': 820, 'content addressed storage': 821, 'computer algebra systems': 822, 'consensus attention sum': 823, 'progressive disease': 824, "prisoner 's dilemma": 825, 'pu - primary destination': 826, 'positive definite': 827, "parkinson 's disease": 828, 'positive definiteness': 829, 'prisoner dilemma': 830, 'pixel discussion': 831, "parkinson 's progression markers initiative": 832, 'positive pointwise mutual information': 833, 'point - wise mutual information': 834, 'magnetic resonance imaging': 835, 'mr imaging': 836, 'electronic health records': 837, 'energy harvesting receivers': 838, 'linear complementarity problem': 839, 'locally compact polish': 840, 'longest common prefix': 841, 'linearly compressed page': 842, 'strong dominance': 843, 'secure digital': 844, 'standard deviation': 845, 'strategic dependency': 846, 'soft decision': 847, 'symbolic differentiation': 848, 'sphere decoding': 849, 'selection diversity': 850, 'stochastically dominate': 851, 'structural diagram': 852, 'generalized value function': 853, 'gradient vector flow': 854, 'adaptive segmentation algorithm': 855, 'accessible surface area': 856, 'gaussian mixture model': 857, 'group marching method': 858, 'molecular dynamics': 859, 'morphological disambiguation': 860, 'mixed decoding': 861, 'model distillation': 862, 'memoryless deterministic': 863, 'mean diffusivity': 864, 'massa de dados': 865, 'multiple description': 866, 'missed detection': 867, 'linear 
programming': 868, 'label powerset': 869, 'linear programscls19': 870, 'linear program': 871, 'lagrangian relaxation': 872, 'label propagation': 873, 'critical path method': 874, 'cost per mille impressions': 875, 'competition performance metric': 876, 'completely positive maps': 877, 'clique percolation method': 878, 'continuous profile model': 879, 'cost per mille': 880, 'context mover': 881, 'distance': 882, 'hausdorff distance': 883, 'high definition': 884, 'hard decision': 885, 'harmonic distortion': 886, "huntington 's disease": 887, 'levenshtein distance': 888, 'line difference': 889, 'loads data': 890, 'large deviation': 891, 'link density': 892, 'liver': 893, 'lateral inhibition': 894, 'stomach': 895, 'split - turnip': 896, 'sleep telemetry': 897, 'semantic tagging': 898, 'single trails': 899, 'smart thermostat': 900, 'steiner tree': 901, 'duodenum': 902, 'direct urls': 903, 'left kidney': 904, 'logik klassische': 905, 'right kidney': 906, 'root key': 907, 'alloctcsharing': 908, 'alloctc - sharing': 909, 'russian dolls model': 910, 'representational dissimilarity matrix': 911, 'nudged elastic band': 912, 'next event backtracking': 913, 'collaborative filtering': 914, 'crest factor': 915, 'code - mixed factor': 916, 'complexity factor': 917, 'correlation filter': 918, 'click - through rate': 919, 'click through rates': 920, 'collaborative topic regression': 921, 'character transfer rate': 922, 'matrix factorization': 923, 'model fair': 924, 'membership function': 925, 'model - free': 926, 'energy transmitters': 927, 'evidence theory': 928, 'enhancing tumor': 929, 'elastic transformations': 930, 'emission tomography': 931, 'approximate nearest neighbor': 932, 'artificial neural network': 933, 'probability distribution functions': 934, 'probability density functions': 935, 'artificial intelligence': 936, 'article influence': 937, 'author increase': 938, 'deep packet inspection': 939, 'data processing inequality': 940, 'key performance indicators': 941, 'key performance indices': 942, 'logistic regression': 943, 'learning rate': 944, 'linear regression': 945, 'low resolution': 946, 'lp relaxation': 947, 'low rank': 948, 'true positive rate': 949, 'tensor product representation': 950, 'round robin': 951, 'recurrent refinement': 952, 'relevance rate': 953, 'relative ranking': 954, 'reverse reachable': 955, 'language modeling': 956, 'language model': 957, 'langugae models': 958, 'logarithmically scaled magnitude': 959, 'lagrange multiplier method': 960, 'levenberg macquardt': 961, 'root system architecture': 962, 'rivest shamir adleman': 963, 'operating systems': 964, 'overlap success': 965, 'output stride': 966, 'orientation score': 967, 'operating system': 968, 'sdn ran controller': 969, 'sparse representation - based classification': 970, "spearsman 's rank correlation": 971, 'sparse representation classification': 972, 'sparse representation based classifier': 973, 'long term evolution': 974, 'language transmission engine': 975, 'heterogeneous network': 976, 'hierarchical network': 977, 'long short term memory networks': 978, 'prediction intervals': 979, 'provider independent': 980, 'power iteration': 981, 'purchase intention': 982, 'linear - time temporal logic': 983, 'linear temporal logic': 984, 'linear time logic': 985, 'probabilistic neural network': 986, 'product - based neural network': 987, 'progressive neural networks': 988, 'resource description framework': 989, 'rate distortion function': 990, 'resource description format': 991, 'random decision forests': 992, 'gross 
domestic product': 993, 'generalized differential privacy': 994, 'good distribution practice': 995, 'smart object': 996, 'stack overflow': 997, 'surrogate outcomes': 998, 'security management provider': 999, 'symmetric multi processor': 1000, 'stable marriage problem': 1001, 'distributed control': 1002, 'dublin core': 1003, 'direct current': 1004, 'dual connectivity': 1005, 'disorder constraints': 1006, 'disconnected components': 1007, 'direct click': 1008, 'descriptive complexity': 1009, 'data consistency': 1010, 'datacenter': 1011, 'dice coefficient': 1012, 'deep convolutional': 1013, 'deficit counter': 1014, 'dynamic cluster': 1015, 'approximate dynamic programming': 1016, 'absolute derivative privacy': 1017, 'energy storage': 1018, 'end systolic': 1019, 'evolutionary strategies': 1020, 'encrypted sharing': 1021, 'event synchronization': 1022, 'enterprise storage': 1023, 'entropy search': 1024, 'elevation angle spread': 1025, 'exhaustive search': 1026, 'external search': 1027, 'embedding weight sharing': 1028, 'attention deficit hyperactivity disorder': 1029, 'attention deficit hyperactive disorder': 1030, 'temporal resolution': 1031, 'tone reservation': 1032, 'average outage duration': 1033, 'angle opening distance': 1034, 'arithmetic mean': 1035, 'activation maximization': 1036, 'alternating minimization': 1037, 'quadratic programming': 1038, 'quantum pareto': 1039, 'quantisation parameter': 1040, 'test case prioritization': 1041, "top concept 's popularity": 1042, 'transductive conformal prediction': 1043, 'transmission control protocol': 1044, 'stanford drone dataset': 1045, 'standard desktop display': 1046, 'class activation maps': 1047, 'class activation mapping': 1048, 'virtual adversarial training': 1049, 'visceral adipose tissue': 1050, 'gaussian process': 1051, 'geometric programming': 1052, 'spectral angle distance': 1053, 'speech activity detection': 1054, 'original images': 1055, 'operational intensity': 1056, 'stacked refinement': 1057, 'secrecy rate': 1058, 'segment representation': 1059, 'spatial resolution': 1060, 'success rate': 1061, 'super resolution': 1062, 'speech recognition': 1063, 'small resolution': 1064, 'strategic rationale': 1065, 'simulation results': 1066, 'systematic review': 1067, 'space situational awareness': 1068, 'static single assignment': 1069, 'super sense': 1070, 'stanford sentiment treebank': 1071, 'shows similar trend': 1072, 'sound pressure level': 1073, 'success weighted by ( normalized inverse ) path length': 1074, 'shortest path length': 1075, 'standard plane location': 1076, 'true positives': 1077, 'temporal pooler': 1078, 'false negatives': 1079, 'focusing network': 1080, 'true negative': 1081, 'total noise': 1082, 'average precision': 1083, 'access point': 1084, 'asymptotic preserving': 1085, 'associated press': 1086, 'acute pancreatitis': 1087, 'access part': 1088, 'affinity propagation': 1089, 'dimension estimation': 1090, 'differential evolution': 1091, 'dataexplorer': 1092, 'details': 1093, 'deterministic equivalent': 1094, 'data efficiency': 1095, 'pulse amplitude': 1096, 'partitioning around medoid': 1097, 'passive acoustic monitoring': 1098, 'pulse amplitude modulation': 1099, 'markov geographic model': 1100, 'manifold geometry matching': 1101, 'automatic speech recognition': 1102, 'average sum rate': 1103, 'arabic dialect identification': 1104, 'exponential random graph models': 1105, 'exponential - family random graph models': 1106, 'description length': 1107, 'deep learning': 1108, 'dice loss': 1109, 'description logics': 1110, 
'downlink': 1111, 'distributed ledger': 1112, 'depth loss': 1113, 'description logicsfirst': 1114, 'dogleg': 1115, 'context free grammar': 1116, 'control flow graph': 1117, 'constraint programming': 1118, 'cyclic prefic': 1119, 'clustered placement': 1120, 'central processor': 1121, 'canonical polyadic': 1122, 'completely positive': 1123, 'constraint problem': 1124, 'control program': 1125, 'candecomp / parafac': 1126, 'conformal prediction': 1127, 'core periphery': 1128, 'local search': 1129, 'least squares': 1130, 'logarithmically spaced': 1131, 'linear systemswe': 1132, 'location service': 1133, 'dynamic programming': 1134, 'distance precision': 1135, 'declustered placement': 1136, 'dirichlet process': 1137, 'drift - plus penalty': 1138, 'dropped pronoun': 1139, 'direct proportion': 1140, 'differential privacy': 1141, 'disjunctive programming': 1142, 'dronemap planner': 1143, 'dynamic program': 1144, 'convergence layer protocol': 1145, 'coin - or linear program solver': 1146, 'special airworthiness certificate': 1147, 'soft actor critic': 1148, 'high altitude platform': 1149, 'hybrid access point': 1150, 'software defined networking': 1151, 'software - defined radio': 1152, 'network function virtualization': 1153, 'virtualized network functions': 1154, 'ccsd file delivery protocol': 1155, 'file delivery protocol': 1156, 'multi - layer same - resolution compressed': 1157, 'mobile switching center': 1158, 'channel reliability measurement': 1159, 'counterfactual risk minimization': 1160, 'large displacement optical flow': 1161, 'local distance - based outlier factor': 1162, 'black and anandan': 1163, 'binary agreement': 1164, 'barabasi albert': 1165, 'bundle adjustment': 1166, 'balanced accuracy': 1167, 'bee algorithm': 1168, 'biçimbilimsel ayrıştırıcılara': 1169, 'modelling simulation': 1170, 'multiple sclerosis': 1171, 'mean shift': 1172, 'missed speech': 1173, 'main - sequence': 1174, 'multiple segment multiple instance learning': 1175, 'mid stance': 1176, 'mobile station': 1177, 'medical sentiment': 1178, 'shortest dependency path': 1179, 'semi - definite programming': 1180, 'stable dependencies principle': 1181, 'canonical correlation analysis': 1182, 'canonical correlation analysis(kernel': 1183, 'low - rank multimodal fusion': 1184, 'lower membership function': 1185, 'multi - layer perceptron': 1186, 'multilayer perceptron': 1187, 'perceptrones multicapa': 1188, 'multi - layer neural network': 1189, 'multiple layer perception': 1190, 'global average pooling': 1191, 'generative adversarial perturbations': 1192, 'generalized assignment problem': 1193, 'global average precision': 1194, 'group average pool': 1195, 'beat per minute': 1196, 'business process modelling': 1197, 'bias disparities': 1198, 'bjontegaard delta': 1199, 'block diagonalization': 1200, 'benders decomposition': 1201, 'electronic hospital record': 1202, 'question answering': 1203, 'question - answer': 1204, 'quantum annealing': 1205, 'nonnegative matrix factorization': 1206, 'negative matrix factorization': 1207, 'non - negative matrix deconvolution': 1208, 'non - negative matrix factorizationding:2006': 1209, 'concordance correlation coefficient': 1210, 'congruence coefficient correlation': 1211, 'rich club': 1212, 'radon consistency': 1213, 'reading comprehension': 1214, 'relation classification': 1215, 'remote control': 1216, 'resource control': 1217, 'recurrent convolution': 1218, 'radio control': 1219, 'rate constrained': 1220, 'region covariance based method': 1221, 'red clump': 1222, 'reservoir computing': 
1223, 'current iteration': 1224, 'confidence intervals': 1225, 'constructive interference': 1226, 'class imbalance': 1227, 'conditional independence': 1228, 'current instruction': 1229, 'cochlear implant': 1230, 'continuous integration': 1231, 'close - in': 1232, 'computational intelligence': 1233, 'conditionally independent': 1234, 'stimulus onset asynchrony': 1235, 'service oriented architecture': 1236, 'neural machine translation': 1237, 'neural equivalent': 1238, 'medication assisted treatment': 1239, 'motionless analysis of traffic': 1240, 'multi - fingered adaptive tactile grasping': 1241, 'million song dataset': 1242, 'modified list sphere decoding': 1243, 'most significant digit': 1244, 'geometric brownian motion': 1245, 'gradient boosting machine': 1246, 'autonomous system': 1247, 'angular spread': 1248, 'adaptive softmax': 1249, 'ancillary service': 1250, 'azimuth angle spread': 1251, 'attention sum': 1252, 'antenna spacing': 1253, 'total difficulty': 1254, 'time - discrete': 1255, 'technical debt': 1256, 'temporal difference': 1257, 'temporal dimension': 1258, 'training dataset': 1259, 'training data': 1260, 'target dependent': 1261, 'top - down': 1262, 'time - domain': 1263, 'train dataset': 1264, 'consumer price index': 1265, 'conditional predictive impact': 1266, 'relational neighbors': 1267, 'radical nephrectomy': 1268, 'random graphs': 1269, 'relay nodes': 1270, 'random noise': 1271, 'radial normalization': 1272, 'secondary node': 1273, 'source node': 1274, 'substantia nigra': 1275, 'spectral normalization': 1276, 'online social networks': 1277, 'online social network': 1278, 'same place different time': 1279, 'same place different time transmission': 1280, 'random vaccination': 1281, 'right ventricle': 1282, 'randomized voting': 1283, 'random voting': 1284, 'random variable': 1285, 'resilience vector': 1286, 'range view': 1287, 'acquaintance vaccination': 1288, 'anti - virus': 1289, 'antivirus': 1290, 'autonomous vehicle': 1291, 'automated vehicle': 1292, 'air change rates': 1293, 'absolute category rating': 1294, 'common neighbours': 1295, 'clustered networks': 1296, 'core network': 1297, 'cognitively normal': 1298, 'common name': 1299, 'common noun': 1300, 'bot': 1301, 'brownian': 1302, '50x50 fiber coupler': 1303, 'convolution layer with channels of input': 1304, 'conformity': 1305, 'color filter arrays': 1306, 'constriction factor approach': 1307, 'counterfactual future advantage': 1308, 'data availability statement': 1309, 'disclosure avoidance system': 1310, 'unmanned aerial vehicles': 1311, 'unmanned air vehicles': 1312, 'hybrid fusion': 1313, 'high frequency': 1314, 'mean - centering': 1315, 'myocardium': 1316, 'monte carlo': 1317, 'multi connectivity': 1318, 'marginal contribution': 1319, 'markov chain': 1320, 'mutual cover': 1321, 'matrix converter': 1322, 'absolute trajectory error': 1323, 'average translation error': 1324, 'relative pose error': 1325, 'retinal pigment epithelium': 1326, 'codeword mixture sampling': 1327, 'counting monadic second': 1328, 'dynamic vision sensor': 1329, 'dynamic voltage scaling': 1330, 'entire distributions': 1331, 'economic dispatch': 1332, 'end diastolic': 1333, 'emergency department': 1334, 'embedded deformation': 1335, 'euclidean distance': 1336, 'energy detection': 1337, 'australian privacy principles': 1338, 'a posteriori probability': 1339, 'canalizing map': 1340, 'centroid methods': 1341, 'confusion matrix': 1342, 'continental margin': 1343, 'corporate messaging': 1344, 'choir mix': 1345, 'coded modulation': 1346, 'sleep 
cassette': 1347, 'sum capacity': 1348, 'steering control': 1349, 'smallest class': 1350, 'successive cancellation': 1351, 'score contextualisation': 1352, 'self cover': 1353, 'subset compared': 1354, 'spectral clustering': 1355, 'smart contract': 1356, 'self consistency': 1357, 'selection combining': 1358, 'sum capacities': 1359, 'symmetry condition': 1360, 'single connectivity': 1361, 'special case': 1362, 'spatial crowdsourcing': 1363, 'strongly connected': 1364, 'similarity weight': 1365, 'sliding window': 1366, 'small - world': 1367, 'recurrent convolutional neural network': 1368, 'region based convolutional neural network': 1369, 'entity set expansion': 1370, 'extract similar entities': 1371, 'gradient episodic memory': 1372, 'grid entropy measurement': 1373, 'orthogonal least square': 1374, 'ordinary least square': 1375, 'opportunistic spectrum access': 1376, 'obstructive sleep apnoea': 1377, 'satisfaction function': 1378, 'sequential fixing': 1379, 'scale free': 1380, 'structure fusion': 1381, 'small faces': 1382, 'scale - free': 1383, 'separable footprints': 1384, 'state - feedback': 1385, 'cumulative distribution function': 1386, 'cumulative density function': 1387, 'cluster head': 1388, 'cluster head(ch': 1389, 'constraint handling': 1390, 'belief propagation': 1391, 'bin packing': 1392, 'basis pursuit': 1393, 'backprop': 1394, 'back propagation': 1395, 'backdoor poisoning': 1396, 'bundle protocol': 1397, 'best performing': 1398, 'latent class': 1399, 'local conditioning': 1400, 'line card': 1401, 'least confidence': 1402, 'largest class': 1403, 'land cover': 1404, 'latent clustering': 1405, 'lyrics comprehension': 1406, 'foveal tilt effects': 1407, 'full time employment': 1408, 'hough transform': 1409, 'hoeffding tree': 1410, 'the persistence of mortar cues': 1411, 'persistent mortar cues': 1412, 'symbol error rate': 1413, 'speaker error rate': 1414, 'speech emotion recognition': 1415, 'base station': 1416, 'beam search': 1417, 'brier score': 1418, 'batch size': 1419, 'standard beam search': 1420, 'bayesian sets': 1421, 'bidirectional similarity': 1422, 'cooperative non orthogonal multiple access': 1423, 'conventional orthogonal multiple access': 1424, 'best target': 1425, 'back translation': 1426, 'bernoulli trial': 1427, 'random beamforming': 1428, 'resource blocks': 1429, 'rank - based': 1430, 'reduced basis': 1431, 'random -reduced basis': 1432, 'rosi braidotti': 1433, 'universal dependencies': 1434, 'unified distillation': 1435, 'gaussian noise': 1436, 'grid name': 1437, 'gauss - newton': 1438, 'high - order order orthogonal iteration': 1439, 'higher order orthogonal iteration': 1440, 'graph convolutional neural network': 1441, 'geodesic convolution neural network': 1442, 'graph convolutional neural networks': 1443, 'generalised convolutional neural network': 1444, 'graph convolution networks': 1445, 'global convolution networks': 1446, 'accuracy': 1447, 'accuracies': 1448, 'rectified linear unit': 1449, 'repeated convolutional': 1450, 'hierarchical attention network': 1451, 'heterogeneous attributed network': 1452, 'hierarchical matching pursuit': 1453, 'hypermutations with mutation potential': 1454, 'area under precision recall': 1455, 'area under the precision vs. 
recall curve': 1456, 'byte pair encoding': 1457, 'backward partial execution': 1458, 'kernel density estimation': 1459, 'kernel distribution estimation': 1460, 'autism spectrum disorders': 1461, 'average surface distance': 1462, 'sliced wasserstein distance': 1463, 'semantic web deployment': 1464, 'behance artistic media': 1465, 'best alignment metric': 1466, 'bandwidth allocation model': 1467, 'spatial skeleton realignment': 1468, 'sparse signal recovery': 1469, 'spectral super - resolution': 1470, 'energy buffer': 1471, 'energy beam': 1472, 'satellite imagery': 1473, 'semantic inpainting': 1474, 'signal - to - noise ratio': 1475, 'signal to noise ratio': 1476, 'simulation and numerical results': 1477, 'artificial noise': 1478, 'attention network': 1479, 'environment sound classification': 1480, 'ergodic sum capacity': 1481, 'traveling salesman problem': 1482, 'triad significance profile': 1483, 'locality preserving projections': 1484, 'load planning problem': 1485, 'local fisher': 1486, 'discriminant analysis': 1487, 'intelligent transportation system': 1488, 'interrupted time series': 1489, 'intelligent tutoring systems': 1490, 'corticospinal tract': 1491, 'china standard time': 1492, 'conditional mutual information': 1493, 'code - mixed index': 1494, 'successive convex approximation': 1495, 'scatter component analysis': 1496, 'smart cut algorithm': 1497, 'internet service providers': 1498, 'image signal processor': 1499, 'british national corpus': 1500, 'brown news corpus': 1501, 'mean average conceptual similarity': 1502, 'minimum average conceptual similarity': 1503, "american diabetes association 's": 1504, 'adaptive data augmentation': 1505, 'delay spread': 1506, 'direct sharing': 1507, 'data structure': 1508, 'data sharing': 1509, 'differentiated softmax': 1510, 'dempster - shafer': 1511, 'detection scores': 1512, 'subcutaneous adipose tissue': 1513, 'boolean satisfiability': 1514, 'satisfiability solving': 1515, 'modern standard arabic': 1516, 'multilevel splitting algorithm': 1517, 'multiple sequence alignment': 1518, 'national research foundation': 1519, 'national research foundation of korea': 1520, 'dual energy subtraction': 1521, 'defence equipment support': 1522, 'superposition of functional contours': 1523, 'service function chaining': 1524, 'purity': 1525, 'patterns': 1526, 'perceptual linear prediction': 1527, 'poisson line process': 1528, 'free response operating characteristic': 1529, 'free - response receiver operating characteristic': 1530, 'free receiver operating characteristic': 1531, 'false positive rate': 1532, 'fuzzy preference relation': 1533, 'boosted decision trees': 1534, 'bi - directional domain translation': 1535, 'process arrival pattern': 1536, 'policy administration point': 1537, 'roofline model': 1538, 'robot middleware': 1539, 'representation mixing': 1540, 'resource management': 1541, 'encoded archival description': 1542, 'exponential absolute distance': 1543, 'class activation mappings': 1544, 'deep context prediction': 1545, 'darwin correspondence project': 1546, 'velocity obstacle': 1547, 'visual odometry': 1548, 'error correcting code': 1549, 'elliptic curve cryptography': 1550, 'exchange solution in the considered': 1551, 'autonomous system number': 1552, 'average sample number': 1553, 'latent semantic analysis': 1554, 'licensed shared access': 1555, 'sequential monte carlo': 1556, 'sliding mode control': 1557, 'statistical model checking': 1558, 'secure multiparty computation': 1559, 'internet engineering task force': 1560, 'internet 
engineering task force(https://ietf.org/': 1561, 'eyes open': 1562, 'earth observation': 1563, 'evolutionary distribution algorithm': 1564, 'exploratory data analysis': 1565, 'deep reinforcement learning': 1566, 'distributional reinforcement learning': 1567, 'policy gradient': 1568, 'policy generator': 1569, 'property graph': 1570, 'range of motion': 1571, 'reduced - order models': 1572, 'intraclass correlation coefficient': 1573, 'implicit computational complexity': 1574, 'minimum bandwidth regenerating': 1575, 'minimum bounding rectangle': 1576, 'decision tree': 1577, 'delivery teams': 1578, 'recursive least squares': 1579, 'regularized least squares': 1580, 'random local search': 1581, 'strictly piecewise': 1582, 'streaming processors': 1583, 'set partitioning': 1584, 'subspace pursuit': 1585, 'stream processor': 1586, 'shilling profiles': 1587, 'spectral': 1588, 'semantic parsing': 1589, 'shortest path': 1590, 'spatial pooler': 1591, 'standards poors': 1592, 'sao paulo': 1593, 'set point': 1594, 'splitting problem': 1595, 'strictly local': 1596, 'separation logic': 1597, 'supervised learning': 1598, 'constrained least squares': 1599, 'beginning': 1600, 'complementary learning systems': 1601, 'adversarial risk analysis': 1602, 'accumulate repeat accumulate': 1603, 'points of interest': 1604, 'projection of interest': 1605, 'received signal strength': 1606, 'radio signal strength': 1607, 'random subcarrier selection': 1608, 'constrained spherical deconvolution': 1609, 'critical sensor density': 1610, 'contextual sentence decomposition': 1611, 'pixel - wise normalization': 1612, 'pointnet based': 1613, 'partial nephrectomy': 1614, 'provider aggregatable': 1615, 'philadelphia': 1616, 'physical access': 1617, "peano 's arithmetics": 1618, 'parallel attention': 1619, 'preferential attachment': 1620, 'power allocation considerations': 1621, 'power allocation': 1622, 'presburger arithmetic': 1623, 'fast dormancy': 1624, 'finite differences': 1625, 'density function': 1626, 'fractal dimension': 1627, 'fully - digital': 1628, 'initial contact': 1629, 'integrated circuit': 1630, 'independent cascading': 1631, 'statistical parameter mapping': 1632, 'saliency prediction model': 1633, 'spatial pyramid matching': 1634, 'amplify and forward': 1635, 'advanced feature': 1636, 'alzheimer': 1637, 'disease neuroimaging initiative': 1638, "alzheimer 's disease neuroimaging initiative": 1639, 'earth mover': 1640, "earth mover 's distance": 1641, 'excessive mapping dissolution': 1642, 'recurrent neural network language models': 1643, 'recurrent neural network - based language model': 1644, 'cyclic redundancy check': 1645, 'collaborative representation classification': 1646, 'return to launch': 1647, 'register transfer level': 1648, 'independent multiple kernel learning': 1649, 'single - task multiple kernel learning': 1650, 'artifact disentanglement network': 1651, 'activity driven networks': 1652, 'threshold updation': 1653, 'translation unit': 1654, 'oxford english corpus': 1655, 'online elliptical clustering': 1656, 'basic skill module': 1657, 'basic safety messages': 1658, 'bayesian neural networks': 1659, 'binarized neural network': 1660, 'binary neural networks': 1661, 'gradient variance regularizer': 1662, 'global visual representations': 1663, 'multiple input multiple output': 1664, 'massive multiple - input multiple - output': 1665, 'strongly connected components': 1666, 'static camera clusters': 1667, 'radiation therapy': 1668, 'retweets': 1669, 'reparameterization trick': 1670, 'response time': 
1671, 'ruthes': 1672, 'random target': 1673, 'region template': 1674, 'reaction wheels': 1675, 'random walk': 1676, 'rolling window': 1677, 'left ventricle': 1678, 'left ventricular': 1679, 'las vegas': 1680, 'large volumetric': 1681, 'renormalization group': 1682, 'riemmanian geometry': 1683, 'real graphs': 1684, 'reber grammar': 1685, 'internet research task force': 1686, 'internet research task force(https://irtf.org/': 1687, 'asteroidal triple': 1688, 'all threshold': 1689, 'adversarial training': 1690, 'all trials': 1691, 'adversarially trained': 1692, 'adaptive threshold': 1693, 'correct classification ratio': 1694, 'cross - document coreference resolution': 1695, 'correct correction rate': 1696, 'gain minus pain': 1697, 'global max pooling': 1698, "children 's book test": 1699, 'consensus - before - talk': 1700, 'area under the receiver operator characteristic': 1701, 'area under receiver operating characteristic curve': 1702, 'dynamic assignment ratio': 1703, 'defence application register': 1704, 'temporary scope association': 1705, 'taobao search advertising': 1706, 'temporal semantic analysis': 1707, 'myocardial infarction': 1708, 'mutual information': 1709, 'motor imagery': 1710, 'mathematical induction': 1711, 'model in': 1712, 'premature ventricular contraction': 1713, 'passive voltage contrast': 1714, 'wavelet transform': 1715, 'wild type': 1716, 'whole tumor': 1717, 'william thackeray': 1718, 'automated anatomical labeling': 1719, 'ambient assisted living': 1720, 'denoised auto - encoder': 1721, 'data assimilation': 1722, 'deterministic annealing': 1723, 'domain adaptation': 1724, 'data augmentation': 1725, 'direct assessment': 1726, 'dialogue acts': 1727, 'distribution alignment': 1728, 'music information retrieval': 1729, 'music instrument recognition': 1730, 'music information research': 1731, 'discrete cosine transform': 1732, 'document creation time': 1733, 'download completion time': 1734, 'discrete cosine transformation': 1735, 'partial least square': 1736, 'physical layer security': 1737, 'progressive lesion segmentation': 1738, 'near - ir': 1739, 'near - infrared': 1740, 'program counter': 1741, 'point cloud': 1742, 'program committee': 1743, 'principal component': 1744, 'central nervous system': 1745, 'copenhagen networks study': 1746, "alzheimer 's disease": 1747, 'automatic differentiation': 1748, 'audit department': 1749, 'anomaly detection': 1750, 'anomaly - based detection': 1751, 'auction distribution': 1752, 'artificially - degraded': 1753, 'axial diffusivity': 1754, 'path loss': 1755, 'polarity loss': 1756, 'programming language': 1757, 'parallel lexicon': 1758, 'photoluminescence': 1759, 'cumulative matching characteristic': 1760, 'crude monte carlo': 1761, 'c / c++ debugging interface': 1762, 'communicative development index': 1763, 'character error rate': 1764, 'classification error rate': 1765, 'clustering error rate': 1766, 'fire emblem': 1767, 'finite element': 1768, 'feature extraction': 1769, 'network coding': 1770, 'normalized correlation': 1771, 'north carolina': 1772, 'new classes': 1773, 'noise clinic': 1774, 'network centre': 1775, 'next corollary': 1776, 'node classification': 1777, 'news commentary': 1778, 'values applied': 1779, 'valence and arousal': 1780, 'connectionist temporal classification': 1781, 'common test conditions': 1782, 'zero forcing': 1783, 'zero - filled': 1784, 'spectral efficiency': 1785, 'situation entity': 1786, 'smarteda': 1787, 'sequential exploring': 1788, 'software engineering': 1789, 'strong elimination': 1790, 
'signed error': 1791, 'speech enhancement': 1792, 'signal enhancement': 1793, 'squared exponential': 1794, 'selective eraser': 1795, 'small enough': 1796, 'systems engineering': 1797, 'disk array controller': 1798, 'distributed admission control': 1799, 'group rotate declustering': 1800, 'ground range detected': 1801, 'aerial laser scanner': 1802, 'alternating least squares': 1803, 'cross entropy': 1804, 'contrastive estimation': 1805, 'context entities': 1806, 'category embeddings': 1807, 'context encoder': 1808, 'crossing event': 1809, 'imperialist competitive algorithm': 1810, 'independent component analysis': 1811, 'weight superiority': 1812, 'word shape': 1813, 'word sequence': 1814, 'write skew': 1815, 'semantic correlation maximization': 1816, 'spatial compositional model': 1817, 'scanning capacitance microscopy': 1818, 'blind forwarding': 1819, 'basic feature': 1820, 'bayes factor': 1821, 'bilateral filtering': 1822, 'black females': 1823, 'binary function': 1824, 'brute force search': 1825, 'bilateral filter': 1826, 'bayesian filtering': 1827, 'provider - aware forwarding': 1828, 'plenacoustic function': 1829, 'garbage collector': 1830, 'graph cuts': 1831, 'garbage collection cycle': 1832, 'graph convolution': 1833, 'sequential importance sampling': 1834, 'social identification system': 1835, 'voice conversion': 1836, 'virtual classifier': 1837, 'imagenet large scale visual recognition challenge': 1838, 'imagenet large scale visual recognition competition': 1839, 'prediction shift': 1840, 'parameter server': 1841, 'personal storage': 1842, 'probabilistic serial': 1843, 'power splitting': 1844, 'the power splitting': 1845, 'projective simulation': 1846, 'processor sharing': 1847, 'unlabeled attachment score': 1848, 'unmanned aircraft systems': 1849, 'business intelligence': 1850, 'bayesian inference': 1851, 'bilinear interpolation': 1852, 'nash equilibrium': 1853, 'named entity': 1854, 'named entities': 1855, 'nested experiments': 1856, 'factorization machines': 1857, 'formal methods': 1858, 'feature matching': 1859, 'flash memory': 1860, 'forward models': 1861, 'fowlkes mallows index': 1862, 'feature map': 1863, 'f1-measure': 1864, 'frequency modulation': 1865, 'finite mixture': 1866, 'fuzzy measure': 1867, 'mean relative error': 1868, 'median recovery error': 1869, 'structural similarity index measure': 1870, 'structural similarity': 1871, 'structural similarity metric': 1872, 'structural similarity index': 1873, 'belief , desire , intention': 1874, "beck 's depression inventory": 1875, 'transformation encoder': 1876, 'taylor expansion': 1877, 'transformation error': 1878, 'temporal expressions': 1879, 'anytime parameter - free thresholding': 1880, 'advanced persistent threat': 1881, 'new radio': 1882, 'nuclear receptor': 1883, 'bag of word': 1884, 'bags of words': 1885, 'adaptive radix tree': 1886, 'adaptive resonance theory': 1887, 'response time property': 1888, 'genetically modified': 1889, 'gradient magnitude': 1890, 'graph matching': 1891, 'generator matrix': 1892, 'lower bound': 1893, 'lovasz bregman': 1894, 'instructions per cycle': 1895, 'individual pitch control': 1896, 'international patent classification': 1897, 'sustaining low - level viral load': 1898, 'sustained low viral load': 1899, 'load pattern': 1900, 'viral load': 1901, "lomonosov 's turnip": 1902, 'likelihood test': 1903, 'linear threshold': 1904, 'luby transform': 1905, 'label transfer': 1906, 'document object model': 1907, 'degrees of measurement': 1908, 'equivalent series resistances': 1909, 'extended 
support release': 1910, 'peak current mode': 1911, 'phase change memory': 1912, 'permanent customer model': 1913, 'web ontology language': 1914, 'web ontology': 1915, 'imaginary batches': 1916, 'information bottleneck': 1917, 'immersed boundary': 1918, 'aspect - aware topic model': 1919, 'aware topic model': 1920, 'aware latent factor model': 1921, 'aspect - aware latent factor model': 1922, 'flatten layer': 1923, 'federated learning': 1924, 'fixated locations': 1925, 'group delay': 1926, 'gradient descent': 1927, 'baseband phase difference': 1928, 'basis pursuit denoising': 1929, 'squared cosine proximity': 1930, 'simultaneous closeness - performance': 1931, 'explicit congestion notification': 1932, 'edge computing node': 1933, 'highly rated , have a large volume of reviews': 1934, 'highly rated and have received': 1935, 'gradient reversal layer': 1936, 'goal - oriented requirement language': 1937, 'dialogue system technology challenge': 1938, 'dialog state tracking challenge': 1939, 'gold standard': 1940, 'group sweep': 1941, 'gauss seidel': 1942, 'geometric sequence': 1943, 'genetic search': 1944, "google scholar 's": 1945, 'instant messaging': 1946, 'identity mapping': 1947, 'intensity modulation': 1948, 'influence maps': 1949, 'interference margin': 1950, 'index modulation': 1951, 'user interface': 1952, 'uniform indicator': 1953, 'k nearest neighbors': 1954, 'k - nearest neighbors': 1955, 'multi - genre natural language inference': 1956, 'multinli': 1957, 'packet reception rate': 1958, 'pre - reduced ring': 1959, 'dropped pronouns': 1960, 'dependency pairs': 1961, 'neural logic machines': 1962, 'neural language modelling': 1963, 'anomaly correlation coefficient': 1964, 'clustering accuracy': 1965, 'adaptive cruise control': 1966, 'intrusion detection systems': 1967, 'intrusion detection setting': 1968, 'agent based models': 1969, 'agent based modeling': 1970, 'conjugate gradient': 1971, 'correspondence grouping': 1972, 'context - guided attention': 1973, 'contour generator': 1974, 'context gating': 1975, 'chemical graph': 1976, 'candidate generation': 1977, 'distributed generation': 1978, 'dynamic graph': 1979, 'discontinuous galerkin': 1980, 'domain generalization': 1981, 'single nucleotide polymorphisms': 1982, 'state neighborhood probability': 1983, 'multifactor dimensionality reduction': 1984, 'message dropping rate': 1985, 'cumulative link': 1986, 'coupling layers': 1987, 'curriculum learning': 1988, 'continual learning': 1989, 'classical logic': 1990, 'directed interval class': 1991, 'dynamic induction control': 1992, 'deviance information criterion': 1993, 'accepting end component': 1994, 'automatic exposure control': 1995, 'rtutor': 1996, 'recall': 1997, 'result': 1998, 'recommendation': 1999, 'syntactic symmetric pattern': 2000, 'skip - gram with negative sampling': 2001, 'one class classifier': 2002, 'output constrained covariance': 2003, 'output covariance constrained': 2004, 'open circuit condition': 2005, 'adaptive on time': 2006, 'adaptively optimised threshold': 2007, 'random access': 2008, 'ring allreduce': 2009, 'resource allocation': 2010, 'random attack': 2011, 'remote attestation': 2012, 'right atrium': 2013, 'probabilistic matrix factorization': 2014, 'probability mass function': 2015, 'department of energy': 2016, 'design of experiment': 2017, 'diffractive optical element': 2018, 'replies': 2019, 'reciprocal pagerank': 2020, 'reference point': 2021, 'random priority': 2022, 'replacement paths': 2023, 'favorite or liked tweets': 2024, 'from table': 2025, 
'constraint satisfaction problem': 2026, 'promise constraint satisfaction problems': 2027, 'content security policy': 2028, 'constraint satisfaction based virtual machine placement': 2029, 'coverage sampling problem': 2030, 'common spatial patterns': 2031, 'pedestrian dead reckoning': 2032, 'packet delivery ratio': 2033, 'transmission time interval': 2034, 'time transmit interval': 2035, 'mathematical model': 2036, 'maximum mark': 2037, 'processing element': 2038, 'portable executable': 2039, 'fast fourier transform': 2040, 'fast fourier transformation': 2041, 'feature finding team': 2042, 'taint dependency sequences': 2043, 'training data set': 2044, 'deep encoder - decoder adversarial reconstruction': 2045, 'decoder adversarial reconstruction network': 2046, 'relative total variation': 2047, 'relative total variance': 2048, 'group testing': 2049, 'generic tool': 2050, 'ground truth': 2051, 'graph traversal': 2052, 'google translate': 2053, 'compressive spectrum sensing': 2054, 'chirp spread spectrum': 2055, 'cooperative spectrum sensing': 2056, 'cascade style sheet': 2057, 'initialization vector': 2058, 'intersection viewer': 2059, 'optical flow': 2060, 'objective function': 2061, 'time to first byte': 2062, 'median time to first byte': 2063, 'vehicular fog computing': 2064, 'vector filed consensus': 2065, 'key performance indicator': 2066, 'key performance index': 2067, 'transport block sizes': 2068, 'terrestrial base station': 2069, 'separator': 2070, 'symbol error probability': 2071, 'boundary refinement': 2072, 'binary relevance': 2073, 'bug reports': 2074, 'belief revision': 2075, 'best response': 2076, 'bone region': 2077, 'random walker algorithm': 2078, 'right wing authoritarianism': 2079, 'recurrent weighted average': 2080, 'downlink control information': 2081, 'downlink control indicator': 2082, 'resource elements': 2083, 'relation extraction': 2084, 'renewable energy': 2085, 'relations extracted': 2086, 'requirements elicitation': 2087, 'referring expression': 2088, 'direction of arrival': 2089, 'direction - of - arrival': 2090, 'probability density function': 2091, 'probability distribution function': 2092, 'portable document format': 2093, 'primary distribution format': 2094, 'perform adversarial training': 2095, 'process arrival time': 2096, 'successive interference cancellation': 2097, 'static induction control': 2098, 'self interference cancellation': 2099, 'direct feedback alignment': 2100, 'deterministic finite automaton': 2101, 'takagi sugeno kwan': 2102, 'takagi - sugeno - kwan': 2103, 'information embedding cost': 2104, 'international electrotechnical commission': 2105, 'introspective adversarial network': 2106, 'interference as noise': 2107, 'weakly normalizing': 2108, 'weak normalization': 2109, 'weight normalization': 2110, 'grey level co - occurrence matrix': 2111, 'grey level cooccurrence matrix': 2112, 'trusted third party': 2113, 'total transmit power': 2114, 'state space model': 2115, 'statistical shape modeling': 2116, 'whole slide image': 2117, 'word sense induction': 2118, 'concurrent dialogue acts': 2119, 'christen democratisch appel': 2120, 'canonical discriminant analysis': 2121, 'continuous decomposition analysis': 2122, 'black level subtraction': 2123, 'bayesian learning': 2124, 'normalized scan - path saliency': 2125, 'non - local self similar': 2126, 'virtual switch instances': 2127, 'variational system identification': 2128, 'voltage source inverter': 2129, 'proper orthogonal decomposition': 2130, 'performed on deformation': 2131, 'rank residual 
constraint': 2132, 'radio resource control': 2133, 'apache software foundation': 2134, 'african swine fever': 2135, 'almost known sets': 2136, 'asimmetric kernel scaling': 2137, 'intrinsic control error': 2138, 'interactive connectivity establishment': 2139, 'partial dependence plots': 2140, 'product display page': 2141, 'policy decision point': 2142, 'real data': 2143, 'reciprocal degree centrality': 2144, 'residual denoiser': 2145, 'research and development': 2146, 'relative difference': 2147, 'reciprocal degree': 2148, 'iris thickness': 2149, 'immediate threshold': 2150, 'inferior temporal': 2151, 'image translation': 2152, 'proportional integral derivative': 2153, 'process is discovered': 2154, 'coprime blur pairs': 2155, 'compact bilinear pooling': 2156, 'random finite set': 2157, 'rain fog snow': 2158, 'non - deterministic finite - state automaton': 2159, 'nondeterministic finite automata': 2160, 'indian face database': 2161, 'icelandic frequency dictionary': 2162, 'call detail records': 2163, 'critical design review': 2164, 'clock difference relations': 2165, 'peer - to - peer': 2166, 'peer to peer': 2167, 'discrete base problem': 2168, 'determinisable by pruning': 2169, 'digital back propagation': 2170, 'fixed - size ordinally forgetting encoding': 2171, 'fixed ordinally - forgetting encoding': 2172, 'non - line of sight': 2173, 'non line of sigh': 2174, 'bluetooth low energy': 2175, 'bilingual lexicon extraction': 2176, 'region templates framework': 2177, 'real time factor': 2178, 'probabilistic multivariate tensor factorization': 2179, 'probabilistic multivariate tensor factorization framework': 2180, 'binary space partitioning': 2181, 'bulk synchronous parallel': 2182, 'quadrilateral simmelian backbone': 2183, 'quadrilateral simmelian': 2184, 'root certificate authority': 2185, 'ripple carry adder': 2186, 'root cause analysis': 2187, 'reverse classification accuracy': 2188, 'local binary pattern': 2189, 'loopy belief propagation': 2190, 'effective functional flow adjacency': 2191, 'effective flow functional adjacency': 2192, 'lifelong metric learning': 2193, 'log marginal likelihood': 2194, 'lifelong machine learning': 2195, 'anisotropic diffusion filter': 2196, 'automatically defined function': 2197, 'data modification layer': 2198, 'declarative ml language': 2199, 'basic question': 2200, 'bayesian quadrature': 2201, 'logical access': 2202, 'layout analysis': 2203, 'left atrium': 2204, 'location area': 2205, 'near perfect reconstruction': 2206, 'normalized probabilistic rand': 2207, 'small -distance lead': 2208, 'significance - aware information bottlenecked adversarial network': 2209, 'permission enforcement point': 2210, 'policy enforcement point': 2211, 'pignistic probability transformation': 2212, 'privacy preserving techniques': 2213, 'upper bound': 2214, 'algebraic upper bound': 2215, 'direct memory access': 2216, 'dynamic mechanical analysis': 2217, 'data market austria': 2218, 'side - stream dark field': 2219, 'signed distance function': 2220, 'signed distance field': 2221, 'symmetric uncertainty': 2222, 'secondary user': 2223, 'relevance proximity graph': 2224, 'robust principal graph': 2225, 'permutation invariant training': 2226, 'pending interest table': 2227, 'code block': 2228, 'content - based': 2229, 'circular buffered': 2230, 'compression benchmark': 2231, 'causal box': 2232, 'dynamic mode factorization': 2233, 'drone - cell management frame': 2234, 'hierarchical automatic relevance determination': 2235, 'hierarchically constructed automatic relevance 
determination': 2236, 'dtw barycenter averaging': 2237, 'deterministic buchi automaton': 2238, 'semantic role labeling': 2239, 'state representation learning': 2240, 'statistical relational learning': 2241, 'research question': 2242, 'reformulated queries': 2243, 'id sent as part of the parameter': 2244, 'subsampling proportional to': 2245, 'stochastic block model': 2246, 'standard bit mutations': 2247, 'shape boltzmann machine': 2248, 'friendly jammers': 2249, 'friendly jamming': 2250, 'featherweight java': 2251, '3d morphable model': 2252, 'the 3d morphable model': 2253, 'structure propagation fusion': 2254, 'shortest path forest': 2255, 'harmonic mean': 2256, 'hybrid model': 2257, 'linked open data': 2258, 'level of detail': 2259, 'basic activity driven networks': 2260, 'basic activity driven network model': 2261, 'grasp type detection': 2262, 'grasp type dataset': 2263, 'trap control register': 2264, 'transductive cascaded regression': 2265, 'low energy': 2266, 'label equivalence': 2267, 'first in first out': 2268, 'first - in first - out': 2269, 'longest common subsequence': 2270, 'local causal states': 2271, 'features from accelerated segment test': 2272, 'file and storage technologies': 2273, 'streaming simd extensions': 2274, 'spherical semantic embedding': 2275, 'dynamic sparse reparameterization': 2276, 'dynamic source routing': 2277, 'graph pattern matching': 2278, 'matchinggraph pattern matching': 2279, 'pinyin w/ tones': 2280, 'polyglot wikipedia from alrfou:2013conll': 2281, 'virtual reality': 2282, 'visibility region': 2283, 'vigilance reward': 2284, 'music performance analysis': 2285, 'message passing algorithm': 2286, 'semantic alignment network': 2287, 'saturation analysis': 2288, 'subject alternate name': 2289, 'stacked attention network': 2290, 'self attention network': 2291, 'subspace system identification': 2292, 'social system identification': 2293, 'software sustainability institute': 2294, 'specific structure in': 2295, 'technical debt management': 2296, 'temporal difference model': 2297, 'time division multiplexing': 2298, 'network science': 2299, 'negative sampling': 2300, 'neutron star': 2301, 'secondary users': 2302, 'spectrum usage': 2303, 'primary base station': 2304, 'public broadcasting service': 2305, 'positive coalgebraic logics': 2306, 'point cloud library': 2307, 'path consistency learning': 2308, 'dynamic memory network': 2309, 'default mode network': 2310, 'social force model': 2311, 'structural factorization machine': 2312, 'structure from motion': 2313, 'guided filtering': 2314, 'gabor filter': 2315, 'centralized differential privacy': 2316, 'classical dynamic programming': 2317, 'receiver operation curves': 2318, 'receiver operating characteristics': 2319, 'semi - autonomous machine': 2320, 'speaker - addressee model': 2321, 'search of associative memory': 2322, 'self - assessment manikin': 2323, 'tone mapping': 2324, 'teacher mark': 2325, 'turing machine': 2326, 'distributed affinity dual approximation': 2327, 'dual adversarial domain adaptation': 2328, 'deep embedded clustering': 2329, 'dense - captioning event': 2330, 'open research knowledge graph': 2331, 'open research knowledge graph(http://orkg.org': 2332, 'controlled natural language': 2333, 'certain natural language': 2334, 'tumor necrosis factor': 2335, 'tumor necrosis factor alpha': 2336, 'proposal indexing network': 2337, 'phrase indexing network': 2338, 'unmet system demand': 2339, 'unambiguous state discrimination': 2340, 'mixed integer program': 2341, 'mixed integer programming': 
2342, 'healthy control': 2343, 'hill - climbing': 2344, 'hierarchical classification': 2345, 'generalized second price': 2346, 'global statistics pooling': 2347, 'effective sample size': 2348, 'evolutionary stable strategies': 2349, 'low density spreading': 2350, 'linear dynamical system': 2351, 'bit - interleaved coded modulation': 2352, 'bit - interleaved coding and modulation': 2353, 'adversarially robust distillation': 2354, 'automatic relevance determination': 2355, 'accelerated robust distillation': 2356, 'network lifetime': 2357, 'natural language': 2358, 'incomplete lineage sorting': 2359, 'iterated local search': 2360, 'adversarial machine learning': 2361, 'actor modeling language': 2362, 'mobile network': 2363, 'master node': 2364, 'memory networks': 2365, 'mobile node': 2366, 'soft edit distance': 2367, 'sound event detection': 2368, 'standard edit distance': 2369, 'multiple online battle arena': 2370, 'multiplayer online battle arena': 2371, 'user datagram protocol': 2372, 'universal dependency parse': 2373, 'structural sparsity learning': 2374, 'scleral spur location': 2375, 'semi supervised learning': 2376, 'scratch': 2377, 'sparse compositional regression': 2378, 'skin conductance response': 2379, 'remote radio heads': 2380, 'active remote radio heads': 2381, 'concurrent kleene algebra': 2382, 'centered kernel alignment': 2383, 'relational database service': 2384, 'running digital sum': 2385, 'neural belief tracker': 2386, 'nyström basis transfer': 2387, 'white females': 2388, 'weighted fusion': 2389, 'tor browser bundle': 2390, 'threading building blocks': 2391, 'document index': 2392, 'dyadic indicator': 2393, 'direct inspection': 2394, 'dependency injection': 2395, 'turbo product decoder': 2396, 'total project delay': 2397, 'neural image caption': 2398, 'network interface card': 2399, 'new instances and classes': 2400, 'data science and analytics': 2401, 'digital signature algorithm': 2402, 'magnetic resonance spectroscopy': 2403, 'multiset rewriting systems': 2404, 'context - only attention': 2405, 'carbon - oxygen': 2406, 'neural sequence prediction': 2407, 'next sentence prediction': 2408, 'favoured granted': 2409, 'filter gate': 2410, 'binary symmetric channel': 2411, 'base station controller': 2412, 'blind spot detection': 2413, 'berkeley segmentation dataset': 2414, 'frame per second': 2415, 'false projection selection': 2416, 'signal processing systems': 2417, 'surcharge pricing scheme': 2418, 'signal probability skey': 2419, 'per - pixel - error': 2420, 'predictive performance equation': 2421, 'elevated mean scan statistic': 2422, 'event management system': 2423, 'elevated mean scan': 2424, 'expectation - based poisson statistic': 2425, 'expectation - based poisson': 2426, 'false acceptance rate': 2427, 'flow annotation replanning': 2428, 'generalized linear model': 2429, 'general linear model': 2430, 'computational fluid dynamics': 2431, 'carrier frequency difference': 2432, 'direct sparse odometry': 2433, 'distribution system operator': 2434, 'character - based statistical machine translation': 2435, 'character - based neural machine translation': 2436, 'difference target propagation': 2437, 'dynamic trajectory predictor': 2438, 'gradient initialization': 2439, 'graph isomorphism': 2440, 'staggered sample selection': 2441, 'stochastically stable states': 2442, 'dialogue state tracker': 2443, 'discrete sine transform': 2444, 'item description': 2445, 'information decoding': 2446, 'input data': 2447, 'interleaved declustering': 2448, 'factored evolutionary 
algorithms': 2449, 'finite element analysis': 2450, 'systems biology': 2451, 'symmetry breaking': 2452, 'stein variational policy gradient method': 2453, 'stein variational policy gradient': 2454, 'skip gram': 2455, 'stochastic gradient': 2456, 'central limit theorem': 2457, 'cognitive load theory': 2458, 'michigan english test': 2459, 'multi - edge type': 2460, 'foundation for intelligent physical agents': 2461, 'foundation for intelligent physical agents(http://www.fipa.org': 2462, 'geographic source routing': 2463, 'group sparsity residual': 2464, 'group sparse representation': 2465, 'video - aware unequal error protection': 2466, 'video - aware uep': 2467, 'gaussian process regression': 2468, 'gamma passing rate': 2469, 'static timing analysis': 2470, 'super - twisting algorithms': 2471, 'resistive random access memory': 2472, 'resistive ram': 2473, 'signal - to - interference - plus - noise ratio': 2474, 'signal - to - interference+noise ratio': 2475, 'hybrid monte carlo': 2476, 'hamiltonian monte carlo': 2477, 'laplacian - based shape matching': 2478, 'lock sweeping method': 2479, 'inverse optimization': 2480, 'iterative optimization': 2481, 'interacting object': 2482, 'low altitude platform': 2483, 'linear assignment problem': 2484, 'licklider transmission protocol': 2485, 'long term potentiation': 2486, 'grid - based motion statistics': 2487, 'gaussian material synthesis': 2488, 'dynamics canalization map': 2489, 'discontinuous conduction mode': 2490, 'deep choice model': 2491, 'discrete choice models': 2492, 'device configuration manager': 2493, 'minimum generation error': 2494, 'multi - granularity embedding': 2495, 'multigrid in energy': 2496, 'maximal consistent set': 2497, 'modulation and coding scheme': 2498, 'maximum cardinality search': 2499, 'orthogonal procrustes': 2500, 'old persian': 2501, 'outage probabilities': 2502, 'outage probability': 2503, 'original precision': 2504, 'orienteering problem': 2505, 'old persian.(there': 2506, 'common weakness enumeration': 2507, 'character - enhanced word embedding': 2508, 'chinese word embeddings': 2509, 'minimum storage regenerating': 2510, 'mining software repositories': 2511, 'piecewise -testable': 2512, 'physical therapy': 2513, 'productive time': 2514, 'proof time': 2515, 'bidirectional gru': 2516, 'bidirectional gated recurrent unit': 2517, 'physically based rendererpbrt': 2518, 'physically based ray tracing': 2519, 'bgp routing protocol': 2520, 'bgp security': 2521, 'generalized likelihood ratio test': 2522, 'generalized lrt': 2523, 'national health service': 2524, "nurses ' health study": 2525, 'leicester scientific corpus': 2526, 'long skip connections': 2527, 'single task learning': 2528, 'signal temporal logic': 2529, 'shows that learning these tasks': 2530, 'standard template library': 2531, 'swedish blog sentences': 2532, 'small - cell base stations': 2533, 'piecewise aggregation approximation': 2534, 'principal axis analysis': 2535, 'log processing': 2536, 'gpu log processing': 2537, 'common phone set': 2538, 'current population survey': 2539, 'mitral valve prolapse': 2540, 'million veterans program': 2541, 'matrix pair beamformer': 2542, 'modified poisson blending': 2543, 'multiple importance sampling': 2544, 'maximal independent set': 2545, 'large faces': 2546, 'late fusion': 2547, 'line feeds': 2548, 'security operations center': 2549, 'standard occupation classification': 2550, 'specific state of charge': 2551, 'state of charge': 2552, 'interactive voice response': 2553, 'immersive virtual reality': 2554, 'intent 
analyst': 2555, 'interval analysis': 2556, 'incremental approximation': 2557, 'interference alignment': 2558, 'information foraging theory': 2559, 'information flow tracking': 2560, 'augmented random search': 2561, 'addressee and response selection': 2562, 'ultra - light companion or planet': 2563, 'uplink': 2564, 'statistical compressed sensing': 2565, 'shortest common superstring': 2566, 'spoken conversational search': 2567, 'sub - carrier spacing': 2568, 'semantic question answering': 2569, 'spoken question answering': 2570, 'averaged word embeddings': 2571, 'address windowing extensions': 2572, 'transverse abdominal section': 2573, 'transmit antenna selection': 2574, 'information extraction': 2575, 'intelligent element': 2576, 'integral equation': 2577, 'causal effect map': 2578, 'circled entropy measurement': 2579, 'cross entropy methods': 2580, '-nearest nighbours': 2581, 'nearest neighbors': 2582, 'user model': 2583, 'upsampling module': 2584, 'internal limiting membrane': 2585, 'information lifecycle management': 2586, 'red , green , blue': 2587, 'red giant branch': 2588, 'state - of - the - art': 2589, 'state of the art': 2590, 'cumulative spectral gradient': 2591, 'cost sharing game': 2592, 'peak power contract': 2593, 'payment per click': 2594, 'pay per click': 2595, 'eight medical grade': 2596, 'electromyograph': 2597, 'eight electromyography': 2598, 'digital signal processing': 2599, 'discrete sequence production': 2600, 'maximum voice frequency': 2601, 'matching vector families': 2602, "fisher 's discriminant analysis": 2603, 'functional data analysis': 2604, 'topological data analysis': 2605, 'targeted degree - based attack': 2606, 'equivalent rectangular bandwidth': 2607, 'enhanced residual block': 2608, 'enhanced hybrid simultaneous': 2609, 'enhanced hybrid swipt protocol': 2610, 'sequential importance resampling': 2611, 'source to interferences ratio': 2612, 'explicit matrix factorization': 2613, 'eclipse modeling framework': 2614, 'electromagnetic fields': 2615, 'derandomized local search': 2616, 'depth - limited search': 2617, 'analytic imaging diagnostics arena': 2618, 'atomic , independent , declarative , and absolute': 2619, 'neural turing machine': 2620, 'neural topic model': 2621, 'domain name system': 2622, 'domain name service': 2623, 'expedited forwarding': 2624, 'ejection fraction': 2625, 'error feedback': 2626, 'movie triplets corpus': 2627, 'machine type communications': 2628, 'interactive skill modules': 2629, 'industrial , scientific and medical': 2630, 'partial transmit sequences': 2631, 'public transportation system': 2632, 'pulse discrete time': 2633, 'poisson delaunay tessellations': 2634, 'pulse width modulation': 2635, 'partial weighted matching': 2636, 'graphlet correlation distance': 2637, 'greatest common divisor': 2638, 'calling contexts graphs': 2639, 'combinatory categorial grammar': 2640, 'chromatic correction gratings': 2641, 'mean closest points': 2642, 'matern cluster process': 2643, 'optic radiation': 2644, 'operations research': 2645, 'opportunistic relaying': 2646, 'dual path network': 2647, 'deep pyramid network': 2648, 'lunar transfer trajectory': 2649, 'locally threshold testable': 2650, 'minimal edit distance': 2651, 'multimedia event detection': 2652, 'frame error rate': 2653, 'facial expression recognition': 2654, 'autoencoder': 2655, 'associative experiment': 2656, 'absolute error': 2657, 'answer extraction': 2658, 'bits per pixel': 2659, 'binomial point process': 2660, 'average perpendicular distance': 2661, 'artifact pyramid 
decoding': 2662, 'scanning electron microscopy': 2663, 'squared entropy measurement': 2664, 'simple event model': 2665, 'sense and avoid': 2666, 'sample average approximation': 2667, 'point - to - multipoint': 2668, 'persistent turing machine': 2669, 'continuous wavelet transform': 2670, 'complex wavelet transform': 2671, 'transmission antennas': 2672, 'threshold algorithm': 2673, 'histogram of gradients': 2674, 'histogram of oriented gradient': 2675, 'normalized cumulative entropy': 2676, 'noise contrastive estimation': 2677, 'entrance pupil': 2678, 'exponent parikh': 2679, 'efficient path': 2680, 'evolutionary programming': 2681, 'europarl': 2682, 'correlated orienteering problem': 2683, 'centralized optimization problem': 2684, 'operation': 2685, 'origin': 2686, 'out - links': 2687, 'quantitative myotonia assessment': 2688, 'quantum merlin arthur(more': 2689, 'minimum risk training': 2690, 'maximum ratio transmission': 2691, 'vibrational coupled cluster': 2692, 'vehicular cloud computing': 2693, 'restricted isometry constant': 2694, 'risk inflation criterion': 2695, 'mitral valve': 2696, 'memory vector': 2697, 'optical coherence tomography': 2698, 'odd cycle transversal': 2699, 'open access': 2700, 'ocular artifacts': 2701, 'orthogonal array': 2702, 'targeted betweenness - based attack': 2703, 'tailor based allocation': 2704, 'conversational question answering': 2705, 'conversational question answering systems.(coqa': 2706, 'integrated development environment': 2707, 'interprocedural distributive environment': 2708, 'dynamic competition hypothesis': 2709, 'discriminant cross - modal hashing': 2710, 'document understanding conference': 2711, 'document understanding conference(http://duc.nist.gov': 2712, 'local_constraint_multilevels_wasserstein_means': 2713, 'mwm with shared atoms on the measure space': 2714, 'generalized additive models': 2715, 'generative adversarial metric': 2716, 'total variation diminishing': 2717, 'threshold voltage defined': 2718, 'random utility modelwe': 2719, 'random utility maximization': 2720, 'temporal random indexing': 2721, 'toyota research institute': 2722, 'quantile regression': 2723, 'quadruple range': 2724, 'motion blurring': 2725, 'model - based': 2726, 'maximal biclique': 2727, 'method b': 2728, 'atomic function computation': 2729, 'automated fare collection(afc': 2730, 'automatic fact checking': 2731, 'optimized link state routing protocol': 2732, 'optimised link state routing': 2733, 'power service layer': 2734, 'probabilistic soft logic': 2735, 'recurrent power law': 2736, 'routing for low - power and lossy network': 2737, 'routing protocol for low - power and lossy networks': 2738, 'smart object network': 2739, 'smart objects': 2740, 'subtractive pixel adjacency matrix': 2741, 'state preparation and measurement errors': 2742, 'patient side manipulator': 2743, 'precoding - aided spatial modulation': 2744, 'new instances': 2745, 'neat image': 2746, 'national instruments': 2747, 'noun incorporation': 2748, 'network interface': 2749, 'optimal power flow': 2750, 'optimal pareto front': 2751, 'reward weighted regression': 2752, 'reward re - weighted regression': 2753, 'asia - pacific region': 2754, 'asia pacific network information centre': 2755, 'delay tolerant networks': 2756, 'domain transfer network': 2757, 'disruption tolerant networking': 2758, 'parabolic variational inequality': 2759, 'perpendicular vegetation index': 2760, 'expected reciprocal rank': 2761, 'exact recovery ratio': 2762, "hubert 's index": 2763, 'histogram intersection': 2764, 
'temporal concept analysis': 2765, 'task component architecture': 2766, 'normalized uniformity coefficient': 2767, 'next utterance classification': 2768, 'oz computation model': 2769, 'original component manufacturers': 2770, 'cross conformal prediction': 2771, 'convex - concave procedure': 2772, 'latent hierarchical': 2773, 'latent hierarchy': 2774, 'joint decoding': 2775, 'joint diagonalization': 2776, 'local discriminant embedding': 2777, 'learnable dictionary encoding': 2778, 'positive and negative affect schedule': 2779, 'positive affect negative affect scale': 2780, 'universal composability': 2781, 'unit commitment': 2782, 'concave points detection': 2783, 'coherent point drift': 2784, 'coal mine disaster': 2785, 'header dictionary triple': 2786, 'header , dictionary , triples': 2787, 'shapley share coefficient': 2788, 'sparse subspace clustering': 2789, 'similarity sensitive coding': 2790, 'pulse density modulated': 2791, 'probability distribution matrix': 2792, 'use case map': 2793, 'ultrametric contour map': 2794, 'enron mail corpus': 2795, 'enron e - mail corpus': 2796, 'goal achievement time': 2797, 'generalized all threshold': 2798, 'zenith total delay': 2799, 'zenithal tropospheric delays': 2800, 'formal concept analysis': 2801, 'forward capacity auctions': 2802, 'logistic regression model': 2803, 'bottom - up': 2804, 'bandwidth units': 2805, 'boolean coalgebraic logics': 2806, 'bilateral convolutional layers': 2807, 'analyze questions generated': 2808, 'automatic question generation': 2809, 'did not converge': 2810, 'differentiable neural computer': 2811, 'explicit factor modelszhang2014': 2812, 'explicit factor modelsefm': 2813, 'cumulative spectrum energy': 2814, 'common subexpression elimination': 2815, 'mean rank difference': 2816, 'maximal ratio diversity': 2817, 'quadrature': 2818, 'quartet': 2819, 'net energy metering': 2820, 'new economy movement': 2821, 'command line interface': 2822, 'cuneiform language identification': 2823, 'mobility prediction clustering algorithm': 2824, 'multi - linear principal components analysis': 2825, 'rapidly - exploring random trees': 2826, 'rapidly - exploring tree': 2827, 'rapidly - exploring random tree of trees': 2828, 'query fusion': 2829, 'quality factor': 2830, 'quadratic form': 2831, 'butterfly optimization algorithm': 2832, 'bilevel optimization algorithm': 2833, 'high prr': 2834, 'hawkes processes': 2835, 'civil aviation authority': 2836, 'clump assignment array': 2837, 'adjacent channel interference': 2838, 'artificial collective intelligence': 2839, 'generalized linear models': 2840, 'general linear models': 2841, 'optical burst - switched': 2842, 'optimal brain surgeon': 2843, 'seventh dialog system technology challenge': 2844, 'dialog system technology challenge': 2845} -LABEL_TO_ID = {'Communications_Decency_Act': 0, 'Christian_Democratic_Appeal': 1, 'British_Broadcasting_Company': 2, 'British_Broadcasting_Corporation': 3, 'Rhodesian_African_Rifles': 4, 'Royal_Australian_Regiment': 5, 'Aalborg_BK': 6, 'Aalborg_Boldspilklub': 7, 'blood–brain_barrier': 8, 'Better_Business_Bureau': 9, 'Control_Data_Corporation': 10, 'Centers_for_Disease_Control': 11, 'Chief_of_the_Defence_Force': 12, 'cumulative_distribution_function': 13, 'Independent_Power_Producers': 14, 'Irish_Parliamentary_Party': 15, 'isopentenyl_pyrophosphate': 16, 'Basketball_Bundesliga': 17, 'British_Basketball_League': 18, 'Congress_for_Democracy_and_Progress': 19, 'census-designated_place': 20, 'Suburban_Development_Area': 21, 'Singapore_Democratic_Alliance': 22, 
'Royal_Bank_of_Canada': 23, 'red_blood_cells': 24, 'credit_default_swaps': 25, 'Chief_of_the_Defence_Staff': 26, 'discrete_Fourier_transform': 27, 'density_functional_theory': 28, 'Social_Democratic_Front': 29, 'Syrian_Democratic_Forces': 30, 'Social_Democratic_Federation': 31, 'runs_batted_in': 32, 'Reserve_Bank_of_India': 33, 'Judicial_Service_Commission': 34, 'Johnson_Space_Center': 35, 'Environmental_Impact_Assessment': 36, 'Environmental_Investigation_Agency': 37, 'United_States_Army_Air_Forces': 38, 'U.S._Army_Air_Forces': 39, 'Singapore_Democratic_Party': 40, 'Social_Democratic_Party': 41, 'anti-aircraft': 42, 'American_Association': 43, 'Alcoholics_Anonymous': 44, 'American_Automobile_Association': 45, 'Asistencia_Asesoría_y_Administración': 46, 'American_Anthropological_Association': 47, 'adult_contemporary': 48, 'alternating_current': 49, 'Alaskan_Air_Command': 50, 'and_alternative_communication': 51, 'Army_Air_Corps': 52, 'Advanced_Audio_Coding': 53, 'autofocus': 54, 'atrial_fibrillation': 55, 'Army_Air_Forces': 56, 'Army_Airfield': 57, 'Amnesty_International': 58, 'artificial_intelligence': 59, 'Windows_Media_Audio': 60, 'Wildlife_Management_Area': 61, 'Assembly_Member': 62, 'amplitude_modulation': 63, 'United_States_Geological_Survey': 64, 'U.S._Geological_Survey': 65, 'Advanced_Placement': 66, 'armor-piercing': 67, 'Advanced_Placement_Program': 68, 'Associated_Press': 69, 'Action_Points': 70, 'Access_Point': 71, 'American_Academy_of_Pediatrics': 72, 'Aam_Aadmi_Party': 73, 'Southeastern_Conference': 74, 'Securities_and_Exchange_Commission': 75, 'selected_to_the_All-Southeastern_Conference': 76, 'augmented_reality': 77, 'androgen_receptor': 78, 'Radio_Corporation_of_America': 79, 'Rabbinical_Council_of_America': 80, 'Reformed_Church_in_America': 81, 'Battlecruiser_Squadron': 82, 'Bowl_Championship_Series': 83, 'African_Union': 84, 'astronomical_units': 85, 'atrioventricular': 86, 'Alternative_Vote': 87, 'adult_video': 88, 'Irish_Republican_Brotherhood': 89, 'International_Rugby_Board': 90, 'Institutional_Review_Board': 91, 'Irish_Republican_Army': 92, 'Individual_Retirement_Accounts': 93, 'University_Interscholastic_League': 94, 'United_Irish_League': 95, 'Intercontinental_Rally_Challenge': 96, 'Internet_Relay_Chat': 97, 'International_Rescue_Committee': 98, 'Internal_Revenue_Code': 99, 'docosahexaenoic_acid': 100, 'Defence_Housing_Authority': 101, 'Royal_Canadian_Navy': 102, 'Royal_College_of_Nursing': 103, 'Bachelor_of_Arts': 104, 'British_Airways': 105, 'Commission_of_Fine_Arts': 106, 'Chartered_Financial_Analyst': 107, "Cat_Fanciers'_Association": 108, 'Chinese_Football_Association': 109, 'Country_Fire_Authority': 110, 'American_Basketball_Association': 111, 'American_Bicycle_Association': 112, 'computational_fluid_dynamics': 113, 'contracts_for_difference': 114, 'American_Broadcasting_Company': 115, 'ATP-binding_cassette': 116, 'Australian_Broadcasting_Corporation': 117, 'American_Bowling_Congress': 118, 'Regimental_Combat_Team': 119, 'randomized_controlled_trials': 120, 'radar_cross-section': 121, 'Reaction_Control_System': 122, 'compact_fluorescent_lamps': 123, 'Canadian_Football_League': 124, 'British_Leyland': 125, 'breech-loading': 126, 'International_Racquetball_Tour': 127, 'item_response_theory': 128, 'American_Basketball_League': 129, 'Australian_Baseball_League': 130, 'British_Petroleum': 131, 'before_present': 132, 'Bachelor_of_Science': 133, 'Battle_Squadron': 134, 'chronic_fatigue_syndrome': 135, 'Central_Flying_School': 136, 'Canadian_Federation_of_Students': 
137, 'dihydrotestosterone': 138, 'distributed_hash_table': 139, 'acrylonitrile_butadiene_styrene': 140, 'anti-lock_braking_system': 141, 'Australian_Bureau_of_Statistics': 142, 'Asia-Pacific_Broadcasting_Union': 143, 'Airman_Battle_Uniform': 144, 'radio_direction_finding': 145, 'Resource_Description_Framework': 146, 'Industry_Standard_Architecture': 147, 'instruction_set_architecture': 148, 'International_Society_of_Automation': 149, 'Internal_Security_Act': 150, 'Science_Foundation_Ireland': 151, 'Sustainable_Forestry_Initiative': 152, 'XML_Paper_Specification': 153, 'X-ray_photoelectron_spectroscopy': 154, 'Defense_Intelligence_Agency': 155, 'Detroit_Institute_of_Arts': 156, 'Serious_Fraud_Office': 157, 'San_Francisco_Opera': 158, 'Inter-Services_Intelligence': 159, 'Indian_Statistical_Institute': 160, 'Islamic_State_of_Iraq': 161, 'Air_Combat_Command': 162, 'anterior_cingulate_cortex': 163, 'Atlantic_Coast_Conference': 164, 'Alpine_Club_of_Canada': 165, 'Asian_Cricket_Council': 166, 'Accident_Compensation_Corporation': 167, 'certificate_of_deposit': 168, 'compact_disc': 169, 'Basic_Education_Funding': 170, 'British_Expeditionary_Force': 171, 'International_Solidarity_Movement': 172, 'interstellar_medium': 173, 'Institute_for_Supply_Management': 174, 'accumulated_cyclone_energy': 175, 'angiotensin-converting_enzyme': 176, 'cystic_fibrosis': 177, 'constant_frequency': 178, 'Canadian_Forces': 179, 'computer-generated_imagery': 180, 'Common_Gateway_Interface': 181, 'anterior_cruciate_ligament': 182, 'access_control_lists': 183, 'International_Skating_Union': 184, 'Iowa_State_University': 185, 'International_Space_University': 186, 'carbon_dioxide': 187, 'Commanding_Officer': 188, 'carbon_monoxide': 189, 'cerebral_palsy': 190, 'Communist_Party': 191, 'conditioned_response': 192, 'Caledonian_Railway': 193, 'Challenger_Series': 194, 'conditioned_stimulus': 195, 'American_Community_Survey': 196, 'American_Colonization_Society': 197, 'American_Chemical_Society': 198, 'Pan_Africanist_Congress': 199, 'political_action_committee': 200, 'Soka_Gakkai_International': 201, 'Silicon_Graphics': 202, 'Silicon_Graphics,_Inc.': 203, 'Board_of_Control_for_Cricket_in_India': 204, 'Bank_of_Credit_and_Commerce_International': 205, 'Human_Rights_Campaign': 206, 'Human_Rights_Council': 207, 'Philippine_Airlines': 208, 'Phase_Alternating_Line': 209, 'Canadian_Hockey_Association': 210, 'College_Hockey_America': 211, 'Chicago_Housing_Authority': 212, 'district_attorney': 213, 'Democratic_Alliance': 214, 'American_Dental_Association': 215, 'Americans_with_Disabilities_Act': 216, 'Deutsche_Bahn': 217, 'Deutsche_Bundesbahn': 218, 'Earth_Liberation_Front': 219, 'extremely_low_frequency': 220, 'Eritrean_Liberation_Front': 221, 'Asian_Development_Bank': 222, 'Apple_Desktop_Bus': 223, 'dendritic_cells': 224, 'direct_current': 225, 'Democracy': 226, 'Air_Defense_Command': 227, 'analog-to-digital_converter': 228, 'American-Arab_Anti-Discrimination_Committee': 229, 'Central_Hockey_League': 230, 'Canadian_Hockey_League': 231, 'General_Post_Office': 232, 'Government_Printing_Office': 233, 'Interplanetary_Transport_System': 234, 'Intelligent_Transportation_Systems': 235, 'World_Psychiatric_Association': 236, 'Works_Progress_Administration': 237, 'Wi-Fi_Protected_Access': 238, 'California_Highway_Patrol': 239, 'combined_heat_and_power': 240, 'Denominación_de_Origen': 241, 'dissolved_oxygen': 242, 'Hostage_Rescue_Team': 243, 'hormone_replacement_therapy': 244, 'International_Telecommunication_Union': 245, 
'International_Triathlon_Union': 246, 'Democratic_Party': 247, 'determiner_phrase': 248, 'displaced_persons': 249, 'Royal_Field_Artillery': 250, 'Royal_Fleet_Auxiliary': 251, 'Professional_Bowlers_Association': 252, 'Philippine_Basketball_Association': 253, 'Royal_Flying_Corps': 254, 'Reconstruction_Finance_Corporation': 255, 'Request_for_Comments': 256, 'alternative_dispute_resolution': 257, 'adverse_drug_reactions': 258, 'Higher_School_Certificate': 259, 'hematopoietic_stem_cells': 260, 'Culinary_Institute_of_America': 261, 'Central_Intelligence_Agency': 262, 'Premier_Basketball_League': 263, 'Philippine_Basketball_League': 264, 'problem-based_learning': 265, 'planetary_boundary_layer': 266, 'Counter_Intelligence_Corps': 267, 'Canadian_Islamic_Congress': 268, 'Electronic_Arts': 269, 'Environmental_Assessment': 270, 'Electro-Motive_Division': 271, 'Electro-Motive_Diesel': 272, 'entorhinal_cortex': 273, 'Executive_Committee': 274, 'European_Community': 275, 'European_Commission': 276, 'Federation_of_Association_Football': 277, 'Fédération_Internationale_de_Football_Association': 278, 'Atomic_Energy_Commission': 279, 'Australian_Electoral_Commission': 280, 'Pharmaceutical_Benefits_Scheme': 281, 'Public_Broadcasting_Service': 282, 'Champions_Indoor_Football': 283, 'California_Interscholastic_Federation': 284, 'Congress_of_Industrial_Organizations': 285, 'Chief_Information_Officer': 286, 'Hubble_Space_Telescope': 287, 'High_Speed_Train': 288, 'Harmonized_Sales_Tax': 289, 'Holden_Special_Vehicles': 290, 'herpes_simplex_virus': 291, 'volatile_organic_compounds': 292, 'Vereenigde_Oost-Indische_Compagnie': 293, 'Canadian_Interuniversity_Sport': 294, 'Commonwealth_of_Independent_States': 295, 'electric_multiple_unit': 296, 'Eastern_Michigan_University': 297, 'epithelial-mesenchymal_transition': 298, 'Emergency_Medical_Technician': 299, 'endoplasmic_reticulum': 300, 'estrogen_receptor': 301, 'University_of_Minnesota_Duluth': 302, 'Universal_Media_Disc': 303, 'annual_average_daily_traffic': 304, 'average_annual_daily_traffic': 305, 'apical_ectodermal_ridge': 306, 'Agri-Energy_Roundtable': 307, 'Pakistan_Cricket_Board': 308, 'printed_circuit_board': 309, 'polychlorinated_biphenyls': 310, 'Audio_Engineering_Society': 311, 'Advanced_Encryption_Standard': 312, 'principal_component_analysis': 313, 'Presbyterian_Church_in_America': 314, 'Special_Interest_Group': 315, 'Schweizerische_Industrie_Gesellschaft': 316, 'electric_vehicle': 317, 'exposure_value': 318, 'Police_and_Crime_Commissioner': 319, 'Press_Complaints_Commission': 320, 'Pacific_Coast_Conference': 321, 'downloadable_content': 322, 'Democratic_Leadership_Council': 323, 'Argentine_Football_Association': 324, 'American_Family_Association': 325, 'American_Football_Association': 326, 'Asian_Football_Confederation': 327, 'Australian_Flying_Corps': 328, 'American_Football_Conference': 329, 'Central_South_African_Railways': 330, 'combat_search_and_rescue': 331, 'Patent_Cooperation_Treaty': 332, 'Primary_Care_Trusts': 333, 'Union_for_Democracy_and_Progress': 334, 'United_Nations_Development_Programme': 335, 'All-America_Football_Conference': 336, 'Australian_Air_Force_Cadets': 337, 'Fourth_International': 338, 'Forza_Italia': 339, 'American_Film_Institute': 340, 'Australian_Film_Institute': 341, 'Australian_Football_League': 342, 'American_Federation_of_Labor': 343, 'American_Football_League': 344, 'Armed_Forces_of_Liberia': 345, 'Arizona_Fall_League': 346, 'Arena_Football_League': 347, 'Fabrique_Nationale': 348, 'Front': 349, 
'Digital_Light_Processing': 350, 'Democratic_Labor_Party': 351, 'Royal_Hibernian_Academy': 352, 'Royal_Horse_Artillery': 353, 'Australian_Federal_Police': 354, 'Armed_Forces_of_the_Philippines': 355, 'Agence_France-Presse': 356, 'Americans_for_Prosperity': 357, 'United_National_Congress': 358, 'University_of_North_Carolina': 359, 'United_Nations_Command': 360, 'personal_digital_assistants': 361, 'Population_and_Community_Development_Association': 362, 'partial_differential_equations': 363, 'Pennsylvania_Department_of_Education': 364, 'phosphodiesterase': 365, 'Australian_Securities_and_Investments_Commission': 366, 'application-specific_integrated_circuit': 367, "International_Workingmen's_Association": 368, 'International_Wrestling_Association': 369, 'probability_density_function': 370, 'Portable_Document_Format': 371, 'Housing_and_Urban_Development': 372, 'head-up_display': 373, 'general_aviation': 374, 'General_Assembly': 375, 'Bureau_of_Indian_Affairs': 376, 'Board_of_Immigration_Appeals': 377, 'British_Island_Airways': 378, 'George_Cross': 379, 'gas_chromatography': 380, 'gastrointestinal': 381, 'Geographical_Indication': 382, 'Grandmaster': 383, 'genetically_modified': 384, 'General_Motors': 385, 'general_manager': 386, 'general_practitioner': 387, 'Grand_Prix': 388, 'glucocorticoid_receptor': 389, 'Gorkha_Rifles': 390, 'Olympic_Council_of_Asia': 391, 'Orthodox_Church_in_America': 392, 'of_the_Currency': 393, 'Official_Charts_Company': 394, 'National_Association_of_Evangelicals': 395, 'National_Academy_of_Engineering': 396, 'Family_Research_Council': 397, 'Federal_Radio_Commission': 398, 'Environmental_Protection_Agency': 399, 'eicosapentaenoic_acid': 400, 'Washington_State_University': 401, 'Women_Superstars_Uncensored': 402, 'Electronic_Product_Code': 403, 'European_Patent_Convention': 404, 'American_Historical_Association': 405, 'American_Heart_Association': 406, 'National_Association_of_Manufacturers': 407, 'Non-Aligned_Movement': 408, 'transmembrane_segments': 409, 'transcranial_magnetic_stimulation': 410, 'high_definition': 411, "Huntington's_disease": 412, 'positron_emission_tomography': 413, 'polyethylene_terephthalate': 414, 'network-attached_storage': 415, 'National_Academy_of_Sciences': 416, 'European_Patent_Office': 417, 'erythropoietin': 418, 'Country_Liberal_Party': 419, 'Communist_Labor_Party': 420, 'East_Pakistan_Rifles': 421, 'electron_paramagnetic_resonance': 422, 'United_Progressive_Alliance': 423, 'United_Productions_of_America': 424, 'Armed_Forces_Revolutionary_Council': 425, 'Air_Force_Reserve_Command': 426, 'Central_London_Railway': 427, 'Common_Language_Runtime': 428, 'Hewlett-Packard': 429, 'high_pressure': 430, 'service_level_agreement': 431, 'South_Lebanon_Army': 432, 'Symbionese_Liberation_Army': 433, 'Sierra_Leone_Army': 434, 'home_runs': 435, 'heart_rate': 436, 'human_resources': 437, 'heparan_sulfate': 438, 'hydrogen_sulfide': 439, 'National_Bicycle_Association': 440, 'National_Basketball_Association': 441, 'prefrontal_cortex': 442, 'Pacific_Fur_Company': 443, 'Private_First_Class': 444, 'Operational_Detachment_Alpha': 445, 'Official_Development_Assistance': 446, 'National_Bus_Company': 447, 'National_Broadcasting_Company': 448, 'Novo_Basquete_Brasil': 449, 'National_Bank_of_Belgium': 450, 'North_American_Bridge_Championship': 451, 'National_Association_of_Basketball_Coaches': 452, 'Free_Syrian_Army': 453, 'Financial_Services_Authority': 454, 'Farm_Security_Administration': 455, 'census_metropolitan_area': 456, 'Country_Music_Association': 457, 
'Canadian_Medical_Association': 458, 'critical_micelle_concentration': 459, 'Central_Military_Commission': 460, 'computer-mediated_communication': 461, 'single-lens_reflex_camera': 462, 'single-lens_reflex': 463, 'China_Motor_Bus': 464, 'cosmic_microwave_background': 465, 'International_Baccalaureate': 466, 'Intelligence_Bureau': 467, 'National_Basketball_League': 468, 'National_Bicycle_League': 469, 'Ubiquitin-Proteasome_System': 470, 'United_Parcel_Service': 471, 'Chief_Mechanical_Engineer': 472, 'Chicago_Mercantile_Exchange': 473, 'coronal_mass_ejection': 474, 'degrees_of_freedom': 475, 'depth_of_field': 476, 'Intercity': 477, 'Intelligence_Community': 478, 'integrated_circuits': 479, 'Australian_Imperial_Force': 480, 'American_Indoor_Football': 481, 'finite_state_machine': 482, 'Federated_States_of_Micronesia': 483, 'Advanced_Idea_Mechanics': 484, 'AOL_Instant_Messenger': 485, 'American_Indian_Movement': 486, 'Alternative_Investment_Market': 487, 'spinal_muscular_atrophy': 488, 'supplementary_motor_area': 489, 'innings_pitched': 490, 'Internet_Protocol': 491, 'intellectual_property': 492, 'Compact_Muon_Solenoid': 493, 'Church_Missionary_Society': 494, 'content_management_system': 495, 'Single_Member_Constituency': 496, 'Small_Magellanic_Cloud': 497, 'Supreme_Military_Council': 498, 'San_Miguel_Corporation': 499, 'infrared': 500, 'international_relations': 501, 'Carnegie_Mellon_University': 502, 'Central_Michigan_University': 503, 'small_and_medium_enterprises': 504, 'Standard-Model_Extension': 505, 'Australian_Institute_of_Sport': 506, 'Automatic_Identification_System': 507, 'information_technology': 508, 'inclusive_tour': 509, 'intravenous': 510, 'initialization_vector': 511, 'message_authentication_code': 512, 'Mixed_Armistice_Commissions': 513, 'Mid-American_Conference': 514, 'Media_Access_Control': 515, 'Military_Airlift_Command': 516, 'Middle_Atlantic_Conferences': 517, 'Northern_Counties_Committee': 518, 'National_Council_of_Churches': 519, 'National_Cadet_Corps': 520, 'Free_Trade_Agreement': 521, 'Federal_Transit_Administration': 522, 'Equal_Rights_Amendment': 523, 'earned_run_average': 524, 'Southern_Methodist_University': 525, 'Singapore_Management_University': 526, 'Nuova_Camorra_Organizzata': 527, 'non-commissioned_officers': 528, 'enterprise_resource_planning': 529, 'event-related_potential': 530, 'effective_radiated_power': 531, 'Valley_Transportation_Authority': 532, 'ventral_tegmental_area': 533, 'Democratic_Progressive_Party': 534, 'Director_of_Public_Prosecutions': 535, 'Detroit_Public_Schools': 536, 'Department_of_Public_Safety': 537, 'Emergency_Response_Team': 538, 'Ellinikí_Radiofonía_Tileórasi': 539, 'carbon_nanotubes': 540, 'Confederación_Nacional_del_Trabajo': 541, 'National_Democratic_Alliance': 542, 'National_Defence_Academy': 543, 'Nuclear_Decommissioning_Authority': 544, 'Oceania_Football_Confederation': 545, 'orbitofrontal_cortex': 546, 'European_Space_Agency': 547, 'Entertainment_Software_Association': 548, 'Employment_and_Support_Allowance': 549, 'Endangered_Species_Act': 550, 'embryonic_stem_cells': 551, 'electronic_stability_control': 552, 'single_nucleotide_polymorphisms': 553, 'Scottish_National_Party': 554, 'Bermuda_Militia_Artillery': 555, 'British_Medical_Association': 556, 'New_Democratic_Party': 557, 'National_Democratic_Party': 558, 'Broadcast_Music,_Inc.': 559, 'body_mass_index': 560, 'World_Wrestling_Association': 561, 'World_Wrestling_All-Stars': 562, 'best_management_practices': 563, 'bone_morphogenetic_protein': 564, 
'non-small-cell_lung_carcinoma': 565, 'non-small_cell_lung_cancer': 566, 'University_of_Southern_California': 567, 'Ulster_Special_Constabulary': 568, 'World_Wrestling': 569, 'World_Wrestling_Federation': 570, 'World_Wrestling_Entertainment': 571, 'World_Wildlife_Fund': 572, 'Brooklyn–Manhattan_Transit_Corporation': 573, 'Basic_Military_Training': 574, 'Sony_Online_Entertainment': 575, 'Special_Operations_Executive': 576, 'University_of_South_Florida': 577, 'University_of_San_Francisco': 578, 'Newspaper_Enterprise_Association': 579, 'National_Education_Association': 580, 'National_Endowment_for_the_Arts': 581, 'Melbourne_Cricket_Club': 582, 'Millennium_Challenge_Corporation': 583, 'Marylebone_Cricket_Club': 584, 'Light_AA': 585, 'Light_Anti-Aircraft': 586, 'National_Electrical_Code': 587, 'Northeast_Corridor': 588, 'National_Executive_Committee': 589, 'Northeast_Conference': 590, 'Proto-Indo-European_language': 591, 'Proto-Indo-European': 592, 'Corporation_for_Public_Broadcasting': 593, 'Communist_Party_of_Burma': 594, 'Coalition_Provisional_Authority': 595, 'Certified_Public_Accountant': 596, 'Communist_Party_of_America': 597, 'Communist_Party_of_Australia': 598, 'Arab_Liberation_Army': 599, 'American_Library_Association': 600, 'Alliance': 601, 'National_Energy_Program': 602, 'New_Economic_Policy': 603, 'nucleotide_excision_repair': 604, 'North_Eastern_Railway': 605, 'Malawi_Congress_Party': 606, 'Master_Control_Program': 607, 'Malayan_Communist_Party': 608, 'norepinephrine_transporter': 609, 'National_Educational_Television': 610, 'Nottingham_Express_Transit': 611, 'Communist_Party_of_India': 612, 'Consumer_Price_Index': 613, 'Center_for_Public_Integrity': 614, 'Digital_Radio_Mondiale': 615, 'digital_rights_management': 616, 'Direct_Rendering_Manager': 617, "Convention_People's_Party": 618, "Cambodian_People's_Party": 619, 'Communist_Party_of_the_Philippines': 620, 'United_Talent_Agency': 621, 'Utah_Transit_Authority': 622, 'Union_de_Transports_Aériens': 623, 'cardiopulmonary_resuscitation': 624, 'Canadian_Pacific_Railway': 625, 'Antigua_Labour_Party': 626, 'Australian_Labor_Party': 627, 'Bangladesh_Nationalist_Party': 628, 'British_National_Party': 629, 'Canadian_Security_Intelligence_Service': 630, 'Center_for_Strategic_and_International_Studies': 631, 'Educational_Testing_Service': 632, 'emissions_trading_scheme': 633, 'University_Technical_College': 634, 'United_Technologies_Corporation': 635, 'Chicago_Public_Schools': 636, 'Civilian_Public_Service': 637, 'Crown_Prosecution_Service': 638, 'Socialist_Party_of_Canada': 639, 'Storm_Prediction_Center': 640, 'Advanced_Life_Support': 641, 'amyotrophic_lateral_sclerosis': 642, 'National_Firearms_Act': 643, 'nondeterministic_finite_automaton': 644, 'Ontario_Hockey_Association': 645, 'Office_of_Hawaiian_Affairs': 646, 'near_field_communication': 647, 'National_Football_Conference': 648, 'Scottish_Premier_League': 649, 'sound_pressure_level': 650, 'American_Motorcyclist_Association': 651, 'American_Medical_Association': 652, 'American_Missionary_Association': 653, 'Air_Mobility_Command': 654, 'Army_Materiel_Command': 655, 'American_Motors_Corporation': 656, 'Medicine': 657, 'Municipal_District': 658, 'Socialist_Party_of_Serbia': 659, 'Super_Proton_Synchrotron': 660, 'Advanced_Micro_Devices': 661, 'age-related_macular_degeneration': 662, 'Maldivian_Democratic_Party': 663, 'Ministry_of_Defence_Police': 664, 'Bureau_of_Prisons': 665, 'blowout_preventer': 666, 'Military_Police': 667, 'Member_of_Parliament': 668, 'decision_support_systems': 669, 
'Diplomatic_Security_Service': 670, 'Metropolitan_Railway': 671, 'Midland_Railway': 672, 'mobile_station': 673, 'mass_spectrometry': 674, 'Master_of_Science': 675, 'multiple_sclerosis': 676, 'Republic_of_China': 677, 'Royal_Observer_Corps': 678, 'Russian_Orthodox_Church': 679, 'Organisation_of_Islamic_Cooperation': 680, 'Organisation_of_the_Islamic_Conference': 681, 'Landing_Craft_Assault': 682, 'life_cycle_assessment': 683, 'liquid_crystal_display': 684, 'Lesotho_Congress_for_Democracy': 685, 'United_Nations_High_Commissioner_for_Refugees': 686, 'UN_High_Commissioner_for_Refugees': 687, 'read-only_memory': 688, 'Royal_Ontario_Museum': 689, 'Community_Reinvestment_Act': 690, 'Canada_Revenue_Agency': 691, 'Australian_National_Airways': 692, 'All_Nippon_Airways': 693, 'Afghan_National_Army': 694, 'American_Numismatic_Association': 695, 'cyclic_redundancy_check': 696, 'Civil_Rights_Congress': 697, 'Christian_Reformed_Church': 698, 'remotely_operated_underwater_vehicle': 699, 'remotely_operated_vehicle': 700, 'British_Phonographic_Industry': 701, 'Bank_of_the_Philippine_Islands': 702, 'London_and_Continental_Railways': 703, 'Log_Cabin_Republicans': 704, 'League_Championship_Series': 705, 'London_Controlling_Section': 706, 'natural_killer': 707, 'North_Korean': 708, 'nitrogen_dioxide': 709, 'nitric_oxide': 710, 'National_Party': 711, 'nanoparticles': 712, 'noun_phrase': 713, 'Shuttle_Solid_Rocket_Booster': 714, 'solid_rocket_boosters': 715, "People's_Liberation_Army": 716, 'Port_of_London_Authority': 717, 'American_League_Championship_Series': 718, 'Airborne_Launch_Control_System': 719, 'Commercial_Resupply_Services': 720, 'Congressional_Research_Service': 721, 'Supreme_Revolutionary_Council': 722, 'Student_Representative_Council': 723, 'phospholipase_C': 724, 'Palestinian_Legislative_Council': 725, 'programmable_logic_controllers': 726, 'rocket-propelled_grenade': 727, 'role-playing_video_game': 728, 'role-playing_game': 729, 'Richmond_Professional_Institute': 730, 'Rensselaer_Polytechnic_Institute': 731, 'System_of_Rice_Intensification': 732, 'Stanford_Research_Institute': 733, 'House_Committee_on_Un-American_Activities': 734, 'House_Un-American_Activities_Committee': 735, 'Central_Statistical_Agency_of_Ethiopia': 736, 'Combined_Statistical_Area': 737, 'child_sexual_abuse': 738, 'Canadian_Standards_Association': 739, 'product_lifecycle_management': 740, 'Pamantasan_ng_Lungsod_ng_Maynila': 741, 'Railway_Post_Office': 742, 'Royal_Philharmonic_Orchestra': 743, 'Reverse_Polish_Notation': 744, 'Radio_Philippines_Network': 745, 'U.S._Army_Corps_of_Engineers': 746, 'United_States_Army_Corps_of_Engineers': 747, 'National_Historic_Landmark': 748, 'National_Hockey_League': 749, 'Air_Officer_Commanding': 750, "Appellation_d'origine_contrôlée": 751, 'Church_of_Scientology_International': 752, 'Church_of_South_India': 753, 'conserved_signature_indels': 754, 'Committee_for_Skeptical_Inquiry': 755, 'Météo-France': 756, 'Météo-France_office_in_Réunion': 757, 'National_Highway_System': 758, 'National_Health_Service': 759, 'Canada_Steamship_Lines': 760, 'Canadian_Soccer_League': 761, 'University_of_Western_Australia': 762, 'Universal_Wrestling_Association': 763, 'Communicating_Sequential_Processes': 764, 'concentrated_solar_power': 765, 'Cascading_Style_Sheets': 766, 'Content_Scramble_System': 767, 'Colorado_State_University': 768, 'Christian_Social_Union': 769, 'California_State_University': 770, 'Ordnance_Survey': 771, 'operating_system': 772, 'Orthodox_Union': 773, 'Oakland_University': 774, 
'Open_University': 775, 'National_Intelligence_Agency': 776, 'National_Investigation_Agency': 777, 'Sensitive_Security_Information': 778, 'Supplemental_Security_Income': 779, 'Voice_over_IP': 780, 'Voice_over_Internet_Protocol': 781, 'Solid_State_Logic': 782, 'Secure_Sockets_Layer': 783, 'Pakistan_Muslim_League': 784, 'progressive_multifocal_leukoencephalopathy': 785, 'Palestinian_Authority': 786, "People's_Association": 787, 'Peano_arithmetic': 788, "People's_Alliance": 789, 'Palestinian_National_Authority': 790, 'program_counter': 791, 'player_characters': 792, 'personal_computer': 793, 'Philippine_Constabulary': 794, 'Progressive_Conservative': 795, 'Association_for_Progressive_Communications': 796, "All_People's_Congress": 797, 'All_Progressives_Congress': 798, 'armored_personnel_carriers': 799, 'antigen-presenting_cells': 800, 'supersonic_transport': 801, 'sea_surface_temperature': 802, 'National_Institutes_of_Technology': 803, 'National_Invitation_Tournament': 804, 'National_Intelligence_Service': 805, 'Naval_Investigative_Service': 806, 'Academic_Performance_Index': 807, 'American_Petroleum_Institute': 808, 'application_programming_interface': 809, 'active_pharmaceutical_ingredients': 810, 'Air_Pollution_Index': 811, 'particulate_matter': 812, 'Prime_Minister': 813, 'Partido_Popular': 814, 'Progressive_Party': 815, 'polypropylene': 816, "People's_Party": 817, 'Asia_Pulp_&_Paper': 818, 'amyloid_precursor_protein': 819, 'public_relations': 820, 'Party': 821, 'proportional_representation': 822, 'Communist_Party_USA': 823, 'Communist_Party_of_the_United_States': 824, 'European_Court_of_Human_Rights': 825, 'European_Convention_on_Human_Rights': 826, 'Portable_Network_Graphics': 827, 'Papua_New_Guinea': 828, 'Natural_Environment_Research_Council': 829, 'North_American_Electric_Reliability_Corporation': 830, 'scanning_tunneling_microscope': 831, 'Société_de_transport_de_Montréal': 832, 'Birmingham_Small_Arms_Company': 833, 'Boy_Scouts_of_America': 834, "People's_National_Party": 835, 'Philippine_National_Police': 836, 'bovine_spongiform_encephalopathy': 837, 'Bombay_Stock_Exchange': 838, 'United_Kingdom_Independence_Party': 839, 'UK_Independence_Party': 840, 'Bangko_Sentral_ng_Pilipinas': 841, 'Bulgarian_Socialist_Party': 842, 'British_Socialist_Party': 843, 'Bahujan_Samaj_Party': 844, 'Baloch_Students_Organization': 845, 'Boston_Symphony_Orchestra': 846, 'methyl_isocyanate': 847, 'Malaysian_Indian_Congress': 848, 'Israeli_Air_Force': 849, 'Indian_Air_Force': 850, 'Ninoy_Aquino_International_Airport': 851, 'National_Association_of_Intercollegiate_Athletics': 852, 'rheumatoid_arthritis': 853, 'Royal_Artillery': 854, 'chemical_vapor_deposition': 855, 'cardiovascular_disease': 856, 'Australian_Research_Council': 857, 'American_Red_Cross': 858, 'Royal_Engineers': 859, 'Regular_Royal_Engineers': 860, 'radio_frequency': 861, 'Rhodesian_Front': 862, 'alternate_reality_game': 863, 'Amphibious_Ready_Group': 864, 'Jim_Crockett_Promotions': 865, 'Japanese_Communist_Party': 866, 'Parliamentary_Assembly_of_the_Council_of_Europe': 867, 'Property_Assessed_Clean_Energy': 868, 'Indian_Administrative_Service': 869, 'Institute_for_Advanced_Study': 870, 'rural_municipality': 871, 'river_mile': 872, 'Royal_Navy': 873, 'registered_nurse': 874, 'Anti-Revolutionary_Party': 875, 'Address_Resolution_Protocol': 876, 'Resolution_Trust_Corporation': 877, 'Religious_Technology_Center': 878, 'Society_of_the_Divine_Word': 879, 'singular_value_decomposition': 880, 'National_League_A': 881, 'National_Liberation_Army': 882, 
'antiretroviral_therapy': 883, 'Advanced_Rapid_Transit': 884, 'reentry_vehicle': 885, 'recreational_vehicle': 886, 'Independent_Broadcasting_Authority': 887, 'Israel_Broadcasting_Authority': 888, 'Important_Bird_Area': 889, 'Intercontinental_Broadcasting_Corporation': 890, 'Iraq_Body_Count_project': 891, 'Communications_Workers_of_America': 892, 'Clean_Water_Act': 893, 'Sturmabteilung': 894, 'South_Australia': 895, 'Advertising_Standards_Authority': 896, 'American_Sociological_Association': 897, 'American_Speed_Association': 898, 'Army_Security_Agency': 899, 'Real-time_Transport_Protocol': 900, 'Rádio_e_Televisão_de_Portugal': 901, 'real-time_strategy': 902, 'Radio_Television_of_Serbia': 903, "People's_Power_Party": 904, 'Pakistan_Peoples_Party': 905, 'Public_Private_Partnership': 906, 'purchasing_power_parity': 907, "People's_Progressive_Party": 908, 'Point-to-Point_Protocol': 909, 'Secure_Digital': 910, 'Sicherheitsdienst': 911, 'atrial_septal_defect': 912, 'autism_spectrum_disorder': 913, 'neuro-linguistic_programming': 914, 'natural_language_processing': 915, 'Political_Party_of_Radicals': 916, "Polish_Workers'_Party": 917, 'Special_Forces': 918, 'science_fiction': 919, 'System_of_Units': 920, 'Situationist_International': 921, 'American_Sign_Language': 922, 'American_Soccer_League': 923, 'Socialist_Party': 924, 'Southern_Pacific': 925, 'College_World_Series': 926, 'Co-operative_Wholesale_Society': 927, 'sarcoplasmic_reticulum': 928, 'Southern_Railway': 929, 'Super_Sport': 930, 'Schutzstaffel': 931, 'International_Criminal_Court': 932, 'International_Cricket_Council': 933, 'Interstate_Commerce_Commission': 934, 'Heavy_Anti-Aircraft': 935, 'Heavy_AA': 936, 'Intercity-Express': 937, 'Intercontinental_Exchange': 938, 'internal_combustion_engine': 939, 'Institution_of_Civil_Engineers': 940, 'Immigration_and_Customs_Enforcement': 941, 'Iron_Crown_Enterprises': 942, 'implantable_cardioverter-defibrillator': 943, 'International_Classification_of_Diseases': 944, 'International_Canoe_Federation': 945, 'inertial_confinement_fusion': 946, 'Air_Training_Command': 947, 'air_traffic_control': 948, 'Automatic_Train_Control': 949, 'Air_Transport_Command': 950, 'Air_Training_Corps': 951, 'Teachta_Dála': 952, 'touchdown': 953, 'Fédération_Internationale_du_Sport_Automobile': 954, 'Foreign_Intelligence_Surveillance_Act': 955, 'Standard_Widget_Toolkit': 956, 'South_West_Trains': 957, 'nuclear_magnetic_resonance_spectroscopy': 958, 'nuclear_magnetic_resonance': 959, 'Task_Force': 960, 'Territorial_Force': 961, 'Alcohol,_Tobacco_and_Firearms': 962, 'American_Type_Founders': 963, 'Indian_Cricket_League': 964, 'International_Computers_Limited': 965, 'Incident_Command_System': 966, 'Indian_Civil_Service': 967, 'Institute_for_Creation_Research': 968, 'Intercolonial_Railway': 969, 'transmembrane': 970, 'Transcendental_Meditation': 971, 'automated_teller_machines': 972, 'Asynchronous_Transfer_Mode': 973, 'Islamic_Courts_Union': 974, 'intensive_care_unit': 975, 'International_Computers_and_Tabulators': 976, 'information_and_communications_technology': 977, 'Automatic_Train_Operation': 978, 'Australian_Taxation_Office': 979, 'adenosine_triphosphate': 980, 'Association_of_Tennis_Professionals': 981, 'Automatic_Train_Protection': 982, 'Terrestrial_Time': 983, 'Tourist_Trophy': 984, 'Associated_Television': 985, 'all-terrain_vehicles': 986, 'Partido_Revolucionario_Institucional': 987, 'Public_Radio_International': 988, 'Institute_for_Defense_Analyses': 989, 'International_Development_Association': 990, 
'Negro_National_League': 991, 'National_Natural_Landmarks': 992, 'United_Nations_Space_Command': 993, 'United_Nations_Security_Council': 994, 'African_Union_Commission': 995, 'American_University_in_Cairo': 996, 'Women_Airforce_Service_Pilots': 997, 'White_Anglo-Saxon_Protestant': 998, 'U.S._Fish_and_Wildlife_Service': 999, 'United_States_Fish_and_Wildlife_Service': 1000, 'Provincial_Reconstruction_Team': 1001, 'personal_rapid_transit': 1002, 'University_of_Kentucky': 1003, 'United_Kingdom': 1004, 'University_of_Michigan': 1005, 'University_of_Miami': 1006, 'Uttar_Pradesh': 1007, 'Union_Pacific': 1008, 'United_Press': 1009, 'unconditioned_stimulus': 1010, 'United_States': 1011, 'public_service_announcement': 1012, 'Professional_Squash_Association': 1013, 'prostate-specific_antigen': 1014, 'Pacific_Southwest_Airlines': 1015, 'Politburo_Standing_Committee': 1016, 'Public_Service_Commission': 1017, 'International_Energy_Agency': 1018, 'Institute_of_Economic_Affairs': 1019, 'goals_against_average': 1020, 'Gaelic_Athletic_Association': 1021, 'Philippine_Super_Liga': 1022, 'Premier_Soccer_League': 1023, 'Veterans_Administration': 1024, 'Veterans_Affairs': 1025, 'Lithuanian_Basketball_League': 1026, 'Lietuvos_krepšinio_lyga': 1027, 'PlayStation_Portable': 1028, 'Pacifist_Socialist_Party': 1029, 'Progressive_Socialist_Party': 1030, 'Viet_Cong': 1031, 'Victoria_Cross': 1032, 'mixed_member_proportional_representation': 1033, 'matrix_metalloproteinases': 1034, 'mixed-member_proportional': 1035, 'General_Accounting_Office': 1036, 'Government_Accountability_Office': 1037, "Women's_Army_Corps": 1038, 'Western_Athletic_Conference': 1039, 'Victorian_Railways': 1040, 'virtual_reality': 1041, 'Non-Partisan_Association': 1042, "New_People's_Army": 1043, 'Parents_Television_Council': 1044, 'Philadelphia_Transportation_Company': 1045, 'non-player_characters': 1046, 'National_Paralympic_Committees': 1047, "National_People's_Congress": 1048, 'nuclear_pore_complexes': 1049, 'National_Priorities_List': 1050, 'National_Physical_Laboratory': 1051, 'New_Patriotic_Party': 1052, 'New_Progressive_Party': 1053, 'high-density_lipoprotein': 1054, 'hardware_description_language': 1055, 'Nuclear_Non-Proliferation_Treaty': 1056, 'Non-Proliferation_Treaty': 1057, 'World_Boxing_Council': 1058, 'World_Baseball_Classic': 1059, 'Westboro_Baptist_Church': 1060, "Women's_Premier_Soccer_League": 1061, 'Women’s_Pro_Softball_League': 1062, 'Automatic_Warning_System': 1063, 'Amazon_Web_Services': 1064, 'RNA-induced_silencing_complex': 1065, 'reduced_instruction_set_computing': 1066, 'Open_Source_Initiative': 1067, 'Office_of_Special_Investigations': 1068, 'Open_Systems_Interconnection': 1069, 'GNU_Compiler_Collection': 1070, 'Gulf_Cooperation_Council': 1071, 'Federal_Aviation_Administration': 1072, 'Fleet_Air_Arm': 1073, 'Federal_Arbitration_Act': 1074, 'Inoki_Genome_Federation': 1075, 'Internet_Governance_Forum': 1076, 'Progressive_Unionist_Party': 1077, "People's_United_Party": 1078, 'Fédération_Aéronautique_Internationale': 1079, 'Football_Association_of_Ireland': 1080, 'Ohio_State_University': 1081, 'Oklahoma_State_University': 1082, 'Malayan_National_Liberation_Army': 1083, 'Movement_for_the_Liberation_of_Azawad': 1084, 'London_Missionary_Society': 1085, 'learning_management_system': 1086, 'West_Coast_Conference': 1087, 'World_Council_of_Churches': 1088, 'National_Revolutionary_Army': 1089, 'National_Resistance_Army': 1090, 'National_Rifle_Association': 1091, 'National_Recovery_Administration': 1092, 'polyvinyl_chloride': 1093, 
'premature_ventricular_contraction': 1094, 'National_Research_Council': 1095, 'Nuclear_Regulatory_Commission': 1096, 'National_Rugby_Championship': 1097, 'over-the-counter': 1098, 'Overseas_Telecommunications_Commission': 1099, 'Western_Canadian_Select': 1100, 'Wildlife_Conservation_Society': 1101, 'National_Rugby_League': 1102, 'Naval_Research_Laboratory': 1103, 'Marijuana_Policy_Project': 1104, 'Member_of_Provincial_Parliament': 1105, "People's_Liberation_Army_of_Namibia": 1106, "People's_Liberation_Army_Navy": 1107, 'Office_of_Thrift_Supervision': 1108, 'Officer_Training_School': 1109, 'Ligue_Nationale_de_Rugby': 1110, 'Local_Nature_Reserve': 1111, 'guanosine_diphosphate': 1112, 'gross_domestic_product': 1113, 'achieved_Adequate_Yearly_Progress': 1114, 'Adequate_Yearly_Progress': 1115, 'International_Association_of_Athletics_Federations': 1116, 'International_Amateur_Athletics_Federation': 1117, 'National_Security_Council': 1118, 'Neural_stem_cells': 1119, 'National_Security_Guards': 1120, 'Nuclear_Suppliers_Group': 1121, 'Global_Environment_Facility': 1122, 'guanine_nucleotide_exchange_factor': 1123, 'National_Service_Scheme': 1124, 'National_Security_Service': 1125, 'Emergency_Alert_System': 1126, 'East_Asia_Summit': 1127, 'Military_Revolutionary_Council': 1128, 'Medical_Research_Council': 1129, 'Health_and_Hospitals_Corporation': 1130, 'Headquarters_and_Headquarters_Company': 1131, 'United_Automobile_Workers': 1132, 'United_Auto_Workers': 1133, 'maintenance,_repair_and_overhaul': 1134, 'Mars_Reconnaissance_Orbiter': 1135, 'Franklin_D._Roosevelt': 1136, 'flight_data_recorder': 1137, 'multiple_sequence_alignment': 1138, 'Master_Settlement_Agreement': 1139, 'Modern_Standard_Arabic': 1140, 'Metropolitan_Statistical_Area': 1141, 'Marine_Stewardship_Council': 1142, 'mesenchymal_stem_cells': 1143, 'Military_Sealift_Command': 1144, 'National_Institute_on_Drug_Abuse': 1145, 'National_Institute_of_Dramatic_Art': 1146, 'U.S._Air_Force': 1147, 'United_States_Air_Force': 1148, 'European_Central_Bank': 1149, 'England_and_Wales_Cricket_Board': 1150, 'Development_Assistance_Committee': 1151, 'digital-to-analog_converter': 1152, 'directed_acyclic_graph': 1153, 'diacylglycerol': 1154, 'National_University_of_Singapore': 1155, 'National_Union_of_Students': 1156, 'extracellular_matrix': 1157, 'electronic_countermeasures': 1158, 'Michigan_State_University': 1159, 'Montana_State_University': 1160, 'Explicit_Congestion_Notification': 1161, 'electronic_communication_networks': 1162, 'United_Church_of_Christ': 1163, 'Upper_Canada_College': 1164, 'University_College_Cork': 1165, 'Uniform_Commercial_Code': 1166, 'Universal_Copyright_Convention': 1167, 'European_Conservatives_and_Reformists': 1168, 'Eastern_Counties_Railway': 1169, 'dopamine_transporter': 1170, 'Digital_Audio_Tape': 1171, 'electronic_control_unit': 1172, 'engine_control_unit': 1173, 'Tactical_Air_Command': 1174, 'Treatment_Action_Campaign': 1175, 'University_of_California,_Irvine': 1176, 'Union_Cycliste_Internationale': 1177, 'Maryland_Transit_Administration': 1178, 'Metropolitan_Transportation_Authority': 1179, 'Metropolitan_Transit_Authority': 1180, 'mail_transfer_agent': 1181, 'Center_for_Operations_Research_and_Econometrics': 1182, 'Congress_of_Racial_Equality': 1183, 'Michigan_Terminal_System': 1184, 'Metropolitan_Transit_System': 1185, 'Manitoba_Telecom_Services': 1186, 'Light_Rail_Transit': 1187, 'Lithuanian_National_Radio_and_Television': 1188, 'Lone_Star_Conference': 1189, 'Legal_Services_Corporation': 1190, 'London_Stock_Exchange': 
1191, 'London_School_of_Economics': 1192, 'United_Democratic_Party': 1193, 'User_Datagram_Protocol': 1194, 'Welsh_Highland_Railway': 1195, 'waist-to-hip_ratio': 1196, 'Colonial_Athletic_Association': 1197, 'Civil_Aviation_Authority': 1198, 'Department_of_Community_Affairs': 1199, 'Drum_Corps_Associates': 1200, 'Impossible_Missions_Force': 1201, 'International_Monetary_Fund': 1202, 'Civil_Aeronautics_Board': 1203, 'Criminal_Assets_Bureau': 1204, 'Confederation_of_African_Football': 1205, 'Canadian_Arab_Federation': 1206, 'Commemorative_Air_Force': 1207, 'International_Maritime_Organization': 1208, 'International_Mathematical_Olympiad': 1209, 'Drum_Corps_International': 1210, 'Director_of_Central_Intelligence': 1211, 'West_India_Company': 1212, 'Women,_Infants_and_Children': 1213, 'Stabilisation_and_Association_Agreement': 1214, 'South_African_Airways': 1215, 'combat_air_patrol': 1216, 'Common_Agricultural_Policy': 1217, 'Civil_Air_Patrol': 1218, 'Chinese_Academy_of_Sciences': 1219, 'Chief_of_the_Air_Staff': 1220, 'Court_of_Arbitration_for_Sport': 1221, 'close_air_support': 1222, 'Strategic_Air_Command': 1223, 'spindle_assembly_checkpoint': 1224, 'Special_Action_Force': 1225, 'Singapore_Armed_Forces': 1226, 'model–view–controller': 1227, 'Missouri_Valley_Conference': 1228, 'United_States_Department_of_Agriculture': 1229, 'U.S._Department_of_Agriculture': 1230, 'surface-to-air_missile': 1231, 'S-adenosyl_methionine': 1232, 'Indian_National_Congress': 1233, 'Iglesia_ni_Cristo': 1234, 'National_Premier_Soccer_League': 1235, 'National_Professional_Soccer_League': 1236, 'traditional_Chinese_medicine': 1237, 'Turner_Classic_Movies': 1238, 'Chinese_Basketball_Association': 1239, 'collective_bargaining_agreement': 1240, 'Continental_Basketball_Association': 1241, 'cannabidiol': 1242, 'central_business_district': 1243, 'Convention_on_Biological_Diversity': 1244, 'Electronic_Frontier_Foundation': 1245, 'Economic_Freedom_Fighters': 1246, 'Scandinavian_Airlines': 1247, 'Special_Air_Service': 1248, 'Serial_Attached_SCSI': 1249, 'Scandinavian_Airlines_System': 1250, 'synthetic_aperture_radar': 1251, 'Special_Administrative_Region': 1252, 'search_and_rescue': 1253, 'South_African_Railways': 1254, 'Most_Valuable_Player': 1255, 'Montel_Vontavious_Porter': 1256, 'electronic_fuel_injection': 1257, 'Extensible_Firmware_Interface': 1258, 'Christian_Broadcasting_Network': 1259, 'Central_Bank_of_Nigeria': 1260, 'Science_Applications_International_Corporation': 1261, 'School_of_the_Art_Institute_of_Chicago': 1262, 'Immigration_and_Naturalization_Service': 1263, 'International_News_Service': 1264, 'inertial_navigation_system': 1265, 'Southern_Baptist_Convention': 1266, 'Swiss_Bank_Corporation': 1267, 'State_Bank_of_India': 1268, 'State_Bureau_of_Investigation': 1269, 'Federal_Investigation_Agency': 1270, "Fédération_Internationale_de_l'Automobile": 1271, 'unidentified_flying_objects': 1272, 'United_Farmers_of_Ontario': 1273, 'thermal_design_power': 1274, 'Telugu_Desam_Party': 1275, 'Civilian_Conservation_Corps': 1276, 'Commodity_Credit_Corporation': 1277, 'Department_of_Environmental_Conservation': 1278, 'Digital_Equipment_Corporation': 1279, 'British_Aircraft_Corporation': 1280, 'blood_alcohol_concentration': 1281, 'blood_alcohol_content': 1282, 'Intergovernmental_Panel_on_Climate_Change': 1283, 'Independent_Police_Complaints_Commission': 1284, 'Special_Broadcasting_Service': 1285, 'Special_Boat_Service': 1286, 'Seoul_Broadcasting_System': 1287, 'Select_Bus_Service': 1288, 'Canadian_Coast_Guard': 1289, 
'collectible_card_game': 1290, 'International_Organization_for_Migration': 1291, 'Institute_of_Medicine': 1292, 'Combined_Cadet_Force': 1293, 'Co-operative_Commonwealth_Federation': 1294, 'Flight_Information_Region': 1295, 'First_Information_Report': 1296, 'finite_impulse_response': 1297, 'Society_for_Creative_Anachronism': 1298, 'Supreme_Court_of_Appeal': 1299, 'Centre_National_de_la_Recherche_Scientifique': 1300, 'Centre_for_Scientific_Research': 1301, 'Data_Encryption_Standard': 1302, 'diethylstilbestrol': 1303, 'Commercial_Orbital_Transportation_Services': 1304, 'commercial_off-the-shelf': 1305, 'Trans_Europ_Express': 1306, 'Tyne_Electrical_Engineers': 1307, 'Islamic_State_of_Iraq_and_Syria': 1308, 'Islamic_State': 1309, 'Royal_Air_Force': 1310, 'Red_Army_Faction': 1311, 'International_Phonetic_Alphabet': 1312, 'International_Psychoanalytical_Association': 1313, 'International_Publishers_Association': 1314, 'inter-process_communication': 1315, 'International_Paralympic_Committee': 1316, 'Iraq_Petroleum_Company': 1317, 'Democracy_for_America': 1318, 'deterministic_finite_automaton': 1319, 'Shanghai_Cooperation_Organisation': 1320, 'Santa_Cruz_Operation': 1321, 'Revolutionary_Action_Movement': 1322, 'Royal_Academy_of_Music': 1323, 'random-access_memory': 1324} -# LABEL_TO_ID = {'active noise equalizer': 0, 'Areca Nut Extract': 1, 'Acute necrotizing encephalopathy': 2, 'A nodosum extracts': 3, 'anomalous Nernst effect': 4, 'Virtual Immersive Learning': 5, 'vertically integrated liquid': 6, 'bulk metallic glass': 7, 'Buccal mucosa graft': 8, 'binary multilocus genotype': 9, 'blaze multilayer grating': 10, 'Birmingham hip resurfacing': 11, 'bean husk raw': 12, 'Birmingham Hip Replacement': 13, 'broad host range': 14, 'Bronchial Hyper Responsiveness': 15, 'spectral quantum efficiency': 16, 'Signal quality estimates': 17, 'Sasa quelpaertensis extracts': 18, 'semi quantitative echocardiography': 19, 'fully informed particle swarm': 20, 'Federal Information Processing Standard': 21, 'Avian influenza virus': 22, 'apical inferior vertebrae': 23, 'Aggregate Impact Value': 24, 'anterior interventricular vein': 25, 'silicon soft dynamic antiextrusion': 26, 'scale selective data assimilation': 27, 'Von Hippel Lindau': 28, 'Virtual Health Library': 29, 'superficial digital flexor tendon': 30, 'Sliding Discrete Fourier transform': 31, 'artificial neural network': 32, 'axillary node negative': 33, 'Electrically evoked cortical potentials': 34, 'ethanol extracted Chinese propolis': 35, 'Enhanced External Counter Pulsation': 36, 'High affinity heparin': 37, 'hierarchical annular histogram': 38, 'catalytic chemical vapor deposition': 39, 'cardiac cerebral vascular disease': 40, 'focused ion beam': 41, 'focal ictal beta': 42, 'Forwarding Information Base': 43, 'frequency invariant beamforming': 44, 'faecal indicator bacteria': 45, 'dark lock in thermography': 46, 'diffuse light imaging tomography': 47, 'quasi phase matching': 48, 'quality protein maize': 49, 'N vinyl pyrrolidinone': 50, 'nodal vesicular parcel': 51, 'one temperature model': 52, 'of the money': 53, 'Orthodontic tooth movement': 54, 'olive tail moment': 55, 'oxygen transportation Membrane': 56, 'wavelength division multiplexing': 57, 'Working Day Movement': 58, 'warm dark matter': 59, 'Windowed Discrete Model': 60, 'fluorescence guided resection': 61, 'fetal growth retardation': 62, 'fractional growth rate': 63, 'epitaxial lateral overgrowth': 64, 'Epoxidized linseed oils': 65, 'figure of merit': 66, 'first order moment': 67, 'full order 
model': 68, 'gate turn off': 69, 'Getting To Outcomes®': 70, 'water oil contact': 71, 'whole organ culture': 72, 'fiber Bragg grating': 73, 'Fasting blood glucose': 74, 'fiber based generator': 75, 'area of interest': 76, 'average optical intensity': 77, 'And Or Inverter': 78, 'automated optical inspection': 79, 'angle of incidence': 80, 'homotopy analysis method': 81, 'human amniotic membrane': 82, 'high albedo materials': 83, 'hybrid assessment method': 84, 'haplotype association mapping': 85, 'wavelet scalar quantization': 86, 'Workforce Sitting Questionnaire': 87, 'Unequal error protection': 88, 'upper esophageal pouch': 89, 'Media Interoperability Lab': 90, 'multiple instance Learning': 91, 'metamaterial immersion lens': 92, 'Mind in Labor': 93, 'Matrox Imaging Libraries': 94, 'Bose Chaudhuri Hocquenghem': 95, 'Barretos Cancer Hospital': 96, 'basal cell hyperplasia': 97, 'inverse synthetic aperture radar': 98, 'Individual species area relationship': 99, 'delay lock loop': 100, 'Delta Like Ligand': 101, 'Quasi Zenith Satellite': 102, 'quasi zero stiffness': 103, 'very very early': 104, 'vasa vasorum externa': 105, 'vancomycin variable enterococci': 106, 'variable reflective mirror': 107, 'Verbal recognition memory': 108, 'vector ruggedness measure': 109, 'nonnegative tensor factorization': 110, 'noise transfer function': 111, 'N terminal fragment': 112, 'native thin filament': 113, 'nuclear targeting fusion': 114, 'accelerated Runge Kutta': 115, 'AMPK related kinase': 116, 'Alveolar ridge keratosis': 117, 'initial value problem': 118, 'in vitro production': 119, 'intra ventricular pressure': 120, 'induced visceral pain': 121, 'Beverton Holt equation': 122, 'borehole heat exchanger': 123, 'before head eversion': 124, 'resin transfer molding': 125, 'reverse time migration': 126, 'residual terrain model': 127, 'relative tumor mass': 128, 'representative test materials': 129, 'vector network analyzer': 130, 'virus neutralizing antibodies': 131, 'printed elliptical monopole antenna': 132, 'Principal Elementary Mode Analysis': 133, 'electromagnetic band gap': 134, 'electro burnt graphene': 135, 'quantum denoising system': 136, 'quotient digit selection': 137, 'Superior Longitudinal Fasciculus': 138, 'simulated lacrimal fluid': 139, 'Son La Fault': 140, 'spatial likelihood function': 141, 'sonographic lung field': 142, 'zero point charge': 143, 'zona pellucida C': 144, 'Zernike phase contrast': 145, 'oscillating water column': 146, 'optical wireless communication': 147, 'Wireless local loop': 148, 'whole lung lavage': 149, 'whole lumbar lordosis': 150, 'relative neighborhood graph': 151, 'random number generator': 152, 'Linear Parameter Varying': 153, 'left pulmonary vein': 154, 'left portal vein': 155, 'lung protective ventilation': 156, 'glomerular tuft area': 157, 'gross tumor area': 158, 'gene transfer agent': 159, 'ground truth area': 160, 'General Transcription Apparatus': 161, 'insulin tolerance test': 162, 'intent to treat': 163, 'Ifakara Tunnel Test': 164, 'Ifakara tent trap': 165, 'immunoglobulin tail tyrosine': 166, 'Word error rate': 167, 'Weekly Epidemiological Record': 168, 'whorl expansion rate': 169, 'optimal water filling': 170, 'oily water flux': 171, 'retinyl ester hydrolase': 172, 'Recursively Expanded Heawood': 173, 'renewable energy harvesting': 174, 'right end hairpin': 175, 'aldo keto reductase': 176, 'Auroral kilometric radiation': 177, 'Schottky barrier height': 178, 'Single Breath Hold': 179, 'spacer blocking hairpin': 180, 'layer by layer': 181, 'low blood level': 
182, 'Lawrence Berkeley Laboratory': 183, 'loop bridge loop': 184, 'lap bar loop': 185, 'Inelastic electron tunneling spectroscopy': 186, 'International Embryo Transfer Society': 187, 'resonance energy transfer': 188, 'reverse electron transport': 189, 'relative error tolerance': 190, 'regenerative endodontic technique': 191, 'resistance exercise training': 192, 'pulmonary artery systolic pressure': 193, 'probe affinity shape power': 194, 'systemic inflammatory response syndrome': 195, 'Susceptible Infectious Recovered Susceptible': 196, 'Radial basis function': 197, 'renal blood flow': 198, 'Results based financing': 199, 'vacuum circuit breaker': 200, 'ventricular conduction block': 201, 'natural bond orbital': 202, 'non bridging oxygen': 203, 'mode of action': 204, 'mechanisms of action': 205, 'Medical Office Assistant': 206, 'multiple object avoidance': 207, 'low molecular weight': 208, 'leg muscle weight': 209, 'white adipocyte tissue': 210, 'Wingate Anaerobic Test': 211, 'without annotated transcription': 212, 'weeks after treatment': 213, 'acyl CoA oxidase': 214, 'ant colony optimisation': 215, 'absolute contact order': 216, 'apocarotenoid cleavage oxygenase': 217, 'core inlet enthalpy': 218, 'chronic intermittent ethanol': 219, 'clathrin independent endocytic': 220, 'Charge Induction Efficiency': 221, 'carbon isotope excursion': 222, 'feedwater inlet enthalpy': 223, 'feature information extraction': 224, 'fuzzy inference engine': 225, 'FERTILIZATION INDEPENDENT ENDOSPERM': 226, 'fisheries induced evolution': 227, 'integral encounter theory': 228, 'Incremental exercise tests': 229, 'inner ear tissue': 230, 'interval exercise training': 231, 'free energy gap': 232, 'field emission gun': 233, 'fermentation essential genes': 234, 'automated test equipment': 235, 'average treatment effect': 236, 'associative transfer entropy': 237, 'adipose tissue extract': 238, 'Alternate Terminal Exon': 239, 'single pole double throw': 240, 'Single point diamond turning': 241, 'device interface board': 242, 'Depolarization induced bursting': 243, 'voltage controlled oscillator': 244, 'Virgin coconut oil': 245, 'vena cava occlusion': 246, 'data flow graph': 247, 'difference frequency generation': 248, 'World Trade Center': 249, 'Window Trap Collection': 250, 'Working Tax Credit': 251, 'wavelet transform coherence': 252, 'whole tree coppice': 253, 'negative permittivity material': 254, 'normative probability map': 255, 'Normalization Process Model': 256, 'nucleated polymerization models': 257, 'New South Wales': 258, 'negative slow wave': 259, 'natural sea water': 260, 'Base Stock Control System': 261, 'buffer size control scheme': 262, 'directed acyclic graph': 263, 'discharge air grille': 264, 'des acyl ghrelin': 265, 'd after germination': 266, 'Directional Acyclic Graph': 267, 'convective available potential energy': 268, 'caffeic acid phenethyl ester': 269, 'repeated plastic working': 270, 'Rapid Pace Walk': 271, 'red palm weevil': 272, 'Kaya Layton Riviere': 273, 'Kernel Logistic Regression': 274, 'kids lung register': 275, 'lower rank tensor approximation': 276, 'Large Rotor Test Apparatus': 277, 'Bayesian belief network': 278, 'big bang nucleosynthesis': 279, 'broad band noise': 280, 'negative bias temperature instability': 281, 'novel bacterial topoisomerase inhibitor': 282, 'specific growth rate': 283, 'spectral Gamma ray': 284, 'shale gouge ratio': 285, 'Solanaceae Genomics Resource': 286, 'Strawberry Genomic Resources': 287, 'Duffy binding like': 288, 'design base level': 289, 'Hospital 
Anxiety Depression Scale': 290, 'Historical Administrative Data Study': 291, 'direct simulation Monte Carlo': 292, 'discrete sliding mode control': 293, 'data safety monitoring committee': 294, 'laparoscopic radical nephrectomy': 295, 'lateral root number': 296, 'lateral reticular nucleus': 297, 'chronic hepatitis B': 298, 'complete heart block': 299, 'cascaded H bridges': 300, 'Chinese Han Beijing': 301, 'nerve fiber bundle': 302, 'nitrogen fixing bacteria': 303, 'newly formed bone': 304, 'nuclear fraction buffer': 305, 'anti miRNA oligonucleotides': 306, 'Atlantic Multidecadal Oscillation': 307, 'Epstein Barr virus': 308, 'estimated blood volume': 309, 'exchangeable blood volume': 310, 'estimated breeding value': 311, 'normal brain tissue': 312, 'Nitro Blue Tetrazolium': 313, 'Normalized Brodatz Texture': 314, 'false positive fraction': 315, 'Fine particle fraction': 316, 'fractional pump flow': 317, 'forest proration factor': 318, 'family protective factors': 319, 'Internal Carotid Artery Sinus': 320, 'Informed consent aggregate scores': 321, 'recursive feature elimination': 322, 'residual force enhancement': 323, 'normal human astrocytes': 324, 'non harmonic analysis': 325, 'Northern Health Authority': 326, 'non human animals': 327, 'Newtonian flow theory': 328, 'normal fallopian tube': 329, 'nursing facility transition': 330, 'nutrient film techniques': 331, 'naturally fluctuating temperature': 332, 'degrees of freedom': 333, 'dynamic output feedback': 334, 'depth of focus': 335, 'Department of Finance': 336, 'degree of functionalization': 337, 'Morris water maze': 338, 'Molecular weight markers': 339, 'mild warm moxibustion': 340, 'maximum weighted matching': 341, 'Five Lipoxygenase Activating Protein': 342, 'Fluorescence loss after photoactivation': 343, 'gestational trophoblast neoplasia': 344, 'Genome Topology Network': 345, 'right lower lobe': 346, 'ROSA26 like locus': 347, 'relative lumbar length': 348, 'Human neutrophil antigen': 349, 'hits normalized abundance': 350, 'high nucleic acid': 351, 'human nuclei antibody': 352, 'left toe off': 353, 'Lesion Tract Overlap': 354, 'lithium titanium oxide': 355, 'low temperature orthorhombic': 356, 'right toe off': 357, 'residual timing offset': 358, 'rapid thermal oxidation': 359, 'right anterior oblique': 360, 'Radial artery occlusion': 361, 'response amplitude operators': 362, 'Recurrent airway obstruction': 363, 'Rotational acetabular osteotomy': 364, 'left anterior oblique': 365, 'Local Anodic Oxidation': 366, 'Light accelerated orthodontics': 367, 'Lysine Arginine Ornithine': 368, 'L amino oxidases': 369, 'white coat hypertensive': 370, 'Windhoek Central Hospital': 371, 'gastric oxyntic heterotopias': 372, 'gain of heterozygosity': 373, 'good oral hygiene': 374, 'Fuji Intelligent Chromo Endoscopy': 375, 'flexible imaging color enhancement': 376, 'Normalized gain degradation': 377, 'no go decay': 378, 'negative group delay': 379, 'nicotinamide guanine dinucleotide': 380, 'Nencki Genomics Database': 381, 'Graphics Processing Unit': 382, 'graphical processor unit': 383, 'Chemical Looping Hydrogen': 384, 'clot lysis halftime': 385, 'game development framework': 386, 'Gradient Diffusion Filter': 387, 'glaucoma discriminant function': 388, 'growth differentiation factor': 389, 'Human Interface Device': 390, 'high iron diamine': 391, 'high intensity discharge': 392, 'highly infectious disease': 393, 'homeobox interacting domain': 394, 'variable bit rate': 395, 'ventricular brain ratio': 396, 'virus bacterium ratio': 397, 'group of 
pictures': 398, 'global outage probability': 399, 'gradient orientation pyramid': 400, 'generalized oblique projection': 401, 'Goal Oriented Phases': 402, 'Error Vector Magnitude': 403, 'earned value management': 404, 'Open Mobile Alliance': 405, 'optimised moving averaging': 406, 'orthogonal multiple access': 407, 'Orthologous MAtrix algorithm': 408, 'oat meal agar': 409, 'minimum number alive': 410, 'Monitored natural attenuation': 411, 'Mini Nutritional Assessment': 412, 'modified nodal analysis': 413, 'multiple network alignment': 414, 'Molecular Evolutionary Genetics Analysis': 415, 'mutual evaluation genetic algorithm': 416, 'monotonically expressed gene analysis': 417, 'minor groove binder': 418, 'Middle Gobi belt': 419, 'medial geniculate body': 420, 'Marlboro Gold Box': 421, 'Chinese Yam polysaccharide': 422, 'Chrysanthemum yellows phytoplasma': 423, 'Cape York Peninsula': 424, 'Yeast extract sucrose': 425, 'Young Environmental Scientists': 426, 'yeast estrogen screen': 427, 'localized fractional variance': 428, 'lepton flavour violation': 429, 'Neuralgia Inducing Cavitational Osteonecrosis': 430, 'non invasive cardiac output': 431, 'node to node': 432, 'number true negative': 433, 'Readback Modify Writeback': 434, 'recent migrant workers': 435, 'network processing unit': 436, 'National Penghu University': 437, 'Node Processing Unit': 438, 'Northwestern Polytechnical University': 439, 'true random number generator': 440, 'tetracycline resistant N gonorrhoeae': 441, 'Electronic Code Book': 442, 'Eddy current brake': 443, 'European corn borer': 444, 'Extreme Conditions Beamline': 445, 'Particle Image Velocimetry': 446, 'pulse interval variability': 447, 'posterior interventricular vein': 448, 'purified inactivated vaccine': 449, 'particle image velocity': 450, 'Laser Doppler Velocimetry': 451, 'laser Doppler vibrometer': 452, 'late diastolic velocity': 453, 'lactate dehydrogenase‐elevating virus': 454, 'logarithmic difference volume': 455, 'non Hodgkin lymphoma': 456, 'natural hydraulic limes': 457, 'simple sequence length polymorphism': 458, 'Short Spen like Protein': 459, 'quadrature phase shift keying': 460, 'Quaternary Phase Shift Keying': 461, 'Valiant Network Design': 462, 'variable nodes decoder': 463, 'eosinophil derived neurotoxin': 464, 'early diabetic nephropathy': 465, 'emotional day night': 466, 'Electron dense nanoparticles': 467, 'upstream stimulatory factor': 468, 'unstable stacking fault': 469, 'unstimulated salivary flow': 470, 'ultrasound switchable fluorescence': 471, 'serum free culture medium': 472, 'sisal fiber cellulose microcrystal': 473, 'Spatial Fuzzy C Means': 474, 'serum free conditioned media': 475, 'Attitude Heading Reference System': 476, 'and heading reference system': 477, 'Auditory Hallucinations Rating Scale': 478, 'Environmental scanning electron microscopic': 479, 'exploratory structural equation modeling': 480, 'Time Difference of Arrival': 481, 'time delay of arrival': 482, 'Butterworth Van Dyke': 483, 'blood vessel density': 484, 'Bovine Viral Diarrhoea': 485, 'back vertex distance': 486, 'quartz crystal microbalance': 487, 'quality control materials': 488, 'quantum corrected model': 489, 'Relative Densitometric Units': 490, 'relative density units': 491, 'region decision unit': 492, 'relative distance units': 493, 'rotatable DNA unit': 494, 'inner hair cell': 495, 'immuno histo chemistry': 496, 'Integrated HIV Care': 497, 'proton exchange membrane fuel cell': 498, 'Polymer electrolyte membrane fuel cell': 499, 'Intra cerebral hemorrhage': 
500, 'infantile cortical hyperostosis': 501, 'intermediate care hospital': 502, 'international child health': 503, 'Ginzburg Landau equation': 504, 'Ground Level Enhancement': 505, 'gestational lead exposure': 506, 'G lucidum extract': 507, 'guava leaf extracts': 508, 'Simple Solar Photon Thruster': 509, 'Swiss Ski Power Test': 510, 'Spike timing dependent plasticity': 511, 'Synaptic Time Dependent Plasticity': 512, 'fibroblast growth factor': 513, 'fresh gas flow': 514, 'flexor pollicis longus': 515, 'fluorescent protein like': 516, 'federal poverty level': 517, 'pressurized water reactors': 518, 'placental weight ratio': 519, 'Parsa Wildlife Reserve': 520, 'Gas Cooled Fast Reactor': 521, 'Greater Cape Floristic Region': 522, 'Depressurization Vent Shaft': 523, 'Dynamic voltage scaling': 524, 'Digital video stabilization': 525, 'distributed virtual switch': 526, 'look up table': 527, 'lower urinary tract': 528, 'land use types': 529, 'link under test': 530, 'United Microelectronics Corporation': 531, 'Uppsala Monitoring Centre': 532, 'uterine mesometrial compartment': 533, 'Burst Alert Robotic Telescope': 534, 'Bayesian additive regression trees': 535, 'red giant branch': 536, 'red green blue': 537, 'reinforced granular bed': 538, 'robust graph based': 539, 'residual giant bicomponent': 540, 'virtual internal bremsstrahlung': 541, 'Virgin Islands basin': 542, 'Virtual Insect Brain': 543, 'very wide field': 544, 'von Willebrand factor': 545, 'integrated Sachs Wolfe': 546, 'ice shelf water': 547, 'gamma ray bursts': 548, 'Ganzi River Basin': 549, 'genomic regulatory block': 550, 'Tidal dwarf galaxy': 551, 'thymine DNA glycosylase': 552, 'Unified Dark Matter': 553, 'ubiquitous data mining': 554, 'friends of friends': 555, 'flex on flex': 556, 'finding optimal factor': 557, 'forward optic flow': 558, 'Fear of falling': 559, 'Unmanned Air Vehicles': 560, 'unmanned aerial vehicle': 561, 'uninhabited air vehicles': 562, 'Transient Error Reconstruction Algorithm': 563, 'transitional endoplasmic reticulum ATPase': 564, 'tibial external rotation angle': 565, 'hepatitis D virus': 566, 'HIV derived vector': 567, 'Protein Data Bank': 568, 'potato dextrose broth': 569, 'preset delay broadcast': 570, 'Packet Delay Budget': 571, 'periciliary diffusion barrier': 572, 'Autoclaved Clayey Cellular Concrete': 573, 'anterior continuous curvilinear capsulorhexis': 574, 'Falling Weight Deflectometer': 575, 'frictional wage dispersion': 576, 'fuzzy logic ant colony system': 577, 'femtosecond laser assisted cataract surgery': 578, 'dynamical mean field approximation': 579, 'dual multiple factor analysis': 580, 'Direct membrane feeding assay': 581, 'charge density wave': 582, 'cell dry weight': 583, 'construction demolition waste': 584, 'conventional delivery ward': 585, 'Circumpolar Deep Water': 586, 'self consistent Born Approximation': 587, 'sugar cane bagasse ash': 588, 'surveillance culture based algorithm': 589, 'activation induced cell death': 590, 'acute irritant contact dermatitis': 591, 'Visual Implant Elastomer': 592, 'virtual intravascular endoscopy': 593, 'ventral intermediate entorhinal': 594, 'anaplastic lymphoma kinase': 595, 'Activin like kinase': 596, 'active learning Kriging': 597, 'Gauged Linear Sigma Model': 598, 'generalized linear spatial model': 599, 'large hadron collider': 600, 'large hyaline cells': 601, 'light harvesting complex': 602, 'live hard coral': 603, 'liver hepatocellular carcinoma': 604, 'Ozone Monitoring Instrument': 605, 'Orthodontic mini implants®': 606, 'Outlying 
Marginality Index': 607, 'outlying mean index': 608, 'Open Markets Index': 609, 'urban climate zone': 610, 'upper convective zone': 611, 'urban heat island': 612, 'Urban Health Initiative': 613, 'liquid water path': 614, 'leaf water potential': 615, 'longest wet period': 616, 'Last Week Period': 617, 'cloud optical thickness': 618, 'conventional oxygen therapy': 619, 'center of tree': 620, 'Cost of transport': 621, 'crown of thorns': 622, 'mean sea level pressure': 623, 'multi step linear prediction': 624, 'cloud condensation nuclei': 625, 'content centric networking': 626, 'central coordination node': 627, 'Cereal cyst nematode': 628, 'cell carrying nanoparticles': 629, 'New York City': 630, 'non yellow coloring': 631, 'green vegetation fraction': 632, 'gradient vector flow': 633, 'Goldmann visual field': 634, 'Genome Variation Format': 635, 'Single Column Atmosphere Model': 636, 'substituted cysteine accessibility method': 637, 'Scanning Cysteine Accessibility Method': 638, 'primary organic aerosol': 639, 'polarization orientation angle': 640, 'partial order alignment': 641, 'present on admission': 642, 'pre optic area': 643, 'International Prostate Symptom Score': 644, 'inferior petrosal sinus sampling': 645, 'integrated passive safety system': 646, 'International Prognostic Scoring System': 647, 'Half wave plate': 648, 'harvested wood products': 649, 'hand written prescribing': 650, 'non polarized beam splitter': 651, 'neural plate border specifier': 652, 'dynamic nuclear polarization': 653, 'Downstream Non Produced': 654, 'Doñana National Park': 655, 'small angle neutron scattering': 656, 'suffix array neighborhood search': 657, 'Ship Arrival Notification System': 658, 'iron deficiency anemia': 659, 'incremental dynamic analysis': 660, 'Inner Dynein Arms': 661, 'information dependent acquisition': 662, 'joint transform correlator': 663, 'Joule Thomson coefficient': 664, 'jump to contact': 665, 'Rigaku Innovative Technologies Europe': 666, 'recombination induced tag exchange': 667, 'Silicon Pore Optics': 668, 'sampling period offset': 669, 'State Planning Organisation': 670, 'half energy width': 671, 'health extension worker': 672, 'hydrogen enriched water': 673, 'fully differential cross section': 674, 'Follicular dendritic cell sarcoma': 675, 'Random Iteration Algorithm': 676, 'Random Initial Assignments': 677, 'Relative Isotope Abundances': 678, 'Ratio Immunity Assay': 679, 'Regulatory Impact Analysis': 680, 'phase zone plate': 681, 'pregnancy zone protein': 682, 'Bovine Oligonucleotide Microarray': 683, 'Base Object Model': 684, 'Bernstein operational matrix': 685, 'Bureau of Meteorology': 686, 'mucosa associated lymphoid tissue': 687, 'Mind and Liver Test': 688, 'draining lymph nodes': 689, 'Diamond like nanocomposite': 690, 'Deep learning network': 691, 'drug loaded nanocarrier': 692, 'virus like particles': 693, 'ventricular late potentials': 694, 'Very Long Period': 695, 'Bacillus Calmette Guérin': 696, 'Borel Cayley graph': 697, 'Boston Consulting Group': 698, 'Baja California Gap': 699, 'natural killer T': 700, 'normal kidney tissues': 701, 'blood nerve barrier': 702, 'Blue Nile Basin': 703, 'receptor tyrosine kinases': 704, 'real time kinematic': 705, 'normal glucose tolerance': 706, 'Nominal Group Technique': 707, 'North Gangdese Thrust': 708, 'non glandular trichomes': 709, 'sham operated heatstroke': 710, 'sequency ordered Hadamard': 711, 'second order head': 712, 'self organized hydrodynamic': 713, 'whole body heating': 714, 'weighted Benjamini Hochberg': 715, 'Wisconsin 
Buckwheat honey': 716, 'tert butyl hydroperoxide': 717, 'tumor bearing host': 718, 'Total brain homogenates': 719, 'upstream control region': 720, 'underlay cognitive radios': 721, 'usable capacity ratio': 722, 'uncoupling control ratio': 723, 'fatty acid methyl esters': 724, 'Familial adult myoclonic epilepsy': 725, 'Fatty acid modifying enzyme': 726, 'Lettuce mosaic virus': 727, 'Lower Mississippi Valley': 728, 'left marginal vein': 729, 'extensor hallucis longus': 730, 'Environmental Health Literacy': 731, 'extended haplotype length': 732, 'whole body irradiation': 733, 'water band index': 734, 'Wiberg bond indices': 735, 'flexor digitorum brevis': 736, 'first diagonal branch': 737, 'mouse left ventricle': 738, 'murine leukemia virus': 739, 'Moloney Leukemia Virus': 740, 'utrophin glycoprotein complex': 741, 'user generated content': 742, 'universal genetic code': 743, 'Human embryonic kidney': 744, 'human epidermal keratinocyte': 745, 'human epithelial kidney': 746, 'metal ion dependent adhesion site': 747, 'Multi Instrument Data Analysis System': 748, 'visual evoked potentials': 749, 'Variant Effect Predictor': 750, 'Vocational Enablement Protocol': 751, 'substantia nigra reticulate': 752, 'signal noise ratio': 753, 'suspended nanochannel resonator': 754, 'Shelter Neuter Return': 755, 'Positive End Expiratory Pressure': 756, 'promoter enhancer enhancer promoter': 757, 'peak end expiratory pressure': 758, 'protein kinase A': 759, 'phosphate kinase A': 760, 'pancreatic ductal hyperpressure': 761, 'pixel difference histogram': 762, 'pituitary dependent hyperadrenocorticism': 763, 'Possible Duplication History': 764, 'peak dip hump': 765, 'repeat unit domain': 766, 'recovery upon dilution': 767, 'Relative unsigned difference': 768, 'radio ulnaire distale': 769, 'Cardiff Acne Disability Index': 770, 'Chronic Allograft Damage Index': 771, 'composite animal density index': 772, 'left main coronary artery': 773, 'Last metazoan common ancestor': 774, 'Magnetic resonance coronary angiography': 775, 'most recent common ancestor': 776, 'magnetic resonance contrast agent': 777, 'single incision laparoscopic surgery': 778, 'single item literacy screener': 779, 'Agricultural Quarantine Inspection': 780, 'air quality index': 781, 'Cooperative Article Bee Colony': 782, 'chaotic artificial bee colony': 783, 'water insoluble solids': 784, 'Weighted Inherited Semantics': 785, 'wetland indicator status': 786, 'Western Interior Seaway': 787, 'ground based augmented system': 788, 'Ground Based Augmentation System': 789, 'uniform linear array': 790, 'ultra low attachment': 791, 'upper leaf angle': 792, 'ultra low adherence': 793, 'generalized cross validation': 794, 'Great cardiac vein': 795, 'gross calorific value': 796, 'GAS containing vacuole': 797, 'probabilistic independent component analysis': 798, 'posterior inferior cerebellar artery': 799, 'planar inverted cone antenna': 800, 'peptide ion current area': 801, 'Molecular Optical Simulation Environment': 802, 'Mouse Ovarian Surface Epithelium': 803, 'murine ovarian surface epithelium': 804, 'Ground Glass Opacity': 805, 'ground glass opacification': 806, 'Unfolded Protein Response': 807, 'unsaturated polyester resin': 808, 'left gastric vein': 809, 'linkage group V': 810, 'Stereotactic Body Radiation Therapy': 811, 'Systems Biology Research Tool': 812, 'Specific leaf weight': 813, 'super large working': 814, 'Core internal sett temperature': 815, 'Commandless input shaping technique': 816, 'luciferase light units': 817, 'Loma Linda University': 818, 
'voltage dependent anion channel': 819, 'Voltage Dependent Anion Carrier': 820, 'global warming potential': 821, 'gross world product': 822, 'genome wide prediction': 823, 'grape seed polyphenolic extract': 824, 'grape seed proanthocyanidin extract': 825, 'Src kinase family': 826, 'sigmoidal kernel function': 827, 'wireless sensor network': 828, 'White sponge nevus': 829, 'Best Bin First': 830, 'banana bark fiber': 831, 'Benedict Bordner Filter': 832, 'buildup biofilm formation': 833, 'Digital Video Broadcasting': 834, 'diffuse vascular bundle': 835, 'Advanced Television System Committee': 836, 'Adult testis somatic cells': 837, 'below link capacity': 838, 'Bone lining cells': 839, 'blood lactate concentration': 840, 'basal like carcinomas': 841, 'B lymphocyte chemoattractant': 842, 'Real Time Streaming Protocol': 843, 'Real Time Signal Processor': 844, 'cone beam computed tomography': 845, 'Cone Beam Computer Tomographies': 846, 'Logical Story Units': 847, 'linear spectral unmixing': 848, 'local self uniformity': 849, 'Large Stock Units': 850, 'Normalized Sampling Rate': 851, 'normal sinus rhythm': 852, 'Not spontaneously recovered': 853, 'Normal specific retention': 854, 'network sarcoplasmic reticulum': 855, 'Hierarchical Token Bucket': 856, 'hexagonal tungsten bronze': 857, 'hard tissue barrier': 858, 'high trait bulk': 859, 'Obstructive sleep apnea syndrome': 860, 'Observational Skills Assessment Score': 861, 'gross tumor volume': 862, 'Gross target volume': 863, 'Gaussian total variation': 864, 'Solvated Metal Atom Dispersion': 865, 'Sma Mothers Against Decapentaplegic': 866, 'total harmonic distortion': 867, 'total horizontal derivative': 868, 'TNF homology domain': 869, 'Diameter Breast High': 870, 'differential barrier height': 871, 'dopamine beta hydroxylase': 872, 'Growth Retardation Factor': 873, 'ground reaction forces': 874, 'Gaussian random field': 875, 'global reference frame': 876, 'GROWTH REGULATING FACTOR': 877, 'classification and regression trees': 878, 'cocaine amphetamine regulated transcript': 879, 'former Soviet Union': 880, 'functional spinal units': 881, 'Frequency Scaling Unit': 882, 'fluorescence standard units': 883, 'geostationary earth orbit': 884, 'Gene Expression Omnibus': 885, 'Green Energy Office': 886, 'particulate organic nitrogen': 887, 'passive optical network': 888, 'protein overlap network': 889, 'pretectal olivary nuclei': 890, 'weighted mean difference': 891, 'White Matter Density': 892, 'whole mesh deformation': 893, 'Weighted Mean Deviation': 894, 'wood mass density': 895, 'Copper indium gallium selenide': 896, 'Cambridge Infant Growth Study': 897, 'Venezuelan equine encephalitis': 898, 'virtual electromagnetic environment': 899, 'very early endosome': 900, 'parent/guardian most knowledgeable': 901, 'Pyramid Match Kernel': 902, 'primary mouse keratinocytes': 903, 'phosphor mevalonate kinase': 904, 'Panton Valentine Leukocidin': 905, 'plasma viral load': 906, 'Portal vein ligation': 907, 'congenital diaphragmatic hernia': 908, 'conical diffused holes': 909, 'congenitally dislocated hip': 910, 'umbilical artery catheters': 911, 'Urinary albumin concentration': 912, 'Upper arm circumference': 913, 'uronic acid content': 914, 'glucagon like peptide': 915, 'G9a like protein': 916, 'good laboratory practice': 917, 'Ganoderma lucidum polysaccharide': 918, 'Gracilaria Lemaneiformis polysaccharide': 919, 'configurable logic block': 920, 'Centre Léon Bérard': 921, 'maximum utilization table': 922, 'meter under test': 923, 'disjunctive normal form': 924, 
'delayed negative feedback': 925, 'Teager Huang transform': 926, 'Temporal Height Tracking': 927, 'resonant ultrasound spectroscopy': 928, 'Radiographic Union Score': 929, 'Dedicated Short Range Communications': 930, 'Drought Sensitive Root Control': 931, 'medial octavolateralis nucleus': 932, 'mineral oil nanoemulsion': 933, 'mouse hepatitis virus': 934, 'middle hepatic vein': 935, 'Influenza Like Illness': 936, 'isolated limb infusion': 937, 'in line inspection': 938, 'inter lick interval': 939, 'isolated lacunar infarct': 940, 'Wired Equivalent Privacy': 941, 'Wild Edible Plants': 942, 'Wrist exoskeleton prototype': 943, 'Analysis Filter Bank': 944, 'acid fast bacilli': 945, 'after full bloom': 946, 'Robust model predictive control': 947, 'Recursive Model Predictive Control': 948, 'Single Instruction Single Data': 949, 'simple increment simple decrement': 950, 'Single Instruction Multiple Data': 951, 'sepsis induced myocardial dysfunction': 952, 'generalized threshold gate': 953, 'gray to gray': 954, 'Global Trace Graph': 955, 'Giemsa Trypsin Giemsa': 956, 'Simple Network Management Protocol': 957, 'sensory neuron membrane protein': 958, 'decentralized alternating optimization': 959, 'destination advertisement object': 960, 'dynamic adjustment operator': 961, 'dorsal accessory olive': 962, 'Poisson Regression Multiple Model': 963, 'Protoplast regeneration minimal medium': 964, 'quasielastic light scattering': 965, 'quantitative linkage score': 966, 'Resting Energy Expenditure': 967, 'rare earth element': 968, 'relative estimation errors': 969, 'total parenteral nutrition': 970, 'time petri nets': 971, 'Task Positive Network': 972, 'nucleotide excision repair': 973, 'named entity recognition': 974, 'nuclear envelope reformation': 975, 'National Exposure Report': 976, 'Normalized expression rate': 977, 'Normal Hydrogen Electrode': 978, 'nuclease hypersensitive element': 979, 'Na+ H+ exchange': 980, 'newly hatched embryos': 981, 'nasal human epithelial': 982, 'styrene ethylene butadiene styrene': 983, 'surface energy balance system': 984, 'duplex forming oligomer': 985, 'double fond osteophyte': 986, 'Placental site trophoblastic tumor': 987, 'public security triangular theory': 988, 'dose volume histogram': 989, 'Duck viral hepatitis': 990, 'Clinical Trial Management Systems': 991, 'Camera Trap Metadata Standard': 992, 'fallopian tube epithelium': 993, 'full time equivalent': 994, 'foetal type enterocytes': 995, 'FP treatment effect': 996, 'forward transmission efficiency': 997, 'X ray powder diffractometer': 998, 'X Ray Powder Diffraction': 999, 'indium molybdenum oxide': 1000, 'International Maritime Organization': 1001, 'glucose tolerance test': 1002, 'Gradient time trail': 1003, 'gel trapping technique': 1004, 'GI transit times': 1005, 'High Resolution Scanning Electron Microscope': 1006, 'high resolution secondary electron microscopy': 1007, 'Serous tubal intraepithelial carcinoma': 1008, 'spatio temporal image correlation': 1009, 'scrub typhus infection criteria': 1010, 'Strategies to Improve Colonoscopy': 1011, 'World Health Survey': 1012, 'Wolf Hirschhorn Syndrome': 1013, 'Rotary Wall Vessel': 1014, 'rotating wall vessels': 1015, 'best match unit': 1016, 'Branch Metric Unit': 1017, 'basic multicellular units': 1018, 'ultimate failure stress': 1019, 'ultra fine sand': 1020, 'Quantitative Computed Tomography': 1021, 'quasi classical trajectory': 1022, 'Exhaust Gas Recirculation': 1023, 'early growth response': 1024, 'ultra high frequency': 1025, 'United Hospital Fund': 1026, 'Ultra 
Low Power': 1027, 'ulcer like projection': 1028, 'Ulva lactuca polysaccharide': 1029, 'unilateral locking plate': 1030, 'expected False Discovery Rate': 1031, 'empirical false discovery rate': 1032, 'frontal eye field': 1033, 'forced expiratory flow': 1034, 'fluorescence enhancement factor': 1035, 'human peritoneal mesothelial cells': 1036, 'Hydroxy propyl methyl cellulose': 1037, 'Human Pan Microbial Communities': 1038, 'probabilistic neural network': 1039, 'pairwise nearest neighbour': 1040, 'Pixel Nearest Neighbor': 1041, 'partial nitrate nutrition': 1042, 'fast wavelet transform': 1043, 'Fractional Wavelet Transform': 1044, 'floating wind turbine': 1045, 'free water transport': 1046, 'Far West Technology': 1047, 'differentially expressed gene': 1048, 'diesel engine generator': 1049, 'Drug Effect Graph': 1050, 'Differential Expression Gene': 1051, 'uterine artery embolization': 1052, 'Ukrainian Antarctic expeditions': 1053, 'United Arab Emirates': 1054, 'Urinary Albumin Excretion': 1055, 'ultrasound assisted extraction': 1056, 'DNA Affinity Precipitation Assay': 1057, 'DNA affinity purification assay': 1058, 'days after peak anthesis': 1059, 'Hybrid Electric Vehicle': 1060, 'high endothelial venules': 1061, 'Hepatitis E virus': 1062, 'hemispheric emotional valence': 1063, 'unstable incremental coaxiality': 1064, 'Urinary Iodine Concentration': 1065, 'unknown identity cell': 1066, 'Upper iris coverage': 1067, 'supramaximal repetitive nerve stimulation': 1068, 'steroid resistant nephrotic syndrome': 1069, 'wild type mice': 1070, 'work transformation matrix': 1071, 'accessory olfactory bulb': 1072, 'ammonia oxidizing bacteria': 1073, 'Artificial oil bodies': 1074, 'All Our Babies': 1075, 'advanced trail making tests': 1076, 'Agrobacterium tumefaciens mediated transformation': 1077, 'pregnane X receptor': 1078, 'pelvic X rays': 1079, 'Comparative genomic hybridization': 1080, 'Cathay General Hospital': 1081, 'computer generated holography': 1082, 'communication group haplotypes': 1083, 'nuclear respiratory factors': 1084, 'Nanmangalam Reserve Forest': 1085, 'necrosis related factor': 1086, 'no risk factors': 1087, 'Nutrient rich food': 1088, 'fine needle biopsy': 1089, 'femoral nerve blockade': 1090, 'fermented nutrient broth': 1091, 'acid sensing ion channels': 1092, 'application specific integrated circuit': 1093, 'Group Distribution Header': 1094, 'group Diffie Hellman': 1095, 'growing degree hours': 1096, 'Emergency Core Cooling System': 1097, 'electrolytic chromium coated steel': 1098, 'earliest color coded signal': 1099, 'loss of coolant accident': 1100, 'late onset cerebellar ataxia': 1101, 'Passive Containment Cooling System': 1102, 'photon cross correlation spectroscopy': 1103, 'Patient Communication Confidence Scale': 1104, 'palliative care consultation service': 1105, 'Emergency Heat Removal System': 1106, 'electronic health record systems': 1107, 'European Hair Research Society': 1108, 'sero submucosal interrupted sutures': 1109, 'single source information system': 1110, 'Social Skills Improvement System': 1111, 'horizontal mattress interrupted sutures': 1112, 'Health Management Information System': 1113, 'hospital management information systems': 1114, 'stochastic boundary element method': 1115, 'Small breast epithelial mucin': 1116, 'serial blockface electron microscopy': 1117, 'sentinel lymph node': 1118, 'solid lipid nanoparticles': 1119, 'specific leaf N': 1120, 'bending beam rheometer': 1121, 'Bismarck brown R': 1122, 'Brilliant Blue R': 1123, 'base binding region': 1124, 
'Pressure Aging Vessel': 1125, 'pulse amplitude variability': 1126, 'potential added value': 1127, 'Proportional Assisted Ventilation': 1128, 'bacterial foraging optimization': 1129, 'basic formal ontology': 1130, 'Black Forest Observatory': 1131, 'body figure object': 1132, 'Rule based reasoning': 1133, 'Risk benefit ratio': 1134, 'RING between RING': 1135, 'receptor binding region': 1136, 'ordinary gradient learning': 1137, 'Optimal guidance law': 1138, 'frequency response function': 1139, 'flavonoid rich fraction': 1140, 'frequency reuse factor': 1141, 'firing rate function': 1142, 'fundamental resonance frequency': 1143, 'basic oxygen furnace': 1144, 'bacterial OB fold': 1145, 'backward optic flow': 1146, 'Biomass Objective Function': 1147, 'right of way': 1148, 'running observation window': 1149, 'light weight deflectometer': 1150, 'Lateral Wall Decompression': 1151, 'logging while drilling': 1152, 'large woody debris': 1153, 'coefficient of thermal expansion': 1154, 'Compression Of The Eyelid': 1155, 'Video On Demand': 1156, 'voice onset detector': 1157, 'veno occlusive disease': 1158, 'total petroleum hydrocarbon': 1159, 'trees per hectare': 1160, 'total pumping heads': 1161, 'Community level physiological profiling': 1162, 'cytosolic lipid protein particles': 1163, 'vitamin K antagonist': 1164, 'vertebral kyphosis angle': 1165, 'Boron doped diamond': 1166, 'Binary decision diagram': 1167, 'balancing domain decomposition': 1168, 'bovine digital dermatitis': 1169, 'B domain deleted': 1170, 'graphite supported wires': 1171, 'Great Spotted Woodpecker': 1172, 'glottal source wave': 1173, 'Geocentric Solar Wind': 1174, 'K nearest neighbors': 1175, 'KLMS Neural Network': 1176, 'mixed layer height': 1177, 'micronuclear linker histone': 1178, 'Miniature Long Haired': 1179, 'turbulent flow depth': 1180, 'time frequency distribution': 1181, 'Transcription Factor Database': 1182, 'total fixation duration': 1183, 'Alfred Wegener Institute': 1184, 'average work incapacity': 1185, 'total ozone monitoring spectrometer': 1186, 'Total Ozone Mapping Spectrometer': 1187, 'above ground level': 1188, 'Adaptive Group Lasso': 1189, 'third order dispersion': 1190, 'time of death': 1191, 'transit oriented development': 1192, 'target organ damage': 1193, 'Plane Wave Expansion': 1194, 'propolis water extract': 1195, 'pulsed wire evaporation': 1196, 'people with epilepsy': 1197, 'pussy willow extract': 1198, 'reformulation linearization technique': 1199, 'residual layer thickness': 1200, 'total hip arthroplasty': 1201, 'ternary half adder': 1202, 'total hemibranch area': 1203, 'Torsion hysteresis area': 1204, 'disc height index': 1205, 'dizziness handicap inventory': 1206, 'digital histology index': 1207, 'Lumbar lordotic angle': 1208, 'Layer Level Adjustment': 1209, 'local linear approximation': 1210, 'lipid lowering agent': 1211, 'band reject filter': 1212, 'Bidirectional Reflectance Factor': 1213, 'operational transconductance amplifier': 1214, 'over the air': 1215, 'oatmeal tomato agar': 1216, 'online travel agency': 1217, 'oncocytic thyroid adenoma': 1218, 'micro electro mechanical system': 1219, 'medication event monitoring systems': 1220, 'inferior alveolar nerve': 1221, 'Interactive Autism Network': 1222, 'global ejection fraction': 1223, 'GTP exchange factor': 1224, 'Guanine Exchange Factor': 1225, 'growth enhancement factor': 1226, 'non Tf bound iron': 1227, 'non transferrin bound iron': 1228, 'wide dynamic range': 1229, 'wind driven rain': 1230, 'Hannan Crusaid Treatment Centre': 1231, 'Hürthle Cell 
Thyroid Carcinoma': 1232, 'double negative T': 1233, 'Diabetes Numeracy Test': 1234, 'differentiating neural tissue': 1235, 'triply differential cross section': 1236, 'transcranial direct current stimulation': 1237, 'grinding wheel active surface': 1238, 'genome wide association study': 1239, 'Genome Wide Association Scan': 1240, 'peanut oil biodiesel': 1241, 'Perceived outdoor barriers': 1242, 'hemolytic uremic syndrome': 1243, 'harvest use store': 1244, 'hemicellulose utilization system': 1245, 'type three secretion system': 1246, 'traditional taxonomic size spectrum': 1247, 'protein transduction domain': 1248, 'post transplantation day': 1249, 'partial thickness debridement': 1250, 'Proximal tubular dysfunction': 1251, 'putative targeting domain': 1252, 'physiological cross sectional area': 1253, 'physical carrier sensing adaptation': 1254, 'primary care scoring algorithm': 1255, 'infectious bronchitis virus': 1256, 'Influenza B virus': 1257, 'acute retinal necrosis': 1258, 'adventitious root number': 1259, 'Adipogenic Regulation Network': 1260, 'cyclic nucleotide gated': 1261, 'Community Network Game': 1262, 'copy number gain': 1263, 'compressed natural gas': 1264, 'basolateral inward rectifying channel': 1265, 'baculoviral IAP repeat containing': 1266, 'Boston Image Reading Center': 1267, 'dry cell weight': 1268, 'dynamic contention window': 1269, 'differential coupling wheelset': 1270, 'dental crown width': 1271, 'Graft versus Leukemia': 1272, 'Glidescope video laryngoscope': 1273, 'dialyzed fetal bovine serum': 1274, 'domain family binding site': 1275, 'stable coronary artery disease': 1276, 'smoothly clipped absolute deviation': 1277, 'Spontaneous coronary artery dissection': 1278, 'perfusion weighted imaging': 1279, 'Personal Wellbeing Index': 1280, 'effector triggered immunity': 1281, 'equivalent temperature index': 1282, 'brake specific fuel consumption': 1283, 'Bu Shen Fang Chuan': 1284, 'Green fluorescence intensity': 1285, 'good fit index': 1286, 'Gabor features images': 1287, 'Groningen Frailty Indicator': 1288, 'Geomorphic Flood Index': 1289, 'Dubin Johnson syndrome': 1290, 'degenerative joint score': 1291, 'Dangui Jakyak San': 1292, 'DeMeester Johnson score': 1293, 'Hepatitis A virus': 1294, 'Histidine Alanine Valine': 1295, 'hand arm vibrations': 1296, 'indole butyric acid': 1297, 'interscapular brown adipose': 1298, 'Izu Bonin arc': 1299, 'iron based adsorbent': 1300, 'immortalized brown adipocytes': 1301, 'biological variation analysis': 1302, 'Bee venom acupuncture': 1303, 'body vertical acceleration': 1304, 'boundary vicinity algorithm': 1305, 'atrial natriuretic factor': 1306, 'abnormal nuclei fraction': 1307, 'Apalachicola National Forest': 1308, 'myeloid derived suppressor cell': 1309, 'Muscle Derived Stem Cells': 1310, 'modulated differential scanning calorimetry': 1311, 'brain heart infusion': 1312, 'Breath Hold Index': 1313, 'Biological Homogeneity Index': 1314, 'body height index': 1315, 'weighted linear regression': 1316, 'weak label ratios': 1317, 'water liquid ratio': 1318, 'water loss rate': 1319, 'Iranian Lizard Leishmania': 1320, 'Institut Laue Langevin': 1321, 'Intra abdominal hypertension': 1322, 'ICU acquired hypernatremia': 1323, 'Electrical Impedance Tomography': 1324, 'electromagnetically induced transparency': 1325, 'interstitial fluid pressure': 1326, 'Inflammatory Fibroid Polyp': 1327, 'isotropic fixed point': 1328, 'infrared fluorescing protein': 1329, 'interstitial fluid velocity': 1330, 'Infectious flacherie virus': 1331, 'improved Fisher 
vector': 1332, 'Whole body vibration': 1333, 'water bottom vibrometer': 1334, 'whole brain volume': 1335, 'whole blood viscosity': 1336, 'impaired fasting glucose': 1337, 'inferior frontal gyrus': 1338, 'ideal Fermi gas': 1339, 'Internet focus groups': 1340, 'internal granular layer': 1341, 'Inner Granular Layer': 1342, 'intrinsic gene list': 1343, 'inner glomerular layer': 1344, 'Molar Stabilizing Power Arm': 1345, 'morphological spatial pattern analysis': 1346, 'left ventricular hypertrophy': 1347, 'linear vernier hybrid': 1348, 'Enhanced Self Organising Map': 1349, 'emergent self organizing maps': 1350, 'Bayesian linear discriminant analysis': 1351, 'between landmark distance analysis': 1352, 'white light endoscopy': 1353, 'wide local excision': 1354, 'Eigenvector Weighting Function': 1355, 'effective weighted factor': 1356, 'Eulerian Wall Film': 1357, 'edema water fraction': 1358, 'mean value method': 1359, 'mean variance measure': 1360, 'medial vibratory mass': 1361, 'Malaria Vaccine Model': 1362, 'ischemic heart disease': 1363, 'in hospital days': 1364, 'Health Adjusted Life Expectancy': 1365, 'high altitude long endurance': 1366, 'Canadian Community Health Survey': 1367, 'Congenital central hypoventilation syndrome': 1368, 'Changhua Christian Healthcare System': 1369, 'Copenhagen City Heart Study': 1370, 'Canadian Chronic Disease Surveillance System': 1371, 'computerised clinical decision support system': 1372, 'Arrhythmogenic Right Ventricular Cardiomyopathy': 1373, 'adult rat ventricular cardiomyocytes': 1374, 'very late antigen': 1375, 'Very Large Array': 1376, 'vertical long axis': 1377, 'Veterinary Laboratories Agency': 1378, 'great saphenous vein': 1379, 'group summary vector': 1380, 'Genome Synteny Viewer': 1381, 'Google Street View': 1382, 'Genome Structural Variation': 1383, 'Tucker Lewis Index': 1384, 'Total lymphoid irradiation': 1385, 'temporal lobe injury': 1386, 'integrated absolute error': 1387, 'interfacial area equation': 1388, 'Influenza associated encephalopathy/encephalitis': 1389, 'olive mill wastewater': 1390, 'observed molecular weights': 1391, 'escape latency time': 1392, 'ectopic lymphoid tissue': 1393, 'early life temperature': 1394, 'electrolyte leakage test': 1395, 'Gallic Acid Equivalents': 1396, 'gallic acid equiv': 1397, 'Dai kenchu to': 1398, 'discrete Kirchhoff triangular': 1399, 'dual kidney transplants': 1400, 'transfer latency time': 1401, 'total lean tissue': 1402, 'Tsogolo la Thanzi': 1403, 'Four Square Step test': 1404, 'functional similarity search tool': 1405, 'interscapular brown adipose tissue': 1406, 'Ileal bile acid transporter': 1407, 'Compound Danshen Dropping Pills': 1408, 'Combined Diet Dialysis Program': 1409, 'Conserved DNA derived polymorphism': 1410, 'enhanced usual care': 1411, 'Existing UAV Chain': 1412, 'exfoliated urothelial cells': 1413, 'estimated unique count': 1414, 'Mean arterial blood pressure': 1415, 'mild acute biliary pancreatitis': 1416, 'UDP galactopyranose mutase': 1417, 'University Gadjah Mada': 1418, 'empty bed contact time': 1419, 'electron beam computed tomography': 1420, 'Tien Hsien Liquid': 1421, 'Trojan horse liposomes': 1422, 'Total Heat Loss': 1423, 'epoxidized methyl oleate': 1424, 'Extracellular Matrix Organization': 1425, 'epigenetically modified organisms': 1426, 'flame atomic absorption spectrometry': 1427, 'Flameless Atomic Absorption Spectroscopy': 1428, 'genome wide association': 1429, 'General Work Activity': 1430, 'recombination activating gene': 1431, 'return air grille': 1432, 'reduced 
alignment graph': 1433, 'regeneration associated genes': 1434, 'Regional Adjacency Graph': 1435, 'Genomic Run On': 1436, 'growth regulated oncogene': 1437, 'ginger root oil': 1438, 'genomically recoded organism': 1439, 'Gene Regulation Ontology': 1440, 'elective gastrointestinal endoscopy': 1441, 'early gadolinium enhancement': 1442, 'upper esophageal sphincter': 1443, 'urban environmental stress': 1444, 'undifferentiated endometrial sarcoma': 1445, 'transjugular intrahepatic portosystemic shunt': 1446, 'thermally induced phase separation': 1447, 'Multi University Research Initiative': 1448, 'motor unit relative index': 1449, 'multiple source detection strategy': 1450, 'Material Safety Data Sheet': 1451, 'Multiple Sclerosis Documentation System': 1452, 'motion sensitized dual stack': 1453, 'compounded conical Radon transform': 1454, 'chill coma recovery time': 1455, 'concurrent chemo radiation therapy': 1456, 'Default Mode Network': 1457, 'Distributed Microphone Network': 1458, 'dorsal motor nucleus': 1459, 'quadratic discriminant analysis': 1460, 'Quantitative descriptive analysis': 1461, 'forward angle light scatter': 1462, 'familial amyotrophic lateral sclerosis': 1463, 'outer mitochondrial membrane': 1464, 'oral minimal model': 1465, 'observatory monthly mean': 1466, 'Open Mutation Miner': 1467, 'Oral mucosal melanoma': 1468, 'Turbine Engine Simulator Model': 1469, 'Triple exponential smoothing model': 1470, 'gingival crevicular fluid': 1471, 'Gaussian curve fit': 1472, 'Global Coherence Field': 1473, 'Grass carp fins': 1474, 'giant cell fibroblastoma': 1475, 'expected transmission energy': 1476, 'End to end': 1477, 'electron transfer enthalpy': 1478, 'high frequency imagery': 1479, 'High flow infiltration': 1480, 'Head Fire Intensity': 1481, 'household food insecurity': 1482, 'Beijing Tianjin Tangshan': 1483, 'Bayesian t test': 1484, 'bone transmission time': 1485, 'bladder tumor tissue': 1486, 'June July August': 1487, 'Jun Jul Aug': 1488, 'interplanetary electric field': 1489, 'Image Enhancement Factor': 1490, 'immune effector function': 1491, 'iso electric focusing': 1492, 'Guanxian Anxian fault': 1493, 'Global Assessment Functioning': 1494, 'Gene Association File': 1495, 'square wave voltammetry': 1496, 'shear wave velocity': 1497, 'single working vacation': 1498, 'inter orbital width': 1499, 'Isle of Wight': 1500, 'tip cross sectional perimeter': 1501, 'truncated corner square patches': 1502, 'open circuit voltage': 1503, 'outpatient clinic visits': 1504, 'Object Central Voxel': 1505, 'outer cross validation': 1506, 'oral cholera vaccine': 1507, 'last glacial maximum': 1508, 'L0 gradient minimization': 1509, 'liquid growth medium': 1510, 'position weight matrix score': 1511, 'planar wide mesh scanning': 1512, 'Pregelatinized waxy maize starch': 1513, 'volume of interest': 1514, 'variance of information': 1515, 'venous oxygenation index': 1516, 'Value of Information': 1517, 'bilayer lipid membrane': 1518, 'Beam Lateral Motion': 1519, 'Biotic Ligand Model': 1520, 'black leaf monkey': 1521, 'finite diffusion element': 1522, 'Fagopyrum dibotrys extract': 1523, 'Frequency domain equalization': 1524, 'Myocardial Blush Grade': 1525, 'multiple breakpoint graph': 1526, 'mean blood glucose': 1527, 'model based geostatistical': 1528, 'mouse basal ganglia': 1529, 'transhepatic arterial chemo embolisation': 1530, 'TNF alpha converting enzyme': 1531, 'Collagen Antibody Induced Arthritis': 1532, 'CII antibody induced arthritis': 1533, 'computer assisted image analysis': 1534, 'Serum Glutamine 
Pyruvate Transaminase': 1535, 'serum glutamic pyruvic transaminase': 1536, 'renin angiotensin system blockade': 1537, 'right anterior subdivision block': 1538, 'Francisella like endosymbionts': 1539, 'frontal lobe epilepsy': 1540, 'inorganic nanoparticle impregnation': 1541, 'introduction naturalization invasion': 1542, 'Calcific uremic arteriolopathy': 1543, 'Cow urine ark': 1544, 'sequential probability ratio test': 1545, 'shoulder proprioceptive rehabilitation tool': 1546, 'satellite based augmentation system': 1547, 'Stanford Brief Activity Survey': 1548, 'empirical orthogonal function': 1549, 'end of fall': 1550, 'electro osmotic flow': 1551, 'ghrelin o acyl Transferase': 1552, 'Galveston Orientation Amnesia Test': 1553, 'channel enzyme enhanced reaction': 1554, 'Collaborative Enzyme Enhance Reactive': 1555, 'ventral lateral lip': 1556, 'vastus lateralis longus': 1557, 'very low light': 1558, 'Kissinger Akahira Sunose': 1559, 'ketoacyl ACP synthase': 1560, 'Keszthelyi Aproszemu Sarga': 1561, 'keto acid supplementation': 1562, 'Natural fiber reinforced polymeric': 1563, 'near field resonant parasitic': 1564, 'nicking fluorescent reporter probe': 1565, 'oriented strand boards': 1566, 'outer spiral bundle': 1567, 'willingness to pay': 1568, 'water treatment plant': 1569, 'Elastica van Gieson': 1570, 'Eosin Van Gieson': 1571, 'Block Move Rotate': 1572, 'basal metabolic rate': 1573, 'Bayes minimum risk': 1574, 'background mutation rate': 1575, 'Roll Forward Checkpointing Scheme': 1576, 'regional functional correlation strength': 1577, 'Logarithm Approximation Unit': 1578, 'logarithmic arithmetic unit': 1579, 'Late Access Unit': 1580, 'linear arbitrary units': 1581, 'Direct Digital Frequency Synthesizer': 1582, 'Distant disease free survival': 1583, 'high level language': 1584, 'hadronic loop level': 1585, 'hind leg length': 1586, 'arithmetic logical unit': 1587, 'Arbitrary Light Unit': 1588, 'average light units': 1589, 'arbitrary luminescence units': 1590, 'self checking processor core': 1591, 'Southern California Particle Center': 1592, 'effective interface mass': 1593, 'Electrical Impedance Myography': 1594, 'synthetic paraffinic kerosene': 1595, 'simultaneous pancreas kidney': 1596, 'gas to liquid': 1597, 'gross temperature lift': 1598, 'green tea leaf': 1599, 'generalised likelihood ratio': 1600, 'Guided Ligand Replacement': 1601, 'leuco Methylene Blue': 1602, 'left main bronchus': 1603, 'Loose manure biochar': 1604, 'Luria Marine Broth': 1605, 'HIF prolyl hydroxylase': 1606, 'high pressure homogenization': 1607, 'high parent heterosis': 1608, 'Birt Hogg Dubé': 1609, 'Buyang Huanwu Decoction': 1610, 'Banxia houpu decoction': 1611, 'non skin sparing mastectomy': 1612, 'non steady state migration': 1613, 'fruit patch visit': 1614, 'fowl plague viruses': 1615, 'Feline Panleukopenia Virus': 1616, 'flow propagation velocity': 1617, 'Super lateral growth': 1618, 'soda lime glass': 1619, 'Smilax larvata Griseb': 1620, 'Single layer graphene': 1621, 'Suboesophageal Lateral Glia': 1622, 'San Diego State University': 1623, 'Satellite Data Simulator Unit': 1624, 'Influenza A virus': 1625, 'intra abdominal volume': 1626, 'outer zona radiata': 1627, 'obese Zucker rat': 1628, 'inner zona radiata': 1629, 'imprint zone rate': 1630, 'in situ hybridization': 1631, 'isolated systolic hypertension': 1632, 'implantable loop recorder': 1633, 'isolated local recurrence': 1634, 'isometric log ratio': 1635, 'Hepatic Vascular Exclusion': 1636, 'Histone variant exchange': 1637, 'working cell bank': 1638, 'whole 
colon biopsy': 1639, 'Uniform Hazard Spectra': 1640, 'Urban Health Study': 1641, 'unilateral hippocampal sclerosis': 1642, 'leading edge vortex': 1643, 'land expectation value': 1644, 'lower end vertebra': 1645, 'lentiviral empty vector': 1646, 'Partitioned iterated function systems': 1647, 'peptide induced fatal syndrome': 1648, 'tissue factor pathway inhibitor': 1649, 'Tissue Factor Protease Inhibitor': 1650, 'Wigner Ville Distribution': 1651, 'Weighted Voronoi Diagram': 1652, 'Sperm DNA Fragmentation Assay': 1653, 'simple discrete firefly algorithm': 1654, 'exhaust gas temperature': 1655, 'equal gain transmission': 1656, 'endosymbiotic gene transfer': 1657, 'environmental gene tag': 1658, 'flexor carpi ulnaris': 1659, 'first catch urine': 1660, 'Frequent Can Users': 1661, 'early postoperative intraperitoneal chemotherapy': 1662, 'exon primed intron crossing': 1663, 'Enhance Prevention in Couples': 1664, 'Early Pseudomonas Infection Control': 1665, 'adjacent channel power ratio': 1666, 'acute C peptide response': 1667, 'adequate clinical parasitological response': 1668, 'polymorphous low grade adenocarcinoma': 1669, 'Poly L glutamic acid': 1670, 'Poly Lactic glycolic acid': 1671, 'pseudovirion based neutralisation assay': 1672, 'probabilistic biological network alignment': 1673, 'Medium chain fatty acids': 1674, 'medial circumflex femoral artery': 1675, 'Karush Kuhn Tucker': 1676, 'Kramers Kronig transform': 1677, 'lymph node ratio': 1678, 'liquid natural rubber': 1679, 'leaf number ratio': 1680, 'Lin12 Notch repeats': 1681, 'Late Non Responders': 1682, 'lateral geniculate nucleus': 1683, 'low grade neoplasia': 1684, 'old growth forest': 1685, 'opioid growth factor': 1686, 'Operational Gene Families': 1687, 'orthologous gene family': 1688, 'vaso vagal syncope': 1689, 'Vulval vestibulitis syndrome': 1690, 'ventral visual stream': 1691, 'harvested rain water': 1692, 'Heat Reflector Window': 1693, 'hydrogen rich water': 1694, 'municipal tap water': 1695, 'minimization time window': 1696, 'ribosomal S6 kinase': 1697, 'rubber seed kernel': 1698, 'receptor serine/threonine kinase': 1699, 'work in process': 1700, 'WASP Interacting Protein': 1701, 'Workflow Input Ports': 1702, 'cost of electricity': 1703, 'cost of energy': 1704, 'Celastrus orbiculatus extract': 1705, 'electronic program guide': 1706, 'eggs per gram': 1707, 'edge plane graphite': 1708, 'electrical penetration graph': 1709, 'extended phase graph': 1710, 'Intra Pulse Code Modulation': 1711, 'Inverted Phase Contrast Microscope': 1712, 'long acting muscarinic antagonist': 1713, 'Left against medical advice': 1714, 'lung volume reduction surgery': 1715, 'Lake Victoria Region Superflock': 1716, 'Health Maintenance Organization': 1717, 'human milk oligosaccharides': 1718, 'high menhaden oil': 1719, 'In Vitro Fertilization': 1720, 'integrated visual field': 1721, 'in vitro fertilized': 1722, 'Idiopathic ventricular fibrillation': 1723, 'Continuous positive airway pressure': 1724, 'Centrosomal P41 associated protein': 1725, 'negative high voltage': 1726, 'normalised hybridisation value': 1727, 'Wender Utah Rating Scale': 1728, 'Wolfram Unified Rating Scale': 1729, 'lesioned white matter': 1730, 'Legendre wavelet method': 1731, 'injection locked oscillator': 1732, 'International Labor Organization': 1733, 'opioid treatment program': 1734, 'One Time Password': 1735, 'homogeneous charge compression ignition': 1736, 'High chromium cast iron': 1737, 'wildland urban interface': 1738, 'Workflow User Interface': 1739, 'just enough time': 1740, 
'junctional ectopic tachycardia': 1741, 'Joint European Torus': 1742, 'Health Assessment Questionnaire': 1743, 'habitual activity questionnaire': 1744, 'Helping Alliance Questionnaire': 1745, 'relative optical density': 1746, 'Ratio of Distortion': 1747, 'Reduction of diversity': 1748, 'Support Polygon on Surface': 1749, 'solid phase organic synthesis': 1750, 'Gestational diabetes mellitus': 1751, 'group decision making': 1752, 'Generalized Dissimilarity Modeling': 1753, 'global DNA methylation': 1754, 'Goal Directed Mode': 1755, 'maternal physiological hypercholesterolaemia': 1756, 'mackerel protein hydrolysate': 1757, 'Mid parent heterosis': 1758, 'methyl parathion hydrolase': 1759, 'intermittent high glucose': 1760, 'ideal hadron gas': 1761, 'ketosis prone diabetes': 1762, 'K point deviation': 1763, 'Kofendrerd Personality Disorder': 1764, 'Block Shift Network': 1765, 'bus stop network': 1766, 'Body Sensor Network': 1767, 'Hierarchical Cubic Network': 1768, 'Hyperpolarization‐activated cyclic nucleotide‐gated': 1769, 'Hz containing neutrophils': 1770, 'Root Folded Heawood': 1771, 'Royal Free Hospital': 1772, 'Rectangular Twisted Torus Meshes': 1773, 'regression to the mean': 1774, 'frequency tuning range': 1775, 'ferridoxin thioredoxin reductase': 1776, 'free to roll': 1777, 'phase quantization noise': 1778, 'probabilistic quotient normalization': 1779, 'Constrained Sparse Spike Inversion': 1780, 'chloroplast specific saturating irradiance': 1781, 'cytokine induced killer': 1782, 'C idella kidney': 1783, 'Ctenopharyngodon idellus kidney': 1784, 'High Frequency Structure Simulator': 1785, 'high frequency simulation software': 1786, 'High Frequency Solution Solver': 1787, 'equal channel angular pressing': 1788, 'emergency care access point': 1789, 'Evoked Compound Action Potential': 1790, 'endothelial cell activation potential': 1791, 'half metallic layer': 1792, 'Hypnea musciformis lectin': 1793, 'heterostructural mixed linker': 1794, 'echo limited regime': 1795, 'egg laying radius': 1796, 'early light regulation': 1797, 'emerged lateral root': 1798, 'Inter University Institute': 1799, 'in utero ischemia': 1800, 'Incontinence Utility Index': 1801, 'Marine National Monuments': 1802, 'maternal near miss': 1803, 'metallic nanoporous materials': 1804, 'mouthfeel non masked': 1805, 'intrinsic spin orbit': 1806, 'International Standard Organisation': 1807, 'impersonal sex orientation': 1808, 'ordered macroporous electrode': 1809, 'obesity management education': 1810, 'Open Microscopy Environment': 1811, 'direct injection pyrolytic synthesis': 1812, 'direct infusion pneumatic spray': 1813, 'direct intrahepatic portocaval shunt': 1814, 'prostate specific membrane antigen': 1815, 'printed square monopole antenna': 1816, 'population specific miRNA alleles': 1817, 'Peptide nucleic acid': 1818, 'phrenic nerve activity': 1819, 'Pacific North American': 1820, 'Body fat mass': 1821, 'Block Fading Model': 1822, 'Basel face model': 1823, 'Biceps femoris muscle': 1824, 'bright field microscopic': 1825, 'adjustable gastric band': 1826, 'above ground biomass': 1827, 'asymptotic giant branch': 1828, 'tumors/tumor bearing rat': 1829, 'to background ratio': 1830, 'to blood ratio': 1831, 'thermal boundary resistances': 1832, 'tree bisection reconnection': 1833, 'third harmonic generation': 1834, 'Tamm Horsfall glycoprotein': 1835, 'Reversible Posterior Leukoencephalopathy Syndrome': 1836, 'Ridge partial least squares': 1837, 'front end electronics': 1838, 'Folium Epimedii extract': 1839, 'front end enclosure': 
1840, 'Plastic Leaded Chip Carrier': 1841, 'Pearson Linear Correlation Coefficient': 1842, 'Simultaneous Localization and Mapping': 1843, 'signaling lymphocytic activation molecule': 1844, 'Spatial Logistics Appended Module': 1845, 'Systemic Lupus Activity Measure': 1846, 'Pneumatically Operated Gait Orthosis': 1847, 'Polar Orbiting Geophysical Observatory': 1848, 'Percentage of Glottic Opening': 1849, 'Robotic Gait Rehabilitation': 1850, 'reach grasp retrieve': 1851, 'Relative growth rate': 1852, 'relative gingival recession': 1853, 'random genomic regions': 1854, 'Hybrid Assistive Limb': 1855, 'histidine ammonia lyase': 1856, 'hand activity level': 1857, 'hand assisted laparoscopic': 1858, 'Haemorrhoidal artery ligation': 1859, 'thyroxine binding globulin': 1860, 'Top bottom genotyping': 1861, 'Naphthol Blue Black': 1862, 'Normal Building Block': 1863, 'enterobacterial repetitive intergenic consensus': 1864, 'effective residual ink concentration': 1865, 'European Research Infrastructure Consortium': 1866, 'European Retinoblastoma Imaging Collaboration': 1867, 'hepatic leukemia factor': 1868, 'human lung fibroblasts': 1869, 'beta half Cauchy': 1870, 'Bayesian Hierarchical Clustering': 1871, 'exponentiated half Cauchy': 1872, 'Energy Harvesting Controller': 1873, 'electrically heated cigarettes': 1874, 'essential health care': 1875, 'average neighborhood margin maximum': 1876, 'additive nonparametric margin maximum': 1877, 'basement membrane zone': 1878, 'base metal zone': 1879, 'ministry of economic affairs': 1880, 'Multi Objective Evolutionary Algorithms': 1881, 'Aphanizomenon flos aquae': 1882, 'adaptive fractal analysis': 1883, 'average fractional anisotropy': 1884, 'audio feature analysis': 1885, 'arm fat area': 1886, 'M anisopliae crude antigen': 1887, 'modified alkaline comet assay': 1888, 'olive fruit extract': 1889, 'Ophiocordyceps formosana extracts': 1890, 'Ougan flavedo extract': 1891, 'outdoor fitness equipment': 1892, 'white rice husks ash': 1893, 'Winnipeg Regional Health Authority': 1894, 'vapor liquid equilibrium': 1895, 'volumetric laser endomicroscopy': 1896, 'vegetal localization element': 1897, 'very low expression': 1898, 'Laparo Endoscopic Single Site': 1899, 'Landing Error Scoring System': 1900, 'intra aortic balloon pump': 1901, 'intra arterial blood pressure': 1902, 'dorsal raphe nucleus': 1903, 'Drug Reaction Network': 1904, 'medial ganglionic eminence': 1905, 'mobile genetic elements': 1906, 'maternal genome elimination': 1907, 'multiplex gene expression': 1908, 'multi gradient echo': 1909, 'sodium antimony gluconate': 1910, 'single amplified genome': 1911, 'senescence associated gene': 1912, 'superoxide anion generation': 1913, 'MBL associated serine protease': 1914, 'mucin associated surface proteins': 1915, 'obese non elite': 1916, 'ordinary Nernst effect': 1917, 'zeta inhibitory peptide': 1918, 'Zero Inflated Poisson': 1919, 'zero interaction potency': 1920, 'zymosan induced peritonitis': 1921, 'problem areas in diabetes': 1922, 'Personnel Accounting Integrated Database': 1923, 'impaired glucose tolerance': 1924, 'Iowa Gambling Task': 1925, 'total abdominal hysterectomy': 1926, 'total artificial heart': 1927, 'transverse arch height': 1928, 'total laparoscopic hysterectomy': 1929, 'total labor hours': 1930, 'telopeptide lysyl hydroxylase': 1931, 'ionized physical vapor deposition': 1932, 'Ischemic peripheral vascular disease': 1933, 'bone specific alkaline phosphatase': 1934, 'Baltic Sea Action Plan': 1935, 'wind turbine generators': 1936, 'Workload 
Transition Graph': 1937, 'solar wind plasma': 1938, 'soil water potential': 1939, 'loss of function': 1940, 'latency of fall': 1941, 'local outlier factor': 1942, 'Levels of Functioning': 1943, 'left lower pulmonary vein': 1944, 'lateral left portal vein': 1945, 'usual interstitial pneumonia': 1946, 'upper inflection point': 1947, 'Linear mixed effects': 1948, 'L moments estimators': 1949, 'leaves methanolic extract': 1950, 'low managerial experience': 1951, 'leucine methyl ester': 1952, 'vertebral compression fractures': 1953, 'variant call format': 1954, 'Variant Call Formatted': 1955, 'Vena cava filter': 1956, 'vegetation continuous field': 1957, 'histologically normal breast': 1958, 'hydroxy naphthol blue': 1959, 'National Air Pollution Surveillance': 1960, 'Narora Atomic Power Station': 1961, 'Fugl Meyer assessment': 1962, 'February March April': 1963, 'Failure Mode Analysis': 1964, 'end of life': 1965, 'Encyclopedia of Life': 1966, 'Ethanol organosolv lignin': 1967, 'Edge Orthologous Labeling': 1968, 'Augmentation severity rating scale': 1969, 'ADHD Self Report Scale': 1970, 'ADHD self rating scale': 1971, 'Autism Spectrum Rating Scale': 1972, 'shear actuated fiber composite': 1973, 'solid alkaline fuel cells': 1974, 'Japanese Experimental Module': 1975, 'job exposure matrix': 1976, 'innermost stable circular orbit': 1977, 'In situ chemical oxidation': 1978, 'Sloan Digital Sky Survey': 1979, 'sodium dodecyl sulphate sedimentation': 1980, 'Spatial decision support systems': 1981, 'NRAO VLA Sky Survey': 1982, 'National Vital Statistics System': 1983, 'fractional order sliding mode controller': 1984, 'First Order Sliding Mode Controller': 1985, 'New Gravitational Observatory': 1986, 'no good ORFs': 1987, 'non governmental organization': 1988, 'non growing oocytes': 1989, 'Wavelet Power Spectrum': 1990, 'Work Productivity Survey': 1991, 'WRF Preprocessing System': 1992, 'Web Processing Service': 1993, 'water pipe smoking': 1994, 'Giant Metrewave Radio Telescope': 1995, 'Global Multi Resolution Topography': 1996, 'clock network evaluation': 1997, 'constructive neutral evolution': 1998, 'conserved noncoding elements': 1999, 'conventional neck exploration': 2000, 'Less Flexibility First': 2001, 'low frequency fluctuation': 2002, 'Lévy flight foraging': 2003, 'ovum pick up': 2004, 'operational phylogenetic unit': 2005, 'organ procurement units': 2006, 'Carnegie Mellon University': 2007, 'cell monitoring unit': 2008, 'China Medical University': 2009, 'Clinical Monitoring Unit': 2010, 'waste cooking oils': 2011, 'Worst case optimization': 2012, 'weakly coupled oscillator': 2013, 'deep belief network': 2014, 'dynamic Bayesian networks': 2015, 'disease biomarker network': 2016, 'optimal homotopy asymptotic method': 2017, 'Optimal Homotopy Analysis Method': 2018, 'pipe wagon articulating': 2019, 'Pulse wave analysis': 2020, 'people with AIDS': 2021, 'Speckle reducing anisotropic diffusion': 2022, 'Self renewing asymmetric division': 2023, 'quantified sonar system': 2024, 'quantized state systems': 2025, 'quasi steady state': 2026, 'quorum sensing signal': 2027, 'Fluxes Petri Net': 2028, 'fixed pattern noise': 2029, 'flagellar pocket neck': 2030, 'first primary neoplasm': 2031, 'fronto parietal network': 2032, 'thinning Simple Genetic Algorithm': 2033, 'Tissue Specific Genes Analysis': 2034, 'multiple quantum barrier': 2035, 'multimedia quorum based': 2036, 'classical transverse Ising model': 2037, 'Clinical Trials Information Mediator': 2038, 'Piezoelectric wafer active sensors': 2039, 'proteome 
wide association study': 2040, 'ultimate limit states': 2041, 'Uniform load surface': 2042, 'upper lateral sublobe': 2043, 'Universal Linkage System': 2044, 'laminated veneer lumber': 2045, 'lymphatic vessel length': 2046, 'low viral load': 2047, 'left ventricular lateral': 2048, 'Large Volume Liposuction': 2049, 'complementary metal oxide silicon': 2050, 'Complementary Metal Oxide Semiconductor': 2051, 'comparative mean opinion score': 2052, 'juvenile chronic arthritis': 2053, 'joint capacity allocation': 2054, 'extractible nuclear antigens': 2055, 'energetic neutral atom': 2056, 'elliptical nanohole array': 2057, 'Enzootic nasal adenocarcinoma': 2058, 'European Nucleotide Archive': 2059, 'aorta gonad mesonephros': 2060, 'alternating gradient magnetometer': 2061, 'Ancestral Goat Mitogenome': 2062, 'African green monkey': 2063, 'adaptive grasping mechanism': 2064, 'functional cortical network': 2065, 'fractional Crank Nicholson': 2066, 'freely connected network': 2067, 'fixed consecutive number': 2068, 'Wireless sensing unit': 2069, 'Washington State University': 2070, 'truncated projected least squares': 2071, 'to peak longitudinal strain': 2072, 'extended multiplicative scatter correction': 2073, 'European Mediterranean Seismological Centre': 2074, 'probabilistic rule base': 2075, 'physical resource block': 2076, 'Prey Reporter Bait': 2077, 'Protease reaction buffer': 2078, 'whole kidney marrow': 2079, 'weighed Kaplan Meier': 2080, 'oxygen permeability index': 2081, 'Orthodontic Plaque Index': 2082, 'Observed Predictive Index': 2083, 'ocular protection index': 2084, 'permanent magnet guideway': 2085, 'Pombe minimal glutamate': 2086, 'Pedal Mucus Glass': 2087, 'melamine urea formaldehyde': 2088, 'material unaccounted for': 2089, 'Moving Plateau Touch Display': 2090, 'maximum primary tumor diameter': 2091, 'unified modeling language': 2092, 'Unified Markup Language': 2093, 'Universal Modelling Language': 2094, 'Preliminary Reference Earth Model': 2095, 'platinum replica electron microscopy': 2096, 'atmospheric boundary layer': 2097, 'average binaural level': 2098, 'alveolar bone loss': 2099, 'advanced backcross line': 2100, 'December January February': 2101, 'Dec Jan Feb': 2102, 'weighted ensemble mean': 2103, 'Winkler extraction method': 2104, 'total attenuated backscatter': 2105, 'tRNA anticodon binding': 2106, 'TET assisted bisulphite': 2107, 'thoracic aortic banding': 2108, 'top of atmosphere': 2109, 'time of arrival': 2110, 'tubo ovarian abscess': 2111, 'total organic acid': 2112, 'vector boson fusion': 2113, 'Vector based forwarding': 2114, 'Video based feedback': 2115, 'vaginal blood flow': 2116, 'virtual bright field': 2117, 'Rossby wave source': 2118, 'Rhythmic Weight Shift': 2119, 'Romano Ward syndrome': 2120, 'Induced Image Current': 2121, 'instantaneous imaginary coherence': 2122, 'iterative interference cancelation': 2123, 'item information curves': 2124, 'infrared reflection absorption spectroscopy': 2125, 'Integrated Research Application System': 2126, 'battery energy storage system': 2127, 'Balance error scoring system': 2128, 'Active Power Factor Correction': 2129, 'acute peripancreatic fluid collection': 2130, 'Duty Cycle Generator': 2131, 'discounted cumulative gain': 2132, 'dynamically corrected gates': 2133, 'diet control group': 2134, 'average value models': 2135, 'automatic vending machine': 2136, 'arterio venous malformation': 2137, 'Artificial Vaginal Mucus': 2138, 'zero voltage switching': 2139, 'Zero valent sulfur': 2140, 'field oriented control': 2141, 'Fear of 
childbirth': 2142, 'first order continuity': 2143, 'fold over control': 2144, 'Flora of China': 2145, 'warm mix asphalt': 2146, 'wall motion abnormality': 2147, 'Wireless Multicast Advantage': 2148, 'wall motion analysis': 2149, 'weighted moving average': 2150, 'convex concave anisotropic diffusion': 2151, 'common coronary artery diameter': 2152, 'equalized net diffusion': 2153, 'early neurological deterioration': 2154, 'high proliferative potential': 2155, 'hydroelectric power plant': 2156, 'High pressure processing': 2157, 'Hamiltonian Path Problem': 2158, 'Human Proteome Project': 2159, 'Western Nansen Basin': 2160, 'weighted Naïve Bayes': 2161, 'aerosol optical thickness': 2162, 'Adenomatoid odontogenic tumor': 2163, 'Assertive Outreach Teams': 2164, 'linear tapered slot antenna': 2165, 'local tangent space alignment': 2166, 'long term spectral average': 2167, 'long term sickness absence': 2168, 'loss to follow up': 2169, 'long term follow up': 2170, 'Hasheminejad Kidney Center': 2171, 'heat killed Candida': 2172, 'Healthy Kids Check': 2173, 'Legg Calvé Perthes’ disease': 2174, 'local contact potential differences': 2175, 'fast spin echo': 2176, 'Fractionally spaced equalizer': 2177, 'Forward scattered electron': 2178, 'fatigue sleepiness exhaustion': 2179, 'Feline spongiform encephalopathy': 2180, 'ligament of Berry': 2181, 'Line of balance': 2182, 'Lateral Organ Boundaries': 2183, 'limit of blank': 2184, 'inferior thyroid artery': 2185, 'intelligent trade agent': 2186, 'Intelligent Therapy Assistant': 2187, 'Intraductal tubular adenoma': 2188, 'internal thoracic artery': 2189, 'polar lipid methanol fraction': 2190, 'Property Labelled Materials Fragments': 2191, 'single lumen tube': 2192, 'Selective laser trabeculoplasty': 2193, 'soluble lytic transglycosylase': 2194, 'statistical learning theory': 2195, 'Sri Lanka Tamils': 2196, 'Canine distemper virus': 2197, 'Cumulative Discrepancy Value': 2198, 'trans Golgi network': 2199, 'trochanteric gamma nail': 2200, 'Trilayer graphene nanoribbon': 2201, 'terminal genomic nucleotide': 2202, 'spin free Hamiltonian': 2203, 'single family homes': 2204, 'symphysis fundal height': 2205, 'Knee Society Score': 2206, 'Karolinska Sleepiness Scale': 2207, 'Kearns Sayre Syndrome': 2208, 'Krug Small Seed': 2209, 'Common Warehouse Metamodel': 2210, 'cell wall maintenance': 2211, 'Cell wall material': 2212, 'community weighted mean': 2213, 'cerebellar white matter': 2214, 'filter paper unit': 2215, 'floating point unit': 2216, 'liquid hot water': 2217, 'lady health worker': 2218, 'LIKE HISTORY WEIGHT': 2219, 'fiber wobbling method': 2220, 'four wave mixing': 2221, 'fermented wheat meal': 2222, 'foragers with mites': 2223, 'frontal white matter': 2224, 'coefficients of friction': 2225, 'consolidation of fracture': 2226, 'cemento ossifying fibroma': 2227, 'electrode wear rate': 2228, 'early warning radar': 2229, 'total organic nitrogen': 2230, 'traumatic optic neuropathy': 2231, 'avian sarcoma leukemia virus': 2232, 'airway surface liquid volume': 2233, 'stress induced premature senescence': 2234, 'system integrity protection schemes': 2235, 'Ship integrated power system': 2236, 'Spina Iliaca Posterior Superior': 2237, 'Universal Force Field': 2238, 'Unsupervised Feature Filtering': 2239, 'mitochondrial permeability transition pore': 2240, 'most probable target point': 2241, 'new drug application': 2242, 'nonlinear discriminant analysis': 2243, 'Nuclear Domain A': 2244, 'natural decomposition approach': 2245, 'non dominant arm': 2246, 'Glomerular Activity 
Index': 2247, 'groundwater abnormality index': 2248, 'guideline adherence indicator': 2249, 'gibberellic acid insensitive': 2250, 'grazing ability index': 2251, 'helix turn helix': 2252, 'Huayu Tongluo herbs': 2253, 'High Temperature History': 2254, 'high temperature hyperthermia': 2255, 'Joint Research Centre': 2256, 'joint roughness coefficient': 2257, 'joint radar communications': 2258, 'wear debris particles': 2259, 'Web Design Perspective': 2260, 'water dissolved phase': 2261, 'Atlantic salmon kidney': 2262, 'amplitude shift keying': 2263, 'Available Seat Kilometres': 2264, 'mean normalized expression': 2265, 'minimal nephritic encephalopathy': 2266, 'minimum norm estimate': 2267, 'murine neutrophil elastase': 2268, 'maximal voluntary strength capacity': 2269, 'multipotent vascular stem cell': 2270, 'internal elastic lamina': 2271, 'Intra epithelial Lymphocytes': 2272, 'high hydrostatic pressure': 2273, 'Honolulu Heart Program': 2274, 'Having Homologs Proteins': 2275, 'hen house production': 2276, 'human hemopoietic progenitor': 2277, 'hydrophobic cluster SUMOylation motif': 2278, 'human cancer signalling map': 2279, 'human cerebrovascular smooth muscle': 2280, 'Off line Basecaller': 2281, 'One Leg Balance': 2282, 'Oral lichen planus': 2283, 'oropharyngeal leak pressure': 2284, 'older low performers': 2285, 'Open loop pointing': 2286, 'open loop perception': 2287, 'Machine Readable Zone': 2288, 'mitochondrial rich zone': 2289, 'Individual Weighted Residuals': 2290, 'irrigation water requirements': 2291, 'Kruppel like factors': 2292, 'Krüppel like family': 2293, 'Execution Chain Graph': 2294, 'exercised control group': 2295, 'endothelial cell growth': 2296, 'equivalent composition groups': 2297, 'melanocyte stimulating hormone': 2298, 'Mount St Helens': 2299, 'Miniature Smooth Haired': 2300, 'magnetic stent hyperthermia': 2301, 'main stem height': 2302, 'hepatocyte nuclear factor': 2303, 'Hypergame Normal Form': 2304, 'High nasal flow': 2305, 'Factor Inhibiting HIF': 2306, 'farthest insertion heuristics': 2307, 'First In Human': 2308, 'fall in haematocrit': 2309, 'Fremantle Inner Harbour': 2310, 'predicted body weight': 2311, 'PH BEACH WD40': 2312, 'Math Anxiety Questionnaire': 2313, 'Multiplex Amplicon Quantification': 2314, 'minimum average quality': 2315, 'phase locking value': 2316, 'Polinton like viruses': 2317, 'periodontal ligament visibility': 2318, 'predicted lesion volume': 2319, 'adaptive coding pass scanning': 2320, 'adaptive corrosion protection system': 2321, 'acyl carrier protein synthase': 2322, 'circumferential uniformity ratio estimate': 2323, 'Classroom Undergraduate Research Experience': 2324, 'stroke volume variation': 2325, 'structure variation value': 2326, 'subjective visual vertical': 2327, 'Simian varicella virus': 2328, 'continuous renal replacement therapy': 2329, 'cutaneous resonance running time': 2330, 'close form metric learning': 2331, 'cotton fibre middle lamella': 2332, 'cardiac integrated index': 2333, 'contrast improvement index': 2334, 'Current Impact Index': 2335, 'Counseling Innovation Interest': 2336, 'CORT increase index': 2337, 'modified linear contrast stretching': 2338, 'multi localized confidence score': 2339, 'foveal avascular zone': 2340, 'flagellum attachment zone': 2341, 'fruit abscission zone': 2342, 'colored Petri net': 2343, 'common peroneal nerve': 2344, 'consultative psychiatric nurse': 2345, 'common pathway network': 2346, 'callosal projection neurons': 2347, 'quantization index modulation': 2348, 'Quality Index Method': 2349, 
'Partial rank correlated coefficients': 2350, 'papillary renal cell carcinoma': 2351, 'no side hole': 2352, 'nickel sulphate hexahydrate': 2353, 'North Sea houting': 2354, 'mutual information quotient': 2355, 'Malocclusion Impact Questionnaire': 2356, 'line of interest': 2357, 'Limiting oxygen index': 2358, 'loss on ignition': 2359, 'loss of imprinting': 2360, 'Loss of Interaction': 2361, 'right ventricular apex': 2362, 'retinal vessel analyzer': 2363, 'Rapid Visco Analyzer': 2364, 'Rapid viscosity analyzer': 2365, 'right VEGAS adapter': 2366, 'acute hepatic failure': 2367, 'altered hepatic foci': 2368, 'aerial hyphae formation': 2369, 'anterior heart field': 2370, 'anthropogenic heat flux': 2371, 'error related negativity': 2372, 'Elman recurrent network': 2373, 'effective response network': 2374, 'extended release niacin': 2375, 'estrogen receptor negative': 2376, 'paroxysmal nocturnal hemoglobinuria': 2377, 'periventricular nodular heterotopia': 2378, 'Public Nursing Home': 2379, 'Lobular capillary hemangioma': 2380, 'Langerhans cell histiocytosis': 2381, 'Lens culinaris hemagglutinin': 2382, 'light chain homolog': 2383, 'Black widow spider': 2384, 'body weight support': 2385, 'band width synthesis': 2386, 'best worst scaling': 2387, 'Beckwith Wiedemann syndrome': 2388, 'Double threaded Japan': 2389, 'diencephalic telencephalic junction': 2390, 'velocity time integral': 2391, 'Variable temperature insert': 2392, 'Vaginal Tactile Imager': 2393, 'Brockenbrough curved needle': 2394, 'bacterial cellulose nanofiber': 2395, 'bilateral cavernous neurectomy': 2396, 'breast care nurse': 2397, 'Bicuspid aortic valve': 2398, 'balanced antipodal Vivaldi': 2399, 'balloon aortic valvuloplasty': 2400, 'bloc atrio ventriculaire': 2401, 'aberrant left subclavian artery': 2402, 'antibody lectin sandwich array': 2403, 'tension reduction behavior': 2404, 'Transportation Research Board': 2405, 'time reference beamformer': 2406, 'tree ring boundary': 2407, 'travel related bacteremias': 2408, 'juvenile pilocytic astrocytoma': 2409, 'Java Persistence API': 2410, 'Josephson parametric amplifier': 2411, 'zinc finger nuclease': 2412, 'zinc ferrite nanoparticle': 2413, 'acute generalized exanthematous pustulosis': 2414, 'advanced glycation end products': 2415, 'Adolescent Girls Empowerment Programme': 2416, 'Red palm oil': 2417, 'random particle optimization': 2418, 'right posterior oblique': 2419, 'day of life': 2420, 'donor only labeled': 2421, 'inferior lateral genicular': 2422, 'inner leaf gel': 2423, 'ventricular ejection time': 2424, 'visceral endoderm thickening': 2425, 'vaginal epithelial thickness': 2426, 'proper hepatic artery': 2427, 'Pain Health Assessment': 2428, 'Polish hatchery Aquamar': 2429, 'putative horizontally acquired': 2430, 'primary human astrocytes': 2431, 'critical limb ischaemia': 2432, 'capsule location index': 2433, 'Cerenkov luminescence imaging': 2434, 'command line interface': 2435, 'Canopy leaf irradiation': 2436, 'NBI International Colorectal Endoscopic': 2437, 'National Intensive Care Evaluation': 2438, 'neuronal intramembrane cavitation excitation': 2439, 'nisin inducible controlled expression': 2440, 'high grade dysplasia': 2441, 'hyper gamma distribution': 2442, 'hand grip dynamometer': 2443, 'high glucose diet': 2444, 'Hymenoptera Genome Database': 2445, 'Median arcuate ligament syndrome': 2446, 'multi angle light scattering': 2447, 'anterior circumflex humeral artery': 2448, 'American College Health Association': 2449, 'endothelium dependent vessel relaxation': 2450, 
'effective distribution volume ratio': 2451, 'airway smooth muscle cells': 2452, 'adaptive sliding mode control': 2453, 'Vehicle Mile Traveled': 2454, 'vital mineralized tissue': 2455, 'distilled water control': 2456, 'Daily water consumption': 2457, 'Mindfulness Attention Awareness Scale': 2458, 'Maternal antenatal attachment scale': 2459, 'Modified Active Australia Survey': 2460, 'Abdominal Withdrawal Reflex': 2461, 'Argao watershed reserve': 2462, 'average weighted risk': 2463, 'receptor interacting domain': 2464, 'reduced integration domain': 2465, 'receiver initiated diffusion': 2466, 'rotation invariant descriptor': 2467, 'Rho inactivation domain': 2468, 'Kupperman Menopausal Index': 2469, 'kinaesthetic motor imagery': 2470, 'Korean red ginseng': 2471, 'Kurdistan Regional Government': 2472, 'Krebs Ringer glucose': 2473, 'left ventricular fractional shortening': 2474, 'lateral visual field stimulation': 2475, 'Cohen Perceived Stress Scale': 2476, 'Child PTSD Symptom Scale': 2477, 'Colorado Pain Scoring System': 2478, 'wet dog shakes': 2479, 'wavelength dispersive spectroscopy': 2480, 'Welsh Demographic Service': 2481, 'water soluble tetrazolium': 2482, 'wavelet soft thresholding': 2483, 'warm sensation threshold': 2484, 'within species transmission': 2485, 'automatic tongue diagnosis system': 2486, 'Autonomic Tongue Diagnostic System': 2487, 'average total disease score': 2488, 'Ac Leu Leu norleucinal': 2489, 'acetyl leucyl leucyl norleucinal': 2490, 'anterior lateral line nerve': 2491, 'acetyl Leu Leu norLeu': 2492, 'Gray matter volume': 2493, 'global motion vector': 2494, 'fresh palm oil': 2495, 'Free Patents Online': 2496, 'Lung Weight Gain': 2497, 'letter word generation': 2498, 'Open Field Test': 2499, 'optical Fourier transformation': 2500, 'orientation filter transform': 2501, 'glycogen synthase kinase': 2502, 'generalized Sawada Kotera': 2503, 'Glaxo Smith Kline': 2504, 'glucose synthesis kinase': 2505, 'Red ginseng extract': 2506, 'relative gene expression': 2507, 'renormalization group equation': 2508, 'Naja sputatrix venom': 2509, 'no slot ventilation': 2510, 'Numerical Summarization Vectors': 2511, 'normal saphenous veins': 2512, 'neuroadapted Sindbis virus': 2513, 'green tea extract': 2514, 'Gracilaria tenuistipitata extract': 2515, 'Neck Pain Questionnaire': 2516, 'non photochemical quenching': 2517, 'non participation questionnaire': 2518, 'white balloon flower': 2519, 'Wakefield Bayes factor': 2520, 'diluted bee venom': 2521, 'Drosophila B virus': 2522, 'diastolic blood viscosity': 2523, 'heart weight index': 2524, 'hand wing index': 2525, 'nasal lavage fluid': 2526, 'Normalized Log Frequency': 2527, 'normal liver function': 2528, 'nano lipid formulation': 2529, 'Blood flow volume': 2530, 'best fitness value': 2531, 'blood flow velocity': 2532, 'Binary Feature Vector': 2533, 'Barmah Forest virus': 2534, 'pathway based similarity comparison': 2535, 'peripheral blood stem cells': 2536, 'official development assistance': 2537, 'Outer Dynein Arms': 2538, 'Overseas Development Assistance': 2539, 'Online digital assistance': 2540, 'optimal docking area': 2541, 'Total factor productivity': 2542, 'total field power': 2543, 'tomato fluorescent protein': 2544, 'transcription factor percentage': 2545, 'aortic blood flow': 2546, 'annular bright field': 2547, 'ABRE binding factor': 2548, 'Audio Bio Feedback': 2549, 'after blood feeding': 2550, 'anti müllerian hormone': 2551, 'Australian Medicines Handbook': 2552, 'anatomically modern humans': 2553, 'general purpose genotype': 2554, 
'good prognosis group': 2555, 'Good Practice Guide': 2556, 'Breast Cancer Surveillance Consortium': 2557, 'breast cancer stem cell': 2558, 'National Population Health Survey': 2559, 'non pylori Helicobacter species': 2560, 'unit leaf rate': 2561, 'untranslated leader region': 2562, 'uterine lumen region': 2563, 'high grade neoplasia': 2564, 'hollow gold nanoshells': 2565, 'inferior longitudinal fasciculus': 2566, 'intelligent listening framework': 2567, 'Intervention Level Framework': 2568, 'isolated lymphoid follicle': 2569, 'lateral geniculate body': 2570, 'Laparoscopic gastric bands': 2571, 'Reactive lymphoid hyperplasia': 2572, 'Royal London Hospital': 2573, 'effective diversity gain': 2574, 'Emergency diesel generator': 2575, 'electron donating group': 2576, 'endothelial differentiation gene': 2577, 'orthogonal genetic algorithm': 2578, 'Outer Genetic Algorithm': 2579, 'medical implantable communication service': 2580, 'Multiple Ion Cluster Source': 2581, 'Multiple Indicator Cluster Surveys': 2582, 'Matrigel invasion chamber system': 2583, 'malaria incidence climate seasons': 2584, 'right hand circular polarization': 2585, 'Right Hand Circularly Polarized': 2586, 'multilayer printed circuit board': 2587, 'Maharashtra Pollution Control Board': 2588, 'planar near field': 2589, 'Proprioceptive neuromuscular facilitation': 2590, 'personalized normative feedback': 2591, 'potential nitrogen fixing': 2592, 'Multiple organ failure': 2593, 'metal organic framework': 2594, 'maximum occlusal force': 2595, 'microscopic observational fields': 2596, 'vesicular stomatitis virus': 2597, 'VM SLA violation': 2598, 'varicose saphenous veins': 2599, 'disease evaluation factor': 2600, 'dose enhancement factor': 2601, 'Distributed Execution Framework': 2602, 'Diabète en Forme': 2603, 'days in vitro': 2604, 'deep infarct volume': 2605, 'double cantilever beam': 2606, 'drug coated balloons': 2607, 'Demineralised Cortical Bone': 2608, 'dynamical Coulomb blockade': 2609, 'end notched flexure': 2610, 'early newborn food': 2611, 'early nocturnal fasting': 2612, 'medial lateral oblique': 2613, 'Multi Link Optimization': 2614, 'Mauna Loa Observatory': 2615, 'Mildew Locus O': 2616, 'non mineralized tissue': 2617, 'noninvasive microtest technique': 2618, 'N Myristoyl transferase': 2619, 'universal serial bus': 2620, 'Untied Suture Bridge': 2621, 'leaky wave antenna': 2622, 'long wave approximation': 2623, 'limited wide area': 2624, 'locally weighted averaging': 2625, 'Wireless Measurement Project': 2626, 'Wireless MAC Processor': 2627, 'Welsh Medicines Partnership': 2628, 'weight management practices': 2629, 'wireless body area network': 2630, 'Weather Bureau Army Navy': 2631, 'side lobe level': 2632, 'Small lymphocytic lymphoma': 2633, 'serum lipid level': 2634, 'standard linear liquid': 2635, 'outer volume suppression': 2636, 'offer versus serve': 2637, 'oleic vinyl sulfone': 2638, 'total false fraction': 2639, 'Talas Fergana fault': 2640, 'tangential flow filtration': 2641, 'Trefoil factor family': 2642, 'temporal Fano factor': 2643, 'heat shock protein complexes': 2644, 'hematopoietic stem progenitor cell': 2645, 'hormone sensitive prostate cancer': 2646, 'high speed pressure clamp': 2647, 'lingual bone height': 2648, 'London based hospitals': 2649, 'group parallel interference cancellation': 2650, 'guinea pig inclusion conjunctivitis': 2651, 'magnetic wall waveguide': 2652, 'Mann Whitney Wilcoxon': 2653, 'Slow Strain Rate Tensile': 2654, 'stop signal reaction time': 2655, 'Strand specific reverse 
transcription': 2656, 'natural organic matter': 2657, 'normal oral mucosa': 2658, 'non organiser mesoderm': 2659, 'nasal outer macula': 2660, 'non operative management': 2661, 'outer nuclear membrane': 2662, 'Otto normal medium': 2663, 'Casein kinase I': 2664, 'Compound kushen injection': 2665, 'paxillin kinase linker': 2666, 'protein kinase like': 2667, 'inner molecular layer': 2668, 'isostructural mixed linker': 2669, 'Live Harmonic Broadcasting': 2670, 'Luteinizing Hormone Beta': 2671, 'late heavy bombardment': 2672, 'power spectrum density function': 2673, 'particle size distribution function': 2674, 'volume of fluid': 2675, 'vertical occipital fasciculus': 2676, 'algae raceway integrated design': 2677, 'AT rich interaction domain': 2678, 'sulphonic functionalized silica particles': 2679, 'Summer Food Service Program': 2680, 'Isolation by barrier': 2681, 'inter boundary biota': 2682, 'importin beta binding': 2683, 'edge exclusion deviance': 2684, 'Energy efficient driving': 2685, 'edge enhancing diffusion': 2686, 'embryonic ectoderm development': 2687, 'Environmental enteric dysfunction': 2688, 'Gamma Knife surgery': 2689, 'generalized Karp Sipser': 2690, 'National Agricultural Imagery Program': 2691, 'neuronal apoptosis inhibitory protein': 2692, 'radar height indicator': 2693, 'reactive hyperemia index': 2694, 'rubber hand illusion': 2695, 'repetitive head impacts': 2696, 'Total precipitable water': 2697, 'thoracoscopic pericardial window': 2698, 'latitude ionospheric sensor network': 2699, 'Line Impedance Stabilization Network': 2700, 'Pepsi Light Twist': 2701, 'Production logging tools': 2702, 'Pepsi Twist Light': 2703, 'pass transistor logic': 2704, 'partial tight ligation': 2705, 'pulse train length': 2706, 'pancreatic triglyceride lipase': 2707, 'binary gamma gamma': 2708, 'bovine gamma globulin': 2709, 'forest vegetation management': 2710, 'finite volume method': 2711, 'Short Oligonucleotide Analysis Package': 2712, 'Simple Object Access Protocol': 2713, 'secreted ookinete adhesive protein': 2714, 'Weed Risk Assessment': 2715, 'Withdrawal Related Adaptations': 2716, 'wheel running activity': 2717, 'wrinkle recovery angle': 2718, 'Water Resources Agency': 2719, 'effective strip widths': 2720, 'electrostatic solitary wave': 2721, 'Enhanced silicate weathering': 2722, 'extracorporeal shock waves': 2723, 'higher heating value': 2724, 'Human herpes virus': 2725, 'community based forest management': 2726, 'characteristic basis function method': 2727, 'natural production forests': 2728, 'normal palmar fascia': 2729, 'no positive feedback': 2730, 'non pollen feeding': 2731, 'algebraic difference approach': 2732, 'American Diabetes Association': 2733, 'Aggressive Dual Ascending': 2734, 'anti drug antibodies': 2735, 'adenosine deaminase activity': 2736, 'Eastern Venezuelan basin': 2737, 'empirical valence bond': 2738, 'enlarged vascular bundle': 2739, 'endoscopic variceal ligation': 2740, 'epsilon very large': 2741, 'Ena VASP like': 2742, 'adipose differentiation related protein': 2743, 'autosomal dominant retinitis pigmentosa': 2744, 'tyrosine kinase receptors': 2745, 'total knee replacement': 2746, 'relative uptake ratio': 2747, 'right uterine remnant': 2748, 'female genital tract': 2749, 'functionally graded thickness': 2750, 'food grasping tasks': 2751, 'Foster Greer Thorbecke': 2752, 'formol gel test': 2753, 'polymorphic amplified typing sequences': 2754, 'Performance Aware Task Scheduling': 2755, 'polar auxin transport stream': 2756, 'Ultrasound modulated fluorescence': 2757, 
'Unique Manuka Factor': 2758, 'upper membership function': 2759, 'UPR modulatory factor': 2760, 'optical vortex interferometer': 2761, 'occupied volume index': 2762, 'Tip Enhanced Raman Spectroscope': 2763, 'tip enhanced Raman scattering': 2764, 'reduced inertial sensor system': 2765, 'Revised International staging system': 2766, 'Unambiguous Frequency Aided': 2767, 'unidirectional Fano algorithm': 2768, 'unsaturated fatty acid': 2769, 'kernel density estimator': 2770, 'kappa deleting element': 2771, 'average rectified value': 2772, 'adjusted realized volatility': 2773, 'average real variability': 2774, 'total column ozone': 2775, 'transparent conducting oxide': 2776, 'tropospheric column ozone': 2777, 'greater petrosal nerve': 2778, 'Gene positive network': 2779, 'Gelatin peptide nanoparticles': 2780, 'gene proximity network': 2781, 'Gly Phe naphylamide': 2782, 'weighted average reflectance': 2783, 'winter accident rate': 2784, 'voltage source inverter': 2785, 'voxel shift interpolation': 2786, 'Vertical slice image': 2787, 'visual stability index': 2788, 'Voltage Stability Index': 2789, 'absolute corrected percentage error': 2790, 'acute cardiogenic pulmonary edema': 2791, 'years since migration': 2792, 'yolk sac membrane': 2793, 'main tumor body': 2794, 'Music Test Battery': 2795, 'marginal turbid band': 2796, 'minimum to baseline': 2797, 'Molecular Tumor Board': 2798, 'Rat brain homogenate': 2799, 'Reusability Based Heuristic': 2800, 'residual bone height': 2801, 'reciprocal best hits': 2802, 'Royal Brompton Hospital': 2803, 'bovine brain homogenate': 2804, 'binary black hole': 2805, 'Bidirectional Best Hit': 2806, 'high dose group': 2807, 'Hybrid Discontinuous Galerkin': 2808, 'hippocampal dentate gyrus': 2809, 'Hsf1 dependent genes': 2810, 'butyl benzyl phthalate': 2811, 'Bailey Borwein Plouffe': 2812, 'Batch Back Propagation': 2813, 'Brucella Bioinformatics Portal': 2814, 'Biologic Beyond Progression': 2815, 'laser scanning confocal microscopy': 2816, 'Laser Scan Confocal Microscope': 2817, 'meconium stained amniotic fluid': 2818, 'maximum statistical agreement forest': 2819, 'data dependent superimpose training': 2820, 'double disk synergy test': 2821, 'vulnerability emulation handlers': 2822, 'vibration energy harvesting': 2823, 'inverse discrete cosine transform': 2824, 'interdisciplinary diabetes care team': 2825, 'single phase induction motor': 2826, 'subdomain precise integration method': 2827, 'single plane illumination microscopy': 2828, 'Selective Plane Illumination Microscopy': 2829, 'full wake alignment': 2830, 'fractional weighted average': 2831, 'Real Time Workshop': 2832, 'return to work': 2833, 'fluorescent mirror unit': 2834, 'freestanding midwifery unit': 2835, 'forest management units': 2836, 'Fetal alcohol spectrum disorders': 2837, 'forest area status designation': 2838, 'Folic Acid Supplemented Diet': 2839, 'cholera toxin B': 2840, 'cement treated base': 2841, 'cognitive test battery': 2842, 'Cell Titer Blue': 2843, 'Chernobyl Tissue Bank': 2844, 'Chronic nonbacterial osteomyelitis': 2845, 'cashew nut oil': 2846, 'clozapine N oxide': 2847, 'carbon nano onion': 2848, 'partial knee replacement': 2849, 'protein kinase RNA': 2850, 'parallel kinetic resolution': 2851, 'front side bus': 2852, 'free slip boundary': 2853, 'frozen storage buffer': 2854, 'Fram Strait Branch': 2855, 'fan shaped body': 2856, 'Physical Unclonable Function': 2857, 'Power utilization factor': 2858, 'Cumulative Benefit Heuristic': 2859, 'cloud base height': 2860, 'critical buckling height': 
2861, 'Cutaneous basophil hypersensitivity': 2862, 'Chronic brain hypoperfusion': 2863, 'Interval Based Heuristic': 2864, 'inspiratory breath hold': 2865, 'intermittent breath holdings': 2866, 'infected brain homogenates': 2867, 'femoral mean artery diameter': 2868, 'flow mediated arterial dilatation': 2869, 'Smart Distance Keeping': 2870, 'software development kit': 2871, 'snout vent length': 2872, 'small volume lavage': 2873, 'atypical ductal hyperplasia': 2874, 'adrenal dependent hyperadrenocorticism': 2875, 'Accumulated degree hours': 2876, 'Autosomal dominant hypocalcemia': 2877, 'alcohol de hydrogenase': 2878, 'Japanese Circulation Society': 2879, 'joint coordinate system': 2880, 'vibration perception threshold': 2881, 'Variable pressure therapy': 2882, 'Vital pulp therapy': 2883, 'asymptotically optimum estimator': 2884, 'A oxyphylla extract': 2885, 'Alisma orientale extract': 2886, 'acoustic over exposure': 2887, 'vanillin enzymatic oligomer': 2888, 'vinyl ester oligomer': 2889, 'multipath echo based': 2890, 'Malt extract broth': 2891, 'mixed event background': 2892, 'germination rate index': 2893, 'glutathione reductase inhibitor': 2894, 'genetic risk index': 2895, 'Meridional Heat Flux': 2896, 'mucoadhesive hydrogel film': 2897, 'maximum heat flux': 2898, 'control flow graph': 2899, 'cell free gel': 2900, 'context free grammar': 2901, 'cystic fibrosis group': 2902, 'counter flow geometry': 2903, 'abnormal umbilical Doppler': 2904, 'alcohol use disorder': 2905, 'central ischemic zone': 2906, 'cone interdigitation zone': 2907, 'Differential Quadrature Method': 2908, 'dipole quadrupole model': 2909, 'microstructure knowledge systems': 2910, 'mitosis kinase score': 2911, 'Mount Kenya Savanna': 2912, 'mitotic kinesin signature': 2913, 'near visual acuity': 2914, 'non value added': 2915, 'non voiding activity': 2916, 'Normalized velocity autocorrelation': 2917, 'multiple endocrine neoplasia': 2918, 'Minimal Essential Network': 2919, 'Modified Elastic Net': 2920, 'mitotic exit network': 2921, 'human neutrophil peptide': 2922, 'Hallasan National Park': 2923, 'herniated nucleus pulposus': 2924, 'human Nanog promoter': 2925, 'crude pongamia oil': 2926, 'chiral plasmonic oligomers': 2927, 'cleft palate only': 2928, 'crystal preferred orientation': 2929, 'low heat rejection': 2930, 'luteinizing hormone receptor': 2931, 'laser hair removal': 2932, 'Large horizontal reactor': 2933, 'linker helix region': 2934, 'Hartridge smoke unit': 2935, 'hardwired scaling unit': 2936, 'harvest store use': 2937, 'helicopter emergency medical services': 2938, 'home energy management system': 2939, 'Security Adaptation Reference Monitor': 2940, 'selective androgen receptor modulators': 2941, 'silicon rich oxide': 2942, 'short range order': 2943, 'spermatophore receiving organ': 2944, 'very high frequency': 2945, 'visible human female': 2946, 'Viral hemorrhagic fevers': 2947, 'high temperature air gasification': 2948, 'health technology assessment group': 2949, 'inferior laryngeal nerve': 2950, 'Iliac lymph node': 2951, 'inguinal lymph nodes': 2952, 'Immature lateral nectary': 2953, 'MOS current mode logic': 2954, 'Monte Carlo Maximum Likelihood': 2955, 'mixed liquor suspended solid': 2956, 'maximal lactate steady state': 2957, 'Boron potassium nitrate': 2958, 'Back Propagation Network': 2959, 'Biological Process Network': 2960, 'basic psychological needs': 2961, 'basilar pontine nuclei': 2962, 'Emergency peripartum hysterectomy': 2963, 'environmental public health': 2964, 'South American fur seal': 2965, 'San 
Antonio Family Study': 2966, 'San Andreas Fault System': 2967, 'species associated fluorescence spectra': 2968, 'ventralis oralis posterior': 2969, 'venous occlusion pressure': 2970, 'Venous occlusion plethysmography': 2971, 'ventralis oralis anterior': 2972, 'valve opening angle': 2973, 'value opportunities analysis': 2974, 'variable optical attenuator': 2975, 'viral outgrowth assay': 2976, 'Orthogonal Variability Modeling': 2977, 'ontogenetic vertical migrations': 2978, 'optimal velocity model': 2979, 'Wireless Intelligent Network': 2980, 'wick in needle': 2981, 'Post Archean Australian Shale': 2982, 'paternal antenatal attachment scale': 2983, 'North American Shale Composite': 2984, 'Norwegian Atlantic Slope Current': 2985, 'Nottingham Arabidopsis Stock Centre': 2986, 'wave energy converter': 2987, 'weak energy condition': 2988, 'worm egg count': 2989, 'whole exome capture': 2990, 'Western English Channel': 2991, 'piston rod laser sensor': 2992, 'Plasmon resonance light scattering': 2993, 'peripherally inserted central catheter': 2994, 'proximity identity chip card': 2995, 'long range order': 2996, 'left right organizer': 2997, 'lysosome related organelles': 2998, 'logarithmic relative occupancy': 2999, 'Lunar Reconnaissance Orbiter': 3000, 'Wood fiber cement': 3001, 'with friction compensation': 3002, 'Weibull fading channels': 3003, 'West Fertilizer Company': 3004, 'Work family conflict': 3005, 'Climate Research Unit': 3006, 'calcium release unit': 3007, 'Competitive repopulating unit': 3008, 'fructose rich diet': 3009, 'family related distress': 3010, 'open split ring resonator': 3011, 'Over Shrinkage Ridge Regression': 3012, 'gastric acid suppression therapy': 3013, 'GBAS Approach Service Type': 3014, 'gradient ascent subjective testing': 3015, 'multichannel intraluminal impedance': 3016, 'motif inclusion index': 3017, 'mean invasion index': 3018, 'Myocardial Ischemia Index': 3019, 'Malignant fibrous histiocytoma': 3020, 'morphological facial height': 3021, 'magnetic fluid hyperthermia': 3022, 'multi family homes': 3023, 'virtual private network': 3024, 'vocal pacemaker nucleus': 3025, 'visual projection neurons': 3026, 'ventral posterior nucleus': 3027, 'Mean vascular size': 3028, 'Machine Vision System': 3029, 'Macroscopic vascularization scores': 3030, 'multidimensional vector space': 3031, 'mean variance skewness': 3032, 'contra rotating wind turbines': 3033, 'Contextual Random Walk Traps': 3034, 'laminated architectural glazing': 3035, 'liquid assisted grinding': 3036, 'Laparoscopic assisted gastrectomy': 3037, 'low adherence group': 3038, 'lymphocyte activation gene': 3039, 'Bus Rapid Transit': 3040, 'boosting regression tree': 3041, 'balance rhythmical training': 3042, 'brake response time': 3043, 'Importance value index': 3044, 'interval value iteration': 3045, 'in vivo induced': 3046, 'In vitro isolation': 3047, 'right angle light scattering': 3048, 'Runway Arrested Landing Site': 3049, 'right atrial longitudinal strain': 3050, 'Secondary User Equality': 3051, 'simulated urban environment': 3052, 'septin unique element': 3053, 'ceiling slot ventilation': 3054, 'cervical stroke volume': 3055, 'cumulative squared velocity': 3056, 'comma separated value': 3057, 'cell surface Vimentin': 3058, 'Goodwin growth cycle': 3059, 'Global Granger Causality': 3060, 'gene gene coexpression': 3061, 'hybrid squeeze film damper': 3062, 'high saturated fat diet': 3063, 'multipath adaptive tabu search': 3064, 'meningococcal antigen typing system': 3065, 'maximum allowed tumor size': 3066, 
'microbial adhesion to solvent': 3067, 'improved mode shape ratio': 3068, 'inter mammary sticky roll': 3069, 'International Mouse Strain Resource': 3070, 'most unstable condition': 3071, 'monosodium urate crystals': 3072, 'marine urinary clade': 3073, 'Lyapunov Characteristic Number': 3074, 'light cracked naphtha': 3075, 'London Cycle Network': 3076, 'liquid crystal network': 3077, 'iron ore tailings': 3078, 'Internet of Things': 3079, 'Natural Resources Conservation Service': 3080, 'normalized radar cross section': 3081, 'Next generation impactor': 3082, 'Non Gastro Intestinal': 3083, 'NEEAR gastrointestinal illness': 3084, 'Forearm vascular resistance': 3085, 'false viable rate': 3086, 'Flow Velocity Ratio': 3087, 'foveal visual radius': 3088, 'Distributed Hash Table': 3089, 'Die Head Temperature': 3090, 'discrete Hartley transform': 3091, 'dense Hough transform': 3092, 'di hydro testosterone': 3093, 'Packet Delay Outage Ratio': 3094, 'proportionate diagnostic outcome ratio': 3095, 'High Level Synthesis': 3096, 'hybrid local search': 3097, 'human lineage specific': 3098, 'hydrostatic leveling system': 3099, 'Generalized Extreme Value': 3100, 'gene expression value': 3101, 'goal equivalent variability': 3102, 'low energy adaptive clustering hierarchy': 3103, 'Low Energy Aware Cluster Hierarchy': 3104, 'home area network': 3105, 'hyperplastic alveolar nodules': 3106, 'Hindfoot Arthrodesis Nail': 3107, 'hybrid alignment nematic': 3108, 'excess mean square error': 3109, 'Extended Mental Status Exam': 3110, 'oxygen induced retinopathy': 3111, 'osteoblast inducer reagent': 3112, 'obese insulin resistant': 3113, 'Link Quality Indicator': 3114, 'Life Quality Index': 3115, 'delay tolerant networking': 3116, 'Delay/Disruption Tolerant Network': 3117, 'dorsomedial telencephalic neuropil': 3118, 'dissolved total nitrogen': 3119, 'decimation in time': 3120, 'diffuse intimal thickening': 3121, 'diet induced thermogenesis': 3122, 'dressing induced transparency': 3123, 'intelligent message estimator': 3124, 'International medical electives': 3125, 'interactive multimedia eBook': 3126, 'Initial Movement Errors': 3127, 'bile salt hydrolase': 3128, 'B speciosus hemagglutinin': 3129, 'basal septal hypertrophy': 3130, 'Digital audio broadcasting': 3131, 'Double Absorbing Boundary': 3132, 'days after bloom': 3133, 'Di amino benzidine': 3134, 'reactive nitrogen intermediates': 3135, 'Recommended Nutrient Intake': 3136, 'reference nutrient intake': 3137, 'Chronic granulomatous disease': 3138, 'chronic graft dysfunction': 3139, 'Constitutional growth delay': 3140, 'Candida Genome Database': 3141, 'high pressure belt': 3142, 'Hepato Pancreatico Biliary': 3143, 'West african craton': 3144, 'Water Absorption Capacity': 3145, 'West Australian Craton': 3146, 'Weighted Additive Classifier': 3147, 'wholesale acquisition cost': 3148, 'shelf margin wedge': 3149, 'synthetic mine water': 3150, 'second mitotic wave': 3151, 'silicon matched water': 3152, 'operative precursor region': 3153, 'oral panoramic radiograph': 3154, 'Outer Product Rule': 3155, 'Opioid peptide receptors': 3156, 'Barbour Stoenner Kelly': 3157, 'black soybean koji': 3158, 'Torque teno virus': 3159, 'Total Tumor Volume': 3160, 'Thiol tracker violet': 3161, 'Diastolic heart failure': 3162, 'dengue hemorrhagic fever': 3163, 'dense humid forest': 3164, 'Digital Health Framework': 3165, 'Friction stir welding': 3166, 'filtered sea water': 3167, 'female sex workers': 3168, 'Tricalcium phosphate bioceramics': 3169, 'Three point bending': 3170, 'total placement 
budget': 3171, 'Trinidad Petroleum Bitumen': 3172, 'Scanning Transmission Ion Microscopy': 3173, 'smart transducer interface module': 3174, 'Matrix Assisted Pulsed Laser Evaporation': 3175, 'matrix associated pulse laser evaporated': 3176, 'oscillating magnetic field': 3177, 'Orbit Management Framework': 3178, 'outer membrane factor': 3179, 'photofunctional nanoporous alumina membrane': 3180, 'Partial Network Alignment Multigraph': 3181, 'ordered mesoporous silica': 3182, 'object motion sensitive': 3183, 'Ovary Maturity Stage': 3184, 'Opsoclonus myoclonus syndrome': 3185, 'Satiety Labeled Intensity Magnitude': 3186, 'side linear induction motor': 3187, 'spectral lifetime imaging microscopy': 3188, 'pine needle like': 3189, 'partial nerve ligation': 3190, 'count median width': 3191, 'chiral magnetic wave': 3192, 'central mode water': 3193, 'horse radish peroxidase': 3194, 'handle region peptide': 3195, 'Horizontal reference plane': 3196, 'histidine rich protein': 3197, 'Tobacco mosaic virus': 3198, 'total macular volume': 3199, 'total mineralized volume': 3200, 'human salivary gland': 3201, 'hierarchical service graph': 3202, 'host specificity group': 3203, 'human submandibular glands': 3204, 'histamine succinyl glutamine': 3205, 'radical neck dissection': 3206, 'resistance nodulation division': 3207, 'relay nodes deployment': 3208, 'Gate All Around': 3209, 'Greater Athens Area': 3210, 'glacial acetic acid': 3211, 'glioma associated antigen': 3212, 'brilliant cresyl blue': 3213, 'Beta Carotene Bleaching': 3214, 'bulk conduction band': 3215, 'uracil DNA glycosylase': 3216, 'unit disk graph': 3217, 'cadmium zinc telluride': 3218, 'chirp z transform': 3219, 'moderate vigorous physical activity': 3220, 'multi voxel pattern analysis': 3221, 'Absolute energy intake': 3222, 'Adult Education Initiative': 3223, 'anger expression index': 3224, 'allelic expression imbalance': 3225, 'long chain fatty acid': 3226, 'lateral circumflex femoral artery': 3227, 'high tension glaucoma': 3228, 'Hierarchical Transition Graphs': 3229, 'Outer retinal tubulations': 3230, 'object recognition test': 3231, 'operator repressor titration': 3232, 'sustained posterior contralateral negativity': 3233, 'sparse partial correlation networks': 3234, 'open angle glaucoma': 3235, 'oxide assisted growth': 3236, 'Official Airline Guide': 3237, 'long term hypoxia': 3238, 'Large tailed Han': 3239, 'Low Temperature History': 3240, 'Lijiangxin tuan heigu': 3241, 'fetal heart rate': 3242, 'forearm hyperemic reactivity': 3243, 'frontal horn ratio': 3244, 'time varying frequency': 3245, 'temporal variance filter': 3246, 'tissue volume fraction': 3247, 'Square forecast error difference': 3248, 'six food elimination diet': 3249, 'image based visual servo': 3250, 'iterative Bayesian variable selection': 3251, 'Assembly Group Analysis': 3252, 'Amadori glycated albumin': 3253, 'adaptive genetic algorithm': 3254, 'anti gliadin antibodies': 3255, 'Automated glycan assembly': 3256, 'magnetic flux leakage': 3257, 'maximum fractal length': 3258, 'mixed feedback loop': 3259, 'marginal Fermi liquid': 3260, 'Brillouin dynamic grating': 3261, 'beta d glucan': 3262, 'tear meniscus height': 3263, 'trans membrane helices': 3264, 'traumatic macular holes': 3265, 'thoroughly modified Higuchi': 3266, 'JNK interacting protein': 3267, 'jasmonate induced protein': 3268, 'protein tyrosine kinase': 3269, 'proximal thoracic kyphosis': 3270, 'Philip Tobias Korongo': 3271, 'action potential waveform': 3272, 'alkaline peptone water': 3273, 'artificial pond water': 
3274, 'apparent polar wander': 3275, 'Staphylococcal Enterotoxin E': 3276, 'System Energy Efficiency': 3277, 'Shannon energy envelope': 3278, 'secondary electron emission': 3279, 'series elastic element': 3280, 'European Synchrotron Radiation Facility': 3281, 'end stage renal failure': 3282, 'Positron Annihilation Lifetime Spectroscopy': 3283, 'Peak atrial longitudinal strain': 3284, 'Pediatric Advanced Life Support': 3285, 'Phase Analysis Light Scattering': 3286, 'STIM Orai activating region': 3287, 'Simple Opportunistic Adaptive Routing': 3288, 'primary irritation index': 3289, 'prior informed imputation': 3290, 'Personally identifiable information': 3291, 'Federal Docket Management System': 3292, 'fuzzy decision making system': 3293, 'filter dynamics measurement system': 3294, 'methanol to olefins': 3295, 'make to order': 3296, 'medical treatment overseas': 3297, 'DRB sensitivity inducing factor': 3298, 'dynamic stress intensity factor': 3299, 'human embryonic stem cell': 3300, 'human endometrial stromal cells': 3301, 'red ginseng acid polysaccharide': 3302, 'Rice Genome Annotation Project': 3303, 'Resistance gene analog polymorphism': 3304, 'cytokine induced neutrophil chemoattractant': 3305, 'Complete Interaction Network Centrality': 3306, 'helix loop helix': 3307, 'Hydraulic Lock Hopper': 3308, 'ubiquitin like domain': 3309, 'upper level discriminator': 3310, 'urate lowering drug': 3311, 'Kidney Injury Molecule': 3312, 'kinase interaction motif': 3313, 'Kilovoltage Intrafraction Monitoring': 3314, 'scanning laser vibrometer': 3315, 'single locus variations': 3316, 'systemic large veins': 3317, 'semi lunar valve': 3318, 'basic bandwidth unit': 3319, 'biofilm biomass unit': 3320, 'basic building unit': 3321, 'mean square pure error': 3322, 'magnetic solid phase extraction': 3323, 'mean squared prediction error': 3324, 'quaternion wavelet transform': 3325, 'quarter wave transformer': 3326, 'shear strength reduction method': 3327, 'scanning spreading resistance microscopy': 3328, 'Minimum Volume Ellipsoid': 3329, 'maximal voluntary effort': 3330, 'mitral valve enhancement': 3331, 'mesenteric vascular endothelium': 3332, 'mixed vehicular emission': 3333, 'colon ascendens stent peritonitis': 3334, 'critical appraisal skills program': 3335, 'comet assay software project': 3336, 'central aortic systolic pressure': 3337, 'Conventional Inertial Reference Frame': 3338, 'channel impulse response function': 3339, 'limit cycle oscillation': 3340, 'left common ostium': 3341, 'lithium cobalt oxide': 3342, 'respiratory syncytial virus': 3343, 'Rous Sarcoma virus': 3344, 'Rice stripe virus': 3345, 'relative search volume': 3346, 'Kanade Lucas Tomasi': 3347, 'Karhunen Loève transform': 3348, 'degree of hybridization': 3349, 'depth of hypnosis': 3350, 'Department of Health': 3351, 'Kalman consensus filter': 3352, 'kernelized correlation filter': 3353, 'kenaf core fiber': 3354, 'Kung Co fault': 3355, 'KEGG Chemical Function': 3356, 'total variation model': 3357, 'trunk visceral mesoderm': 3358, 'Modified Chebyshev Picard Iteration': 3359, 'Microbial Community Polarization index': 3360, 'air separation unit': 3361, 'acute stroke unit': 3362, 'approximate sparse unmixing': 3363, 'activated sludge unit': 3364, 'Lagrange Programming Neural Network': 3365, 'limbic paralimbic neocortical network': 3366, 'Direct Velocity Feedback': 3367, 'dominant vibration frequency': 3368, 'direct visual feedback': 3369, 'Digital Video Fluoroscopy': 3370, 'hybrid optimization algorithm': 3371, 'Horn of Africa': 3372, 'healthy 
older adults': 3373, 'highest observed ASA': 3374, 'human oral absorption': 3375, 'Dynamic Time Warping': 3376, 'directional theta weighted': 3377, 'Dynamic time warp': 3378, 'Modified Successive Linearization Method': 3379, 'minimal selective liquid medium': 3380, 'hybrid function projective synchronization': 3381, 'high frequency power supply': 3382, 'modified function projective synchronization': 3383, 'Maximum Fidelity Probe Set': 3384, 'anterior ectosylvian gyrus': 3385, 'Atomic event generator': 3386, 'acute exposure group': 3387, 'fractional discrete cosine transform': 3388, 'flat detector computed tomography': 3389, 'Fast Discrete Cosine Transform': 3390, 'mutual information maximized segmentation': 3391, 'membrane inlet mass spectrometer': 3392, 'mitochondrial inner membrane surface': 3393, 'strapdown inertial navigation system': 3394, 'Spinal Instability Neoplastic Score': 3395, 'Fall Rate Equation': 3396, 'Flesch Reading Ease': 3397, 'fiducial registration error': 3398, 'FoxO recognized element': 3399, 'Conservation space measuring locations': 3400, 'correlation similarity measure learning': 3401, 'Cell System Markup Language': 3402, 'strength duration time constant': 3403, 'secondary data transmission condition': 3404, 'standard deviation time course': 3405, 'high blood level': 3406, 'hand bone loss': 3407, 'heme binding loop': 3408, 'human blood lymphocytes': 3409, 'horizontal bone loss': 3410, 'tongue display unit': 3411, 'thermal desorption unit': 3412, 'Maximally Regular Graph': 3413, 'MORF4 related gene': 3414, 'multicenter rheumatic group': 3415, 'mid radial glial': 3416, 'modeled reduced gravity': 3417, 'delayed neurologic deficit': 3418, 'Distribution network design': 3419, 'family wise error': 3420, 'free water elimination': 3421, 'Normal pressure hydrocephalus': 3422, 'Neutral Protamine Hagedorn': 3423, 'nucleus prepositus hypoglossi': 3424, 'intermittent hypobaric hypoxia': 3425, 'Isolated hypogonadotropic hypogonadism': 3426, 'immortalized human hepatocytes': 3427, 'Advanced Nursing Directive': 3428, 'average normalized delta': 3429, 'Pediatric Respiratory Assessment Measure': 3430, 'Pressure recording analytical method': 3431, 'Alibernet wine extract': 3432, 'aqueous wheatgrass extract': 3433, 'asymptotic waveform evaluation': 3434, 'adult workplace environment': 3435, 'Atomic Weapons Establishment': 3436, 'Biological Association Network': 3437, 'body area networks': 3438, 'labored breathing index': 3439, 'Lecithin Bound Iodine': 3440, 'local branching index': 3441, 'light beam interruption': 3442, 'Strept avidin biotin complex': 3443, 'Specific antibody binding capacity': 3444, 'chronic intermittent hypoxia': 3445, 'critical illness hyperglycaemia': 3446, 'expiratory reserve volume': 3447, 'Environmental Regulation Values': 3448, 'emergency room visits': 3449, 'inspiratory reserve volume': 3450, 'inferior rectal veins': 3451, 'influence relevance voter': 3452, 'interstitial lung disease': 3453, 'interaural level difference': 3454, 'inner lipoyl domain': 3455, 'Infrastructure Less Delivery': 3456, 'integral local deformation': 3457, 'back pain only': 3458, 'Biological Process Ontology': 3459, 'voltage gated calcium channel': 3460, 'Voltage Gated Ca2+ Channels': 3461, 'Deliberate self harm': 3462, 'dyschromatosis symmetrica hereditaria': 3463, 'disc space height': 3464, 'declining steroid hormones': 3465, 'olfactory receptor neuron': 3466, 'Object Related Negativity': 3467, 'Pain Beliefs Questionnaire': 3468, 'Postpartum Bonding Questionnaire': 3469, 'Behavioural 
Inattention Test': 3470, 'bubble induced turbulence': 3471, 'chronic low back pain': 3472, 'completed local binary pattern': 3473, 'Epithelial Splicing Regulatory Protein': 3474, 'exon skipping related process': 3475, 'biochemical oxygen demand': 3476, 'burden of disease': 3477, 'Drosophila C virus': 3478, 'distance coherence vector': 3479, 'dilated central vein': 3480, 'Dye cycle violet': 3481, 'dense core vesicle': 3482, 'ventral nerve cord': 3483, 'virtual non contrast': 3484, 'inferior olivary complex': 3485, 'iron overload cardiomyopathy': 3486, 'internal occipital crest': 3487, 'inner optic circle': 3488, 'upper tolerance limit': 3489, 'Upper thermal limit': 3490, 'non destructive examination': 3491, 'natural direct effect': 3492, 'not differentially expressed': 3493, 'non dry eye': 3494, 'anterior neural ridge': 3495, 'average net revenue': 3496, 'automatic neighbour relation': 3497, 'apparent nitrification rate': 3498, 'Average Nutrient Requirement': 3499, 'boiling water reactor': 3500, 'back work ratio': 3501, 'direct vessel injection': 3502, 'domain versatility index': 3503, 'difference vegetation index': 3504, 'direct venous inoculation': 3505, 'Digital Video Interface': 3506, 'symbolic nuclear analysis package': 3507, 'sensory nerve action potential': 3508, 'soluble NSF attachment protein': 3509, 'Supplemental Nutrition Assistance Program': 3510, 'self actuated shutdown system': 3511, 'Subject Acupuncture Sensation Scale': 3512, 'Schools and Staffing Survey': 3513, 'Self Assembled Skin Substitute': 3514, 'British Heart Foundation': 3515, 'Brueckner Hartree Foch': 3516, 'blank holder force': 3517, 'Exercise Preference Questionnaire': 3518, 'economic production quantity': 3519, 'error per query': 3520, 'Eysenck Personality Questionnaire': 3521, 'International Renal Interest Society': 3522, 'immune reconstruction inflammatory syndrome': 3523, 'Integrated Risk Information System': 3524, 'Immunogenetic Related Information Source': 3525, 'inverted repeated intervening sequences': 3526, 'representative elementary volume': 3527, 'relative expression value': 3528, 'respiratory emergency visits': 3529, 'Uninhabited combat aerial vehicle': 3530, 'Unmanned combat air vehicle': 3531, 'microbial biomass N': 3532, 'Mechanistic Bayesian Networks': 3533, 'mung bean nuclease': 3534, 'Relative water content': 3535, 'rain water collector': 3536, 'robotic wheel chair': 3537, 'Relative Weighted Consistency': 3538, 'radial water canal': 3539, 'Universiti Putra Malaysia': 3540, 'universal primer mix': 3541, 'Optical flow method': 3542, 'Organizational functional motifs': 3543, 'Ovine forestomach matrix': 3544, 'Open flow microperfusion': 3545, 'National Nature Reserve': 3546, 'Nordic Nutrition Recommendations': 3547, 'volume phase holographic': 3548, 'Virtual Physiological Human': 3549, 'relative error norm': 3550, 'ring expanded nucleoside': 3551, 'Beavers Joseph Saffman': 3552, 'beetroot juice supplementation': 3553, 'Schwarz waveform relaxation': 3554, 'standing wave ratio': 3555, 'sharp wave ripples': 3556, 'spontaneous wheel running': 3557, 'BLV herd profile': 3558, 'Bramwell Holdsworth Pinton': 3559, 'bottom hole pressure': 3560, 'Bogong High Plains': 3561, 'bipolar hip prosthesis': 3562, 'basic belief assignment': 3563, 'blood blister‐like aneurysm': 3564, 'warm ionized medium': 3565, 'weigh in motion': 3566, 'weeks in milk': 3567, 'Government labor indicator': 3568, 'Grey level index': 3569, 'cerebellar model articulation controller': 3570, 'Cerebellar model arithmetic computer': 3571, 'cell 
matrix adhesion compex': 3572, 'fault distance location': 3573, 'Flexor Digitorum Longus': 3574, 'Force directed layout': 3575, 'frequency difference limen': 3576, 'Nanyang Technological University': 3577, 'National Taiwan University': 3578, 'nephelometric turbidity units': 3579, 'split step backward Euler': 3580, 'short segment Barrett esophagus': 3581, 'optimal vibration control': 3582, 'oral verrucous carcinoma': 3583, 'boundary integral equation': 3584, 'blinded image evaluation': 3585, 'bovine intestinal epithelial': 3586, 'False rejection rate': 3587, 'fault recall rate': 3588, 'flexion relaxation ratio': 3589, 'financial resource requirements': 3590, 'fast repetition rate': 3591, 'unattended ground sensors': 3592, 'Upright Gait Stability': 3593, 'underground gas storage': 3594, 'Unsolicited Grant Service': 3595, 'Bayesian principal components analysis': 3596, 'ballistic particle cluster aggregate': 3597, 'Multifocal motor neuropathy': 3598, 'multiperiod multiproduct networks': 3599, 'medial mammillary nucleus': 3600, 'Mature median nectary': 3601, 'Minimum Melin Norkrans': 3602, 'motor neuron disease': 3603, 'minor neurocognitive disorder': 3604, 'Maximum Node Degree': 3605, 'medial nuclear division': 3606, 'hen egg lysozyme': 3607, 'hole extraction layer': 3608, 'human embryonic lung': 3609, 'Verbal Decision Analysis': 3610, 'vascular disrupting agent': 3611, 'Cabibbo Kobayashi Maskawa': 3612, 'complete keratinocyte medium': 3613, 'circular knitting machine': 3614, 'Clinical Knowledge Manager': 3615, 'modified discrete cosine transform': 3616, 'multiple detector computed tomography': 3617, 'mouse distal convoluted tubule': 3618, 'swollen joint count': 3619, 'symmetrized Joe Clayton': 3620, 'operator product expansion': 3621, 'overall performance effectiveness': 3622, 'one pass evaluation': 3623, 'overlapped paired end': 3624, 'thermal wave resonator cavity': 3625, 'two way relay channel': 3626, 'One Way Shape Memory': 3627, 'Ocean Weather Station Mike': 3628, 'waste lightweight aggregate': 3629, 'weak link approach': 3630, 'water lifting aerator': 3631, 'Heat Producing Elements': 3632, 'hourly peak error': 3633, 'horizontal position error': 3634, 'Human Placenta Extract': 3635, 'La Plata Basin': 3636, 'left portal branch': 3637, 'MODIS data retrieved EP': 3638, 'Methylation Dependent Restriction Enzyme': 3639, 'protons on target': 3640, 'peaks over threshold': 3641, 'Proximal optimization technique': 3642, 'plasmonic optical tweezers': 3643, 'posterior optical track': 3644, 'Oceanic Niño Index': 3645, 'optic nerve injury': 3646, 'temperature humidity index': 3647, 'Tinnitus Handicap Inventory': 3648, 'tissued hemoglobin index': 3649, 'fractional order element': 3650, 'focus of expansion': 3651, 'fight or escape': 3652, 'sandwich double damping layer': 3653, 'Scalable Data Distribution Layer': 3654, 'no damping layer': 3655, 'nanostructured dielectric layer': 3656, 'linear friction welding': 3657, 'leaf fat weight': 3658, 'Natural gas hydrate': 3659, 'New Guinea Highlanders': 3660, 'mechanically mixed layer': 3661, 'middle molecular layer': 3662, 'Minimum Message Length': 3663, 'multiscale modelling language': 3664, 'last archaeal common ancestor': 3665, 'load aware channel assignment': 3666, 'Tibetan Plateau vortex': 3667, 'total pore volume': 3668, 'transcatheter pulmonary valve': 3669, 'Central Weather Bureau': 3670, 'counterproductive work behaviors': 3671, 'Regional Specialized Meteorological Center': 3672, 'relative soil moisture content': 3673, 'Three River Headwaters 
region': 3674, 'thyrotropin releasing hormone receptor': 3675, 'low dosed computed tomography': 3676, 'local discrete cosine transform': 3677, 'Letter Digit Coding Test': 3678, 'photonic band gap': 3679, 'postprandial blood glucose': 3680, 'Patch burn grazing': 3681, 'peptide binding groove': 3682, 'oxygen uptake rate': 3683, 'oxygen utilization rates': 3684, 'ground waste concrete': 3685, 'gravimetric water content': 3686, 'Gollop Wolfgang complex': 3687, 'reduced graphene oxide': 3688, 'Royal Greenwich Observatory': 3689, 'Variable Angle Nail': 3690, 'Ventral Attention Network': 3691, 'cohesive material zone': 3692, 'ciliary margin zone': 3693, 'circumferential marginal zone': 3694, 'total facet arthroplasty system': 3695, 'transcription factor association strength': 3696, 'temperature programmed oxidation': 3697, 'Timber Product Output': 3698, 'tropical Pacific Ocean': 3699, 'dual energy computer tomography': 3700, 'digital enhanced cordless telecommunications': 3701, 'Goal Question Metric': 3702, 'generalized quantitative model': 3703, 'Pongam raw oil': 3704, 'pressure retarded osmotic': 3705, 'patient reported outcomes': 3706, 'Childhood Autism Rating Scale': 3707, 'coherent anti‐Stokes Raman scattering': 3708, 'Multiple Unit Activity': 3709, 'manipulation under anaesthesia': 3710, 'high frequency repetitive stimulation': 3711, 'Heart Failure Risk Score': 3712, 'Pressurized Building Block': 3713, 'pro basal body': 3714, 'protracted bacterial bronchitis': 3715, 'water quality study': 3716, 'Wisconsin Quality Synthetic': 3717, 'Son Narmada south fault': 3718, 'Swiss National Science Foundation': 3719, 'Gender based violence': 3720, 'genomic breeding value': 3721, 'Gaussian network model': 3722, 'gastric normal mucosa': 3723, 'Phospho enol pyruvate carboxylase': 3724, 'pulse external pneumatic compression': 3725, 'modular air cooled condenser': 3726, 'Minimum Area Contour Change': 3727, 'Fast breeder reactors': 3728, 'foreign body reaction': 3729, 'Focused Beam Routing': 3730, 'fluidized bed reactor': 3731, 'spectral homotopy perturbation method': 3732, 'scanning Hall probe microscopy': 3733, 'One Versus All': 3734, 'ontology variant analysis': 3735, 'objective variability analysis': 3736, 'active implantable medical device': 3737, 'ab initio molecular dynamics': 3738, 'additive increase multiplicative decrease': 3739, 'Urine cell free': 3740, 'urea complex fraction': 3741, 'universal coupling function': 3742, 'high frequency vibration': 3743, 'hypersonic flight vehicle': 3744, 'human foamy virus': 3745, 'Hemorrhagic Fever Viruses': 3746, 'blood gas barrier': 3747, 'below ground biomass': 3748, 'Dallas Pain Questionnaire': 3749, 'dynamic phase quantization': 3750, 'frequency doubling technology': 3751, 'Fast disintegrating tablets': 3752, 'Fisher discriminant transform': 3753, 'frequency discrimination threshold': 3754, 'disease spectrum width': 3755, 'Deep sea water': 3756, 'discriminative score weighting': 3757, 'dense shelf water': 3758, 'urinary trypsin inhibitor': 3759, 'Urinary tract infection': 3760, 'upper thoracic inclination': 3761, 'Testicular germ cell tumour': 3762, 'Tenosynovial Giant Cell Tumour': 3763, 'Electronic Health Record': 3764, 'early hospital readmission': 3765, 'enduring hypoxic response': 3766, 'EGFR homologous region': 3767, 'heat aggregated OVA': 3768, 'human alveolar osteoblasts': 3769, 'High Altitude Observatory': 3770, 'Palisades of Vogt': 3771, 'perfect optical vortex': 3772, 'principal other vehicle': 3773, 'Pacific Ocean Virome': 3774, 'percentage of 
variance': 3775, 'Mueller Hinton agar': 3776, 'Moderate Hypercapnic Acidosis': 3777, 'ricin toxin B': 3778, 'Route Temporary Blindness': 3779, 'red turpentine beetle': 3780, 'Water immersion restraint': 3781, 'wash in rate': 3782, 'wound induced resistance': 3783, 'p Amino benzoic acid': 3784, 'Partition Addition Bootstrap Alteration': 3785, 'ultra mini PCNL': 3786, 'unbalanced magnetic pull': 3787, 'uniformly most powerful': 3788, 'ubiquitin mediated proteolysis': 3789, 'dynamic light scattering sizer': 3790, 'Degenerative lumbar spinal stenosis': 3791, 'Discrete optimized protein energy': 3792, 'Discrete Optimized Potential Energy': 3793, 'Glasgow Alcoholic Hepatitis Score': 3794, 'generalised adaptive harmony search': 3795, 'deoxyribose nucleic acid': 3796, 'Difco Nutrient Agar': 3797, 'body lead burden': 3798, 'Barankin lower bound': 3799, 'bacterial leaf blight': 3800, 'bolt looseness boundary': 3801, 'blood labyrinth barrier': 3802, 'osteogenic matrix cell sheets': 3803, 'Optical Motion Capture Systems': 3804, 'Speeded Up Robust Features': 3805, 'Sanford Underground Research Facility': 3806, 'Floating Pulsatile Release Tablet': 3807, 'Faux Pas Recognition Test': 3808, 'media layer thickness': 3809, 'magnetic local time': 3810, 'Muscle Layer Thickness': 3811, 'medical laboratory technician': 3812, 'Morris Lecar Terman': 3813, 'Nucleolar precursor body': 3814, 'non parametric bootstrap': 3815, 'neural plate border': 3816, 'normal peripheral blood': 3817, 'sago hampas hydrolysate': 3818, 'Sonic Hedgehog Homolog': 3819, 'arterial pressure variability': 3820, 'average peak velocity': 3821, 'Asymmetric Pseudo Voigt': 3822, 'Absolute Putamen Volume': 3823, 'perilla group uninfected': 3824, 'pulse generating unit': 3825, 'perilla group infected': 3826, 'Phylip Graphical Interface': 3827, 'Patient Generated Index': 3828, 'post glioma induction': 3829, 'free water surface': 3830, 'First Warning System': 3831, 'intracycle velocity variation': 3832, 'in vitro virus': 3833, 'average daily gain': 3834, 'ancient duplicated gene': 3835, 'Average Daily Growth': 3836, 'Aggregated Diagnosis Groups': 3837, 'Nicotinamide adenine dinucleotide phosphate': 3838, 'National Atmospheric Deposition Program': 3839, 'stochastic neighbor embedding': 3840, 'Southern New England': 3841, 'stroke no EA': 3842, 'sciatic nerve entrapment': 3843, 'finite helical axis': 3844, 'fundamental harmonic approximation': 3845, 'fork head associated': 3846, 'facultatively host associated': 3847, 'furrow hull awned': 3848, 'gastric wall mucus': 3849, 'geographically weighted mean': 3850, 'Late embryogenesis abundant protein': 3851, 'local electrode atom probe': 3852, 'L1 element amplification protocol': 3853, 'Longitudinal European Autism Project': 3854, 'Chinese visible human': 3855, 'Chronic visceral hyperalgesia': 3856, 'Chronic viral hepatitis': 3857, 'chicken vasa homologue': 3858, 'plasma membrane redox system': 3859, 'Parkinsonian Monkey Rating Scale': 3860, 'high resolution image set': 3861, 'human resources information systems': 3862, 'Unsupervised Discriminant Projection': 3863, 'User Datagram Protocol': 3864, 'Undiagnosed Diseases Program': 3865, 'urban dust particulates': 3866, 'Ultra deep pyrosequencing': 3867, 'Computed Tomography Severity Index': 3868, 'Carlson trophic state index': 3869, 'ventilated lung volume': 3870, 'very low valence': 3871, 'urinary free cortisol': 3872, 'Unit Forming Colonies': 3873, 'Unified Facilities Criteria': 3874, 'Universal Fingerprinting Chip': 3875, 'gluteus maximus lower': 3876, 
'Gastric mucosal lesion': 3877, 'generalized maximum likelihood': 3878, 'glomerular minor lesion': 3879, 'Grey matter lesions': 3880, 'fibrodysplasia ossificans progressiva': 3881, 'fuzzy orienteering problem': 3882, 'fractional occupation probabilities': 3883, 'front of pack': 3884, 'Foramen of Panizza': 3885, 'hallux valgus angle': 3886, 'high voltage activated': 3887, 'higher visual areas': 3888, 'continuous loop average deconvolution': 3889, 'chronic lung allograft dysfunction': 3890, 'confident connected region growing': 3891, 'Critical Care Research Group': 3892, 'Childhood Cancer Research Group': 3893, 'vascular transport function': 3894, 'Vogel Tammen Fulcher': 3895, 'Triangular Number Generator': 3896, 'the nursery ground': 3897, 'transverse neurogenetic gradient': 3898, 'Trans New Guinea': 3899, 'N Formyl octabase': 3900, 'natural fish oil': 3901, 'acute tubular necrosis': 3902, 'anterior thalamic nuclei': 3903, 'anoxic terminal negativity': 3904, 'Advanced TIROS N': 3905, 'Spinal chronic subdural hematoma': 3906, 'saccharified corn starch hydrolysate': 3907, 'right upper lobe': 3908, 'residual use life': 3909, 'remaining useful life': 3910, 'Spina bifida occulta': 3911, 'Skull base osteomyelitis': 3912, 'simulation based optimization': 3913, 'small bowel obstruction': 3914, 'Systems Biology Ontology': 3915, 'Runge Kutta method': 3916, 'reproducing kernel method': 3917, 'reduced kidney mass': 3918, 'constant absolute risk aversion': 3919, 'Context Aware Resource Allocation': 3920, 'dynamic system optimal': 3921, 'digital storage oscilloscopes': 3922, 'Kikuchi Fujimoto’s disease': 3923, 'Kyasanur Forest disease': 3924, 'Transit Station Congestion Index': 3925, 'traumatic spinal cord injury': 3926, 'inferior olivary nuclei': 3927, 'isthmo optic nuclei': 3928, 'dynamic visual acuity': 3929, 'Dynamic vibration absorber': 3930, 'Dynamic Visual Attention': 3931, 'developmental venous anomaly': 3932, 'chronic unpredictable stress': 3933, 'cervical uterine smears': 3934, 'right common iliac artery': 3935, 'relative cortical interstitial area': 3936, 'left common iliac artery': 3937, 'localized corrosion image analyzer': 3938, 'life cycle impact assessment': 3939, 'Neural function defect score': 3940, 'nonstandard finite difference scheme': 3941, 'negative frequency dependent selection': 3942, 'forelimb placement test': 3943, 'First passage time': 3944, 'female producing temperature': 3945, 'frequency pattern test': 3946, 'functionally possible topology': 3947, 'brokered deposits ratio': 3948, 'background diabetic retinopathy': 3949, 'Blastocyst development rate': 3950, 'hypoxic ischemic encephalopathy': 3951, 'health information exchange': 3952, 'High frequency ultrasound': 3953, 'hydraulic flow units': 3954, 'inferior right liver': 3955, 'inner retinal layer': 3956, 'Indian River Lagoon': 3957, 'islet resident leukocytes': 3958, 'auxiliary lymph node': 3959, 'Artificial lumen narrowing': 3960, 'Auricular lymph nodes': 3961, 'Immuno Biological Laboratories': 3962, 'intraoperative blood loss': 3963, 'inferior bridging leaflets': 3964, 'integrated blood lead': 3965, 'Menstrual Distress Questionnaire': 3966, 'Mood Disorder Questionnaire': 3967, 'Nigella sativa oil': 3968, 'Newborn Screening Ontario': 3969, 'MGH Acupuncture Sensation Scale': 3970, 'minimal access spinal surgery': 3971, 'maximal average shear stress': 3972, 'Multivariate Allometric Size Scaling': 3973, 'Measured Abundance Signal Score': 3974, 'Signaling Pathway Impact Analysis': 3975, 'single primer isothermal amplification': 
3976, 'K gracilis stem': 3977, 'knowledge guided scoring': 3978, 'Tripterygium wilfordii polyglycoside': 3979, 'trial work period': 3980, 'the warmest period': 3981, 'tropical western Pacific': 3982, 'bone volume fraction': 3983, 'boundary vorticity flux': 3984, 'blocked visual feedback': 3985, 'blood volume flow': 3986, 'bilateral vestibular failure': 3987, 'conjugated equine estrogens': 3988, 'continuous elution electrophoresis': 3989, 'crude ethanol extract': 3990, 'Collimator Exchange Effect': 3991, 'cross eccentric exercise': 3992, 'endothelium dependent hyperpolarization': 3993, 'edge direction histogram': 3994, 'emergency designated hospitals': 3995, 'evaporation duct height': 3996, 'Bergamot essential oil': 3997, 'Best expected optimization': 3998, 'liver pyruvate kinase': 3999, 'Lotli Pai Kaundinya': 4000, 'Lewis Polycystic Kidney': 4001, 'residual urine volume': 4002, 'Remove Unwanted Variation': 4003, 'residual unexplained variability': 4004, 'Gauss Lorentz Lorentz': 4005, 'gene log likelihood': 4006, 'propolis soluble dry extract': 4007, 'power spectral density estimation': 4008, 'Gingival bleeding index': 4009, 'Glasgow Benefit Inventory': 4010, 'corpus cavernosum smooth muscle': 4011, 'Community Climate System Model': 4012, 'upstream binding factor': 4013, 'upper boundary frequency': 4014, 'Syndrom Kurz test': 4015, 'South Kunlun Thrust': 4016, 'single kidney transplant': 4017, 'rat genome database': 4018, 'reconstructing Gauss domains': 4019, 'rapid genetic drift': 4020, 'relative group delay': 4021, 'Low hydraulic resistance points': 4022, 'lightweight hierarchical routing protocol': 4023, 'fruit vegetable ferment': 4024, 'flipped voltage follower': 4025, 'formation volume factor': 4026, 'Folded variant frequency': 4027, 'Fibril volume fraction': 4028, 'rapid eye movement sleep': 4029, 'Rapid Emergency Medicine Score': 4030, 'B1 derived phagocytes': 4031, 'Biorefinery Demo Plant': 4032, 'bonding dimer plasmon': 4033, 'water immersion restraint stress': 4034, 'WRC interacting receptor sequence': 4035, 'Neurological deficit scoring system': 4036, 'National Diabetes Surveillance System': 4037, 'Notifiable Diseases Surveillance system': 4038, 'Gauss Gauss Gauss': 4039, 'gadolinium gallium garnet': 4040, 'Gauss Gauss Lorentz': 4041, 'geodesic graph Laplacian': 4042, 'medical emergency watch': 4043, 'maximum elytra width': 4044, 'International Emergency Medicine': 4045, 'Immune Electron Microscopy': 4046, 'inner envelope membranes': 4047, 'adaptive statistical iterative reconstruction': 4048, 'Age standardized incidence rate': 4049, 'age specific incidence rate': 4050, 'sequence related amplified polymorphic': 4051, 'SOS response associated peptidase': 4052, 'integrated ultrasound transducer': 4053, 'Intersection Union Test': 4054, 'plate acoustic waves': 4055, 'projector augmented wave': 4056, 'Plant available water': 4057, 'acoustic mode assessment photonic': 4058, 'Alternating Maximum a Posteriori': 4059, 'Tang Luo Ning': 4060, 'thoracic lymph node': 4061, 'total leaf number': 4062, 'Vaginal fluid simulant': 4063, 'vertically free standing': 4064, 'vertical farming systems': 4065, 'Vegetated filter strips': 4066, 'superior rectal vein': 4067, 'surface recombination velocity': 4068, 'stimulated reservoir volume': 4069, 'stroke relevance value': 4070, 'Social role valorization': 4071, 'endoscopic ultrasound scope': 4072, 'Eastern United States': 4073, 'external urethral sphincter': 4074, 'wide subarray forcing': 4075, 'Weighted Subspace Fitting': 4076, 'wind steadiness factor': 
4077, 'water soluble fraction': 4078, 'inferior hepatic veins': 4079, 'inferior haemorrhoidal vein': 4080, 'orbital angular momentum': 4081, 'observatory annual means': 4082, 'castrate resistant prostate cancer': 4083, 'centered rectangular photonic crystal': 4084, 'Joint Domain Localized': 4085, 'Job Decision Latitude': 4086, 'uniform rectangular array': 4087, 'Upstream Regulator Analysis': 4088, 'Upper River Area': 4089, 'Upper Río Agrio': 4090, 'Schauert Wilton Glisson': 4091, 'steel wire grids': 4092, 'compact microstrip resonant cell': 4093, 'cooperative maximum ratio combining': 4094, 'lower heating value': 4095, 'left hepatic vein': 4096, 'Lady Health Visitor': 4097, 'local hidden variable': 4098, 'local haplotype variant': 4099, 'protein misfolding cyclic amplification': 4100, 'plasma membrane calcium ATPase': 4101, 'Plasma Membrane Ca2+ ATPase': 4102, 'penalized weighted least squares': 4103, 'piece wise linear scaling': 4104, 'fatal familial insomnia': 4105, 'femoral flare index': 4106, 'foot function index': 4107, 'feed forward inhibitory': 4108, 'first farrowing interval': 4109, 'intensity correlation quotient': 4110, 'image coding quality': 4111, 'Illness Cognition Questionnaire': 4112, 'periodic acid silver methenamine': 4113, 'P anserina synthetic medium': 4114, 'air dry weight': 4115, 'aseptic distilled water': 4116, 'acid drainage water': 4117, 'fluoride varnish application': 4118, 'Flux Variability Analysis': 4119, 'gene expression dynamic inspector': 4120, 'Global end diastolic index': 4121, 'Network Abstraction Layer': 4122, 'National Aeronautical Laboratory': 4123, 'neuraminic acid lyase': 4124, 'normalized allele length': 4125, 'tumor node metastasis': 4126, 'typed network motif': 4127, 'Traffic Noise Model': 4128, 'Tool Narayanaswamy Moynihan': 4129, 'right portal branch': 4130, 'rotating packed bed': 4131, 'receiver processing blocks': 4132, 'Regional Psychiatry Budget': 4133, 'insulin induced hypoglycemia': 4134, 'ion ion hybrid': 4135, 'idiopathic intracranial hypertension': 4136, 'sweet potato starch syrup': 4137, 'statistical parametric speech synthesis': 4138, 'Superior Performing Statistical Software': 4139, 'hypothalamus pituitary gonadal': 4140, 'homo propargyl glycine': 4141, 'of band gain': 4142, 'optical band gap': 4143, 'Mother to Child Transmission': 4144, 'mean thinnest corneal thickness': 4145, 'short rotation woody crop': 4146, 'soil relative water content': 4147, 'liquid nitrogen temperature': 4148, 'lacto N tetrose': 4149, 'linear no threshold': 4150, 'lateral metal reflective film': 4151, 'label map registration frame': 4152, 'wave intensity analysis': 4153, 'Workforce Investment Act': 4154, 'complement factor H': 4155, 'Cercal filiform hair': 4156, 'cell free hemoglobin': 4157, 'solar PV monitoring system': 4158, 'secondary progressive multiple sclerosis': 4159, 'fine grain optimization': 4160, 'functionalized graphene oxide': 4161, 'Group Based Search Approach': 4162, 'Generalized Born Surface Area': 4163, 'Coarse Grain Optimization': 4164, 'compliance graphene oxide': 4165, 'tip leakage vortex': 4166, 'type length value': 4167, 'Threshold Limit Value': 4168, 'two lung ventilation': 4169, 'total liver volume': 4170, 'active magnetic bearing': 4171, 'additional muscle belly': 4172, 'Aerobic Mesophilic Bacteria': 4173, 'return guide vane': 4174, 'relative gray value': 4175, 'Connected Cardiac Care Program': 4176, 'cylindrical conformal CLD pair': 4177, 'high speed packet access': 4178, 'health system performance assessment': 4179, 'Orthogonal 
Signal Generator': 4180, 'open subscriber group': 4181, 'oxygen supplemented group': 4182, 'Open Science Grid': 4183, 'macro user equipment': 4184, 'mean unsigned error': 4185, 'quadrature mirror filter': 4186, 'quadrupole mass filter': 4187, 'bile duct ligation': 4188, 'block dictionary learning': 4189, 'Long Wave Flume': 4190, 'large white follicles': 4191, 'human plasma proteome project': 4192, 'homogeneous Poisson point process': 4193, 'Optical rotatory dispersion': 4194, 'object relation diagram': 4195, 'oligopeptide repeat domain': 4196, 'other respiratory diseases': 4197, 'Elastica Masson Goldner': 4198, 'exponentially modified Gaussian': 4199, 'early meiotic gene': 4200, 'E muricatus gleba': 4201, 'normal moving variance': 4202, 'normalized methylation value': 4203, 'Narcissus Mosaic Virus': 4204, 'exponential moving variance': 4205, 'Europay MasterCard Visa': 4206, 'estimation metric vector': 4207, 'elm mottle virus': 4208, 'Time to headway': 4209, 'tension type headache': 4210, 'time to hemostasis': 4211, 'Tamale Teaching Hospital': 4212, 'ratiometric vector iteration': 4213, 'ratio vegetation index': 4214, 'indwelling urinary catheter': 4215, 'intra unit cell': 4216, 'infected urinary calculi': 4217, 'Personal Cascade Impactor Sampler': 4218, 'precision cut intestinal slices': 4219, 'left common carotid artery': 4220, 'left circumflex coronary artery': 4221, 'fractional quantum Hall': 4222, 'field quenched history': 4223, 'long range entangled': 4224, 'long range echolocators': 4225, 'LXR responsive element': 4226, 'Coupling between objects': 4227, 'Consensus Based Optimization': 4228, 'Cell Behavior Ontology': 4229, 'community based organization': 4230, 'manifest refractive spherical equivalent': 4231, 'methicillin resistant S epidermidis': 4232, 'Muscle related side effects': 4233, 'Empirical Potential Structure Refinement': 4234, 'estimated potential scaled reduction': 4235, 'volumetric water content': 4236, 'vegetation water content': 4237, 'von Willebrand C': 4238, 'piecewise smooth subdivision surface': 4239, 'patient shoulder synovitis scores': 4240, 'Air Quality System': 4241, 'automatic quadrature scheme': 4242, 'air quality standard': 4243, 'analogue quantum simulator': 4244, 'Close Neighbor Interchange': 4245, 'critical nodes identification': 4246, 'Copy number increase': 4247, 'choline NAA index': 4248, 'Bose Hubbard model': 4249, 'balanced heuristic mechanism': 4250, 'bovine heart mitochondria': 4251, 'dilute Bose gas': 4252, 'de Bruijn graph': 4253, 'heat transfer coefficient': 4254, 'high temperature cell': 4255, 'high throughput cDNA': 4256, 'hard to cook': 4257, 'Neat Heat Flux Reduction': 4258, 'Norwegian Hip Fracture Register': 4259, 'classical swine fever virus': 4260, 'cerebro spinal fluid volume': 4261, 'interaction principal component analysis': 4262, 'Independent Principal Component Analysis': 4263, 'directable needle guide': 4264, 'double negative gate': 4265, 'Diabetic Normotensive Group': 4266, 'insecticide treated net': 4267, 'immune tolerance network': 4268, 'inferior tectal neuroepithelium': 4269, 'Idiopathic trigeminal neuralgia': 4270, 'Retinoic Acid Responsive Element': 4271, 'rapid acquisition relaxation enhanced': 4272, 'Jabalpur prognostic score': 4273, 'joint position sense': 4274, 'Japan Pancreas Society': 4275, 'juvenile polyposis syndrome': 4276, 'John Player Special': 4277, 'log mean temperature difference': 4278, 'lateral mass transverse diameter': 4279, 'female genital cutting': 4280, 'Flagellar gene cluster': 4281, 'Familial 
gigantiform cementoma': 4282, 'Casuarina equisetifolia needle': 4283, 'central executive network': 4284, 'Cumulative Enhancement Norm': 4285, 'next nearest neighbor': 4286, 'Nearest Neighbor Networks': 4287, 'N nitroso nornicotine': 4288, 'zero flow pressure': 4289, 'zinc finger protein': 4290, 'table point feature histogram': 4291, 'Total Posterior Facial Height': 4292, 'European Carotid Surgery Trial': 4293, 'Enhanced Convective Stratiform Technique': 4294, 'Earlier implanted group': 4295, 'early introduction group': 4296, 'adaptive threshold bit flipping': 4297, 'adipose tissue blood flow': 4298, 'breast cancer specific survival': 4299, 'Bridge Collapse Software System': 4300, 'simultaneous planning and mapping': 4301, 'Smart Power Assistance Module': 4302, 'dentin enamel junction': 4303, 'dermal epidermal junction': 4304, 'Low Grade Gliomas': 4305, 'low grade group': 4306, 'lower grade glioblastoma': 4307, 'Low Gleason grade': 4308, 'High Grade Gliomas': 4309, 'high grade group': 4310, 'high Gleason grade': 4311, 'Robot assisted navigation': 4312, 'Radio Access Network': 4313, 'raw areca nut': 4314, 'rapid automatized naming': 4315, 'aromatic polycyclic hydrocarbons': 4316, 'Atypical prostatic hyperplasia': 4317, 'Algoma Public Health': 4318, 'AYB Protein Hydrolysate': 4319, 'A phagocytophilum HZ': 4320, 'imported bancroftian filariasis': 4321, 'intermediate biofilm formation': 4322, 'individual beta frequency': 4323, 'additive genetic values': 4324, 'automated guided vehicle': 4325, 'Autonomous Guided Vehicle': 4326, 'Ahmed glaucoma valve': 4327, 'Semliki Forest virus': 4328, 'shape feature vector': 4329, 'superficial femoral vein': 4330, 'Simian Foamy Virus': 4331, 'slow frequency variability': 4332, 'flying vehicle tracking': 4333, 'flash vacuum thermolysis': 4334, 'fronto ventral transverse': 4335, 'cyclic nucleotide binding domain': 4336, 'Corneal Nerve Branch Density': 4337, 'benzofuran forming fission': 4338, 'Buffalo fetal fibroblasts': 4339, 'medial sural cutaneous nerve': 4340, 'mean subtracted contrast normalized': 4341, 'minimum statistic conjunction null': 4342, 'dorsal skin fold chamber': 4343, 'Distributed Space Frequency Coding': 4344, 'neck vessel ratio': 4345, 'non virological response': 4346, 'No venous resection': 4347, 'non verbal reasoning': 4348, 'gross national income': 4349, 'Global Names Index': 4350, 'Latent Implementation Error Detection': 4351, 'Laser induced electron diffraction': 4352, 'harmony memory consideration rate': 4353, 'hierarchical multivariate curve resolution': 4354, 'adaptive precise integration method': 4355, 'Actor Partner Interdependence Model': 4356, 'centroid point defuzzification method': 4357, 'Consensual Potential Distribution Map': 4358, 'Acceptable Noise Level': 4359, 'Average network lifetime': 4360, 'Argonne National Laboratory': 4361, 'extended Tofts Kety': 4362, 'endothelial tyrosine kinase': 4363, 'human visual system': 4364, 'high voltage side': 4365, 'Hepatitis Vaccine Study': 4366, 'high vaginal swabs': 4367, 'reliability based design optimization': 4368, 'Rare Bone Disorders Ontology': 4369, 'neural network predictive control': 4370, 'nested neutral point clamped': 4371, 'Central Sea Level Pressure': 4372, 'Corn steep liquor powder': 4373, 'traction power supply system': 4374, 'Target Plaque Severity Score': 4375, 'standard finite difference methods': 4376, 'serum free defined medium': 4377, 'visibly pushdown automaton': 4378, 'vigorous physical activity': 4379, 'Variation partitioning analysis': 4380, 'viral plaque 
assays': 4381, 'water maze task': 4382, 'Wechsler Memory Test': 4383, 'Point of Load': 4384, 'performance on line': 4385, 'posterior oblique ligament': 4386, 'completely decomposed granite': 4387, 'Chowghat dwarf green': 4388, 'Computer Dyno Graphy': 4389, 'capacitance diaphragm gauges': 4390, 'coefficient of variation': 4391, 'cut off value': 4392, 'center of ventilation': 4393, 'Improved Single Diode Model': 4394, 'informed shared decision making': 4395, 'rat liver microsome': 4396, 'rat liver mitochondria': 4397, 'RNA ligation mediated': 4398, 'run length matrix': 4399, 'fault slope value': 4400, 'feature selective validation': 4401, 'forward stroke volume': 4402, 'flanking SNPs value': 4403, 'Facilitative supervision visits': 4404, 'data flow diagram': 4405, 'Dahuang Fuzi Decoction': 4406, 'diabetic foot disease': 4407, 'density function distance': 4408, 'homotopy perturbation inversion method': 4409, 'High pressure injection moulding': 4410, 'Continuous Network Design Problem': 4411, 'critical node detection problem': 4412, 'cold neutron depth profiling': 4413, 'Lithium aluminium hydride': 4414, 'linoleic acid hydroperoxide': 4415, 'later arriving herbivores': 4416, 'larval air holes': 4417, 'lever arm helix': 4418, 'Stage specific embryonic antigen': 4419, 'SNP Set Enrichment Analysis': 4420, 'worm like chain': 4421, 'white light cystoscopy': 4422, 'wait list control': 4423, 'weighted linear combination': 4424, 'waiting list condition': 4425, 'exfoliated graphene oxide': 4426, 'Eukaryotic Gene Ortholog': 4427, 'empty fruit bunch': 4428, 'effective first branches': 4429, 'Pongamia Oil Hydroxyl': 4430, 'poor oral hygiene': 4431, 'progressive osseous heteroplasia': 4432, 'pressure overload hypertrophy': 4433, 'internal notched flexure': 4434, 'Internally nucleated fibers': 4435, 'Artificial immune network': 4436, 'acute interstitial nephritis': 4437, 'anterior interosseous nerve': 4438, 'anal intraepithelial neoplasia': 4439, 'anterior interposed nucleus': 4440, 'graphitized carbon black': 4441, 'germinal center B': 4442, 'Global Corruption Barometer': 4443, 'German Corned beef': 4444, 'Gel Code Blue': 4445, 'Optical Projection Tomography': 4446, 'oil palm trunk': 4447, 'oral provocation test': 4448, 'Optimizing PTSD Treatment': 4449, 'Odontoid Process Tangent': 4450, 'Rice husk flour': 4451, 'Recommended Home Fluids': 4452, 'right hind foot': 4453, 'Hydrophilic lipophilic balance': 4454, 'hyperosmolar lysis buffer': 4455, 'Haar Wavelet Descriptors': 4456, 'hot wall deposition': 4457, 'Hardy Weinberg disequilibrium': 4458, 'head worn display': 4459, 'hot water drill': 4460, 'flux switching permanent magnet': 4461, 'forming surface peripheral marrow': 4462, 'Static Switching Pulse Domino': 4463, 'superconducting single photon detector': 4464, 'Glycyrrhiza glabra root': 4465, 'global genome repair': 4466, 'recursive inertial bisection': 4467, 'rigid inflatable boat': 4468, 'rostral intestinal bulb': 4469, 'working heart rate': 4470, 'waist hip ratio': 4471, 'Width Heal Ratio': 4472, 'cuckoo optimization algorithm': 4473, 'comprehensive ophthalmologic assessment': 4474, 'commercial orthodontic adhesive': 4475, 'Peak Side Lobe Ratio': 4476, 'peak sidelobe level ratio': 4477, 'passive straight leg raise': 4478, 'Electro Magnetic Interference': 4479, 'eye mouth index': 4480, 'expectation maximization imputation': 4481, 'Ernst Mach Institute': 4482, 'experimental myocardial ischemia': 4483, 'white noise gain': 4484, 'with no grazing': 4485, 'well to wheels': 4486, 'World Trade Web': 4487, 
'water treatment works': 4488, 'gas diffusion layer': 4489, 'Guideline Definition Language': 4490, 'graduated driver license': 4491, 'Gesture Description Language': 4492, 'hind limb transplantation': 4493, 'High Level Terms': 4494, 'high level trigger': 4495, 'Serra Geral Aquifer System': 4496, 'stochastically globally asymptotically stable': 4497, 'Bangoin Nujiang Thrust': 4498, 'Bayes network toolbox': 4499, 'Boston Naming Test': 4500, 'Renbu Zedong Thrust': 4501, 'rough zone trimming': 4502, 'Middle Kunlun Fault': 4503, 'Mount Kenya Forest': 4504, 'environmental impact factors': 4505, 'extended information filter': 4506, 'elongation initiation factor': 4507, 'ethanol insoluble fraction': 4508, 'end inspiratory flow': 4509, 'Unique Patient Number': 4510, 'Unique Personal Number': 4511, 'Focused Electron Beam': 4512, 'Frequency Estimation Based': 4513, 'Ion Output Rate': 4514, 'informational odds ratio': 4515, 'helix B surface peptide': 4516, 'hepatitis B spliced protein': 4517, 'ion beam etching': 4518, 'Ipsilateral Breast Events': 4519, 'isolation by environment': 4520, 'immune benefit enabled': 4521, 'methyl ethyl ketone': 4522, 'MAPK Erk kinase': 4523, 'marker estimated kinships': 4524, 'Bilayer graphene nanoribbon': 4525, 'basal ganglia network': 4526, 'Binary Vessel Extraction': 4527, 'blood volume expansion': 4528, 'Laplacian of Gaussian': 4529, 'lateral orbital gyrus': 4530, 'covalently imprinted photonic crystal': 4531, 'cumulative installed PV capacity': 4532, 'Vertical aligned graphene': 4533, 'video assisted gastrostomy': 4534, 'backscattered electron image': 4535, 'binding efficiency index': 4536, 'brain efflux index': 4537, 'baroreflex effectiveness index': 4538, 'briefly exposed individuals': 4539, 'Ionic Nanoparticle Network': 4540, 'International Nonproprietary Names': 4541, 'Orbital Inflammatory Syndrome': 4542, 'Ocular ischemic syndrome': 4543, 'optical intrinsic signal': 4544, 'original image space': 4545, 'oncogene induced senescence': 4546, 'vertical banded gastroplasty': 4547, 'variable blazing grating': 4548, 'Violet Bougainvillea glabra': 4549, 'vascularized bone grafts': 4550, 'mean absolute scaled error': 4551, 'mean average square errors': 4552, 'Integrated transmission efficiency': 4553, 'Inlet temperature effect': 4554, 'in line blending': 4555, 'inner leaf blended': 4556, 'initial lung burden': 4557, 'in line blending certification': 4558, 'Infiltrating lobular breast cancer': 4559, 'Delayed graft function': 4560, 'DNase genomic footprinting': 4561, 'Digital Genomic Footprinting': 4562, 'connective tissue mast cells': 4563, 'continuous time Markov chain': 4564, 'Growth by Adjustment': 4565, 'Gamma Brain Activity': 4566, 'Guilt by Association': 4567, 'generalized block assembly': 4568, 'gentamycin blood agar': 4569, 'average conditional exceedance rate': 4570, 'average cost effectiveness ratio': 4571, 'Adaptive Coil Enhancement Reconstruction': 4572, 'bilateral total variation': 4573, 'biological tumor volume': 4574, 'blue tongue virus': 4575, 'high fructose corn syrup': 4576, 'high frequency current switching': 4577, 'hemicellulose free corn stover': 4578, 'Household Food Consumption Survey': 4579, 'Fluoro Jade C': 4580, 'freely jointed chain': 4581, 'gap junction channel': 4582, 'gap junctional communication': 4583, 'Idiopathic inflammatory myopathies': 4584, 'iterative integral method': 4585, 'inferior inner macula': 4586, 'heat inactivated bacteria': 4587, 'heart infusion broth': 4588, 'stress induced hyperthermia': 4589, 'Spontaneous intracranial 
hypotension': 4590, 'single individual haplotyping': 4591, 'Salicylaldehyde isonicotinoyl hydrazone': 4592, 'general discriminant analysis': 4593, 'gradient descent algorithm': 4594, 'Gaussian Discriminant Analysis': 4595, 'Guideline Daily Amount': 4596, 'gastro duodenal artery': 4597, 'multiple organ dysfunction syndrome': 4598, 'microscopic observation drug susceptibility': 4599, 'Multiple Organ Dysfunction Score': 4600, 'unstable angina pectoris': 4601, 'universal amplification primer': 4602, 'feeder circuit breaker': 4603, 'fold change based': 4604, 'Fibrobacteres Chlorobi Bacteroidetes': 4605, 'fine carbon black': 4606, 'optimal social trust path': 4607, 'Output Spike Time Prediction': 4608, 'unilateral salpingo oopherectomies': 4609, 'ulnar shortening osteotomy': 4610, 'ultra stable oscillator': 4611, 'unheated soybean oil': 4612, 'time inferred pattern network': 4613, 'taxane induced peripheral neuropathy': 4614, 'fly optimization algorithm': 4615, 'flow oxygen atmosphere': 4616, 'Fluoro orotic acid': 4617, 'adaptive rood pattern search': 4618, 'Advanced Regional Prediction System': 4619, 'AS RPA PNA SYBR': 4620, 'Alcohol Related Problems Survey': 4621, 'trans membrane pressure': 4622, 'Tape Measure Protein': 4623, 'tympanic membrane perforation': 4624, 'Blast furnace gas': 4625, 'Broyden Fletcher Goldfarb': 4626, 'Barcode Fusion Genetics': 4627, 'global neighborhood search': 4628, 'glioma neural stem': 4629, 'GEOnet Names Service': 4630, 'rank ordered absolute differences': 4631, 'Rural Oregon Academic Detailing': 4632, 'Rice Oligonucleotide Array Database': 4633, 'quantum evolution algorithm': 4634, 'Quantitative Enrichment Analysis': 4635, 'boundary value method': 4636, 'bag valve mask': 4637, 'blood volume monitoring': 4638, 'Hyperlink Induced Topic Search': 4639, 'high intensity transient signals': 4640, 'normalized least mean square': 4641, 'non linear mixed selectivity': 4642, 'Situation interaction quality': 4643, 'sepsis indicating quantifier': 4644, 'fuzzy chance constrained programming': 4645, 'fluoro carbonyl cyanide phenylhydrazone': 4646, 'Urea formaldehyde resin': 4647, 'upstream flanking region': 4648, 'urinary flow rate': 4649, 'hard disk drive': 4650, 'heavy drinking days': 4651, 'Heating Degree Day': 4652, 'Human Development Dynamics': 4653, 'homology dependent deletions': 4654, 'average mean squared error': 4655, 'asymptotic mean square error': 4656, 'texture unit distribution': 4657, 'tetranucleotide usage deviations': 4658, 'Tobacco Use Disorder': 4659, 'concrete faced rockfill dam': 4660, 'cystic fibrosis related diabetes': 4661, 'networked learning control system': 4662, 'Nevus lipomatosus cutaneous superficialis': 4663, 'information security management system': 4664, 'In situ magnetic separation': 4665, 'New Carquinez Bridge': 4666, 'native cancellous bone': 4667, 'needle core biopsy': 4668, 'net clinical benefit': 4669, 'normal cord blood': 4670, 'clustering based niching': 4671, 'conformal Bayesian network': 4672, 'cellular blue nevi': 4673, 'causal biological network': 4674, 'conjugated bond number': 4675, 'logistics service supply chain': 4676, 'Large Sequence Similarity Clusters': 4677, 'genetic particle filter': 4678, 'greater palatine foramen': 4679, 'GSEA Pathway Feature': 4680, 'generalized pupil function': 4681, 'Markov jump system': 4682, 'medial joint space': 4683, 'fuel cell hybrid vehicle': 4684, 'Female Community Health Volunteer': 4685, 'weighted exponential sum': 4686, 'whole exome sequencing': 4687, 'wind energy system': 4688, 'white 
esthetic score': 4689, 'Waupaca Eating Smart': 4690, 'adaptive fuzzy control method': 4691, 'Adaptive Fuzzy C Means': 4692, 'multiple model adaptive control': 4693, 'modified modal assurance criterion': 4694, 'Urban Dynamometer Driving Schedule': 4695, 'unit dose dispensing system': 4696, 'vector field histogram': 4697, 'Viewpoint Feature Histograms': 4698, 'resistance spot welding': 4699, 'reflected short wave': 4700, 'regional stroke work': 4701, 'renal salt wasting': 4702, 'Satellite Tool Kit': 4703, 'serine threonine kinase': 4704, 'Supertree Tool Kit': 4705, 'Global Authentication Register System': 4706, 'Groningen Activities Restriction Scale': 4707, 'Cooperative Braking Control Strategy': 4708, 'composite binary coded symbols': 4709, 'Carolina Breast Cancer Study': 4710, 'arrayed waveguide grating': 4711, 'arbitrary waveform generator': 4712, 'antonym word generation': 4713, 'delayed ischemic neurological deficits': 4714, 'derived intrallelic nucleotide diversity': 4715, 'Latent variable regression': 4716, 'lung volume reduction': 4717, 'linear viscoelastic region': 4718, 'regularized canonical correlation analysis': 4719, 'right common carotid artery': 4720, 'local area network': 4721, 'left anterior negativity': 4722, 'light at night': 4723, 'Vehicle Assembly Building': 4724, 'vibration attraction behavior': 4725, 'sleep disordered breathing': 4726, 'Sabouraud dextrose broth': 4727, 'synthetic differential bathymetry': 4728, 'global wavelet power spectrum': 4729, 'Genome Wide Predictive Study': 4730, 'voxel based analysis': 4731, 'variational Bayesian approximation': 4732, 'really interesting new gene': 4733, 'Rapid iterative negative geotaxis': 4734, 'post stimulus time histogram': 4735, 'peri sniff time histogram': 4736, 'population spike timing histogram': 4737, 'laparoscopic Doppler ultrasound': 4738, 'linkage disequilibrium units': 4739, 'Leishman Donovan units': 4740, 'locked DNA unit': 4741, 'gain of function': 4742, 'goodness of fit': 4743, 'Nck interacting kinase': 4744, 'NetWalker Interactome Knowledgebase': 4745, 'NFκB inducible kinase': 4746, 'NFkappaB inducing kinase': 4747, 'Complex regional pain syndrome': 4748, 'continuous rank probability score': 4749, 'untreated filter paper': 4750, 'Ultra fine particles': 4751, 'Voice Handicap Index': 4752, 'voluntary health insurance': 4753, 'gas exchange threshold': 4754, 'Global Education Trend': 4755, 'gene expression templates': 4756, 'J domain protein': 4757, 'Jun dimerization protein': 4758, 'limits of agreement': 4759, 'line of action': 4760, 'LOSS OF APOMEIOSIS': 4761, 'Volume Rendering Technique': 4762, 'venous refill time': 4763, 'Virtual reality training': 4764, 'reactor core isolation cooling': 4765, 'resident cancer initiating cells': 4766, 'mobile filtration unit': 4767, 'mean fluorescence units': 4768, 'mammosphere forming units': 4769, 'milk forge unit': 4770, 'residual vein obstruction': 4771, 'retinal vein occlusion': 4772, 'left innominate vein': 4773, 'Landolt indicator values': 4774, 'local image variance': 4775, 'louping ill virus': 4776, 'right innominate vein': 4777, 'relative incidence values': 4778, 'Relative Importance Value': 4779, 'moving bed biofilm reactor': 4780, 'Monarch Butterfly Biosphere Reserve': 4781, 'Wechsler Adult Intelligence Scale': 4782, 'West Antarctic Ice Sheet': 4783, 'Vibrio like bacteria': 4784, 'variable length bootstrap': 4785, 'vintage lager beer': 4786, 'viable heterotrophic bacteria': 4787, 'very high bond': 4788, 'European fast reactor': 4789, 'event free rate': 4790, 
'envelope following response': 4791, 'elevated fracture risk': 4792, 'Jordan Subcritical Assembly': 4793, 'Job Seekers Allowance': 4794, 'primary heat transport system': 4795, 'PTEN hamartoma tumour syndrome': 4796, 'butylated hydroxy anisole': 4797, 'bottom hole assembly': 4798, 'bean husk activated': 4799, 'bipolar hip arthroplasty': 4800, 'black hulled awned': 4801, 'electrodeless electrochemical oxidation': 4802, 'end expiratory occlusion': 4803, 'lymph node harvest': 4804, 'Landshut Neuöttinger High': 4805, 'lymphoid nodular hyperplasia': 4806, 'least distance stepwise sampling': 4807, 'low dead space syringes': 4808, 'dynamic load balancing': 4809, 'Data Linkage Branch': 4810, 'tetra oleoyl lysine': 4811, 'Tree of Life': 4812, 'toe off left': 4813, 'inferior fronto occipital': 4814, 'inferior frontal operculum': 4815, 'hydroxyl radical scavenging effect': 4816, 'hypoxia regulated silenced element': 4817, 'ultra wide band': 4818, 'Upper Wabash Basin': 4819, 'Average Well Colour Development': 4820, 'Average Within Cluster Distance': 4821, 'Unified Soil Classification System': 4822, 'ultra spectra communication system': 4823, 'United States Cancer Statistics': 4824, 'urge specific coping skills': 4825, 'So Cheong Ryong Tang': 4826, 'serial choice retention time': 4827, 'short course radiation therapy': 4828, 'Hangai Hentii belt': 4829, 'hemp hurd biomass': 4830, 'Gobi Tienshan belt': 4831, 'ground truth background': 4832, 'Genomic Tumor Board': 4833, 'weak geometry consistency': 4834, 'Wire guided cannulation': 4835, 'gated optical intensifier': 4836, 'gene of interest': 4837, 'proximal femoral nail': 4838, 'passenger flow network': 4839, 'right ventricular activation time': 4840, 'renal visceral adipose tissue': 4841, 'point of use': 4842, 'Pit Oct Unc': 4843, 'Pediatric Oncology Unit': 4844, 'pneumatic climbing maintenance robot': 4845, 'phase contrast magnetic resonance': 4846, 'Olivetti Research Laboratory': 4847, 'outer retinal layers': 4848, 'offset renormalized lognormal': 4849, 'oto rhino laryngologique': 4850, 'student housing quality': 4851, 'Sarcoidosis Health Questionnaire': 4852, 'locust bean gum': 4853, 'linear bone growth': 4854, 'Linde Buzo Gray': 4855, 'Binary full adder': 4856, 'bacterial foraging algorithm': 4857, 'Bayesian fusion algorithm': 4858, 'Bayesian factor analysis': 4859, 'bidirectional Fano algorithm': 4860, 'Double edge notch': 4861, 'Deformable elastic network': 4862, 'differential expression network': 4863, 'upper respiratory tract': 4864, 'urban rail transit': 4865, 'Milky Way galaxy': 4866, 'modified Weibull geometric': 4867, 'composite iliac stem prosthesis': 4868, 'conserved intron scanning primers': 4869, 'Berkley Illinois Maryland Association': 4870, 'bilateral internal mammary artery': 4871, 'integral field unit': 4872, 'inclusion forming units': 4873, 'Intermingled Fractal Units': 4874, 'Initial Follow Up': 4875, 'Instructions for Use': 4876, 'Not Missing at Random': 4877, 'normalized metal artifact reduction': 4878, 'Block Representation Method': 4879, 'Bioinformatics Resource Manager': 4880, 'biotransformation reaction mixture': 4881, 'World Wildlife Fund': 4882, 'whole wheat flour': 4883, 'Todd Hewitt broth': 4884, 'total heterotrophic bacterial': 4885, 'Gene structure display server': 4886, 'General Sleep Disturbance Scale': 4887, 'Spline Fitting Filtering': 4888, 'Solid freeform fabrication': 4889, 'Subtrochanteric femur fracture': 4890, 'echo state network': 4891, 'epidermal sensory neurons': 4892, 'first node dies': 4893, 'fixed node 
degree': 4894, 'flexible numeric display': 4895, 'focal neurological deficit': 4896, 'fractional heat diffusion': 4897, 'femoral head diameter': 4898, 'Cashew nut shell liquid': 4899, 'central nervous system lymphoma': 4900, 'Legendre Gauss Radau': 4901, 'linear growth rate': 4902, 'local gird refinement': 4903, 'proteasome catalyzed peptide splicing': 4904, 'Proteasome Cleavage Prediction Server': 4905, 'Patient Communication Pattern Scale': 4906, 'inside common knowledge': 4907, 'inhibitor cysteine knot': 4908, 'phase congruency statistical features': 4909, 'percutaneous cannulated screw fixation': 4910, 'global edge length': 4911, 'gene expression levels': 4912, 'observed Hubble data': 4913, 'oral hypoglycemic drugs': 4914, 'Krauss Nasri Trodden': 4915, 'Kashmiri Naming Test': 4916, 'linear replaceable unit': 4917, 'least recently used': 4918, 'luciferase relative units': 4919, 'space occupying lesions': 4920, 'sleep onset latency': 4921, 'Small Optic Lobe': 4922, 'scrape off layer': 4923, 'spatial object location': 4924, 'HCF Controlled Channel Access': 4925, 'Heuristic Cluster Chiselling Algorithm': 4926, 'health centre catchment area': 4927, 'retrograde axonal transport': 4928, 'Radon ambiguity transform': 4929, 'rapid antigen test': 4930, 'Repeat allelic type': 4931, 'reactional adipose tissue': 4932, 'Intelligent Usability Evaluation': 4933, 'in utero electroporation': 4934, 'Friedmann Robertson Walker': 4935, 'Flour rich waste': 4936, 'quantum field theory': 4937, 'quantum Fourier transform': 4938, 'quantitative feedback theory': 4939, 'primordial black holes': 4940, 'peptide biphenyl hybrid': 4941, 'Enhanced Vegetation Index': 4942, 'ERG vascular index': 4943, 'Rapid Economic Growth': 4944, 'Renewable Energy Group': 4945, 'Regulatory Element Group': 4946, 'Lake Victoria Basin': 4947, 'Lili Virduli Bulbus': 4948, 'Large vascular bundle': 4949, 'Southern United States': 4950, 'System Usability Scale': 4951, 'steel used stainless': 4952, 'semiorthogonal user selection': 4953, 'Split Ubiquitin System': 4954, 'weighted normalized probability': 4955, 'western North Pacific': 4956, 'Wielkopolski National Park': 4957, 'Vickers hardness numbers': 4958, 'vertical heteroepitaxial nanocomposite': 4959, 'wheat straw ash': 4960, 'WAVE Service Advertisement': 4961, 'winter sport areas': 4962, 'water selective adiabatic': 4963, 'at risk mental state': 4964, 'Agricultural Resource Management Survey': 4965, 'Amplification Refractory Mutation System': 4966, 'Hypertensive heart disease': 4967, 'Hand Held Dynamometer': 4968, 'hand held devices': 4969, 'harmonin homology domain': 4970, 'homofermentative heterofermentative differential': 4971, 'left ventricular thrombus': 4972, 'Live value table': 4973, 'Las Vegas Tissue': 4974, 'latent heat flux': 4975, 'late head fold': 4976, 'ice water content': 4977, 'intermittent warm cardioplegia': 4978, 'North American regional reanalysis': 4979, 'no avoidance response rate': 4980, 'Nonhomogeneous Hidden Markov Model': 4981, 'non hyperdiploid multiple myeloma': 4982, 'northern South China Sea': 4983, 'non specific chronic sialadenitis': 4984, 'Population Growth Rate': 4985, 'plant growth regulator': 4986, 'path generating regulator': 4987, 'pulse group rate': 4988, 'prednisolone good responder': 4989, 'NOAA profiler network': 4990, 'N phenyl naphthylamine': 4991, 'tropical Indian Ocean': 4992, 'Tumor induced osteomalacia': 4993, 'TWO IN ONE': 4994, 'lateral gene transfers': 4995, 'lost goodwill target': 4996, 'linear glandular trichomes': 4997, 'Low Gelling 
Temperature': 4998, 'last universal cellular ancestor': 4999, 'Last Universal Common Ancestor': 5000, 'external beam radiation therapy': 5001, 'empty bed residence time': 5002, 'normal weight concrete': 5003, 'non whisker clipped': 5004, 'metal oxide varistor': 5005, 'most occurrence velocity': 5006, 'MiMiR Ontology Viewer': 5007, 'weathered dolerite aggregate': 5008, 'wavenumber domain algorithm': 5009, 'tissue culture polystyrene plate': 5010, 'tetrakis carboxy phenyl porphyrin': 5011, 'diel vertical migration': 5012, 'discriminative vector machine': 5013, 'dorso ventral muscles': 5014, 'static noise margin': 5015, 'single nucleotide mutation': 5016, 'flexion gap balance': 5017, 'fundamental Gaussian beam': 5018, 'first generation breeding': 5019, 'initial deformation temperature': 5020, 'information dissipation time': 5021, 'elastic perfectly plastic': 5022, 'end plate potentials': 5023, 'Extra pair paternity': 5024, 'events per predictor': 5025, 'Einzel Bild Roentgen Analyse': 5026, 'Energy Balanced Redeployment Algorithm': 5027, 'squirrel cage induction motor': 5028, 'Spinal Cord Independence Measure': 5029, 'Modified Newton Raphson': 5030, 'mean noise reduction': 5031, 'micaceous iron oxide': 5032, 'maximal incisal opening': 5033, 'Myocardial iron overload': 5034, 'laboratory zeolite column': 5035, 'Lempel Ziv complexity': 5036, 'von Barth Hedin': 5037, 'vertebral body height': 5038, 'feather meal broth': 5039, 'fermented mung bean': 5040, 'foramen magnum breadth': 5041, 'wireless mesh network': 5042, 'Western Music Notation': 5043, 'very high throughput': 5044, 'village health team': 5045, 'load shift keying': 5046, 'little spotted kiwi': 5047, 'Lin Sca+c Kit+': 5048, 'Relative fluorescent quantification': 5049, 'radio frequency quadrupole': 5050, 'Nigerian Christian Hospital': 5051, 'no change high': 5052, 'Non chronic hepatitis': 5053, 'vascular vector field': 5054, 'Vascular volume fraction': 5055, '': 5056, 'average speed measuring systems': 5057, 'auction sale management system': 5058, 'nasopharynx vertical distance': 5059, 'Nidus Vespae Decoction': 5060, 'anterior mucosal width': 5061, 'average molecular weight': 5062, 'apical microtubule web': 5063, 'internal mammary nodes': 5064, 'Idiopathic membranous nephropathy': 5065, 'cell counting kit': 5066, 'complementary code keying': 5067, 'Abnormal Involuntary Movement Scale': 5068, 'Afghanistan Information Management Services': 5069, 'Alberta Infant Motor Scale': 5070, 'absolute intrinsic molecular subtyping': 5071, 'benzothiazole ethylenediamine formaldehyde': 5072, 'brightness enhancement film': 5073, 'biomass expansion factor': 5074, 'biodiversity ecosystem functioning': 5075, 'Bovine ephemeral fever': 5076, 'ubiquitin proteasome pathway': 5077, 'Urban population proportion': 5078, 'urethra pressure profiles': 5079, 'relative lacunarity function': 5080, 'Rate level functions': 5081, 'Relative Location Factor': 5082, 'Rossmann like fold': 5083, 'Sperm Chromatin Structure Assay': 5084, 'space confined self assembled': 5085, 'fusion inhibitor peptide': 5086, 'forward inner primer': 5087, 'fecal induced peritonitis': 5088, 'feline infectious peritonitis': 5089, 'carcinogenesis relevance value': 5090, 'cell released virus': 5091, 'clinically relevant variants': 5092, 'forehead head lift': 5093, 'flexor hallucis longus': 5094, 'familial hemophagocytic lymphohistiocytosis': 5095, 'F H line': 5096, 'Formate hydrogen lyase': 5097, 'Flattening Filter Free': 5098, 'flicker fusion frequency': 5099, 'field flow fractionation': 5100, 'right 
anterior hippocampus': 5101, 'rich amphipathic helix': 5102, 'right posterior hippocampus': 5103, 'resin Protium heptaphyllum': 5104, 'Relative Peak Height': 5105, 'provincial maternal mortality ratio': 5106, 'post mortem magnetic resonance': 5107, 'dental audio neutral': 5108, 'dorsal attention network': 5109, 'Deep Averaging Network': 5110, 'deformable angular network': 5111, 'automatic largest slice selection': 5112, 'artificial liver support system': 5113, 'primary neutralizing epitope': 5114, 'planar Nernst effect': 5115, 'posterior nuclear extremity': 5116, 'prenatal nicotinic exposure': 5117, 'Lagos bat virus': 5118, 'Laplacian boundary value': 5119, 'Guided Regeneration Gel': 5120, 'grey relational grade': 5121, 'grey relevancy grade': 5122, 'exposure buildup factor': 5123, 'exclusive breast feeding': 5124, 'equine bronchial fibroblasts': 5125, 'extracorporeal blood flow': 5126, 'blood pool contrast medium': 5127, 'bilayered porcine collagen matrix': 5128, 'Foam cell formation': 5129, 'first cardiac field': 5130, 'fortified complementary food': 5131, 'Block flow diagram': 5132, 'body fat distribution': 5133, 'mechanical vapor recompression': 5134, 'Maternal verbal report': 5135, 'mitral valve replacement': 5136, 'motor vehicle repair': 5137, 'neutral density targets': 5138, 'network dwell time': 5139, 'nil ductility transition': 5140, 'navicular drop test': 5141, 'normal distant tissue': 5142, 'Dioclea rostrata lectin': 5143, 'driven right leg': 5144, 'deterministic record linkage': 5145, 'deep retinal layer': 5146, 'diagnostic reference level': 5147, 'Dioclea guianensis lectin': 5148, 'Destination Gene List': 5149, 'diffuse Galactic light': 5150, 'Dioclea violacea lectin': 5151, 'Doppler velocity log': 5152, 'Vatairea macrocarpa lectin': 5153, 'Volumetric muscle loss': 5154, 'vastus medialis longus': 5155, 'ultrahigh hydrostatic pressure': 5156, 'urea hydrogen peroxide': 5157, 'Ultra high purity': 5158, 'autologous bone marrow concentrate': 5159, 'Agent Based Monte Carlo': 5160, 'Human Phenotype Ontology': 5161, 'hybrid polylingual object': 5162, 'high pressure oxidation': 5163, 'nerve peptide Y': 5164, 'non PAR Y': 5165, 'peripheral nerve injury': 5166, 'prognostic nutritional index': 5167, 'Equine arteritis virus': 5168, 'Entity Attribute Value': 5169, 'end acceleration velocity': 5170, 'fresh frozen bone': 5171, 'Fourier Filter Banks': 5172, 'Fresh Fruit Bunches': 5173, 'female flower bud': 5174, 'washed homogenate extract': 5175, 'Water Hygienization Equipment': 5176, 'hip knee angle': 5177, 'Hudson Kreitman Aguadé': 5178, 'D cysteine desulfhydrase': 5179, 'Directional coupling device': 5180, 'distal cone diameter': 5181, 'distributed cluster designing': 5182, 'dead cells density': 5183, 'Burrows Wheeler aligner': 5184, 'broadband wireless access': 5185, 'BOADICEA Web Application': 5186, 'body water available': 5187, 'mismatch amplification mutation assay': 5188, 'modified alternating minimization algorithm': 5189, 'mid arm muscle area': 5190, 'substance use disorder': 5191, 'single user detection': 5192, 'Sheep Unit Days': 5193, 'Substance Use Dependence': 5194, 'sudden unexplained death': 5195, 'immature myeloid information': 5196, 'Intrinsic Motivation Inventory': 5197, 'intra mammary infections': 5198, 'Inflammatory Marker Index': 5199, 'fennel aqueous seed extract': 5200, 'free air self extinguishment': 5201, 'Adaptive Poisson Boltzmann solver': 5202, 'Architectural protein binding site': 5203, 'wash in time': 5204, 'walking InCHIANTI toolkit': 5205, 'Warm ischemic time': 
5206, 'weighted interaction torques': 5207, 'weapons identification task': 5208, 'image Web server': 5209, 'Intelligent Wheelchair System': 5210, 'Multiple hereditary exostosis': 5211, 'Minimal hepatic encephalopathy': 5212, 'match hour exposure': 5213, 'matured hop extract': 5214, 'High intensity light pulses': 5215, 'hyperthermic isolated limb perfusion': 5216, 'Electrical Cell Impedance Sensing': 5217, 'Electric cell–substrate impedance sensing': 5218, 'first heart field': 5219, 'filoviral haemorrhagic fever': 5220, 'Fulminant hepatic failure': 5221, 'normalized noise power spectrum': 5222, 'Nestlé Nutritional Profiling System': 5223, 'high mobility group': 5224, 'high motion group': 5225, 'hamstring muscle group': 5226, 'Growth Hormone Receptor': 5227, 'Glutathionyl Hydroquinone Reductase': 5228, 'Magnetic resonance spectroscopy imaging': 5229, 'Multiclass Rule Set Intersection': 5230, 'lamprey liver mitochondria': 5231, 'Long Lasting Memories': 5232, 'Local Labor Markets': 5233, 'Logic Learning Machines': 5234, 'leg lean mass': 5235, 'Citronella essential oil': 5236, 'Chief Executive Officer': 5237, 'combinatorial entropy optimization': 5238, 'chamomile essential oil': 5239, 'carrier envelope offset': 5240, 'excreted urine volume': 5241, 'equivalent uniform voltage': 5242, 'expected utility value': 5243, 'plasmodial surface anion channel': 5244, 'pre school age children': 5245, 'T follicular helper': 5246, 'typical flood hydrograph': 5247, 'hydroxyapatite gelatin calcium silicate': 5248, 'human gene connectome server': 5249, 'relative attachment level': 5250, 'relative antibody level': 5251, 'Relative Adduct Labeling': 5252, 'Rutherford Appleton Laboratory': 5253, 'free amino nitrogen': 5254, 'functional association network': 5255, 'fall artificial nutrients': 5256, 'ventral cochlear nucleus': 5257, 'Virtual Cloud Network': 5258, 'vector copy numbers': 5259, 'ventral cervical nerve': 5260, 'ventricular filling rate': 5261, 'visceral fat removal': 5262, 'vertical flow reactor': 5263, 'vortex formation ratio': 5264, 'vascular flow reserve': 5265, 'minimum spanning network': 5266, 'mesoporous silica nanoparticle': 5267, 'medial septal nucleus': 5268, 'Most Similar Neighbor': 5269, 'Medium spiny neurons': 5270, 'million years ago': 5271, 'Monitor Your Avatar': 5272, 'intrinsic connectivity network': 5273, 'interference corrected network': 5274, 'Idiopathic congenital nystagmus': 5275, 'Intravenous lipid emulsion': 5276, 'ionic liquid electrolyte': 5277, 'isotope labeling experiments': 5278, 'INTEGRON LIKE ELEMENT': 5279, 'Balkan endemic nephropathy': 5280, 'band eliminated noise': 5281, 'deep dorsal vein': 5282, 'drug delivery vehicle': 5283, 'Drop Down Video': 5284, 'Eye Nose Throat': 5285, 'equilibrative nucleoside transporter': 5286, 'contrast enhanced ultrasound': 5287, 'cluster edge users': 5288, 'acute hepatitis B': 5289, 'abductor hallucis brevis': 5290, 'Africanized honey bees': 5291, 'over representation analysis': 5292, 'ocular response analyzer': 5293, 'anti HM124 mAb': 5294, 'anti hypertensive medications': 5295, 'Army Half Marathon': 5296, 'adult haematological malignancy': 5297, 'redundant nuclear envelope': 5298, 'relative neighbor effect': 5299, 'open arm entries': 5300, 'of adverse events': 5301, 'genetic algorithm optimisation': 5302, 'glycogen accumulating organisms': 5303, 'Government Accountability Office': 5304, 'systolic blood pressure variability': 5305, 'slow bee paralysis virus': 5306, 'letter matching task': 5307, 'logistic models trees': 5308, 'left main 
trunk': 5309, 'T wave alternans': 5310, 'total wrist arthroplasty': 5311, 'time weighted average': 5312, 'NEDD8 activating enzyme': 5313, 'Normalized Absolute Error': 5314, 'net acid excretion': 5315, 'neural autoregulatory enhancer': 5316, 'Non amblyopic eye': 5317, 'Rift valley fever': 5318, 'right visual field': 5319, 'Research Vitro Fert': 5320, 'Long Term irregularity': 5321, 'linear time invariant': 5322, 'Long term infected': 5323, 'level tuning index': 5324, 'low temperature induced': 5325, 'gamma normal gamma': 5326, 'Growing Neural Gas': 5327, 'inspiratory positive airway pressure': 5328, 'internal phloem associated parenchyma': 5329, 'high fat cholesterol': 5330, 'hydrodynamic flow confinement': 5331, 'high frequency components': 5332, 'Hydroxylamine ferric chloride': 5333, 'high flux control': 5334, 'Acquired brachial cutaneous dyschromatosis': 5335, 'Avidin biotin complex DNA': 5336, 'Vehicle Identification Number': 5337, 'Vulvar intraepithelial neoplasia': 5338, 'ventral posterior lateral': 5339, 'vehicular penetration loss': 5340, 'red light running': 5341, 'RIG like receptor': 5342, 'Regularized Logistic Regression': 5343, 'revised local reference': 5344, 'gray relational analysis': 5345, 'Gamma ray attenuation': 5346, 'Granule Release Assay': 5347, 'timed inspiratory effort': 5348, 'Toxicity Identification Evaluation': 5349, 'Tip Induced Electrospinning': 5350, 'functional network connectivity': 5351, 'furthest neighbour criterion': 5352, 'Fine needle cytology': 5353, 'family nutrition climate': 5354, 'middle quality control': 5355, 'Multiple Quantum Coherence': 5356, 'Maximum Quartet Consistency': 5357, 'core needle biopsy': 5358, 'cyclic nucleotide binding': 5359, 'caloric nutritional beverages': 5360, 'pathogen associated molecular pattern': 5361, 'primer approximation multiplex PCR': 5362, 'Bone marrow biopsy': 5363, 'Bergen Marine Biobank': 5364, 'right common ostium': 5365, 'random control oligonucleotides': 5366, 'relative contact order': 5367, 'mean occupation layer': 5368, 'muscle of Lawrence': 5369, 'mild intrusive genetic algorithm': 5370, 'multi island genetic algorithm': 5371, 'triplet Markov field': 5372, 'trial master file': 5373, 'Humphrey visual field': 5374, 'Human vaginal fluid': 5375, 'higher visual functions': 5376, 'left ovarian vein': 5377, 'Light oxygen voltage': 5378, 'single incision laparoscopic cholecystectomy': 5379, 'stress induced leakage current': 5380, 'hierarchical artificial bee colony': 5381, 'Heuristic Artificial Bee Colony': 5382, 'hierarchical approximate Bayesian computation': 5383, 'gas volume score': 5384, 'Globe Visualization System': 5385, 'Galvanic vestibular stimulation': 5386, 'genome variation server': 5387, 'Substrate uncoupler inhibitor titration': 5388, 'spatially unbiased infratentorial template': 5389, 'unspecified routine pharmacotherapy': 5390, 'universal rice primers': 5391, 'electric multiple unit': 5392, 'Eastern Mediterranean University': 5393, 'energy migration upconversion': 5394, 'elementary metabolite units': 5395, 'Early morning urine': 5396, 'rank sparsity tensor decomposition': 5397, 'reference signal time difference': 5398, 'Nelumbo nucifera Gaertner': 5399, 'nontoxic nodular goiter': 5400, 'Akebia trifoliate seed extract': 5401, 'antibody templated strand exchange': 5402, 'germinated mung bean': 5403, 'gametic mutation box': 5404, 'geniposide loaded PLGA film': 5405, 'Gaussian low pass filtering': 5406, 'Viscum album extract': 5407, 'vaccine adverse events': 5408, 'variable angle epifluorescence': 5409, 
'artificial gastric juice': 5410, 'annular gap junction': 5411, 'Brazilian red propolis': 5412, 'bit reversal permutation': 5413, 'Breit Rabi polarimeter': 5414, 'bracket removal plier': 5415, 'biological redox potential': 5416, 'Jirisan National Park': 5417, 'Jeju Native Pig': 5418, 'Wen Dan Decoction': 5419, 'Wigner distribution deconvolution': 5420, 'Van der Waals': 5421, 'viscoelastic damping wall': 5422, 'Thyme essential oil': 5423, 'Teager energy operator': 5424, 'Time Event Ontology': 5425, 'weeks of age': 5426, 'weak organic acids': 5427, 'World Ocean Atlas': 5428, 'Olive leaf extract': 5429, 'open label extension': 5430, 'optimal linear estimator': 5431, 'optical linear encoder': 5432, 'liver fibrosis index': 5433, 'Large Fragment Intensity': 5434, 'Lateral flow immunoassay': 5435, 'Large fish indicator': 5436, 'left ventricular weight': 5437, 'Leptospira Vanaporn Wuthiekanun': 5438, 'lateral ventricle wall': 5439, 'laser beam welding': 5440, 'low birth weight': 5441, 'Lepore Boston Washington': 5442, 'Lean body weight': 5443, 'finite Hankel transform': 5444, 'Fangji Huangqi Tang': 5445, 'family health team': 5446, 'face hand test': 5447, 'Free Hand Tool': 5448, 'spatial division multiple access': 5449, 'symmetric di methyl arginine': 5450, 'United States dollar': 5451, 'ultra scale down': 5452, 'displaced phase center antenna': 5453, 'dynamic principal component analysis': 5454, 'Discriminant Principle component analyses': 5455, 'concentric circular antenna array': 5456, 'canonical circuit activity analysis': 5457, 'Cultural Historical Activity Theory': 5458, 'Clonal Heterogeneity Analysis Tool': 5459, 'enhanced delay and sum': 5460, 'eco driving assistance systems': 5461, 'encephalo duro arterio synangiosis': 5462, 'high dependency units': 5463, 'hard drugs use': 5464, 'primary human hepatocytes': 5465, 'pectin honey hydrogel': 5466, 'Post haemorrhagic hydrocephalus': 5467, 'damage presence probability index': 5468, 'Drosophila protein protein interaction': 5469, 'full field digital mammography': 5470, 'freedom from distant metastasis': 5471, 'fiber orientation distribution': 5472, 'fuzzy oil drop': 5473, 'segmentation validation engine': 5474, 'singular value estimation': 5475, 'soil vapor extraction': 5476, 'Rao Wilton Glisson': 5477, 'Resonance Waveguide Grating': 5478, 'raw wheat germ': 5479, 'charge reuse analog Fourier transform': 5480, 'Colorado Richly Annotated Full Text': 5481, 'Community Reinforcement and Family Training': 5482, 'air quality monitoring': 5483, 'active queue management': 5484, 'Sakurajima Volcanological Observatory': 5485, 'symmetric vector ordering': 5486, 'social value orientation': 5487, 'Range Walk Ratio': 5488, 'root weight ratio': 5489, 'resistance wheel running': 5490, 'uniform square array': 5491, 'united signature algorithm': 5492, 'Groninger intelligence test': 5493, 'gastro intestinal tract': 5494, 'genotype independent test': 5495, 'geometrically impossible topologies': 5496, 'choroid plexus epithelial cells': 5497, 'Circular Polymerase Extension Cloning': 5498, 'Palm tree male flower': 5499, 'patterned thin metal film': 5500, 'Dental Health Component': 5501, 'Differential hemocyte count': 5502, 'double hopping communication': 5503, 'dynein heavy chain': 5504, 'Direct Hill Climbing': 5505, 'key population indicators': 5506, 'Key Performance Indicator': 5507, 'Kyoto Prognostic Index': 5508, 'Korean prognostic index': 5509, 'Kunitz protease inhibitor': 5510, 'critical flicker frequency': 5511, 'Close formation flight': 5512, 'Wavelet Packets 
Decomposition': 5513, 'White Plague disease': 5514, 'weighted Petri dish': 5515, 'fluorescence line height': 5516, 'freezing layer height': 5517, 'Formal language hierarchy': 5518, 'Institut Pierre Simon Laplace': 5519, 'Indo Pakistani Sign Language': 5520, 'electromagnetic stir casting samples': 5521, 'enhanced S cone syndrome': 5522, 'uniformly distributed load': 5523, 'unconstrained distributed lag': 5524, 'peak height velocity': 5525, 'percutaneous heart valves': 5526, 'primary head vein': 5527, 'Prosthesis Heart Valve': 5528, 'Controlled ovarian hyperstimulation': 5529, 'centrally obese healthy': 5530, 'coefficient of haze': 5531, 'voltage variation ratio': 5532, 'Virus Variation Resources': 5533, 'electron transporting layer': 5534, 'echo train length': 5535, 'Extract Transform Load': 5536, 'Electrically Tunable Lens': 5537, 'variable pumping frequency': 5538, 'viscous potential flow': 5539, 'virtual projection function': 5540, 'anterior superior iliac spine': 5541, 'automated surface inspection systems': 5542, 'annual severity increment score': 5543, 'Inverted Pendulum Standing Apparatus': 5544, 'Inverse planning simulated annealing': 5545, 'diencephalic mesencephalic boundary': 5546, 'distal main branch': 5547, 'Dragon Motif Builder': 5548, 'Davis Minimal Broth': 5549, 'lateral longitudinal fascicle': 5550, 'lung lavage fluid': 5551, 'lesion localization fraction': 5552, 'LUNG LINING FLUID': 5553, 'leaf litter fungi': 5554, 'bidirectional ventricular tachycardia': 5555, 'Basilic vein transposition': 5556, 'incubator dependent neonates': 5557, 'interaction difference network': 5558, 'brightness scale value': 5559, 'bounded support vectors': 5560, 'blood stage vaccines': 5561, 'Bremer support values': 5562, 'Instituto Geográfico Nacional': 5563, 'Iodine Global Network': 5564, 'Institut Géographique National': 5565, 'damage quantification index': 5566, 'Dickson Quality Index': 5567, 'diet quality index': 5568, 'periodic traveling wave': 5569, 'Powered two wheelers': 5570, 'Polygala tenuifolia Willd': 5571, 'Inter Destination Multimedia Synchronization': 5572, 'Interstory Drift Mode Shape': 5573, 'Intelligent Decision making System': 5574, 'isotope dilution mass spectrometry': 5575, 'Laplacian Mean Squared Error': 5576, 'least mean square errors': 5577, 'lateral scapular slide test': 5578, 'lake surface skin temperature': 5579, 'Vacuum hot pressing': 5580, 'Visible Human Project': 5581, 'very high priority': 5582, 'vasa hyaloidea propria': 5583, 'villin head piece': 5584, 'intravenous drug user': 5585, 'Injecting drug use': 5586, 'excitatory burst neuron': 5587, 'Edible bird’s nest': 5588, 'egg bearing needles': 5589, 'Serum uric acid': 5590, 'single unit activity': 5591, 'Synthetic Universal Amplicon': 5592, 'Deep infiltrating endometriosis': 5593, 'dorsal intermediate entorhinal': 5594, 'Acid Unhydrolyzable Residue': 5595, 'ammonium uptake rates': 5596, 'Acute urinary retention': 5597, 'asymmetric use rate': 5598, 'active joint count': 5599, 'apical junctional complex': 5600, 'Albert Johnson Creek': 5601, 'apple juice concentrate': 5602, 'ankle joint complex': 5603, 'superior ophthalmic vein': 5604, 'Sinus of Valsalva': 5605, 'sum of violations': 5606, 'stem outer vegetative': 5607, 'Dynamic Random Access Memory': 5608, 'Delayed Rejection Adaptive Metropolis': 5609, 'damage regulated autophagy modulator': 5610, 'Reinforcement Based Treatment': 5611, 'Rose Bengal test': 5612, 'repetitive brain trauma': 5613, 'rectal balloon training': 5614, 'extended state observer': 5615, 'European 
Southern Observatory': 5616, 'multiset canonical correlation analysis': 5617, 'mesh coordinated channel access': 5618, 'generalized hyperbolic distribution': 5619, 'growth hormone deficiency': 5620, 'Generalised Hamming Distance': 5621, 'prior knowledge input': 5622, 'Public Key Infrastructure': 5623, 'protein kinase inhibitor': 5624, 'Dynamic positional warping': 5625, 'Dwarf polish wheat': 5626, 'dorsal posterior wall': 5627, 'days post wound': 5628, 'The Hybrid Model': 5629, 'traditional herbal medicines': 5630, 'bronchial smooth muscle cells': 5631, 'bacterial sequential Markov coalescent': 5632, 'vector insertion site': 5633, 'vibration isolation slot': 5634, 'Vasoactive inotropic score': 5635, 'Vaccine Information Statements': 5636, 'Visible Imaging Spectrometer': 5637, 'quartz crystal resonator': 5638, 'quantum circuit refrigerator': 5639, 'plane project parallel skyline': 5640, 'putative protease processing site': 5641, 'post plasmoid plasma sheet': 5642, 'globally optimal algorithm': 5643, 'Gene Ontology Annotation': 5644, 'general older adult': 5645, 'geometric orifice area': 5646, 'Gulf of Aden': 5647, 'convex nonnegative matrix factorization': 5648, 'clingstone non melting flesh': 5649, 'steer by wire': 5650, 'System Biology Workbench': 5651, 'autonomous emergency braking': 5652, 'acute eccentric bout': 5653, 'Euclidean gradient algorithm': 5654, 'evolved gas analysis': 5655, 'estimated gestational age': 5656, 'embryonic genome activation': 5657, 'new subspace iteration method': 5658, 'Neurogram Similarity Index Measure': 5659, 'extended Hamiltonian algorithm': 5660, 'Enumerative Heuristic Algorithm': 5661, 'Environmental Home Assessment': 5662, 'Decision Making Unit': 5663, 'diesel multiple unit': 5664, 'Dalian Medical University': 5665, 'vessel traffic system': 5666, 'virtual typical subject': 5667, 'vector Taylor series': 5668, 'vacuolar transport signal': 5669, 'variable terminal structure': 5670, 'inner radiation belt': 5671, 'Institutional Review Board': 5672, 'Identical Repeated Backbone': 5673, 'optimized link state routing': 5674, 'ordinary least squares regression': 5675, 'special function unit': 5676, 'spot forming units': 5677, 'Simon Fraser University': 5678, 'Separated Random User Scheduling': 5679, 'Solitary rectal ulcer syndrome': 5680, 'Earliest Deadline First': 5681, 'empirical distribution function': 5682, 'endodermal damage fraction': 5683, 'early diverging fungal': 5684, 'Extracellular Death Factor': 5685, 'variable structure filter': 5686, 'visual script familiarity': 5687, 'Vascular stromal fraction': 5688, 'ventricular shortening fraction': 5689, 'vaginal simulant fluid': 5690, 'two fluid model': 5691, 'total fat mass': 5692, 'thick film microscopy': 5693, 'defected stub loaded resonator': 5694, 'digital single lens reflex': 5695, 'Kernel Particle Filter': 5696, 'Knitted Piezoresistive Fabric': 5697, 'unified registration model': 5698, 'unexplained recurrent miscarriage': 5699, 'upstream regulatory module': 5700, 'dispersed particle gel': 5701, 'days post germination': 5702, 'diastolic pressure gradient': 5703, 'TIRAP inhibitory peptide': 5704, 'tonoplast intrinsic protein': 5705, 'Tertiary industry proportion': 5706, 'Tourniquet inflation pressure': 5707, 'tubularized incised plate': 5708, 'Heat moisture treatment': 5709, 'histone methyl transferases': 5710, 'hidden Markov tree': 5711, 'impaired glucose regulation': 5712, 'isorhamnetin glucosyl rhamnoside': 5713, 'inert gas rebreathing': 5714, 'intergenic genomic region': 5715, 'formalin killed 
bacteria': 5716, 'final kissing balloon': 5717, 'immunoglobulin like transcript': 5718, 'Intra luminal Thrombus': 5719, 'Inverse Laplace Transform': 5720, 'Intra Luminal Tread': 5721, 'glucose infusion rate': 5722, 'groupwise image registration': 5723, 'global immunological risk': 5724, 'growth inhibitory rate': 5725, 'synovial fluid mononuclear cells': 5726, 'soluble fibrin monomer complex': 5727, 'congenital generalized lipodystrophy': 5728, 'Chebyshev Gauss Lobatto': 5729, 'Canavalia gladiata lectin': 5730, 'cystathionine gamma lyase': 5731, 'island arc basalt': 5732, 'individual alpha band': 5733, 'Impurity added benzene': 5734, 'Industrial Advisory Board': 5735, 'onion like carbon': 5736, 'Open Lung Concept': 5737, 'overlap layout consensus': 5738, 'chitosan modified kaolinite': 5739, 'curative malaria kit': 5740, 'amorphous calcium silicate hydrate': 5741, 'AFEX corn stover hydrolysate': 5742, 'Elongation at Break': 5743, 'emerald ash borer': 5744, 'tube in tube': 5745, 'turbine inlet temperature': 5746, 'tissue invasion type': 5747, 'Water absorbing mass': 5748, 'weight adjusted model': 5749, 'variable speed limit': 5750, 'variable stem loop': 5751, 'Current imaging tunneling spectroscopy': 5752, 'climate induced toxicant sensitivities': 5753, 'Surface mechanical attrition treatment': 5754, 'single match assigned tags': 5755, 'Electrohydrodynamic direct write': 5756, 'exponential directional weighted': 5757, 'Enterprise Data Warehouse': 5758, 'exponential decay waveform': 5759, 'ordinary magnesium hydroxide': 5760, 'Organic Mental Health': 5761, 'ultrafine magnesium hydroxide': 5762, 'Ulmus macrocarpa Hance': 5763, 'General well being': 5764, 'George Washington Bridge': 5765, 'low salinity water injection': 5766, 'Lean stratified water injection': 5767, 'exercised tucumã group': 5768, 'exercise training group': 5769, 'Low tension glaucoma': 5770, 'lag time group': 5771, 'laparoscopic total gastrectomy': 5772, 'probabilistic principal component analysis': 5773, 'protective protein cathepsin A': 5774, 'Phylogenetic Principal Component Analyses': 5775, 'Chinese Spectral Radio Heliograph': 5776, 'current spatial receding horizon': 5777, 'Nicholson Ross Weir': 5778, 'North Rhine Westphalia': 5779, 'Non Reserve West': 5780, 'Familial retinal artery macroaneurysm': 5781, 'Functional Resonance Analysis Method': 5782, 'minimal visible lesion': 5783, 'mitral valve lesion': 5784, 'Microcystis viridis lectin': 5785, 'smooth pursuit eye movements': 5786, 'spasmolytic polypeptide expressing metaplasia': 5787, 'Normal human melanocytes': 5788, 'Non hydrostatic Model': 5789, 'Natural History Museum': 5790, 'normal human muscle': 5791, 'heat transfer fluid': 5792, 'human Tenon fibroblast': 5793, 'high transmission fitness': 5794, 'natural gas combined cycle': 5795, 'New Guinea Coastal Current': 5796, 'Transducer Electronic Data Sheets': 5797, 'Twin Early Development Study': 5798, 'optical imaging equipment': 5799, 'obligate insect endosymbionts': 5800, 'yeast peptone dextrose': 5801, 'Yeast Proteome Database': 5802, 'gas exchange abnormality': 5803, 'global endoscopic assessment': 5804, 'genotype expression association': 5805, 'Green Economy Act': 5806, 'garlic extract aged': 5807, 'tubule interstitial nephritis': 5808, 'transcript integrity number': 5809, 'totally intronic noncoding': 5810, 'treatment induced necrosis': 5811, 'temperature integrating neurons': 5812, 'medial basal hypothalamus': 5813, 'membrane bound hydrogenase': 5814, 'mouse brain homogenate': 5815, 'Multi Breath Hold': 5816, 
'time to win': 5817, 'travel to work': 5818, 'central dispatching control center': 5819, 'complement dependent cellular cytotoxicity': 5820, 'West Texas Intermediate': 5821, 'West Texas international': 5822, 'ordinal optimization algorithm': 5823, 'Object Oriented Analysis': 5824, 'out of Africa': 5825, 'Old Order Amish': 5826, 'Olkusz ore area': 5827, 'net positive suction head': 5828, 'north Pacific subtropical high': 5829, 'video superresolution reconstruction': 5830, 'vascular structural remodeling': 5831, 'video service range': 5832, 'ventricular septal rupture': 5833, 'vacuolar sorting receptor': 5834, 'main vacuum interrupter': 5835, 'mitral valve inflow': 5836, 'macroscopic vascular invasion': 5837, 'local discriminant bases': 5838, 'Light Dark Box': 5839, 'Local Data Base': 5840, 'kernel anisotropic diffusion': 5841, 'Kyoto Apc Delta': 5842, 'knobbed acrosome defect': 5843, 'mean excess travel times': 5844, 'Micro Expression Training Tool': 5845, 'fuzzy color histogram': 5846, 'first capsule height': 5847, 'Fundeni Clinical Hospital': 5848, 'total generalized variation': 5849, 'thoracic gas volume': 5850, 'true genetic value': 5851, 'trans Golgi vesicles': 5852, 'hybrid magnetic bearing': 5853, 'human melanin black': 5854, 'Heavy menstrual bleeding': 5855, 'Half Moon Bay': 5856, 'radial magnetic bearing': 5857, 'Restricted Mobility Based': 5858, 'right main bronchus': 5859, 'mean absolute log error': 5860, 'major adverse limb events': 5861, 'Electric Water Pump': 5862, 'egg white proteins': 5863, 'electron wave packets': 5864, 'Broyden Fletcher Goldfarb Shanno': 5865, 'bipolar fuzzy graph structure': 5866, 'maximum gradient orientation': 5867, 'Modular Groundwater Optimization': 5868, 'marine gas oil': 5869, 'test task scheduling problem': 5870, 'time to symptomatic progression': 5871, 'variable neighborhood MOEA/D': 5872, 'von Neumann Morgenstern': 5873, 'element free Galerkin': 5874, 'electric field gradient': 5875, 'bioinspired intelligence optimization': 5876, 'binocular indirect ophthalmoscope': 5877, 'two way stop controlled': 5878, 'terrestrial water storage change': 5879, 'low rank subspace clustering': 5880, 'local random sparse coding': 5881, 'orthogonal scene motion pattern': 5882, 'orbital selective Mott phase': 5883, 'Adaptive Inverse Hyperbolic Tangent': 5884, 'Analysis Iterative Hard Thresholding': 5885, 'angular random walk': 5886, 'Advanced Research WRF': 5887, 'average residue weight': 5888, 'maximum link utilization': 5889, 'Madrid Land Use': 5890, 'mean light units': 5891, 'Tsing Ma Bridge': 5892, 'Transient monocular blindness': 5893, 'tumor mutational burden': 5894, 'timber market B': 5895, 'Thailand Myanmar border': 5896, 'Ting Kau Bridge': 5897, 'tyrosine kinase binding': 5898, 'effective condition number': 5899, 'explicit congestion notification': 5900, 'external cuneate nucleus': 5901, 'executive control network': 5902, 'permanent magnetic synchronous generator': 5903, 'Pregnant mare serum gonadotropin': 5904, 'acoustic Doppler velocimetry': 5905, 'Accumulated drain volume': 5906, 'acoustic droplet vaporization': 5907, 'van der Pol': 5908, 'Variable determinant position': 5909, 'Ventilation defect percent': 5910, 'virtual motion camouflage': 5911, 'Vanderbilt Medical Center': 5912, 'Visuo motor control': 5913, 'Vascular mesenchymal cell': 5914, 'voluntary muscular contraction': 5915, 'hybrid electric bus': 5916, 'Hawaiian Emperor Bend': 5917, 'nonlinear inertia convergence classification model': 5918, 'normalized instantaneous channel correlation matrix': 
5919, 'glutaminyl hydroxy benzene': 5920, 'glazed hollow bead': 5921, 'tool workpiece thermocouple technique': 5922, 'two way travel time': 5923, 'nonlinear disturbance observer': 5924, 'novel dispensation order': 5925, 'neurogenic detrusor overactivity': 5926, 'gate location changing': 5927, 'gas liquid chromatography': 5928, 'glaucomatous LC cells': 5929, 'Object Management Group': 5930, 'oligodendrocyte myelin glycoprotein': 5931, 'Illness Perception Questionnaire': 5932, 'iGroup Presence Questionnaire': 5933, 'resting metabolic rate': 5934, 'renal mass reduction': 5935, 'rock mass rating': 5936, 'relative mortality ratio': 5937, 'rhythmic masking release': 5938, 'Chemlali Olive Leaf Extract': 5939, 'Carbon On Line Estimator': 5940, 'hierarchical word list': 5941, 'high workload level': 5942, 'head withdrawal latency': 5943, 'high weight loss': 5944, 'Peripheral diabetic neuropathy': 5945, 'pull down network': 5946, 'palmar digital nerve': 5947, 'penta galloyl glucose': 5948, 'public goods game': 5949, 'Principal Genetic Groups': 5950, 'Methyl Water Ratio': 5951, 'millimeter wave radiometer': 5952, 'multi way relaying': 5953, 'Poly Unsaturated Index': 5954, 'primary user interface': 5955, 'pollen unilateral incompatibility': 5956, 'Passive ultrasonic irrigation': 5957, 'peri urban interface': 5958, 'rapidly growing mycobacteria': 5959, 'regular glucose medium': 5960, 'reactive gaseous mercury': 5961, 'repulsive guidance molecules': 5962, 'social interaction anxiety scale': 5963, 'Stroke Impairment Assessment Set': 5964, 'Break Up Time': 5965, 'benign uterine tumor': 5966, 'Body Uneasiness Test': 5967, 'de novo lipogenesis': 5968, 'differential non linearity': 5969, 'disseminated necrotizing leukoencephalopathy': 5970, 'dynamic noise level': 5971, 'dynamic gait index': 5972, 'Dietary Guideline Index': 5973, 'domain general index': 5974, 'demand gas inlet': 5975, 'functional gait assessment': 5976, 'fuzzy genetic algorithm': 5977, 'first generation antipsychotics': 5978, 'feature generation algorithm': 5979, 'autologous whole blood': 5980, 'automatic white balance': 5981, 'aberrant drug behaviors': 5982, 'Asian Dryland Belt': 5983, 'Advanced Dive Behavior': 5984, 'alveolar duct bifurcations': 5985, 'the Atlanta Cardiomyopathy Consortium': 5986, 'Texas Advanced Computing Center': 5987, 'Transforming Acidic Coiled Coil': 5988, 'GAP related domain': 5989, 'Green River District': 5990, 'ground radiation dose': 5991, 'glycine rich domains': 5992, 'Genetic Renal Disease': 5993, 'remote handling robot': 5994, 'resting heart rate': 5995, 'integral force feedback': 5996, 'instantaneous firing frequency': 5997, 'interstitial fluid flow': 5998, 'In field failure': 5999, 'Interferon gamma release assays': 6000, 'Integrated Global Radiosonde Archive': 6001, 'clipped on off': 6002, 'cycling ozone oxidation': 6003, 'cell of origin': 6004, 'depth of burial': 6005, 'distal oblique bundle': 6006, 'date of birth': 6007, 'delta over baseline': 6008, 'In vivo Imaging System': 6009, 'in vehicle information system': 6010, 'in vitro injection system': 6011, 'In Vitro Irritancy Score': 6012, 'ultrasonic spectral imaging': 6013, 'UTR shortening index': 6014, 'universal salt iodation': 6015, 'Runyang Suspension Bridge': 6016, 'residual stromal bed': 6017, 'Rice straw biomass': 6018, 'hierarchical energy tree': 6019, 'hypoxic exercise tolerance': 6020, 'oxygen extraction fraction': 6021, 'Operation Enduring Freedom': 6022, 'normal ground plane': 6023, 'northern Great Plains': 6024, 'Networked Gene Prioritizer': 
6025, 'Nymphalid ground plan': 6026, 'integer wavelet transform': 6027, 'intrusive welded tuff': 6028, 'ice water test': 6029, 'interval walking training': 6030, 'intermediate water temperature': 6031, 'block artificial grids': 6032, 'Bandwidth Allocation Gap': 6033, 'flexible intramedullary nailing': 6034, 'fronto insular network': 6035, 'volumetric ring array': 6036, 'Ventral Root Avulsion': 6037, 'Volta River Authority': 6038, 'variably rectifying astrocyte': 6039, 'gait energy image': 6040, 'gene environment interactions': 6041, 'Gene Expression Index': 6042, 'Gender Equity Index': 6043, 'gross energy intake': 6044, 'mean gray level': 6045, 'module graphical lasso': 6046, 'macrophage galactose lectin': 6047, 'Pacific Argo Regional Center': 6048, 'Peninsula Aquatic Recreation Centre': 6049, 'free space optical': 6050, 'fresh soy oil': 6051, 'traditional histogram equalization': 6052, 'Total Health Expenditure': 6053, 'treated hospital effluent': 6054, 'total hemodynamic energy': 6055, 'topological Hall effect': 6056, 'Space Time Block Coding': 6057, 'Still There By Chance': 6058, 'mean absolute relative error': 6059, 'mobile augmented reality education': 6060, 'major adverse renal event': 6061, 'geospatial processing workflow': 6062, 'Gale Portage Water': 6063, 'gas phase water': 6064, 'vasa vasorum interna': 6065, 'Vector velocity imaging': 6066, 'ventricular ventricular interaction': 6067, 'wing in ground': 6068, 'water insoluble glucan': 6069, 'progressive aerobic cardiovascular endurance run': 6070, 'Progressive Aerobic Capacity Endurance Run': 6071, 'cyanoacrylate skin surface stripping': 6072, 'curcumin solubilized surfactant solution': 6073, 'unit load method': 6074, 'U2AF ligand motif': 6075, 'k shortest paths': 6076, 'kinesin spindle protein': 6077, 'pin through hole': 6078, 'primary therapeutic hypothermia': 6079, 'post traumatic headache': 6080, 'primary Tupaia hepatocytes': 6081, 'peptidyl tRNA hydrolase': 6082, 'no green vegetation': 6083, 'normalized global variance': 6084, 'Fish growth hormone': 6085, 'formyl glutathione hydrolase': 6086, 'Kellgren Lawrence Score': 6087, 'Krug Large Seed': 6088, 'gated master slave latch': 6089, 'global mean sea level': 6090, 'Orthogonal Centroid Feature Selection': 6091, 'open cell free synthesis': 6092, 'Oral cancer free survival': 6093, 'Mandarin Affective Speech Corpus': 6094, 'Mammary analogue secretory carcinoma': 6095, 'Mars Atmosphere Simulation Chamber': 6096, 'MAGUK Associated Signalling Complexes': 6097, 'multiplex allele specific colony': 6098, 'triadic game design': 6099, 'tumor growth delay': 6100, 'natural basaltic aggregate': 6101, 'new bone area': 6102, 'National Basketball Association': 6103, 'Nile blue A': 6104, 'prestressed unbounded reinforcement': 6105, 'Pesticide Use Reporting': 6106, 'pile up rejection': 6107, 'Estimated Prime Factor': 6108, 'election priority factor': 6109, 'Epimedium pubescen flavonoid': 6110, 'epidermal patterning factor': 6111, 'early preneoplastic foci': 6112, 'protocol data unit': 6113, 'Phantom Data Usage': 6114, 'process development unit': 6115, 'problematic drug use': 6116, 'network management unit': 6117, 'Nara Medical University': 6118, 'trehalose untreated eyes': 6119, 'transpiration use efficiency': 6120, 'plant polyphenol oxidase': 6121, 'plastic pyrolysis oil': 6122, 'peak power output': 6123, 'predicted post operative': 6124, 'Provincial Project Officers': 6125, 'blood perfusion unit': 6126, 'Blood percent unit': 6127, 'background parenchymal uptake': 6128, 'Component based software 
engineering': 6129, 'Caryocar brasiliense supercritical extract': 6130, 'multiple constrained shortest path': 6131, 'Melanoma chondroitin sulfate proteoglycan': 6132, 'guided filter fusion': 6133, 'glass fiber filter': 6134, 'General Feature Format': 6135, 'Interaction Prediction Optimization': 6136, 'interdecadal Pacific oscillation': 6137, 'input process output': 6138, 'Multidisciplinary Design Optimization': 6139, 'Marine Diesel Oil': 6140, 'mid diencephalic organizer': 6141, 'successful packet received rate': 6142, 'serine proline rich region': 6143, 'native entity object': 6144, 'nerve end organs': 6145, 'Network Edge Orienting': 6146, 'NASA Earth Observations': 6147, 'single amino acid chelator': 6148, 'split amino acid composition': 6149, 'weight on bit': 6150, 'washed oil body': 6151, 'work of breathing': 6152, 'bond order wave': 6153, 'bag of words': 6154, 'maximum midexpiratory flow rate': 6155, 'Matang Mangrove Forest Reserve': 6156, 'Lille apathy rating scale': 6157, 'Ligament Augmentation Reconstruction System': 6158, 'Long Ashton Research Station': 6159, 'biomass power generation': 6160, 'benzathine penicillin G': 6161, 'Brain Powered Games': 6162, 'bovine platelet gel': 6163, 'Csharp analytic hierarchy process': 6164, 'Cardiac Arrest Hospital Prognosis': 6165, 'biased random walk': 6166, 'Brown Roberts Wells': 6167, 'Hydrous ferric oxide': 6168, 'High frequency oscillation': 6169, 'heavy fuel oil': 6170, 'Incremental Tournament Local Searcher': 6171, 'Induced tumor like structures': 6172, 'Inverse Nyquist Array': 6173, 'ice nucleation activity': 6174, 'integrated neighborhood approach': 6175, 'Forward collision warning': 6176, 'field cooled warming': 6177, 'forwards compression waves': 6178, 'fresh coconut water': 6179, 'genuine accept rate': 6180, 'geodesic active region': 6181, 'generalized anti Robinson': 6182, 'Gait Assistance Robot': 6183, 'chemical kinetics approach': 6184, 'centered kernel alignment': 6185, 'cancer killing activity': 6186, 'adaptive cuckoo search algorithm': 6187, 'Anatomical cross sectional area': 6188, 'image quality assessment': 6189, 'interacting quantum atoms': 6190, 'tidal creek bed': 6191, 'Tabular cross bedding': 6192, 'total culturable bacteria': 6193, 'opalized white tuff': 6194, 'old wild type': 6195, 'open window thoracostomy': 6196, 'overground walking test': 6197, 'overlapping government combination': 6198, 'Open Geospatial Consortium': 6199, 'oriented graph cut': 6200, 'oxygen gas concentration': 6201, 'quantization error compensation': 6202, 'quick exposure check': 6203, 'quantum error correction': 6204, 'Engineering change orders': 6205, 'endocrine cerebro osteodysplasia': 6206, 'EPA Camelina oil': 6207, 'emergent charge ordered': 6208, 'Receiver decoding block': 6209, 'Rouge de Bordeaux': 6210, 'outer transverse diameter': 6211, 'Observed Time Difference': 6212, 'Oomycete Transcriptomics Database': 6213, 'Optical Technology Division': 6214, 'lateral wall thickness': 6215, 'lifting wavelet transform': 6216, 'low wait times': 6217, 'London wild type': 6218, 'Foramen Magnum Height': 6219, 'Functional Mental Health': 6220, 'Feto maternal hemorrhage': 6221, 'forage maturation hypothesis': 6222, 'Foramen Magnum Width': 6223, 'Flexible membrane wing': 6224, 'first mitotic wave': 6225, 'X ray telescope': 6226, 'X ray tomography': 6227, 'fast testing method': 6228, 'frequency time matrix': 6229, 'female to male': 6230, 'uncorrelated shrunken centroid': 6231, 'United Subset Consensus': 6232, 'Uterine serous carcinoma': 6233, 'Gabor Wavelet 
Transform': 6234, 'Gabor Wigner transform': 6235, 'pavement maintenance management systems': 6236, 'peptide mediated magnetic separation': 6237, 'normalized difference snow index': 6238, 'Nepean Dyspepsia Symptom Index': 6239, 'optical Kerr effect': 6240, 'ocean kinetic energy': 6241, 'Brookhaven National Laboratory': 6242, 'benthic nepheloid layer': 6243, 'single ozone oxidation': 6244, 'single objective optimisation': 6245, 'Global Atmosphere Watch': 6246, 'glottal area waveform': 6247, 'Genetic Analysis Workshop': 6248, 'observation minus reanalysis': 6249, 'omnidirectional mobile robot': 6250, 'Optical Music Recognition': 6251, 'benign nodular hyperplasia': 6252, 'branched nanowire heterostructure': 6253, 'atypical adenomatous hyperplasia': 6254, 'Agave americana heart': 6255, 'acute alcoholic hepatitis': 6256, 'Soil Water Assessment Tool': 6257, 'Subjective Workload Assessment Technique': 6258, 'subcutaneous white adipose tissue': 6259, 'pulmonary veno occlusive disease': 6260, 'peripheral vascular occlusive disease': 6261, 'Yellow Fever Virus': 6262, 'yellow fever vaccine': 6263, 'set top box': 6264, 'Septoria tritici blotch': 6265, 'serum total bilirubin': 6266, 'Stiftung Tumorbank Basel': 6267, 'Smaller The Better': 6268, 'Dvali Gabadadze Porrati': 6269, 'Data Generating Process': 6270, 'Biosphere Atmosphere Transfer Scheme': 6271, 'Brisbane Adolescent Twin Study': 6272, 'Three Gorges Project': 6273, 'Thousand Genomes Project': 6274, 'tibial growth plate': 6275, 'targeted gene panels': 6276, 'time generation place': 6277, 'Pacific Decadal Oscillation': 6278, 'periosteal distraction osteogenesis': 6279, 'protein disulfide oxidoreductase': 6280, 'Gas assisted injection molding': 6281, 'Global Assimilative Ionospheric Model': 6282, 'western Sichuan Basin': 6283, 'working seed bank': 6284, 'Abrasive jet machining': 6285, 'absolute joint moment': 6286, 'polymer modified bitumen': 6287, 'Pellet manure biochar': 6288, 'proximal main branch': 6289, 'polymer light emitting diode': 6290, 'periodic lateralized epileptiform discharges': 6291, 'Atlantic Meridional Overturning Circulation': 6292, 'Activity Monitoring Operating Characteristic': 6293, 'New Simplified Arakawa Schubert': 6294, 'non substance abusing schizophrenics': 6295, 'National Scenic Area Songhuahu': 6296, 'generalized stacking fault': 6297, 'Great Sumatran fault': 6298, 'Graph Structured Features': 6299, 'Gold Standards Framework': 6300, 'Positive Feedback Adiabatic Logic': 6301, 'product family assembly line': 6302, 'chemical exchange saturation transfer': 6303, 'condensation extraction steam turbogenerator': 6304, 'Central European Summer Time': 6305, 'Brewers spent grain': 6306, 'boron silicate glass': 6307, 'Viscosity ageing index': 6308, 'visceral adipose index': 6309, 'laser lift off': 6310, 'lipid linked oligosaccharide': 6311, 'Oligonucleotide Ligation Assay': 6312, 'Oued Laou Area': 6313, 'ovine Lymphocyte Antigen': 6314, 'optic lobe anlage': 6315, 'block swapping operator': 6316, 'Brain Storm Optimization': 6317, 'Bacterial Swarm Optimization': 6318, 'bismuth silicon oxide': 6319, 'Barents Sea Opening': 6320, 'average length of stay': 6321, 'Advanced Land Observing Satellite': 6322, 'hypergame expected utility': 6323, 'HIV exposed uninfected': 6324, 'indicated specific fuel consumption': 6325, 'inter subject functional correlation': 6326, 'Himalayan Frontal Thrust': 6327, 'Hidden Figures Test': 6328, 'high flow therapy': 6329, 'hydraulic fracture test': 6330, 'High frequency trading': 6331, 'Robotic Assistive 
Transfer Device': 6332, 'retronasal aroma trapping device': 6333, 'paired associative learning test': 6334, 'pineal associated lymphoid tissue': 6335, 'multiple dose group': 6336, 'Mean Decrease Gini': 6337, 'Millennium Development Goal': 6338, 'ezrin radixin moesin': 6339, 'explicit range match': 6340, 'equal rate Markov': 6341, 'enhanced replacement method': 6342, 'Quadrato motor training': 6343, 'quantitative muscle testing': 6344, 'quartenized maize tassels': 6345, 'mild therapeutic hypothermia': 6346, 'major teaching hospitals': 6347, 'Mammalian two hybrid': 6348, 'mild temperature hyperthermia': 6349, 'dried urine spots': 6350, 'DNA uptake sequence': 6351, 'low motion group': 6352, 'lactating mammary glands': 6353, 'Total red blood cells': 6354, 'thermal radiation barrier coating': 6355, 'Cumulative Intersection Level': 6356, 'conserved intracellular loop': 6357, 'cathode interface layer': 6358, 'Hanging Wire Test': 6359, 'harmonic wavelet transform': 6360, 'high waitlist time': 6361, 'head withdrawal threshold': 6362, 'general fitness training': 6363, 'generalized Fourier transformation': 6364, 'Google Flu Trends': 6365, 'inferior vena cava clamping': 6366, 'Innovative Vector Control Consortium': 6367, 'nucleotide exchange factor': 6368, 'normal endothelial function': 6369, 'Yellow River Estuary': 6370, 'Yap1 response element': 6371, 'Tinnitus Handicap Questionnaires': 6372, 'Target Hazard Quotiens': 6373, 'Single Bond Universal': 6374, 'secondary building units': 6375, 'Stony Brook University': 6376, 'Phase sensitive inversion recovery': 6377, 'post swallow impedance ratio': 6378, 'posterior pharyngeal wall': 6379, 'points per wavelength': 6380, 'Propodeal posterior width': 6381, 'Preoperative Peritoneal Washes': 6382, 'intermediate molecular weight': 6383, 'inner myocardial wall': 6384, 'older high performers': 6385, 'oscillating heat pipe': 6386, 'outer Helmholtz plane': 6387, 'old hypertensive patients': 6388, 'Online Health Portfolio': 6389, 'direct vertebral derotation': 6390, 'Double variable domain': 6391, 'double vessel disease': 6392, 'digital video disc': 6393, 'upper instrumented screw': 6394, 'unpredictable irregular surface': 6395, 'genetic model exclusion': 6396, 'Generic Modeling Environment': 6397, 'guanidino modifying enzyme': 6398, 'G mangostana extracts': 6399, 'genuine multipartite entanglement': 6400, 'Nearest Neighbor Interchange': 6401, 'nitrogen nutrition index': 6402, 'non nucleoside inhibitors': 6403, 'National Nanotechnology Initiative': 6404, 'no neural invasion': 6405, 'Internet gaming disorder': 6406, 'inverted generational distance': 6407, 'inpatient gradual diagnostics': 6408, 'wall motion index': 6409, 'Working Memory Index': 6410, 'white matter injury': 6411, 'bone marrow mononuclear cell': 6412, 'bone marrow–derived mast cells': 6413, 'bone marrow mesenchymal cells': 6414, 'British Columbia Multiple Sclerosis': 6415, 'British Cattle Movement Service': 6416, 'upper airway resistance': 6417, 'unweighted average recall': 6418, 'right hepatic lobe': 6419, 'residual heterozygous lines': 6420, 'right heart lesions': 6421, 'root hair length': 6422, 'basal phenotype breast cancer': 6423, 'bilateral primary breast cancers': 6424, 'normalized brain volume': 6425, 'narrow band VLBI': 6426, 'Nelson Bay virus': 6427, 'word list generation': 6428, 'walking leg ganglion': 6429, 'weight loss group': 6430, 'Oregano essential oil': 6431, 'one end open': 6432, 'Resistance Gene Identifier': 6433, 'root growth inhibition': 6434, 'root galling index': 6435, 'stable 
luciferase orange': 6436, 'scanning laser ophthalmoscopy': 6437, 'Single Link Optimization': 6438, 'secondary lymphoid organs': 6439, 'Wilson Central Terminal': 6440, 'Wireless Communication Technologies': 6441, 'West coast tall': 6442, 'westslope cutthroat trout': 6443, 'Wide complex tachycardia': 6444, 'autologous serum skin test': 6445, 'Artificial Society Situation Tool': 6446, 'attentional set shifting task': 6447, 'volume doubling time': 6448, 'visual display terminal': 6449, 'vibration detection threshold': 6450, 'muscle utilisation ratio': 6451, 'muscle uptake rate': 6452, 'MCPyV unique region': 6453, 'upper prediction limit': 6454, 'ulcer projection lesion': 6455, 'Universal Probe Library': 6456, 'inferior caval vein': 6457, 'intra cranial volume': 6458, 'international corporate volunteering': 6459, 'intra cellular volume': 6460, 'keratinocyte conditioned medium': 6461, 'Kitaev Chain Model': 6462, 'kinase control module': 6463, 'Goud Saraswat Brahmin': 6464, 'green sulphur bacteria': 6465, 'Glucose Salts Biotin': 6466, 'ground state bleaching': 6467, 'inducible nitric oxide synthase': 6468, 'intrinsic nucleosome occupancy score': 6469, 'no family history': 6470, 'near field holography': 6471, 'multiple kernel learning': 6472, 'minimum k labeling': 6473, 'non Hispanic white': 6474, 'nonlinear Hammerstein Wiener': 6475, 'natural hydrogen water': 6476, 'Enzyme Linked Aptamer Sorbent Assay': 6477, 'enzyme linked activity sorbent assay': 6478, 'Web of Science': 6479, 'windows of susceptibility': 6480, 'gum elastic bougie': 6481, 'Genome Environment Browser': 6482, 'Gene Expression Browser': 6483, 'Gastrodia elata Blume': 6484, 'proton resonance frequency shift': 6485, 'parotid relapse free survival': 6486, 'very high valence': 6487, 'village health volunteer': 6488, 'White Light Optical Profiling': 6489, 'Weighted Locally Optimal Projection': 6490, 'ice cold water': 6491, 'inner cell wall': 6492, 'Intercellular calcium waves': 6493, 'In cell Western': 6494, 'False negative rate': 6495, 'formal networking research': 6496, 'ferredoxin NADP reductase': 6497, 'ferromagnetic nuclear resonance': 6498, 'equivalent input noise': 6499, 'Enzyme Interaction Networks': 6500, 'isorhamnetin glucosyl rhamnosyl pentoside': 6501, 'Information Governance Review Panel': 6502, 'Opuntia ficus indica': 6503, 'other febrile illnesses': 6504, 'medial nasal process': 6505, 'monolayer neural precursors': 6506, 'myenteric nerve plexus': 6507, 'Mole National Park': 6508, 'ischemic boundary zone': 6509, 'infarct border zone': 6510, 'global brain connectivity': 6511, 'globose basal cells': 6512, 'relative wall thickness': 6513, 'Radon WVD transform': 6514, 'Real World Task': 6515, 'radial water tread': 6516, 'mean alveolar numbers': 6517, 'medial amygdalar nucleus': 6518, 'Maritime Aerosol Network': 6519, 'mouse brain organotypic': 6520, 'Migrating Birds Optimization': 6521, 'modified Broström operation': 6522, 'Mattis Dementia Rating Scale': 6523, 'Mars Desert Research Station': 6524, 'methacholine dose response slope': 6525, 'Malawi Diabetic Retinopathy Study': 6526, 'early HIV infection': 6527, 'Environmental Health Institute': 6528, 'exertional heat illness': 6529, 'Exposure Hazard Index': 6530, 'Early Healing Index': 6531, 'circle of Willis': 6532, 'correlation optimized warping': 6533, 'computers on wheels': 6534, 'coherent one way': 6535, 'pulmonary venous bed': 6536, 'Parece Vela Basin': 6537, 'premature ventricular beats': 6538, 'systemic venous bed': 6539, 'small vascular bundle': 6540, 'dual dictionary 
learning': 6541, 'Digital Delay Line': 6542, 'average maximum principal strain': 6543, 'Automated Mechanical Peripheral Stimulation': 6544, 'Front Rigid Barrier': 6545, 'fermented rice bran': 6546, 'FKBP12 rapamycin binding': 6547, 'Offset Deformable Barrier': 6548, 'observed differential bathymetry': 6549, 'chest wall line': 6550, 'Critical weight loss': 6551, 'Chemical Wall Loosening': 6552, 'cool white light': 6553, 'artificial immune recognition system': 6554, 'Aerometric Information Retrieval System': 6555, 'multiplicative intrinsic component optimization': 6556, 'minimally invasive cardiac output': 6557, 'descending thin limb': 6558, 'drift tube LINAC': 6559, 'duplication transfer loss': 6560, 'Dawson Trick Litzkow': 6561, 'diagnosis time lag': 6562, 'lateral olfactory tract': 6563, 'leak off tests': 6564, 'line of therapy': 6565, 'Left Occipito Temporal': 6566, 'local histogram equalization': 6567, 'light harvesting efficiency': 6568, 'left hand end': 6569, 'Vehicle Routing Problem': 6570, 'ventral rostral putamen': 6571, 'Vertical Reference Plane': 6572, 'kernel temporal differences': 6573, 'kidney tubular dysfunction': 6574, 'hidden semi Markov model': 6575, 'Human skeletal muscle myoblasts': 6576, 'damage associated molecular patterns': 6577, 'Distributed Applications Management Platform': 6578, 'Drug name recognition': 6579, 'Do not resuscitate': 6580, 'subepithelial connective tissue graft': 6581, 'single cognitive training group': 6582, 'differential algebraic reconstruction technique': 6583, 'dual affinity re targeting': 6584, 'Dental Alcohol Reduction Trial': 6585, 'vector field convolution': 6586, 'ventral frontal cortex': 6587, 'extensor hallucis brevis': 6588, 'European honey bees': 6589, 'medical intensive care unit': 6590, 'Mobile Intensive Care Unit': 6591, 'Pulmonary hyalinizing granuloma': 6592, 'Portal Hypertensive Gastropathy': 6593, 'Public Health Genomics': 6594, 'magnetic resonance urography': 6595, 'Meningococcal Reference Unit': 6596, 'mammary repopulating unit': 6597, 'chaotic switched turbo code': 6598, 'cortico striatal thalamo cortical': 6599, 'infarct related artery': 6600, 'Insulin Receptor A': 6601, 'Innate responsive activator': 6602, 'irregular repeat accumulate': 6603, 'infections respiratoires aiguës': 6604, 'Cytosine Adenine Thymine Thymine': 6605, 'Cochran Armitage trend test': 6606, 'Gestational trophoblastic disease': 6607, 'genotype trait distortion': 6608, 'gene transposition duplication': 6609, 'acute low back pain': 6610, 'adipocyte lipid binding protein': 6611, 'nipple aspirate fluid': 6612, 'North Anatolian Fault': 6613, 'Northern Atlantic Forest': 6614, 'nuclear area factor': 6615, 'purple sweet potato leaves': 6616, 'positional scanning peptide library': 6617, 'posterior superior parietal lobule': 6618, 'Elevated Body Swing Test': 6619, 'elevated biased swing test': 6620, 'of Tropidurus hispidus': 6621, 'over the horizon': 6622, 'other teaching hospitals': 6623, 'Jinqian Baihua She': 6624, 'junction barrier Schottky': 6625, 'juvenile bypass system': 6626, 'Japanese Burnout Scale': 6627, 'nociceptive withdrawal reflex': 6628, 'National Wildlife Refuge': 6629, 'non word repetition': 6630, 'Rhus chinensis gall': 6631, 'ray cell groups': 6632, 'relative cell growth': 6633, 'Shuang Huang Lian': 6634, 'secondary hepatic lymphoma': 6635, 'Super Helical Locations': 6636, 'Shimodaira Hasegawa Like': 6637, 'sensorineural hearing loss': 6638, 'acidic vesicular organelle': 6639, 'Alaska Volcano Observatory': 6640, 'aortic valve opening': 6641, 
'autophagic vacuole organelle': 6642, 'Quorum sensing inhibitory': 6643, 'Quick Search Interface': 6644, 'Sjögren syndrome dry eye': 6645, 'size specific dose estimates': 6646, 'left cardiac work': 6647, 'local cluster weighting': 6648, 'H erinaceus Extracts': 6649, 'hexylphosphonate ethyl ester': 6650, 'human endometrial epithelial': 6651, 'hot ethanolic extract': 6652, 'Shi Du Ruan Gao': 6653, 'semantic description relation graph': 6654, 'Polygonum multiflorum Radix Preparata': 6655, 'Personalized Medicine Research Project': 6656, 'obesity related glomerulopathy': 6657, 'olfactory receptor gene': 6658, 'Old Rio Grande': 6659, 'osteogenic related genes': 6660, 'ordered region growing': 6661, 'gambogic acid lysinate': 6662, 'gradient adaptive lattice': 6663, 'general agricultural land': 6664, 'Genepix Array List': 6665, 'mesenteric vein thrombosis': 6666, 'Mississippi Valley type': 6667, 'maximal voluntary torque': 6668, 'joint clustering model': 6669, 'Jose Carlos Mariátegui': 6670, 'tie line length': 6671, 'Thermomyces lanuginosus lipase': 6672, 'total lesion load': 6673, 'Tomonaga Luttinger liquid': 6674, 'T lymphoblastic leukemia/lymphoma': 6675, 'Turbo Inversion Recovery Magnitude': 6676, 'total internal reflection microscopy': 6677, 'Far Field Diffraction Pattern': 6678, 'freedom from disease progression': 6679, 'Chungnam National University': 6680, 'canopy N uptake': 6681, 'smoked rice seed coat': 6682, 'size related shape change': 6683, 'spatially regularized spectral clustering': 6684, 'begin of life': 6685, 'basion opisthion line': 6686, 'stir bar sorptive extraction': 6687, 'Search Based Software Engineering': 6688, 'Modulated Wideband Converter': 6689, 'Myanmar West Coast': 6690, 'Modal weighting coefficient': 6691, 'Monod Wyman Changeux': 6692, 'maximum water content': 6693, 'wind driven optimization': 6694, 'W disjoint orthogonality': 6695, 'Bahia Costal Forest': 6696, 'bio concentration factor': 6697, 'beat cross frequency': 6698, 'Burnett County Forest': 6699, 'baseline carried forward': 6700, 'nearest taxon index': 6701, 'Normalized Tree Index': 6702, 'normal tissue index': 6703, 'nerve terminal impulse': 6704, 'Fusion Bond Epoxy': 6705, 'fructose bisphosphate enolase': 6706, 'fluorescence brightness equation': 6707, 'fermented blueberry extract': 6708, 'fruiting body extract': 6709, 'hidden node margin': 6710, 'Heterotrophic nitrification medium': 6711, 'passive DNS graph': 6712, 'program dependence graph': 6713, 'Protein Design Group': 6714, 'positionable node ratio': 6715, 'polarized neutron reflectivity': 6716, 'Powdermill Nature Reserve': 6717, 'photon number resolving': 6718, 'optical distribution network': 6719, 'Original Domain Neighborhood': 6720, 'oligo deoxy nucleotide': 6721, 'Ordnance Datum Newlyn': 6722, 'Chernoff upper bound': 6723, 'codon usage bias': 6724, 'C1r/C1s UEGF BMP1': 6725, 'closing unpaired bases': 6726, 'electric Hertzian dipole': 6727, 'edge histogram descriptor': 6728, 'Eps15 homology domain': 6729, 'Wannier Stark ladder': 6730, 'white spot lesions': 6731, 'Experience Weighted Attraction': 6732, 'equal weighted average': 6733, 'Password Protected Key': 6734, 'photoregulatory protein kinases': 6735, 'micro arc oxidation': 6736, 'medial accessory olive': 6737, 'metabolically abnormal obese': 6738, 'maximum movement boundary': 6739, 'micro mass body': 6740, 'mixed mode bending': 6741, 'Mindful Mood Balance': 6742, 'feature processing block': 6743, 'flexor pollicis brevis': 6744, 'target number error': 6745, 'tube nozzle electrospinning': 6746, 
'transplant non eligible': 6747, 'total nitrogen excretion': 6748, 'temperate needleleaved evergreen': 6749, 'highly trusted network': 6750, 'Hierarchical Task Network': 6751, 'hyalinizing trabecular neoplasm': 6752, 'high throughput neutralization': 6753, 'Quantal response equilibrium': 6754, 'QKI response element': 6755, 'without monitor mechanism': 6756, 'weight matrix method': 6757, 'Watson mixture model': 6758, 'enhanced distributed channel access': 6759, 'Enhanced Distributed Coordination Access': 6760, 'Fast Comprehensive Outlier Detection': 6761, 'Florid cemento osseous dysplasia': 6762, 'Iterative Hard Thresholding': 6763, 'intra hospital transport': 6764, 'intermittent hypoxia treatment': 6765, 'inter hemispheric transfer': 6766, 'minimum connected dominating set': 6767, 'master coding DNA sequence': 6768, 'Monte Carlo damage simulation': 6769, 'maximum weighted link scheduling': 6770, 'meshless weighted least squares': 6771, 'contents retrieval routing path': 6772, 'clinically relevant radiographic progression': 6773, 'multiple object detection accuracy': 6774, 'Microbial Oxidative Degradation Analyzer': 6775, 'Membrane Optimum Docking Area': 6776, 'thermal electric generator': 6777, 'Total Extracellular Glutathione': 6778, 'Node Activation Multiple Access': 6779, 'No Arbuscular Mycorrhizal Access': 6780, 'Monitoring Query Management': 6781, 'Multiple QTL Mapping': 6782, 'multiple QTL models': 6783, 'multiple measurement vector': 6784, 'Mouse Minute Virus': 6785, 'myxomatous mitral valves': 6786, 'mirabilis mosaic virus': 6787, 'Marisma mosquito virus': 6788, 'coding tree unit': 6789, 'Chronic Traumatic Ulcer': 6790, 'Czech Technical University': 6791, 'Clinical Trials Units': 6792, 'computerized tomography urography': 6793, 'adaptive linear predication synchronization': 6794, 'amphipathic lipid packing sensor': 6795, 'energy usage effectiveness': 6796, 'ex utero electroporation': 6797, 'software defined networking': 6798, 'signal dependent noise': 6799, 'Sentinel Data Network': 6800, 'state dependent network': 6801, 'nucleosome occupancy likelihood': 6802, 'Novel Object Location': 6803, 'Node Orthologous Labeling': 6804, 'number of letters': 6805, 'radar sensor network': 6806, 'robust spline normalization': 6807, 'resting state networks': 6808, 'right sciatic nerve': 6809, 'Intima media complex thickness': 6810, 'Intervention Measures Configuration Tool': 6811, 'histoculture drug response assay': 6812, 'Helical Domain Recognition Analysis': 6813, 'high density rice array': 6814, 'Expanded Importance Value': 6815, 'Errors In Variables': 6816, 'external iliac vein': 6817, 'Endothelial independent vasodilatation': 6818, 'equine influenza virus': 6819, 'quadratic mean diameter': 6820, 'quantum molecular dynamics': 6821, 'South Brazil Bight': 6822, 'Sudan Black B': 6823, 'selective binaural beamformer': 6824, 'semantic body browser': 6825, 'unilateral neck exploration': 6826, 'Una Norma Española': 6827, 'Korean Residual Soil': 6828, 'Kufor Rakeb syndrome': 6829, 'main string inverter modules': 6830, 'Marie Stopes International Mali': 6831, 'quaternary ammonium salts': 6832, 'quality adjusted survival': 6833, 'quality arterial stiffness': 6834, 'bleached rice husk': 6835, 'Benzene Ring Heuristic': 6836, 'best reciprocal hit': 6837, 'Buea Regional Hospital': 6838, 'mean maximum power point': 6839, 'Markov modulated Poisson process': 6840, 'Tungsten Inert Gas': 6841, 'Total Intracellular Glutathione': 6842, 'whole cellular fractions': 6843, 'wavelet center frequency': 6844, 'triple 
glazing unit': 6845, 'tandem gene unit': 6846, 'transverse gradient undulator': 6847, 'secondary optical element': 6848, 'spliced overlap extension': 6849, 'speed of emergence': 6850, 'Greedy perimeter stateless routing': 6851, 'Gradient projection sparse reconstruction': 6852, 'meters above sea level': 6853, 'Maackia amurensis seed lectin': 6854, 'single precision floating point': 6855, 'saxitoxin puffer fish poisoning': 6856, 'nonintrusive stress measurement system': 6857, 'normalized surface magnetic source': 6858, 'abbreviated mental test score': 6859, 'Activated Metal Treatment System': 6860, 'direct propane fuel cell': 6861, 'days post first contact': 6862, 'Food waste leachate': 6863, 'free water level': 6864, 'Tripterygium glycosides tablet': 6865, 'tRNA guanine transglycosilase': 6866, 'tumor growth time': 6867, 'end user interface': 6868, 'Extended Uncertainty Interval': 6869, 'transient cerebral hypoperfusion': 6870, 'Tzu Chi Hospital': 6871, 'Tamale Central Hospital': 6872, 'fast frequency hopping': 6873, 'Foot Foot Hand': 6874, 'improved probabilistic routing algorithm': 6875, 'Integrated probabilistic risk assessment': 6876, 'Complex Approximate Message Passing': 6877, 'Christie Atkins Munch Petersen': 6878, 'Childhood Asthma Management Program': 6879, 'Central Atlantic Magmatic Province': 6880, 'Community Aquatic Monitoring Program': 6881, 'wire rod mill': 6882, 'Worker Resource Manager': 6883, 'Herpetic stromal keratitis': 6884, 'herpes simplex keratitis': 6885, 'Japanese integrated staging': 6886, 'Janus Immunogenicity Score': 6887, 'juvenile idiopathic scoliosis': 6888, 'Joint Interim Statement': 6889, 'Japan Integrated Scoring': 6890, 'live vaccine strain': 6891, 'Linked Valued Segments': 6892, 'Low Vaginal Swab': 6893, 'local vegetation structure': 6894, 'late vegetative stage': 6895, 'extracellular enveloped virus': 6896, 'emotionally enhanced vividness': 6897, 'End expiratory volume': 6898, 'Ilha Grande Bay': 6899, 'Integrated Genome Browser': 6900, 'intra gastric balloon': 6901, 'hollow iron silicate spheres': 6902, 'Hepatic Insulin Sensitizing Substance': 6903, 'chemically reduced GO': 6904, 'cold responsive genes': 6905, 'Clinical Risk Groups': 6906, 'commercial red ginseng': 6907, 'anti EpCAM aptamers': 6908, 'adiabatic electron affinity': 6909, 'archetypal endocannabinoid anandamide': 6910, 'ammonium zirconium carbonate': 6911, 'ADR+ Zinc+ Cobalt': 6912, 'butyl glycidyl ether': 6913, 'bone graft extender': 6914, 'metal ferroelectric insulator semiconductor': 6915, 'Modified Fatigue Impact Scale': 6916, 'water insoluble fraction': 6917, 'Wnt inhibitory factor': 6918, 'worst individual fits': 6919, 'unmodified cement paste': 6920, 'umbilical cord patch': 6921, 'University Compound Project': 6922, 'up converting phosphors': 6923, 'medium molecular weight': 6924, 'Mixed Montane Woodland': 6925, 'mobile malaria workers': 6926, 'mean maximum weight': 6927, 'motif match weight': 6928, 'Retinitis Pigmentosa GTPase Regulator': 6929, 'road perception geographical routing': 6930, 'lamellar macular holes': 6931, 'Leghorn male hepatoma': 6932, 'ganglion cell complex thickness': 6933, 'Gravity Core Cooling Tank': 6934, 'congenital cystic adenomatoid malformation': 6935, 'Conformal Cubic Atmospheric Model': 6936, 'carbonaceous chondrite anhydrous mineral': 6937, 'West African Dwarf': 6938, 'weighted average difference': 6939, 'Whiplash associated disorders': 6940, 'without artemisinin derivatives': 6941, 'line scanning ophthalmoscope': 6942, 'Locus Specific Oligo': 6943, 
'lateral superior olive': 6944, 'concurrent driving differential sensing': 6945, 'chemical drug delivery system': 6946, 'subspace constrained mean shift': 6947, 'Supply Chain Management System': 6948, 'sparse canonical correlation analysis': 6949, 'spherically capped conical antenna': 6950, 'Squamous cellular carcinoma antigen': 6951, 'Western Corn Rootworm': 6952, 'weekly case ratio': 6953, 'Wound closure rate': 6954, 'Balanced Incomplete Block Design': 6955, 'basophilic inclusion body disease': 6956, 'Collision Resolution Queue': 6957, 'Chronic Respiratory Questionnaire': 6958, 'root dry weight': 6959, 'recognition description word': 6960, 'red distribution width': 6961, 'quartz tuning fork': 6962, 'Quebec Task Force': 6963, 'portal blood vessel': 6964, 'percentage bone volume': 6965, 'pulmonary blood volume': 6966, 'parenchymal blood volume': 6967, 'Ebola viral disease': 6968, 'Extreme Value Distribution': 6969, 'external ventricular drain': 6970, 'one lung ventilation': 6971, 'Organic Lake Virophage': 6972, 'optical luminosity values': 6973, 'idiopathic environmental intolerances': 6974, 'international entrepreneurial intention': 6975, 'inter event interval': 6976, 'spectral contextual dictionary learning': 6977, 'Service Component Definition Language': 6978, 'Glycoprotein A Repetitions Predominant': 6979, 'Golgi associated retrograde protein': 6980, 'Glutamic Acid rich Protein': 6981, 'extended typical urban': 6982, 'energy transfer upconversion': 6983, 'Ebola Treatment Unit': 6984, 'named data networking': 6985, 'nonclassic differentiation number': 6986, 'automatic fingerprint authentication system': 6987, 'and Facilitative Aggression Scale': 6988, 'Water resources vulnerability': 6989, 'window ratio value': 6990, 'locally weighted learning': 6991, 'low workload level': 6992, 'low weight loss': 6993, 'Twin support vector regression': 6994, 'total systemic vascular resistance': 6995, 'trajectory angular rate increment': 6996, 'Taiwan Agricultural Research Institute': 6997, 'block backward differentiation formulas': 6998, 'biodiesel based drilling fluid': 6999, 'beyond visual range': 7000, 'bioprosthetic valve replacement': 7001, 'K Dependence Bayesian': 7002, 'K dependence BNs': 7003, 'limited relative displacement': 7004, 'leucine rich domain': 7005, 'lateral root density': 7006, 'local residual draws': 7007, 'low residue diet': 7008, 'backoff time function': 7009, 'backproject then filter': 7010, 'British Thyroid Foundation': 7011, 'basal transcription factor': 7012, 'arm strength training machine': 7013, 'auditory short term memory': 7014, 'Model following control system': 7015, 'maximally freeze concentrated solution': 7016, 'average knowledge stock': 7017, 'American Knee Society': 7018, 'modified artificial bee colony': 7019, 'molecular apocrine breast cancer': 7020, 'deterministic topology optimization': 7021, 'Dynesys Transition Optima™': 7022, 'hybrid energy storage system': 7023, 'Hazard Evaluation Support System': 7024, 'Planetary roller screw mechanism': 7025, 'Peaceman Rachford splitting method': 7026, 'weak feature description': 7027, 'Water Framework Directive': 7028, 'block matching spatial fusion': 7029, 'B mori silk fibroin': 7030, 'Binary Matrix Shuffling Filter': 7031, 'relative l2 norm error': 7032, 'rat liver nuclear extract': 7033, 'Real Time Digital System': 7034, 'real time digital simulator': 7035, 'real time dynamic substructuring': 7036, 'Gravity matching navigation': 7037, 'genotype matrix network': 7038, 'generalized finite difference method': 7039, 
'Generalized Frequency Division Multiplexing': 7040, 'Sum Error Value': 7041, 'soil expectation value': 7042, 'standing extended view': 7043, 'standard ellipsoid volume': 7044, 'summary exposure value': 7045, 'activity on vertex': 7046, 'angle of view': 7047, 'Global Virtual Time': 7048, 'Genome Viewer Tool': 7049, 'graft versus tumor': 7050, 'generalized force model': 7051, 'graphene family materials': 7052, 'gyrus frontalis medialis': 7053, 'gravimetric flow meter': 7054, 'germ free mice': 7055, 'Linear Hough Transform': 7056, 'linear hetero tetramer': 7057, 'lever holding task': 7058, 'long hyaline tubules': 7059, 'linear heart tube': 7060, 'Krill herd algorithm': 7061, 'Kidney Health Australia': 7062, 'fast gradient projection': 7063, 'fine grained powder': 7064, 'final germination percentage': 7065, 'Floral Genome Project': 7066, 'mechanized mining technical process': 7067, 'methadone maintenance treatment program': 7068, 'primary user emulation attack': 7069, 'Predictive Use Error Analysis': 7070, 'quasi minimal residual': 7071, 'quantitative magnetic resonance': 7072, 'tertiary lymphoid organs': 7073, 'time limited orthogonal': 7074, 'Directional Cubic Convolution Interpolation': 7075, 'Deyo Charlson Comorbidity Index': 7076, 'Fast Image Upsampling': 7077, 'Fluorescence Intensity Units': 7078, 'Nursing Care Performance Framework': 7079, 'non cirrhotic portal fibrosis': 7080, 'respiratory related evoked potential': 7081, 'resonance Raman excitation profiles': 7082, 'unresponsive wakefulness syndrome': 7083, 'untreated wheat straw': 7084, 'unstimulated whole saliva': 7085, 'usual walking speed': 7086, 'after egg laying': 7087, 'axial eye length': 7088, 'Armanni Ebstein lesions': 7089, 'accentuated eccentric load': 7090, 'Bruce Treadmill Protocol': 7091, 'best target protein': 7092, 'Bioinformatics Training Platform': 7093, 'bis triazinyl pyridine': 7094, 'daily insulin dose': 7095, 'diaphanous inhibitory domain': 7096, 'Drug Interaction Detector': 7097, 'difference in differences': 7098, 'DAXX interaction domain': 7099, 'fetal liver fibroblast': 7100, 'fatal liver failure': 7101, 'high internal phase emulsion': 7102, 'Hospital In Patient Enquiry': 7103, 'void reactivity coefficient': 7104, 'ventral respiratory column': 7105, 'viral replicative capacity': 7106, 'virtual reference coil': 7107, 'cashew apple juice': 7108, 'Centella asiatica juice': 7109, 'Prosthesis Evaluation Questionnaire': 7110, 'Persisting Effects Questionnaire': 7111, 'windowed Fourier ridges': 7112, 'weighed food record': 7113, 'whisker functional representation': 7114, 'flexible robotic manipulator': 7115, 'fiber reinforced mortar': 7116, 'federal reference method': 7117, 'dysphagia severity rating scale': 7118, 'Depression Self Rating Scale': 7119, 'user centred design': 7120, 'Unique Compound Database': 7121, 'unconfirmed celiac disease': 7122, 'University College Dublin': 7123, 'preserved Secure Reliable Routing': 7124, 'power supply rejection ratio': 7125, 'fundamental train frequency': 7126, 'face to face': 7127, 'total ammonia nitrogen': 7128, 'tree access network': 7129, 'total acid number': 7130, 'tonically active neuron': 7131, 'tumor associated neutrophils': 7132, 'Rosenberg Self Esteem Scale': 7133, 'Revised Simple Exponential Smoothing': 7134, 'Improved Electronic Load Controller': 7135, 'ion exchange liquid chromatographic': 7136, 'quadratic assignment problem': 7137, 'Quality Assurance Protocol': 7138, 'Adaptive Inverse Scale Space': 7139, 'amphetamine induced sensitized state': 7140, 'Activity 
induced spontaneous spikes': 7141, 'thrombolysis in cerebral infarction': 7142, 'tumor immune cell infiltration': 7143, 'common frame constellation model': 7144, 'coronary flow control model': 7145, 'Grey Wolf Optimizer': 7146, 'grey wolf optimization': 7147, 'Lianhua Qingwen capsule': 7148, 'low quality control': 7149, 'Tolman Oppenheimer Volkoff': 7150, 'Tara Oceans viromes': 7151, 'Kosambi Cartan Chern': 7152, 'Korea Coastal Current': 7153, 'Kochi Core Center': 7154, 'spring loaded inverted pendulum': 7155, 'Serial Line Internet Protocol': 7156, 'real option value': 7157, 'remotely operated vehicle': 7158, 'rimantadine o vanillin': 7159, 'voltage unbalance factor': 7160, 'variable universe fuzzy': 7161, 'Variable Infiltration Capacity': 7162, 'valve interstitial cell': 7163, 'ventricular inner curvature': 7164, 'Vaccine Industry Committee': 7165, 'Water Cycle Indicators': 7166, 'weighted cortical intensity': 7167, 'Widespread Colonizing Island': 7168, 'Wilson Cowan Izhikevich': 7169, 'nonparametric anomaly indicator method': 7170, 'Nucleotide Analog Interference Mapping': 7171, 'Cosmic Ray Neutron Probe': 7172, 'Cross River National Park': 7173, 'cancer related neuropathic pain': 7174, 'Rainfall Runoff Library': 7175, 'rabbit reticulocyte lysate': 7176, 'reduced representation library': 7177, 'ruthenium red lysine': 7178, 'Rhodiola rosea L': 7179, 'boundary layer height': 7180, 'BEL1 like homeobox': 7181, 'Latent Heating Nudging': 7182, 'Leicht Holme Newman': 7183, 'host versus graft': 7184, 'horizontal visibility graph': 7185, 'hematoxylin van Gieson': 7186, 'Roanoke City Public Schools': 7187, 'red cactus pear seeds': 7188, 'Acoustic Doppler Current Profile': 7189, 'antibody dependent cell phagocytosis': 7190, 'ATPase domain containing protein': 7191, 'vertically corrected composite algorithm': 7192, 'Vero cell cytotoxicity assay': 7193, 'mixed level mixing ratio': 7194, 'multi level meta regression': 7195, 'warm cloud depth': 7196, 'wearable cardioverter defibrillators': 7197, 'electron beam welding': 7198, 'expected body weight': 7199, 'soil moisture height': 7200, 'Santa Maria Hospital': 7201, 'Sarasota Memorial Hospital': 7202, 'Smooth muscle hamartoma': 7203, 'Bayesian Maximum Entropy': 7204, 'bone marrow edema': 7205, 'basement membrane extract': 7206, 'Balanced Minimum Evolution': 7207, 'Basal Media Eagle': 7208, 'distance made good': 7209, 'drug metabolizing genes': 7210, 'turbulent heat fluxes': 7211, 'testicular hyperechogenic foci': 7212, 'tiamulin hydrogen fumarate': 7213, 'Tetra Hydro Folate': 7214, 'lead rubber bearing': 7215, 'lower resistant bed': 7216, 'low root biomass': 7217, 'construction waste brick powder': 7218, 'chronic widespread bodily pain': 7219, 'portable seismic property analyzer': 7220, 'purple sweet potato anthocyanin': 7221, 'Music Appreciation Training Program': 7222, 'maximum allowable transmit power': 7223, 'membrane associated transport protein': 7224, 'Focused Music Listening': 7225, 'foramen magnum length': 7226, 'full matrix learning': 7227, 'functional megaspore like': 7228, 'lateral thoracic artery perforators': 7229, 'low temperature argon plasma': 7230, 'basalt fiber reinforced mortar': 7231, 'Bayesian factor regression modeling': 7232, 'modified glass wool': 7233, 'minor groove width': 7234, 'electron field emission': 7235, 'ethylene forming enzyme': 7236, 'epiploic foramen entrapment': 7237, 'ensemble free energy': 7238, 'elder flower extracts': 7239, 'Surface Traction and Radial Tire': 7240, 'Spectrally Timed Adaptive Resonance Theory': 
7241, 'Simple Triage and Rapid Treatment': 7242, 'ethyl methyl ether': 7243, 'emission mitigation efficiency': 7244, 'effective medium evanescence': 7245, 'Hydrogen oxidation reaction': 7246, 'higher order repeats': 7247, 'bare carbon paste electrode': 7248, 'Box Cox power exponential': 7249, 'red mud heated': 7250, 'Rwanda Military Hospital': 7251, 'Royal Marsden Hospital': 7252, 'Landau Lifshitz Gilbert': 7253, 'log likelihood gradient': 7254, 'lyase like gene': 7255, 'Local Level Government': 7256, 'high voltage power supply': 7257, 'high volume particle spectrometer': 7258, 'Universiti Sains Malaysia': 7259, 'Universal Sequence Map': 7260, 'Universal Similarity Metric': 7261, 'uterine secreted microprotein': 7262, 'ultra slow metabolizer': 7263, 'monolithic microwave integrated circuit': 7264, 'measles mass immunization campaign': 7265, 'meatus urethrae internum': 7266, 'Mixed urinary incontinence': 7267, 'robotic partial nephrectomy': 7268, 'region proposal network': 7269, 'right phrenic nerve': 7270, 'Integrated Ocean Drilling Program': 7271, 'International Ocean Discovery Program': 7272, 'sacro iliac joint': 7273, 'small intestine juice': 7274, 'Quantitative susceptibility mapping': 7275, 'quorum sensing medium': 7276, 'quaternary solvent manager': 7277, 'Meaningful auditory integration scales': 7278, 'Maximum abbreviated injury scale': 7279, 'Deproteinized bovine bone': 7280, 'disulfide bond breaking': 7281, 'dairy based beverages': 7282, 'psychometric hepatic encephalopathy scores': 7283, 'Pumped Hydro Energy Storage': 7284, 'sonographic urethral length': 7285, 'Snout Urostyle length': 7286, 'mammographic image analysis society': 7287, 'Medicine Image Analysis System': 7288, 'Cervical squamous cell carcinoma': 7289, 'Cutaneous Squamous Cell Carcinoma': 7290, 'Environmental Symptoms Questionnaire': 7291, 'EYE STRUCTURE QUANTIFICATION': 7292, 'American visceral leishmaniasis': 7293, 'Acquired vitelliform lesions': 7294, 'indeterminate initial infection': 7295, 'increase increase increase': 7296, 'Isoform Isoform Interaction': 7297, 'ions including iron': 7298, 'Epithelial fibrosis index': 7299, 'Extended Focus Imaging': 7300, 'Extreme Forecast Index': 7301, 'store operated calcium channels': 7302, 'shifted outpatient collaborative care': 7303, 'knee abductor moment': 7304, 'Knowledge Assembly Model': 7305, 'Lycium barbarum berry': 7306, 'left bundle branch': 7307, 'lower boundary biota': 7308, 'Little Bahama Bank': 7309, 'online tracking benchmark': 7310, 'outflow tract banded': 7311, 'maximal upward drift': 7312, 'multi user detection': 7313, 'matched unrelated donor': 7314, 'region of background': 7315, 'reduced order basis': 7316, 'risk of bias': 7317, 'recurrent oscillatory bursting': 7318, 'Splice switching oligonucleotide': 7319, 'Social Security Organization': 7320, 'splenic switch off': 7321, 'Oral Hygiene Index': 7322, 'oral hygiene instructions': 7323, 'Ocean Health Index': 7324, 'femoral neck plane': 7325, 'false negative probability': 7326, 'functional nucleotide polymorphism': 7327, 'Training Variation Explained': 7328, 'Total Vector Error': 7329, 'Time varying elastance': 7330, 'total percent of characters': 7331, 'transient propagation of cracks': 7332, 'Modified Early Warning Score': 7333, 'Malaria Early Warning System': 7334, 'Exponential Forgetting Factor': 7335, 'elliptical form factors': 7336, 'Elastic fiber fragmentation': 7337, 'brain emotional learning': 7338, 'band edge luminescence': 7339, 'Basal Epithelial Layer': 7340, 'brown egg layers': 7341, 'Biological 
Expression Language': 7342, 'lactulose breath test': 7343, 'Listen Before Talk': 7344, 'Lower Bound Tightening': 7345, 'lupus band test': 7346, 'human upper airway': 7347, 'high uric acid': 7348, 'National Electrical Manufacturers Association': 7349, 'National Environmental Management Authority': 7350, 'premature ectopic beat': 7351, 'position error bound': 7352, 'parametric empirical Bayes': 7353, 'Pre existing bone': 7354, 'premature edge breakdown': 7355, 'Maximum Minimum Backward Selection': 7356, 'Mersilene mesh brow suspension': 7357, 'distal main vessel': 7358, 'double membrane vesicle': 7359, 'aberrant right subclavian artery': 7360, 'ampicillin resistant S aureus': 7361, 'Slope Horizontal Chain Code': 7362, 'strain hardening cementitious composite': 7363, 'Pallister Killian syndrome': 7364, 'Phytochrome Kinase Substrate': 7365, 'peripheral giant cell granuloma': 7366, 'preconditioned Gauss cloud generator': 7367, 'acute macular neuroretinopathy': 7368, 'arithmetic mismatch negativity': 7369, 'lower uterine segment': 7370, 'lung ultrasound score': 7371, 'lighted ureteric stents': 7372, 'uncomfortable loudness levels': 7373, 'Ubiquitous Learning Log': 7374, 'Home mechanical ventilation': 7375, 'healthy mitral valves': 7376, 'Herfindahl Hirschman Index': 7377, 'human human interactions': 7378, 'Tikur Anbessa Specialized Hospital': 7379, 'Trauma Associated Severe Haemorrhage': 7380, 'hybrid ant colony optimization': 7381, 'healthcare associated community onset': 7382, 'Asian Warming Hole': 7383, 'aspen wood hydrolysate': 7384, 'laser speckle perfusion imaging': 7385, 'Least Squares Policy Iteration': 7386, 'trigeminal nucleus caudalis': 7387, 'total nucleated cells': 7388, 'tri nucleotide compositions': 7389, 'total nonstructural carbohydrates': 7390, 'The Nature Conservancy': 7391, 'Canine Brief Pain Inventory': 7392, 'Cytokinesis Block Proliferation Index': 7393, 'Yinhua Miyanling Tablet': 7394, 'Y maze test': 7395, 'Sandalwood essential oil': 7396, 'Spent engine oil': 7397, 'sieve element occlusion': 7398, 'sensory evoked oscillation': 7399, 'Soil Ecosystem Observatory': 7400, 'lateral septal nucleus': 7401, 'Lean Structural Networks': 7402, 'log sequence number': 7403, 'left sciatic nerve': 7404, 'local sun noon': 7405, 'Investigational New Drug': 7406, 'intestinal neuronal dysplasia': 7407, 'Rehmannia glutinosa Libosch': 7408, 'retinal ganglion layer': 7409, 'radial glia like': 7410, 'primary hippocampal neurons': 7411, 'post herpetic neuralgia': 7412, 'personal health number': 7413, 'Prosthetic head Norberg': 7414, 'Universal Natural Products Database': 7415, 'United Nations Population Division': 7416, 'Verran Snyder Halpern': 7417, 'Varroa sensitive hygiene': 7418, 'verbal sexual harassment': 7419, 'upper reference limit': 7420, 'Uniform Resource Locator': 7421, 'upper rhombic lip': 7422, 'Karachi Stock Exchange': 7423, 'Kernel Smoothed Estimate': 7424, 'Patient Health Questionnaire': 7425, 'Programme Head Quarter': 7426, 'lysyl tyrosyl quinone': 7427, 'linear trap quadrupole': 7428, 'open total gastrectomy': 7429, 'On The Go': 7430, 'Signet ring cell carcinoma': 7431, 'Spearman Rank Correlation Coefficient': 7432, 'overt hepatic encephalopathy': 7433, 'oral health education': 7434, 'intestinal trefoil factor': 7435, 'intrinsic tryptophan fluorescence': 7436, 'invasive tumour front': 7437, 'variable emittance radiator': 7438, 'very early recurrence': 7439, 'ventral ectodermal ridge': 7440, 'volume expansion ratio': 7441, 'visually evoked response': 7442, 'Wide Area Network': 
7443, 'world airline network': 7444, 'Remotely Piloted Aircraft Systems': 7445, 'Revised Physical Anhedonia Scale': 7446, 'high earth orbit': 7447, 'highly elliptic orbit': 7448, 'horsemint essential oil': 7449, 'inclined geosynchronous satellite orbit': 7450, 'improved glowworm swarm optimization': 7451, 'Equipment under test': 7452, 'expected utility theory': 7453, 'orthogonal chaotic generator': 7454, 'overlapping cluster generator': 7455, 'off grid error': 7456, 'oral gingival epithelium': 7457, 'organes génitaux externe': 7458, 'Quadrifilar helix antenna': 7459, 'quasi harmonic approximation': 7460, 'Runge Kutta Fehlberg': 7461, 'robust Kalman filter': 7462, 'Residual kidney function': 7463, 'Network Function Virtualization': 7464, 'non fluent/agrammatic variant': 7465, 'average queuing delay': 7466, 'abnormal quiet day': 7467, 'network capable application processor': 7468, 'Non Competitive Atrial Pacing': 7469, 'Watt Hour Meter': 7470, 'Woods Hole medium': 7471, 'check nodes decoder': 7472, 'control normal diet': 7473, 'copy number deviation': 7474, 'Central neck dissection': 7475, 'low frequency ultrasonic': 7476, 'lacrimal function unit': 7477, 'least frequently used': 7478, 'Longitudinal Follow Up': 7479, 'AMI Wireless Network': 7480, 'activity window notification': 7481, 'Male accessory gland inflammation/infection': 7482, 'maize assembled genomic islands': 7483, 'elongate wide tunnels': 7484, 'exercise wheel test': 7485, 'short gastric veins': 7486, 'SOMATIC GENOME VARIATIONS': 7487, 'Standing genetic variation': 7488, 'patients with hemophilia': 7489, 'pine wood hydrolysate': 7490, 'Design for Six Sigma': 7491, 'Donepezil flight simulator study': 7492, 'local directional number': 7493, 'Last Dead Node': 7494, 'low dose naltrexone': 7495, 'lying down nystagmus': 7496, 'variable period grating': 7497, 'virtual proving ground': 7498, 'venous plasma glucose': 7499, 'virtual pebble game': 7500, 'specific thermal energy consumption': 7501, 'Shiga toxigenic E coli': 7502, 'specific electrical energy consumption': 7503, 'surface enhanced ellipsometric contrast': 7504, 'Copper Peel Strength Tensile': 7505, 'clinical problem solving test': 7506, 'back reflecting layer': 7507, 'boron rich layer': 7508, 'Bayesian rule learning': 7509, 'broad range library': 7510, 'undoped silicate glass': 7511, 'urine specific gravity': 7512, 'rotating magnetic field': 7513, 'relativistic mean field': 7514, 'root mass fraction': 7515, 'relative mitochondrial fluorescence': 7516, 'chemical vapour generation': 7517, 'Core Vertebrate Genes': 7518, 'Vascular reactivity index': 7519, 'virtual register interface': 7520, 'vibration response imaging': 7521, 'Veterinary Research Institute': 7522, 'Optic Disk Centred': 7523, 'optic disc cupping': 7524, 'other demyelinating conditions': 7525, 'betulinic acid amide': 7526, 'British Airports Authority': 7527, 'Bone age assessment': 7528, 'relative uptake efficiency': 7529, 'resource use efficiency': 7530, 'Extended European Stationary Cycle': 7531, 'equivalent effective stratospheric chlorine': 7532, 'unseparation complex mixture': 7533, 'ultrametric contour map': 7534, 'Ulva Culture Medium': 7535, 'ubiquitin competing molecules': 7536, 'Upper Convected Maxwell': 7537, 'Non Hispanic Black': 7538, 'Net Health Benefit': 7539, 'Oxidized Carbon Black': 7540, 'occipital condyle breadth': 7541, 'Organizational citizenship behavior': 7542, 'Orphan Crops Browser': 7543, 'Morula nut shells': 7544, 'must nutrient synthetic': 7545, 'Middle Neolithic Spain': 7546, 'mirror neuron 
system': 7547, 'variable air volume': 7548, 'veno arterio venous': 7549, 'Composite Autonomic Scoring Scale': 7550, 'correlation adaptive subspace segmentation': 7551, 'islet infiltrating lymphocytes': 7552, 'intestinal inflammatory lymphangiogenesis': 7553, 'intestinal infective larvae': 7554, 'intrinsic heart rate': 7555, 'instantaneous heart rate': 7556, 'Intracellular Homology Region': 7557, 'distributor service quality': 7558, 'Defence Style Questionnaire': 7559, 'discordant sib quadruplet': 7560, 'DePaul Symptom Questionnaire': 7561, 'Total Symptom Severity Complex': 7562, 'Transcription Start Site Collapse': 7563, 'rat peritoneal mast cells': 7564, 'rhythmic propulsive motor complex': 7565, 'polymer modified nanoclay': 7566, 'Plant Metabolic Network': 7567, 'poly morphonuclear neutrophils': 7568, 'primary membranous nephropathy': 7569, 'primary motor neurons': 7570, 'nonwoven nanofibrous composite': 7571, 'nearest neighbour criterion': 7572, 'neurologically normal controls': 7573, 'Nutrition North Canada': 7574, 'neural network cascade': 7575, 'Wii Balance Boards': 7576, 'whole bunch berries': 7577, 'microwave assisted hydrothermal': 7578, 'mouse anti human': 7579, 'macrophage antigen h': 7580, 'metastasis averse HCC': 7581, 'Bound rubber content': 7582, 'Biomedical Research Centre': 7583, 'Brain Reward Cascade': 7584, 'baby rabbit complement': 7585, 'bone remodelling compartment': 7586, 'surface electromagnetic wave': 7587, 'steam exploded wood': 7588, 'semi eviscerated weight': 7589, 'Rana chensinensis skin collagen': 7590, 'Rice callus suspension culture': 7591, 'Z lotus polyphenols': 7592, 'zero loss peak': 7593, 'Advanced Drug Delivery Systems': 7594, 'along dip double segmentation': 7595, 'Africa Data Dissemination Service': 7596, 'out of plane': 7597, 'outside of protocol': 7598, 'limbal stem cell deficiency': 7599, 'large subunit catalytic domain': 7600, 'Lower Maximum Assignment Point': 7601, 'low methoxyl amidated pectin': 7602, 'Benign essential blepharospasm': 7603, 'back end board': 7604, 'binary exponential backoff': 7605, 'Bayes Empirical Bayes': 7606, 'Binary Encounter Bethe': 7607, 'mean ocular surface temperature': 7608, 'micro optical sectioning tomography': 7609, 'Half logistic distribution': 7610, 'haematocrit layer depth': 7611, 'HCR1 like domain': 7612, 'hyphal length densities': 7613, 'wake induced vibrations': 7614, 'Wiseana iridescent virus': 7615, 'fiber under test': 7616, 'follicular unit transplant': 7617, 'local kernel regression': 7618, 'lysine ketoglutarate reductase': 7619, 'Neuromorphic Vision Toolkit': 7620, 'non volcanic tremor': 7621, 'non vaccine types': 7622, 'Neuro Vision Technology': 7623, 'Wavelet Denoising Technique': 7624, 'warmth detection threshold': 7625, 'Hydroxyl Tagging Velocimetry': 7626, 'Hydrodynamic Tail Vein': 7627, 'high tidal volumes': 7628, 'Humaita Tubiacanga virus': 7629, 'Duplicate Filter Hash': 7630, 'design flood hydrograph': 7631, 'primary user emulation': 7632, 'Phosphorus use efficiency': 7633, 'Precipitation use efficiency': 7634, 'Opportunistic Sensor Network': 7635, 'olfactory sensory neurons': 7636, 'Online Social Networks': 7637, 'First Dead Node': 7638, 'first degree neighbors': 7639, 'Half Dead Node': 7640, 'human disease network': 7641, 'Fire Weather Index': 7642, 'Fractional Water Index': 7643, 'time point spread function': 7644, 'Temporal Point Spread Function': 7645, 'Zero Temperature Coefficient': 7646, 'zeolite templated carbon': 7647, 'spectral spectral classification method': 7648, 'severe sepsis 
cytokine mixture': 7649, 'Specialist Supportive Clinical management': 7650, 'Greater occipital nerve': 7651, 'glaucomatous optic neuropathy': 7652, 'muscle creatine kinase': 7653, 'mathematics content knowledge': 7654, 'Brain water content': 7655, 'Barnstable Water Company': 7656, 'body weight change': 7657, 'Bovine Mammary Endothelial Cells': 7658, 'bone marrow endothelial cells': 7659, 'brain microvascular endothelial cells': 7660, 'vehicular traffic density': 7661, 'vapor transport deposition': 7662, 'VASP tetramerization domain': 7663, 'vanadium treated diabetic': 7664, 'driven handover optimization': 7665, 'district health office': 7666, 'damped harmonic oscillator': 7667, 'Point Feature Histograms': 7668, 'prayer for health': 7669, 'Posterior Facial Height': 7670, 'weighted sum rate': 7671, 'Weinberg sum rules': 7672, 'Wilcoxon signed rank': 7673, 'whole sweat rate': 7674, 'wall shear rate': 7675, 'Compressed Path Tree Template': 7676, 'cold point tropopause temperature': 7677, 'Citizen Broadband Radio Service': 7678, 'Comprehensive Behavior Rating Scales': 7679, 'Angelica acutiloba Kitagawa': 7680, 'amino acid kinases': 7681, 'perceived image quality': 7682, 'Protein Interaction Quantification': 7683, 'True Negative Rate': 7684, 'to normal ratio': 7685, 'Discontinuity Layout Optimization': 7686, 'dolichol linked oligosaccharide': 7687, 'Fuzzy sliding mode control': 7688, 'finite state Markov channel': 7689, 'joint correlation parameterization': 7690, 'Jellyfish collagen peptides': 7691, 'Unified Discontinuous Nonlinearity': 7692, 'Ultra Dense Network': 7693, 'valve control unit': 7694, 'villus crypt units': 7695, 'Weber local descriptor': 7696, 'Wearable Light Device': 7697, 'Second generation wavelet': 7698, 'stay green wheat': 7699, 'two code keying': 7700, 'T congolense Kilifi': 7701, 'Gahuku Gama Subtribes': 7702, 'group G streptococci': 7703, 'greedy geneset selection': 7704, 'global journey time': 7705, 'grammaticality judgment task': 7706, 'adaptive horizon group': 7707, 'Admixture History Graph': 7708, 'Jet Stirred Reactor': 7709, 'joint sparse representation': 7710, 'intensity gradient magnitude': 7711, 'interstellar gaseous matter': 7712, 'Idiopathic granulomatous mastitis': 7713, 'impaired glucose metabolism': 7714, 'estimated duration at completion': 7715, 'Error Detection And Correction': 7716, 'Excessive dynamic airway collapse': 7717, 'cost estimation at completion': 7718, 'cost effectiveness acceptability curve': 7719, 'Grid Independent Limit': 7720, 'Gene Interaction Layer': 7721, 'gene independent loci': 7722, 'Network resource graph': 7723, 'numerical renormalization group': 7724, 'non remodelling groups': 7725, 'hypersonic glider vehicle': 7726, 'Hepatitis G virus': 7727, 'pseudo continuous conduction mode': 7728, 'Protein complex co memberships': 7729, 'secure switch network coding': 7730, 'Silencer Select Negative Control': 7731, 'network random keys': 7732, 'nicotinamide riboside kinase': 7733, 'Normal rat kidney': 7734, 'Neurospecific Receptor Kinase': 7735, 'static air gap eccentricity': 7736, 'Self Administered Gerocognitive Examination': 7737, 'wind power plant': 7738, 'white pre pupal': 7739, 'World Population Prospects': 7740, 'original equipment manufacturers': 7741, 'outer envelope membranes': 7742, 'optical emission monitoring': 7743, 'bottom left fill': 7744, 'bivariate luminosity function': 7745, 'bronchiolar lavage fluid': 7746, 'broad leaved forests': 7747, 'mean squared derivative error': 7748, 'motion sensitized driven equilibrium': 7749, 'micro 
steam distillation extraction': 7750, 'region light index': 7751, 'resting light intensity': 7752, 'Royal Lancaster Infirmary': 7753, 'rural low impact': 7754, 'convolution kernel compensation': 7755, 'closed kinetic chain': 7756, 'conceptual knowledge construct': 7757, 'chicken kidney cells': 7758, 'Juvenile myoclonic epilepsy': 7759, 'Jatropha methyl ester': 7760, 'Conditioned Fear Response Test': 7761, 'clotting factor replacement therapy': 7762, 'Maternal Separation Anxiety Scale': 7763, 'Memorial Symptom Assessment Scale': 7764, 'high novelty seeker': 7765, 'hypothalamo neurohypophysial system': 7766, 'resting state functional connectivity': 7767, 'restriction site free cloning': 7768, 'age of onset': 7769, 'Area of Occupancy': 7770, 'Arctic Ocean Oscillation': 7771, 'action video game': 7772, 'ancestral variation graph': 7773, 'arterialized vein grafts': 7774, 'round window membrane': 7775, 'Regional wall motion': 7776, 'Hepatic ischemia reperfusion injury': 7777, 'homologous illegitimate random integration': 7778, 'hepatic insulin resistance index': 7779, 'normal brain homogenate': 7780, 'New Bedford Harbor': 7781, 'Neonatal brain hemisuction': 7782, 'non breath hold': 7783, 'fluorescence arbitrary units': 7784, 'Forced Arm Use': 7785, 'oligodendrocyte specific protein antibody': 7786, 'optimal sub pattern assignment': 7787, 'intestinal mucosal barrier': 7788, 'induced magnetosphere boundary': 7789, 'immunity magnetic bead': 7790, 'Information Motivation Behavior': 7791, 'Illyrian Mountain Buša': 7792, 'unroasted well fermented': 7793, 'Ultra wide field': 7794, 'Test Your Memory': 7795, 'to young microspore': 7796, 'Chondrodermatitis Nodularis Helicis': 7797, 'curly nano hair': 7798, 'copy number high': 7799, 'Lateral trunk flexion': 7800, 'long term facilitation': 7801, 'laparoscopic Toupet fundoplication': 7802, 'long tail fibers': 7803, 'late treatment failure': 7804, 'Frost Multidimensional Perfectionism Scale': 7805, 'Fast Mobility Particle Sizer': 7806, 'Chronic Pain Myth Scale': 7807, 'Child Psychopathology Measurement Schedule': 7808, 'facial expression intensity': 7809, 'Fractional excretion index': 7810, 'negative regulatory region': 7811, 'Non recombining regions': 7812, 'Non rigid registration': 7813, 'nutrient richness relationship': 7814, 'Self Reporting Questionnaire': 7815, 'system request queue': 7816, 'sibling relationship quality': 7817, 'Smoke inhalation injury': 7818, 'signal intensity index': 7819, 'surface irregularity indices': 7820, 'speech intelligibility index': 7821, 'skin interference indices': 7822, 'knockout serum replacement': 7823, 'Kernel Spectral Regression': 7824, 'anterior visceral endoderm': 7825, 'azimuth velocity estimator': 7826, 'Eriochrome Black T': 7827, 'external beam therapy': 7828, 'wire tension band': 7829, 'worst to best': 7830, 'wild type Berlin': 7831, 'loss given default': 7832, 'link going down': 7833, 'Low Grade Dysplasia': 7834, 'local gene duplication': 7835, 'likely gene disruptive': 7836, 'photonic Doppler velocimetry': 7837, 'percentage dense volume': 7838, 'peak diastolic velocity': 7839, 'Phocine Distemper Virus': 7840, 'average daily truck traffic': 7841, 'Autologous Dual Tissue Transplantation': 7842, 'kinetic dynamic suspension': 7843, 'known druggable space': 7844, 'King Denborough Syndrome': 7845, 'dynamic vibration neutralizers': 7846, 'dorsal vagal nucleus': 7847, 'descending vestibular nucleus': 7848, 'active energy regeneration suspension': 7849, 'adverse event reporting system': 7850, 'anchoring enzyme 
recognition sequences': 7851, 'Hydraulically damped bushing': 7852, 'Housing Development Board': 7853, 'horizontal diagonal band': 7854, 'Inverse Fourier Transform': 7855, 'Invasive fungal tracheobronchitis': 7856, 'image foresting transform': 7857, 'indirect Fourier transformation': 7858, 'Madras Atomic Power Station': 7859, 'Monolithic Active Pixel Sensors': 7860, 'modular automation production system': 7861, 'MUC1 Associated Proliferation Signature': 7862, 'Multiple Alternative Perceptual Search': 7863, 'remote handling maintenance': 7864, 'Regional Heritability Mapping': 7865, 'multiple actuated wing': 7866, 'minimal absent word': 7867, 'average sound pressure level': 7868, 'axial spinous process length': 7869, 'Universal verification methodology': 7870, 'UV visible marker': 7871, 'Enhanced Cognitive Walkthrough': 7872, 'Early Cal Wonder': 7873, 'quasi exactly solvable': 7874, 'qualitative evidence synthesis': 7875, 'double well potential': 7876, 'deep well plates': 7877, 'induced gravity inflation': 7878, 'iron gall inks': 7879, 'parton hadron string dynamics': 7880, 'Public Health Sciences Division': 7881, 'French Broad basin': 7882, 'fixed beta binomial': 7883, 'fibrous bed bioreactor': 7884, 'flagellar basal body': 7885, 'Fundamental Building Blocks': 7886, 'Homotopy Perturbation Laplace Method': 7887, 'Historical path loss maps': 7888, 'Foamed Mixture Lightweight Soil': 7889, 'fair multistart local search': 7890, 'precipitation microphysical characteristics sensor': 7891, 'pulsed microplasma cluster source': 7892, 'parametric time domain method': 7893, 'post transplant diabetes mellitus': 7894, 'Von Mises Equivalent': 7895, 'V mandshurica extract': 7896, 'very major error': 7897, 'red iodized oil': 7898, 'residue independent overlap': 7899, 'Course Aggregate Void Filling': 7900, 'Coronary arterio venous fistula': 7901, 'Kandil Brown Miller': 7902, 'Ku binding motif': 7903, 'gene coexpression network analysis': 7904, 'germ cell nuclear antigen': 7905, 'connective tissue graft': 7906, 'Clinical Trials Group': 7907, 'Cell Titer Glo': 7908, 'cytosine thymine guanine': 7909, 'artificial floating wetland': 7910, 'Abdominal fat weight': 7911, 'shock wave lithotripsy': 7912, 'specific warming levels': 7913, 'satisfaction with life': 7914, 'severe water loss': 7915, 'meatus acusticus externus cartilagineus': 7916, 'mastitis associated E coli': 7917, 'mouse aortic endothelial cells': 7918, 'Congenitally missing permanent teeth': 7919, 'cluster mass permutation test': 7920, 'Binary Search Feature Selection': 7921, 'Bristol Stool Form Scale': 7922, 'Sparse group LASSO': 7923, 'stereochemistry gate loops': 7924, 'spent grain liquor': 7925, 'image quality index': 7926, 'Image Quality Indicator': 7927, 'Universal Background Model': 7928, 'unified bioaccessibility method': 7929, 'UAP56 binding motif': 7930, 'Ugni Blanc mutant': 7931, 'ultrasound B mode': 7932, 'Paediatric Observation Priority Score': 7933, 'Painful os peroneum syndrome': 7934, 'Pregnancy Outcome Prediction Study': 7935, 'minimum information bipartition': 7936, 'multiplexed inhibitor bead': 7937, 'Medicine Information Box': 7938, 'mevalonate isoprenoid biosynthesis': 7939, 'Microscopy Image Browser': 7940, 'Hyper Sausage Neuron': 7941, 'human signaling network': 7942, 'hermaphrodite specific neuron': 7943, 'healthy supplier network': 7944, 'Public Health Laboratory': 7945, 'primary hepatic lymphoma': 7946, 'inspiratory flow limitation': 7947, 'intact forest landscape': 7948, 'Zollinger Ellison syndrome': 7949, 'zotarolimus eluting 
stents': 7950, 'vehicle dynamic model': 7951, 'virtual dipole moment': 7952, 'augmented proportional navigation': 7953, 'Advanced Practice Nurse': 7954, 'acellular processed nerve': 7955, 'Apnea Patients Network': 7956, 'visual treatment objects': 7957, 'Vertebrate Taxonomy Ontology': 7958, 'Variable Temperature Only': 7959, 'low energy density region': 7960, 'light enhanced dark respiration': 7961, 'jugular venous arch': 7962, 'joint vibration analysis': 7963, 'Nutritionally variant streptococci': 7964, 'non viable seeds': 7965, 'National Vegetation Survey': 7966, 'numerical verbal scale': 7967, 'Newest Vital Sign': 7968, 'RFID Network Planning': 7969, 'Ranomafana National Park': 7970, 'High altitude platform station': 7971, 'Hip Arthroplasty Pressure Simulator': 7972, 'Yin Deficiency Scale': 7973, 'yang deficiency syndrome': 7974, 'pore water pressure': 7975, 'permanent wilting point': 7976, 'pure water permeability': 7977, 'pleural wing process': 7978, 'Allium stipitatum dichloromethane extract': 7979, 'abdominal segment deformity element': 7980, 'necrotic sebaceous gland': 7981, 'NOD SCID gamma': 7982, 'normal salivary gland': 7983, 'quaternary benzophenanthridine alkaloids': 7984, 'Qualitative Behavioural Assessment': 7985, 'quadruple bend achromatic': 7986, 'kidney deficiency pattern': 7987, 'KWTP discharge point': 7988, 'modified Julian date': 7989, 'multicell joint decoding': 7990, 'Machado Joseph disease': 7991, 'Optimized Fourier Series': 7992, 'orthodontic friction simulator': 7993, 'ovarian function suppression': 7994, 'Tengfu Jiangya Tablet': 7995, 'tunnelling junction transistor': 7996, 'efficient partial relay selection': 7997, 'extra pair reproductive success': 7998, 'optical transmission line': 7999, 'opportunities to learn': 8000, 'fuel value index': 8001, 'fly virulence index': 8002, 'oral hairy leukoplakia': 8003, 'octanoyl homoserine lactone': 8004, 'oral health literacy': 8005, 'near field focused': 8006, 'neonatal foreskin fibroblast': 8007, 'Gray Level Transformation': 8008, 'genomic locus tags': 8009, 'glucose load test': 8010, 'Germ line transcription': 8011, 'Kunitz trypsin inhibitor': 8012, 'Knowledge Transmission Index': 8013, 'whey protein standard solution': 8014, 'Worthing Physiological Scoring System': 8015, 'University of Gondar': 8016, 'urine osmolal gap': 8017, 'natural flake graphite': 8018, 'Normal fasting glucose': 8019, 'Norwegian food guidelines': 8020, 'Steam quality analyzer': 8021, 'Speech Quality Assessment': 8022, 'Scottish Qualifications Authority': 8023, 'Toronto Clinical Scoring System': 8024, 'Topological Clustering Semantic Similarity': 8025, 'Tracheal cancer specific survival': 8026, 'inbound evacuation vulnerability': 8027, 'intracellular enveloped virus': 8028, 'Internet Embryo Viewer': 8029, 'Conflict Tolerant Channel Allocation': 8030, 'Computed tomography coronary angiography': 8031, 'superior joint space': 8032, 'Stevens Johnson syndrome': 8033, 'posterior joint space': 8034, 'Peutz Jeghers syndrome': 8035, 'anterior joint space': 8036, 'air jet stress': 8037, 'base scale entropy analysis': 8038, 'binding site enrichment analysis': 8039, 'Environmental Public Health Tracking': 8040, 'Estonian Postmenopausal Hormone Therapy': 8041, 'receptor activity modifying protein': 8042, 'Repair Associated Mysterious Protein': 8043, 'reflex action mortality predictors': 8044, 'Green cactus pear seeds': 8045, 'Glasgow Composite Pain Scale': 8046, 'Graded Chronic Pain Scale': 8047, 'idiopathic chronic inflammatory conditions': 8048, 'inter cell 
interference coordination': 8049, 'Direct Current Magnetron Sputtering': 8050, 'dynamic cellular manufacturing system': 8051, 'dynamic interferometric lithography': 8052, 'dual in line': 8053, 'dideoxy imino lyxitol': 8054, 'dominant intraprostatic lesion': 8055, 'decompression induced liquid': 8056, 'electron hole plasma': 8057, 'Emergency Hire Programme': 8058, 'elevated hydrostatic pressure': 8059, 'Acellular porcine corneal stroma': 8060, 'acetabular prosthesis coordinate system': 8061, 'primary mouse hepatocytes': 8062, 'Perceived morphological horizon': 8063, 'positive mental health': 8064, 'pyridine methylsulfone hydroxamate': 8065, 'geometric feature constraint': 8066, 'Global Financial Crisis': 8067, 'growth factor cocktail': 8068, 'gel filtration chromatography': 8069, 'species associated difference spectra': 8070, 'sudden arrhythmic death syndrome': 8071, 'digital holographic microscopy': 8072, 'digital height model': 8073, 'Biomechanical Eye Emulator': 8074, 'basal energy expenditure': 8075, 'Morita Baylis Hillman adducts': 8076, 'molecular beacon helicase assay': 8077, 'Susceptible Alert Infected Susceptible': 8078, 'synthetic aperture imaging sensor': 8079, 'buffer unaware proxy': 8080, 'buffer underflow probability': 8081, 'grid interval distance': 8082, 'grazing incident diffraction': 8083, 'glucose induced deficiency': 8084, 'Group Communication System Enablers': 8085, 'Gami Cheongyeul Sodok Eum': 8086, 'Lightweight directory access protocol': 8087, 'late directing attention positivity': 8088, 'weighted barrier graph': 8089, 'waveguide Bragg grating': 8090, 'difference expansion watermarking': 8091, 'dry excreta weight': 8092, 'Prestressed steel reinforced concrete': 8093, 'Pharmaceutical Sciences Research Center': 8094, 'generalized integral transform technique': 8095, 'galvanostatic intermittent titration technique': 8096, 'distributed node management': 8097, 'Dynamic network mechanism': 8098, 'De novo mutation': 8099, 'dominant negative mutant': 8100, 'Over the Top': 8101, 'overall treatment time': 8102, 'optical topological transition': 8103, 'Network Upgrade Delay': 8104, 'non ulcer dyspepsia': 8105, 'N level batching problem': 8106, 'novel LZAP binding protein': 8107, 'Engineering Shape Benchmark': 8108, 'energy selective backscattered': 8109, 'enhanced social behaviors': 8110, 'expression site body': 8111, 'Environmental Specimen Bank': 8112, 'Bayesian Network Model': 8113, 'Block Normal Mode': 8114, 'Block Network Mapping': 8115, 'mean absolute percentage deviation': 8116, 'multiple absolute pairwise difference': 8117, 'Median Absolute Pairwise Difference': 8118, 'monophasic action potential duration': 8119, 'obtuse angle prediction': 8120, 'off axis parabolic': 8121, 'optical action potentials': 8122, 'Greedy block coordinate descent': 8123, 'grain boundary character distribution': 8124, 'internal state variable': 8125, 'inter segmental vessels': 8126, 'inter species variable': 8127, 'inter somitic vessel': 8128, 'internode small vessels': 8129, 'visual word form area': 8130, 'Von Willebrand factor A': 8131, 'intermediate density lipoprotein': 8132, 'Interface Definition Language': 8133, 'instrument detection limit': 8134, 'individually darkened leaf': 8135, 'mechanistic target of Rapamycin': 8136, 'Mammalian Target of Rapamycin': 8137, 'West African immigrants': 8138, 'Working Alliance Inventory': 8139, 'Work Ability Index': 8140, 'Non Motor Symptoms Scale': 8141, 'Nutrition Monitoring Survey Series': 8142, 'Nantong Metabolic Syndrome Study': 8143, 'endothelial 
colony forming cells': 8144, 'extreme capsule fiber complex': 8145, 'quantitative thermal testing': 8146, 'quantitative trait transcript': 8147, 'parallel deforming mesh algorithms': 8148, 'Punjab Disaster Management Authority': 8149, 'fluid flow network': 8150, 'flexion from NP': 8151, 'fluorescent focus neutralization': 8152, 'bone porcine block': 8153, 'Bi Profile Bayesian': 8154, 'Secure similar document detection': 8155, 'standard spray dried dispersion': 8156, 'Network Intrusion Detection System': 8157, 'National Income Dynamics Study': 8158, 'Notifiable Infectious Diseases Surveillance': 8159, 'Single Frequency Network': 8160, 'Splice Function Networks': 8161, 'small fiber neuropathy': 8162, 'Location Management Unit': 8163, 'Lost Mound Unit': 8164, 'local measuring unit': 8165, 'Ludwig Maximilians University': 8166, 'greenhouse area network': 8167, 'Giant Axonal Neuropathy': 8168, 'Gene association network': 8169, 'GINS associated nuclease': 8170, 'Average Bit Error Rate': 8171, 'annual blood examination rate': 8172, 'modular accident analysis program': 8173, 'Multi atlas annotation procedure': 8174, 'squeeze film dampers': 8175, 'survival factor deprivation': 8176, 'snake fungal disease': 8177, 'Spray freeze drying': 8178, 'environmental flow component': 8179, 'enzyme fragment complementation': 8180, 'entropy focus criterion': 8181, 'Energy Dissipative Bracing': 8182, 'extra domain B': 8183, 'extensor digitorum brevis': 8184, 'Java Agent Development Environment': 8185, 'Joint Asia Diabetes Evaluation': 8186, 'Modified Constructed Analog Method': 8187, 'melanoma cell adhesion molecule': 8188, 'zero vibration derivative': 8189, 'Zika virus disease': 8190, 'roadside backfill body': 8191, 'rank based broadcast': 8192, 'Remazol Brilliant Blue': 8193, 'right biceps brachii': 8194, 'red black banded': 8195, 'filter then backproject': 8196, 'Fritillariae Thunbergii Bulbus': 8197, 'Smart Utility Network': 8198, 'serum urea nitrogen': 8199, 'Standard Uveitis Nomenclature': 8200, 'trapezoidal quadrature formula': 8201, 'triterpene quinone fraction': 8202, 'electron energy loss': 8203, 'external elastic lamina': 8204, 'Enhancer Element Locator': 8205, 'electron extraction layer': 8206, 'independent orbital approximation': 8207, 'inner optic anlagen': 8208, 'economic injury level': 8209, 'Ethylene insensitive like': 8210, 'environment inducible loci': 8211, 'catabolite control protein A': 8212, 'Chronic cavitary pulmonary aspergillosis': 8213, 'Deep ocean water': 8214, 'day of week': 8215, 'Integrated Microbial Genome': 8216, 'insertionally mutated gene': 8217, 'International Medical Graduate': 8218, 'intussusceptive microvascular growth': 8219, 'Urea Fructose Oatmeal': 8220, 'used frying oil': 8221, 'UNUSUAL FLORAL ORGANS': 8222, 'Citrus processing waste': 8223, 'Coffee pulp wastes': 8224, 'd lactate dehydrogenase': 8225, 'disintegrin like domain': 8226, 'diagonal linear discriminant': 8227, 'Deterministic lateral displacement': 8228, 'drusen like deposits': 8229, 'Non invasive ventilation': 8230, 'nodule inducing virus': 8231, 'Oxygen reserve index': 8232, 'Over Representation Index': 8233, 'Outbreak Response Immunizations': 8234, 'Berkeley Segmentation Data Set': 8235, 'Bipolar Spectrum Diagnostic Scale': 8236, 'OTC imprinted polymer': 8237, 'Object in place': 8238, 'gas oil ratio': 8239, 'groove of Ranvier': 8240, 'generalized odds ratio': 8241, 'Garnier Osguthorpe Robson': 8242, 'water based mud': 8243, 'whole bone marrow': 8244, 'coated iron oxide': 8245, 'Confidence Information 
Ontology': 8246, 'Cyanide Insensitive Oxidase': 8247, 'thick adherend shear test': 8248, 'transposon assisted signal trapping': 8249, 'polymeric nanoparticle micelles': 8250, 'plastic network model': 8251, 'Physiological Noise Model': 8252, 'average scan height': 8253, 'achaete scute homologue': 8254, 'asymmetric septal hypertrophy': 8255, 'allelic sequence heterozygosity': 8256, 'acute subdural hematoma': 8257, 'diacyl amino acid sodium': 8258, 'Duke Abdominal Assessment Scale': 8259, 'Hierarchical porous molecular sieve': 8260, 'home patient monitoring system': 8261, 'ring opening metathesis polymerization': 8262, 'regularization orthogonal matching pursuit': 8263, 'Ndop Meteoric Water Line': 8264, 'nominal molecular weight limit': 8265, 'Weighted mean value': 8266, 'Watermelon mosaic virus': 8267, 'White matter volume': 8268, 'hydraulic loading rate': 8269, 'H lyrata roots': 8270, 'High Level Resistance': 8271, 'kola nut pod raw': 8272, 'kernel number per row': 8273, 'Hydrological water budget': 8274, 'hot water brushing': 8275, 'hand washing bag': 8276, 'recommended limit value': 8277, 'renal limited vasculitis': 8278, 'relative larval viability': 8279, 'Ranchi Urban Agglomeration': 8280, 'Radial ulnar angle': 8281, 'water hyacinth biomass': 8282, 'Western Hudson Bay': 8283, 'limonene epoxide hydrolase': 8284, 'luminal epithelial height': 8285, 'Long Evans Hooded': 8286, 'left end hairpin': 8287, 'treated ginger waste': 8288, 'thousand grain weight': 8289, 'Water Research Institute': 8290, 'WASH Resource Index': 8291, 'whole root immersion': 8292, 'Fusarium head blight': 8293, 'Fagara heitzii barks': 8294, 'F graminearum species complex': 8295, 'Fungal Genetics Stock Center': 8296, 'female germline stem cell': 8297, 'mature spore mother cell': 8298, 'mesenteric smooth muscle cells': 8299, 'multiple sequentially Markovian coalescent': 8300, 'empty vector preparation': 8301, 'Episcleral venous pressure': 8302, 'single radial enzyme diffusion': 8303, 'Sleep related eating disorder': 8304, 'Graze and burn': 8305, 'Gene Assisted Breeding': 8306, 'globular actin binding': 8307, 'nearest neighbor distance': 8308, 'New Nordic Diet': 8309, 'network node dispersion': 8310, 'normalized node degree': 8311, 'boundary value problem': 8312, 'Blood Volume Pulse': 8313, 'South African Mutton Merino': 8314, 'severe acute maternal morbidity': 8315, 'fractal uncertainty principle': 8316, 'follow up period': 8317, 'quasi normal modes': 8318, 'quantitative nanomechanical mapping': 8319, 'mean fire return interval': 8320, 'Mt Fuji Research Institute': 8321, 'Greater Yellowstone Ecosystem': 8322, 'glucose yeast extract': 8323, 'Natural Forest Management Plan': 8324, 'non ferrous metal processing': 8325, 'total belowground biomass': 8326, 'Terpenoid Backbone Biosynthesis': 8327, 'Trypanosoma brucei brucei': 8328, 'rice bran oil': 8329, 'rank biased overlap': 8330, 'Atlantic white cedar': 8331, 'anaerobic work capacity': 8332, 'No Observed Effect Concentration': 8333, 'normal ovarian epithelial cell': 8334, 'National Agricultural Statistics Service': 8335, 'North American Spine Score': 8336, 'Temple Northeastern Birmingham': 8337, 'Terra Nova Bay': 8338, 'wet weight basis': 8339, 'white wheat bread': 8340, 'Southern Swedish Malignant Melanoma': 8341, 'sterically stabilized mixed micelles': 8342, 'Physical Properties Measurement System': 8343, 'primary progressive multiple sclerosis': 8344, 'Dynamical network biomarkers': 8345, 'Diabetic neurogenic bladder': 8346, 'Dothistroma needle blight': 8347, 'dorsal 
noradrenergic bundle': 8348, 'Illinois Rape Myth Acceptance': 8349, 'Impedance Ratio Modulus Analyzer': 8350, 'Iterative refinement meta assembler': 8351, 'reciprocity rewards game': 8352, 'Redundancy Reduced Gossip': 8353, 'Resource Response Graph': 8354, 'relative root growth': 8355, 'Lifelong machine learning': 8356, 'large mixed linker': 8357, 'log marginal likelihood': 8358, 'Biological Regulatory Network': 8359, 'biochemical reaction network': 8360, 'basal retinal neurons': 8361, 'biasing related negativity': 8362, 'Clustering Embedded Network Inference': 8363, 'Conjunctive Exploratory Navigation Interface': 8364, 'Rank probability skill score': 8365, 'rat pup severity score': 8366, 'Complex Adaptive Systems Modeling': 8367, 'clavicular air sac membrane': 8368, 'social group optimization': 8369, 'seafloor geodetic observation': 8370, 'switching graphene oxide': 8371, 'sweat gland output': 8372, 'Semantic Gene Organizer': 8373, 'Resource Super Graph': 8374, 'regulated secretion granules': 8375, 'Nested Ripple Down Rules': 8376, 'non reference discordance rate': 8377, 'Earthquake Research Institute': 8378, 'Early Response Index': 8379, 'ESA resistance index': 8380, 'effort reward imbalance': 8381, 'erythropoietin responsiveness index': 8382, 'polarity inversion line': 8383, 'Palatal Interalveolar Length': 8384, 'Primary intestinal lymphangiectasia': 8385, 'retained knowledge rate': 8386, 'Rydberg Klein Rees': 8387, 'systolic pulmonary arterial pressure': 8388, 'secreted placental alkaline phosphatase': 8389, 'coseismic volumetric strain changes': 8390, 'capsid vertex specific component': 8391, 'near infrared mapping spectrometer': 8392, 'nanostructure initiator mass spectrometry': 8393, 'nearly identical maximal substrings': 8394, 'Ionospheric Alfven Resonator': 8395, 'injured alveoli rate': 8396, 'probabilistic seismic hazard analysis': 8397, 'Probabilistic Seismic Hazard Assessment': 8398, 'volume mixing ratio': 8399, 'variably methylated region': 8400, 'Ventral midline region': 8401, 'Visual Motor Response': 8402, 'Vacacaí Mirim River': 8403, 'noise equivalent magnetic induction': 8404, 'National Environmental Methods Index': 8405, 'two way traveltimes': 8406, 'timed walk test': 8407, 'ruptures concentration zone': 8408, 'rostral cingulate zone': 8409, 'latest slip zone': 8410, 'Low Suitability Zone': 8411, 'very low Ti': 8412, 'Ventral longitudinal tract': 8413, 'Very Large Telescope': 8414, 'vesicle lysis test': 8415, 'video lottery terminal': 8416, 'volcanic explosivity index': 8417, 'Ventilation efficiency index': 8418, 'strong ground motion': 8419, 'scaled Gauss metric': 8420, 'Spatial Grewia Model': 8421, 'slow growing mycobacteria': 8422, 'sporozoite gliding motility': 8423, 'West Philippine Basin': 8424, 'whole plant branching': 8425, 'Weibel Palade body': 8426, 'linear dispersive wave': 8427, 'low density woods': 8428, 'Lower Deep Water': 8429, 'leaf dry weight': 8430, 'first order reversal curve': 8431, 'Fish Oil Refed Control': 8432, 'peak ground velocity': 8433, 'Phaeocystis globosa virus': 8434, 'virtual geomagnetic poles': 8435, 'vertical growth phase': 8436, 'vertebral growth plate': 8437, 'saturation isothermal remanent magnetization': 8438, 'Stable isotope resolved metabolomics': 8439, 'digital all sky imager': 8440, 'Dryness Area Severity Index': 8441, 'Duke Activity Status Index': 8442, 'days after stress imposition': 8443, 'database assisted structure identification': 8444, 'Global Muon Detector Network': 8445, 'Global Medical Device Nomenclature': 8446, 'Global 
Scale Wave Model': 8447, 'Graphical Sliding Window Method': 8448, 'pre stack depth migration': 8449, 'plastic stress distribution method': 8450, 'Ultra low frequency': 8451, 'unit length filaments': 8452, 'Unit Local Frame': 8453, 'unhealthy lifestyle factors': 8454, 'ballistic cluster cluster aggregate': 8455, 'bilateral common carotid artery': 8456, 'British Columbia Cancer Agency': 8457, 'Spherical Elementary Current Systems': 8458, 'steam exploded corn stover': 8459, 'westward traveling surge': 8460, 'whole transcriptome sequencing': 8461, 'wisdom tooth surgery': 8462, 'Space Vehicle Number': 8463, 'smallest valued neighbour': 8464, 'superior vestibular nerve': 8465, 'zenith wet delays': 8466, 'Zhen Wu Decoction': 8467, 'air saturated water': 8468, 'average silhouette width': 8469, 'artificial sea water': 8470, 'Albuquerque study well': 8471, 'amorphous solid water': 8472, 'Longitudinal Valley fault': 8473, 'loading variable factors': 8474, 'lower visual field': 8475, 'Lymphatic Vessel Function': 8476, 'left ventricular failure': 8477, 'High Himalaya Crystalline': 8478, 'healthy household contacts': 8479, 'niching genetic algorithm': 8480, 'non gene associated': 8481, 'Non glycolysis acidification': 8482, 'moving window admittance technique': 8483, 'mesenteric white adipose tissue': 8484, 'China Seismo Electromagnetic Satellite': 8485, 'chronic stress escape strategy': 8486, 'COPD self efficacy scale': 8487, 'chronically sun exposed skin': 8488, 'Main Zagros Thrust': 8489, 'maternal zygotic transition': 8490, 'bottom turbid layer': 8491, 'Bryothamnion triquetrum lectin': 8492, 'bilateral tubal ligation': 8493, 'biceps tendon lengthening': 8494, 'branch tip length': 8495, 'lattice preferred orientation': 8496, 'left posterior oblique': 8497, 'leave pair out': 8498, 'continuous operating reference station': 8499, 'cerebello oculo renal syndrome': 8500, 'principal slip zone': 8501, 'partially stabilized zirconia': 8502, 'Kofu granitic complex': 8503, 'Kernel Graph cuts': 8504, 'Kernel Granger Causality': 8505, 'Ou backbone range': 8506, 'optimal Bayesian robust': 8507, 'optimized background regimen': 8508, 'very broad band': 8509, 'Victoria blue B': 8510, 'High dark current': 8511, 'high dose chemotherapy': 8512, 'Hilbert differential contrast': 8513, 'Range Time Histogram': 8514, 'round top hexagonal': 8515, 'extreme wave event': 8516, 'exponentiated Weibull exponential': 8517, 'egg white extract': 8518, 'Regional Earthquake Likelihood Model': 8519, 'regularized extreme learning machine': 8520, 'end of irradiation': 8521, 'erasure of imprinting': 8522, 'expression of interest': 8523, 'Early onset infections': 8524, 'end of infusion': 8525, 'mantle transition zone': 8526, 'microscopic treatment zones': 8527, 'Krebs Henseleit buffer': 8528, 'Krebs henseleit bicarbonate': 8529, 'Krebs HEPES buffer': 8530, 'interactive region growing': 8531, 'IFN response genes': 8532, 'isogenic reference genome': 8533, 'immunity related GTPase': 8534, 'diethylene triamine pentaacetic acid': 8535, 'DNA to protein array': 8536, 'Diethylene Tetramine Penta Acetate': 8537, 'Sandstone Ridge Woodlands': 8538, 'supervised random walks': 8539, 'standardised regression weight': 8540, 'synthetic reference wave': 8541, 'total liver function': 8542, 'target lesion failure': 8543, 'TBP like factor': 8544, 'trypanosome lytic factors': 8545, 'pedagogical content knowledge': 8546, 'proto Calepineae karyotype': 8547, 'wall thinning ratio': 8548, 'with the rule': 8549, 'Hydrotreated vegetable oils': 8550, 'Hawaiian Volcano 
Observatory': 8551, 'digital reference object': 8552, 'Daintree Rainforest Observatory': 8553, 'DHA rich oil': 8554, 'substitute natural gas': 8555, 'synthetic natural gas': 8556, 'segmented neutrophilic granulocytes': 8557, 'single network gyroid': 8558, 'seizure onset zone': 8559, 'serum opsonized zymosan': 8560, 'hepatic extraction fraction': 8561, 'hemoglobin enhancement factor': 8562, 'Hemagglutinin Esterase Fusion': 8563, 'human esophageal fibroblast': 8564, 'human embryonic fibroblasts': 8565, 'Ernest Henry Mine': 8566, 'equine herpesvirus myeloencephalopathy': 8567, 'extra haustorial membrane': 8568, 'defect discrimination power': 8569, 'Dyadic Developmental Psychotherapy': 8570, 'pollutant mixing zone': 8571, 'proximal mitotic zone': 8572, 'posterior marginal zone': 8573, 'Quantum Fisher Information': 8574, 'quantitative functional index': 8575, 'optical ground station': 8576, 'orthologous genomic segment': 8577, 'overall Gleason score': 8578, 'official gene set': 8579, 'Objective Grading System': 8580, 'photothermal cantilever deflection spectroscopy': 8581, 'Preventable Chronic Disease Strategy': 8582, 'molecular beam mass spectrometry': 8583, 'multimedia broadcast multicast service': 8584, 'variable search window': 8585, 'variable span wing': 8586, 'Generalized Polynomial Hammerstein': 8587, 'glycoside pentoside hexuronide': 8588, 'Garissa Provincial Hospital': 8589, 'minimum shift keying': 8590, 'medullary sponge kidney': 8591, 'inter block interference': 8592, 'inter beat interval': 8593, 'iterative Boltzmann inversion': 8594, 'invasive bacterial infections': 8595, 'False Negative Error': 8596, 'free nerve endings': 8597, 'free nuclear endosperm': 8598, 'Predictive Mean Opinion Score': 8599, 'Polymerase mediated oligonucleotide synthesis': 8600, 'window of interest': 8601, 'window of implantation': 8602, 'isotropic orthogonal transform algorithm': 8603, 'International Ovarian Tumour Analysis': 8604, 'associative transfer entropy matrix': 8605, 'Analytical transmission electron microscopy': 8606, 'Phys Rev Lett': 8607, 'primary root length': 8608, 'probabilistic record linkage': 8609, 'preferred retinal locus': 8610, 'Dynamic Fuzzy Inference System': 8611, 'dual fluoroscopic imaging system': 8612, 'potential scale reduction factor': 8613, 'pedicle screw rod fixation': 8614, 'hybrid input output': 8615, 'Health information orientation': 8616, 'sea clutter constituent synthesis': 8617, 'self controlled case series': 8618, 'linear input network': 8619, 'Local Interconnect Network': 8620, 'lobular intraepithelial neoplasia': 8621, 'Lupus Interactive Navigator': 8622, 'Australian War Memorial': 8623, 'abdominal wall movement': 8624, 'association weight matrix': 8625, 'Arterial wall motion': 8626, 'acidified waste milk': 8627, 'symmetric quantized gossip': 8628, 'student question generation': 8629, 'Sediment Quality Guidelines': 8630, 'random geometric graph': 8631, 'R GUI Generator': 8632, 'media aware network element': 8633, 'major adverse neurological event': 8634, 'adaptive spatio temporal accumulation': 8635, 'Accumulative Short Term Autocorrelation': 8636, 'position velocity measured': 8637, 'Particle Vision Microscope': 8638, 'parasitophorous vacuolar membrane': 8639, 'Porcine vaginal mucosa': 8640, 'integrated cubic phase function': 8641, 'ion conductive polymer film': 8642, 'migration through resolution cells': 8643, 'mixed tissue ratiometric controls': 8644, 'higher order truncation': 8645, 'Human Organization Technology': 8646, 'Hypertension Optimal Treatment': 8647, 
'healthy ovarian tissue': 8648, 'human oral taxon': 8649, 'hand gesture recognition': 8650, 'height growth rate': 8651, 'cascaded pixel domain transcoding': 8652, 'cold pain detection threshold': 8653, 'enhanced direct memory access': 8654, 'EmbryoGENE DNA methylation analysis': 8655, 'self adaptive frame enhancer': 8656, 'Survivor Activating Factor Enhancement': 8657, 'block term decomposition': 8658, 'bursa tract diverticulum': 8659, 'beta trefoil domain': 8660, 'Brightness Temperature Difference': 8661, 'fractional delay filter': 8662, 'fetal dermal fibroblasts': 8663, 'Fixed dose fortification': 8664, 'solid state power amplifier': 8665, 'secondary structure prediction accuracy': 8666, 'second strand primer adaptor': 8667, 'wireless tiny sensor network': 8668, 'weighted tissue specific network': 8669, 'unitary tensor ESPRIT': 8670, 'upper thoracic esophagus': 8671, 'Grassmann graph embedding': 8672, 'genetic generalised epilepsy': 8673, 'gradient gel electrophoresis': 8674, 'generalized Gibbs ensemble': 8675, 'Greek Goat Encephalitis': 8676, 'new edge dependent deinterlacing': 8677, 'nano enabled drug delivery': 8678, 'prune and search algorithm': 8679, 'proximal articular set angle': 8680, 'posterior acetabular sector angle': 8681, 'three step search technique': 8682, 'toxic shock syndrome toxin': 8683, 'Trier Social Stress Test': 8684, 'graph based transform': 8685, 'gefitinib based therapy': 8686, 'Green Bank Telescope': 8687, 'weighted erasure decoding': 8688, 'warped extra dimension': 8689, 'written emotional disclosure': 8690, 'pseudo junction tree': 8691, 'Plyometric jump training': 8692, 'independent vector analysis': 8693, 'Ingenuity Variant Analysis': 8694, 'pilot assisted channel estimation': 8695, 'proteosome associated control element': 8696, 'Permutation Achieved Classification Error': 8697, 'heavy tail distribution': 8698, 'Heat Temperature Deviation': 8699, 'heart transplantation donors': 8700, 'high threshold detector': 8701, 'Herd Test Day': 8702, 'non separable lifting schemes': 8703, 'National Synchrotron Light Source': 8704, 'local sense scattering function': 8705, 'Liquid static surface fermentation': 8706, 'local quadratic periodogram': 8707, 'local quantized patterns': 8708, 'logarithmic quadratic proximal': 8709, 'single user bound': 8710, 'Single Use Bioreactor': 8711, 'sterile urine bag': 8712, 'minimum distance template selection': 8713, 'Metered dose transdermal spray': 8714, 'negated log likelihood': 8715, 'narrow leaf lupin': 8716, 'directional gain deviation': 8717, 'differential group delay': 8718, 'cepstral histogram normalization': 8719, 'Chinese herbs nephropathy': 8720, 'Complex Heterogeneous Network': 8721, 'Congenital Hypomyelination Neuropathy': 8722, 'Centre Hospitalier National': 8723, 'locality sensitive hashing': 8724, 'lymphoid specific helicase': 8725, 'Less Suitable Habitat': 8726, 'prior biological knowledge': 8727, 'physiologically based kinetic': 8728, 'PDZ Binding Kinase': 8729, 'probabilistic Boolean network': 8730, 'peptide based nanoparticles': 8731, 'primary branch number': 8732, 'phenyl butyl nitrone': 8733, 'Query success rate': 8734, 'quick service restaurants': 8735, 'non reactive obstacles': 8736, 'nuclear run on': 8737, 'dynamic reconfigurable hardware': 8738, 'Dessie Referral Hospital': 8739, 'deliquescence relative humidity': 8740, 'digital video port': 8741, 'Digital Velocity Pulse': 8742, 'dynamic vascular pattern': 8743, 'iterative multiview side information': 8744, 'intracytoplasmic morphological sperm injection': 8745, 
'curvature aided circle detector': 8746, 'central areolar choroidal dystrophy': 8747, 'binary background image': 8748, 'Bergen Burnout Indicator': 8749, 'broken boundary index': 8750, 'Bowman Birk inhibitor': 8751, 'bovine blood index': 8752, 'augmented Lagrangian projection method': 8753, 'anterior lateral plate mesoderm': 8754, 'completely augmented Lagrangian method': 8755, 'Cancer and Living Meaningfully': 8756, 'complementation activated light microscopy': 8757, 'Carotid Arterial Longitudinal Motion': 8758, 'hybrid knowledge guided': 8759, 'house keeping gene': 8760, 'video quality assessment': 8761, 'variance quadtree algorithm': 8762, 'quantization constraint set': 8763, 'Queen Charlotte Strait': 8764, 'active content collaboration platform': 8765, 'amorphous carbonated calcium phosphate': 8766, 'against cyclic citrullinated peptide': 8767, 'mobility robustness optimization': 8768, 'mature reproductive organs': 8769, 'MHC Restriction Ontology': 8770, 'Muscle receptor organ': 8771, 'one way relaying': 8772, 'open wedge resection': 8773, 'Joint iterative decoding': 8774, 'Jianan Irrigation District': 8775, 'JAZ interacting domain': 8776, 'automatic collision notification': 8777, 'amygdala central nucleus': 8778, 'allele copy number': 8779, 'anterior cortex nail': 8780, 'mean squared inner product': 8781, 'miniaturized sandwich immunoassay platform': 8782, 'China Mobile Multimedia Broadcasting': 8783, 'Cryopreserved Mutant Mouse Bank': 8784, 'Single Hop Multicast Maximization': 8785, 'spent hydrolysate model medium': 8786, 'remote radio heads': 8787, 'Retinal racemose hemangioma': 8788, 'saving energy clustering algorithm': 8789, 'Sulphur Emission Control Area': 8790, 'most reliable basis': 8791, 'multidrug resistant bacteria': 8792, 'mitochondria reaction buffer': 8793, 'Major River Basin': 8794, 'sparsity adaptive matching pursuit': 8795, 'Single Assembler Multiple Parameter': 8796, 'Usage Environment Description': 8797, 'Ultrafast electron diffraction': 8798, 'upper electron detector': 8799, 'network allocation vector': 8800, 'NHEJ assay vector': 8801, 'normalized angular velocity': 8802, 'N acetyl valine': 8803, 'and key agreement': 8804, 'Aurora kinase A': 8805, 'optimal channel independent': 8806, 'open chromatin index': 8807, 'Vehicular Mobility Pattern': 8808, 'ventral mesenchymal pad': 8809, 'Line based data dissemination': 8810, 'Ligand based drug design': 8811, 'user selection switch': 8812, 'Uptake signal sequences': 8813, 'urinary symptom score': 8814, 'Upshaw Schulman syndrome': 8815, 'Go back N': 8816, 'Gaussian Bayesian networks': 8817, 'adapted soft frequency reuse': 8818, 'Age Specific Fertility Rates': 8819, 'distributed admission control protocol': 8820, 'dynamic adjust contention period': 8821, 'trusted computing group': 8822, 'tropical cyclone genesis': 8823, 'training care group': 8824, 'transparent conducting glass': 8825, 'Welsh Bound Equality': 8826, 'Wheat bran extract': 8827, 'Adaptive Random Early Detection': 8828, 'advanced resistive exercise device': 8829, 'independent identically distributed': 8830, 'Infectious intestinal disease': 8831, 'Integrated Interactions Database': 8832, 'user dissatisfaction ratio': 8833, 'undetermined death rate': 8834, 'upstream distribution range': 8835, 'Quantized co phasing': 8836, 'quantum critical point': 8837, 'quality control panel': 8838, 'Binaural Cue Physiological Perception Model': 8839, 'bipolar chaotic pulse position modulation': 8840, 'discrete uniform distribution': 8841, 'directory useful decoys': 8842, 'User 
Under Test': 8843, 'upper urinary tract': 8844, 'Universal Unitarity Triangle': 8845, 'IP packet error rate': 8846, 'irradiation promoted exchange reaction': 8847, 'Bit Interleaving Diversity': 8848, 'Bep intracellular delivery': 8849, 'bis in die': 8850, 'benign inflammatory dermatoses': 8851, 'load adaptive power control': 8852, 'Locally advanced pancreatic cancer': 8853, 'Explicit Loss Notification': 8854, 'Ectopic lymphoid neogenesis': 8855, 'expand leaf number': 8856, 'European Leukemia Net': 8857, 'Weighted Fair Opportunistic': 8858, 'with fish oil': 8859, 'random way point': 8860, 'residual whey permeate': 8861, 'Simplified Gateway Selection Scheme': 8862, 'Second Generation Surveillance System': 8863, 'indirect learning architecture': 8864, 'interstitial lung abnormalities': 8865, 'interventional lung assist': 8866, 'minimum power configuration protocol': 8867, 'microwave pretreatment cold pressing': 8868, 'QoS Class Identifier': 8869, 'Queen Charlotte Islands': 8870, 'turbo trellis coded modulation': 8871, 'Transmural Trauma Care Model': 8872, 'Asymmetric sender receiver connected': 8873, 'Atmospheric Sciences Research Center': 8874, 'Reduced Domain Neighborhood': 8875, 'reliability density neighbourhood': 8876, 'Robust Difference Normalization': 8877, 'zero forcing beamforming': 8878, 'zinc finger B': 8879, 'Load Based Power Saving': 8880, 'low boiling point solvent': 8881, 'Directed Flooding Routing Protocol': 8882, 'DRG family regulatory protein': 8883, 'adaptive window algorithm': 8884, 'adult worm antigen': 8885, 'hybrid peak windowing': 8886, 'hundred pod weight': 8887, 'iterative water filling': 8888, 'item writing flaws': 8889, 'nonuniform linear array': 8890, 'National Lipid Association': 8891, 'normalized lamellipodia area': 8892, 'noiseless linear amplifier': 8893, 'electro optical electrical': 8894, 'Early Overt Encephalopathy': 8895, 'enamel organ epithelia': 8896, 'group addressed transmission service': 8897, 'Global Adult Tobacco Survey': 8898, 'power user sharing': 8899, 'peripheral urinary space': 8900, 'range expansion bias': 8901, 'Research Ethics Board': 8902, 'probabilistic link reliable time': 8903, 'penalized likelihood ratio test': 8904, 'earliest expiry first': 8905, 'end expiratory flow': 8906, 'external electric field': 8907, 'extra embryonic fluid': 8908, 'early exoerythrocytic form': 8909, 'Vehicular Intelligent Monitoring System': 8910, 'virtual interactive musculoskeletal system': 8911, 'Urban Distribution Centre': 8912, 'Usage Data Collector': 8913, 'Uncentrifuged diluted control': 8914, 'power spectrum blind sampling': 8915, 'positive standard bacterial strains': 8916, 'serial concatenated convolutional codes': 8917, 'Small cell cervical carcinoma': 8918, 'content delivery network': 8919, 'crop duct nerves': 8920, 'frequency division multiple access': 8921, 'first dorsal metacarpal artery': 8922, 'multi non binary': 8923, 'median neurite bundle': 8924, 'Nearest Neighbor First': 8925, 'nearest neighbor fraction': 8926, 'multi utility vehicle': 8927, 'maximum unbiased validation': 8928, 'Revenue Passenger Kilometres': 8929, 'reads per kilobase': 8930, 'Knowledge Retention Rates': 8931, 'kernel ridge regression': 8932, 'Kocks Mecking Estrin': 8933, 'Korean mistletoe extract': 8934, 'One active system impaired': 8935, 'obstetric anal sphincter injuries': 8936, 'Ecosystem Valuation Toolkit': 8937, 'expected visiting time': 8938, 'endoscopic vacuum therapy': 8939, 'Focus Group Discussion': 8940, 'Familial glucocorticoid deficiency': 8941, 'flue gas 
desulfurization': 8942, 'Functional genome distribution': 8943, 'environmental life cycle assessment': 8944, 'excimer laser coronary atherectomy': 8945, 'social life cycle assessment': 8946, 'single linkage cluster analysis': 8947, 'landslide number density': 8948, 'lymph node dissection': 8949, 'Lesch Nyhan Disease': 8950, 'low nucleosome density': 8951, 'urban structure types': 8952, 'Universal Screening Test': 8953, 'United States Trichrome': 8954, 'burned area reflectance characterization': 8955, 'bottom anti reflective coating': 8956, 'Bleeding Academic Research Consortium': 8957, 'vertical wind shear': 8958, 'Von Willebrand syndrome': 8959, 'synthetic clay content logs': 8960, 'Singareni Collieries Company Ltd': 8961, 'Upper Rhine Graben': 8962, 'urgency related groups': 8963, 'July August September': 8964, 'Jerusalem artichoke stalk': 8965, 'Japan Atherosclerosis Society': 8966, 'juvenile ankylosing spondylitis': 8967, 'controlled high flow experiments': 8968, 'Congestive heart failure effusion': 8969, 'Baden Baden Zone': 8970, 'Bothnian Bay Zone–Finnmark': 8971, 'South East Asian Region': 8972, 'surfactant enhanced aquifer remediation': 8973, 'borehole thermal resistance': 8974, 'benzodithiophene terthiophene rhodanine': 8975, 'basal translational readthrough': 8976, 'BLM TOPOIIIα RMI1/2': 8977, 'top of hole': 8978, 'the Ottawa Hospital': 8979, 'electron grid bias': 8980, 'eosinophilic granular bodies': 8981, 'Early Golden Bantam': 8982, 'Availability Work Products': 8983, 'Average Wholesale Price': 8984, 'aboveground wood production': 8985, 'virtual presentation board': 8986, 'ventricular premature beats': 8987, 'Prescription Drug Monitoring Program': 8988, 'piecewise deterministic Markov process': 8989, 'Horizontal Obstacle Moving': 8990, 'Hong Ou Mandel': 8991, 'Double Echo Steady State': 8992, 'Digital Evaluation Score System': 8993, 'atypical lobular hyperplasia': 8994, 'after larval hatching': 8995, 'secondary alveolar bone grafting': 8996, 'senescence associated beta galactosidase': 8997, 'transcatheter heart valves': 8998, 'total heart volume': 8999, 'terminal hepatic venule': 9000, 'average perfused speed indicator': 9001, 'average pairwise sequence identity': 9002, 'anterior posterior stability index': 9003, 'unilateral lung ventilation': 9004, 'Ultra low volume': 9005, 'upper limb vibration': 9006, 'Genes differentially expressed': 9007, 'glycogen debranching enzyme': 9008, 'Gene Dynamics Events': 9009, 'gene dosage effect': 9010, 'gas diffusion electrodes': 9011, 'Diffusion kurtosis imaging': 9012, 'DAG kinase inhibitor': 9013, 'Effective lung volume': 9014, 'extra lymphoid vein': 9015, 'Exosome like vesicle': 9016, 'ventilator associated respiratory infections': 9017, 'Van Andel Research Institute': 9018, 'variable axial load': 9019, 'voluntary activation level': 9020, 'Venom allergen like': 9021, 'Optic nerve ratio': 9022, 'Overall Non Responders': 9023, 'double web angle': 9024, 'daily weighted average': 9025, 'alkali activated binder': 9026, 'animal associated bacteria': 9027, 'acetic acid bacteria': 9028, 'ascending aortic banding': 9029, 'coal dust explosibility meter': 9030, 'Civil Defence Emergency Management': 9031, 'ground water index': 9032, 'Gulf War Illness': 9033, 'global warming impact': 9034, 'neural network modeling': 9035, 'non neoplastic mucosa': 9036, 'normal nasopharyngeal mucosal': 9037, 'integrated gasification combined cycle': 9038, 'International Germ Cell Consensus': 9039, 'underground coal gasification': 9040, 'usual care group': 9041, 
'untreated control group': 9042, 'wet flue gas desulfurization': 9043, 'white fronted goose days': 9044, 'Brazilian notched disc': 9045, 'bubble number densities': 9046, 'Benzoyl naphthoyl DEAE': 9047, 'plan do check act/adjust': 9048, 'Plan Do Check Act': 9049, 'posterior descending coronary artery': 9050, 'inferior vena cava diameter': 9051, 'intra ventricular conduction delays': 9052, 'residual forestry biomass': 9053, 'replication fork barrier': 9054, 'Rhizoctonia foliar blight': 9055, 'area under study': 9056, 'artificial urinary sphincter': 9057, 'Assembled Unique sequences': 9058, 'residual gas fraction': 9059, 'radial glandular fraction': 9060, 'reduced growth factor': 9061, 'liquefied natural gas': 9062, 'lateral nasal glands': 9063, 'olive root extract': 9064, 'oligopeptide repeat expansion': 9065, 'oleate response element': 9066, 'Hydrogen transfer index': 9067, 'HAMP triggered immunity': 9068, 'home training interface': 9069, 'high throughput imaging': 9070, 'health test index': 9071, 'phthalocyanine green aluminum pigment': 9072, 'Platelet Genes and Physiology': 9073, 'Zambales Ophiolite Complex': 9074, 'Zhongshan Ophthalmic Center': 9075, 'Tapioca starch wastewater': 9076, 'thousand seed weight': 9077, 'Post Employment Wage': 9078, 'protein energy wasting': 9079, 'Workplaces with Stipends': 9080, 'Walker Warburg syndrome': 9081, 'Direct friction stir processing': 9082, 'diffusive finite state projection': 9083, 'total fresh weight': 9084, 'trophic factor withdrawal': 9085, 'gross enrolment ratio': 9086, 'greater epithelial ridge': 9087, 'gastric emptying rate': 9088, 'granular endoplasmic reticulum': 9089, 'Giberella ear rot': 9090, 'variable range hopping': 9091, 'Voigt Reuss Hill': 9092, 'order of selection': 9093, 'out of specification': 9094, 'Ocoxin Oral Solution': 9095, 'Indonesian Family Life Survey': 9096, 'iterated fast local search': 9097, 'Community Structure Activity Resource': 9098, 'Cardiac sympathetic afferent reflex': 9099, 'World Drug Index': 9100, 'Water Deficit Index': 9101, 'Scottish Environment Protection Agency': 9102, 'superficial external pudendal artery': 9103, 'Ultrafast Shape Recognition': 9104, 'Upstream Stalling Region': 9105, 'unique sequence read': 9106, 'unoccupied surface resonance': 9107, 'Computer Aided Structure Elucidation': 9108, 'Creating Active School Environments': 9109, 'global technological change': 9110, 'green tea catechins': 9111, 'Unit labor cost': 9112, 'upper lateral cartilages': 9113, 'Upper Limb Conventional': 9114, 'ultra long chain': 9115, 'relative enrichment factor': 9116, 'rat embryo fibroblasts': 9117, 'rubber elongation factor': 9118, 'relative expression factor': 9119, 'rotating electric fields': 9120, 'Medial Anterior Root Attachment': 9121, 'Motif Activity Response Analysis': 9122, 'Mobile Autism Risk Assessment': 9123, 'ligament of Wrisberg': 9124, 'Length of working': 9125, 'lamina outer width': 9126, 'descending genicular artery': 9127, 'dental general anaesthetic': 9128, 'upper transverse artery': 9129, 'Unilateral Transtibial Amputation': 9130, 'Ultrafast transient absorption': 9131, 'infrapatellar saphenous nerve': 9132, 'In Situ Nanoprobe': 9133, 'High tibial osteotomy': 9134, 'heat tolerable only': 9135, 'Government Net Revenue': 9136, 'gene name recognition': 9137, 'Gram negative rods': 9138, 'Interactive Voice Response': 9139, 'Internal vibrational relaxation': 9140, 'Influenza Virus Resource': 9141, 'in vitro release': 9142, 'Adaptive Query Processing': 9143, 'air quality plans': 9144, 'Indian railways 
management system': 9145, 'isotope ratio mass spectrometry': 9146, 'average annual snowfall accumulation': 9147, 'anterior acetabular sector angle': 9148, 'Interactive Traveler Information System': 9149, 'Integrated Taxonomic Information System': 9150, 'home based work': 9151, 'healthy body weight': 9152, 'Gravitational N Body Problem': 9153, 'guanine nucleotide binding protein': 9154, 'gram negative binding protein': 9155, 'video serving office': 9156, 'valve sparing operation': 9157, 'Air Quality Health Index': 9158, 'anomalous quantum Hall insulator': 9159, 'proximal fibular anatomical axis': 9160, 'plasma free amino acid': 9161, 'hereditary peripheral neuropathies': 9162, 'Human Powered Nebulizer': 9163, 'Hybrid Petri Net': 9164, 'Fuchs’ uveitis syndrome': 9165, 'First unprovoked seizure': 9166, 'focused ultrasound surgery': 9167, 'herpetic anterior uveitis': 9168, 'health app use': 9169, 'indirect electro pneumatic': 9170, 'intron encoded protein': 9171, 'Internationally Educated Physiotherapist': 9172, 'iso electric point': 9173, 'IRES EGFP polyA': 9174, 'ultimate failure load': 9175, 'upper forest line': 9176, 'photoreceptor outer segment tips': 9177, 'probabilities of surviving treatment': 9178, 'Pacific Ocean Shelf Tracking': 9179, 'water alternating gas': 9180, 'Whelan and Goldman': 9181, 'Wistar Albino Glaxo': 9182, 'Reservoir Quality Index': 9183, 'RNA Qiality Indicator': 9184, 'RNA quality index': 9185, 'Unicompartmental knee replacement': 9186, 'Unsupervised kernel regression': 9187, 'knowledge discovery metamodel': 9188, 'known diabetes mellitus': 9189, 'application structural details repository': 9190, 'age specific death rates': 9191, 'high resolution sequence stratigraphic': 9192, 'Hybrid Relative Specificity Similarity': 9193, 'Eclipse Modeling Framework': 9194, 'external magnetic field': 9195, 'external mandibular fenestra': 9196, 'yttrium aluminum borate': 9197, 'Yayasan Akondroplasia Berdikari': 9198, 'Quick Placement Test': 9199, 'quantum phase transition': 9200, 'Mathematics Computer Based Study': 9201, 'Medicare Current Beneficiary Survey': 9202, 'solid state reaction sintering': 9203, 'spontaneous short range silencing': 9204, 'Social Support Rating Scale': 9205, 'essentially non oscillatory': 9206, 'exhaled nitric oxide': 9207, 'vapor assisted solution process': 9208, 'vasodilator associated stimulated phosphoprotein': 9209, 'umbilical cord occlusion': 9210, 'unequal crossing over': 9211, 'unilateral carotid occlusion': 9212, 'ultra conserved ortholog': 9213, 'electron multiplying charge coupled device': 9214, 'Electron Multiplied Charged Couple Device': 9215, 'height distribution histograms': 9216, 'healthy dietary habits': 9217, 'canonic spherical polyelectrolyte brushes': 9218, 'canoe shaped parasporal body': 9219, 'uncut chip thickness': 9220, 'umbilical cord tissue': 9221, 'Diluted magnetic oxides': 9222, 'Diabetic macular oedema': 9223, 'dual active layer': 9224, 'dietary acid load': 9225, 'spin on glass': 9226, 'singlet oxygen generation': 9227, 'sub oesophageal ganglion': 9228, 'Speed of germination': 9229, 'vertical transparent package': 9230, 'VTP This Protocol': 9231, 'Vascular targeting PDT': 9232, 'monolayer colloidal crystal template': 9233, 'maximum clade credibility trees': 9234, 'unipolar resistive switching': 9235, 'upstream regulating sequence': 9236, 'unweighted risk score': 9237, 'ultimate rostral segments': 9238, 'thermally reduced graphene': 9239, 'Tumour regression grades': 9240, 'tetracycline resistance genes': 9241, 'trigeminal root 
ganglion': 9242, 'metal dielectric nanocomposite': 9243, 'Midbrain dopaminergic neuron': 9244, 'multiple demand network': 9245, 'mean division number': 9246, 'Hybrid bulk heterojunction': 9247, 'HER2 Basal hybrid': 9248, 'rare earth oxides': 9249, 'resting eyes open': 9250, 'reducing end oligosaccharide': 9251, 'rosemary essential oil': 9252, 'bond angle distribution': 9253, 'branch atheromatous disease': 9254, 'brachial artery diameter': 9255, 'betaine aldehyde dehydrogenase': 9256, 'bipolar affective disorder': 9257, 'zero resistance states': 9258, 'ZPA regulatory sequence': 9259, 'Zwolle Risk Score': 9260, 'photonic quasi crystal': 9261, 'protein quality control': 9262, 'cold electron bolometers': 9263, 'children ever born': 9264, 'valence band edge': 9265, 'vanillyl butyl ether': 9266, 'extra high tension': 9267, 'Engineered heart tissues': 9268, 'estimated hearing thresholds': 9269, 'commercial expanded graphite': 9270, 'constantly expressed genes': 9271, 'Core Eukaryotic Gene': 9272, 'Bernevig Hughes Zhang': 9273, 'Basic Health Zones': 9274, 'third nearest neighbor': 9275, 'Task Negative network': 9276, 'Two phase closed thermosyphon': 9277, 'treatment planning computed tomography': 9278, 'hydrogen exfoliated graphene': 9279, 'highly expressed genes': 9280, 'homing endonuclease genes': 9281, 'high expression group': 9282, 'Kelvin force microscopy': 9283, 'knee flexion moment': 9284, 'ZnO nanorod arrays': 9285, 'Zip nucleic acids': 9286, 'hydride vapor phase epitaxy': 9287, 'high voltage paper electrophoresis': 9288, 'ZnO nanobelt film': 9289, 'zero net flux': 9290, 'electrochemically active surface area': 9291, 'effective absorption surface area': 9292, 'Asaro Tiller Grinfeld': 9293, 'anti thymocyte globulin': 9294, 'azobenzene triazole glutamate': 9295, 'Nano imprint lithography': 9296, 'near isogenic lines': 9297, 'Normalized Information Loss': 9298, 'not in labour': 9299, 'non ischaemic limb': 9300, 'conventional polycrystalline ingot iron': 9301, 'cell proliferation inhibition index': 9302, 'ultrasonic force microscopy': 9303, 'Universal Feature Method': 9304, 'unmethylated full mutation': 9305, 'Shanghai Advanced Research Institute': 9306, 'severe acute respiratory infection': 9307, 'Severe Acute Respiratory Illness': 9308, 'Simulated Attack Reaction Index': 9309, 'single quantum well': 9310, 'Suo Quan Wan': 9311, 'buried triple gate': 9312, 'brain training group': 9313, 'benign thyroid goiter': 9314, 'quantum dot molecules': 9315, 'Qinling Daba Mountain': 9316, 'intermediate band solar cell': 9317, 'International Barley Sequencing Consortium': 9318, 'green wood density': 9319, 'glucan water dikinase': 9320, 'imported natural gas': 9321, 'interneuron network gamma': 9322, 'vortex disengager stripper': 9323, 'variable deletion sites': 9324, 'vitamin D sufficient': 9325, 'Vero Dog SLAM': 9326, 'venous disability score': 9327, 'Maasai Mara National Reserve': 9328, 'Monthly malaria notification rates': 9329, 'Participatory Response Identification Matrix': 9330, 'Poisson Regression Insertion Model': 9331, 'pseudo forward equation': 9332, 'Pomegranate fruit extract': 9333, 'Puerariae flower extract': 9334, 'Plaque forming efficiency': 9335, 'Passive force enhancement': 9336, 'National Drought Management Authority': 9337, 'Nano Differential Mobility Analyzer': 9338, 'Spider screw anchorage system®': 9339, 'single sided amplitude spectrum': 9340, 'pair distances distribution function': 9341, 'persistent DNA damage foci': 9342, 'transverse sagittal maxillary expander': 9343, 'tissue 
specific maximal expression': 9344, 'tyrosine kinase domain': 9345, 'transmission Kikuchi diffraction': 9346, 'Mind Wandering Questionnaire': 9347, 'Munich Wrist Questionnaire': 9348, 'lower convective zone': 9349, 'local climate zone': 9350, 'Cambridge Crystallographic Data Centre': 9351, 'coiled coil domain containing': 9352, 'bond valence sum': 9353, 'Bayesian variable selection': 9354, 'Bioresorbable vascular scaffolds': 9355, 'bait region domain': 9356, 'Bovine respiratory disease': 9357, 'Brown Ring Disease': 9358, 'diffraction weighted dose': 9359, 'Distance Weighted Discrimination': 9360, 'Polymerase Incomplete Primer Extension': 9361, 'Protein Interaction Prediction Engine': 9362, 'Fragments of Life': 9363, 'Forest Of Life': 9364, 'vitamin K dependent': 9365, 'V447H K484A D517A': 9366, 'Synkinesis Assessment Questionnaire': 9367, 'Seattle Angina Questionnaire': 9368, 'Safety Attitudes Questionnaire': 9369, 'self administered questionnaires': 9370, 'Self Audit Questionnaire': 9371, 'normal white matter': 9372, 'New World Monkey': 9373, 'nonlesioned white matter': 9374, 'neuronal intranuclear inclusions': 9375, 'Normalized ion intensity': 9376, 'Nuclear Irregularity Index': 9377, 'integration host factor': 9378, 'instantaneous heart frequency': 9379, 'bioluminescence resonance energy transfer': 9380, 'Bomb Risk Elicitation Task': 9381, 'minimally invasive subpial tonsillectomy': 9382, 'Montreal Imaging Stress Test': 9383, 'microbial in silico typing': 9384, 'argyrophilic grain disease': 9385, 'Amoebic Gill Disease': 9386, 'Autism Genetic Database': 9387, 'agar gel diffusion': 9388, 'wheat germ oil': 9389, 'World Gastroenterology Organisation': 9390, 'Sporadic progressive muscular atrophy': 9391, 'S phenyl mercapturic acid': 9392, 'myelin protein zero': 9393, 'medial proliferation zone': 9394, 'Medulloblastoma Advanced Genomics International Consortium': 9395, 'Multiparent Advanced Generation Inter Cross': 9396, 'German Glioma Network': 9397, 'ground glass nodule': 9398, 'graphical Gaussian networks': 9399, 'primary age related tauopathy': 9400, 'particle aggregate reconstruction technique': 9401, 'lower motor neurons': 9402, 'logarithmic mean normalization': 9403, 'minimal perceptible clinical improvement': 9404, 'magnetic percutaneous coronary intervention': 9405, 'bilateral carotid artery stenosis': 9406, 'British Columbia Ambulance Service': 9407, 'articular epiphyseal cartilage complex': 9408, 'American European Consensus Conference': 9409, 'Fixed flexion view': 9410, 'fraction free volume': 9411, 'flex fuel vehicle': 9412, 'fast frequency variability': 9413, 'feline foamy virus': 9414, 'Mayo elbow performance score': 9415, 'Medical Expenditure Panel Survey': 9416, 'Signature Personalized Patient Care': 9417, 'Sensitivity Partial Pearson Correlation': 9418, 'Kamuzu Central Hospital': 9419, 'Kilifi County Hospital': 9420, 'bucket handle meniscal tear': 9421, 'betaine homocysteine methyl transferase': 9422, 'Canine cognitive dysfunction syndrome': 9423, 'consensus coding DNA sequence': 9424, 'Therapeutic Goals Management': 9425, 'tumor grading metastasis': 9426, 'urine dug screening': 9427, 'ultra deep sequencing': 9428, 'urban downstream site': 9429, 'unscheduled DNA synthesis': 9430, 'Krebs Ringer HEPES': 9431, 'Krebs Ringers Henseleit': 9432, 'body figure egocentric': 9433, 'Best fitting ellipse': 9434, 'berberry fruit extract': 9435, 'event related lateralized': 9436, 'expanded rossete leaves': 9437, 'drug related deaths': 9438, 'damage recognition domain': 9439, 'DOPA 
RESPONSIVE DYSTONIA': 9440, 'DED recruiting domain': 9441, 'most popular price category': 9442, 'multi pixel photon counter': 9443, 'anode interface layer': 9444, 'abductor indicus longus': 9445, 'acid insoluble lignin': 9446, 'advanced intercross lines': 9447, 'non pop out': 9448, 'natural palm olein': 9449, 'most superior peripheral shadow': 9450, 'multiple source probe scan': 9451, 'mixed hearing loss': 9452, 'Mental Health Literacy': 9453, 'maximal heart length': 9454, 'early infant diagnosis': 9455, 'emerging infectious diseases': 9456, 'Endothelial independent dilatation': 9457, 'embryo implantation dysfunction': 9458, 'excitation induced dephasing': 9459, 'direct laser writing': 9460, 'doubly labeled water': 9461, 'nudged elastic band': 9462, 'Nuclear Envelope Breakdown': 9463, 'New England Biolabs': 9464, 'negative energy balance': 9465, 'Coordinatively unsaturated ferrous': 9466, 'codon usage frequency': 9467, 'adipose tissue hypoxia': 9468, 'asymmetric transfer hydrogenation': 9469, 'A T heterozygotes': 9470, 'Autologous Tumor Homogenate': 9471, 'avian thymic hormone': 9472, 'Erratic extensive elongation': 9473, 'Educational EHR Environment': 9474, 'early embryo enriched': 9475, 'exercise energy expenditure': 9476, 'Gene ontology biological process': 9477, 'general odorant binding protein': 9478, 'Bogalusa Heart Study': 9479, 'before heat shock': 9480, 'beta hemolytic streptococci': 9481, 'Beck hopelessness scale': 9482, 'average maximum life span': 9483, 'Animal Movements Licensing System': 9484, 'nuclear autoantigenic sperm protein': 9485, 'Neocortex Adaptive Systems Pattern': 9486, 'Northern Arizona SNP pipeline': 9487, 'PRC regulated gene': 9488, 'peptide reactive group': 9489, 'prion related gene': 9490, 'pair rule gene': 9491, 'preliminary remediation goal': 9492, 'cap analysis gene expression': 9493, 'conjugative assembly genome engineering': 9494, 'Lamin B receptor': 9495, 'Ligand Binding Region': 9496, 'live birth rate': 9497, 'below Hayflick limit': 9498, 'Biodiversity Heritage Library': 9499, 'butyryl homoserine lactone': 9500, 'highly active anti retroviral therapy': 9501, 'Highly Active Anti Retroviral Treatment': 9502, 'Viremic non progressors': 9503, 'Venus NLS PEST': 9504, 'Viruá National Park': 9505, 'transmitted drug resistance mutations': 9506, 'time dependent response modulations': 9507, 'Numeric Pain Rating Scale': 9508, 'National Perinatal Reporting System': 9509, 'maximum torque per ampere': 9510, 'methoxy trifluoromethyl phenylacetic acid': 9511, 'Alcohol Policy Information System': 9512, 'abdominal pain intensity scale': 9513, 'Automated Phylogenetic Inference System': 9514, 'Percentage of genes': 9515, 'partial order graph': 9516, 'Part Of Graph': 9517, 'pancreatic enzyme replacement therapy': 9518, 'product enhanced reverse transcriptase': 9519, 'Cavender Farris Neyman': 9520, 'comb filtered noise': 9521, 'Hasegawa Kishino Yano': 9522, 'heat killed Yersinia': 9523, 'Heat killed yeast': 9524, 'Semi quantitative scoring': 9525, 'Self quantification systems': 9526, 'special quasirandom structure': 9527, 'Shareholder Quorum Subsampling': 9528, 'Unique Nucleotide Sequence': 9529, 'uncharacterised Neotyphodium species': 9530, 'Bloom Filter Trie': 9531, 'B fragilis toxin': 9532, 'back fat thickness': 9533, 'Basic Fitness Test': 9534, 'Succinct Data Structure Library': 9535, 'site directed spin labeling': 9536, 'asthma control questionnaire': 9537, 'aggregation caused quenching': 9538, 'Total Rhinoconjunctivitis Symptom Score': 9539, 'twin resolved shear 
stress': 9540, 'Health Economics Research Group': 9541, 'Human Eag Related Gene': 9542, 'vascular brain injury': 9543, 'ventral blood islands': 9544, 'Cognitive Abilities Screening Instrument': 9545, 'Computer assisted self interviewing': 9546, 'cell autonomous sex identity': 9547, 'national biodiversity network': 9548, 'National Broadband Network': 9549, 'Infectious Disease Research Institute': 9550, 'intensity dependent refractive index': 9551, 'Verbal Series Attention Test': 9552, 'very small aperture terminals': 9553, 'Physical Self Maintenance Scale': 9554, 'personalised self management system': 9555, 'Pakistan Social Marketing Survey': 9556, 'Swedish Alzheimer Treatment Study': 9557, 'spotting and tilt spreading': 9558, 'South African Triage Scale': 9559, 'average causal mediation effect': 9560, 'arginine catabolic mobile element': 9561, 'non negative matrix factorization': 9562, 'non nuclear membrane fraction': 9563, 'Personality and Total Health': 9564, 'Pathway Analysis Through Habitat': 9565, 'Needle Exchange Surveillance Initiative': 9566, 'Natural Environment Suitability Index': 9567, 'Scottish Natural Heritage': 9568, 'straight nano hairs': 9569, 'backwards compression waves': 9570, 'Behaviour Change Wheel': 9571, 'brain controlled wheelchair': 9572, 'fecal occult blood': 9573, 'Fiber optic bronchoscopy': 9574, 'positive control well': 9575, 'primary cell wall': 9576, 'plant cell wall': 9577, 'simplified bioaccessibility extraction test': 9578, 'stand by emergency treatment': 9579, 'surface plasmon resonance imaging': 9580, 'Solid Phase Reversible Immobilization': 9581, 'Instrumental isotopic fractionation': 9582, 'intracellular ice formation': 9583, 'biofilm growth intensity': 9584, 'Beijing Genomic Institute': 9585, 'Low Gradient Furnace': 9586, 'Liver growth factor': 9587, 'dynamic kinetic resolution': 9588, 'Drosophila kinin receptor': 9589, 'mass per length': 9590, 'micro porous layer': 9591, 'mean path length': 9592, 'middle parietal lobe': 9593, 'mate pair library': 9594, 'truncated mixed linker': 9595, 'Tenebrio molitor larvae': 9596, 'total marginal length': 9597, 'Total mandibular length': 9598, 'red wood ant': 9599, 'relative warps analysis': 9600, 'Russian wheat aphid': 9601, 'Horse Grimace Scale': 9602, 'hollow graphitic spheres': 9603, 'High grade serous': 9604, 'hand grip strength': 9605, 'agar well diffusion': 9606, 'alive with disease': 9607, 'Maize bushy stunt phytoplasma': 9608, 'Minimum biogas selling price': 9609, 'self assessment anhedonia scale': 9610, 'software as a service': 9611, 'official involuntary hospitalization': 9612, 'Opioid induced hyperalgesia': 9613, 'hematologic acute radiation syndrome': 9614, 'Hamilton Anxiety Rating Scale': 9615, 'Hyperornithinemia Hyperammonemia Homocitrullinuria': 9616, 'helix hairpin helix': 9617, 'Hand Hand Hand': 9618, 'Balanced Middle Weight': 9619, 'ball milled wood': 9620, 'Boechera Microsatellite Website': 9621, 'breast muscle weight': 9622, 'neck of femur': 9623, 'non ossifying fibroma': 9624, 'natural ovarian failure': 9625, 'dutch surgical colorectal audit': 9626, 'Deviated Septal Curve Angle': 9627, 'locally advanced rectal cancer': 9628, 'LCR associated remodelling complex': 9629, 'biotin protein ligase': 9630, 'blood pressure low': 9631, 'bisulphite PCR Luminex': 9632, 'Bayesian penalized likelihood': 9633, 'Duke Surgery Patient Safety': 9634, 'Delayed Sleep Phase Syndrome': 9635, 'Golden Syrian hamster': 9636, 'guided self help': 9637, 'genomic safe harbor': 9638, 'Groote Schuur Hospital': 9639, 
'Normalized prediction distribution errors': 9640, 'non parallel differentially expressed': 9641, 'standard scrapie cell assay': 9642, 'size spacing correlation approximation': 9643, 'single stranded conformation analysis': 9644, 'sulf hydryl variable': 9645, 'subjective haptic vertical': 9646, 'Organic grape juice': 9647, 'oesophago gastric junction': 9648, 'Xylem pressure potential': 9649, 'xylem pole pericycle': 9650, 'ice nucleation temperature': 9651, 'interactor normalization task': 9652, 'non fall dormant': 9653, 'normal fat diet': 9654, 'nerve fiber density': 9655, 'deer browsing susceptibility index': 9656, 'diffusion basis spectrum imaging': 9657, 'relative water deficit': 9658, 'relative wound density': 9659, 'relative wavenumber difference': 9660, 'quantile regression analysis': 9661, 'quantitative risk assessment': 9662, 'Life Table Response Experiment': 9663, 'low temperature response element': 9664, 'utilised agricultural area': 9665, 'unnatural amino acid': 9666, 'Pennine Water Group': 9667, 'post weaning gain': 9668, 'wet cell weight': 9669, 'Winter Cooled Water': 9670, 'lateral abdominal wall': 9671, 'Local Activation Waves': 9672, 'negative life events': 9673, 'non linear effects': 9674, 'neem leaf extract': 9675, 'neutral lipid emulsion': 9676, 'Fetus in fetu': 9677, 'formaldehyde induced fluorescence': 9678, 'vitello intestinal duct': 9679, 'Visual induced dizziness': 9680, 'multivariate environmental similarity surfaces': 9681, 'Mangled Extremity Severity Score': 9682, 'Implicit Relational Assessment Procedure': 9683, 'inter retrotransposon amplified polymorphism': 9684, 'upper urothelial cancer': 9685, 'Urban unit category': 9686, 'jabuticaba hydroalcoholic extract': 9687, 'juvenile hormone esterases': 9688, 'Estimated graft volume': 9689, 'estimated genetic value': 9690, 'actual graft weight': 9691, 'Abnormal Gait Window': 9692, 'key event relationships': 9693, 'kinetic energy release': 9694, 'Mixed connective tissue disease': 9695, 'medium chain TAG diet': 9696, 'Satsuma dwarf virus': 9697, 'significantly diverged variants': 9698, 'Zymosan induced arthritis': 9699, 'Zygote inhibition assay': 9700, 'operant behaviour therapy': 9701, 'optimized background therapy': 9702, 'Endoscopic sleeve gastroplasty': 9703, 'Extended Similarity Group': 9704, 'endovascular stent grafting': 9705, 'equal split game': 9706, 'arbitrary ELISA units': 9707, 'alternative exon usage': 9708, 'small leucine rich proteoglycan': 9709, 'Semantic Layered Research Platform': 9710, 'steroidal anti inflammatory drug': 9711, 'Smo auto inhibited domain': 9712, 'relative expression units': 9713, 'renewable energy use': 9714, 'in situ zymography': 9715, 'interfacial stress zone': 9716, 'Intermediate Suitability Zone': 9717, 'macrophage colony stimulating factor': 9718, 'multivariate concentric square field': 9719, 'excision repair cross complementing': 9720, 'External RNA Control Consortium': 9721, 'Aspirin Myocardial Infarction Study': 9722, 'Acute Medical Information System': 9723, 'apical membrane initiation site': 9724, 'Brigham Rheumatoid Arthritis Sequential Study': 9725, 'Biological Rhythms Analysis Software System': 9726, 'Fluorescence optical imaging': 9727, 'fidelity of implementation': 9728, 'force of infection': 9729, 'folds of induction': 9730, 'Fold of Increase': 9731, 'European Scleroderma Study Group': 9732, 'European Spondyloarthropathy Study Group': 9733, 'Paediatric Vasculitis Activity Score': 9734, 'post viral asthenic syndrome': 9735, 'pruritus visual analogue scale': 9736, 'Late 
onset neutropenia': 9737, 'lateral optic nerves': 9738, 'lengthening over nails': 9739, 'joint reaction force': 9740, 'Joint Reporting Form': 9741, 'Disease Extent Index': 9742, 'diffraction enhanced imaging': 9743, 'Differential Expression Index': 9744, 'kalliekrin international unit': 9745, 'Kampala International University': 9746, 'kallikrein inhibitory units': 9747, 'Diabetic+ Metformin+ Honey': 9748, 'Differential Methylation Hybridisation': 9749, 'differentially methylated hubs': 9750, 'Pooled fraction B': 9751, 'Posterior Fossa B': 9752, 'parallel fibered bone': 9753, 'C sativum essential oil': 9754, 'Cigarette Smoke Exposure Ontology': 9755, 'Mean neutrophil volume': 9756, 'multiple nucleotide variant': 9757, 'receptor interacting protein kinase': 9758, 'RPM1 induced protein kinase': 9759, 'Atrial effective refractory period': 9760, 'Auditory event related potential': 9761, 'relative expression software tool': 9762, 'replica exchange solute tempering': 9763, 'reference electrode standardization technique': 9764, 'noise onset time': 9765, 'non operative time': 9766, 'normal ovarian tissue': 9767, 'Rhodamine Labeled Bead': 9768, 'right lateral bending': 9769, 'reverse line blot': 9770, 'regular lysis buffer': 9771, 'multiple reward compliance protocol': 9772, 'Magnetic Resonance Cholangio Pancreatography': 9773, 'motor related cortical potential': 9774, 'arc cluster ion source': 9775, 'Automated Cellular Imaging System': 9776, 'Boundary Visibility Graph': 9777, 'B virgilioides group': 9778, 'Visibility Graph Analysis': 9779, 'voltage gated activation': 9780, 'intermittent energy restriction': 9781, 'individual error rate': 9782, 'Item Exposure Rate': 9783, 'core valence valence': 9784, 'citrus variegation virus': 9785, 'candidate vaccine viruses': 9786, 'bi layer graphene': 9787, 'Borneo linkage groups': 9788, 'multi layer graphene': 9789, 'Mixed linkage glucan': 9790, 'multiple locus genotype': 9791, 'major linkage group': 9792, 'transient negative ion': 9793, 'total nutrient intake': 9794, 'tibial nerve injury': 9795, 'secondary cell wall polymer': 9796, 'soluble cell wall proteins': 9797, 'electron impact mass spectrum': 9798, 'epithelioid inflammatory myofibroblastic sarcoma': 9799, 'wide band limit': 9800, 'whole brain lysates': 9801, 'flat band limit': 9802, 'Full Body Length': 9803, 'dynamic membrane aerated reactor': 9804, 'dry mass accumulation rate': 9805, 'transition metal oxide': 9806, 'Translational Medicine Ontology': 9807, 'self consistent reaction field': 9808, 'skin conductance response function': 9809, 'reduced density gradient': 9810, 'Resource Description Graph': 9811, 'recent duplicated gene': 9812, 'resource dilemma game': 9813, 'reference deviated genes': 9814, 'exon junction complex': 9815, 'excitatory junctional current': 9816, 'collision induced unfolding': 9817, 'chronic idiopathic urticaria': 9818, 'compulsive internet use': 9819, 'WAVE homology domain': 9820, 'wound healing disorders': 9821, 'winged helix domain': 9822, 'witch hazel distillate': 9823, 'divided physicochemical property scores': 9824, 'date palm pollen suspension': 9825, 'Logic Alignment Free': 9826, 'left anterior fascicle': 9827, 'Lesser Allele Fraction': 9828, 'Interaction Network Ontology': 9829, 'INNER NO OUTER': 9830, 'Mean nearest taxon distance': 9831, 'maximum non toxic dose': 9832, 'World Wide Web': 9833, 'What Where When': 9834, 'Video Plankton Recorder': 9835, 'Virtual Patient Record': 9836, 'VP64 p65 Rta': 9837, 'vancomycin per rectum': 9838, 'data derived network': 9839, 
'disease disease network': 9840, 'diazotroph derived N': 9841, 'deep dry needling': 9842, 'Differential Dependency Network': 9843, 'glutamate receptor interacting protein': 9844, 'gene retrocopy insertion polymorphism': 9845, 'normal human keratinocytes': 9846, 'null Hong Kong': 9847, 'Nin Hibino Kurachi': 9848, 'Web Feature Service': 9849, 'White Faced Suffolk': 9850, 'wheat flour sausage': 9851, 'prior knowledge network': 9852, 'protein kinase N': 9853, 'Break induced replication': 9854, 'baculovirus IAP repeat': 9855, 'European Bioinformatics Institute': 9856, 'Electron beam irradiation': 9857, 'Energy Biosciences Institute': 9858, 'early brain injury': 9859, 'neuroscience database gateway': 9860, 'Nearest Downstream Gene': 9861, 'gene sets coexpression analysis': 9862, 'Gene Set Control Analysis': 9863, 'Visualization Tool Kit': 9864, 'viral thymidine kinase': 9865, 'infra red gas monitor': 9866, 'immunity related GTPase M': 9867, 'steady state persistence length': 9868, 'sagittal spinous process length': 9869, 'component volume occupancy': 9870, 'combined ventricular output': 9871, 'enoyl CoA hydratase': 9872, 'energy conserving hydrogenase': 9873, 'epithelial cell height': 9874, 'Enemy Constraint Hypothesis': 9875, 'view tuned units': 9876, 'Visualization treatment unit': 9877, 'X conserved region': 9878, 'X chromosome reactivation': 9879, 'whole transcriptome library': 9880, 'whole tissue lysates': 9881, 'background linkage disequilibrium': 9882, 'between landmark distance': 9883, 'blue light damage': 9884, 'Expected Likelihood Weights': 9885, 'Excess lung water': 9886, 'expand leaf width': 9887, 'Metabolically Coupled Replicator System': 9888, 'Monte Carlo Reference State': 9889, 'biomedical entity search tool': 9890, 'Basketball Exercise Simulation Test': 9891, 'Berkeley Earth Surface Temperature': 9892, 'chronic variable mild stress': 9893, 'Cervical vertebral maturation stage': 9894, 'Childhood Trauma Questionnaire': 9895, 'Courage to Quit®': 9896, 'post heavy mitochondrial': 9897, 'peptide histidine methionine': 9898, 'Preventive Health Model': 9899, 'passive head movement': 9900, 'primary human melanocytes': 9901, 'high weight selected': 9902, 'hard white spring': 9903, 'Hazardous Waste Sites': 9904, 'family history positive': 9905, 'folate heparin PhA': 9906, 'Frankfort horizontal plane': 9907, 'family history negative': 9908, 'femoral head necrosis': 9909, 'Family Health Network': 9910, 'Glass Bottom Boat': 9911, 'Great Bahama Bank': 9912, 'Generalized Additive Mixed Models': 9913, 'general additive mixed model': 9914, 'quail embryo fibroblasts': 9915, 'Québec en Forme': 9916, 'inter trial interval': 9917, 'Immune Tolerance Induction': 9918, 'yolk syncytial layer': 9919, 'Yellow Stripe Like': 9920, 'distal visceral endoderm': 9921, 'developing vessel element': 9922, 'weighted Shimodaira Hasegawa': 9923, 'wheat straw hydrolysate': 9924, 'intra abdominal adipose tissue': 9925, 'initially appropriate antibiotic therapy': 9926, 'double drop out': 9927, 'data definition ontology': 9928, 'hypertrophic heart rat': 9929, 'Hampshire Health Record': 9930, 'Health Human Resources': 9931, 'anterior long gyrus': 9932, 'asparagine linked glycosylation': 9933, 'posterior long gyrus': 9934, 'poly L glutamine': 9935, 'phosphorus limited growth': 9936, 'human recombinant leptin': 9937, 'high regulatory load': 9938, 'backward suction wave': 9939, 'boreal summergreen woody': 9940, 'High Frequency Chest Compression': 9941, 'Hypothesis Free Clinical Cloning': 9942, 'porcine brain endothelial 
cells': 9943, 'primary bronchial epithelial cells': 9944, 'relative copy number': 9945, 'Rat cortical neurons': 9946, 'reservoir pressure boundary condition': 9947, 'receptor positive breast cancers': 9948, 'autogenous iliac bone': 9949, 'anaerobic induction buffer': 9950, 'wall pressure gradient': 9951, 'weight percentage gain': 9952, 'Western Pacific Gradient': 9953, 'morphological plaque severity index': 9954, 'maternal periconceptional systemic inflammation': 9955, 'Forced Oscillation Technique': 9956, 'flower opening time': 9957, 'fraction of total': 9958, 'wearable artificial kidney': 9959, 'wall associated kinases': 9960, 'Wistar Institute Susan Hayflick': 9961, 'wholemount in situ hybridisation': 9962, 'muscle fiber orientation': 9963, 'Molecular Function Ontology': 9964, 'Maximum fluid overload': 9965, 'mixed function oxidase': 9966, 'Model Format OWL': 9967, 'medial lateral stability index': 9968, 'modified Lobene Stain Index': 9969, 'Global Histogram Equalization': 9970, 'government health expenditure': 9971, 'global health education': 9972, 'normalized Hilbert transform': 9973, 'neoadjuvant hormone therapy': 9974, 'normal healthy tissue': 9975, 'contralateral healthy knee': 9976, 'CSK homologous kinase': 9977, 'Quantitative tissue phenotype': 9978, 'Qinghai Tibetan Plateau': 9979, 'normal step length': 9980, 'non specific lethal': 9981, 'nocturnal surface layers': 9982, 'nucleotide specificity loop': 9983, 'nasion sella line': 9984, 'thoracic wall movement': 9985, 'the warmest month': 9986, 'computerized imaging reference systems': 9987, 'Critical Incident Reporting Systems': 9988, 'Cumulative Illness Rating Scale': 9989, 'artificial pulse generation': 9990, 'admission plasma glucose': 9991, 'electrode variation coefficient': 9992, 'exhaled ventilator condensate': 9993, 'empty vector control': 9994, 'early visual cortex': 9995, 'episcleral vein cauterization': 9996, 'force position hybrid': 9997, 'fish protein hydrolysates': 9998, 'Flaxseed Protein Hydrolysates': 9999, 'Medical Image Repository Center': 10000, 'Medical Imaging Resource Center': 10001, 'DNA Damage Induced Sumoylation': 10002, 'DNA damage induced senescence': 10003, 'consensus furin cleavage sites': 10004, 'cell free culture supernatants': 10005, 'Optical Calibration Factor': 10006, 'opened corolla flower': 10007, 'occipital condyle fractures': 10008, 'pyridinium para toluene sulphonate': 10009, 'Percentage Per Thousand Spacers': 10010, 'potential primary tumor site': 10011, 'knee extension strength': 10012, 'Kawabata evaluation system': 10013, 'K edge subtraction': 10014, 'relative extractable water': 10015, 'relative ear weight': 10016, 'femoral bone marrow': 10017, 'Fragile Breakage Model': 10018, 'electrolyte metal insulator semiconductor': 10019, 'European MSM Internet survey': 10020, 'surface plasmon resonance spectroscopy': 10021, 'Spastic Paraplegia Rating Scale': 10022, 'single molecule real time': 10023, 'Sub Megabase Resolution Tiling': 10024, 'wireless gait analysis sensor': 10025, 'Whole genome association studies': 10026, 'Ketoacyl CoA synthase': 10027, 'Kenny Caffey syndrome': 10028, 'Kielder Central Site': 10029, 'Ketoacyl ACP reductase': 10030, 'Kinase Addiction Ranker': 10031, 'Malonyl CoA ACP transacylase': 10032, 'Mean Cecum Arrival Time': 10033, 'critical quality attributes': 10034, 'caffeoyl quinic acid': 10035, 'organic cell wall': 10036, 'outer cell wall': 10037, 'of cystic wall': 10038, 'Yellow Springs Instruments': 10039, 'Youth Strength Inventory': 10040, 'long range electron transfer': 
10041, 'luminescence resonance energy transfer': 10042, 'reaction wood inducing': 10043, 'ring width index': 10044, 'oleoyl ACP hydrolase': 10045, 'octa acid host': 10046, 'hardwood kraft lignin': 10047, 'head kidney leukocyte': 10048, 'multilevel composition fractionation process': 10049, 'mean circulatory filling pressure': 10050, 'steam exploded wheat straw': 10051, 'Standardised early warning system': 10052, 'RNAi assisted genome evolution': 10053, 'Rickettsiales amplified genetic element': 10054, 'Nearest Sequenced Taxon Index': 10055, 'necrotizing soft tissue infections': 10056, 'bagasse in natura': 10057, 'barcode index number': 10058, 'cork boiling wastewater': 10059, 'Conditional Baum Welch': 10060, 'carnation bacterial wilt': 10061, 'Zero time echo': 10062, 'zero thermal expansion': 10063, 'renewable jet fuel': 10064, 'Red jungle fowl': 10065, 'revolving algal biofilm': 10066, 'right Amazon bank': 10067, 'thickened cell walls': 10068, 'Tender coconut water': 10069, 'Total Carcass Weight': 10070, 'ethanol to jet': 10071, 'epithelial tight junction': 10072, 'sugar to jet': 10073, 'sino tubular junction': 10074, 'Therapeutic Intervention Scoring System': 10075, 'type I secretion system': 10076, 'Free oscillation rheometry': 10077, 'following offspring removal': 10078, 'false omission rate': 10079, 'egg white cystatin': 10080, 'Equilibrium water content': 10081, 'acute normovolemic hemodilution': 10082, 'antegonial notching height': 10083, 'upper gastrointestinal endoscopy': 10084, 'UDP glucose epimerase': 10085, 'urinary glucose excretion': 10086, 'opioid induced tolerance': 10087, 'oral iron therapy': 10088, 'Java Server Pages': 10089, 'Juba Sugar Project': 10090, 'ring hydroxylating oxygenase': 10091, 'residence hall offices': 10092, 'Regional Health Office': 10093, 'Biomolecular Interaction Network Database': 10094, 'bilirubin induced neurological damage': 10095, 'Minimum Robust Tag SNPs': 10096, 'Minimum Reaction Time Slating': 10097, 'Conserved Property Difference Locator': 10098, 'cumulative population doubling level': 10099, 'Poly Cystic Ovary Syndrome': 10100, 'Prostate Cancer Outcomes Study': 10101, 'Virtual Tissue Matrix': 10102, 'Variable temperature measurements': 10103, 'median intensity values': 10104, 'Movement Intensity Variation': 10105, 'Levin Robson Garnier': 10106, 'low risk group': 10107, 'leucine rich glycoprotein': 10108, 'Locus Reference Genomic': 10109, 'human experimental network': 10110, 'Home Enteral Nutrition': 10111, 'intensity based moderated T': 10112, 'integrative body mind training': 10113, 'univariate differential expression': 10114, 'Upper Digestive Endoscopy': 10115, 'mitochondrial cloud localization element': 10116, 'multi contrast late enhancement': 10117, 'weighted back projection': 10118, 'weight bearing pain': 10119, 'whole body plethysmography': 10120, 'Hinge Atlas Gold': 10121, 'high adherence group': 10122, 'h after germination': 10123, 'replica exchange Monte Carlo': 10124, 'Roadmap Epigenome Mapping Consortium': 10125, 'Olfactory Receptor Microarray Database': 10126, 'obesity related metabolic dysfunction': 10127, 'unique marker database': 10128, 'Unweighted Mean Deviation': 10129, 'multivalued logical model': 10130, 'mixed linear model': 10131, 'mouse liver mitochondria': 10132, 'maximum likelihood mapping': 10133, 'mature leaf margin': 10134, 'cross species conservation score': 10135, 'Clinical Skills Confidence Scale': 10136, 'Probability Descent QTL': 10137, 'Pain DETECT Questionnaire': 10138, 'Patient Dignity Question': 10139, 
'perceived deficits questionnaire': 10140, 'functional linkage network': 10141, 'final leaf number': 10142, 'fluorescence line narrowing': 10143, 'Average Distance Between Partition': 10144, 'ambulatory diastolic blood pressure': 10145, 'de militarised zone': 10146, 'dorsal marginal zone': 10147, 'redundant object group': 10148, 'radius of gyration': 10149, 'redundant interaction group': 10150, 'Resource Invocation Graph': 10151, 'global relative variance': 10152, 'gastric residual volumes': 10153, 'Groundnut rosette virus': 10154, 'Multi Parametric Sensitivity Analysis': 10155, 'manifestation profile simulation algorithm': 10156, 'Midot posture scale analyser': 10157, 'Hierarchical Binomial Neighborhood': 10158, 'hydrogen bond network': 10159, 'head bending nystagmus': 10160, 'Uncentered Pearson Correlation': 10161, 'ultra peripheral collisions': 10162, 'primitive neuro ectodermal tumors': 10163, 'Preoperative Neuroscience Education Tool': 10164, 'Bond Energy Algorithm': 10165, 'Benzene Ethanol Ammonia': 10166, 'bile esculin agar': 10167, 'Concept Unique Identifiers': 10168, 'continuous ultrasonic irrigation': 10169, 'Greedy Randomized Adaptive Search Procedure': 10170, 'GFP reconstitution across synaptic partners': 10171, 'Golgi reassembly and stacking protein': 10172, 'yeast extracellular proteins': 10173, 'yeast extract peptone': 10174, 'Investigation Design Graph': 10175, 'infectious disease gene': 10176, 'quartile deviation plot': 10177, 'Quality Doppler Profile': 10178, 'Qing dai powder': 10179, 'Learning Activity Management System': 10180, 'Locomotor Activity Monitoring system': 10181, 'light activated mesostructured silica': 10182, 'semi infinite linear programming': 10183, 'supported ionic liquid phase': 10184, 'Protein Identifier Cross Reference': 10185, 'Proportional interval cancer rate': 10186, 'Adaptive Information Disclosure Application': 10187, 'Advanced Image Data Analyzer': 10188, 'approximate information discriminant analysis': 10189, 'conserved motif searching areas': 10190, 'Chedoke McMaster Stroke Assessment': 10191, 'Consolidated Metropolitan Statistical Area': 10192, 'Spanned Firing Time Model': 10193, 'sensor fault tolerant module': 10194, 'K Spectral Clustering': 10195, 'king sized cigarettes': 10196, 'keratinocyte stem cells': 10197, 'Dirichlet Poisson Binomial': 10198, 'dental plaque bacteria': 10199, 'divergent polo box': 10200, 'Suberoyl anilide hydroxamic acid': 10201, 'Social and Health Assessment': 10202, 'Paired Localisation Correlation Profile': 10203, 'papain like cysteine protease': 10204, 'Max Inter Distance Deviation': 10205, 'Monoclonal immunoglobulin deposition disease': 10206, 'maximum stable set problem': 10207, 'Medicare Shared Savings Program': 10208, 'query first coordinate': 10209, 'quark flavour conserving': 10210, 'Worm Phenotype Ontology': 10211, 'within pair offspring': 10212, 'Microbicide Trials Network': 10213, 'Multiple Tissue Northern': 10214, 'medial terminal nucleus': 10215, 'Mesencephalic Trigeminal Nucleus': 10216, 'ordinary canonical correlation analysis': 10217, 'ovarian clear cell adenocarcinoma': 10218, 'kernelized spatial depth': 10219, 'Kinetoplastid specific domain': 10220, 'Kolmogorov Smirnov D': 10221, 'highest scoring probe set': 10222, 'high shikonins production system': 10223, 'lowest scoring probe set': 10224, 'low shikonins production system': 10225, 'Class Balanced Active Learning': 10226, 'competency based active learning': 10227, 'XML Schema Definition': 10228, 'XML Schema Document': 10229, 'Mean Jaccard Index': 
10230, 'max jump index': 10231, 'Markov Chain Ontology Analysis': 10232, 'Multiple Congenital Ocular Anomalies': 10233, 'Total Neighborhood Score': 10234, 'Trapped Neutrophil Syndrome': 10235, 'Tibial nerve stimulation': 10236, 'minimum unique substring': 10237, 'medically unexplained symptoms': 10238, 'Population specific expression analysis': 10239, 'Pathway Set Enrichment Analysis': 10240, 'portion size estimation aid': 10241, 'proximal sequence element A': 10242, 'fluorescence loss in photobleaching': 10243, 'Food Label Information Program': 10244, 'FLICE like inhibitory protein': 10245, 'Madison Metabolomics Consortium Database': 10246, 'mean monthly cost differences': 10247, 'protein kinase identification server': 10248, 'Published Kinase Inhibitor Set': 10249, 'Poisson Inverse Gaussian': 10250, 'Pine Island Glacier': 10251, 'normalized maximum likelihood': 10252, 'N methyl laudanosine': 10253, 'National Measurement Laboratory': 10254, 'nonmethylated MSAP loci': 10255, 'non metastatic like': 10256, 'DNA DNA hybridization': 10257, 'designated district hospitals': 10258, 'network perturbation effect score': 10259, 'Nano Pulse Electro Signaling': 10260, 'last common eukaryote ancestor': 10261, 'lateral center edge angle': 10262, 'UNICORE Rich Client': 10263, 'ultimate recognition complex': 10264, 'upper rectal cancer': 10265, 'non serial dynamical programming': 10266, 'Net State Domestic Product': 10267, 'paired end adapter trimming': 10268, 'peri epididymal adipose tissue': 10269, 'Paraffin embedded archival tissue': 10270, 'Severe Asthma Research Program': 10271, 'Streptomyces antibiotic regulatory proteins': 10272, 'Monte Carlo Significance Estimation': 10273, 'Monte Carlo standard errors': 10274, 'Flint Animal Cancer Center': 10275, 'Folic acid cholesterol chitosan': 10276, 'Multiple Segment Viterbi': 10277, 'Maize streak virus': 10278, 'multi site variations': 10279, 'mouse sarcoma virus': 10280, 'Protein Sequence Annotation tool': 10281, 'population structure association test': 10282, 'Semantic Link Association Prediction': 10283, 'superior labral anterior posterior': 10284, 'SRC Like Adaptor Protein': 10285, 'Network Enrichment Analysis Test': 10286, 'non exercise activity thermogenesis': 10287, 'Deacon Active Site Profiler': 10288, 'Diabetes Autoantibody Standardization Program': 10289, 'diluted alkali soluble pectins': 10290, 'Dynamic Genome Warping': 10291, 'damped gravity wave': 10292, 'Brain Zone Detector': 10293, 'Bushen Zhuangjin decoction': 10294, 'Jaccard similarity index': 10295, 'juvenile social investigation': 10296, 'joint spectral intensity': 10297, 'group infection network': 10298, 'gene interaction network': 10299, 'gap in noise': 10300, 'gastrointestinal intraepithelial neoplasia': 10301, 'gastro intestinal nematode': 10302, 'locus equivalence graph': 10303, 'lowly expressed genes': 10304, 'low expression group': 10305, 'Genome Scan Meta Analysis': 10306, 'Gene Set Matrix Analysis': 10307, 'Groupe Speciale Mobile Association': 10308, 'Ribo nucleic Acid': 10309, 'Research Natural Area': 10310, 'creatine kinase B': 10311, 'China Kadoorie Biobank': 10312, 'major urinary protein': 10313, 'Minimum unit pricing': 10314, 'White nose syndrome': 10315, 'Weighted Numerical Score': 10316, 'white noise sound': 10317, 'Early Life Institute': 10318, 'electron localizability indicator': 10319, 'antigen specific plasma/plasmablast cell': 10320, 'androgen sensitive prostate cancer': 10321, 'neuron restrictive silencer element': 10322, 'Neutron Resonance Spin Echo': 10323, 
'hepatic glucose utilization': 10324, 'Hyphal Growth Unit': 10325, 'periventricular gray zone': 10326, 'posterior growth zone': 10327, 'temporal image correlation spectroscopy': 10328, 'toxicant induced climate sensitivity': 10329, 'temporary immersion bioreactor': 10330, 'time in bed': 10331, 'p nitrophenyl butyrate': 10332, 'Private non beneficiary': 10333, 'popliteal nerve block': 10334, 'placental mesenchymal stem cells': 10335, 'post meiotic sex chromatin': 10336, 'terminal nucleotidyl transferase': 10337, 'to next treatment': 10338, 'trimetallic nitride template': 10339, 'tibial nerve transection': 10340, 'wheat germ extract': 10341, 'whole ganglionic eminence': 10342, 'lateral flow immuno assay': 10343, 'larval feeding inhibition assay': 10344, 'Serum EVs Depleted Media': 10345, 'SGN Expression Data Module': 10346, 'Functional vessel density': 10347, 'femoral vein diameter': 10348, 'fixable viability dye': 10349, 'Methionine Sulfoxide Reductase A': 10350, 'Methylation sensitive restriction analysis': 10351, 'in situ end labeling': 10352, 'Interpersonal Support Evaluation List': 10353, 'tubulin binding cofactor C': 10354, 'Tom Baker Cancer Centre': 10355, 'the bioactive compound content': 10356, 'normal oral keratinocytes': 10357, 'Novel oncogenic kinase': 10358, 'next of kin': 10359, 'Calf intestinal alkaline phosphatase': 10360, 'Clinical Information Access Program': 10361, 'normal whole brain': 10362, 'Non Weight Bearing': 10363, 'dominant negative effect': 10364, 'Decentralized nursing education': 10365, 'life years saved': 10366, 'low yield selection': 10367, 'lymph node involvement': 10368, 'lingual nerve impairment': 10369, 'nasopharyngeal epithelial hyperplasia': 10370, 'Near Experimental Hall': 10371, 'terminal end bud': 10372, 'transthoracic electrical bioimpedance': 10373, 'the end bud': 10374, 'Cancer Liver Italian Program': 10375, 'Corticotropin like intermediary peptide': 10376, 'Human wound fluids': 10377, 'Hilbert weighted frequency': 10378, 'Prostate tumor endothelial cells': 10379, 'proximal tubular epithelial cells': 10380, 'Breast tumor endothelial cells': 10381, 'Brain Tumor Epidemiology Consortium': 10382, 'Ontario Health Insurance Plan': 10383, 'Oral Health Impact Profile': 10384, 'normal ovarian surface epithelial': 10385, 'nasal obstruction symptom evaluation': 10386, 'laparoscopic radical hysterectomy': 10387, 'low relative humidity': 10388, 'liver receptor homolog': 10389, 'extent of disease': 10390, 'early onset disease': 10391, 'electric organ discharge': 10392, 'explosive ordnance disposal': 10393, 'every other day': 10394, 'anti cancer fusion peptide': 10395, 'adult caudal fin primordium': 10396, 'asymmetrically coupled fixed point': 10397, 'sentinel basin nodes': 10398, 'stochastic Boolean network': 10399, 'methionine restricted cysteine depleted': 10400, 'Microscopic residual chemoresistant disease': 10401, 'negative lymph node': 10402, 'neostigmine loaded nanofibers': 10403, 'Morphological Atherosclerotic Calcification Distribution': 10404, 'moving average convergence divergence': 10405, 'Jackson Heart Study': 10406, 'junior high school': 10407, 'Coronary artery revascularisation procedures': 10408, 'cardiac ankyrin repeat protein': 10409, 'carbonic anhydrase related protein': 10410, 'creatine phosphate kinase': 10411, 'Corey Pauling Koltun': 10412, 'proximal isovelocity surface area': 10413, 'prognosis in suspected angina': 10414, 'scaffold attachment factor B': 10415, 'Scaffold Associated Factor B': 10416, 'tricho rhino phalangeal syndromes': 
10417, 'tunable resistive pulse sensing': 10418, 'Shapiro Wilk W': 10419, 'silicon wire waveguides': 10420, 'oral hypoglycemic agents': 10421, 'oral hygiene advice': 10422, 'Tong Xin Luo': 10423, 'Tian Xian Liquid': 10424, 'Manchester Color Wheel': 10425, 'minimum channel width': 10426, 'mandibular cortical width': 10427, 'Vayasthapana Rasayana formulation': 10428, 'Vertical root fractures': 10429, 'volume reduction factor': 10430, 'Bitter Melon Juice': 10431, 'British Medical Journal': 10432, 'National Acupuncture Detoxification Association': 10433, 'Nitrobenzofurazan amino D alanine': 10434, 'Social Communication Questionnaire': 10435, 'Schizophrenia Caregiver Questionnaire': 10436, 'G wilfordii extract': 10437, 'genome wide exome': 10438, 'unripe pulp juice': 10439, 'uretero pelvic junction': 10440, 'pulmonary Yin deficiency': 10441, 'Positive youth development': 10442, 'step through passive avoidance': 10443, 'sensory texture profile analysis': 10444, 'Walnut residual protein': 10445, 'Water Reclamation Plant': 10446, 'vehicle control group': 10447, 'Vibrio cholerae ghosts': 10448, 'vegetative compatibility group': 10449, 'Annona muricata crude extract': 10450, 'average mediated causal effect': 10451, 'young control group': 10452, 'Yearly Contact Group': 10453, 'Non alcoholic Steato Hepatitis': 10454, 'non Abelian semiclassical Hamiltonian': 10455, 'multidimensional item response theory': 10456, 'Multidisciplinary intensive rehabilitation treatment': 10457, 'Kalanchoe crenata leaves': 10458, 'Kurozu concentrated liquid': 10459, 'Patient Orientated Eczema Measure': 10460, 'per oral endoscopic myotomy': 10461, 'protein oriented exon monikers': 10462, 'corticosteroid treated psoriatic skin': 10463, 'Central Transportation Planning Staff': 10464, 'vanillyl mandelic acid': 10465, 'voucher management agency': 10466, 'verbal mental age': 10467, 'body wall muscles': 10468, 'Bio Wet Mass': 10469, 'body worn monitors': 10470, 'posterior tuberal nucleus': 10471, 'Protein tyrosine nitration': 10472, 'pyramidal tract neuron': 10473, 'posterior tibial nerve': 10474, 'trilaminar yolk sac': 10475, 'tensile yield strength': 10476, 'horizontal cell layer': 10477, 'Hairy cell leukaemia': 10478, 'hydrophobic EMAP like protein': 10479, 'health education learning package': 10480, 'granulated metrial gland': 10481, 'Gaussian modified Gaussian': 10482, 'nasal radial vessel': 10483, 'Nutrient Reference Value': 10484, 'Seasonally Dry Tropical Forests': 10485, 'seasonally deciduous tropical forest': 10486, 'Forest Act Habitats': 10487, 'fumaryl acetoacetate hydrolase': 10488, 'fatty acid hydroxylase': 10489, 'Varroa destructor virus': 10490, 'Vibration Dose Value': 10491, 'ranked species occupancy curves': 10492, 'Rapid Survey on Children': 10493, 'general mixed yule coalescence': 10494, 'Generalized Mixed Yule Coalescent': 10495, 'diurnal animal walk': 10496, 'dopamine agonist withdrawal': 10497, 'dental apices width': 10498, 'ambulatory care sensitive conditions': 10499, 'Arabidopsis cell suspension cultures': 10500, 'Nodular adrenal hyperplasia': 10501, 'N Acyl hydrazone': 10502, 'near adult height': 10503, 'N acetyl histidine': 10504, 'N acetylated heparin': 10505, 'Northern Alberta Renal Program': 10506, 'neuropathy ataxia retinitis pigmentosa': 10507, 'Controlled Antenatal Thyroid Screening': 10508, 'chemical advanced template search': 10509, 'Predicted peptidoglycan binding protein': 10510, 'pro platelet basic protein': 10511, 'last bacterial common ancestor': 10512, 'lower body circulatory arrest': 
10513, 'upstream regulatory region': 10514, 'unit relative risk': 10515, 'urea reduction ratio': 10516, 'World Heritage Area': 10517, 'wound healing assays': 10518, 'bulge helix bulge': 10519, 'beta hydroxy butyrate': 10520, 'relaxin family loci': 10521, 'reasosn for living': 10522, 'Crustacean Hyperglycemic Hormone': 10523, 'congenital hyperinsulinemic hypoglycemia': 10524, 'Congenital hypogonadotropic hypogonadism': 10525, 'double HVR variants': 10526, 'duck hepatitis virus': 10527, 'GQ370556 Rio Torto': 10528, 'guppy reference transcriptome': 10529, 'North Western Africa': 10530, 'narrow width approximation': 10531, 'Principal Component Phenotypic Complexity': 10532, 'partial common principal component': 10533, 'Personal Care Products Council': 10534, 'primary species hypothesis': 10535, 'Pulmonary sclerosing hemangioma': 10536, 'post synchronization histogram': 10537, 'Paroxysmal sympathetic hyperactivity': 10538, 'perioperative surgical home': 10539, 'N terminal extension': 10540, 'new tumor event': 10541, 'neuropathy target esterase': 10542, 'negative thermal expansion': 10543, 'Non TE exp': 10544, 'Mean bootstrap values': 10545, 'myocardial blood volume': 10546, 'microvascular blood volume': 10547, 'mean blood velocity': 10548, 'Mosquito borne viruses': 10549, 'drosophila A virus': 10550, 'days after véraison': 10551, 'distal appendage vesicles': 10552, 'osteocyte lacunar density': 10553, 'obstructive lung diseases': 10554, 'other lung disease': 10555, 'oral lichenoid disease': 10556, 'graded autocatalysis replication domain': 10557, 'Genetic Algorithm Recombination Detection': 10558, 'Genomic Allergen Rapid Detection': 10559, 'Ventro medial neuron': 10560, 'vocal motor nucleus': 10561, 'vibrating mesh nebulizer': 10562, 'predicted niche occupancy': 10563, 'poor neurological outcome': 10564, 'nuclear VCP like': 10565, 'National Veterinary Laboratory': 10566, 'Northern Victoria Land': 10567, 'parallel knock out': 10568, 'palm kernel oil': 10569, 'Indo West Pacific': 10570, 'ice water path': 10571, 'Near Eastern Neolithic': 10572, 'non endemic normals': 10573, 'Consultation and Relational Empathy': 10574, 'Cholesterol and Recurrent Events': 10575, 'colon available raspberry extract': 10576, 'Early Dementia Questionnaire': 10577, 'extensor digiti quinti': 10578, 'Fisk fatigue severity scale': 10579, 'Fluid flow shear stress': 10580, 'non erosive reflux disease': 10581, 'Net Expected Regret Difference': 10582, 'NSAIDs exacerbated respiratory disease': 10583, 'Peptic ulcer disease': 10584, 'pocket US device': 10585, 'self expandable plastic stent': 10586, 'subdural evacuating port system': 10587, 'non polypoid growth': 10588, 'no program group': 10589, 'normal pressure glaucoma': 10590, 'nostril pressure gradient': 10591, 'hepatic artery pulsatility index': 10592, 'Heredity and Phenotype Intervention': 10593, 'highly aggressive proliferating immortalized': 10594, 'Chronic Liver Disease Questionnaire': 10595, 'Cleveland Lloyd Dinosaur Quarry': 10596, 'percutaneous transhepatic cholangial drainage': 10597, 'Pontine Tegmental Cap Dysplasia': 10598, 'fast twitch glycolytic': 10599, 'femoral trochlear groove': 10600, 'Wexner constipation scale': 10601, 'Whole Cell Stain': 10602, 'warm condition study': 10603, 'Wheelchair Convoy System': 10604, 'Weighted Confidence Sharing': 10605, 'lactase phlorizin hydrolase': 10606, 'large private hospitals': 10607, 'lobster protein hydrolysates': 10608, 'Sinus wash fluids': 10609, 'small white follicles': 10610, 'Minimal detectable relative risk': 10611, 
'mammographic density reduction ratio': 10612, 'Common bile duct stone': 10613, 'capsular bag distension syndrome': 10614, 'Boston bowel prep scale': 10615, 'Boston Bowel Preparation Scale': 10616, 'body bends per second': 10617, 'Harvey Bradshaw Index': 10618, 'hypersexual behaviour inventory': 10619, 'Human blood indices': 10620, 'highly branched isoprenoid': 10621, 'Healthy Beverage Index': 10622, 'Valle del Belice': 10623, 'vertical diagonal band': 10624, 'Wald type test': 10625, 'Walk the Talk': 10626, 'work toward target': 10627, 'Web Thermo Tables': 10628, 'GeneSeek® Genomic Profiler': 10629, 'Global Geodynamics Project': 10630, 'flag leaf width': 10631, 'French Large White': 10632, 'History Weighting Algorithm': 10633, 'Hybrid Watershed Algorithm': 10634, 'hemlock woolly adelgid': 10635, 'initiation within intron': 10636, 'integrative weaning index': 10637, 'heavy chain homolog': 10638, 'High Cost Hospital': 10639, 'Whole Human Genome': 10640, 'whole hand grasp': 10641, 'western hunter gatherers': 10642, 'alternative transcription start sites': 10643, 'artificial turf surrogate surface': 10644, 'chromosomal expression index': 10645, 'Cost Efficiency Index': 10646, 'cumulative exposure index': 10647, 'cathode electrolyte interphase': 10648, 'differentiation independent genes': 10649, 'Diabetes in Germany': 10650, 'Ipsilateral Breast Tumor Recurrence': 10651, 'Ipsilateral breast tumor relapse': 10652, 'Unique Non Fragmented': 10653, 'used nuclear fuel': 10654, 'Linear After The Exponential': 10655, 'local average treatment effect': 10656, 'flagellar glycosylation island': 10657, 'Fungal Genome Initiative': 10658, 'Forage Genetics International': 10659, 'fresh gas inlet': 10660, 'disease gene network': 10661, 'downstream gene neighbourhood': 10662, 'deoxy galacto noeurostegine': 10663, 'dorsal giant neuron': 10664, 'Drosophila Genome Nexus': 10665, 'predicted exonic splicing enhancers': 10666, 'pomegranate ethanolic seed extract': 10667, 'Array Genomic Hybridization': 10668, 'Allegheny General Hospital': 10669, 'Support Vector Sampling technique': 10670, 'single vessel small thoractomy': 10671, 'neighborhood quality standard': 10672, 'non quaternary suppression': 10673, 'stem outer green capsule': 10674, 'Serine Oxidation Glycine Cleavage': 10675, 'streptococcal toxin shock syndrome': 10676, 'second trans splicing site': 10677, 'Simple Triage Scoring System': 10678, 'Post translational chemical medication': 10679, 'Pragmatic trellis coded modulation': 10680, 'large unassigned region': 10681, 'land use regression': 10682, 'Ortholog hit ratio': 10683, 'Occupational Hospitalisation Register': 10684, 'with null alleles': 10685, 'Western North America': 10686, 'Water nucleophilic attack': 10687, 'Multilevel Simultaneous Component Analysis': 10688, 'mesenchymal stem cell antigen': 10689, 'Sheffield RNAi Screening Facility': 10690, 'serine/arginine rich splicing factors': 10691, 'South Asian Indian female': 10692, 'significant acute ICP fluctuation': 10693, 'Targeted genomic enrichment': 10694, 'transient gene expression': 10695, 'non reference discrepancy': 10696, 'negative regulation domain': 10697, 'Neonatal respiratory distress': 10698, 'non reductive dissolution': 10699, 'Genome Wide Interaction Search': 10700, 'genome wide interaction study': 10701, 'mixed effects model evolution': 10702, 'minimal essential medium Eagle': 10703, 'Finding Informative Regulatory Elements': 10704, 'Fungal Infection Risk Evaluation': 10705, 'Fms intronic regulatory element': 10706, 'Bootstrap Inclusion 
Fraction': 10707, 'bromine impact factor': 10708, 'Institute Giannina Gaslini': 10709, 'indusium griseum glia': 10710, 'One End Anchor': 10711, 'obliquus externus abdominis': 10712, 'ovarian endometrioid adenocarcinoma': 10713, 'quadruplex forming potential': 10714, 'quota filling performance': 10715, 'Medicinal Plant Genomic Resource': 10716, 'Multi planar gradient recalled': 10717, 'Penn State University': 10718, 'primary site unknown': 10719, 'primary sampling unit': 10720, 'practical salinity units': 10721, 'Decision Tree Infection Scoring': 10722, 'Deep Towed Imaging System': 10723, 'embryonic stem cell neurogenesis': 10724, 'esophageal squamous cell neoplasia': 10725, 'Copy Number Analysis Methods': 10726, 'copy number analysis module': 10727, 'Paired End Low Error': 10728, 'Protein Energy Landscape Exploration': 10729, 'alcohol chill haze degree': 10730, 'Allegheny County Health Department': 10731, 'virus responsive gene': 10732, 'ventral respiratory group': 10733, 'vacuolar processing enzyme': 10734, 'virion producing episodes': 10735, 'common symbiotic signaling pathway': 10736, 'Cervical Self Sampling Program': 10737, 'female body wall': 10738, 'final body weight': 10739, 'chromosome segment sharing coefficients': 10740, 'Christian Social Services Commission': 10741, 'corneal stromal stem cells': 10742, 'Young Finns Study': 10743, 'youth friendly services': 10744, 'Unrooted Episode Clustering': 10745, 'uterine endometrioid carcinomas': 10746, 'urinary epithelial cells': 10747, 'Conditional maximum likelihood estimation': 10748, 'cis muconate lactonizing enzyme': 10749, 'Miniature Inverted–Repeat Transposable Element': 10750, 'miniature inverted transposon element': 10751, 'putative unique transcripts': 10752, 'proctored ultrasound training': 10753, 'Upper half mean': 10754, 'U2AF homology motif': 10755, 'heat killed bacteria': 10756, 'HyQue Knowledge Base': 10757, 'single transcript single exon': 10758, 'systemic therapy side effects': 10759, 'whole genome triplication': 10760, 'whole gut transit': 10761, 'ribosome coverage value': 10762, 'red cell volume': 10763, 'right cardinal vein': 10764, 'rubella containing vaccine': 10765, 'right colic vein': 10766, 'Home and Community Care': 10767, 'Hull Automatic Cough Counter': 10768, 'passive positioning alarm package': 10769, 'polymerase proofreading associated polyposis': 10770, 'Parents Plus Adolescents Programme': 10771, 'personalized physical activity prescription': 10772, 'Word Reading Threshold': 10773, 'word recognition threshold': 10774, 'West Ridge Troop': 10775, 'Basic Health Insurance Scheme': 10776, 'brain heart infusion supplemented': 10777, 'Ontario Wait Times Strategy': 10778, 'onsite wastewater treatment systems': 10779, 'Provincial Health Services Authority': 10780, 'Public Health Service Act': 10781, 'Health Solutions Wales': 10782, 'household stored water': 10783, 'burden of illness': 10784, 'bouton overlap index': 10785, 'BioBrick™ of interest': 10786, 'Workforce evidence based': 10787, 'weighted empirical Bayes': 10788, 'District Health Executive': 10789, 'delayed hyper enhancement': 10790, 'di hydro ethidium': 10791, 'lot quality assurance sampling': 10792, 'Lot Quality Assurance Survey': 10793, 'Value Based Purchasing': 10794, 'Villa Buen Pastor': 10795, 'vanilloid binding pocket': 10796, 'voie biliaire principale': 10797, 'Blood Sample Collection Problem': 10798, 'bilateral spastic cerebral palsy': 10799, 'British National Formulary': 10800, 'biological nitrogen fixation': 10801, 'International AIDS Vaccine 
Institute': 10802, 'Intra abdominal volume increment': 10803, 'medication related harm': 10804, 'magnetic resonance histology': 10805, 'Adherence Barriers Questionnaire': 10806, 'algorithm based qualitative': 10807, 'Multnomah Community Ability Scale': 10808, 'mast cell activation syndrome': 10809, 'African Vaccination Week': 10810, 'arterial vessel wall': 10811, 'universal health coverage': 10812, 'unsupervised hierarchical clustering': 10813, 'University Health Consortium': 10814, 'Patient Data Monitoring System': 10815, 'patient data management system': 10816, 'Clinical Readiness Consultation Tool': 10817, 'cluster randomised controlled trial': 10818, 'mouse mast cell proteases': 10819, 'multivariate minimum convex polygons': 10820, 'Candida albicans water soluble': 10821, 'Chicago Area Waterways System': 10822, 'fermented fish oil': 10823, 'functional foot orthoses': 10824, 'Indian Oscillation Index': 10825, 'Initial Outcome Index': 10826, 'Integrative Optical Imaging': 10827, 'inter onset intervals': 10828, 'North West Frontier Province': 10829, 'North Wyke Farm Platform': 10830, 'Ebola haemorrhagic fever': 10831, 'early head fold': 10832, 'excess heat factor': 10833, 'unique recombinant forms': 10834, 'Unsupervised random forest': 10835, 'viral load monitoring': 10836, 'ventral longitudinal muscles': 10837, 'ventral lateral midbrain': 10838, 'Easy Operating Pathogen Microarray': 10839, 'electro optic phase modulators': 10840, 'pulmonary non tuberculous mycobacterial': 10841, 'paired non tumor mucosa': 10842, 'vancomycin susceptible Enterococcus': 10843, 'vancomycin sensitive enterococci': 10844, 'variant set enrichment': 10845, 'Western Uttar Pradesh': 10846, 'Weibo User Pool': 10847, 'double locus variant': 10848, 'Differential lung ventilation': 10849, 'dorso latero ventral': 10850, 'unplanned care interruptions': 10851, 'Upper Cook Inlet': 10852, 'Wajir District Hospital': 10853, 'Wolf Downloaded Howling': 10854, 'Danish HIV Cohort Study': 10855, 'digital home care service': 10856, 'Western Pennsylvania Hospital': 10857, 'whey protein hydrolysate': 10858, 'extra medullary hematopoiesis': 10859, 'epithelio mesenchymal hinge': 10860, 'estimated mature height': 10861, 'early modern human': 10862, 'carbapenem resistant Escherichia coli': 10863, 'Colorado River Extensional Corridor': 10864, 'Tumor associated collagen signatures': 10865, 'total anterior circulation strokes': 10866, 'vestibular dark cells': 10867, 'Village Development Committee': 10868, 'Vimala Dermatological Centre': 10869, 'Complaint Score Questionnaire': 10870, 'Coping Strategies Questionnaire': 10871, 'peripartum pelvic girdle pain': 10872, 'Parasitic Plant Genome Project': 10873, 'Essential Newborn Care': 10874, 'Enteric neural crest': 10875, 'equivalent noise charge': 10876, 'Familial hemiplegic migraine': 10877, 'fat head minnows': 10878, 'ESSEN Stroke Risk Score': 10879, 'Extrapyramidal Symptom Rating Scale': 10880, 'induced preterm birth': 10881, 'Injury Prevention Briefing': 10882, 'isotonic phosphate buffer': 10883, 'Chronic Pain Risk Score': 10884, 'computerized patient record system': 10885, 'pulmonary vessel volume': 10886, 'peak venous velocity': 10887, 'international health elective': 10888, 'isometric handgrip exercise': 10889, 'patient comorbidity complexity level': 10890, 'Patient Clinical Complexity Level': 10891, 'socio cognitive career theory': 10892, 'Spatial Contextual Cueing Task': 10893, 'situational judgment tests': 10894, 'semantic judgment task': 10895, 'Downstream of Kinases': 10896, 
'dysplastic oral keratinocytes': 10897, 'Keratitis ichthyosis deafness': 10898, 'Kids Inpatient Database': 10899, 'kinase inducible domain': 10900, 'Kinase Inhibitory Domain': 10901, 'Long range epigenetic silencing': 10902, 'lateral root emergence site': 10903, 'gene set activity score': 10904, 'Gastroesophageal Symptom Assessment Score': 10905, 'inflamed lung volume': 10906, 'ischemia likelihood value': 10907, 'Independent lung ventilation': 10908, 'protein protein interaction affinity': 10909, 'Peptidyl Prolyl Isomerase A': 10910, 'protein phosphatase inhibition assay': 10911, 'Kinase Enrichment Analysis': 10912, 'Keldysh effective action': 10913, 'mitral valve opening': 10914, 'micro vascular obstruction': 10915, 'Montserrat Volcano Observatory': 10916, 'maximum volume overlap': 10917, 'Outline Error Distribution': 10918, 'organ equivalent dose': 10919, 'Ophthalmological Emergency Department': 10920, 'bone tracer uptake': 10921, 'Buhlmann titre unit': 10922, 'flat panel computed tomography': 10923, 'Formal Planned Client Teaching': 10924, 'Early Aberration Reporting System': 10925, 'Electronic Appetite Ratings System': 10926, 'East African Rift System': 10927, 'Localized Active Contour Model': 10928, 'Los Angeles County Museum': 10929, 'risk assessment questionnaire': 10930, 'Recovery Attitudes Questionnaire': 10931, 'chronic disease management system': 10932, 'Clinical Decision Making Style': 10933, 'clinical data management System': 10934, 'Mosoriot Medical Record System': 10935, 'microsomal metabolic reaction system': 10936, 'Fuzzy Association Rule Mining': 10937, 'free avian respiratory macrophages': 10938, 'Critical Care Information Systems': 10939, 'Charlson Comorbidity Index Score': 10940, 'statistical linkage key': 10941, 'simultaneous liver kidney': 10942, 'Master Linkage Key': 10943, 'Mixed Lineage Kinase': 10944, 'NIR illumination accessory': 10945, 'neuroleptic induced akathisia': 10946, 'Planned Care Improvement Programme': 10947, 'Pharmacy Computerized Inventory Program': 10948, 'total skeletal uptake': 10949, 'Technical Support Unit': 10950, 'team sampling unit': 10951, 'trophoblast STAT utron': 10952, 'San Antonio Health Study': 10953, 'sleep apnea hypopnea syndrome': 10954, 'restricted mean survival time': 10955, 'relaxed minimum spanning tree': 10956, 'Danish National Patient Registry': 10957, 'degree non preserving randomization': 10958, 'Respiratory Care Unit': 10959, 'red calibrated unit': 10960, 'relative color units': 10961, 'minimum biofilm eradication concentration': 10962, 'malignant bronchial epithelial cell': 10963, 'Vermicon identification technology': 10964, 'Visual Iteration Task': 10965, 'Visual Inquiry Toolkit': 10966, 'time dependent Cox model': 10967, 'tonsil derived conditioned medium': 10968, 'Events Per Variable': 10969, 'Ewald proximal volume': 10970, 'incremental net benefit': 10971, 'intercostal nerve block': 10972, 'thermostable direct hemolysin': 10973, 'the dominant hand': 10974, 'human brain microvascular endothelial': 10975, 'Human bone marrow endothelial': 10976, 'tail associated muralytic enzyme': 10977, 'tetramer associated magnetic enrichment': 10978, 'T alexandrinum methanolic extracts': 10979, 'plant functional groups': 10980, 'pulsed field gradients': 10981, 'microbial adhesion to hydrocarbon': 10982, 'mutant allele tumor heterogeneity': 10983, 'meprin and TRAF homology': 10984, 'Special Listeria Culture Collection': 10985, 'stem like cancer cells': 10986, 'variably expressed gene': 10987, 'ventral eversible gland': 10988, 'Standard 
Tube Agglutination Test': 10989, 'Standardized Total Average Toxicity': 10990, 'sub therapeutic antibiotic treatment': 10991, 'signal transduction and transcription': 10992, 'Ghent University Hospital': 10993, 'Gondar university hospital': 10994, 'neonatal meningitis E coli': 10995, 'normal mammary epithelial cells': 10996, 'chicken hemoglobin antimicrobial peptides': 10997, 'Cardiovascular health awareness program': 10998, 'days post hatch': 10999, 'd post harvest': 11000, 'deep peat heating': 11001, 'upstream regulatory element': 11002, 'U rich element': 11003, 'SCImago Journal Rank': 11004, 'Saint John River': 11005, 'medial branch block': 11006, 'Mature basal body': 11007, 'Martinique Black Belly': 11008, 'Vertebral endplate signal changes': 11009, 'vascular endothelial stem cell': 11010, 'oscillatory fluid flow': 11011, 'Object File Format': 11012, 'allogeneic blood transfusion': 11013, 'adjacent brain tissue': 11014, 'affective bias test': 11015, 'extension from NP': 11016, 'early frontal negativity': 11017, 'extra floral nectaries': 11018, 'radial neurodynamic test': 11019, 'rostral neural tube': 11020, 'Munich Shoulder Questionnaire': 11021, 'Medication Satisfaction Questionnaire': 11022, 'Matriculating Student Questionnaire': 11023, 'Migraine Specific Questionnaire': 11024, 'transoral atlantoaxial reduction plate': 11025, 'Tunku Abdul Rahman Park': 11026, 'Foraminal height index': 11027, 'Family Hardiness Index': 11028, 'atypical femoral fractures': 11029, 'Atrial Filling Fraction': 11030, 'aortic forward flow': 11031, 'area force fields': 11032, 'Angiomatoid fibrous histiocytoma': 11033, 'alpha/beta fold hydrolase': 11034, 'Anterior Facial Height': 11035, 'Diffuse idiopathic skeletal hyperostosis': 11036, 'dual in situ hybridization': 11037, 'femur upward dynamic sitting': 11038, 'formerly used defense site': 11039, 'Generalized Feedforward Networks': 11040, 'giant fiber neuron': 11041, 'impaired glomerular filtration rate': 11042, 'Insulin Growth Factor Receptor': 11043, 'infection related hospitalization': 11044, 'ischemic reactive hyperemia': 11045, 'Renal Epidemiology Information Network': 11046, 'Ramipril Efficacy in Nephropathy': 11047, 'Central Australian Rural Practitioners Association': 11048, 'complement activation related pseudo allergy': 11049, 'question prompt sheet': 11050, 'quantum phase slip': 11051, 'distance from voxel': 11052, 'deep femoral vein': 11053, 'tongue force variability': 11054, 'theft from vehicle': 11055, 'systolic blood viscosity': 11056, 'sac brood virus': 11057, 'symptomatic bacterial vaginosis': 11058, 'Early Posterior Negativity': 11059, 'Entity Pool Nodes': 11060, 'endogenous protein normalization': 11061, 'event processing network': 11062, 'root entry zone': 11063, 'ribosome exclusion zone': 11064, 'Emotional Lability Questionnaire': 11065, 'extended linear quadratic': 11066, 'inside out vesicles': 11067, 'inferior ophthalmic veins': 11068, 'inter occasion variability': 11069, 'main olfactory bulb': 11070, 'methane oxidizing bacteria': 11071, 'mixed outcome block': 11072, 'coincidence enhanced stochastic resonance': 11073, 'core environmental stress response': 11074, 'conduction electron spin resonance': 11075, 'valine amide antisera': 11076, 'vitamin A adequate': 11077, 'Orthogonal Distance Regression': 11078, 'optical density ratio': 11079, 'oxygen disappearance rate': 11080, 'oculomotor delayed response': 11081, 'object discrimination reversal': 11082, 'late frontal negativity': 11083, 'low frequency noise': 11084, 'spike time difference 
map': 11085, 'skipjack tuna dark muscle': 11086, 'Japanese sign language': 11087, 'Japanese sea lion': 11088, 'posterior pole asymmetry analysis': 11089, 'proximal phalangeal articular angle': 11090, 'Protein Pathway Array Analysis': 11091, 'relative flow volume': 11092, 'respiratory frequency variability': 11093, 'hand held fan': 11094, 'hypertensive heart failure': 11095, 'chronic suppurative otitis media': 11096, 'client specific outcome measures': 11097, 'neonatal care unit': 11098, 'non consanguineous unions': 11099, 'Child Behavior Check List': 11100, 'cutaneous B cell lymphoma': 11101, 'bladder volume capacity': 11102, 'boundary vector cells': 11103, 'presynaptic active zone': 11104, 'PIWI Argonaute Zwille': 11105, 'high throughput transcriptome sequencing': 11106, 'heat tolerance testing system': 11107, 'heavy ion irradiation': 11108, 'horizontal inequity index': 11109, 'HDL inflammatory index': 11110, 'transcription factor binding': 11111, 'Tracheobronchial foreign body': 11112, 'putative receptor kinase': 11113, 'PKC related kinase': 11114, 'violaxanthin de epoxidase': 11115, 'Visual Development Environment': 11116, 'native cell wall': 11117, 'nasal cavity width': 11118, 'primary germ tubes': 11119, 'Peltate glandular trichome': 11120, 'personal genomic testing': 11121, 'zeatin O glucosyltransferases': 11122, 'Z O glucoside': 11123, 'elongating opened buds': 11124, 'excess over Bliss': 11125, 'Eocene Oligocene boundary': 11126, 'high affinity transport system': 11127, 'Healthy Ageing Twin Study': 11128, 'Laser Pressure Catapult Microdissection': 11129, 'Longitudinal Partial Credit Model': 11130, 'proximal distal zones': 11131, 'PSD95 Dlg1 ZO1': 11132, 'Maximum middle bract length': 11133, 'monocot mannose binding lectin': 11134, 'binary destination vector': 11135, 'Borna disease virus': 11136, 'aromatic amino acid decarboxylase': 11137, 'A Acute Aortic Dissection': 11138, 'SOMATIC EMBRYOGENESIS RELATED KINASE': 11139, 'somatic embryogenesis receptor kinase': 11140, 'deep root weight': 11141, 'downstream river water': 11142, 'dealcoholized red wine': 11143, 'high volume instrument': 11144, 'heat vulnerability index': 11145, 'hollow viscus injury': 11146, 'Special Care Unit': 11147, 'Southern Cross University': 11148, 'synonymous codon usage': 11149, 'horizontal growth index': 11150, 'high glycemic index': 11151, 'midwife led ward': 11152, 'mean low water': 11153, 'Muhimbili National Hospital': 11154, 'metabolically non healthy': 11155, 'Helping Babies Breathe': 11156, 'hook basal body': 11157, 'pedigree parental asymmetry test': 11158, 'peri prostatic adipose tissue': 11159, 'adenylate kinase lid': 11160, 'A kurodai lectin': 11161, 'Panic Disorder Severity Scale': 11162, 'patient decision support systems': 11163, 'Maudsley Violence Questionnaire': 11164, 'mitral valve quantification': 11165, 'Fronto Temporal Lobar Degeneration': 11166, 'fronto temporal lobe dementia': 11167, 'Post Concussion Symptom Scale': 11168, 'Portable Cardiopulmonary Support System': 11169, 'prostate cancer specific survival': 11170, 'self rated health status': 11171, 'Saskatchewan Rural Health Study': 11172, 'Minimum Basic Data Set': 11173, 'Mekong Basin Disease Surveillance': 11174, 'Jingshan County Hospital': 11175, 'jellyfish collagen hydrolysate': 11176, 'serum blood glucose': 11177, 'Scutellaria baicalensis Georgi': 11178, 'annual average reduction rate': 11179, 'active avoidance response rate': 11180, 'amino acid reference ratio': 11181, 'computerized management information system': 11182, 'Central 
monitoring information system': 11183, 'Devon Active Villages Evaluation': 11184, 'differential affine velocity estimator': 11185, 'Resource Dependence Institutional Cooperation': 11186, 'reflection differential interference contrast': 11187, 'Maine Syracuse Longitudinal Study': 11188, 'molten salt liquefied structure': 11189, 'M spicata limonene synthase': 11190, 'Physical Activity Questionnaire': 11191, 'Peripheral Artery Questionnaire': 11192, 'intelligent physical exercise training': 11193, 'individual particle electron tomography': 11194, 'Upper East Region': 11195, 'urinary excretion rate': 11196, 'Gender Inequality Index': 11197, 'genomic instability index': 11198, 'genome integrity index': 11199, 'Health Experts and Research Team': 11200, 'Health Equity Assessment Response Tool': 11201, 'Two Stage Clonal Expasion': 11202, 'telomere sister chromatid exchange': 11203, 'Acute Care Enhanced Surveillance': 11204, 'and Calmness Evaluation Scale': 11205, 'Aravind Comprehensive Eye Survey': 11206, 'problematic Internet use': 11207, 'Pathological Internet use': 11208, 'continuous air monitoring stations': 11209, 'Child/Adolescent Anxiety Multimodal Study': 11210, 'Climate Anomaly Monitoring System': 11211, 'primary nasal epithelial cells': 11212, 'Predicted No Effect Concentration': 11213, 'Asthmatic non obese': 11214, 'adenosine N1 Oxide': 11215, 'cumulative expired volume': 11216, 'carp edema virus': 11217, 'cardiovascular emergency visits': 11218, 'California encephalitis virus': 11219, 'hypoxic ventilatory response': 11220, 'hyper variable regions': 11221, 'hepatic vascular resistance': 11222, 'Alternative Splicing Mutation Database': 11223, 'absolute standardized mean difference': 11224, 'false negative level': 11225, 'FMD National Laboratory': 11226, 'facial nerve line': 11227, 'Chowghat dwarf orange': 11228, 'cyclic dipeptide oxidase': 11229, 'green leaf volatiles': 11230, 'global loss volume': 11231, 'Specific primer unspliced': 11232, 'singles processing unit': 11233, 'Health Demographic Surveillance System': 11234, 'Hyperhidrosis Disease Severity Scale': 11235, 'High Stringency Hybridizations': 11236, 'Human simulated howls': 11237, 'minimal invasive surgery center': 11238, 'Motivational Interviewing Skills Code': 11239, 'Chandigarh yellow variety': 11240, 'Culex Y virus': 11241, 'universal disease biomarker': 11242, 'use dependent block': 11243, 'mean tidal expiratory flow': 11244, 'Medium Term Expenditure Framework': 11245, 'Addis Ababa University': 11246, 'Acute Assessment Unit': 11247, 'acute anterior uveitis': 11248, 'antioxidant activity unit': 11249, 'smear positive pulmonary tuberculosis': 11250, 'stochastically perturbed physical tendencies': 11251, 'urine microscopy analysis': 11252, 'UBAP1 MVB12 associated': 11253, 'Knowledge Based Potential': 11254, 'KIF1 binding protein': 11255, 'Entero colpo cysto defecography': 11256, 'exciton coupling circular dichroism': 11257, 'laparoscopic wedge resection': 11258, 'leaf weight ratio': 11259, 'Light Weight Robot': 11260, 'pelvic autonomic nerve preservation': 11261, 'PILR associating neural protein': 11262, 'protein interaction permutation analysis': 11263, 'putative invasive pulmonary aspergillosis': 11264, 'Human Infectome Network': 11265, 'Human Interaction Network': 11266, 'BIOBASE Knowledge Library': 11267, 'benign kidney lesions': 11268, 'glucose uptake rate': 11269, 'Glycyrrhiza uralensis roots': 11270, 'reference transcriptional network': 11271, 'retrograde tibial nail': 11272, 'reticular thalamic nucleus': 11273, 
'extensively self renewing erythroblasts': 11274, 'ER stress response element': 11275, 'Bayesian Variable Selection Algorithm': 11276, 'B variegata stem alcoholic': 11277, 'normalized uptake change': 11278, 'normal urothelial cell': 11279, 'Stress Induced Telomere Shortening': 11280, 'single incision thoracoscopic surgery': 11281, 'Elementary Flux Patterns': 11282, 'epididymal fat pad': 11283, 'evoked field potentials': 11284, 'Inter Cells Similarity Index': 11285, 'intra cytoplasmic sperm injection': 11286, 'Moment Independent Robustnesss Indicator': 11287, 'myocardial ischemia reperfusion injury': 11288, 'stratified random cross validation': 11289, 'superior right colic vein': 11290, 'Metropolis adjusted Langevin Algorithm': 11291, 'Metformin associated lactic acidosis': 11292, 'idiopathic detrusor overactivity': 11293, 'Infectious Disease Ontology': 11294, 'renal vein thrombosis': 11295, 'radical vaginal trachelectomy': 11296, 'root vascular tissue': 11297, 'reduced volume training': 11298, 'Scrapie Notifications Database': 11299, 'Sympathetic nerve discharge': 11300, 'sinus node dysfunction': 11301, 'Standard normal deviate': 11302, 'National Veterinary Services Laboratory': 11303, 'normalized vector sum length': 11304, 'Aortic valve prolapse': 11305, 'anthrax vaccine precipitated': 11306, 'Amplatzer Vascular Plug': 11307, 'porcine respiratory disease complex': 11308, 'Poverty Related Diseases College': 11309, 'Slaughterhouse Pleurisy Evaluation System': 11310, 'single pulse electrical stimulation': 11311, 'short segment pedicle instrumentation': 11312, 'succinylated soy protein isolate': 11313, 'Steady state plasma insulin': 11314, 'pressure sensitive walkways': 11315, 'public supply well': 11316, 'plant specific weight': 11317, 'Sixth National Population Census': 11318, 'substantia nigra pars compacta': 11319, 'Virtual Water Maze': 11320, 'ventral white matter': 11321, 'Visual working memory': 11322, 'native motor zones': 11323, 'nitrite maximum zone': 11324, 'Auditory verbal hallucinations': 11325, 'anterior vertebral height': 11326, 'endoscopic vein harvesting': 11327, 'eucapnic voluntary hyperpnea': 11328, 'chest wall compression': 11329, 'conventional wound care': 11330, 'gamma glutamyl hydrolase': 11331, 'ground glass hepatocytes': 11332, 'Guangdong General Hospital': 11333, 'extreme limiting dilution analysis': 11334, 'Extreme Limiting Dilution Assay': 11335, 'cancer associated endothelial cells': 11336, 'Coronary Artery Endothelial Cells': 11337, 'criminal justice system': 11338, 'Cormack Jolly Seber': 11339, 'test wound dressing': 11340, 'total work done': 11341, 'Total Wellbeing Diet': 11342, 'human fetal brain': 11343, 'high fat butter': 11344, 'Combined small cell lung cancer': 11345, 'cancer stem cell like cells': 11346, 'single stranded nucleic acid': 11347, 'Skin sympathetic nerve activity': 11348, 'benign notochordal cell tumor': 11349, 'Boron neutron capture therapy': 11350, 'human universal reference': 11351, 'Heisenberg Uncertainty Relationship': 11352, 'normal B lymphocytes': 11353, 'non bound like': 11354, 'nocturnal boundary layer': 11355, 'normal body loading': 11356, 'gastric dilatation volvulus': 11357, 'Goal directed Value': 11358, 'DNA integrity index': 11359, 'dietary inflammatory index': 11360, 'Derwent Innovation Index': 11361, 'Percutaneous hepatic perfusion': 11362, 'presynaptic homeostatic plasticity': 11363, 'percutaneous heart pump': 11364, 'real world evidence': 11365, 'real world environment': 11366, 'intra ventricular gradients': 11367, 'in vitro 
grown': 11368, 'Peak positive strain rate': 11369, 'pseudo profile score regression': 11370, 'effective regurgitant orifice': 11371, 'expected response outcome': 11372, 'trans oesophageal echocardiography': 11373, 'TAT ODD EGFP': 11374, 'TARGET OF EAT': 11375, 'impar umbilical artery': 11376, 'instantaneous unidirectional admixture': 11377, 'Spinal epidural hematoma': 11378, 'soluble epoxide hydrolase': 11379, 'simple endometrial hyperplasia': 11380, 'Staphylococcal enterotoxin H': 11381, 'Gastro intestinal stromal tumours': 11382, 'grid inhomogeneous solvation theory': 11383, 'kissing wire technique': 11384, 'Kruskal Wallis Test': 11385, 'intra cranial self stimulation': 11386, 'International Carotid Stenting Study': 11387, 'dynein dynactin BICD2N': 11388, 'Diphenyl dimethyl bicarboxylate': 11389, 'Damaged DNA binding': 11390, 'additional strand catalytic E': 11391, 'ammonium sulphate crude extract': 11392, 'Tissue Culture Poly Styrone': 11393, 'tissue culture plate surfaces': 11394, 'Novel E3 ligase': 11395, 'net energy lactation': 11396, 'newly excysted larvae': 11397, 'signal transducing adaptor molecule': 11398, 'Senior Technology Acceptance Model': 11399, 'short term associative memory': 11400, 'universal minicircle sequence': 11401, 'unidentified morpho species': 11402, 'super cyan fluorescent protein': 11403, 'soluble cocoa fiber product': 11404, 'active regulator of SIRT1': 11405, 'AGE RAGE oxidative stress': 11406, 'sphingosine kinase inhibitor': 11407, 'South Kawishiwi intrusion': 11408, 'herpes virus entry mediator': 11409, 'high voltage electron microscopy': 11410, 'Bladder urothelial carcinoma': 11411, 'blood urea clearance': 11412, 'human calvarial osteoblasts': 11413, 'High cut off': 11414, 'Heme Copper Oxidase': 11415, 'half center oscillator': 11416, 'unit high doses': 11417, 'Ultra high Density': 11418, 'C terminal helix': 11419, 'carboxy terminal homology': 11420, 'WAVE regulatory complex': 11421, 'whole region crossover': 11422, 'water retention curve': 11423, 'water retaining capacity': 11424, 'lateral vestibular nucleus': 11425, 'lateral visual network': 11426, 'nucleo olivary inhibition': 11427, 'Neurite outgrowth index': 11428, 'species auto correlation function': 11429, 'soft agar colony formation': 11430, 'photoelectron kinetic energy': 11431, 'Palm Kernel Extract': 11432, 'Plasmon induced charge separation': 11433, 'Purdue Improved Crop Storage': 11434, 'partial androgen insensitivity syndrome': 11435, 'Perinatal arterial ischemic stroke': 11436, 'Welsh Institute of Chiropractic': 11437, 'water insoluble organic carbon': 11438, 'low intensity statin treatment': 11439, 'Loughborough Intermittent Shuttle Test': 11440, 'aryl hydrocarbon receptor repressor': 11441, 'Ambulatory heart rate range': 11442, 'nodular granulomatous episcleritis': 11443, 'native genome equivalents': 11444, 'cytosine guanine guanine': 11445, 'concentration gradient generator': 11446, 'chicken gamma globulin': 11447, 'Microbiologically confirmed pneumococcal pneumonia': 11448, 'Monte Carlo Permutation Procedure': 11449, 'arbitrary relative unit': 11450, 'aspirin response unit': 11451, 'tensor veli palatini': 11452, 'Twin Volcano Plot': 11453, 'ISAC Standardized Units': 11454, 'inflorescence sympodial units': 11455, 'Ang receptor NEP inhibitor': 11456, 'angiotensin receptor neprilysin inhibitor': 11457, 'Avoidance and Dietary Restrictions': 11458, 'Amino Acid Deprivation Resistant': 11459, 'maximum therapeutic plasma concentration': 11460, 'multi threshold permutation correction': 11461, 
'small incision cataract surgery': 11462, 'Sparse Inverse Covariance Selection': 11463, 'metabolite identification carbon efficiency': 11464, 'Mesothelial/monocytic incidental cardiac excrescence': 11465, 'mobile insertion cassette element': 11466, 'Aggregated gamma globulin': 11467, 'Australian Grains Genebank': 11468, 'West Central North Pacific': 11469, 'Wind Cave National Park': 11470, 'cuff leak volume': 11471, 'centro lobular vein': 11472, 'customer lifetime value': 11473, 'lactate glucose index': 11474, 'low glycaemic index': 11475, 'Local gyrification index': 11476, 'lectin glycan interaction': 11477, 'low grade inflammation': 11478, 'invasive bronchial pulmonary aspergillosis': 11479, 'Intersectionality Based Policy Analysis': 11480, 'post traumatic stress syndrome': 11481, 'Post traumatic Symptom Scale': 11482, 'fecal wet weight': 11483, 'fallow wheat wheat': 11484, 'colour density spectral array': 11485, 'crystallization driven self assembly': 11486, 'Score to Door Time': 11487, 'somatosensory temporal discrimination threshold': 11488, 'lung water index': 11489, 'lung weight index': 11490, 'only positive galactomannan': 11491, 'optic pathway glioma': 11492, 'acute kidney disease': 11493, 'avian keratin disorder': 11494, 'vitamin D deficiency': 11495, 'Voronoi deformation density': 11496, 'Croatian Medical Journal': 11497, 'counter movement jump': 11498, 'cortical medullary junction': 11499, 'optical surface monitoring system': 11500, 'optic spinal multiple sclerosis': 11501, 'Cysteine Proline Proline Cysteine': 11502, 'Canadian Public Policy Collection': 11503, 'Cysteine Proline Serine Cysteine': 11504, 'Consumer Product Safety Commission': 11505, 'Fawn Hooded Hypertensive': 11506, 'Familial hypocalciuric hypercalcaemia': 11507, 'short lived effector cells': 11508, 'Saint Louis Exploratory Cohort': 11509, 'Lambert Eaton Myasthenic Syndrome': 11510, 'lower extremities motor scale': 11511, 'CENP A targeting domain': 11512, 'constant absolute target direction': 11513, 'Oral Mucositis Assessment Scale': 11514, 'opsoclonus myoclonus ataxia syndrome': 11515, 'ontology request broker': 11516, 'Oueme River Basin': 11517, 'origin recognition box': 11518, 'true vertical line': 11519, 'Trametes villosa laccase': 11520, 'Cambridge Protein Trap Insertion': 11521, 'carnitine palmitoyl transferase I': 11522, 'random plasma glucose': 11523, 'ribosomal protein gene': 11524, 'research project grant': 11525, 'heart failure hospitalisation': 11526, 'Hereditary Footpad Hyperkeratosis': 11527, 'pulmonary adenoid cystic carcinoma': 11528, 'Pancreatic acinar cell carcinoma': 11529, 'International Consensus Guideline': 11530, 'impaired cognition group': 11531, 'upstream control element': 11532, 'Ubiquitin conjugating enzyme': 11533, 'Acquired computerized image analysis': 11534, 'and collagen induced arthritis': 11535, 'cranial subcutaneous adipose tissue': 11536, 'collision stimulated abortive termination': 11537, 'Listening Inventory for Education': 11538, 'Laser induced fluorescence examination': 11539, 'Language Independent Functional Evaluation': 11540, 'Million Dollar Bombie': 11541, 'Mallory Denk bodies': 11542, 'methyl DNA binding': 11543, 'average filament length': 11544, 'alcoholic fatty liver': 11545, 'astrocyte feeder layers': 11546, 'kidney somatic index': 11547, 'knowledge sharing index': 11548, 'Lord Howe Island': 11549, 'Local Homogeneity Index': 11550, 'warmed fertilized irrigated': 11551, 'Water for Injection': 11552, 'Smithsonian Environmental Research Center': 11553, 'Sustainable 
Emergency Referral Care': 11554, 'Light tunnel wall': 11555, 'largest transverse width': 11556, 'southern Jeju Island': 11557, 'San Juan Islands': 11558, 'Methot Isaac Kidd': 11559, 'myo inositol kinase': 11560, 'hind foot length': 11561, 'high fructose liquid': 11562, 'hepatic focal lesion': 11563, 'Average Score Per Taxon': 11564, 'automatic single particle tracking': 11565, 'Coral Reef Watch': 11566, 'Comparative RNA Web': 11567, 'inner spiral bundle': 11568, 'information seeking behavior': 11569, 'Isua Supracrustal Belt': 11570, 'Vermont Oxford Network': 11571, 'ventral optic nerves': 11572, 'sister chromatid junction': 11573, 'squamo columnar junction': 11574, 'Global Natural Product Social': 11575, 'global network perturbation score': 11576, 'tumorsphere forming units': 11577, 'total fibre unit': 11578, 'adult intensive care unit': 11579, 'aspirin induced chronic urticaria': 11580, 'vestibular efferent neurons': 11581, 'valence electron number': 11582, 'von Economo neurones': 11583, 'spinal bulbar muscular atrophy': 11584, 'S benzyl mercapturic acid': 11585, 'voluntary wheel running': 11586, 'Vessel Wall Ratio': 11587, 'own gender bias': 11588, 'Oregon Green BAPTA': 11589, 'reduced form response surface': 11590, 'Random Forests Relapse Score': 11591, 'masculinisation programming window': 11592, 'mussel processing wastewaters': 11593, 'mismanaged plastic waste': 11594, 'maximum pronotum width': 11595, 'weighted road density': 11596, 'weighted rank difference': 11597, 'indoor environmental quality': 11598, 'Involvement Evaluation Questionnaire': 11599, 'multiple path particle dosimetry': 11600, 'median pairwise patristic distance': 11601, 'harmful algal bloom': 11602, 'high affinity binders': 11603, 'hyaluronic acid binding peptide': 11604, 'Hospital acquired bacterial pneumonia': 11605, 'dust equivalent quantity': 11606, 'dynamic enhancement quotient': 11607, 'environmental quality standards': 11608, 'embedding quantum simulator': 11609, 'prenatal maternal stress exposure': 11610, 'Predictive mean squared error': 11611, 'Seychelles Child Development Study': 11612, 'single crystal diffuse scattering': 11613, 'Atypical endometrial hyperplasia': 11614, 'adult emergence holes': 11615, 'after embryo hatching': 11616, 'blue green red': 11617, 'Brooks Gelman Rubin': 11618, 'Motivational Interviewing Treatment Integrity': 11619, 'Mobile Insulin Titration Intervention': 11620, 'natural killer cell cytotoxicity': 11621, 'Na K Cl cotransporter': 11622, 'acetate based infusion solution': 11623, 'Automated Best Image Selection': 11624, 'Decoupled biconical fixed points': 11625, 'dry blood filter paper': 11626, 'long lived particles': 11627, 'late lactation proteins': 11628, 'contrast enhanced spectral mammography': 11629, 'Community Earth System Model': 11630, 'evolutionary significant units': 11631, 'experimental substance use': 11632, 'vascular occlusion score': 11633, 'Voice Outcome Survey': 11634, 'CKAP2 positive cell count': 11635, 'Chinese Prostate Cancer Consortium': 11636, 'percutanous microwave coagulation therapy': 11637, 'Post mortem computed tomography': 11638, 'peripheral motor conduction time': 11639, 'differential transcript usage': 11640, 'discrete typing units': 11641, 'CMP Kdo synthetase': 11642, 'cyclin kinase subunit': 11643, 'Wet Distillers Grains': 11644, 'water dispersible granules': 11645, 'forsythia essential oil': 11646, 'familial expansile osteolysis': 11647, 'food entrainable oscillator': 11648, 'foam eyes open': 11649, 'Quinoa Protein Isolates': 11650, 'quasi particle 
interference': 11651, 'Ready to eat cereals': 11652, 'renal tubular epithelial cells': 11653, 'Growing up milks': 11654, 'grand unified model': 11655, 'black adzuki beans': 11656, 'bahiagrass alfalfa baleage': 11657, 'blood aqueous barrier': 11658, 'superficial ventral inferior protocerebrum': 11659, 'small VCP/p97 interacting protein': 11660, 'Est one homology': 11661, 'end of hemorrhage': 11662, 'double hit lymphomas': 11663, 'Duane Hunt limit': 11664, 'fresh excreta weight': 11665, 'forward expansion wave': 11666, 'Hair lengths variation': 11667, 'horseradish latent virus': 11668, 'hydrodynamic limb vein': 11669, 'quantitative trait variants': 11670, 'quantitative temporal viromics': 11671, 'transcript per quarter': 11672, 'target platform quadrant': 11673, 'Turnover Fragile Breakage Model': 11674, 'transcription factor binding motifs': 11675, 'Tumor Aberration Prediction Suite': 11676, 'thermally activated phase slips': 11677, 'Burkholderia pan genome array': 11678, 'Bacterial Pan Genome Analysis': 11679, 'minimal reverse complementary covering': 11680, 'metastatic renal cell carcinoma': 11681, 'interactive heterochromatic island': 11682, 'Ifakara Health Institute': 11683, 'Incomplete Hippocampal Inversion': 11684, 'Differentially methylated window': 11685, 'Deutsche Medizinische Wochenschrift': 11686, 'first order conditional independence': 11687, 'frequency offset corrected inversion': 11688, 'ancestral vertebrate karyotype': 11689, 'actuated virtual keypad': 11690, 'primary enamel knot': 11691, 'pancreatic EIF2α kinase': 11692, 'general nucleotide codon': 11693, 'General Nursing Council': 11694, 'grain N concentration': 11695, 'Coriell Personalized Medicine Collaborative': 11696, 'colonic peristaltic motor complex': 11697, 'ocean heat content tendency': 11698, 'of hydroxy cis terpenone': 11699, 'GEWEX Asian monsoon experiment': 11700, 'gallic acid methyl ester': 11701, 'Mosquito Spruce Unburned': 11702, 'Medical Services Unit': 11703, 'multinucleated stromal giant cells': 11704, 'Major salivary gland cancer': 11705, 'lamina cribrosa tilt angles': 11706, 'latent class trajectory analyses': 11707, 'weighted piecewise linear': 11708, 'whole parasite lysates': 11709, 'inferior nodal extension': 11710, 'intensive nutrition education': 11711, 'Anglo Cardiff Collaborative Trial': 11712, 'Aortic cross clamp time': 11713, 'human motor area template': 11714, 'half maximal antibody titers': 11715, 'Universidade Eduardo Mondlane': 11716, 'ultrafast electron microscopy': 11717, 'Drug Discovery Unit': 11718, 'Dirofilaria development units': 11719, 'Human endometrial endothelial cell': 11720, 'human esophageal epithelial cells': 11721, 'young healthy controls': 11722, 'Youth Health Care': 11723, 'multiple exposed uninfected': 11724, 'Monomeric equivalent unit': 11725, 'Post Arrest Consult Team': 11726, 'Preventable Admissions Care Team': 11727, 'pericentrin AKAP450 centrosomal targeting': 11728, 'acute care unit': 11729, 'aortic cross unclamping': 11730, 'East Surrey Hospital': 11731, 'expansile skeletal hyperphosphatasia': 11732, 'external snapping hip': 11733, 'elongated secondary hyphae': 11734, 'Asian longhorned beetle': 11735, 'Aged lager beer': 11736, 'Oxford Bio Innovation': 11737, 'on board imaging': 11738, 'non milk extrinsic sugars': 11739, 'Neuro Muscular Electrical Stimulation': 11740, 'computed tomography dose index': 11741, 'C Terminal Domain I': 11742, 'negative air ions': 11743, 'nucleus attraction index': 11744, 'naturally acquired immunity': 11745, 'NFκB activation inhibitor': 
11746, 'Gokuldas Tejpal Hospital': 11747, 'Glycine tomentella Hayata': 11748, 'Bernhard Nocht Institute': 11749, 'Barrow Neurological Institute': 11750, 'central fan integrated supply': 11751, 'cell free implant system': 11752, 'Tibetan Plateau monsoon index': 11753, 'third person motor imagery': 11754, 'Juglans rigia Linn': 11755, 'jacalin related lectin': 11756, 'adaptive Internet users': 11757, 'Average Intensity units': 11758, 'maladaptive Internet users': 11759, 'Million International units': 11760, 'non pregnant women': 11761, 'non paretic workspace': 11762, 'Environmental Health Officers': 11763, 'early Holocene Optimum': 11764, 'average relative humidity': 11765, 'Autosomal Recessive Hypercholesterolemia': 11766, 'ADP ribosylarginine hydrolases': 11767, 'directly observed treatment strategy': 11768, 'density of trion states': 11769, 'I kappaB kinase': 11770, 'inhibitor kB kinase': 11771, 'Deoxy d glucose': 11772, 'digital delay generator': 11773, 'Flavin mono nucleotide': 11774, 'facial motor nucleus': 11775, 'non stressed nodules': 11776, 'non surrounded nucleolus': 11777, 'non sentinel node': 11778, 'modified seedbox screening test': 11779, 'Motion Sensitivity Screening Test': 11780, 'pulmonary vascular endothelial cell': 11781, 'partial volume effect corrected': 11782, 'conditions of normal': 11783, 'cingulo opercular network': 11784, 'in vitro encapsidated': 11785, 'in vivo electrotransfer': 11786, 'equine recurrent uveitis': 11787, 'EF2 reacting unit': 11788, 'zosteric acid salt': 11789, 'zymosan activated serum': 11790, 'tea water extract': 11791, 'Tasco® Water Extract': 11792, 'total worm extracts': 11793, 'high dose ionizing radiation': 11794, 'HISS dependent insulin resistance': 11795, 'phosphatidyl ethanolamine binding protein': 11796, 'polyphenol enriched blueberry preparation': 11797, 'pre epiglottic baton plate': 11798, 'young leaf center': 11799, 'Yeast Like Core': 11800, 'Alpinate Oxyphyllae Fructus': 11801, 'Almeria Oran Front': 11802, 'Voxel Guided Morphometry': 11803, 'ventral grey matter': 11804, 'Aletrnative Splice Site Predictor': 11805, 'Airborne Smoke Sampling package': 11806, 'Frequency Distribution Histogram': 11807, 'fast degradable hydrogel': 11808, 'vector enabled metagenomics': 11809, 'visual estimation method': 11810, 'Sensitive Centrifugal Flotation Technique': 11811, 'self consistent field theory': 11812, 'Caesalpinia pulcherrima seed polysaccharide': 11813, 'Central post stroke pain': 11814, 'orally disintegrating tablets': 11815, 'optical dipole trap': 11816, 'Junior Project Officer': 11817, 'Josephson parametric oscillator': 11818, 'Albright Hereditary Osteodystrophy': 11819, 'all hide OTUs': 11820, 'fluid accumulation vest': 11821, 'fistule artério veineuse': 11822, 'severe therapy resistant asthma': 11823, 'superior temporal retinal arteriole': 11824, 'aortic depressor nerve': 11825, 'anterior dorsal nerve': 11826, 'H antigen acceptor': 11827, 'hepatic artery aneurysm': 11828, 'hydroxy anthranilic acid': 11829, 'right side out': 11830, 'radial shortening osteotomy': 11831, 'rubber seed oil': 11832, 'kallikrein kinin system': 11833, 'Kansas knee simulator': 11834, 'Larger The Better': 11835, 'low trait bulk': 11836, 'Mean Gastric Emptying Time': 11837, 'Marine Geospatial Ecology Tools': 11838, 'Cys Cys His Cys': 11839, 'Cameron County Hispanic Cohort': 11840, 'International Mouse Phenotyping Consortium': 11841, 'invasive micro papillary carcinoma': 11842, 'autoimmune hypocalciuric hypercalcemia': 11843, 'Aryl hydrocarbon hydroxylase': 11844, 
'open vein harvest': 11845, 'Oral verrucous hyperplasia': 11846, 'ventricular myocardial band': 11847, 'ventral medial basal': 11848, 'albumin based hydrogel sealant': 11849, 'alcohol based hand sanitizer': 11850, 'right ventricular insertion point': 11851, 'Rapid Visual Information Processing': 11852, 'pulmonary venous obstruction': 11853, 'Pyogenic vertebral osteomyelitis': 11854, 'Pulmonary blood volume variation': 11855, 'prognostic binary variable vector': 11856, 'blood thrombus imaging': 11857, 'brain tissue imprints': 11858, 'Jun N‐terminal kinase': 11859, 'jun NH2 kinase': 11860, 'Clinical Laboratory Improvement Amendments': 11861, 'Continuous local infiltration analgesia': 11862, 'Exercise associated hyponatremia': 11863, 'endometrial atypical hyperplasia': 11864, 'National Glycohemoglobin Standardization Program': 11865, 'nested gene specific primer': 11866, 'breast milk iodine concentrations': 11867, 'brain metastasis initiating cell': 11868, 'Shahroud industrial Zone': 11869, 'sea ice zone': 11870, 'Pure water flux': 11871, 'paw withdrawal frequency': 11872, 'electrolyzed oxidizing water': 11873, 'every other week': 11874, 'Secure Anonymised Information Linkage': 11875, 'Syngenta Arabidopsis Insertion Library': 11876, 'traditional ecological knowledge': 11877, 'Traditional Environmental Knowledge': 11878, 'days after solution change': 11879, 'distal airway stem cell': 11880, 'oxo nonanoic acid': 11881, 'Oxoid nutrient agar': 11882, 'optic nerve aplasia': 11883, 'Cornell High Energy Synchrotron Source': 11884, 'Comprehensive Health Enhancement Support System': 11885, 'transmissible venereal tumor': 11886, 'Tube Versus Trabeculectomy': 11887, 'simian T lymphotropic virus': 11888, 'slow turning lateral vessel': 11889, 'tissue culture petri dish': 11890, 'trabecular ciliary process distance': 11891, 'Chemical Ionization Mass Spectrometers': 11892, 'Controlled ionization marine solution': 11893, 'double expressing lymphoma': 11894, 'duck egg lysozyme': 11895, 'differentially expressed lncRNA': 11896, 'running anaerobic sprint test': 11897, 'rapid annotation subsystem technology': 11898, 'Multi ingredient performance supplements': 11899, 'Minimally Invasive Ponto Surgery': 11900, 'molecular inversion probe sequencing': 11901, 'myo inositol phosphate synthase': 11902, 'Non convulsive status epilepticus': 11903, 'normalized corrected Shannon entropy': 11904, 'normal cervical squamous epithelium': 11905, 'Web Accessibility Barrier': 11906, 'Western Aphasia Battery': 11907, 'vigorous moderate walking': 11908, 'Village Malaria Worker': 11909, 'HCC amino D alanine': 11910, 'hydroxycoumarin amino d alanine': 11911, 'multiple copy simultaneous search': 11912, 'mean composite severity score': 11913, 'Fugl Meyer Stroke Assessment': 11914, 'flexible micro spring array': 11915, 'selective voluntary movement control': 11916, 'spin vector Monte Carlo': 11917, 'finger independency index': 11918, 'foliar insecticide impact': 11919, 'Food Insulin Index': 11920, 'geometric mean channel fluorescence': 11921, 'geodesic mean curvature flow': 11922, 'Kristiansand Nikkelrafferingsverk refinery': 11923, 'Kerr nonlinear resonators': 11924, 'Scaphoid non union': 11925, 'shot noise units': 11926, 'Hard Gelatin Capsules': 11927, 'human granulosa cells': 11928, 'low CHO high fat': 11929, 'low carbohydrate high fat': 11930, 'quick fixation screw': 11931, 'quadruplex forming sequences': 11932, 'Y Ba Cu O': 11933, 'yttrium barium copper oxide': 11934, 'unusual mortality event': 11935, 'undergraduate medical 
education': 11936, 'whole liver tissue': 11937, 'withdrawal latency time': 11938, 'upper normal limit': 11939, 'Upper Nutrient Level': 11940, 'Di Aminido Phenyl Indol': 11941, 'di amino phenyl indolamine': 11942, 'Calyx seu Fructus Physalis': 11943, 'coronary slow flow phenomenon': 11944, 'Conventionally Treated nets': 11945, 'cyhalothrin treated nets': 11946, 'European Mouse Mutant Archive': 11947, 'efficient mixed model analysis': 11948, 'RNA editing core complex': 11949, 'Rapidly Evolving Clinical Cascades': 11950, 'viral genome equivalents': 11951, 'venous gas emboli': 11952, 'upper end vertebra': 11953, 'ubiquitin E2 variant': 11954, 'thymidylate synthase enhancer region': 11955, 'Tortugas South Ecological Reserve': 11956, 'baseline impedance level': 11957, 'backcross inbred line': 11958, 'Direct contact membrane distillation': 11959, 'descending contralateral movement detector': 11960, 'Clinical Laboratory Standards Institute': 11961, 'Cercospora Leaf Spot Index': 11962, 'Standardized Human Gut Microbiota': 11963, 'Stephen Hui Geological Museum': 11964, 'old saline planktonic large': 11965, 'outer segment photoreceptor layer': 11966, 'N helminthoeca Oregon': 11967, 'no half occlusions': 11968, 'Ubiquitin Fusion Degradation': 11969, 'ubiquitin fold domain': 11970, 'human primary colon cancer': 11971, 'high performance computing cluster': 11972, 'neural crest stem cells': 11973, 'non cancer stem cells': 11974, 'optical injection locking': 11975, 'oxygen inhibition layer': 11976, 'Caucasus hunter gatherers': 11977, 'cingulum hippocampus gyrus': 11978, 'bending induced oscillatory shear': 11979, 'Basic Input Output System': 11980, 'inter follicular epidermis': 11981, 'Individual fold error': 11982, 'friend virus B': 11983, 'Forearm vascular bed': 11984, 'c src kinase': 11985, 'chloroplast sensor kinase': 11986, 'time in quiescence': 11987, 'Total Intelligence Quotient': 11988, 'repressive chromatin hub': 11989, 'retinal capillary hemangioblastomas': 11990, 'A koraiensis extract': 11991, 'apricot kernel extract': 11992, 'cancer stem like cells': 11993, 'carrier state life cycle': 11994, 'lower limit normal': 11995, 'left lymph node': 11996, 'fraction covered vessels': 11997, 'frottis cervico vaginal': 11998, 'paxillin GFP Nudel': 11999, 'Pontine Gray Nucleus': 12000} - -INFINITY_NUMBER = 1e12 \ No newline at end of file diff --git a/spaces/Liu-LAB/GPT-academic/crazy_functions/test_project/python/dqn/__init__.py b/spaces/Liu-LAB/GPT-academic/crazy_functions/test_project/python/dqn/__init__.py deleted file mode 100644 index 4ae42872c812a7c8a18dff002086c7e6e935f580..0000000000000000000000000000000000000000 --- a/spaces/Liu-LAB/GPT-academic/crazy_functions/test_project/python/dqn/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from stable_baselines3.dqn.dqn import DQN -from stable_baselines3.dqn.policies import CnnPolicy, MlpPolicy diff --git a/spaces/Luna-Crestt/How_is_it_ze/utils.py b/spaces/Luna-Crestt/How_is_it_ze/utils.py deleted file mode 100644 index ff1c065d186347ca51b47d010a697dbe1814695c..0000000000000000000000000000000000000000 --- a/spaces/Luna-Crestt/How_is_it_ze/utils.py +++ /dev/null @@ -1,6 +0,0 @@ -def is_google_colab(): - try: - import google.colab - return True - except: - return False \ No newline at end of file diff --git a/spaces/MCkernick/Image_Restoration_Colorization/Face_Enhancement/models/networks/normalization.py b/spaces/MCkernick/Image_Restoration_Colorization/Face_Enhancement/models/networks/normalization.py deleted file mode 100644 index 
a865db0290c7159c6e641bbc52e14fbc79dde289..0000000000000000000000000000000000000000 --- a/spaces/MCkernick/Image_Restoration_Colorization/Face_Enhancement/models/networks/normalization.py +++ /dev/null @@ -1,100 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. - -import re -import torch -import torch.nn as nn -import torch.nn.functional as F -from models.networks.sync_batchnorm import SynchronizedBatchNorm2d -import torch.nn.utils.spectral_norm as spectral_norm - - -def get_nonspade_norm_layer(opt, norm_type="instance"): - # helper function to get # output channels of the previous layer - def get_out_channel(layer): - if hasattr(layer, "out_channels"): - return getattr(layer, "out_channels") - return layer.weight.size(0) - - # this function will be returned - def add_norm_layer(layer): - nonlocal norm_type - if norm_type.startswith("spectral"): - layer = spectral_norm(layer) - subnorm_type = norm_type[len("spectral") :] - - if subnorm_type == "none" or len(subnorm_type) == 0: - return layer - - # remove bias in the previous layer, which is meaningless - # since it has no effect after normalization - if getattr(layer, "bias", None) is not None: - delattr(layer, "bias") - layer.register_parameter("bias", None) - - if subnorm_type == "batch": - norm_layer = nn.BatchNorm2d(get_out_channel(layer), affine=True) - elif subnorm_type == "sync_batch": - norm_layer = SynchronizedBatchNorm2d(get_out_channel(layer), affine=True) - elif subnorm_type == "instance": - norm_layer = nn.InstanceNorm2d(get_out_channel(layer), affine=False) - else: - raise ValueError("normalization layer %s is not recognized" % subnorm_type) - - return nn.Sequential(layer, norm_layer) - - return add_norm_layer - - -class SPADE(nn.Module): - def __init__(self, config_text, norm_nc, label_nc, opt): - super().__init__() - - assert config_text.startswith("spade") - parsed = re.search("spade(\D+)(\d)x\d", config_text) - param_free_norm_type = str(parsed.group(1)) - ks = int(parsed.group(2)) - self.opt = opt - if param_free_norm_type == "instance": - self.param_free_norm = nn.InstanceNorm2d(norm_nc, affine=False) - elif param_free_norm_type == "syncbatch": - self.param_free_norm = SynchronizedBatchNorm2d(norm_nc, affine=False) - elif param_free_norm_type == "batch": - self.param_free_norm = nn.BatchNorm2d(norm_nc, affine=False) - else: - raise ValueError("%s is not a recognized param-free norm type in SPADE" % param_free_norm_type) - - # The dimension of the intermediate embedding space. Yes, hardcoded. - nhidden = 128 - - pw = ks // 2 - - if self.opt.no_parsing_map: - self.mlp_shared = nn.Sequential(nn.Conv2d(3, nhidden, kernel_size=ks, padding=pw), nn.ReLU()) - else: - self.mlp_shared = nn.Sequential( - nn.Conv2d(label_nc + 3, nhidden, kernel_size=ks, padding=pw), nn.ReLU() - ) - self.mlp_gamma = nn.Conv2d(nhidden, norm_nc, kernel_size=ks, padding=pw) - self.mlp_beta = nn.Conv2d(nhidden, norm_nc, kernel_size=ks, padding=pw) - - def forward(self, x, segmap, degraded_image): - - # Part 1. generate parameter-free normalized activations - normalized = self.param_free_norm(x) - - # Part 2. 
produce scaling and bias conditioned on semantic map - segmap = F.interpolate(segmap, size=x.size()[2:], mode="nearest") - degraded_face = F.interpolate(degraded_image, size=x.size()[2:], mode="bilinear") - - if self.opt.no_parsing_map: - actv = self.mlp_shared(degraded_face) - else: - actv = self.mlp_shared(torch.cat((segmap, degraded_face), dim=1)) - gamma = self.mlp_gamma(actv) - beta = self.mlp_beta(actv) - - # apply scale and bias - out = normalized * (1 + gamma) + beta - - return out diff --git a/spaces/MGLDZM/chgpt/error_map.py b/spaces/MGLDZM/chgpt/error_map.py deleted file mode 100644 index 90d5ac820f006a5bf60ad20b6250ffb1dc3b58cb..0000000000000000000000000000000000000000 --- a/spaces/MGLDZM/chgpt/error_map.py +++ /dev/null @@ -1,26 +0,0 @@ -from openai import error as openai_error -from fastapi import status -import requests -error_table = { - requests.exceptions.RequestException: { - "status_code": status.HTTP_408_REQUEST_TIMEOUT, - "detail": "Los servidores tardaron mucho en responder, puede haber sobrecarga en OpenAI, reintenta luego (error 1)" - }, - openai_error.APIConnectionError: { - "status_code": status.HTTP_408_REQUEST_TIMEOUT, - "detail": "El servidor no respondió, hubo un error de API" - }, - openai_error.Timeout: { - "status_code": status.HTTP_408_REQUEST_TIMEOUT, - "detail": "El servidor tardó demasiado en responder" - }, - openai_error.InvalidRequestError: { - "status_code": status.HTTP_408_REQUEST_TIMEOUT, - "detail": "ChatGPT se gomitó 🤮, este chat ya es muy largo, limpia el chat y reintenta." - }, - "undefined": { - "status_code": status.HTTP_408_REQUEST_TIMEOUT, - "detail": "Error no definido 🙄" - } -} - \ No newline at end of file diff --git a/spaces/Manjushri/MusicGen/audiocraft/quantization/__init__.py b/spaces/Manjushri/MusicGen/audiocraft/quantization/__init__.py deleted file mode 100644 index 836d6eb518978480c6b95d6f29ce4f84a9428793..0000000000000000000000000000000000000000 --- a/spaces/Manjushri/MusicGen/audiocraft/quantization/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
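-# A minimal usage sketch of the re-exported quantizer (hypothetical argument
-# values; the real signatures live in .vq and .base):
-#   from audiocraft.quantization import ResidualVectorQuantizer
-#   rvq = ResidualVectorQuantizer(dimension=128, n_q=8, bins=1024)
-#   out = rvq(torch.randn(1, 128, 50), frame_rate=50)  # -> QuantizedResult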
- -# flake8: noqa -from .vq import ResidualVectorQuantizer -from .base import BaseQuantizer, DummyQuantizer, QuantizedResult diff --git a/spaces/March07/PromptBench/README.md b/spaces/March07/PromptBench/README.md deleted file mode 100644 index bebbb71b7ed00d9ca589763db8933a42926c5c64..0000000000000000000000000000000000000000 --- a/spaces/March07/PromptBench/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: PromptBench -emoji: 🏃 -colorFrom: yellow -colorTo: yellow -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MarkMcCormack/NLP-EduTech-App/utils.py b/spaces/MarkMcCormack/NLP-EduTech-App/utils.py deleted file mode 100644 index 311d2a0a291bcc1002a05f9458665be940b7c3b5..0000000000000000000000000000000000000000 --- a/spaces/MarkMcCormack/NLP-EduTech-App/utils.py +++ /dev/null @@ -1,33 +0,0 @@ -import streamlit as st -from langchain.chains import LLMChain -from feedbackCollection import database - -llmOpenAI = None - -def createComponent(template, component_title, text_input_label): - global count - - with st.expander(f"{component_title}"): - st.info(f'Prompt: {template.template}', icon="🗣️") - user_input = st.text_input(f"{text_input_label}", key=f"{component_title}_button") - displayResult = st.success('Result: ', icon="✅") - - if st.button('Send Request!', key=f"{component_title}{text_input_label}_button"): - chain = LLMChain(llm=llmOpenAI, prompt=template) - result = chain.run(user_input) - displayResult.success(f'Result: {result}', icon="✅") - database.insert_one({"Question": template.template, "User Input": user_input, "Answer": result}) - -def createAutomatedComponent(template, component_title, text_input_label): - global count - - with st.expander(f"{component_title}"): - st.info(f'Prompt: {template.template}', icon="🗣️") - user_input = st.text_input(f"{text_input_label}", key=f"{component_title}_button") - displayResult = st.success('Result: ', icon="✅") - - if st.button('Send Request!', key=f"{component_title}{text_input_label}_button"): - chain = LLMChain(llm=llmOpenAI, prompt=template) - result = chain.run(user_input) - displayResult.success(f'Result: {result}', icon="✅") - database.insert_one({"Question": template.template, "User Input": user_input, "Answer": result}) \ No newline at end of file diff --git a/spaces/MercurialAi/OncologyGPT/app.py b/spaces/MercurialAi/OncologyGPT/app.py deleted file mode 100644 index f820d45791a97b94cb8d6342e44ddb8cb38c00c4..0000000000000000000000000000000000000000 --- a/spaces/MercurialAi/OncologyGPT/app.py +++ /dev/null @@ -1,64 +0,0 @@ -import gradio as gr -import os -os.system("pip -qq install openai") -import openai - -EX_Q1 = "What is the best course of treatment for a 65-year-old breast cancer patient with a non-ductal carcinoma and HER-2 positive status? Explain the reasoning behind it. " -EX_Q2 = "What is the best course of treatment for a 25-year-old non-metastatic breast cancer patient with a Nottingham grade of 5 and HER-2 negative status? Explain the reasoning behind it. " -EX_Q3 = "What testing must a patient candidate for poly ADP-ribose polymerase (PARP) inhibitor therapy undergo to determine their eligibility?" -EX_Q4 = "What criteria are used to determine a patient's eligibility for treatment with PARP inhibitors like olaparib and talazoparib for metastatic HER2-negative breast cancer?" 
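-# The EX_Q* constants are canned oncology prompts; they feed the "Inquiry
-# Examples" gr.Radio defined below, and selecting one copies the prompt into
-# the question textbox via get_question_example.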
-EX_Q5 = "What should be done before discharging a patient who had a mastectomy with axillary node clearance?" -EX_Q6 = "What is the best course of treatment for a 60-year-old metastatic breast cancer patient with a tumor size of 3.0 cm, HER-2 negative status, and a tumor grade of 2.0 based on nuclear characteristics? Explain the reasoning behind it. " -EX_Q7 = "What is the best course of treatment for a 50-year-old breast metastatic breast cancer patient with HER-2 positive status, a tumor size of 2 cm, and the involvement of 2 lymph nodes? Explain the reasoning behind it. " - -def get_response(Q): - - # clear cache before generating new response - os.system('huggingface-cli delete-cache') - - response = openai.Completion.create( - model="davinci:ft-personal-2023-07-31-03-16-53", - prompt=Q+"->", - max_tokens=128, - temperature=1 - ) - - response = str(response.choices[0].text) - - # cut response off after end is signified - end_index = len(response) - end_markers = ["END", " ", "\n"] - for mark in end_markers: - if mark in response: - end_index = response.index(mark) - break - - response = response[:end_index] - - return response - -def bot(Q, history): - history = history or [] - c_history = list(sum(history, ())) - c_history.append(Q) - c_input = ' '.join(c_history) - output = get_response(c_input) - history.append((Q, output)) - return history, history - -def get_question_example(qe): - return qe - -with gr.Blocks() as iFace: - - chatbot = gr.Chatbot() - state = gr.State() - - Q = gr.Textbox(show_label=False, placeholder="I'm here to help.").style(container=False) - - question_example = gr.Radio(label="Inquiry Examples", choices=[EX_Q1, EX_Q2, EX_Q3, EX_Q4, EX_Q5, EX_Q6, EX_Q7]) - - Q.submit(bot, inputs=[Q, state], outputs=[chatbot, state]) - question_example.change(get_question_example, inputs=[question_example], outputs=Q) - -iFace.launch() diff --git a/spaces/Mississippiexhib/theintuitiveye-HARDblend/app.py b/spaces/Mississippiexhib/theintuitiveye-HARDblend/app.py deleted file mode 100644 index 8a15aa259c24dcf344f83207fe84282ed37c7da2..0000000000000000000000000000000000000000 --- a/spaces/Mississippiexhib/theintuitiveye-HARDblend/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/theintuitiveye/HARDblend").launch() \ No newline at end of file diff --git a/spaces/Miuzarte/SUI-svc-4.0/cluster/train_cluster.py b/spaces/Miuzarte/SUI-svc-4.0/cluster/train_cluster.py deleted file mode 100644 index 83518f89bd959ab9a8929552b109bd63a5185fd2..0000000000000000000000000000000000000000 --- a/spaces/Miuzarte/SUI-svc-4.0/cluster/train_cluster.py +++ /dev/null @@ -1,89 +0,0 @@ -import os -from glob import glob -from pathlib import Path -import torch -import logging -import argparse -import torch -import numpy as np -from sklearn.cluster import KMeans, MiniBatchKMeans -import tqdm -logging.basicConfig(level=logging.INFO) -logger = logging.getLogger(__name__) -import time -import random - -def train_cluster(in_dir, n_clusters, use_minibatch=True, verbose=False): - - logger.info(f"Loading features from {in_dir}") - features = [] - nums = 0 - for path in tqdm.tqdm(in_dir.glob("*.soft.pt")): - features.append(torch.load(path).squeeze(0).numpy().T) - # print(features[-1].shape) - features = np.concatenate(features, axis=0) - print(nums, features.nbytes/ 1024**2, "MB , shape:",features.shape, features.dtype) - features = features.astype(np.float32) - logger.info(f"Clustering features of shape: {features.shape}") - t = time.time() - if use_minibatch: - kmeans = 
MiniBatchKMeans(n_clusters=n_clusters,verbose=verbose, batch_size=4096, max_iter=80).fit(features) - else: - kmeans = KMeans(n_clusters=n_clusters,verbose=verbose).fit(features) - print(time.time()-t, "s") - - x = { - "n_features_in_": kmeans.n_features_in_, - "_n_threads": kmeans._n_threads, - "cluster_centers_": kmeans.cluster_centers_, - } - print("end") - - return x - - -if __name__ == "__main__": - - parser = argparse.ArgumentParser() - parser.add_argument('--dataset', type=Path, default="dataset/44k", - help='path of training data directory') - parser.add_argument('--output', type=Path, default="logs/44k", - help='path of model output directory') - - args = parser.parse_args() - - checkpoint_dir = args.output - dataset = args.dataset - n_clusters = 10000 - - ckpt = {} - for spk in os.listdir(dataset): - if os.path.isdir(dataset/spk): - print(f"train kmeans for {spk}...") - in_dir = dataset/spk - x = train_cluster(in_dir, n_clusters, verbose=False) - ckpt[spk] = x - - checkpoint_path = checkpoint_dir / f"kmeans_{n_clusters}.pt" - checkpoint_path.parent.mkdir(exist_ok=True, parents=True) - torch.save( - ckpt, - checkpoint_path, - ) - - - # import cluster - # for spk in tqdm.tqdm(os.listdir("dataset")): - # if os.path.isdir(f"dataset/{spk}"): - # print(f"start kmeans inference for {spk}...") - # for feature_path in tqdm.tqdm(glob(f"dataset/{spk}/*.discrete.npy", recursive=True)): - # mel_path = feature_path.replace(".discrete.npy",".mel.npy") - # mel_spectrogram = np.load(mel_path) - # feature_len = mel_spectrogram.shape[-1] - # c = np.load(feature_path) - # c = utils.tools.repeat_expand_2d(torch.FloatTensor(c), feature_len).numpy() - # feature = c.T - # feature_class = cluster.get_cluster_result(feature, spk) - # np.save(feature_path.replace(".discrete.npy", ".discrete_class.npy"), feature_class) - - diff --git a/spaces/Mountchicken/MAERec-Gradio/configs/textdet/panet/_base_panet_resnet18_fpem-ffm.py b/spaces/Mountchicken/MAERec-Gradio/configs/textdet/panet/_base_panet_resnet18_fpem-ffm.py deleted file mode 100644 index 49b66da4afec5245883c40116d35e018e8935e71..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/configs/textdet/panet/_base_panet_resnet18_fpem-ffm.py +++ /dev/null @@ -1,77 +0,0 @@ -# BasicBlock has a little difference from official PANet -# BasicBlock in mmdet lacks RELU in the last convolution. 
-model = dict( - type='PANet', - data_preprocessor=dict( - type='TextDetDataPreprocessor', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - bgr_to_rgb=True, - pad_size_divisor=32), - backbone=dict( - type='mmdet.ResNet', - depth=18, - num_stages=4, - stem_channels=128, - deep_stem=True, - out_indices=(0, 1, 2, 3), - frozen_stages=-1, - norm_eval=False, - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet18'), - style='pytorch'), - neck=dict(type='FPEM_FFM', in_channels=[64, 128, 256, 512]), - det_head=dict( - type='PANHead', - in_channels=[128, 128, 128, 128], - hidden_dim=128, - out_channel=6, - module_loss=dict( - type='PANModuleLoss', - loss_text=dict(type='MaskedSquareDiceLoss'), - loss_kernel=dict(type='MaskedSquareDiceLoss'), - ), - postprocessor=dict(type='PANPostprocessor', text_repr_type='quad'))) - -train_pipeline = [ - dict(type='LoadImageFromFile', color_type='color_ignore_orientation'), - dict( - type='LoadOCRAnnotations', - with_polygon=True, - with_bbox=True, - with_label=True, - ), - dict(type='ShortScaleAspectJitter', short_size=736, scale_divisor=32), - dict(type='RandomFlip', prob=0.5, direction='horizontal'), - dict(type='RandomRotate', max_angle=10), - dict(type='TextDetRandomCrop', target_size=(736, 736)), - dict(type='Pad', size=(736, 736)), - dict( - type='TorchVisionWrapper', - op='ColorJitter', - brightness=32.0 / 255, - saturation=0.5), - dict( - type='PackTextDetInputs', - meta_keys=('img_path', 'ori_shape', 'img_shape', 'scale_factor')) -] - -test_pipeline = [ - dict(type='LoadImageFromFile', color_type='color_ignore_orientation'), - # TODO Replace with mmcv.RescaleToShort when it's ready - dict( - type='ShortScaleAspectJitter', - short_size=736, - scale_divisor=1, - ratio_range=(1.0, 1.0), - aspect_ratio_range=(1.0, 1.0)), - # add loading annotation after ``Resize`` because ground truth - # does not need to do resize data transform - dict( - type='LoadOCRAnnotations', - with_polygon=True, - with_bbox=True, - with_label=True), - dict( - type='PackTextDetInputs', - meta_keys=('img_path', 'ori_shape', 'img_shape', 'scale_factor')) -] diff --git a/spaces/Munderstand/whisper-to-chatGPT/README.md b/spaces/Munderstand/whisper-to-chatGPT/README.md deleted file mode 100644 index 2a07ed591202b5d563026813d22ca1b1f9029431..0000000000000000000000000000000000000000 --- a/spaces/Munderstand/whisper-to-chatGPT/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Whisper to chatGPT -emoji: 👄🤖 -colorFrom: indigo -colorTo: yellow -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: fffiloni/whisper-to-chatGPT ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/NATSpeech/DiffSpeech/utils/commons/meters.py b/spaces/NATSpeech/DiffSpeech/utils/commons/meters.py deleted file mode 100644 index e38790e9f292ec843a820dad73c9795eb2ab8daa..0000000000000000000000000000000000000000 --- a/spaces/NATSpeech/DiffSpeech/utils/commons/meters.py +++ /dev/null @@ -1,42 +0,0 @@ -import time -import torch - - -class AvgrageMeter(object): - - def __init__(self): - self.reset() - - def reset(self): - self.avg = 0 - self.sum = 0 - self.cnt = 0 - - def update(self, val, n=1): - self.sum += val * n - self.cnt += n - self.avg = self.sum / self.cnt - - -class Timer: - timer_map = {} - - def __init__(self, name, enable=False): - if name not in Timer.timer_map: - Timer.timer_map[name] = 0 - self.name = name - self.enable = enable - - def 
__enter__(self): - if self.enable: - if torch.cuda.is_available(): - torch.cuda.synchronize() - self.t = time.time() - - def __exit__(self, exc_type, exc_val, exc_tb): - if self.enable: - if torch.cuda.is_available(): - torch.cuda.synchronize() - Timer.timer_map[self.name] += time.time() - self.t - if self.enable: - print(f'[Timer] {self.name}: {Timer.timer_map[self.name]}') diff --git a/spaces/NATSpeech/PortaSpeech/tasks/tts/ps.py b/spaces/NATSpeech/PortaSpeech/tasks/tts/ps.py deleted file mode 100644 index 995dec8c7f40c27310a6231b08330e807d02c405..0000000000000000000000000000000000000000 --- a/spaces/NATSpeech/PortaSpeech/tasks/tts/ps.py +++ /dev/null @@ -1,194 +0,0 @@ -import os -import torch -import torch.nn.functional as F -from torch import nn - -from modules.tts.portaspeech.portaspeech import PortaSpeech -from tasks.tts.fs import FastSpeechTask -from utils.audio.align import mel2token_to_dur -from utils.commons.hparams import hparams -from utils.metrics.diagonal_metrics import get_focus_rate, get_phone_coverage_rate, get_diagonal_focus_rate -from utils.nn.model_utils import num_params -import numpy as np - -from utils.plot.plot import spec_to_figure -from utils.text.text_encoder import build_token_encoder - - -class PortaSpeechTask(FastSpeechTask): - def __init__(self): - super().__init__() - data_dir = hparams['binary_data_dir'] - self.word_encoder = build_token_encoder(f'{data_dir}/word_set.json') - - def build_tts_model(self): - ph_dict_size = len(self.token_encoder) - word_dict_size = len(self.word_encoder) - self.model = PortaSpeech(ph_dict_size, word_dict_size, hparams) - - def on_train_start(self): - super().on_train_start() - for n, m in self.model.named_children(): - num_params(m, model_name=n) - if hasattr(self.model, 'fvae'): - for n, m in self.model.fvae.named_children(): - num_params(m, model_name=f'fvae.{n}') - - def run_model(self, sample, infer=False, *args, **kwargs): - txt_tokens = sample['txt_tokens'] - word_tokens = sample['word_tokens'] - spk_embed = sample.get('spk_embed') - spk_id = sample.get('spk_ids') - if not infer: - output = self.model(txt_tokens, word_tokens, - ph2word=sample['ph2word'], - mel2word=sample['mel2word'], - mel2ph=sample['mel2ph'], - word_len=sample['word_lengths'].max(), - tgt_mels=sample['mels'], - pitch=sample.get('pitch'), - spk_embed=spk_embed, - spk_id=spk_id, - infer=False, - global_step=self.global_step) - losses = {} - losses['kl_v'] = output['kl'].detach() - losses_kl = output['kl'] - losses_kl = torch.clamp(losses_kl, min=hparams['kl_min']) - losses_kl = min(self.global_step / hparams['kl_start_steps'], 1) * losses_kl - losses_kl = losses_kl * hparams['lambda_kl'] - losses['kl'] = losses_kl - self.add_mel_loss(output['mel_out'], sample['mels'], losses) - if hparams['dur_level'] == 'word': - self.add_dur_loss( - output['dur'], sample['mel2word'], sample['word_lengths'], sample['txt_tokens'], losses) - self.get_attn_stats(output['attn'], sample, losses) - else: - super(PortaSpeechTask, self).add_dur_loss(output['dur'], sample['mel2ph'], sample['txt_tokens'], losses) - return losses, output - else: - use_gt_dur = kwargs.get('infer_use_gt_dur', hparams['use_gt_dur']) - output = self.model( - txt_tokens, word_tokens, - ph2word=sample['ph2word'], - word_len=sample['word_lengths'].max(), - pitch=sample.get('pitch'), - mel2ph=sample['mel2ph'] if use_gt_dur else None, - mel2word=sample['mel2word'] if use_gt_dur else None, - tgt_mels=sample['mels'], - infer=True, - spk_embed=spk_embed, - spk_id=spk_id, - ) - return output - - def 
add_dur_loss(self, dur_pred, mel2token, word_len, txt_tokens, losses=None): - T = word_len.max() - dur_gt = mel2token_to_dur(mel2token, T).float() - nonpadding = (torch.arange(T).to(dur_pred.device)[None, :] < word_len[:, None]).float() - dur_pred = dur_pred * nonpadding - dur_gt = dur_gt * nonpadding - wdur = F.l1_loss((dur_pred + 1).log(), (dur_gt + 1).log(), reduction='none') - wdur = (wdur * nonpadding).sum() / nonpadding.sum() - if hparams['lambda_word_dur'] > 0: - losses['wdur'] = wdur * hparams['lambda_word_dur'] - if hparams['lambda_sent_dur'] > 0: - sent_dur_p = dur_pred.sum(-1) - sent_dur_g = dur_gt.sum(-1) - sdur_loss = F.l1_loss(sent_dur_p, sent_dur_g, reduction='mean') - losses['sdur'] = sdur_loss.mean() * hparams['lambda_sent_dur'] - - def validation_step(self, sample, batch_idx): - return super().validation_step(sample, batch_idx) - - def save_valid_result(self, sample, batch_idx, model_out): - super(PortaSpeechTask, self).save_valid_result(sample, batch_idx, model_out) - if self.global_step > 0 and hparams['dur_level'] == 'word': - self.logger.add_figure(f'attn_{batch_idx}', spec_to_figure(model_out['attn'][0]), self.global_step) - - def get_attn_stats(self, attn, sample, logging_outputs, prefix=''): - # diagonal_focus_rate - txt_lengths = sample['txt_lengths'].float() - mel_lengths = sample['mel_lengths'].float() - src_padding_mask = sample['txt_tokens'].eq(0) - target_padding_mask = sample['mels'].abs().sum(-1).eq(0) - src_seg_mask = sample['txt_tokens'].eq(self.seg_idx) - attn_ks = txt_lengths.float() / mel_lengths.float() - - focus_rate = get_focus_rate(attn, src_padding_mask, target_padding_mask).mean().data - phone_coverage_rate = get_phone_coverage_rate( - attn, src_padding_mask, src_seg_mask, target_padding_mask).mean() - diagonal_focus_rate, diag_mask = get_diagonal_focus_rate( - attn, attn_ks, mel_lengths, src_padding_mask, target_padding_mask) - logging_outputs[f'{prefix}fr'] = focus_rate.mean().data - logging_outputs[f'{prefix}pcr'] = phone_coverage_rate.mean().data - logging_outputs[f'{prefix}dfr'] = diagonal_focus_rate.mean().data - - def get_plot_dur_info(self, sample, model_out): - if hparams['dur_level'] == 'word': - T_txt = sample['word_lengths'].max() - dur_gt = mel2token_to_dur(sample['mel2word'], T_txt)[0] - dur_pred = model_out['dur'] if 'dur' in model_out else dur_gt - txt = sample['ph_words'][0].split(" ") - else: - T_txt = sample['txt_tokens'].shape[1] - dur_gt = mel2token_to_dur(sample['mel2ph'], T_txt)[0] - dur_pred = model_out['dur'] if 'dur' in model_out else dur_gt - txt = self.token_encoder.decode(sample['txt_tokens'][0].cpu().numpy()) - txt = txt.split(" ") - return {'dur_gt': dur_gt, 'dur_pred': dur_pred, 'txt': txt} - - def build_optimizer(self, model): - self.optimizer = torch.optim.AdamW( - self.model.parameters(), - lr=hparams['lr'], - betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']), - weight_decay=hparams['weight_decay']) - return self.optimizer - - def build_scheduler(self, optimizer): - return FastSpeechTask.build_scheduler(self, optimizer) - - ############ - # infer - ############ - def test_start(self): - super().test_start() - if hparams.get('save_attn', False): - os.makedirs(f'{self.gen_dir}/attn', exist_ok=True) - self.model.store_inverse_all() - - def test_step(self, sample, batch_idx): - assert sample['txt_tokens'].shape[0] == 1, 'only support batch_size=1 in inference' - outputs = self.run_model(sample, infer=True) - text = sample['text'][0] - item_name = sample['item_name'][0] - tokens = 
sample['txt_tokens'][0].cpu().numpy() - mel_gt = sample['mels'][0].cpu().numpy() - mel_pred = outputs['mel_out'][0].cpu().numpy() - mel2ph = sample['mel2ph'][0].cpu().numpy() - mel2ph_pred = None - str_phs = self.token_encoder.decode(tokens, strip_padding=True) - base_fn = f'[{batch_idx:06d}][{item_name.replace("%", "_")}][%s]' - if text is not None: - base_fn += text.replace(":", "$3A")[:80] - base_fn = base_fn.replace(' ', '_') - gen_dir = self.gen_dir - wav_pred = self.vocoder.spec2wav(mel_pred) - self.saving_result_pool.add_job(self.save_result, args=[ - wav_pred, mel_pred, base_fn % 'P', gen_dir, str_phs, mel2ph_pred]) - if hparams['save_gt']: - wav_gt = self.vocoder.spec2wav(mel_gt) - self.saving_result_pool.add_job(self.save_result, args=[ - wav_gt, mel_gt, base_fn % 'G', gen_dir, str_phs, mel2ph]) - if hparams.get('save_attn', False): - attn = outputs['attn'][0].cpu().numpy() - np.save(f'{gen_dir}/attn/{item_name}.npy', attn) - print(f"Pred_shape: {mel_pred.shape}, gt_shape: {mel_gt.shape}") - return { - 'item_name': item_name, - 'text': text, - 'ph_tokens': self.token_encoder.decode(tokens.tolist()), - 'wav_fn_pred': base_fn % 'P', - 'wav_fn_gt': base_fn % 'G', - } diff --git a/spaces/NeuralInternet/Text-Generation_Playground/extensions/send_pictures/script.py b/spaces/NeuralInternet/Text-Generation_Playground/extensions/send_pictures/script.py deleted file mode 100644 index b0c356329a51edf026f7223a0ee7e5427d8751ce..0000000000000000000000000000000000000000 --- a/spaces/NeuralInternet/Text-Generation_Playground/extensions/send_pictures/script.py +++ /dev/null @@ -1,46 +0,0 @@ -import base64 -from io import BytesIO - -import gradio as gr -import torch -from transformers import BlipForConditionalGeneration, BlipProcessor - -import modules.chat as chat -import modules.shared as shared - -# If 'state' is True, will hijack the next chat generation with -# custom input text given by 'value' in the format [text, visible_text] -input_hijack = { - 'state': False, - 'value': ["", ""] -} - -processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") -model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float32).to("cpu") - -def caption_image(raw_image): - inputs = processor(raw_image.convert('RGB'), return_tensors="pt").to("cpu", torch.float32) - out = model.generate(**inputs, max_new_tokens=100) - return processor.decode(out[0], skip_special_tokens=True) - -def generate_chat_picture(picture, name1, name2): - text = f'*{name1} sends {name2} a picture that contains the following: "{caption_image(picture)}"*' - buffer = BytesIO() - picture.save(buffer, format="JPEG") - img_str = base64.b64encode(buffer.getvalue()).decode('utf-8') - visible_text = f'<img src="data:image/jpeg;base64,{img_str}">' - return text, visible_text - -def ui(): - picture_select = gr.Image(label='Send a picture', type='pil') - - function_call = 'chat.cai_chatbot_wrapper' if shared.args.cai_chat else 'chat.chatbot_wrapper' - - # Prepare the hijack with custom inputs - picture_select.upload(lambda picture, name1, name2: input_hijack.update({"state": True, "value": generate_chat_picture(picture, name1, name2)}), [picture_select, shared.gradio['name1'], shared.gradio['name2']], None) - - # Call the generation function - picture_select.upload(eval(function_call), shared.input_params, shared.gradio['display'], show_progress=shared.args.no_stream) - - # Clear the picture from the upload field - picture_select.upload(lambda : None, [], [picture_select], show_progress=False) diff --git 
a/spaces/Nick1/rvc-models/lib/infer_pack/modules.py b/spaces/Nick1/rvc-models/lib/infer_pack/modules.py deleted file mode 100644 index c83289df7c79a4810dacd15c050148544ba0b6a9..0000000000000000000000000000000000000000 --- a/spaces/Nick1/rvc-models/lib/infer_pack/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from lib.infer_pack.transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 1." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dilated and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = 
self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 
1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, 
logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/OAOA/DifFace/basicsr/models/__init__.py b/spaces/OAOA/DifFace/basicsr/models/__init__.py deleted file mode 100644 index 85796deae014c20a9aa600133468d04900c4fb89..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/basicsr/models/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -import importlib -from copy import deepcopy -from os import path as osp - -from basicsr.utils import get_root_logger, scandir -from basicsr.utils.registry import MODEL_REGISTRY - -__all__ = ['build_model'] - -# automatically scan and import model modules for registry -# scan all the files under the 'models' folder and collect files ending with '_model.py' -model_folder = osp.dirname(osp.abspath(__file__)) -model_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(model_folder) if v.endswith('_model.py')] -# import all the model modules -_model_modules = [importlib.import_module(f'basicsr.models.{file_name}') for file_name in model_filenames] - - -def build_model(opt): - """Build model from options. - - Args: - opt (dict): Configuration. It must contain: - model_type (str): Model type. 
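- The remaining keys are passed through unchanged to the registered model class.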
- """ - opt = deepcopy(opt) - model = MODEL_REGISTRY.get(opt['model_type'])(opt) - logger = get_root_logger() - logger.info(f'Model [{model.__class__.__name__}] is created.') - return model diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/utils/cider/pyciderevalcap/ciderD/ciderD.py b/spaces/OFA-Sys/OFA-Generic_Interface/utils/cider/pyciderevalcap/ciderD/ciderD.py deleted file mode 100644 index 280f9890312a76b54695b2a8c456c5d52a87e186..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/utils/cider/pyciderevalcap/ciderD/ciderD.py +++ /dev/null @@ -1,58 +0,0 @@ -# Filename: ciderD.py -# -# Description: Describes the class to compute the CIDEr-D (Consensus-Based Image Description Evaluation) Metric -# by Vedantam, Zitnick, and Parikh (http://arxiv.org/abs/1411.5726) -# -# Creation Date: Sun Feb 8 14:16:54 2015 -# -# Authors: Ramakrishna Vedantam and Tsung-Yi Lin -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -from .ciderD_scorer import CiderScorer -import pdb - -class CiderD: - """ - Main Class to compute the CIDEr metric - - """ - def __init__(self, n=4, sigma=6.0, df="corpus"): - # set cider to sum over 1 to 4-grams - self._n = n - # set the standard deviation parameter for gaussian penalty - self._sigma = sigma - # set which where to compute document frequencies from - self._df = df - self.cider_scorer = CiderScorer(n=self._n, df_mode=self._df) - - def compute_score(self, gts, res): - """ - Main function to compute CIDEr score - :param hypo_for_image (dict) : dictionary with key and value - ref_for_image (dict) : dictionary with key and value - :return: cider (float) : computed CIDEr score for the corpus - """ - - # clear all the previous hypos and refs - tmp_cider_scorer = self.cider_scorer.copy_empty() - tmp_cider_scorer.clear() - for res_id in res: - - hypo = res_id['caption'] - ref = gts[res_id['image_id']] - - # Sanity check. - assert(type(hypo) is list) - assert(len(hypo) == 1) - assert(type(ref) is list) - assert(len(ref) > 0) - tmp_cider_scorer += (hypo[0], ref) - - (score, scores) = tmp_cider_scorer.compute_score() - - return score, scores - - def method(self): - return "CIDEr-D" diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/quantization/pq/pq.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/quantization/pq/pq.py deleted file mode 100644 index eddc2eb34602403f10979f54cd23a45bc2f104d5..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/quantization/pq/pq.py +++ /dev/null @@ -1,128 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .em import EM, EmptyClusterResolveError - - -class PQ(EM): - """ - Quantizes the layer weights W with the standard Product Quantization - technique. This learns a codebook of codewords or centroids of size - block_size from W. For further reference on using PQ to quantize - neural networks, see "And the Bit Goes Down: Revisiting the Quantization - of Neural Networks", Stock et al., ICLR 2020. - - PQ is performed in two steps: - (1) The matrix W (weights or fully-connected or convolutional layer) - is reshaped to (block_size, -1). - - If W is fully-connected (2D), its columns are split into - blocks of size block_size. - - If W is convolutional (4D), its filters are split along the - spatial dimension. 
- (2) We apply the standard EM/k-means algorithm to the resulting reshaped matrix. - - Args: - - W: weight matrix to quantize of size (in_features x out_features) - - block_size: size of the blocks (subvectors) - - n_centroids: number of centroids - - n_iter: number of k-means iterations - - eps: for cluster reassignment when an empty cluster is found - - max_tentatives: for cluster reassignment when an empty cluster is found - - verbose: print information after each iteration - - Remarks: - - block_size must be compatible with the shape of W - """ - - def __init__( - self, - W, - block_size, - n_centroids=256, - n_iter=20, - eps=1e-6, - max_tentatives=30, - verbose=True, - ): - self.block_size = block_size - W_reshaped = self._reshape(W) - super(PQ, self).__init__( - W_reshaped, - n_centroids=n_centroids, - n_iter=n_iter, - eps=eps, - max_tentatives=max_tentatives, - verbose=verbose, - ) - - def _reshape(self, W): - """ - Reshapes the matrix W as explained in step (1). - """ - - # fully connected: by convention the weight has size out_features x in_features - if len(W.size()) == 2: - self.out_features, self.in_features = W.size() - assert ( - self.in_features % self.block_size == 0 - ), "Linear: n_blocks must be a multiple of in_features" - return ( - W.reshape(self.out_features, -1, self.block_size) - .permute(2, 1, 0) - .flatten(1, 2) - ) - - # convolutional: we reshape along the spatial dimension - elif len(W.size()) == 4: - self.out_channels, self.in_channels, self.k_h, self.k_w = W.size() - assert ( - self.in_channels * self.k_h * self.k_w - ) % self.block_size == 0, ( - "Conv2d: n_blocks must be a multiple of in_channels * k_h * k_w" - ) - return ( - W.reshape(self.out_channels, -1, self.block_size) - .permute(2, 1, 0) - .flatten(1, 2) - ) - # not implemented - else: - raise NotImplementedError(W.size()) - - def encode(self): - """ - Performs self.n_iter EM steps. - """ - - self.initialize_centroids() - for i in range(self.n_iter): - try: - self.step(i) - except EmptyClusterResolveError: - break - - def decode(self): - """ - Returns the encoded full weight matrix. Must be called after - the encode function. - """ - - # fully connected case - if "k_h" not in self.__dict__: - return ( - self.centroids[self.assignments] - .reshape(-1, self.out_features, self.block_size) - .permute(1, 0, 2) - .flatten(1, 2) - ) - - # convolutional case - else: - return ( - self.centroids[self.assignments] - .reshape(-1, self.out_channels, self.block_size) - .permute(1, 0, 2) - .reshape(self.out_channels, self.in_channels, self.k_h, self.k_w) - ) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/docs/conf.py b/spaces/OFA-Sys/OFA-vqa/fairseq/docs/conf.py deleted file mode 100644 index 87b0db98c77d0c240c030a0b48354c86b84358d1..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/docs/conf.py +++ /dev/null @@ -1,134 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -# -# fairseq documentation build configuration file, created by -# sphinx-quickstart on Fri Aug 17 21:45:30 2018. -# -# This file is execfile()d with the current directory set to its -# containing dir. -# -# Note that not all possible configuration values are present in this -# autogenerated file. -# -# All configuration values have a default; values that are commented out -# serve to show the default. - -# If extensions (or modules to document with autodoc) are in another directory, -# add these directories to sys.path here. 
If the directory is relative to the -# documentation root, use os.path.abspath to make it absolute, like shown here. - -import os -import sys -from fairseq import __version__ - - -# source code directory, relative to this file, for sphinx-autobuild -sys.path.insert(0, os.path.abspath("..")) - -source_suffix = [".rst"] - -# -- General configuration ------------------------------------------------ - -# If your documentation needs a minimal Sphinx version, state it here. -# -# needs_sphinx = '1.0' - -# Add any Sphinx extension module names here, as strings. They can be -# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom -# ones. -extensions = [ - "sphinx.ext.autodoc", - "sphinx.ext.intersphinx", - "sphinx.ext.viewcode", - "sphinx.ext.napoleon", - "sphinxarg.ext", -] - -# Add any paths that contain templates here, relative to this directory. -templates_path = ["_templates"] - -# The master toctree document. -master_doc = "index" - -# General information about the project. -project = "fairseq" -copyright = "Facebook AI Research (FAIR)" -author = "Facebook AI Research (FAIR)" - -github_doc_root = "https://github.com/pytorch/fairseq/tree/main/docs/" - -# The version info for the project you're documenting, acts as replacement for -# |version| and |release|, also used in various other places throughout the -# built documents. -# -# The short X.Y version. -version = __version__ -# The full version, including alpha/beta/rc tags. -release = __version__ - -# The language for content autogenerated by Sphinx. Refer to documentation -# for a list of supported languages. -# -# This is also used if you do content translation via gettext catalogs. -# Usually you set "language" from the command line for these cases. -language = None - -# List of patterns, relative to source directory, that match files and -# directories to ignore when looking for source files. -# This patterns also effect to html_static_path and html_extra_path -exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"] - -# The name of the Pygments (syntax highlighting) style to use. -pygments_style = "sphinx" -highlight_language = "python" - -# If true, `todo` and `todoList` produce output, else they produce nothing. -todo_include_todos = False - - -# -- Options for HTML output ---------------------------------------------- - -# The theme to use for HTML and HTML Help pages. See the documentation for -# a list of builtin themes. -# -html_theme = "sphinx_rtd_theme" - -# Theme options are theme-specific and customize the look and feel of a theme -# further. For a list of options available for each theme, see the -# documentation. -# -# html_theme_options = {} - -# Add any paths that contain custom static files (such as style sheets) here, -# relative to this directory. They are copied after the builtin static files, -# so a file named "default.css" will overwrite the builtin "default.css". -html_static_path = ["_static"] - -html_context = { - "css_files": [ - "_static/theme_overrides.css", # override wide tables in RTD theme - ], -} - -# Custom sidebar templates, must be a dictionary that maps document names -# to template names. -# -# This is required for the alabaster theme -# refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars -# html_sidebars = { -# '**': [ -# 'about.html', -# 'navigation.html', -# 'relations.html', # needs 'show_related': True theme option to display -# 'searchbox.html', -# 'donate.html', -# ] -# } - - -# Example configuration for intersphinx: refer to the Python standard library. 
-intersphinx_mapping = { - "numpy": ("http://docs.scipy.org/doc/numpy/", None), - "python": ("https://docs.python.org/", None), - "torch": ("https://pytorch.org/docs/master/", None), -} diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/latent_depth/latent_depth_src/models/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/latent_depth/latent_depth_src/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_to_text/docs/covost_example.md b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_to_text/docs/covost_example.md deleted file mode 100644 index 16447f041e4751f79d9f7848b33ef2ff943d63c2..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_to_text/docs/covost_example.md +++ /dev/null @@ -1,102 +0,0 @@ -[[Back]](..) - -# S2T Example: ST on CoVoST -We replicate the experiments in -[CoVoST 2 and Massively Multilingual Speech-to-Text Translation (Wang et al., 2020)](https://arxiv.org/abs/2007.10310). - -## Data Preparation -[Download](https://commonvoice.mozilla.org/en/datasets) and unpack Common Voice v4 to a path -`${COVOST_ROOT}/${SOURCE_LANG_ID}`, then preprocess it with -```bash -# additional Python packages for S2T data processing/model training -pip install pandas torchaudio sentencepiece - -# En ASR -python examples/speech_to_text/prep_covost_data.py \ - --data-root ${COVOST_ROOT} --vocab-type char --src-lang en -# ST -python examples/speech_to_text/prep_covost_data.py \ - --data-root ${COVOST_ROOT} --vocab-type char \ - --src-lang fr --tgt-lang en -``` -The generated files (manifest, features, vocabulary and data configuration) will be added to -`${COVOST_ROOT}/${SOURCE_LANG_ID}`. - -Download our vocabulary files if you want to use our pre-trained models: -- ASR: [En](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_asr_vocab_char.zip) -- ST: [Fr-En](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_fr_en_st_vocab_char.zip), [De-En](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_de_en_st_vocab_char.zip), [Es-En](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_es_en_st_vocab_char.zip), [Ca-En](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_ca_en_st_vocab_char.zip), [En-De](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_de_st_vocab_char.zip), [En-Ca](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_ca_st_vocab_char.zip), [En-Fa](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_fa_st_vocab_char.zip), [En-Et](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_et_st_vocab_char.zip) - -## ASR -#### Training -We train an En ASR model for encoder pre-training of all ST models: -```bash -fairseq-train ${COVOST_ROOT}/en \ - --config-yaml config_asr_en.yaml --train-subset train_asr_en --valid-subset dev_asr_en \ - --save-dir ${ASR_SAVE_DIR} --num-workers 4 --max-tokens 50000 --max-update 60000 \ - --task speech_to_text --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --report-accuracy --arch s2t_transformer_s --dropout 0.15 --optimizer adam --lr 2e-3 \ - --lr-scheduler inverse_sqrt --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8 -``` -where `ASR_SAVE_DIR` is the checkpoint root path. We set `--update-freq 8` to simulate 8 GPUs with 1 GPU. -You may want to update it accordingly when using more than 1 GPU. 
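-For example (an illustrative sketch, not part of the original recipe: it assumes
-the usual fairseq convention that the effective batch size scales with
-`num_gpus x update_freq`), a 4-GPU run would halve the update frequency to keep
-the same effective batch size:
-
-```bash
-# hypothetical 4-GPU launch: 4 GPUs x --update-freq 2 ~ 8 GPUs x --update-freq 1
-CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train ${COVOST_ROOT}/en \
-  --config-yaml config_asr_en.yaml --train-subset train_asr_en --valid-subset dev_asr_en \
-  --save-dir ${ASR_SAVE_DIR} --update-freq 2  # remaining flags as in the command above
-```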
- -#### Inference & Evaluation -```bash -CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt -python scripts/average_checkpoints.py \ - --inputs ${ASR_SAVE_DIR} --num-epoch-checkpoints 10 \ - --output "${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME}" -fairseq-generate ${COVOST_ROOT}/en \ - --config-yaml config_asr_en.yaml --gen-subset test_asr_en --task speech_to_text \ - --path ${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} --max-tokens 50000 --beam 5 \ - --scoring wer --wer-tokenizer 13a --wer-lowercase --wer-remove-punct -``` -#### Results -| --arch | Params | En | Model | -|---|---|---|---| -| s2t_transformer_s | 31M | 25.6 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_asr_transformer_s.pt) | - -## ST -#### Training -Fr-En as an example: -```bash -# use --max-tokens 50000 for en-* directions -fairseq-train ${COVOST_ROOT}/fr \ - --config-yaml config_st_fr_en.yaml --train-subset train_st_fr_en --valid-subset dev_st_fr_en \ - --save-dir ${ST_SAVE_DIR} --num-workers 4 --max-update 30000 --max-tokens 40000 \ - --task speech_to_text --criterion label_smoothed_cross_entropy --label-smoothing 0.1 --report-accuracy \ - --arch s2t_transformer_s --encoder-freezing-updates 1000 --optimizer adam --lr 2e-3 \ - --lr-scheduler inverse_sqrt --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8 \ - --load-pretrained-encoder-from ${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} -``` -where `ST_SAVE_DIR` is the checkpoint root path. The ST encoder is pre-trained by En ASR for faster training and better -performance: `--load-pretrained-encoder-from <ASR checkpoint>`. We set `--update-freq 8` to simulate 8 GPUs with 1 GPU. -You may want to update it accordingly when using more than 1 GPU. - -#### Inference & Evaluation -Average the last 10 checkpoints and evaluate on test split: -```bash -CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt -python scripts/average_checkpoints.py \ - --inputs ${ST_SAVE_DIR} --num-epoch-checkpoints 10 \ - --output "${ST_SAVE_DIR}/${CHECKPOINT_FILENAME}" -fairseq-generate ${COVOST_ROOT}/fr \ - --config-yaml config_st_fr_en.yaml --gen-subset test_st_fr_en --task speech_to_text \ - --path ${ST_SAVE_DIR}/${CHECKPOINT_FILENAME} \ - --max-tokens 50000 --beam 5 --scoring sacrebleu -``` - -## Interactive Decoding -Launch the interactive console via -```bash -fairseq-interactive ${COVOST_ROOT}/fr --config-yaml config_st_fr_en.yaml \ - --task speech_to_text --path ${SAVE_DIR}/${CHECKPOINT_FILENAME} \ - --max-tokens 50000 --beam 5 -``` -Type in WAV/FLAC/OGG audio paths (one per line) after the prompt. - -#### Results -| --arch | Params | Fr-En | De-En | Es-En | Ca-En | En-De | En-Ca | En-Fa | En-Et | Model | -|---|---|---|---|---|---|---|---|---|---|---| -| s2t_transformer_s | 31M | [27.2](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_fr_en_st_transformer_s.pt) | [17.7](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_de_en_st_transformer_s.pt) | [23.1](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_es_en_st_transformer_s.pt) | [19.3](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_ca_en_st_transformer_s.pt) | [16.1](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_de_st_transformer_s.pt) | [21.6](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_ca_st_transformer_s.pt) | [12.9](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_fa_st_transformer_s.pt) | [12.8](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_et_st_transformer_s.pt) | (<-Download) | - -[[Back]](..) 
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lm.sh b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lm.sh deleted file mode 100644 index c2edcefede2da3b6a991b9c8fbc78c96d46d27cb..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lm.sh +++ /dev/null @@ -1,35 +0,0 @@ -#!/usr/bin/env bash - -langdir="" -lmdir="" - -. ./cmd.sh -. ./path.sh -. parse_options.sh - -arpa_lm=$1 -data=$2 - -if [ -z $langdir ]; then - langdir=$data/lang -fi -if [ -z $lmdir ]; then - lmdir=$data/lang_test -fi - -if [ ! -d $langdir ]; then - echo "$langdir not found. run local/prepare_lang.sh first" && exit 1 -fi - -mkdir -p $lmdir -cp -r $langdir/* $lmdir - -if [[ "$arpa_lm" == *.gz ]]; then - gunzip -c $arpa_lm | arpa2fst --disambig-symbol=#0 --read-symbol-table=$lmdir/words.txt - $lmdir/G.fst -else - arpa2fst --disambig-symbol=#0 --read-symbol-table=$lmdir/words.txt $arpa_lm $lmdir/G.fst -fi -fstisstochastic $lmdir/G.fst -utils/validate_lang.pl $lmdir || exit 1 - -echo "done preparing lm ($lmdir)" diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/dynamicconv_layer/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/dynamicconv_layer/__init__.py deleted file mode 100644 index 22dc6f403d2a0ecdb1b9e7e69ed96bd560e93b2c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/dynamicconv_layer/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .dynamicconv_layer import DynamicconvLayer # noqa diff --git a/spaces/OpenDILabCommunity/LLMRiddlesChatGPTEN/llmriddles/questions/__init__.py b/spaces/OpenDILabCommunity/LLMRiddlesChatGPTEN/llmriddles/questions/__init__.py deleted file mode 100644 index 6904d13a3d820f39b4b54d09671673e49a06aa4c..0000000000000000000000000000000000000000 --- a/spaces/OpenDILabCommunity/LLMRiddlesChatGPTEN/llmriddles/questions/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -from .executor import QuestionExecutor -from .level1 import __file__ as _level1_file_ -from .level2 import __file__ as _level2_file_ -from .level3 import __file__ as _level3_file_ -from .level4 import __file__ as _level4_file_ -from .level5 import __file__ as _level5_file_ -from .question import Question, register_question, list_ordered_questions - -_ = _level1_file_ -_ = _level2_file_ -_ = _level3_file_ -_ = _level4_file_ -_ = _level5_file_ \ No newline at end of file diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/docs/tutorials/augmentation.md b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/docs/tutorials/augmentation.md deleted file mode 100644 index 7601a082ceadf645e32468c2045dfe50c1216efc..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/docs/tutorials/augmentation.md +++ /dev/null @@ -1,186 +0,0 @@ - -# Data Augmentation - -Augmentation is an important part of training. -Detectron2's data augmentation system aims at addressing the following goals: - -1. Allow augmenting multiple data types together - (e.g., images together with their bounding boxes and masks) -2. Allow applying a sequence of statically-declared augmentation -3. 
Allow adding custom new data types to augment (rotated bounding boxes, video clips, etc.) -4. Process and manipulate the __operations__ that are applied by augmentations - -The first two features cover most of the common use cases, and are also -available in other libraries such as [albumentations](https://medium.com/pytorch/multi-target-in-albumentations-16a777e9006e). -Supporting other features adds some overhead to detectron2's augmentation API, -which we'll explain in this tutorial. - -This tutorial focuses on how to use augmentations when writing new data loaders, -and how to write new augmentations. -If you use the default data loader in detectron2, it already supports taking a user-provided list of custom augmentations, -as explained in the [Dataloader tutorial](data_loading). - -## Basic Usage - -The basic usage of features (1) and (2) is like the following: -```python -from detectron2.data import transforms as T -# Define a sequence of augmentations: -augs = T.AugmentationList([ - T.RandomBrightness(0.9, 1.1), - T.RandomFlip(prob=0.5), - T.RandomCrop("absolute", (640, 640)) -]) # type: T.Augmentation - -# Define the augmentation input ("image" required, others optional): -input = T.AugInput(image, boxes=boxes, sem_seg=sem_seg) -# Apply the augmentation: -transform = augs(input) # type: T.Transform -image_transformed = input.image # new image -sem_seg_transformed = input.sem_seg # new semantic segmentation - -# For any extra data that needs to be augmented together, use transform, e.g.: -image2_transformed = transform.apply_image(image2) -polygons_transformed = transform.apply_polygons(polygons) -``` - -Three basic concepts are involved here. They are: -* [T.Augmentation](../modules/data_transforms.html#detectron2.data.transforms.Augmentation) defines the __"policy"__ to modify inputs. - * its `__call__(AugInput) -> Transform` method augments the inputs in-place, and returns the operation that is applied -* [T.Transform](../modules/data_transforms.html#detectron2.data.transforms.Transform) - implements the actual __operations__ to transform data - * it has methods such as `apply_image`, `apply_coords` that define how to transform each data type -* [T.AugInput](../modules/data_transforms.html#detectron2.data.transforms.AugInput) - stores inputs needed by `T.Augmentation` and how they should be transformed. - This concept is needed for some advanced usage. - Using this class directly should be sufficient for all common use cases, - since extra data not in `T.AugInput` can be augmented using the returned - `transform`, as shown in the above example. - -## Write New Augmentations - -Most 2D augmentations only need to know about the input image. Such an augmentation can be implemented easily like this: - -```python -class MyColorAugmentation(T.Augmentation): - def get_transform(self, image): - r = np.random.rand(2) - return T.ColorTransform(lambda x: x * r[0] + r[1] * 10) - -class MyCustomResize(T.Augmentation): - def get_transform(self, image): - old_h, old_w = image.shape[:2] - new_h, new_w = int(old_h * np.random.rand()), int(old_w * 1.5) - return T.ResizeTransform(old_h, old_w, new_h, new_w) - -augs = MyCustomResize() -transform = augs(input) -``` - -In addition to image, any attributes of the given `AugInput` can be used as long -as they are part of the function signature, e.g.: - -```python -class MyCustomCrop(T.Augmentation): - def get_transform(self, image, sem_seg): - # decide where to crop using both image and sem_seg - return T.CropTransform(...) 
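-# note: the `image` and `sem_seg` parameters above are filled in automatically,
-# matched by name against attributes of the `AugInput` passed to the augmentation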
- -augs = MyCustomCrop() -assert hasattr(input, "image") and hasattr(input, "sem_seg") -transform = augs(input) -``` - -New transform operation can also be added by subclassing -[T.Transform](../modules/data_transforms.html#detectron2.data.transforms.Transform). - -## Advanced Usage - -We give a few examples of advanced usages that -are enabled by our system. -These options can be interesting to new research, -although changing them is often not needed -for standard use cases. - -### Custom transform strategy - -Instead of only returning the augmented data, detectron2's `Augmentation` returns the __operations__ as `T.Transform`. -This allows users to apply custom transform strategy on their data. -We use keypoints data as an example. - -Keypoints are (x, y) coordinates, but they are not so trivial to augment due to the semantic meaning they carry. -Such meaning is only known to the users, therefore users may want to augment them manually -by looking at the returned `transform`. -For example, when an image is horizontally flipped, we'd like to swap the keypoint annotations for "left eye" and "right eye". -This can be done like this (included by default in detectron2's default data loader): -```python -# augs, input are defined as in previous examples -transform = augs(input) # type: T.Transform -keypoints_xy = transform.apply_coords(keypoints_xy) # transform the coordinates - -# get a list of all transforms that were applied -transforms = T.TransformList([transform]).transforms -# check if it is flipped for odd number of times -do_hflip = sum(isinstance(t, T.HFlipTransform) for t in transforms) % 2 == 1 -if do_hflip: - keypoints_xy = keypoints_xy[flip_indices_mapping] -``` - -As another example, keypoints annotations often have a "visibility" field. -A sequence of augmentations might augment a visible keypoint out of the image boundary (e.g. with cropping), -but then bring it back within the boundary afterwards (e.g. with image padding). -If users decide to label such keypoints "invisible", -then the visibility check has to happen after every transform step. -This can be achieved by: - -```python -transform = augs(input) # type: T.TransformList -assert isinstance(transform, T.TransformList) -for t in transform.transforms: - keypoints_xy = t.apply_coords(keypoints_xy) - visibility &= (keypoints_xy >= [0, 0] & keypoints_xy <= [W, H]).all(axis=1) - -# btw, detectron2's `transform_keypoint_annotations` function chooses to label such keypoints "visible": -# keypoints_xy = transform.apply_coords(keypoints_xy) -# visibility &= (keypoints_xy >= [0, 0] & keypoints_xy <= [W, H]).all(axis=1) -``` - - -### Geometrically invert the transform -If images are pre-processed by augmentations before inference, the predicted results -such as segmentation masks are localized on the augmented image. -We'd like to invert the applied augmentation with the [inverse()](../modules/data_transforms.html#detectron2.data.transforms.Transform.inverse) -API, to obtain results on the original image: -```python -transform = augs(input) -pred_mask = make_prediction(input.image) -inv_transform = transform.inverse() -pred_mask_orig = inv_transform.apply_segmentation(pred_mask) -``` - -### Add new data types - -[T.Transform](../modules/data_transforms.html#detectron2.data.transforms.Transform) -supports a few common data types to transform, including images, coordinates, masks, boxes, polygons. 
-It allows registering new data types, e.g.: -```python -@T.HFlipTransform.register_type("rotated_boxes") -def func(flip_transform: T.HFlipTransform, rotated_boxes: Any): - # do the work - return flipped_rotated_boxes - -t = HFlipTransform(width=800) -transformed_rotated_boxes = t.apply_rotated_boxes(rotated_boxes) # func will be called -``` - -### Extend T.AugInput - -An augmentation can only access attributes available in the given input. -[T.AugInput](../modules/data_transforms.html#detectron2.data.transforms.StandardAugInput) defines "image", "boxes", "sem_seg", -which are sufficient for common augmentation strategies to decide how to augment. -If not, a custom implementation is needed. - -By re-implement the "transform()" method in AugInput, it is also possible to -augment different fields in ways that are dependent on each other. -Such use case is uncommon (e.g. post-process bounding box based on augmented masks), but allowed by the system. - diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/training/losses/constants.py b/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/training/losses/constants.py deleted file mode 100644 index ae3e5e151342232be8e2c2a77fe6fd5798dc2a8c..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/training/losses/constants.py +++ /dev/null @@ -1,152 +0,0 @@ -weights = {"ade20k": - [6.34517766497462, - 9.328358208955224, - 11.389521640091116, - 16.10305958132045, - 20.833333333333332, - 22.22222222222222, - 25.125628140703515, - 43.29004329004329, - 50.5050505050505, - 54.6448087431694, - 55.24861878453038, - 60.24096385542168, - 62.5, - 66.2251655629139, - 84.74576271186442, - 90.90909090909092, - 91.74311926605505, - 96.15384615384616, - 96.15384615384616, - 97.08737864077669, - 102.04081632653062, - 135.13513513513513, - 149.2537313432836, - 153.84615384615384, - 163.93442622950818, - 166.66666666666666, - 188.67924528301887, - 192.30769230769232, - 217.3913043478261, - 227.27272727272725, - 227.27272727272725, - 227.27272727272725, - 303.03030303030306, - 322.5806451612903, - 333.3333333333333, - 370.3703703703703, - 384.61538461538464, - 416.6666666666667, - 416.6666666666667, - 434.7826086956522, - 434.7826086956522, - 454.5454545454545, - 454.5454545454545, - 500.0, - 526.3157894736842, - 526.3157894736842, - 555.5555555555555, - 555.5555555555555, - 555.5555555555555, - 555.5555555555555, - 555.5555555555555, - 555.5555555555555, - 555.5555555555555, - 588.2352941176471, - 588.2352941176471, - 588.2352941176471, - 588.2352941176471, - 588.2352941176471, - 666.6666666666666, - 666.6666666666666, - 666.6666666666666, - 666.6666666666666, - 714.2857142857143, - 714.2857142857143, - 714.2857142857143, - 714.2857142857143, - 714.2857142857143, - 769.2307692307693, - 769.2307692307693, - 769.2307692307693, - 833.3333333333334, - 833.3333333333334, - 833.3333333333334, - 833.3333333333334, - 909.090909090909, - 1000.0, - 1111.111111111111, - 1111.111111111111, - 1111.111111111111, - 1111.111111111111, - 1111.111111111111, - 1250.0, - 1250.0, - 1250.0, - 1250.0, - 1250.0, - 1428.5714285714287, - 1428.5714285714287, - 1428.5714285714287, - 1428.5714285714287, - 1428.5714285714287, - 1428.5714285714287, - 1428.5714285714287, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 2000.0, - 
2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 5000.0, - 5000.0, - 5000.0] -} \ No newline at end of file diff --git a/spaces/OpenMotionLab/MotionGPT/pyrender/examples/example.py b/spaces/OpenMotionLab/MotionGPT/pyrender/examples/example.py deleted file mode 100644 index 599a4850a5899cdeb1a76db1c5cf1c91c263cd41..0000000000000000000000000000000000000000 --- a/spaces/OpenMotionLab/MotionGPT/pyrender/examples/example.py +++ /dev/null @@ -1,157 +0,0 @@ -"""Examples of using pyrender for viewing and offscreen rendering. -""" -import pyglet -pyglet.options['shadow_window'] = False -import os -import numpy as np -import trimesh - -from pyrender import PerspectiveCamera,\ - DirectionalLight, SpotLight, PointLight,\ - MetallicRoughnessMaterial,\ - Primitive, Mesh, Node, Scene,\ - Viewer, OffscreenRenderer, RenderFlags - -#============================================================================== -# Mesh creation -#============================================================================== - -#------------------------------------------------------------------------------ -# Creating textured meshes from trimeshes -#------------------------------------------------------------------------------ - -# Fuze trimesh -fuze_trimesh = trimesh.load('./models/fuze.obj') -fuze_mesh = Mesh.from_trimesh(fuze_trimesh) - -# Drill trimesh -drill_trimesh = trimesh.load('./models/drill.obj') -drill_mesh = Mesh.from_trimesh(drill_trimesh) -drill_pose = np.eye(4) -drill_pose[0,3] = 0.1 -drill_pose[2,3] = -np.min(drill_trimesh.vertices[:,2]) - -# Wood trimesh -wood_trimesh = trimesh.load('./models/wood.obj') -wood_mesh = Mesh.from_trimesh(wood_trimesh) - -# Water bottle trimesh -bottle_gltf = trimesh.load('./models/WaterBottle.glb') -bottle_trimesh = bottle_gltf.geometry[list(bottle_gltf.geometry.keys())[0]] -bottle_mesh = Mesh.from_trimesh(bottle_trimesh) -bottle_pose = np.array([ - [1.0, 0.0, 0.0, 0.1], - [0.0, 0.0, -1.0, -0.16], - [0.0, 1.0, 0.0, 0.13], - [0.0, 0.0, 0.0, 1.0], -]) - -#------------------------------------------------------------------------------ -# Creating meshes with per-vertex colors -#------------------------------------------------------------------------------ -boxv_trimesh = trimesh.creation.box(extents=0.1*np.ones(3)) -boxv_vertex_colors = np.random.uniform(size=(boxv_trimesh.vertices.shape)) -boxv_trimesh.visual.vertex_colors = boxv_vertex_colors -boxv_mesh = Mesh.from_trimesh(boxv_trimesh, smooth=False) - -#------------------------------------------------------------------------------ -# Creating meshes with per-face colors -#------------------------------------------------------------------------------ -boxf_trimesh = trimesh.creation.box(extents=0.1*np.ones(3)) -boxf_face_colors = np.random.uniform(size=boxf_trimesh.faces.shape) -boxf_trimesh.visual.face_colors = boxf_face_colors -boxf_mesh = Mesh.from_trimesh(boxf_trimesh, smooth=False) - -#------------------------------------------------------------------------------ -# Creating meshes from point clouds 
-#------------------------------------------------------------------------------ -points = trimesh.creation.icosphere(radius=0.05).vertices -point_colors = np.random.uniform(size=points.shape) -points_mesh = Mesh.from_points(points, colors=point_colors) - -#============================================================================== -# Light creation -#============================================================================== - -direc_l = DirectionalLight(color=np.ones(3), intensity=1.0) -spot_l = SpotLight(color=np.ones(3), intensity=10.0, - innerConeAngle=np.pi/16, outerConeAngle=np.pi/6) -point_l = PointLight(color=np.ones(3), intensity=10.0) - -#============================================================================== -# Camera creation -#============================================================================== - -cam = PerspectiveCamera(yfov=(np.pi / 3.0)) -cam_pose = np.array([ - [0.0, -np.sqrt(2)/2, np.sqrt(2)/2, 0.5], - [1.0, 0.0, 0.0, 0.0], - [0.0, np.sqrt(2)/2, np.sqrt(2)/2, 0.4], - [0.0, 0.0, 0.0, 1.0] -]) - -#============================================================================== -# Scene creation -#============================================================================== - -scene = Scene(ambient_light=np.array([0.02, 0.02, 0.02, 1.0])) - -#============================================================================== -# Adding objects to the scene -#============================================================================== - -#------------------------------------------------------------------------------ -# By manually creating nodes -#------------------------------------------------------------------------------ -fuze_node = Node(mesh=fuze_mesh, translation=np.array([0.1, 0.15, -np.min(fuze_trimesh.vertices[:,2])])) -scene.add_node(fuze_node) -boxv_node = Node(mesh=boxv_mesh, translation=np.array([-0.1, 0.10, 0.05])) -scene.add_node(boxv_node) -boxf_node = Node(mesh=boxf_mesh, translation=np.array([-0.1, -0.10, 0.05])) -scene.add_node(boxf_node) - -#------------------------------------------------------------------------------ -# By using the add() utility function -#------------------------------------------------------------------------------ -drill_node = scene.add(drill_mesh, pose=drill_pose) -bottle_node = scene.add(bottle_mesh, pose=bottle_pose) -wood_node = scene.add(wood_mesh) -direc_l_node = scene.add(direc_l, pose=cam_pose) -spot_l_node = scene.add(spot_l, pose=cam_pose) - -#============================================================================== -# Using the viewer with a default camera -#============================================================================== - -v = Viewer(scene, shadows=True) - -#============================================================================== -# Using the viewer with a pre-specified camera -#============================================================================== -cam_node = scene.add(cam, pose=cam_pose) -v = Viewer(scene, central_node=drill_node) - -#============================================================================== -# Rendering offscreen from that camera -#============================================================================== - -r = OffscreenRenderer(viewport_width=640*2, viewport_height=480*2) -color, depth = r.render(scene) - -import matplotlib.pyplot as plt -plt.figure() -plt.imshow(color) -plt.show() - -#============================================================================== -# Segmask rendering 
-#============================================================================== - -nm = {node: 20*(i + 1) for i, node in enumerate(scene.mesh_nodes)} -seg = r.render(scene, RenderFlags.SEG, nm)[0] -plt.figure() -plt.imshow(seg) -plt.show() - -r.delete() - diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/bricks/plugin.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/bricks/plugin.py deleted file mode 100644 index 07c010d4053174dd41107aa654ea67e82b46a25c..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/bricks/plugin.py +++ /dev/null @@ -1,88 +0,0 @@ -import inspect -import platform - -from .registry import PLUGIN_LAYERS - -if platform.system() == 'Windows': - import regex as re -else: - import re - - -def infer_abbr(class_type): - """Infer abbreviation from the class name. - - This method will infer the abbreviation to map class types to - abbreviations. - - Rule 1: If the class has the property "abbr", return the property. - Rule 2: Otherwise, the abbreviation falls back to snake case of class - name, e.g. the abbreviation of ``FancyBlock`` will be ``fancy_block``. - - Args: - class_type (type): The norm layer type. - - Returns: - str: The inferred abbreviation. - """ - - def camel2snack(word): - """Convert camel case word into snack case. - - Modified from `inflection lib - `_. - - Example:: - - >>> camel2snack("FancyBlock") - 'fancy_block' - """ - - word = re.sub(r'([A-Z]+)([A-Z][a-z])', r'\1_\2', word) - word = re.sub(r'([a-z\d])([A-Z])', r'\1_\2', word) - word = word.replace('-', '_') - return word.lower() - - if not inspect.isclass(class_type): - raise TypeError( - f'class_type must be a type, but got {type(class_type)}') - if hasattr(class_type, '_abbr_'): - return class_type._abbr_ - else: - return camel2snack(class_type.__name__) - - -def build_plugin_layer(cfg, postfix='', **kwargs): - """Build plugin layer. - - Args: - cfg (None or dict): cfg should contain: - type (str): identify plugin layer type. - layer args: args needed to instantiate a plugin layer. - postfix (int, str): appended into norm abbreviation to - create named layer. Default: ''. 
- - Returns: - tuple[str, nn.Module]: - name (str): abbreviation + postfix - layer (nn.Module): created plugin layer - """ - if not isinstance(cfg, dict): - raise TypeError('cfg must be a dict') - if 'type' not in cfg: - raise KeyError('the cfg dict must contain the key "type"') - cfg_ = cfg.copy() - - layer_type = cfg_.pop('type') - if layer_type not in PLUGIN_LAYERS: - raise KeyError(f'Unrecognized plugin type {layer_type}') - - plugin_layer = PLUGIN_LAYERS.get(layer_type) - abbr = infer_abbr(plugin_layer) - - assert isinstance(postfix, (int, str)) - name = abbr + str(postfix) - - layer = plugin_layer(**kwargs, **cfg_) - - return name, layer diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/backbones/cgnet.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/backbones/cgnet.py deleted file mode 100644 index f8bca442c8f18179f217e40c298fb5ef39df77c4..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/backbones/cgnet.py +++ /dev/null @@ -1,367 +0,0 @@ -import torch -import torch.nn as nn -import torch.utils.checkpoint as cp -from annotator.uniformer.mmcv.cnn import (ConvModule, build_conv_layer, build_norm_layer, - constant_init, kaiming_init) -from annotator.uniformer.mmcv.runner import load_checkpoint -from annotator.uniformer.mmcv.utils.parrots_wrapper import _BatchNorm - -from annotator.uniformer.mmseg.utils import get_root_logger -from ..builder import BACKBONES - - -class GlobalContextExtractor(nn.Module): - """Global Context Extractor for CGNet. - - This class is employed to refine the joint feature of both local feature - and surrounding context. - - Args: - channel (int): Number of input feature channels. - reduction (int): Reductions for global context extractor. Default: 16. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - """ - - def __init__(self, channel, reduction=16, with_cp=False): - super(GlobalContextExtractor, self).__init__() - self.channel = channel - self.reduction = reduction - assert reduction >= 1 and channel >= reduction - self.with_cp = with_cp - self.avg_pool = nn.AdaptiveAvgPool2d(1) - self.fc = nn.Sequential( - nn.Linear(channel, channel // reduction), nn.ReLU(inplace=True), - nn.Linear(channel // reduction, channel), nn.Sigmoid()) - - def forward(self, x): - - def _inner_forward(x): - num_batch, num_channel = x.size()[:2] - y = self.avg_pool(x).view(num_batch, num_channel) - y = self.fc(y).view(num_batch, num_channel, 1, 1) - return x * y - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - return out - - -class ContextGuidedBlock(nn.Module): - """Context Guided Block for CGNet. - - This class consists of four components: local feature extractor, - surrounding feature extractor, joint feature extractor and global - context extractor. - - Args: - in_channels (int): Number of input feature channels. - out_channels (int): Number of output feature channels. - dilation (int): Dilation rate for surrounding context extractor. - Default: 2. - reduction (int): Reduction for global context extractor. Default: 16. - skip_connect (bool): Add input to output or not. Default: True. - downsample (bool): Downsample the input to 1/2 or not. Default: False. - conv_cfg (dict): Config dict for convolution layer. - Default: None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. 
- Default: dict(type='BN', requires_grad=True). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='PReLU'). - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - """ - - def __init__(self, - in_channels, - out_channels, - dilation=2, - reduction=16, - skip_connect=True, - downsample=False, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='PReLU'), - with_cp=False): - super(ContextGuidedBlock, self).__init__() - self.with_cp = with_cp - self.downsample = downsample - - channels = out_channels if downsample else out_channels // 2 - if 'type' in act_cfg and act_cfg['type'] == 'PReLU': - act_cfg['num_parameters'] = channels - kernel_size = 3 if downsample else 1 - stride = 2 if downsample else 1 - padding = (kernel_size - 1) // 2 - - self.conv1x1 = ConvModule( - in_channels, - channels, - kernel_size, - stride, - padding, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - self.f_loc = build_conv_layer( - conv_cfg, - channels, - channels, - kernel_size=3, - padding=1, - groups=channels, - bias=False) - self.f_sur = build_conv_layer( - conv_cfg, - channels, - channels, - kernel_size=3, - padding=dilation, - groups=channels, - dilation=dilation, - bias=False) - - self.bn = build_norm_layer(norm_cfg, 2 * channels)[1] - self.activate = nn.PReLU(2 * channels) - - if downsample: - self.bottleneck = build_conv_layer( - conv_cfg, - 2 * channels, - out_channels, - kernel_size=1, - bias=False) - - self.skip_connect = skip_connect and not downsample - self.f_glo = GlobalContextExtractor(out_channels, reduction, with_cp) - - def forward(self, x): - - def _inner_forward(x): - out = self.conv1x1(x) - loc = self.f_loc(out) - sur = self.f_sur(out) - - joi_feat = torch.cat([loc, sur], 1) # the joint feature - joi_feat = self.bn(joi_feat) - joi_feat = self.activate(joi_feat) - if self.downsample: - joi_feat = self.bottleneck(joi_feat) # channel = out_channels - # f_glo is employed to refine the joint feature - out = self.f_glo(joi_feat) - - if self.skip_connect: - return x + out - else: - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - return out - - -class InputInjection(nn.Module): - """Downsampling module for CGNet.""" - - def __init__(self, num_downsampling): - super(InputInjection, self).__init__() - self.pool = nn.ModuleList() - for i in range(num_downsampling): - self.pool.append(nn.AvgPool2d(3, stride=2, padding=1)) - - def forward(self, x): - for pool in self.pool: - x = pool(x) - return x - - -@BACKBONES.register_module() -class CGNet(nn.Module): - """CGNet backbone. - - A Light-weight Context Guided Network for Semantic Segmentation - arXiv: https://arxiv.org/abs/1811.08201 - - Args: - in_channels (int): Number of input image channels. Normally 3. - num_channels (tuple[int]): Numbers of feature channels at each stages. - Default: (32, 64, 128). - num_blocks (tuple[int]): Numbers of CG blocks at stage 1 and stage 2. - Default: (3, 21). - dilations (tuple[int]): Dilation rate for surrounding context - extractors at stage 1 and stage 2. Default: (2, 4). - reductions (tuple[int]): Reductions for global context extractors at - stage 1 and stage 2. Default: (8, 16). - conv_cfg (dict): Config dict for convolution layer. - Default: None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN', requires_grad=True). 
- act_cfg (dict): Config dict for activation layer. - Default: dict(type='PReLU'). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - """ - - def __init__(self, - in_channels=3, - num_channels=(32, 64, 128), - num_blocks=(3, 21), - dilations=(2, 4), - reductions=(8, 16), - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='PReLU'), - norm_eval=False, - with_cp=False): - - super(CGNet, self).__init__() - self.in_channels = in_channels - self.num_channels = num_channels - assert isinstance(self.num_channels, tuple) and len( - self.num_channels) == 3 - self.num_blocks = num_blocks - assert isinstance(self.num_blocks, tuple) and len(self.num_blocks) == 2 - self.dilations = dilations - assert isinstance(self.dilations, tuple) and len(self.dilations) == 2 - self.reductions = reductions - assert isinstance(self.reductions, tuple) and len(self.reductions) == 2 - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - if 'type' in self.act_cfg and self.act_cfg['type'] == 'PReLU': - self.act_cfg['num_parameters'] = num_channels[0] - self.norm_eval = norm_eval - self.with_cp = with_cp - - cur_channels = in_channels - self.stem = nn.ModuleList() - for i in range(3): - self.stem.append( - ConvModule( - cur_channels, - num_channels[0], - 3, - 2 if i == 0 else 1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - cur_channels = num_channels[0] - - self.inject_2x = InputInjection(1) # down-sample for Input, factor=2 - self.inject_4x = InputInjection(2) # down-sample for Input, factor=4 - - cur_channels += in_channels - self.norm_prelu_0 = nn.Sequential( - build_norm_layer(norm_cfg, cur_channels)[1], - nn.PReLU(cur_channels)) - - # stage 1 - self.level1 = nn.ModuleList() - for i in range(num_blocks[0]): - self.level1.append( - ContextGuidedBlock( - cur_channels if i == 0 else num_channels[1], - num_channels[1], - dilations[0], - reductions[0], - downsample=(i == 0), - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - with_cp=with_cp)) # CG block - - cur_channels = 2 * num_channels[1] + in_channels - self.norm_prelu_1 = nn.Sequential( - build_norm_layer(norm_cfg, cur_channels)[1], - nn.PReLU(cur_channels)) - - # stage 2 - self.level2 = nn.ModuleList() - for i in range(num_blocks[1]): - self.level2.append( - ContextGuidedBlock( - cur_channels if i == 0 else num_channels[2], - num_channels[2], - dilations[1], - reductions[1], - downsample=(i == 0), - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - with_cp=with_cp)) # CG block - - cur_channels = 2 * num_channels[2] - self.norm_prelu_2 = nn.Sequential( - build_norm_layer(norm_cfg, cur_channels)[1], - nn.PReLU(cur_channels)) - - def forward(self, x): - output = [] - - # stage 0 - inp_2x = self.inject_2x(x) - inp_4x = self.inject_4x(x) - for layer in self.stem: - x = layer(x) - x = self.norm_prelu_0(torch.cat([x, inp_2x], 1)) - output.append(x) - - # stage 1 - for i, layer in enumerate(self.level1): - x = layer(x) - if i == 0: - down1 = x - x = self.norm_prelu_1(torch.cat([x, down1, inp_4x], 1)) - output.append(x) - - # stage 2 - for i, layer in enumerate(self.level2): - x = layer(x) - if i == 0: - down2 = x - x = self.norm_prelu_2(torch.cat([down2, x], 1)) - output.append(x) - - return output - - 
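    # With the default config (in_channels=3, num_channels=(32, 64, 128)),
    # forward() returns three feature maps at strides 2, 4 and 8 carrying
    # 35, 131 and 256 channels respectively: stages 0 and 1 concatenate a
    # down-sampled copy of the raw input (inject_2x / inject_4x), and each
    # CG stage also concatenates the output of its first, down-sampling
    # block (down1 / down2) before the BatchNorm + PReLU fusion.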
def init_weights(self, pretrained=None): - """Initialize the weights in backbone. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, (nn.Conv2d, nn.Linear)): - kaiming_init(m) - elif isinstance(m, (_BatchNorm, nn.GroupNorm)): - constant_init(m, 1) - elif isinstance(m, nn.PReLU): - constant_init(m, 0) - else: - raise TypeError('pretrained must be a str or None') - - def train(self, mode=True): - """Convert the model into training mode will keeping the normalization - layer freezed.""" - super(CGNet, self).train(mode) - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() diff --git a/spaces/Preetesh/VideoSummaryfromYouTubeVideo/summarize.py b/spaces/Preetesh/VideoSummaryfromYouTubeVideo/summarize.py deleted file mode 100644 index 0053dde4348f24cc152a60c4d20f201e3b1f5482..0000000000000000000000000000000000000000 --- a/spaces/Preetesh/VideoSummaryfromYouTubeVideo/summarize.py +++ /dev/null @@ -1,43 +0,0 @@ -import traceback -import sys - -from youtube_transcript_api import YouTubeTranscriptApi -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM - -def Summarizer(link, model): - - video_id = link.split("=")[1] - - try: - transcript = YouTubeTranscriptApi.get_transcript(video_id) - FinalTranscript = ' '.join([i['text'] for i in transcript]) - - if model == "Pegasus": - checkpoint = "google/pegasus-large" - elif model == "mT5": - checkpoint = "csebuetnlp/mT5_multilingual_XLSum" - elif model == "BART": - checkpoint = "sshleifer/distilbart-cnn-12-6" - - tokenizer = AutoTokenizer.from_pretrained(checkpoint) - model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) - - - inputs = tokenizer(FinalTranscript, - max_length=1024, - truncation=True, - return_tensors="pt") - - summary_ids = model.generate(inputs["input_ids"]) - summary = tokenizer.batch_decode(summary_ids, - skip_special_tokens=True, - clean_up_tokenization_spaces=False) - - - return summary[0] - - - except Exception: - print(traceback.format_exc()) - # or - print(sys.exc_info()[2]) \ No newline at end of file diff --git a/spaces/RamAnanth1/T2I-Adapter/gradio_seg.py b/spaces/RamAnanth1/T2I-Adapter/gradio_seg.py deleted file mode 100644 index 3eede0ae1b1dbd55b98cc222bc6578751cdadce2..0000000000000000000000000000000000000000 --- a/spaces/RamAnanth1/T2I-Adapter/gradio_seg.py +++ /dev/null @@ -1,32 +0,0 @@ -import gradio as gr - -def create_demo(process): - block = gr.Blocks().queue() - with block: - with gr.Row(): - with gr.Column(): - input_img = gr.Image(source='upload', type="numpy") - prompt = gr.Textbox(label="Prompt") - neg_prompt = gr.Textbox(label="Negative Prompt", - value='ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, extra limbs, disfigured, deformed, body out of frame, bad anatomy, watermark, signature, cut off, low contrast, underexposed, overexposed, bad art, beginner, amateur, distorted face') - run_button = gr.Button(label="Run") - with gr.Accordion("Advanced options", open=False): - con_strength = gr.Slider(label="Controling Strength (The guidance strength of the sketch to the result)", minimum=0, maximum=1, value=0.4, step=0.1) - scale = gr.Slider(label="Guidance Scale (Classifier free guidance)", minimum=0.1, maximum=30.0, value=7.5, 
step=0.1) - fix_sample = gr.inputs.Radio(['True', 'False'], type="value", default='False', label='Fix Sampling\n (Fix the random seed)') - base_model = gr.inputs.Radio(['sd-v1-4.ckpt', 'anything-v4.0-pruned.ckpt'], type="value", default='sd-v1-4.ckpt', label='The base model you want to use') - with gr.Column(): - result = gr.Gallery(label='Output', show_label=False, elem_id="gallery").style(grid=2, height='auto') - ips = [input_img,prompt, neg_prompt, fix_sample, scale, con_strength, base_model] - run_button.click(fn=process, inputs=ips, outputs=[result]) - - examples_list = [["motor.png", "A black Honda motorcycle parked in front of a garage", - "ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, extra limbs, disfigured, deformed, body out of frame, bad anatomy, watermark, signature, cut off, low contrast, underexposed, overexposed, bad art, beginner, amateur, distorted face", - 'True', - 7.5, - 0.4, - 'anything-v4.0-pruned.ckpt']] - - examples = gr.Examples(examples=examples_list,inputs = [input_img, prompt,neg_prompt, fix_sample, scale, con_strength,base_model], outputs = [result], cache_examples = True, fn = process) - - return block \ No newline at end of file diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/commands/help.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/commands/help.py deleted file mode 100644 index 62066318b74dcc5c32bcd24b9493fb34d1ce52d7..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/commands/help.py +++ /dev/null @@ -1,41 +0,0 @@ -from optparse import Values -from typing import List - -from pip._internal.cli.base_command import Command -from pip._internal.cli.status_codes import SUCCESS -from pip._internal.exceptions import CommandError - - -class HelpCommand(Command): - """Show help for commands""" - - usage = """ - %prog """ - ignore_require_venv = True - - def run(self, options: Values, args: List[str]) -> int: - from pip._internal.commands import ( - commands_dict, - create_command, - get_similar_commands, - ) - - try: - # 'pip help' with no args is handled by pip.__init__.parseopt() - cmd_name = args[0] # the command we need help for - except IndexError: - return SUCCESS - - if cmd_name not in commands_dict: - guess = get_similar_commands(cmd_name) - - msg = [f'unknown command "{cmd_name}"'] - if guess: - msg.append(f'maybe you meant "{guess}"') - - raise CommandError(" - ".join(msg)) - - command = create_command(cmd_name) - command.parser.print_help() - - return SUCCESS diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/models/installation_report.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/models/installation_report.py deleted file mode 100644 index 965f09523719a61439694fb5e583535e3d1771c1..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/models/installation_report.py +++ /dev/null @@ -1,53 +0,0 @@ -from typing import Any, Dict, Sequence - -from pip._vendor.packaging.markers import default_environment - -from pip import __version__ -from pip._internal.req.req_install import InstallRequirement - - -class InstallationReport: - def __init__(self, install_requirements: Sequence[InstallRequirement]): - self._install_requirements = install_requirements - - @classmethod - def _install_req_to_dict(cls, ireq: InstallRequirement) -> Dict[str, Any]: - assert 
ireq.download_info, f"No download_info for {ireq}" - res = { - # PEP 610 json for the download URL. download_info.archive_info.hash may - # be absent when the requirement was installed from the wheel cache - # and the cache entry was populated by an older pip version that did not - # record origin.json. - "download_info": ireq.download_info.to_dict(), - # is_direct is true if the requirement was a direct URL reference (which - # includes editable requirements), and false if the requirement was - # downloaded from a PEP 503 index or --find-links. - "is_direct": bool(ireq.original_link), - # requested is true if the requirement was specified by the user (aka - # top level requirement), and false if it was installed as a dependency of a - # requirement. https://peps.python.org/pep-0376/#requested - "requested": ireq.user_supplied, - # PEP 566 json encoding for metadata - # https://www.python.org/dev/peps/pep-0566/#json-compatible-metadata - "metadata": ireq.get_dist().metadata_dict, - } - if ireq.user_supplied and ireq.extras: - # For top level requirements, the list of requested extras, if any. - res["requested_extras"] = list(sorted(ireq.extras)) - return res - - def to_dict(self) -> Dict[str, Any]: - return { - "version": "0", - "pip_version": __version__, - "install": [ - self._install_req_to_dict(ireq) for ireq in self._install_requirements - ], - # https://peps.python.org/pep-0508/#environment-markers - # TODO: currently, the resolver uses the default environment to evaluate - # environment markers, so that is what we report here. In the future, it - # should also take into account options such as --python-version or - # --platform, perhaps under the form of an environment_override field? - # https://github.com/pypa/pip/issues/11198 - "environment": default_environment(), - } diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/operations/install/legacy.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/operations/install/legacy.py deleted file mode 100644 index 290967dd6d57adef52a4999e92aafceac5760cd7..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/operations/install/legacy.py +++ /dev/null @@ -1,120 +0,0 @@ -"""Legacy installation process, i.e. `setup.py install`. -""" - -import logging -import os -from typing import List, Optional, Sequence - -from pip._internal.build_env import BuildEnvironment -from pip._internal.exceptions import InstallationError, LegacyInstallFailure -from pip._internal.locations.base import change_root -from pip._internal.models.scheme import Scheme -from pip._internal.utils.misc import ensure_dir -from pip._internal.utils.setuptools_build import make_setuptools_install_args -from pip._internal.utils.subprocess import runner_with_spinner_message -from pip._internal.utils.temp_dir import TempDirectory - -logger = logging.getLogger(__name__) - - -def write_installed_files_from_setuptools_record( - record_lines: List[str], - root: Optional[str], - req_description: str, -) -> None: - def prepend_root(path: str) -> str: - if root is None or not os.path.isabs(path): - return path - else: - return change_root(root, path) - - for line in record_lines: - directory = os.path.dirname(line) - if directory.endswith(".egg-info"): - egg_info_dir = prepend_root(directory) - break - else: - message = ( - "{} did not indicate that it installed an " - ".egg-info directory. Only setup.py projects " - "generating .egg-info directories are supported." 
- ).format(req_description) - raise InstallationError(message) - - new_lines = [] - for line in record_lines: - filename = line.strip() - if os.path.isdir(filename): - filename += os.path.sep - new_lines.append(os.path.relpath(prepend_root(filename), egg_info_dir)) - new_lines.sort() - ensure_dir(egg_info_dir) - inst_files_path = os.path.join(egg_info_dir, "installed-files.txt") - with open(inst_files_path, "w") as f: - f.write("\n".join(new_lines) + "\n") - - -def install( - install_options: List[str], - global_options: Sequence[str], - root: Optional[str], - home: Optional[str], - prefix: Optional[str], - use_user_site: bool, - pycompile: bool, - scheme: Scheme, - setup_py_path: str, - isolated: bool, - req_name: str, - build_env: BuildEnvironment, - unpacked_source_directory: str, - req_description: str, -) -> bool: - - header_dir = scheme.headers - - with TempDirectory(kind="record") as temp_dir: - try: - record_filename = os.path.join(temp_dir.path, "install-record.txt") - install_args = make_setuptools_install_args( - setup_py_path, - global_options=global_options, - install_options=install_options, - record_filename=record_filename, - root=root, - prefix=prefix, - header_dir=header_dir, - home=home, - use_user_site=use_user_site, - no_user_config=isolated, - pycompile=pycompile, - ) - - runner = runner_with_spinner_message( - f"Running setup.py install for {req_name}" - ) - with build_env: - runner( - cmd=install_args, - cwd=unpacked_source_directory, - ) - - if not os.path.exists(record_filename): - logger.debug("Record file %s not found", record_filename) - # Signal to the caller that we didn't install the new package - return False - - except Exception as e: - # Signal to the caller that we didn't install the new package - raise LegacyInstallFailure(package_details=req_name) from e - - # At this point, we have successfully installed the requirement. - - # We intentionally do not use any encoding to read the file because - # setuptools writes the file using distutils.file_util.write_file, - # which does not specify an encoding. 
- with open(record_filename) as f: - record_lines = f.read().splitlines() - - write_installed_files_from_setuptools_record(record_lines, root, req_description) - return True diff --git a/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/train.py b/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/train.py deleted file mode 100644 index f1aeb79f630932b539500544d4249b1237d06605..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/train.py +++ /dev/null @@ -1,159 +0,0 @@ -import math -import argparse -import pprint -from distutils.util import strtobool -from pathlib import Path -from loguru import logger as loguru_logger - -import pytorch_lightning as pl -from pytorch_lightning.utilities import rank_zero_only -from pytorch_lightning.loggers import TensorBoardLogger -from pytorch_lightning.callbacks import ModelCheckpoint, LearningRateMonitor -from pytorch_lightning.plugins import DDPPlugin - -from src.config.default import get_cfg_defaults -from src.utils.misc import get_rank_zero_only_logger, setup_gpus -from src.utils.profiler import build_profiler -from src.lightning.data import MultiSceneDataModule -from src.lightning.lightning_aspanformer import PL_ASpanFormer - -loguru_logger = get_rank_zero_only_logger(loguru_logger) - - -def parse_args(): - def str2bool(v): - return v.lower() in ("true", "1") - - # init a costum parser which will be added into pl.Trainer parser - # check documentation: https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html#trainer-flags - parser = argparse.ArgumentParser( - formatter_class=argparse.ArgumentDefaultsHelpFormatter - ) - parser.add_argument("data_cfg_path", type=str, help="data config path") - parser.add_argument("main_cfg_path", type=str, help="main config path") - parser.add_argument("--exp_name", type=str, default="default_exp_name") - parser.add_argument("--batch_size", type=int, default=4, help="batch_size per gpu") - parser.add_argument("--num_workers", type=int, default=4) - parser.add_argument( - "--pin_memory", - type=lambda x: bool(strtobool(x)), - nargs="?", - default=True, - help="whether loading data to pinned memory or not", - ) - parser.add_argument( - "--ckpt_path", - type=str, - default=None, - help="pretrained checkpoint path, helpful for using a pre-trained coarse-only ASpanFormer", - ) - parser.add_argument( - "--disable_ckpt", - action="store_true", - help="disable checkpoint saving (useful for debugging).", - ) - parser.add_argument( - "--profiler_name", - type=str, - default=None, - help="options: [inference, pytorch], or leave it unset", - ) - parser.add_argument( - "--parallel_load_data", - action="store_true", - help="load datasets in with multiple processes.", - ) - parser.add_argument( - "--mode", - type=str, - default="vanilla", - help="pretrained checkpoint path, helpful for using a pre-trained coarse-only ASpanFormer", - ) - parser.add_argument( - "--ini", - type=str2bool, - default=False, - help="pretrained checkpoint path, helpful for using a pre-trained coarse-only ASpanFormer", - ) - - parser = pl.Trainer.add_argparse_args(parser) - return parser.parse_args() - - -def main(): - # parse arguments - args = parse_args() - rank_zero_only(pprint.pprint)(vars(args)) - - # init default-cfg and merge it with the main- and data-cfg - config = get_cfg_defaults() - config.merge_from_file(args.main_cfg_path) - config.merge_from_file(args.data_cfg_path) - pl.seed_everything(config.TRAINER.SEED) # reproducibility - # TODO: Use different seeds 
for each dataloader workers - # This is needed for data augmentation - - # scale lr and warmup-step automatically - args.gpus = _n_gpus = setup_gpus(args.gpus) - config.TRAINER.WORLD_SIZE = _n_gpus * args.num_nodes - config.TRAINER.TRUE_BATCH_SIZE = config.TRAINER.WORLD_SIZE * args.batch_size - _scaling = config.TRAINER.TRUE_BATCH_SIZE / config.TRAINER.CANONICAL_BS - config.TRAINER.SCALING = _scaling - config.TRAINER.TRUE_LR = config.TRAINER.CANONICAL_LR * _scaling - config.TRAINER.WARMUP_STEP = math.floor(config.TRAINER.WARMUP_STEP / _scaling) - - # lightning module - profiler = build_profiler(args.profiler_name) - model = PL_ASpanFormer(config, pretrained_ckpt=args.ckpt_path, profiler=profiler) - loguru_logger.info(f"ASpanFormer LightningModule initialized!") - - # lightning data - data_module = MultiSceneDataModule(args, config) - loguru_logger.info(f"ASpanFormer DataModule initialized!") - - # TensorBoard Logger - logger = TensorBoardLogger( - save_dir="logs/tb_logs", name=args.exp_name, default_hp_metric=False - ) - ckpt_dir = Path(logger.log_dir) / "checkpoints" - - # Callbacks - # TODO: update ModelCheckpoint to monitor multiple metrics - ckpt_callback = ModelCheckpoint( - monitor="auc@10", - verbose=True, - save_top_k=5, - mode="max", - save_last=True, - dirpath=str(ckpt_dir), - filename="{epoch}-{auc@5:.3f}-{auc@10:.3f}-{auc@20:.3f}", - ) - lr_monitor = LearningRateMonitor(logging_interval="step") - callbacks = [lr_monitor] - if not args.disable_ckpt: - callbacks.append(ckpt_callback) - - # Lightning Trainer - trainer = pl.Trainer.from_argparse_args( - args, - plugins=DDPPlugin( - find_unused_parameters=False, - num_nodes=args.num_nodes, - sync_batchnorm=config.TRAINER.WORLD_SIZE > 0, - ), - gradient_clip_val=config.TRAINER.GRADIENT_CLIPPING, - callbacks=callbacks, - logger=logger, - sync_batchnorm=config.TRAINER.WORLD_SIZE > 0, - replace_sampler_ddp=False, # use custom sampler - reload_dataloaders_every_epoch=False, # avoid repeated samples! 
- weights_summary="full", - profiler=profiler, - ) - loguru_logger.info(f"Trainer initialized!") - loguru_logger.info(f"Start training!") - trainer.fit(model, datamodule=data_module) - - -if __name__ == "__main__": - main() diff --git a/spaces/Robert001/UniControl-Demo/annotator/inpainting/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/inpainting/__init__.py deleted file mode 100644 index c84462036d7ad95500d7021035d6fa822ccefbef..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/inpainting/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -import numpy as np - -class Inpainter: - def __call__(self, img, height_top_mask, height_down_mask, width_left_mask, width_right_mask): - h = img.shape[0] - w = img.shape[1] - h_top_mask = int(float(h) / 100.0 * float(height_top_mask)) - h_down_mask = int(float(h) / 100.0 * float(height_down_mask)) - - w_left_mask = int(float(w) / 100.0 * float(width_left_mask)) - w_right_mask = int(float(w) / 100.0 * float(width_right_mask)) - - img_new = img - img_new[h_top_mask:h_down_mask, w_left_mask:w_right_mask] = 0 - img_new = img_new.astype('ubyte') - return img_new diff --git a/spaces/SakshiRathi77/SakshiRathi77-Wishper-Hi-Kagglex/README.md b/spaces/SakshiRathi77/SakshiRathi77-Wishper-Hi-Kagglex/README.md deleted file mode 100644 index beb866e8d7535bce1ed6cfc8258f536c958f24d5..0000000000000000000000000000000000000000 --- a/spaces/SakshiRathi77/SakshiRathi77-Wishper-Hi-Kagglex/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: SakshiRathi77 Wishper Hi Kagglex -emoji: ⚡ -colorFrom: purple -colorTo: blue -sdk: gradio -sdk_version: 3.47.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Sakukaze/VITS-Umamusume-voice-synthesizer/text/shanghainese.py b/spaces/Sakukaze/VITS-Umamusume-voice-synthesizer/text/shanghainese.py deleted file mode 100644 index cb29c24a08d2e406e8399cf7bc9fe5cb43cb9c61..0000000000000000000000000000000000000000 --- a/spaces/Sakukaze/VITS-Umamusume-voice-synthesizer/text/shanghainese.py +++ /dev/null @@ -1,64 +0,0 @@ -import re -import cn2an -import opencc - - -converter = opencc.OpenCC('zaonhe') - -# List of (Latin alphabet, ipa) pairs: -_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('A', 'ᴇ'), - ('B', 'bi'), - ('C', 'si'), - ('D', 'di'), - ('E', 'i'), - ('F', 'ᴇf'), - ('G', 'dʑi'), - ('H', 'ᴇtɕʰ'), - ('I', 'ᴀi'), - ('J', 'dʑᴇ'), - ('K', 'kʰᴇ'), - ('L', 'ᴇl'), - ('M', 'ᴇm'), - ('N', 'ᴇn'), - ('O', 'o'), - ('P', 'pʰi'), - ('Q', 'kʰiu'), - ('R', 'ᴀl'), - ('S', 'ᴇs'), - ('T', 'tʰi'), - ('U', 'ɦiu'), - ('V', 'vi'), - ('W', 'dᴀbɤliu'), - ('X', 'ᴇks'), - ('Y', 'uᴀi'), - ('Z', 'zᴇ') -]] - - -def _number_to_shanghainese(num): - num = cn2an.an2cn(num).replace('一十','十').replace('二十', '廿').replace('二', '两') - return re.sub(r'((?:^|[^三四五六七八九])十|廿)两', r'\1二', num) - - -def number_to_shanghainese(text): - return re.sub(r'\d+(?:\.?\d+)?', lambda x: _number_to_shanghainese(x.group()), text) - - -def latin_to_ipa(text): - for regex, replacement in _latin_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def shanghainese_to_ipa(text): - text = number_to_shanghainese(text.upper()) - text = converter.convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text) - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. 
', text) - text = re.sub(r'\s*？\s*', '? ', text) - text = re.sub(r'\s*！\s*', '! ', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/Saturdays/ClassificationPeripheralBloodCell/README.md b/spaces/Saturdays/ClassificationPeripheralBloodCell/README.md deleted file mode 100644 index 70d920ff1e4e79b28180c51269521e446397d803..0000000000000000000000000000000000000000 --- a/spaces/Saturdays/ClassificationPeripheralBloodCell/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Classification of Peripheral Blood Cells -sdk: streamlit -emoji: 📊 -colorFrom: red -colorTo: blue -app_file: main.py -pinned: false -license: mit ---- \ No newline at end of file diff --git a/spaces/SeViLA/SeViLA/lavis/datasets/download_scripts/download_flickr.py b/spaces/SeViLA/SeViLA/lavis/datasets/download_scripts/download_flickr.py deleted file mode 100644 index 3075f02299110b729ccb0f4b34f7b9cf23046b6c..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/datasets/download_scripts/download_flickr.py +++ /dev/null @@ -1,78 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -import os -from pathlib import Path - -from omegaconf import OmegaConf - -from lavis.common.utils import ( - cleanup_dir, - get_abs_path, - get_cache_path, -) - -import opendatasets as od - - -DATA_URL = "https://www.kaggle.com/datasets/hsankesara/flickr-image-dataset" - -print( - """ - To download the dataset, you need to have a Kaggle account and the associated key. - See https://www.kaggle.com/docs/api to create an account and a new API token. - """ -) - - -def move_directory(src_dir, dst_dir): - """ - Move files from download_path to storage_path - """ - print("Moving to {}".format(dst_dir)) - - os.makedirs(dst_dir, exist_ok=True) - - for file_name in os.listdir(src_dir): - os.rename( - os.path.join(src_dir, file_name), - os.path.join(dst_dir, file_name), - ) - - -if __name__ == "__main__": - - config_path = get_abs_path("configs/datasets/flickr30k/defaults.yaml") - - storage_dir = OmegaConf.load( - config_path - ).datasets.flickr30k.build_info.images.storage - - storage_dir = Path(get_cache_path(storage_dir)) - download_dir = storage_dir.parent / "download" - - if storage_dir.exists(): - print(f"Dataset already exists at {storage_dir}. 
Aborting.") - exit(0) - - os.makedirs(download_dir) - - try: - print("Downloading {} to {}".format(DATA_URL, download_dir)) - od.download(DATA_URL, download_dir) - except Exception as e: - print(e) - # remove download dir if failed - cleanup_dir(download_dir) - exit(1) - - move_directory( - download_dir / "flickr-image-dataset" / "flickr30k_images" / "flickr30k_images", - storage_dir / "flickr30k-images", - ) - - cleanup_dir(download_dir) diff --git a/spaces/Semibit/tts-server/README.md b/spaces/Semibit/tts-server/README.md deleted file mode 100644 index 52464494b6bfefe462c3672fc9f9c556234d60d3..0000000000000000000000000000000000000000 --- a/spaces/Semibit/tts-server/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Tts Server -emoji: 😻 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.38.0 -app_file: app.py -pinned: false -license: gpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ShAnSantosh/Chatbot_Using_Pytorch/app.py b/spaces/ShAnSantosh/Chatbot_Using_Pytorch/app.py deleted file mode 100644 index 92098f43ea52de53289e4facadf73cac49558ed5..0000000000000000000000000000000000000000 --- a/spaces/ShAnSantosh/Chatbot_Using_Pytorch/app.py +++ /dev/null @@ -1,60 +0,0 @@ -import random -import json - -import torch - -from model import NeuralNet -from nltk_utils import bag_of_words, tokenize - -device = torch.device("cpu") - -with open('./intents.json', 'r') as json_data: - intents = json.load(json_data) - -FILE = "./data.pth" -data = torch.load(FILE) - -input_size = data["input_size"] -hidden_size = data["hidden_size"] -output_size = data["output_size"] -all_words = data['all_words'] -tags = data['tags'] -model_state = data["model_state"] - -model = NeuralNet(input_size, hidden_size, output_size).to(device) -model.load_state_dict(model_state) -model.eval() - -def predict(message, history): - history = history or [] - sentence = tokenize(message) - X = bag_of_words(sentence, all_words) - X = X.reshape(1, X.shape[0]) - X = torch.from_numpy(X).to(device) - - output = model(X) - _, predicted = torch.max(output, dim=1) - - tag = tags[predicted.item()] - - probs = torch.softmax(output, dim=1) - prob = probs[0][predicted.item()] - if prob.item() > 0.75: - for intent in intents['intents']: - if tag == intent["tag"]: - reply = [random.choice(intent['responses'])] - else: - reply = ["Sorry I do not understand :-("] - - history.append((message, reply)) - return history, history - -import gradio as gr - -gr.Interface(fn=predict, - theme="default", - css=".footer {display:none !important}", - inputs=["text", "state"], - outputs=["chatbot", "state"], - title="Coffee Shop Bot").launch(share=True) - \ No newline at end of file diff --git a/spaces/Sharathhebbar24/One-stop-for-Open-source-models/llm.py b/spaces/Sharathhebbar24/One-stop-for-Open-source-models/llm.py deleted file mode 100644 index 5e316ed22adf6872647e9a0a9f9e96536e3bb1a2..0000000000000000000000000000000000000000 --- a/spaces/Sharathhebbar24/One-stop-for-Open-source-models/llm.py +++ /dev/null @@ -1,37 +0,0 @@ -import langchain -from langchain import HuggingFaceHub -from langchain.embeddings import HuggingFaceHubEmbeddings -from langchain.document_loaders import PyPDFLoader -from langchain.vectorstores import FAISS -from langchain.chains import ConversationalRetrievalChain -from langchain.chains.question_answering import load_qa_chain - -def llm_conv(filename): - document_loader = PyPDFLoader(filename) - chunks = document_loader.load_and_split() - 
embeddings = HuggingFaceHubEmbeddings() - db = FAISS.from_documents(chunks, embeddings) - return db, chunks - -def similarity(filename, repo_id, model_kwargs, query): - db, chunks = llm_conv(filename) - docs = db.similarity_search(query) - chain = load_qa_chain( - HuggingFaceHub( - repo_id=repo_id, - model_kwargs=model_kwargs - ), - chain_type="stuff" - ) - question = f""" - Answer the question based on the context, if you don't know then output "Out of Context". - Context: \n {chunks[0].page_content} \n - Question: \n {query} \n - Answer: - """ - result = chain.run( - input_documents=docs, - question=question - ) - return result - diff --git a/spaces/SpacesExamples/Fooocus/README.md b/spaces/SpacesExamples/Fooocus/README.md deleted file mode 100644 index 911056b680ce2d10c58ad8c355a984651e0269d4..0000000000000000000000000000000000000000 --- a/spaces/SpacesExamples/Fooocus/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Fooocus -emoji: 🧘 -colorFrom: pink -colorTo: indigo -sdk: docker -pinned: false ---- - -https://github.com/lllyasviel/Fooocus/ \ No newline at end of file diff --git a/spaces/SuYuanS/AudioCraft_Plus/docs/DATASETS.md b/spaces/SuYuanS/AudioCraft_Plus/docs/DATASETS.md deleted file mode 100644 index b0890c03cf732450eb498559638c6b45d50e40c3..0000000000000000000000000000000000000000 --- a/spaces/SuYuanS/AudioCraft_Plus/docs/DATASETS.md +++ /dev/null @@ -1,82 +0,0 @@ -# AudioCraft datasets - -Our dataset manifest files consist in 1-json-per-line files, potentially gzipped, -as `data.jsons` or `data.jsons.gz` files. This JSON contains the path to the audio -file and associated metadata. The manifest files are then provided in the configuration, -as `datasource` sub-configuration. A datasource contains the pointers to the paths of -the manifest files for each AudioCraft stage (or split) along with additional information -(eg. maximum sample rate to use against this dataset). All the datasources are under the -`dset` group config, with a dedicated configuration file for each dataset. - -## Getting started - -### Example - -See the provided example in the directory that provides a manifest to use the example dataset -provided under the [dataset folder](../dataset/example). - -The manifest files are stored in the [egs folder](../egs/example). - -```shell -egs/ - example/data.json.gz -``` - -A datasource is defined in the configuration folder, in the dset group config for this dataset -at [config/dset/audio/example](../config/dset/audio/example.yaml): - -```shell -# @package __global__ - -datasource: - max_sample_rate: 44100 - max_channels: 2 - - train: egs/example - valid: egs/example - evaluate: egs/example - generate: egs/example -``` - -For proper dataset, one should create manifest for each of the splits and specify the correct path -to the given manifest in the datasource for each split. 
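As a rough sketch of what one line of such a manifest holds, the snippet below writes a single-entry `data.jsons.gz` by hand; the `path`, `duration` and `sample_rate` keys and the example file name are illustrative assumptions rather than the definitive schema:

```python
import gzip
import json

# One JSON object per line; the keys below are assumptions for illustration.
record = {
    "path": "dataset/example/some_clip.mp3",  # hypothetical audio file
    "duration": 15.0,                         # length in seconds
    "sample_rate": 44100,
}

with gzip.open("egs/example/data.jsons.gz", "wt") as f:
    f.write(json.dumps(record) + "\n")
```

In practice the `python -m audiocraft.data.audio_dataset` helper described below generates these manifests automatically.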
-
-Then, using a dataset through the configuration can be done by pointing to the
-corresponding dataset configuration:
-```shell
-dset=<dataset_name>  # should match the yaml file name
-
-# for example
-dset=audio/example
-```
-
-### Creating manifest files
-
-Assuming you want to create manifest files to load with AudioCraft's AudioDataset, you can use
-the following command to create new manifest files from a given folder containing audio files:
-
-```shell
-python -m audiocraft.data.audio_dataset <path_to_audio_folder> egs/my_dataset/my_dataset_split/data.jsonl.gz
-
-# For example to generate the manifest for dset=audio/example
-# note: we don't use any split and we don't compress the jsonl file for this dummy example
-python -m audiocraft.data.audio_dataset dataset/example egs/example/data.jsonl
-
-# More info with: python -m audiocraft.data.audio_dataset --help
-```
-
-## Additional information
-
-### MusicDataset and metadata
-
-The MusicDataset is an AudioDataset with additional metadata. The MusicDataset expects
-the additional metadata to be stored in a JSON file that has the same path as the corresponding
-audio file, but with a `.json` extension.
-
-### SoundDataset and metadata
-
-The SoundDataset is an AudioDataset with description metadata. Similarly to the MusicDataset,
-the SoundDataset expects the additional metadata to be stored in a JSON file that has the same
-path as the corresponding audio file, but with a `.json` extension. Additionally, the SoundDataset
-supports an additional parameter pointing to an extra folder `external_metadata_source` containing
-all the JSON metadata files, provided they have the same filename as the audio file.
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/paths.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/paths.py
deleted file mode 100644
index cc6408ca4348e81b6ff8716408886e89fb862a10..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/paths.py
+++ /dev/null
@@ -1,125 +0,0 @@
-"""Find files and directories which IPython uses.
-"""
-import os.path
-import tempfile
-from warnings import warn
-
-import IPython
-from IPython.utils.importstring import import_item
-from IPython.utils.path import (
-    get_home_dir,
-    get_xdg_dir,
-    get_xdg_cache_dir,
-    compress_user,
-    _writable_dir,
-    ensure_dir_exists,
-)
-
-
-def get_ipython_dir() -> str:
-    """Get the IPython directory for this platform and user.
-
-    This uses the logic in `get_home_dir` to find the home directory
-    and then adds .ipython to the end of the path.
-    """
-
-    env = os.environ
-    pjoin = os.path.join
-
-
-    ipdir_def = '.ipython'
-
-    home_dir = get_home_dir()
-    xdg_dir = get_xdg_dir()
-
-    if 'IPYTHON_DIR' in env:
-        warn('The environment variable IPYTHON_DIR is deprecated since IPython 3.0. '
-             'Please use IPYTHONDIR instead.', DeprecationWarning)
-    ipdir = env.get('IPYTHONDIR', env.get('IPYTHON_DIR', None))
-    if ipdir is None:
-        # not set explicitly, use ~/.ipython
-        ipdir = pjoin(home_dir, ipdir_def)
-        if xdg_dir:
-            # Several IPython versions (up to 1.x) defaulted to .config/ipython
-            # on Linux. We have decided to go back to using .ipython everywhere
-            xdg_ipdir = pjoin(xdg_dir, 'ipython')
-
-            if _writable_dir(xdg_ipdir):
-                cu = compress_user
-                if os.path.exists(ipdir):
-                    warn(('Ignoring {0} in favour of {1}. Remove {0} to '
-                          'get rid of this message').format(cu(xdg_ipdir), cu(ipdir)))
-                elif os.path.islink(xdg_ipdir):
-                    warn(('{0} is deprecated. Move link to {1} to '
-                          'get rid of this message').format(cu(xdg_ipdir), cu(ipdir)))
-                else:
-                    ipdir = xdg_ipdir
-
-    ipdir = os.path.normpath(os.path.expanduser(ipdir))
-
-    if os.path.exists(ipdir) and not _writable_dir(ipdir):
-        # ipdir exists, but is not writable
-        warn("IPython dir '{0}' is not a writable location,"
-             " using a temp directory.".format(ipdir))
-        ipdir = tempfile.mkdtemp()
-    elif not os.path.exists(ipdir):
-        parent = os.path.dirname(ipdir)
-        if not _writable_dir(parent):
-            # ipdir does not exist and parent isn't writable
-            warn("IPython parent '{0}' is not a writable location,"
-                 " using a temp directory.".format(parent))
-            ipdir = tempfile.mkdtemp()
-        else:
-            os.makedirs(ipdir, exist_ok=True)
-    assert isinstance(ipdir, str), "all path manipulation should be str(unicode), but are not."
-    return ipdir
-
-
-def get_ipython_cache_dir() -> str:
-    """Get the cache directory; it is created if it does not exist."""
-    xdgdir = get_xdg_cache_dir()
-    if xdgdir is None:
-        return get_ipython_dir()
-    ipdir = os.path.join(xdgdir, "ipython")
-    if not os.path.exists(ipdir) and _writable_dir(xdgdir):
-        ensure_dir_exists(ipdir)
-    elif not _writable_dir(xdgdir):
-        return get_ipython_dir()
-
-    return ipdir
-
-
-def get_ipython_package_dir() -> str:
-    """Get the base directory where IPython itself is installed."""
-    ipdir = os.path.dirname(IPython.__file__)
-    assert isinstance(ipdir, str)
-    return ipdir
-
-
-def get_ipython_module_path(module_str):
-    """Find the path to an IPython module in this version of IPython.
-
-    This will always find the version of the module that is in this importable
-    IPython package. This will always return the path to the ``.py``
-    version of the module.
-    """
-    if module_str == 'IPython':
-        return os.path.join(get_ipython_package_dir(), '__init__.py')
-    mod = import_item(module_str)
-    the_path = mod.__file__.replace('.pyc', '.py')
-    the_path = the_path.replace('.pyo', '.py')
-    return the_path
-
-
-def locate_profile(profile='default'):
-    """Find the path to the folder associated with a given profile.
-
-    I.e. find $IPYTHONDIR/profile_whatever.
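-
-    Raises an IOError (wrapping ProfileDirError) if no matching profile
-    directory can be found.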
- """ - from IPython.core.profiledir import ProfileDir, ProfileDirError - try: - pd = ProfileDir.find_profile_dir_by_name(get_ipython_dir(), profile) - except ProfileDirError as e: - # IOError makes more sense when people are expecting a path - raise IOError("Couldn't find profile %r" % profile) from e - return pd.location diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/backoff/_async.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/backoff/_async.py deleted file mode 100644 index 82fd4773581587b7cea95a28fbcdda8d423f0d16..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/backoff/_async.py +++ /dev/null @@ -1,188 +0,0 @@ -# coding:utf-8 -import datetime -import functools -import asyncio -from datetime import timedelta - -from backoff._common import (_init_wait_gen, _maybe_call, _next_wait) - - -def _ensure_coroutine(coro_or_func): - if asyncio.iscoroutinefunction(coro_or_func): - return coro_or_func - else: - @functools.wraps(coro_or_func) - async def f(*args, **kwargs): - return coro_or_func(*args, **kwargs) - return f - - -def _ensure_coroutines(coros_or_funcs): - return [_ensure_coroutine(f) for f in coros_or_funcs] - - -async def _call_handlers(handlers, - *, - target, args, kwargs, tries, elapsed, - **extra): - details = { - 'target': target, - 'args': args, - 'kwargs': kwargs, - 'tries': tries, - 'elapsed': elapsed, - } - details.update(extra) - for handler in handlers: - await handler(details) - - -def retry_predicate(target, wait_gen, predicate, - *, - max_tries, max_time, jitter, - on_success, on_backoff, on_giveup, - wait_gen_kwargs): - on_success = _ensure_coroutines(on_success) - on_backoff = _ensure_coroutines(on_backoff) - on_giveup = _ensure_coroutines(on_giveup) - - # Easy to implement, please report if you need this. - assert not asyncio.iscoroutinefunction(max_tries) - assert not asyncio.iscoroutinefunction(jitter) - - assert asyncio.iscoroutinefunction(target) - - @functools.wraps(target) - async def retry(*args, **kwargs): - - # update variables from outer function args - max_tries_value = _maybe_call(max_tries) - max_time_value = _maybe_call(max_time) - - tries = 0 - start = datetime.datetime.now() - wait = _init_wait_gen(wait_gen, wait_gen_kwargs) - while True: - tries += 1 - elapsed = timedelta.total_seconds(datetime.datetime.now() - start) - details = { - "target": target, - "args": args, - "kwargs": kwargs, - "tries": tries, - "elapsed": elapsed, - } - - ret = await target(*args, **kwargs) - if predicate(ret): - max_tries_exceeded = (tries == max_tries_value) - max_time_exceeded = (max_time_value is not None and - elapsed >= max_time_value) - - if max_tries_exceeded or max_time_exceeded: - await _call_handlers(on_giveup, **details, value=ret) - break - - try: - seconds = _next_wait(wait, ret, jitter, elapsed, - max_time_value) - except StopIteration: - await _call_handlers(on_giveup, **details, value=ret) - break - - await _call_handlers(on_backoff, **details, value=ret, - wait=seconds) - - # Note: there is no convenient way to pass explicit event - # loop to decorator, so here we assume that either default - # thread event loop is set and correct (it mostly is - # by default), or Python >= 3.5.3 or Python >= 3.6 is used - # where loop.get_event_loop() in coroutine guaranteed to - # return correct value. 
- # See for details: - # - # - await asyncio.sleep(seconds) - continue - else: - await _call_handlers(on_success, **details, value=ret) - break - - return ret - - return retry - - -def retry_exception(target, wait_gen, exception, - *, - max_tries, max_time, jitter, giveup, - on_success, on_backoff, on_giveup, raise_on_giveup, - wait_gen_kwargs): - on_success = _ensure_coroutines(on_success) - on_backoff = _ensure_coroutines(on_backoff) - on_giveup = _ensure_coroutines(on_giveup) - giveup = _ensure_coroutine(giveup) - - # Easy to implement, please report if you need this. - assert not asyncio.iscoroutinefunction(max_tries) - assert not asyncio.iscoroutinefunction(jitter) - - @functools.wraps(target) - async def retry(*args, **kwargs): - - max_tries_value = _maybe_call(max_tries) - max_time_value = _maybe_call(max_time) - - tries = 0 - start = datetime.datetime.now() - wait = _init_wait_gen(wait_gen, wait_gen_kwargs) - while True: - tries += 1 - elapsed = timedelta.total_seconds(datetime.datetime.now() - start) - details = { - "target": target, - "args": args, - "kwargs": kwargs, - "tries": tries, - "elapsed": elapsed, - } - - try: - ret = await target(*args, **kwargs) - except exception as e: - giveup_result = await giveup(e) - max_tries_exceeded = (tries == max_tries_value) - max_time_exceeded = (max_time_value is not None and - elapsed >= max_time_value) - - if giveup_result or max_tries_exceeded or max_time_exceeded: - await _call_handlers(on_giveup, **details, exception=e) - if raise_on_giveup: - raise - return None - - try: - seconds = _next_wait(wait, e, jitter, elapsed, - max_time_value) - except StopIteration: - await _call_handlers(on_giveup, **details, exception=e) - raise e - - await _call_handlers(on_backoff, **details, wait=seconds, - exception=e) - - # Note: there is no convenient way to pass explicit event - # loop to decorator, so here we assume that either default - # thread event loop is set and correct (it mostly is - # by default), or Python >= 3.5.3 or Python >= 3.6 is used - # where loop.get_event_loop() in coroutine guaranteed to - # return correct value. - # See for details: - # - # - await asyncio.sleep(seconds) - else: - await _call_handlers(on_success, **details) - - return ret - return retry diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/cc_sqlalchemy/sql/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/cc_sqlalchemy/sql/__init__.py deleted file mode 100644 index 68becd54d641dde154cf039fbe2c74ac70b770cb..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/cc_sqlalchemy/sql/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -from typing import Optional - -from sqlalchemy import Table - -from clickhouse_connect.driver.query import quote_identifier - - -def full_table(table_name: str, schema: Optional[str] = None) -> str: - if table_name.startswith('(') or '.' 
in table_name or not schema: - return quote_identifier(table_name) - return f'{quote_identifier(schema)}.{quote_identifier(table_name)}' - - -def format_table(table: Table): - return full_table(table.name, table.schema) diff --git a/spaces/TH5314/newbing/src/components/user-menu.tsx b/spaces/TH5314/newbing/src/components/user-menu.tsx deleted file mode 100644 index 9bd1edc9cf9f39b63629b021f0c1186b1a7c1341..0000000000000000000000000000000000000000 --- a/spaces/TH5314/newbing/src/components/user-menu.tsx +++ /dev/null @@ -1,113 +0,0 @@ -'use client' - -import { useEffect, useState } from 'react' -import Image from 'next/image' -import { toast } from 'react-hot-toast' -import { Button } from '@/components/ui/button' -import pkg from '../../package.json' -import { - DropdownMenu, - DropdownMenuContent, - DropdownMenuItem, - DropdownMenuSeparator, - DropdownMenuTrigger -} from '@/components/ui/dropdown-menu' -import { IconCopy, IconExternalLink, IconGitHub } from '@/components/ui/icons' -import SettingIcon from '@/assets/images/settings.svg' -import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard' - -export function UserMenu() { - const [host, setHost] = useState('') - const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 }) - useEffect(() => { - setHost(location.host) - }, []) - - useEffect(() => { - if (isCopied) { - toast.success('复制成功') - } - }, [isCopied]) - return ( -
-    <div className="flex items-center justify-between">
-      <DropdownMenu>
-        <DropdownMenuTrigger asChild>
-          <Button variant="ghost" className="p-0">
-            <Image alt="settings" src={SettingIcon} width={20} height={20} />
-          </Button>
-        </DropdownMenuTrigger>
-        <DropdownMenuContent align="end">
-          <DropdownMenuItem
-            onSelect={() =>
-              location.href='#dialog="settings"'
-            }
-            className="cursor-pointer"
-          >
-            设置用户
-          </DropdownMenuItem>
-          <DropdownMenuSeparator />
-          <DropdownMenuItem
-            onSelect={() =>
-              location.href='#dialog="voice"'
-            }
-            className="cursor-pointer"
-          >
-            语音设置
-          </DropdownMenuItem>
-          <DropdownMenuSeparator />
-          <DropdownMenuItem className="cursor-pointer">
-            <IconGitHub className="mr-2" />
-            开源地址
-            <IconExternalLink className="ml-1 w-3 h-3" />
-          </DropdownMenuItem>
-          <DropdownMenuItem className="cursor-pointer">
-            托管地址
-            <span className="mx-1">🤗</span>
-            <IconExternalLink className="ml-1 w-3 h-3" />
-          </DropdownMenuItem>
-          <DropdownMenuItem
-            onClick={() => copyToClipboard(location.href)}
-            className="cursor-pointer"
-          >
-            <IconCopy className="mr-2" />
-            复制站点
-          </DropdownMenuItem>
-          <DropdownMenuSeparator />
-          <div className="flex items-center justify-between px-2 py-1.5 text-xs">
-            版本信息 {pkg.version}
-          </div>
-          <DropdownMenuSeparator />
-          <div className="px-2 py-1.5 text-xs">
-            站点域名
-          </div>
-          <div onClick={() => copyToClipboard(host)} className="flex gap-1 text-xs text-zinc-500 cursor-pointer">
-            {host}
-            <IconCopy />
-          </div>
-        </DropdownMenuContent>
-      </DropdownMenu>
-    </div>
- ) -} diff --git a/spaces/TRI-ML/risk_biased_prediction/risk_biased/mpc_planner/dynamics.py b/spaces/TRI-ML/risk_biased_prediction/risk_biased/mpc_planner/dynamics.py deleted file mode 100644 index 14d4adcf9aa5bac6e5933819d71074cfecc85ea6..0000000000000000000000000000000000000000 --- a/spaces/TRI-ML/risk_biased_prediction/risk_biased/mpc_planner/dynamics.py +++ /dev/null @@ -1,49 +0,0 @@ -import torch - -from risk_biased.utils.planner_utils import AbstractState, to_state - - -class PositionVelocityDoubleIntegrator: - """Deterministic discrete-time double-integrator dynamics, where state is - [position_x_m, position_y_m, velocity_x_m_s velocity_y_m_s] and control is - [acceleration_x_m_s2, acceleration_y_m_s2]. - - Args: - dt: time differential between two discrete timesteps in seconds - """ - - def __init__(self, dt: float): - self.dt = dt - self.control_dim = 2 - - def simulate( - self, - state_init: AbstractState, - control_input: torch.Tensor, - ) -> AbstractState: - """Euler-integrate dynamics from the initial position and the initial velocity given - an acceleration input - - Args: - state_init: initial Markov state of the system - control_input: (num_agents, num_steps_future, 2) tensor of acceleration input - - Returns: - (num_agents, num_steps_future, 5) tensor of simulated future Markov state - sequence - """ - position_init, velocity_init = state_init.position, state_init.velocity - - assert ( - control_input.shape[-1] == self.control_dim - ), "invalid control input dimension" - - velocity_future = velocity_init + self.dt * torch.cumsum(control_input, dim=-2) - - position_future = position_init + self.dt * torch.cumsum( - velocity_future, dim=-2 - ) - state_future = to_state( - torch.cat((position_future, velocity_future), dim=-1), self.dt - ) - return state_future diff --git a/spaces/TRI-ML/risk_biased_prediction/setup.py b/spaces/TRI-ML/risk_biased_prediction/setup.py deleted file mode 100644 index 5f51989b8167b3c57216027fe7233e8234639fdc..0000000000000000000000000000000000000000 --- a/spaces/TRI-ML/risk_biased_prediction/setup.py +++ /dev/null @@ -1,27 +0,0 @@ -from setuptools import setup - -setup( - name="risk_biased", - version="0.1", - description="Risk biased trajectory prediction", - authors=["Jean Mercat", "Haruki Nishimura"], - author_emails=["jean.mercat@tri.global", "haruki.nishimura@tri.global"], - license="MIT", - packages=["risk_biased"], - zip_safe=False, - install_requires=[ - "torch>=1.12", - "matplotlib", - "numpy", - "mmcv>=1.4.7", - "pytorch-lightning", - "pytest", - "setuptools>=59.5.0", - "wandb", - "plotly", - "scipy", - "gradio", - "einops", - "pydantic", - ], -) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/packaging/tags.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/packaging/tags.py deleted file mode 100644 index 76d243414d00f54a8973359cf553123e9bd1760e..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/packaging/tags.py +++ /dev/null @@ -1,546 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. 
- -import logging -import platform -import subprocess -import sys -import sysconfig -from importlib.machinery import EXTENSION_SUFFIXES -from typing import ( - Dict, - FrozenSet, - Iterable, - Iterator, - List, - Optional, - Sequence, - Tuple, - Union, - cast, -) - -from . import _manylinux, _musllinux - -logger = logging.getLogger(__name__) - -PythonVersion = Sequence[int] -MacVersion = Tuple[int, int] - -INTERPRETER_SHORT_NAMES: Dict[str, str] = { - "python": "py", # Generic. - "cpython": "cp", - "pypy": "pp", - "ironpython": "ip", - "jython": "jy", -} - - -_32_BIT_INTERPRETER = sys.maxsize <= 2**32 - - -class Tag: - """ - A representation of the tag triple for a wheel. - - Instances are considered immutable and thus are hashable. Equality checking - is also supported. - """ - - __slots__ = ["_interpreter", "_abi", "_platform", "_hash"] - - def __init__(self, interpreter: str, abi: str, platform: str) -> None: - self._interpreter = interpreter.lower() - self._abi = abi.lower() - self._platform = platform.lower() - # The __hash__ of every single element in a Set[Tag] will be evaluated each time - # that a set calls its `.disjoint()` method, which may be called hundreds of - # times when scanning a page of links for packages with tags matching that - # Set[Tag]. Pre-computing the value here produces significant speedups for - # downstream consumers. - self._hash = hash((self._interpreter, self._abi, self._platform)) - - @property - def interpreter(self) -> str: - return self._interpreter - - @property - def abi(self) -> str: - return self._abi - - @property - def platform(self) -> str: - return self._platform - - def __eq__(self, other: object) -> bool: - if not isinstance(other, Tag): - return NotImplemented - - return ( - (self._hash == other._hash) # Short-circuit ASAP for perf reasons. - and (self._platform == other._platform) - and (self._abi == other._abi) - and (self._interpreter == other._interpreter) - ) - - def __hash__(self) -> int: - return self._hash - - def __str__(self) -> str: - return f"{self._interpreter}-{self._abi}-{self._platform}" - - def __repr__(self) -> str: - return f"<{self} @ {id(self)}>" - - -def parse_tag(tag: str) -> FrozenSet[Tag]: - """ - Parses the provided tag (e.g. `py3-none-any`) into a frozenset of Tag instances. - - Returning a set is required due to the possibility that the tag is a - compressed tag set. - """ - tags = set() - interpreters, abis, platforms = tag.split("-") - for interpreter in interpreters.split("."): - for abi in abis.split("."): - for platform_ in platforms.split("."): - tags.add(Tag(interpreter, abi, platform_)) - return frozenset(tags) - - -def _get_config_var(name: str, warn: bool = False) -> Union[int, str, None]: - value: Union[int, str, None] = sysconfig.get_config_var(name) - if value is None and warn: - logger.debug( - "Config variable '%s' is unset, Python ABI tag may be incorrect", name - ) - return value - - -def _normalize_string(string: str) -> str: - return string.replace(".", "_").replace("-", "_").replace(" ", "_") - - -def _abi3_applies(python_version: PythonVersion) -> bool: - """ - Determine if the Python version supports abi3. - - PEP 384 was first implemented in Python 3.2. - """ - return len(python_version) > 1 and tuple(python_version) >= (3, 2) - - -def _cpython_abis(py_version: PythonVersion, warn: bool = False) -> List[str]: - py_version = tuple(py_version) # To allow for version comparison. 
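-    # The resulting ABI tags look like "cp" + version plus the optional
-    # suffixes gathered below: "d" (debug build), "m" (pymalloc, Python < 3.8)
-    # and "u" (wide unicode, Python < 3.3), e.g. "cp36dmu".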
- abis = [] - version = _version_nodot(py_version[:2]) - debug = pymalloc = ucs4 = "" - with_debug = _get_config_var("Py_DEBUG", warn) - has_refcount = hasattr(sys, "gettotalrefcount") - # Windows doesn't set Py_DEBUG, so checking for support of debug-compiled - # extension modules is the best option. - # https://github.com/pypa/pip/issues/3383#issuecomment-173267692 - has_ext = "_d.pyd" in EXTENSION_SUFFIXES - if with_debug or (with_debug is None and (has_refcount or has_ext)): - debug = "d" - if py_version < (3, 8): - with_pymalloc = _get_config_var("WITH_PYMALLOC", warn) - if with_pymalloc or with_pymalloc is None: - pymalloc = "m" - if py_version < (3, 3): - unicode_size = _get_config_var("Py_UNICODE_SIZE", warn) - if unicode_size == 4 or ( - unicode_size is None and sys.maxunicode == 0x10FFFF - ): - ucs4 = "u" - elif debug: - # Debug builds can also load "normal" extension modules. - # We can also assume no UCS-4 or pymalloc requirement. - abis.append(f"cp{version}") - abis.insert( - 0, - "cp{version}{debug}{pymalloc}{ucs4}".format( - version=version, debug=debug, pymalloc=pymalloc, ucs4=ucs4 - ), - ) - return abis - - -def cpython_tags( - python_version: Optional[PythonVersion] = None, - abis: Optional[Iterable[str]] = None, - platforms: Optional[Iterable[str]] = None, - *, - warn: bool = False, -) -> Iterator[Tag]: - """ - Yields the tags for a CPython interpreter. - - The tags consist of: - - cp-- - - cp-abi3- - - cp-none- - - cp-abi3- # Older Python versions down to 3.2. - - If python_version only specifies a major version then user-provided ABIs and - the 'none' ABItag will be used. - - If 'abi3' or 'none' are specified in 'abis' then they will be yielded at - their normal position and not at the beginning. - """ - if not python_version: - python_version = sys.version_info[:2] - - interpreter = f"cp{_version_nodot(python_version[:2])}" - - if abis is None: - if len(python_version) > 1: - abis = _cpython_abis(python_version, warn) - else: - abis = [] - abis = list(abis) - # 'abi3' and 'none' are explicitly handled later. - for explicit_abi in ("abi3", "none"): - try: - abis.remove(explicit_abi) - except ValueError: - pass - - platforms = list(platforms or platform_tags()) - for abi in abis: - for platform_ in platforms: - yield Tag(interpreter, abi, platform_) - if _abi3_applies(python_version): - yield from (Tag(interpreter, "abi3", platform_) for platform_ in platforms) - yield from (Tag(interpreter, "none", platform_) for platform_ in platforms) - - if _abi3_applies(python_version): - for minor_version in range(python_version[1] - 1, 1, -1): - for platform_ in platforms: - interpreter = "cp{version}".format( - version=_version_nodot((python_version[0], minor_version)) - ) - yield Tag(interpreter, "abi3", platform_) - - -def _generic_abi() -> List[str]: - """ - Return the ABI tag based on EXT_SUFFIX. - """ - # The following are examples of `EXT_SUFFIX`. 
- # We want to keep the parts which are related to the ABI and remove the - # parts which are related to the platform: - # - linux: '.cpython-310-x86_64-linux-gnu.so' => cp310 - # - mac: '.cpython-310-darwin.so' => cp310 - # - win: '.cp310-win_amd64.pyd' => cp310 - # - win: '.pyd' => cp37 (uses _cpython_abis()) - # - pypy: '.pypy38-pp73-x86_64-linux-gnu.so' => pypy38_pp73 - # - graalpy: '.graalpy-38-native-x86_64-darwin.dylib' - # => graalpy_38_native - - ext_suffix = _get_config_var("EXT_SUFFIX", warn=True) - if not isinstance(ext_suffix, str) or ext_suffix[0] != ".": - raise SystemError("invalid sysconfig.get_config_var('EXT_SUFFIX')") - parts = ext_suffix.split(".") - if len(parts) < 3: - # CPython3.7 and earlier uses ".pyd" on Windows. - return _cpython_abis(sys.version_info[:2]) - soabi = parts[1] - if soabi.startswith("cpython"): - # non-windows - abi = "cp" + soabi.split("-")[1] - elif soabi.startswith("cp"): - # windows - abi = soabi.split("-")[0] - elif soabi.startswith("pypy"): - abi = "-".join(soabi.split("-")[:2]) - elif soabi.startswith("graalpy"): - abi = "-".join(soabi.split("-")[:3]) - elif soabi: - # pyston, ironpython, others? - abi = soabi - else: - return [] - return [_normalize_string(abi)] - - -def generic_tags( - interpreter: Optional[str] = None, - abis: Optional[Iterable[str]] = None, - platforms: Optional[Iterable[str]] = None, - *, - warn: bool = False, -) -> Iterator[Tag]: - """ - Yields the tags for a generic interpreter. - - The tags consist of: - - -- - - The "none" ABI will be added if it was not explicitly provided. - """ - if not interpreter: - interp_name = interpreter_name() - interp_version = interpreter_version(warn=warn) - interpreter = "".join([interp_name, interp_version]) - if abis is None: - abis = _generic_abi() - else: - abis = list(abis) - platforms = list(platforms or platform_tags()) - if "none" not in abis: - abis.append("none") - for abi in abis: - for platform_ in platforms: - yield Tag(interpreter, abi, platform_) - - -def _py_interpreter_range(py_version: PythonVersion) -> Iterator[str]: - """ - Yields Python versions in descending order. - - After the latest version, the major-only version will be yielded, and then - all previous versions of that major version. - """ - if len(py_version) > 1: - yield f"py{_version_nodot(py_version[:2])}" - yield f"py{py_version[0]}" - if len(py_version) > 1: - for minor in range(py_version[1] - 1, -1, -1): - yield f"py{_version_nodot((py_version[0], minor))}" - - -def compatible_tags( - python_version: Optional[PythonVersion] = None, - interpreter: Optional[str] = None, - platforms: Optional[Iterable[str]] = None, -) -> Iterator[Tag]: - """ - Yields the sequence of tags that are compatible with a specific version of Python. - - The tags consist of: - - py*-none- - - -none-any # ... if `interpreter` is provided. 
- - py*-none-any - """ - if not python_version: - python_version = sys.version_info[:2] - platforms = list(platforms or platform_tags()) - for version in _py_interpreter_range(python_version): - for platform_ in platforms: - yield Tag(version, "none", platform_) - if interpreter: - yield Tag(interpreter, "none", "any") - for version in _py_interpreter_range(python_version): - yield Tag(version, "none", "any") - - -def _mac_arch(arch: str, is_32bit: bool = _32_BIT_INTERPRETER) -> str: - if not is_32bit: - return arch - - if arch.startswith("ppc"): - return "ppc" - - return "i386" - - -def _mac_binary_formats(version: MacVersion, cpu_arch: str) -> List[str]: - formats = [cpu_arch] - if cpu_arch == "x86_64": - if version < (10, 4): - return [] - formats.extend(["intel", "fat64", "fat32"]) - - elif cpu_arch == "i386": - if version < (10, 4): - return [] - formats.extend(["intel", "fat32", "fat"]) - - elif cpu_arch == "ppc64": - # TODO: Need to care about 32-bit PPC for ppc64 through 10.2? - if version > (10, 5) or version < (10, 4): - return [] - formats.append("fat64") - - elif cpu_arch == "ppc": - if version > (10, 6): - return [] - formats.extend(["fat32", "fat"]) - - if cpu_arch in {"arm64", "x86_64"}: - formats.append("universal2") - - if cpu_arch in {"x86_64", "i386", "ppc64", "ppc", "intel"}: - formats.append("universal") - - return formats - - -def mac_platforms( - version: Optional[MacVersion] = None, arch: Optional[str] = None -) -> Iterator[str]: - """ - Yields the platform tags for a macOS system. - - The `version` parameter is a two-item tuple specifying the macOS version to - generate platform tags for. The `arch` parameter is the CPU architecture to - generate platform tags for. Both parameters default to the appropriate value - for the current system. - """ - version_str, _, cpu_arch = platform.mac_ver() - if version is None: - version = cast("MacVersion", tuple(map(int, version_str.split(".")[:2]))) - if version == (10, 16): - # When built against an older macOS SDK, Python will report macOS 10.16 - # instead of the real version. - version_str = subprocess.run( - [ - sys.executable, - "-sS", - "-c", - "import platform; print(platform.mac_ver()[0])", - ], - check=True, - env={"SYSTEM_VERSION_COMPAT": "0"}, - stdout=subprocess.PIPE, - universal_newlines=True, - ).stdout - version = cast("MacVersion", tuple(map(int, version_str.split(".")[:2]))) - else: - version = version - if arch is None: - arch = _mac_arch(cpu_arch) - else: - arch = arch - - if (10, 0) <= version and version < (11, 0): - # Prior to Mac OS 11, each yearly release of Mac OS bumped the - # "minor" version number. The major version was always 10. - for minor_version in range(version[1], -1, -1): - compat_version = 10, minor_version - binary_formats = _mac_binary_formats(compat_version, arch) - for binary_format in binary_formats: - yield "macosx_{major}_{minor}_{binary_format}".format( - major=10, minor=minor_version, binary_format=binary_format - ) - - if version >= (11, 0): - # Starting with Mac OS 11, each yearly release bumps the major version - # number. The minor versions are now the midyear updates. - for major_version in range(version[0], 10, -1): - compat_version = major_version, 0 - binary_formats = _mac_binary_formats(compat_version, arch) - for binary_format in binary_formats: - yield "macosx_{major}_{minor}_{binary_format}".format( - major=major_version, minor=0, binary_format=binary_format - ) - - if version >= (11, 0): - # Mac OS 11 on x86_64 is compatible with binaries from previous releases. 
- # Arm64 support was introduced in 11.0, so no Arm binaries from previous - # releases exist. - # - # However, the "universal2" binary format can have a - # macOS version earlier than 11.0 when the x86_64 part of the binary supports - # that version of macOS. - if arch == "x86_64": - for minor_version in range(16, 3, -1): - compat_version = 10, minor_version - binary_formats = _mac_binary_formats(compat_version, arch) - for binary_format in binary_formats: - yield "macosx_{major}_{minor}_{binary_format}".format( - major=compat_version[0], - minor=compat_version[1], - binary_format=binary_format, - ) - else: - for minor_version in range(16, 3, -1): - compat_version = 10, minor_version - binary_format = "universal2" - yield "macosx_{major}_{minor}_{binary_format}".format( - major=compat_version[0], - minor=compat_version[1], - binary_format=binary_format, - ) - - -def _linux_platforms(is_32bit: bool = _32_BIT_INTERPRETER) -> Iterator[str]: - linux = _normalize_string(sysconfig.get_platform()) - if is_32bit: - if linux == "linux_x86_64": - linux = "linux_i686" - elif linux == "linux_aarch64": - linux = "linux_armv7l" - _, arch = linux.split("_", 1) - yield from _manylinux.platform_tags(linux, arch) - yield from _musllinux.platform_tags(arch) - yield linux - - -def _generic_platforms() -> Iterator[str]: - yield _normalize_string(sysconfig.get_platform()) - - -def platform_tags() -> Iterator[str]: - """ - Provides the platform tags for this installation. - """ - if platform.system() == "Darwin": - return mac_platforms() - elif platform.system() == "Linux": - return _linux_platforms() - else: - return _generic_platforms() - - -def interpreter_name() -> str: - """ - Returns the name of the running interpreter. - - Some implementations have a reserved, two-letter abbreviation which will - be returned when appropriate. - """ - name = sys.implementation.name - return INTERPRETER_SHORT_NAMES.get(name) or name - - -def interpreter_version(*, warn: bool = False) -> str: - """ - Returns the version of the running interpreter. - """ - version = _get_config_var("py_version_nodot", warn=warn) - if version: - version = str(version) - else: - version = _version_nodot(sys.version_info[:2]) - return version - - -def _version_nodot(version: PythonVersion) -> str: - return "".join(map(str, version)) - - -def sys_tags(*, warn: bool = False) -> Iterator[Tag]: - """ - Returns the sequence of tag triples for the running interpreter. - - The order of the sequence corresponds to priority order for the - interpreter, from most to least important. - """ - - interp_name = interpreter_name() - if interp_name == "cp": - yield from cpython_tags(warn=warn) - else: - yield from generic_tags() - - if interp_name == "pp": - interp = "pp3" - elif interp_name == "cp": - interp = "cp" + interpreter_version(warn=warn) - else: - interp = None - yield from compatible_tags(interpreter=interp) diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/layers/wrappers.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/layers/wrappers.py deleted file mode 100644 index 29d0ef9102b2db0ffbf723c168aa32d2451b9419..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/layers/wrappers.py +++ /dev/null @@ -1,132 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -""" -Wrappers around on some nn functions, mainly to support empty tensors. - -Ideally, add support directly in PyTorch to empty tensors in those functions. 
- -These can be removed once https://github.com/pytorch/pytorch/issues/12013 -is implemented -""" - -from typing import List, Optional -import torch -from torch.nn import functional as F - - -def shapes_to_tensor(x: List[int], device: Optional[torch.device] = None) -> torch.Tensor: - """ - Turn a list of integer scalars or integer Tensor scalars into a vector, - in a way that's both traceable and scriptable. - - In tracing, `x` should be a list of scalar Tensor, so the output can trace to the inputs. - In scripting or eager, `x` should be a list of int. - """ - if torch.jit.is_scripting(): - return torch.as_tensor(x, device=device) - if torch.jit.is_tracing(): - assert all( - [isinstance(t, torch.Tensor) for t in x] - ), "Shape should be tensor during tracing!" - # as_tensor should not be used in tracing because it records a constant - ret = torch.stack(x) - if ret.device != device: # avoid recording a hard-coded device if not necessary - ret = ret.to(device=device) - return ret - return torch.as_tensor(x, device=device) - - -def cat(tensors: List[torch.Tensor], dim: int = 0): - """ - Efficient version of torch.cat that avoids a copy if there is only a single element in a list - """ - assert isinstance(tensors, (list, tuple)) - if len(tensors) == 1: - return tensors[0] - return torch.cat(tensors, dim) - - -def cross_entropy(input, target, *, reduction="mean", **kwargs): - """ - Same as `torch.nn.functional.cross_entropy`, but returns 0 (instead of nan) - for empty inputs. - """ - if target.numel() == 0 and reduction == "mean": - return input.sum() * 0.0 # connect the gradient - return F.cross_entropy(input, target, reduction=reduction, **kwargs) - - -class _NewEmptyTensorOp(torch.autograd.Function): - @staticmethod - def forward(ctx, x, new_shape): - ctx.shape = x.shape - return x.new_empty(new_shape) - - @staticmethod - def backward(ctx, grad): - shape = ctx.shape - return _NewEmptyTensorOp.apply(grad, shape), None - - -class Conv2d(torch.nn.Conv2d): - """ - A wrapper around :class:`torch.nn.Conv2d` to support empty inputs and more features. - """ - - def __init__(self, *args, **kwargs): - """ - Extra keyword arguments supported in addition to those in `torch.nn.Conv2d`: - - Args: - norm (nn.Module, optional): a normalization layer - activation (callable(Tensor) -> Tensor): a callable activation function - - It assumes that norm layer is used before activation. - """ - norm = kwargs.pop("norm", None) - activation = kwargs.pop("activation", None) - super().__init__(*args, **kwargs) - - self.norm = norm - self.activation = activation - - def forward(self, x): - # torchscript does not support SyncBatchNorm yet - # https://github.com/pytorch/pytorch/issues/40507 - # and we skip these codes in torchscript since: - # 1. currently we only support torchscript in evaluation mode - # 2. features needed by exporting module to torchscript are added in PyTorch 1.6 or - # later version, `Conv2d` in these PyTorch versions has already supported empty inputs. - if not torch.jit.is_scripting(): - if x.numel() == 0 and self.training: - # https://github.com/pytorch/pytorch/issues/12013 - assert not isinstance( - self.norm, torch.nn.SyncBatchNorm - ), "SyncBatchNorm does not support empty inputs!" 
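-            # Past the guard above, run the convolution itself; recent PyTorch
-            # handles empty inputs natively, so only SyncBatchNorm needs the
-            # explicit assert.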
- - x = F.conv2d( - x, self.weight, self.bias, self.stride, self.padding, self.dilation, self.groups - ) - if self.norm is not None: - x = self.norm(x) - if self.activation is not None: - x = self.activation(x) - return x - - -ConvTranspose2d = torch.nn.ConvTranspose2d -BatchNorm2d = torch.nn.BatchNorm2d -interpolate = F.interpolate -Linear = torch.nn.Linear - - -def nonzero_tuple(x): - """ - A 'as_tuple=True' version of torch.nonzero to support torchscript. - because of https://github.com/pytorch/pytorch/issues/38718 - """ - if torch.jit.is_scripting(): - if x.dim() == 0: - return x.unsqueeze(0).nonzero().unbind(1) - return x.nonzero().unbind(1) - else: - return x.nonzero(as_tuple=True) diff --git a/spaces/Tonic/cybermints/app.py b/spaces/Tonic/cybermints/app.py deleted file mode 100644 index d7a998c07c107d38533fb8afa16e81b3d3e3b3a3..0000000000000000000000000000000000000000 --- a/spaces/Tonic/cybermints/app.py +++ /dev/null @@ -1,145 +0,0 @@ -import time - -from theme_dropdown import create_theme_dropdown # noqa: F401 - -import gradio as gr - -dropdown, js = create_theme_dropdown() - -with gr.Blocks(theme='Tonic/cybermints') as demo: - with gr.Row().style(equal_height=True): - with gr.Column(scale=10): - gr.Markdown( - """ - # Theme preview: `cybermints` - To use this theme, set `theme='Tonic/cybermints'` in `gr.Blocks()` or `gr.Interface()`. - """ - ) - with gr.Column(scale=3): - with gr.Box(): - dropdown.render() - toggle_dark = gr.Button(value="Toggle Dark").style(full_width=True) - - dropdown.change(None, dropdown, None, _js=js) - toggle_dark.click( - None, - _js=""" - () => { - document.body.classList.toggle('dark'); - document.querySelector('gradio-app').style.backgroundColor = 'var(--color-background-primary)' - } - """, - ) - - name = gr.Textbox( - label="Name", - info="Full name, including middle name. No special characters.", - placeholder="Tonic the Sesh Hog", - value="Tonic the Sesh Hog", - interactive=True, - ) - - with gr.Row(): - slider1 = gr.Slider(label="Slider 1") - slider2 = gr.Slider(label="Slider 2") - gr.CheckboxGroup(["A", "B", "C"], label="Checkbox Group") - - with gr.Row(): - with gr.Column(variant="panel", scale=1): - gr.Markdown("## Panel 1") - radio = gr.Radio( - ["A", "B", "C"], - label="Radio", - info="Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. 
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.", - ) - drop = gr.Dropdown(["Option 1", "Option 2", "Option 3"], show_label=False) - drop_2 = gr.Dropdown( - ["Option A", "Option B", "Option C"], - multiselect=True, - value=["Option A"], - label="Dropdown", - interactive=True, - ) - check = gr.Checkbox(label="Go") - with gr.Column(variant="panel", scale=2): - img = gr.Image( - "https://gradio.app/assets/img/header-image.jpg", label="Image" - ).style(height=320) - with gr.Row(): - go_btn = gr.Button("Go", label="Primary Button", variant="primary") - clear_btn = gr.Button( - "Clear", label="Secondary Button", variant="secondary" - ) - - def go(*args): - time.sleep(3) - return "https://gradio.app/assets/img/header-image.jpg" - - go_btn.click(go, [radio, drop, drop_2, check, name], img, api_name="go") - - def clear(): - time.sleep(0.2) - return None - - clear_btn.click(clear, None, img) - - with gr.Row(): - btn1 = gr.Button("Button 1").style(size="sm") - btn2 = gr.UploadButton().style(size="sm") - stop_btn = gr.Button("Stop", label="Stop Button", variant="stop").style( - size="sm" - ) - - with gr.Row(): - gr.Dataframe(value=[[1, 2, 3], [4, 5, 6], [7, 8, 9]], label="Dataframe") - gr.JSON( - value={"a": 1, "b": 2, "c": {"test": "a", "test2": [1, 2, 3]}}, label="JSON" - ) - gr.Label(value={"tonic1": 0.7, "tonic2": 0.2, "tonic3": 0.1}) - gr.File() - with gr.Row(): - gr.ColorPicker() - gr.Video("https://gradio-static-files.s3.us-west-2.amazonaws.com/world.mp4") - gr.Gallery( - [ - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/lion.jpg", - "lion", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/logo.png", - "logo", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/tower.jpg", - "tower", - ), - ] - ).style(height="200px", grid=2) - - with gr.Row(): - with gr.Column(scale=2): - chatbot = gr.Chatbot([("Welcome to cybermints template", "I should check out darkmode!")], label="Chatbot") - chat_btn = gr.Button("Add messages") - - def chat(history): - time.sleep(2) - yield [["What's my name?", "Your name is Tonic!"]] - - chat_btn.click( - lambda history: history - + [["How are you?", "I am good."]] - + (time.sleep(2) or []), - chatbot, - chatbot, - ) - with gr.Column(scale=1): - with gr.Accordion("Advanced Settings"): - gr.Markdown("Tonic") - gr.Number(label="Chatbot control 1") - gr.Number(label="Chatbot control 2") - gr.Number(label="Chatbot control 3") - - -if __name__ == "__main__": - demo.queue().launch() \ No newline at end of file diff --git a/spaces/Vision-CAIR/MiniGPT-v2/minigpt4/processors/blip_processors.py b/spaces/Vision-CAIR/MiniGPT-v2/minigpt4/processors/blip_processors.py deleted file mode 100644 index fd26160ec96a8458cdac083d19c19695937a7a62..0000000000000000000000000000000000000000 --- a/spaces/Vision-CAIR/MiniGPT-v2/minigpt4/processors/blip_processors.py +++ /dev/null @@ -1,141 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. 
- SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -import re - -from minigpt4.common.registry import registry -from minigpt4.processors.base_processor import BaseProcessor -from minigpt4.processors.randaugment import RandomAugment -from omegaconf import OmegaConf -from torchvision import transforms -from torchvision.transforms.functional import InterpolationMode - - -class BlipImageBaseProcessor(BaseProcessor): - def __init__(self, mean=None, std=None): - if mean is None: - mean = (0.48145466, 0.4578275, 0.40821073) - if std is None: - std = (0.26862954, 0.26130258, 0.27577711) - - self.normalize = transforms.Normalize(mean, std) - - -@registry.register_processor("blip_caption") -class BlipCaptionProcessor(BaseProcessor): - def __init__(self, prompt="", max_words=50): - self.prompt = prompt - self.max_words = max_words - - def __call__(self, caption): - caption = self.prompt + self.pre_caption(caption) - - return caption - - @classmethod - def from_config(cls, cfg=None): - if cfg is None: - cfg = OmegaConf.create() - - prompt = cfg.get("prompt", "") - max_words = cfg.get("max_words", 50) - - return cls(prompt=prompt, max_words=max_words) - - def pre_caption(self, caption): - caption = re.sub( - r"([.!\"()*#:;~])", - " ", - caption.lower(), - ) - caption = re.sub( - r"\s{2,}", - " ", - caption, - ) - caption = caption.rstrip("\n") - caption = caption.strip(" ") - - # truncate caption - caption_words = caption.split(" ") - if len(caption_words) > self.max_words: - caption = " ".join(caption_words[: self.max_words]) - - return caption - - -@registry.register_processor("blip2_image_train") -class Blip2ImageTrainProcessor(BlipImageBaseProcessor): - def __init__(self, image_size=224, mean=None, std=None, min_scale=0.5, max_scale=1.0): - super().__init__(mean=mean, std=std) - - self.transform = transforms.Compose( - [ - transforms.RandomResizedCrop( - image_size, - scale=(min_scale, max_scale), - interpolation=InterpolationMode.BICUBIC, - ), - transforms.ToTensor(), - self.normalize, - ] - ) - - def __call__(self, item): - return self.transform(item) - - @classmethod - def from_config(cls, cfg=None): - if cfg is None: - cfg = OmegaConf.create() - - image_size = cfg.get("image_size", 224) - - mean = cfg.get("mean", None) - std = cfg.get("std", None) - - min_scale = cfg.get("min_scale", 0.5) - max_scale = cfg.get("max_scale", 1.0) - - return cls( - image_size=image_size, - mean=mean, - std=std, - min_scale=min_scale, - max_scale=max_scale, - ) - - -@registry.register_processor("blip2_image_eval") -class Blip2ImageEvalProcessor(BlipImageBaseProcessor): - def __init__(self, image_size=224, mean=None, std=None): - super().__init__(mean=mean, std=std) - - self.transform = transforms.Compose( - [ - transforms.Resize( - (image_size, image_size), interpolation=InterpolationMode.BICUBIC - ), - transforms.ToTensor(), - self.normalize, - ] - ) - - def __call__(self, item): - return self.transform(item) - - @classmethod - def from_config(cls, cfg=None): - if cfg is None: - cfg = OmegaConf.create() - - image_size = cfg.get("image_size", 224) - - mean = cfg.get("mean", None) - std = cfg.get("std", None) - - return cls(image_size=image_size, mean=mean, std=std) \ No newline at end of file diff --git a/spaces/WZUN666/vits-uma-genshin-honkai/mel_processing.py b/spaces/WZUN666/vits-uma-genshin-honkai/mel_processing.py deleted file mode 100644 index 
3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000 --- a/spaces/WZUN666/vits-uma-genshin-honkai/mel_processing.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = 
spectral_normalize_torch(spec) - - return spec diff --git a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/utils/collect_env.py b/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/utils/collect_env.py deleted file mode 100644 index 7b59eb9be8f644f83d210bc0510c86a133996d84..0000000000000000000000000000000000000000 --- a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/utils/collect_env.py +++ /dev/null @@ -1,204 +0,0 @@ -"Utility functions to help deal with user environment" - -from ..imports.torch import * -from ..core import * -from ..script import * -from .pynvml_gate import * -import fastprogress, subprocess, platform - -__all__ = ['show_install', 'check_perf'] - -def get_env(name): - "Return env var value if it's defined and not an empty string, or return Unknown" - res = os.environ.get(name,'') - return res if len(res) else "Unknown" - -def show_install(show_nvidia_smi:bool=False): - "Print user's setup information" - - import platform, fastai.version - - rep = [] - opt_mods = [] - - rep.append(["=== Software ===", None]) - rep.append(["python", platform.python_version()]) - rep.append(["fastai", fastai.__version__]) - rep.append(["fastprogress", fastprogress.__version__]) - rep.append(["torch", torch.__version__]) - - # nvidia-smi - cmd = "nvidia-smi" - have_nvidia_smi = False - try: result = subprocess.run(cmd.split(), shell=False, check=False, stdout=subprocess.PIPE) - except: pass - else: - if result.returncode == 0 and result.stdout: have_nvidia_smi = True - - # XXX: if nvidia-smi is not available, another check could be: - # /proc/driver/nvidia/version on most systems, since it's the - # currently active version - - if have_nvidia_smi: - smi = result.stdout.decode('utf-8') - # matching: "Driver Version: 396.44" - match = re.findall(r'Driver Version: +(\d+\.\d+)', smi) - if match: rep.append(["nvidia driver", match[0]]) - - available = "available" if torch.cuda.is_available() else "**Not available** " - rep.append(["torch cuda", f"{torch.version.cuda} / is {available}"]) - - # no point reporting on cudnn if cuda is not available, as it - # seems to be enabled at times even on cpu-only setups - if torch.cuda.is_available(): - enabled = "enabled" if torch.backends.cudnn.enabled else "**Not enabled** " - rep.append(["torch cudnn", f"{torch.backends.cudnn.version()} / is {enabled}"]) - - rep.append(["\n=== Hardware ===", None]) - - # it's possible that torch might not see what nvidia-smi sees? 
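-    # Query nvidia-smi directly for each GPU's total memory so it can be
-    # reported even when it differs from what torch sees.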
- gpu_total_mem = [] - nvidia_gpu_cnt = 0 - if have_nvidia_smi: - try: - cmd = "nvidia-smi --query-gpu=memory.total --format=csv,nounits,noheader" - result = subprocess.run(cmd.split(), shell=False, check=False, stdout=subprocess.PIPE) - except: - print("have nvidia-smi, but failed to query it") - else: - if result.returncode == 0 and result.stdout: - output = result.stdout.decode('utf-8') - gpu_total_mem = [int(x) for x in output.strip().split('\n')] - nvidia_gpu_cnt = len(gpu_total_mem) - - - if nvidia_gpu_cnt: rep.append(["nvidia gpus", nvidia_gpu_cnt]) - - torch_gpu_cnt = torch.cuda.device_count() - if torch_gpu_cnt: - rep.append(["torch devices", torch_gpu_cnt]) - # information for each gpu - for i in range(torch_gpu_cnt): - rep.append([f" - gpu{i}", (f"{gpu_total_mem[i]}MB | " if gpu_total_mem else "") + torch.cuda.get_device_name(i)]) - else: - if nvidia_gpu_cnt: - rep.append([f"Have {nvidia_gpu_cnt} GPU(s), but torch can't use them (check nvidia driver)", None]) - else: - rep.append([f"No GPUs available", None]) - - - rep.append(["\n=== Environment ===", None]) - - rep.append(["platform", platform.platform()]) - - if platform.system() == 'Linux': - distro = try_import('distro') - if distro: - # full distro info - rep.append(["distro", ' '.join(distro.linux_distribution())]) - else: - opt_mods.append('distro'); - # partial distro info - rep.append(["distro", platform.uname().version]) - - rep.append(["conda env", get_env('CONDA_DEFAULT_ENV')]) - rep.append(["python", sys.executable]) - rep.append(["sys.path", "\n".join(sys.path)]) - - print("\n\n```text") - - keylen = max([len(e[0]) for e in rep if e[1] is not None]) - for e in rep: - print(f"{e[0]:{keylen}}", (f": {e[1]}" if e[1] is not None else "")) - - if have_nvidia_smi: - if show_nvidia_smi: print(f"\n{smi}") - else: - if torch_gpu_cnt: print("no nvidia-smi is found") - else: print("no supported gpus found on this system") - - print("```\n") - - print("Please make sure to include opening/closing ``` when you paste into forums/github to make the reports appear formatted as code sections.\n") - - if opt_mods: - print("Optional package(s) to enhance the diagnostics can be installed with:") - print(f"pip install {' '.join(opt_mods)}") - print("Once installed, re-run this utility to get the additional information") - -def pypi_module_version_is_available(module, version): - "Check whether module==version is available on pypi" - # returns True/False (or None if failed to execute the check) - - # using a hack that when passing "module==" w/ no version number to pip - # it "fails" and returns all the available versions in stderr - try: - cmd = f"pip install {module}==" - result = subprocess.run(cmd.split(), shell=False, check=False, - stdout=subprocess.PIPE, stderr=subprocess.PIPE) - except Exception as e: - print(f"Error: {e}") - return None - else: - if result.returncode == 1 and result.stderr: - output = result.stderr.decode('utf-8') - return True if version in output else False - else: - print(f"Some error in {cmd}") - return None - -def check_perf(): - "Suggest how to improve the setup to speed things up" - - from PIL import features, Image - from packaging import version - - print("Running performance checks.") - - # libjpeg_turbo check - print("\n*** libjpeg-turbo status") - if version.parse(Image.PILLOW_VERSION) >= version.parse("5.3.9"): - if features.check_feature('libjpeg_turbo'): - print("✔ libjpeg-turbo is on") - else: - print("✘ libjpeg-turbo is not on. 
It's recommended you install libjpeg-turbo to speed up JPEG decoding. See https://docs.fast.ai/performance.html#libjpeg-turbo") - else: - print(f"❓ libjpeg-turbo's status can't be derived - need Pillow(-SIMD)? >= 5.4.0 to tell, current version {Image.PILLOW_VERSION}") - # XXX: remove this check/note once Pillow and Pillow-SIMD 5.4.0 is available - pillow_ver_5_4_is_avail = pypi_module_version_is_available("Pillow", "5.4.0") - if pillow_ver_5_4_is_avail == False: - print("5.4.0 is not yet available, other than the dev version on github, which can be installed via pip from git+https://github.com/python-pillow/Pillow. See https://docs.fast.ai/performance.html#libjpeg-turbo") - - # Pillow-SIMD check - print("\n*** Pillow-SIMD status") - if re.search(r'\.post\d+', Image.PILLOW_VERSION): - print(f"✔ Running Pillow-SIMD {Image.PILLOW_VERSION}") - else: - print(f"✘ Running Pillow {Image.PILLOW_VERSION}; It's recommended you install Pillow-SIMD to speed up image resizing and other operations. See https://docs.fast.ai/performance.html#pillow-simd") - - # CUDA version check - # compatibility table: k: min nvidia ver is required for v: cuda ver - # note: windows nvidia driver version is slightly higher, see: - # https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html - # note: add new entries if pytorch starts supporting new cudaXX - nvidia2cuda = { - "410.00": "10.0", - "384.81": "9.0", - "367.48": "8.0", - } - print("\n*** CUDA status") - if torch.cuda.is_available(): - pynvml = load_pynvml_env() - nvidia_ver = (pynvml.nvmlSystemGetDriverVersion().decode('utf-8') if platform.system() != "Darwin" else "Cannot be determined on OSX yet") - cuda_ver = torch.version.cuda - max_cuda = "8.0" - for k in sorted(nvidia2cuda.keys()): - if version.parse(nvidia_ver) > version.parse(k): max_cuda = nvidia2cuda[k] - if version.parse(str(max_cuda)) <= version.parse(cuda_ver): - print(f"✔ Running the latest CUDA {cuda_ver} with NVIDIA driver {nvidia_ver}") - else: - print(f"✘ You are running pytorch built against cuda {cuda_ver}, your NVIDIA driver {nvidia_ver} supports cuda10. 
See https://pytorch.org/get-started/locally/ to install pytorch built against the faster CUDA version.") - else: - print(f"❓ Running cpu-only torch version, CUDA check is not relevant") - - print("\nRefer to https://docs.fast.ai/performance.html to make sense out of these checks and suggestions.") diff --git a/spaces/Xeaser/rvc-tes/infer_pack/commons.py b/spaces/Xeaser/rvc-tes/infer_pack/commons.py deleted file mode 100644 index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000 --- a/spaces/Xeaser/rvc-tes/infer_pack/commons.py +++ /dev/null @@ -1,166 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, 
signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/XzJosh/Echo-Bert-VITS2/text/symbols.py b/spaces/XzJosh/Echo-Bert-VITS2/text/symbols.py deleted file mode 100644 index 9dfae4e633829f20c4fd767b1c7a9198911ed801..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Echo-Bert-VITS2/text/symbols.py +++ /dev/null @@ -1,51 +0,0 @@ -punctuation = ['!', '?', '…', ",", ".", "'", '-'] -pu_symbols = punctuation + ["SP", "UNK"] -pad = '_' - -# chinese -zh_symbols = ['E', 'En', 'a', 'ai', 'an', 'ang', 'ao', 'b', 'c', 'ch', 'd', 'e', 'ei', 'en', 'eng', 'er', 'f', 'g', 'h', - 'i', 'i0', 'ia', 'ian', 'iang', 'iao', 'ie', 'in', 'ing', 'iong', 'ir', 'iu', 'j', 'k', 'l', 'm', 'n', 'o', - 'ong', - 'ou', 'p', 'q', 'r', 's', 'sh', 't', 'u', 'ua', 'uai', 'uan', 'uang', 'ui', 'un', 'uo', 'v', 'van', 've', 'vn', - 'w', 'x', 'y', 'z', 'zh', - "AA", "EE", "OO"] -num_zh_tones = 6 - -# japanese -ja_symbols = ['I', 'N', 'U', 'a', 'b', 'by', 'ch', 'cl', 'd', 'dy', 'e', 'f', 'g', 'gy', 'h', 'hy', 'i', 'j', 'k', 'ky', - 'm', 'my', 'n', 'ny', 'o', 'p', 'py', 'r', 'ry', 's', 'sh', 't', 'ts', 'u', 'V', 'w', 'y', 'z'] -num_ja_tones = 1 - -# English -en_symbols = ['aa', 'ae', 'ah', 'ao', 'aw', 'ay', 'b', 'ch', 'd', 'dh', 'eh', 'er', 'ey', 'f', 'g', 'hh', 'ih', 'iy', - 'jh', 'k', 'l', 'm', 'n', 'ng', 'ow', 'oy', 'p', 'r', 's', - 'sh', 't', 'th', 'uh', 'uw', 'V', 'w', 'y', 'z', 'zh'] -num_en_tones = 4 - -# combine all symbols -normal_symbols = sorted(set(zh_symbols + ja_symbols + en_symbols)) -symbols = [pad] + normal_symbols + pu_symbols -sil_phonemes_ids = [symbols.index(i) for i in 
pu_symbols] - -# combine all tones -num_tones = num_zh_tones + num_ja_tones + num_en_tones - -# language maps -language_id_map = { - 'ZH': 0, - "JA": 1, - "EN": 2 -} -num_languages = len(language_id_map.keys()) - -language_tone_start_map = { - 'ZH': 0, - "JA": num_zh_tones, - "EN": num_zh_tones + num_ja_tones -} - -if __name__ == '__main__': - a = set(zh_symbols) - b = set(en_symbols) - print(sorted(a&b)) - diff --git a/spaces/Yassine/Stego/stc_interface.h b/spaces/Yassine/Stego/stc_interface.h deleted file mode 100644 index 2da3e9a28fc987049641006dba66431d8e320861..0000000000000000000000000000000000000000 --- a/spaces/Yassine/Stego/stc_interface.h +++ /dev/null @@ -1,13 +0,0 @@ - -#ifndef STC_INTERFACE_H -#define STC_INTERFACE_H - -extern "C" { - int stc_hide(uint cover_length, int* cover, float* costs, - uint message_length, u8* message, int* stego); - - int stc_unhide(uint stego_length, int* stego, - uint message_length, u8* message); -} - -#endif diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/commands/__init__.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/commands/__init__.py deleted file mode 100644 index 902bd46cedc6f2df785c1dc5d2e6bd8ef7c69ca6..0000000000000000000000000000000000000000 --- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/commands/__init__.py +++ /dev/null @@ -1,27 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
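-
-# The abstract base class below defines the contract for `diffusers` CLI
-# subcommands: `register_subcommand(parser)` attaches a subcommand's own
-# argument parser, and `run()` executes the selected command.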
- -from abc import ABC, abstractmethod -from argparse import ArgumentParser - - -class BaseDiffusersCLICommand(ABC): - @staticmethod - @abstractmethod - def register_subcommand(parser: ArgumentParser): - raise NotImplementedError() - - @abstractmethod - def run(self): - raise NotImplementedError() diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/image_dense_captions.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/image_dense_captions.py deleted file mode 100644 index d1f98ff658b5ba2ef246ad2bb504ef88e10fca52..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/vision/grit_src/image_dense_captions.py +++ /dev/null @@ -1,84 +0,0 @@ -import sys - -from detectron2.config import get_cfg - -sys.path.insert( - 0, 'model/vision/grit_src/third_party/CenterNet2/projects/CenterNet2/') -from model.vision.grit_src.third_party.CenterNet2.projects.CenterNet2.centernet.config import add_centernet_config -from model.vision.grit_src.grit.config import add_grit_config - -from model.vision.grit_src.grit.predictor import VisualizationDemo - -# constants -WINDOW_NAME = "GRiT" - - -def dense_pred_to_caption_no_bbox(predictions): - object_description = predictions["instances"].pred_object_descriptions.data - new_caption = "" - for i in range(len(object_description) - 1): - new_caption += (object_description[i] + ", ") - new_caption += (object_description[-1] + ".") - return new_caption - - -def dense_pred_to_caption(predictions): - boxes = predictions["instances"].pred_boxes if predictions[ - "instances"].has("pred_boxes") else None - object_description = predictions["instances"].pred_object_descriptions.data - new_caption = "" - for i in range(len(object_description)): - new_caption += (object_description[i] + ": " + str( - [int(a) - for a in boxes[i].tensor.cpu().detach().numpy()[0]])) + "; " - return new_caption - - -def setup_cfg(args): - cfg = get_cfg() - if args["cpu"]: - cfg.MODEL.DEVICE = "cpu" - add_centernet_config(cfg) - add_grit_config(cfg) - cfg.merge_from_file(args["config_file"]) - cfg.merge_from_list(args["opts"]) - # Set score_threshold for builtin models - cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = args["confidence_threshold"] - cfg.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH = args[ - "confidence_threshold"] - if args["test_task"]: - cfg.MODEL.TEST_TASK = args["test_task"] - cfg.MODEL.BEAM_SIZE = 1 - cfg.MODEL.ROI_HEADS.SOFT_NMS_ENABLED = False - cfg.USE_ACT_CHECKPOINT = False - cfg.freeze() - return cfg - - -def get_parser(device): - arg_dict = { - 'config_file': - "model/vision/grit_src/configs/GRiT_B_DenseCap_ObjectDet.yaml", - 'cpu': - False, - 'confidence_threshold': - 0.5, - 'test_task': - 'DenseCap', - 'opts': - ["MODEL.WEIGHTS", "pretrained_models/grit_b_densecap_objectdet.pth"] - } - if device == "cpu": - arg_dict["cpu"] = True - return arg_dict - - -def image_caption_api(cv2_img, device='cuda'): - args2 = get_parser(device) - cfg = setup_cfg(args2) - demo = VisualizationDemo(cfg) - - predictions, _ = demo.run_on_image(cv2_img) - new_caption = dense_pred_to_caption_no_bbox(predictions) - - return new_caption diff --git a/spaces/YotamNitzan/domain-expansion/torch_utils/persistence.py b/spaces/YotamNitzan/domain-expansion/torch_utils/persistence.py deleted file mode 100644 index 0186cfd97bca0fcb397a7b73643520c1d1105a02..0000000000000000000000000000000000000000 --- a/spaces/YotamNitzan/domain-expansion/torch_utils/persistence.py +++ /dev/null @@ -1,251 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. 
-# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Facilities for pickling Python code alongside other data. - -The pickled code is automatically imported into a separate Python module -during unpickling. This way, any previously exported pickles will remain -usable even if the original code is no longer available, or if the current -version of the code is not consistent with what was originally pickled.""" - -import sys -import pickle -import io -import inspect -import copy -import uuid -import types -import dnnlib - -#---------------------------------------------------------------------------- - -_version = 6 # internal version number -_decorators = set() # {decorator_class, ...} -_import_hooks = [] # [hook_function, ...] -_module_to_src_dict = dict() # {module: src, ...} -_src_to_module_dict = dict() # {src: module, ...} - -#---------------------------------------------------------------------------- - -def persistent_class(orig_class): - r"""Class decorator that extends a given class to save its source code - when pickled. - - Example: - - from torch_utils import persistence - - @persistence.persistent_class - class MyNetwork(torch.nn.Module): - def __init__(self, num_inputs, num_outputs): - super().__init__() - self.fc = MyLayer(num_inputs, num_outputs) - ... - - @persistence.persistent_class - class MyLayer(torch.nn.Module): - ... - - When pickled, any instance of `MyNetwork` and `MyLayer` will save its - source code alongside other internal state (e.g., parameters, buffers, - and submodules). This way, any previously exported pickle will remain - usable even if the class definitions have been modified or are no - longer available. - - The decorator saves the source code of the entire Python module - containing the decorated class. It does *not* save the source code of - any imported modules. Thus, the imported modules must be available - during unpickling, also including `torch_utils.persistence` itself. - - It is ok to call functions defined in the same module from the - decorated class. However, if the decorated class depends on other - classes defined in the same module, they must be decorated as well. - This is illustrated in the above example in the case of `MyLayer`. - - It is also possible to employ the decorator just-in-time before - calling the constructor. For example: - - cls = MyLayer - if want_to_make_it_persistent: - cls = persistence.persistent_class(cls) - layer = cls(num_inputs, num_outputs) - - As an additional feature, the decorator also keeps track of the - arguments that were used to construct each instance of the decorated - class. The arguments can be queried via `obj.init_args` and - `obj.init_kwargs`, and they are automatically pickled alongside other - object state. 
A typical use case is to first unpickle a previous - instance of a persistent class, and then upgrade it to use the latest - version of the source code: - - with open('old_pickle.pkl', 'rb') as f: - old_net = pickle.load(f) - new_net = MyNetwork(*old_obj.init_args, **old_obj.init_kwargs) - misc.copy_params_and_buffers(old_net, new_net, require_all=True) - """ - assert isinstance(orig_class, type) - if is_persistent(orig_class): - return orig_class - - assert orig_class.__module__ in sys.modules - orig_module = sys.modules[orig_class.__module__] - orig_module_src = _module_to_src(orig_module) - - class Decorator(orig_class): - _orig_module_src = orig_module_src - _orig_class_name = orig_class.__name__ - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self._init_args = copy.deepcopy(args) - self._init_kwargs = copy.deepcopy(kwargs) - assert orig_class.__name__ in orig_module.__dict__ - _check_pickleable(self.__reduce__()) - - @property - def init_args(self): - return copy.deepcopy(self._init_args) - - @property - def init_kwargs(self): - return dnnlib.EasyDict(copy.deepcopy(self._init_kwargs)) - - def __reduce__(self): - fields = list(super().__reduce__()) - fields += [None] * max(3 - len(fields), 0) - if fields[0] is not _reconstruct_persistent_obj: - meta = dict(type='class', version=_version, module_src=self._orig_module_src, class_name=self._orig_class_name, state=fields[2]) - fields[0] = _reconstruct_persistent_obj # reconstruct func - fields[1] = (meta,) # reconstruct args - fields[2] = None # state dict - return tuple(fields) - - Decorator.__name__ = orig_class.__name__ - _decorators.add(Decorator) - return Decorator - -#---------------------------------------------------------------------------- - -def is_persistent(obj): - r"""Test whether the given object or class is persistent, i.e., - whether it will save its source code when pickled. - """ - try: - if obj in _decorators: - return True - except TypeError: - pass - return type(obj) in _decorators # pylint: disable=unidiomatic-typecheck - -#---------------------------------------------------------------------------- - -def import_hook(hook): - r"""Register an import hook that is called whenever a persistent object - is being unpickled. A typical use case is to patch the pickled source - code to avoid errors and inconsistencies when the API of some imported - module has changed. - - The hook should have the following signature: - - hook(meta) -> modified meta - - `meta` is an instance of `dnnlib.EasyDict` with the following fields: - - type: Type of the persistent object, e.g. `'class'`. - version: Internal version number of `torch_utils.persistence`. - module_src Original source code of the Python module. - class_name: Class name in the original Python module. - state: Internal state of the object. - - Example: - - @persistence.import_hook - def wreck_my_network(meta): - if meta.class_name == 'MyNetwork': - print('MyNetwork is being imported. I will wreck it!') - meta.module_src = meta.module_src.replace("True", "False") - return meta - """ - assert callable(hook) - _import_hooks.append(hook) - -#---------------------------------------------------------------------------- - -def _reconstruct_persistent_obj(meta): - r"""Hook that is called internally by the `pickle` module to unpickle - a persistent object. 
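-    (`meta` is the dict assembled by `Decorator.__reduce__` above, carrying
-    `type`, `version`, `module_src`, `class_name`, and `state`; every
-    registered import hook gets a chance to rewrite it before the object is
-    rebuilt.)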
- """ - meta = dnnlib.EasyDict(meta) - meta.state = dnnlib.EasyDict(meta.state) - for hook in _import_hooks: - meta = hook(meta) - assert meta is not None - - assert meta.version == _version - module = _src_to_module(meta.module_src) - - assert meta.type == 'class' - orig_class = module.__dict__[meta.class_name] - decorator_class = persistent_class(orig_class) - obj = decorator_class.__new__(decorator_class) - - setstate = getattr(obj, '__setstate__', None) - if callable(setstate): - setstate(meta.state) # pylint: disable=not-callable - else: - obj.__dict__.update(meta.state) - return obj - -#---------------------------------------------------------------------------- - -def _module_to_src(module): - r"""Query the source code of a given Python module. - """ - src = _module_to_src_dict.get(module, None) - if src is None: - src = inspect.getsource(module) - _module_to_src_dict[module] = src - _src_to_module_dict[src] = module - return src - -def _src_to_module(src): - r"""Get or create a Python module for the given source code. - """ - module = _src_to_module_dict.get(src, None) - if module is None: - module_name = "_imported_module_" + uuid.uuid4().hex - module = types.ModuleType(module_name) - sys.modules[module_name] = module - _module_to_src_dict[module] = src - _src_to_module_dict[src] = module - exec(src, module.__dict__) # pylint: disable=exec-used - return module - -#---------------------------------------------------------------------------- - -def _check_pickleable(obj): - r"""Check that the given object is pickleable, raising an exception if - it is not. This function is expected to be considerably more efficient - than actually pickling the object. - """ - def recurse(obj): - if isinstance(obj, (list, tuple, set)): - return [recurse(x) for x in obj] - if isinstance(obj, dict): - return [[recurse(x), recurse(y)] for x, y in obj.items()] - if isinstance(obj, (str, int, float, bool, bytes, bytearray)): - return None # Python primitive types are pickleable. - if f'{type(obj).__module__}.{type(obj).__name__}' in ['numpy.ndarray', 'torch.Tensor']: - return None # NumPy arrays and PyTorch tensors are pickleable. - if is_persistent(obj): - return None # Persistent objects are pickleable, by virtue of the constructor check. - return obj - with io.BytesIO() as f: - pickle.dump(recurse(obj), f) - -#---------------------------------------------------------------------------- diff --git a/spaces/abdvl/datahub_qa_bot/docs/how/add-custom-data-platform.md b/spaces/abdvl/datahub_qa_bot/docs/how/add-custom-data-platform.md deleted file mode 100644 index eee120e960260d071cef955a41a5f9ae2c43ff4d..0000000000000000000000000000000000000000 --- a/spaces/abdvl/datahub_qa_bot/docs/how/add-custom-data-platform.md +++ /dev/null @@ -1,105 +0,0 @@ -# Adding a custom Dataset Data Platform - -A Data Platform represents a 3rd party system from which [Metadata Entities](https://datahubproject.io/docs/metadata-modeling/metadata-model/) are ingested from. Each Dataset that is ingested is associated with a single platform, for example MySQL, Snowflake, Redshift, or BigQuery. - -There are some cases in which you may want to add a custom Data Platform identifier for a Dataset. For example, -you have an internal data system that is not widely available, or you're using a Data Platform that is not natively supported by DataHub. - -To do so, you can either change the default Data Platforms that are ingested into DataHub *prior to deployment time*, or ingest -a new Data Platform at runtime. 
You can use the first option if you're able to periodically merge new Data Platforms from the OSS -repository into your own. It will cause the custom Data Platform to be re-ingested each time you deploy DataHub, meaning that -your custom Data Platform will persist even between full cleans (nukes) of DataHub. - -## Changing Default Data Platforms - -Simply make a change to the [data_platforms.json](https://github.com/datahub-project/datahub/blob/master/metadata-service/war/src/main/resources/boot/data_platforms.json) -file to add a custom Data Platform: - -``` -[ - ..... - { - "urn": "urn:li:dataPlatform:MyCustomDataPlatform", - "aspect": { - "name": "My Custom Data Platform", - "type": "OTHERS", - "logoUrl": "https://" - } - } -] -``` - -## Ingesting Data Platform at runtime - -You can also ingest a Data Platform at runtime using either a file-based ingestion source, or using a normal curl to the -[GMS Rest.li APIs](https://datahubproject.io/docs/metadata-service#restli-api). - -### Using File-Based Ingestion Recipe - -**Step 1** Define a JSON file containing your custom Data Platform - -``` -// my-custom-data-platform.json -[ - { - "auditHeader": null, - "proposedSnapshot": { - "com.linkedin.pegasus2avro.metadata.snapshot.DataPlatformSnapshot": { - "urn": "urn:li:dataPlatform:MyCustomDataPlatform", - "aspects": [ - { - "com.linkedin.pegasus2avro.dataplatform.DataPlatformInfo": { - "datasetNameDelimiter": "/", - "name": "My Custom Data Platform", - "type": "OTHERS", - "logoUrl": "https://" - } - } - ] - } - }, - "proposedDelta": null - } -] -``` - -**Step 2**: Define an [ingestion recipe](https://datahubproject.io/docs/metadata-ingestion/#recipes) - -``` ---- -# see https://datahubproject.io/docs/generated/ingestion/sources/file for complete documentation -source: - type: "file" - config: - filename: "./my-custom-data-platform.json" - -# see https://datahubproject.io/docs/metadata-ingestion/sink_docs/datahub for complete documentation -sink: - ... -``` - -### Using Rest.li API - -You can also issue a normal curl request to the Rest.li `/entities` API to add a custom Data Platform. 
- -``` -curl 'http://localhost:8080/entities?action=ingest' -X POST --data '{ - "entity":{ - "value":{ - "com.linkedin.metadata.snapshot.DataPlatformSnapshot":{ - "aspects":[ - { - "com.linkedin.dataplatform.DataPlatformInfo":{ - "datasetNameDelimiter": "/", - "name": "My Custom Data Platform", - "type": "OTHERS", - "logoUrl": "https://" - } - } - ], - "urn":"urn:li:dataPlatform:MyCustomDataPlatform" - } - } - } -}' -``` \ No newline at end of file diff --git a/spaces/abhishek/sketch-to-image/annotator/inpainting/__init__.py b/spaces/abhishek/sketch-to-image/annotator/inpainting/__init__.py deleted file mode 100644 index c84462036d7ad95500d7021035d6fa822ccefbef..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/inpainting/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -import numpy as np - -class Inpainter: - def __call__(self, img, height_top_mask, height_down_mask, width_left_mask, width_right_mask): - h = img.shape[0] - w = img.shape[1] - h_top_mask = int(float(h) / 100.0 * float(height_top_mask)) - h_down_mask = int(float(h) / 100.0 * float(height_down_mask)) - - w_left_mask = int(float(w) / 100.0 * float(width_left_mask)) - w_right_mask = int(float(w) / 100.0 * float(width_right_mask)) - - img_new = img - img_new[h_top_mask:h_down_mask, w_left_mask:w_right_mask] = 0 - img_new = img_new.astype('ubyte') - return img_new diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/apis/test.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/apis/test.py deleted file mode 100644 index e574eb7da04f09a59cf99ff953c36468ae87a326..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/apis/test.py +++ /dev/null @@ -1,238 +0,0 @@ -import os.path as osp -import pickle -import shutil -import tempfile - -import annotator.uniformer.mmcv as mmcv -import numpy as np -import torch -import torch.distributed as dist -from annotator.uniformer.mmcv.image import tensor2imgs -from annotator.uniformer.mmcv.runner import get_dist_info - - -def np2tmp(array, temp_file_name=None): - """Save ndarray to local numpy file. - - Args: - array (ndarray): Ndarray to save. - temp_file_name (str): Numpy file name. If 'temp_file_name=None', this - function will generate a file name with tempfile.NamedTemporaryFile - to save ndarray. Default: None. - - Returns: - str: The numpy file name. - """ - - if temp_file_name is None: - temp_file_name = tempfile.NamedTemporaryFile( - suffix='.npy', delete=False).name - np.save(temp_file_name, array) - return temp_file_name - - -def single_gpu_test(model, - data_loader, - show=False, - out_dir=None, - efficient_test=False, - opacity=0.5): - """Test with single GPU. - - Args: - model (nn.Module): Model to be tested. - data_loader (utils.data.Dataloader): Pytorch data loader. - show (bool): Whether show results during inference. Default: False. - out_dir (str, optional): If specified, the results will be dumped into - the directory to save output results. - efficient_test (bool): Whether save the results as local numpy files to - save CPU memory during evaluation. Default: False. - opacity(float): Opacity of painted segmentation map. - Default 0.5. - Must be in (0, 1] range. - Returns: - list: The prediction results. 
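-        Example (an illustrative sketch; assumes `model` and `data_loader`
-        are already built):
-            >>> results = single_gpu_test(model, data_loader, show=False)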
- """ - - model.eval() - results = [] - dataset = data_loader.dataset - prog_bar = mmcv.ProgressBar(len(dataset)) - for i, data in enumerate(data_loader): - with torch.no_grad(): - result = model(return_loss=False, **data) - - if show or out_dir: - img_tensor = data['img'][0] - img_metas = data['img_metas'][0].data[0] - imgs = tensor2imgs(img_tensor, **img_metas[0]['img_norm_cfg']) - assert len(imgs) == len(img_metas) - - for img, img_meta in zip(imgs, img_metas): - h, w, _ = img_meta['img_shape'] - img_show = img[:h, :w, :] - - ori_h, ori_w = img_meta['ori_shape'][:-1] - img_show = mmcv.imresize(img_show, (ori_w, ori_h)) - - if out_dir: - out_file = osp.join(out_dir, img_meta['ori_filename']) - else: - out_file = None - - model.module.show_result( - img_show, - result, - palette=dataset.PALETTE, - show=show, - out_file=out_file, - opacity=opacity) - - if isinstance(result, list): - if efficient_test: - result = [np2tmp(_) for _ in result] - results.extend(result) - else: - if efficient_test: - result = np2tmp(result) - results.append(result) - - batch_size = len(result) - for _ in range(batch_size): - prog_bar.update() - return results - - -def multi_gpu_test(model, - data_loader, - tmpdir=None, - gpu_collect=False, - efficient_test=False): - """Test model with multiple gpus. - - This method tests model with multiple gpus and collects the results - under two different modes: gpu and cpu modes. By setting 'gpu_collect=True' - it encodes results to gpu tensors and use gpu communication for results - collection. On cpu mode it saves the results on different gpus to 'tmpdir' - and collects them by the rank 0 worker. - - Args: - model (nn.Module): Model to be tested. - data_loader (utils.data.Dataloader): Pytorch data loader. - tmpdir (str): Path of directory to save the temporary results from - different gpus under cpu mode. - gpu_collect (bool): Option to use either gpu or cpu to collect results. - efficient_test (bool): Whether save the results as local numpy files to - save CPU memory during evaluation. Default: False. - - Returns: - list: The prediction results. 
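-        Example (an illustrative sketch; assumes a distributed launch):
-            >>> # gather predictions across ranks via GPU communication
-            >>> results = multi_gpu_test(model, data_loader, gpu_collect=True)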
- """ - - model.eval() - results = [] - dataset = data_loader.dataset - rank, world_size = get_dist_info() - if rank == 0: - prog_bar = mmcv.ProgressBar(len(dataset)) - for i, data in enumerate(data_loader): - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - - if isinstance(result, list): - if efficient_test: - result = [np2tmp(_) for _ in result] - results.extend(result) - else: - if efficient_test: - result = np2tmp(result) - results.append(result) - - if rank == 0: - batch_size = data['img'][0].size(0) - for _ in range(batch_size * world_size): - prog_bar.update() - - # collect results from all ranks - if gpu_collect: - results = collect_results_gpu(results, len(dataset)) - else: - results = collect_results_cpu(results, len(dataset), tmpdir) - return results - - -def collect_results_cpu(result_part, size, tmpdir=None): - """Collect results with CPU.""" - rank, world_size = get_dist_info() - # create a tmp dir if it is not specified - if tmpdir is None: - MAX_LEN = 512 - # 32 is whitespace - dir_tensor = torch.full((MAX_LEN, ), - 32, - dtype=torch.uint8, - device='cuda') - if rank == 0: - tmpdir = tempfile.mkdtemp() - tmpdir = torch.tensor( - bytearray(tmpdir.encode()), dtype=torch.uint8, device='cuda') - dir_tensor[:len(tmpdir)] = tmpdir - dist.broadcast(dir_tensor, 0) - tmpdir = dir_tensor.cpu().numpy().tobytes().decode().rstrip() - else: - mmcv.mkdir_or_exist(tmpdir) - # dump the part result to the dir - mmcv.dump(result_part, osp.join(tmpdir, 'part_{}.pkl'.format(rank))) - dist.barrier() - # collect all parts - if rank != 0: - return None - else: - # load results of all parts from tmp dir - part_list = [] - for i in range(world_size): - part_file = osp.join(tmpdir, 'part_{}.pkl'.format(i)) - part_list.append(mmcv.load(part_file)) - # sort the results - ordered_results = [] - for res in zip(*part_list): - ordered_results.extend(list(res)) - # the dataloader may pad some samples - ordered_results = ordered_results[:size] - # remove tmp dir - shutil.rmtree(tmpdir) - return ordered_results - - -def collect_results_gpu(result_part, size): - """Collect results with GPU.""" - rank, world_size = get_dist_info() - # dump result part to tensor with pickle - part_tensor = torch.tensor( - bytearray(pickle.dumps(result_part)), dtype=torch.uint8, device='cuda') - # gather all result part tensor shape - shape_tensor = torch.tensor(part_tensor.shape, device='cuda') - shape_list = [shape_tensor.clone() for _ in range(world_size)] - dist.all_gather(shape_list, shape_tensor) - # padding result part tensor to max length - shape_max = torch.tensor(shape_list).max() - part_send = torch.zeros(shape_max, dtype=torch.uint8, device='cuda') - part_send[:shape_tensor[0]] = part_tensor - part_recv_list = [ - part_tensor.new_zeros(shape_max) for _ in range(world_size) - ] - # gather all result part - dist.all_gather(part_recv_list, part_send) - - if rank == 0: - part_list = [] - for recv, shape in zip(part_recv_list, shape_list): - part_list.append( - pickle.loads(recv[:shape[0]].cpu().numpy().tobytes())) - # sort the results - ordered_results = [] - for res in zip(*part_list): - ordered_results.extend(list(res)) - # the dataloader may pad some samples - ordered_results = ordered_results[:size] - return ordered_results diff --git a/spaces/adirik/stylemc-demo/torch_utils/ops/bias_act.cpp b/spaces/adirik/stylemc-demo/torch_utils/ops/bias_act.cpp deleted file mode 100644 index 5d2425d8054991a8e8b6f7a940fd0ff7fa0bb330..0000000000000000000000000000000000000000 --- 
a/spaces/adirik/stylemc-demo/torch_utils/ops/bias_act.cpp +++ /dev/null @@ -1,99 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include <torch/extension.h> -#include <ATen/cuda/CUDAContext.h> -#include <c10/cuda/CUDAGuard.h> -#include "bias_act.h" - -//------------------------------------------------------------------------ - -static bool has_same_layout(torch::Tensor x, torch::Tensor y) -{ - if (x.dim() != y.dim()) - return false; - for (int64_t i = 0; i < x.dim(); i++) - { - if (x.size(i) != y.size(i)) - return false; - if (x.size(i) >= 2 && x.stride(i) != y.stride(i)) - return false; - } - return true; -} - -//------------------------------------------------------------------------ - -static torch::Tensor bias_act(torch::Tensor x, torch::Tensor b, torch::Tensor xref, torch::Tensor yref, torch::Tensor dy, int grad, int dim, int act, float alpha, float gain, float clamp) -{ - // Validate arguments. - TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device"); - TORCH_CHECK(b.numel() == 0 || (b.dtype() == x.dtype() && b.device() == x.device()), "b must have the same dtype and device as x"); - TORCH_CHECK(xref.numel() == 0 || (xref.sizes() == x.sizes() && xref.dtype() == x.dtype() && xref.device() == x.device()), "xref must have the same shape, dtype, and device as x"); - TORCH_CHECK(yref.numel() == 0 || (yref.sizes() == x.sizes() && yref.dtype() == x.dtype() && yref.device() == x.device()), "yref must have the same shape, dtype, and device as x"); - TORCH_CHECK(dy.numel() == 0 || (dy.sizes() == x.sizes() && dy.dtype() == x.dtype() && dy.device() == x.device()), "dy must have the same dtype and device as x"); - TORCH_CHECK(x.numel() <= INT_MAX, "x is too large"); - TORCH_CHECK(b.dim() == 1, "b must have rank 1"); - TORCH_CHECK(b.numel() == 0 || (dim >= 0 && dim < x.dim()), "dim is out of bounds"); - TORCH_CHECK(b.numel() == 0 || b.numel() == x.size(dim), "b has wrong number of elements"); - TORCH_CHECK(grad >= 0, "grad must be non-negative"); - - // Validate layout. - TORCH_CHECK(x.is_non_overlapping_and_dense(), "x must be non-overlapping and dense"); - TORCH_CHECK(b.is_contiguous(), "b must be contiguous"); - TORCH_CHECK(xref.numel() == 0 || has_same_layout(xref, x), "xref must have the same layout as x"); - TORCH_CHECK(yref.numel() == 0 || has_same_layout(yref, x), "yref must have the same layout as x"); - TORCH_CHECK(dy.numel() == 0 || has_same_layout(dy, x), "dy must have the same layout as x"); - - // Create output tensor. - const at::cuda::OptionalCUDAGuard device_guard(device_of(x)); - torch::Tensor y = torch::empty_like(x); - TORCH_CHECK(has_same_layout(y, x), "y must have the same layout as x"); - - // Initialize CUDA kernel parameters. - bias_act_kernel_params p; - p.x = x.data_ptr(); - p.b = (b.numel()) ? b.data_ptr() : NULL; - p.xref = (xref.numel()) ? xref.data_ptr() : NULL; - p.yref = (yref.numel()) ? yref.data_ptr() : NULL; - p.dy = (dy.numel()) ? dy.data_ptr() : NULL; - p.y = y.data_ptr(); - p.grad = grad; - p.act = act; - p.alpha = alpha; - p.gain = gain; - p.clamp = clamp; - p.sizeX = (int)x.numel(); - p.sizeB = (int)b.numel(); - p.stepB = (b.numel()) ? (int)x.stride(dim) : 1; - - // Choose CUDA kernel.
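-    // AT_DISPATCH_FLOATING_TYPES_AND_HALF below switches on x's runtime dtype
-    // and runs its lambda with scalar_t bound to float, double, or c10::Half,
-    // so the templated choose_bias_act_kernel<scalar_t>() picks a kernel
-    // specialized for that element type; the "upfirdn2d_cuda" string is only
-    // used in the error message the macro raises for unsupported dtypes.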
- void* kernel; - AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "upfirdn2d_cuda", [&] - { - kernel = choose_bias_act_kernel(p); - }); - TORCH_CHECK(kernel, "no CUDA kernel found for the specified activation func"); - - // Launch CUDA kernel. - p.loopX = 4; - int blockSize = 4 * 32; - int gridSize = (p.sizeX - 1) / (p.loopX * blockSize) + 1; - void* args[] = {&p}; - AT_CUDA_CHECK(cudaLaunchKernel(kernel, gridSize, blockSize, args, 0, at::cuda::getCurrentCUDAStream())); - return y; -} - -//------------------------------------------------------------------------ - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) -{ - m.def("bias_act", &bias_act); -} - -//------------------------------------------------------------------------ diff --git a/spaces/akhaliq/AppleNeuralHash2ONNX/app.py b/spaces/akhaliq/AppleNeuralHash2ONNX/app.py deleted file mode 100644 index 7cf29e91b4915c3f04ec644fbb2b8b8564729883..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/AppleNeuralHash2ONNX/app.py +++ /dev/null @@ -1,103 +0,0 @@ -# Copyright 2021 Asuhariet Ygvar -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express -# or implied. See the License for the specific language governing -# permissions and limitations under the License. - -import sys -import onnxruntime -import numpy as np -from PIL import Image -import gradio as gr -import torch -import os -os.system('wget https://www.dropbox.com/s/ggf6ok63u7hywhc/neuralhash_128x96_seed1.dat') -os.system('wget https://www.dropbox.com/s/1jug4wtevz1rol0/model.onnx') - - -torch.hub.download_url_to_file('https://cdn.pixabay.com/photo/2017/09/11/15/58/sunset-2739472_1280.jpg', 'sunset.jpg') -torch.hub.download_url_to_file('https://i.imgur.com/W8aXbd2.png', 'rotate.png') - -torch.hub.download_url_to_file('https://user-images.githubusercontent.com/1328/129860794-e7eb0132-d929-4c9d-b92e-4e4faba9e849.png', 'dog.png') -torch.hub.download_url_to_file('https://user-images.githubusercontent.com/1328/129860810-f414259a-3253-43e3-9e8e-a0ef78372233.png', 'same.png') - -torch.hub.download_url_to_file('https://cdn.pixabay.com/photo/2021/08/23/17/53/cat-6568422_1280.jpg', 'cat1.jpg') -torch.hub.download_url_to_file('https://i.imgur.com/fMoVhSz.png', 'cat2.png') - -torch.hub.download_url_to_file('https://cdn.pixabay.com/photo/2021/08/10/09/41/lesser-sand-plover-6535531_1280.jpg', 'bird1.jpg') -torch.hub.download_url_to_file('https://i.imgur.com/jDgKAC7.png', 'bird2.png') - - - - - - -# Load ONNX model -session = onnxruntime.InferenceSession('model.onnx') - -# Load output hash matrix -seed1 = open('neuralhash_128x96_seed1.dat', 'rb').read()[128:] -seed1 = np.frombuffer(seed1, dtype=np.float32) -seed1 = seed1.reshape([96, 128]) - -pre_text = "
<pre>{}</pre>
" - -# Preprocess image -def inference(img, img2): - image = Image.open(img.name).convert('RGB') - image = image.resize([360, 360]) - arr = np.array(image).astype(np.float32) / 255.0 - arr = arr * 2.0 - 1.0 - arr = arr.transpose(2, 0, 1).reshape([1, 3, 360, 360]) - - # Run model - inputs = {session.get_inputs()[0].name: arr} - outs = session.run(None, inputs) - - # Convert model output to hex hash - hash_output = seed1.dot(outs[0].flatten()) - hash_bits = ''.join(['1' if it >= 0 else '0' for it in hash_output]) - hash_hex = '{:0{}x}'.format(int(hash_bits, 2), len(hash_bits) // 4) - - image2 = Image.open(img2.name).convert('RGB') - image2 = image2.resize([360, 360]) - arr2 = np.array(image2).astype(np.float32) / 255.0 - arr2 = arr2 * 2.0 - 1.0 - arr2 = arr2.transpose(2, 0, 1).reshape([1, 3, 360, 360]) - - # Run model - inputs2 = {session.get_inputs()[0].name: arr2} - outs2 = session.run(None, inputs2) - - # Convert model output to hex hash - hash_output2 = seed1.dot(outs2[0].flatten()) - hash_bits2 = ''.join(['1' if it >= 0 else '0' for it in hash_output2]) - hash_hex2 = '{:0{}x}'.format(int(hash_bits2, 2), len(hash_bits2) // 4) - - if hash_hex == hash_hex2: - return pre_text.format("Same Hash"), pre_text.format(hash_hex), pre_text.format(hash_hex2) - return pre_text.format("Different Hash"), pre_text.format(hash_hex), pre_text.format(hash_hex2) - -title = "AppleNeuralHash" -description = "Gradio demo for Apple NeuralHash, a perceptual hashing method for images based on neural networks. It can tolerate image resize and compression. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below." -article = "
CSAM Detection Technical Summary | Github Repo | Working Collision example images from github issue
" -examples = [['sunset.jpg','rotate.png'],['dog.png','same.png'],['cat1.jpg','cat2.png'],['bird1.jpg','bird2.png']] - -gr.Interface( - inference, - [gr.inputs.Image(type="file", label="Input Image"),gr.inputs.Image(type="file", label="Input Image")], - [gr.outputs.HTML(label="Comparison.."), gr.outputs.HTML(label="First Hash"), gr.outputs.HTML(label="Second Hash")], - title=title, - description=description, - article=article, - examples=examples, - allow_flagging=False - ).launch(debug=True) \ No newline at end of file diff --git a/spaces/akhaliq/Music_Source_Separation/scripts/2_create_indexes/musdb18/create_indexes.sh b/spaces/akhaliq/Music_Source_Separation/scripts/2_create_indexes/musdb18/create_indexes.sh deleted file mode 100644 index fc571ebd1971ce44b973b878a83ac54ebfb47948..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Music_Source_Separation/scripts/2_create_indexes/musdb18/create_indexes.sh +++ /dev/null @@ -1,18 +0,0 @@ -#!/bin/bash -WORKSPACE=${1:-"./workspaces/bytesep"} # Default workspace directory - -echo "WORKSPACE=${WORKSPACE}" - -# --- Create indexes for vocals and accompaniment --- -INDEXES_CONFIG_YAML="scripts/2_create_indexes/musdb18/configs/vocals-accompaniment,sr=44100,chn=2.yaml" - -python3 bytesep/dataset_creation/create_indexes/create_indexes.py \ - --workspace=$WORKSPACE \ - --config_yaml=$INDEXES_CONFIG_YAML - -# --- Create indexes for vocals, bass, drums, and other --- -INDEXES_CONFIG_YAML="scripts/2_create_indexes/musdb18/configs/vocals-bass-drums-other,sr=44100,chn=2.yaml" - -python3 bytesep/dataset_creation/create_indexes/create_indexes.py \ - --workspace=$WORKSPACE \ - --config_yaml=$INDEXES_CONFIG_YAML diff --git a/spaces/andzhk/PGNInfo-test/app.py b/spaces/andzhk/PGNInfo-test/app.py deleted file mode 100644 index 45d5b84f2c25df4c7c2255230f31bad80877f725..0000000000000000000000000000000000000000 --- a/spaces/andzhk/PGNInfo-test/app.py +++ /dev/null @@ -1,79 +0,0 @@ -import gradio as gr -from PIL import Image -from urllib.request import Request, urlopen - -def display_image_from_url(url, input_image): - if url == '' and input_image is None: - return None, "", "" - - image = None - if url != '': - req = Request( - url=url, - headers={'User-Agent': 'Mozilla/5.0'} - ) - res = urlopen(req) - image = Image.open(res) - image.load() - - - if input_image is not None: - image = input_image - - parameters = "Parameters have been erased from this image or unsupported format" - if 'parameters' in image.info: - - parameters = image.info['parameters'] - - custom_notes = "" - if 'custom_notes' in image.info: - custom_notes = image.info['custom_notes'] - - return image, parameters, custom_notes, image.info - -blocks = gr.Blocks(css="#out_image {height: 400px}") -with blocks as png_info: - with gr.Row(): - gr.Markdown( - """ - Report any issues on the [GitHub](https://github.com/andzhik/png-params) page of this project - """) - with gr.Row().style(equal_height=False): - with gr.Column(scale=1): - in_url = gr.Textbox(label="Source URL") - in_image = gr.Image(label="Source Image", type='pil') - with gr.Row(): - btn_submit = gr.Button("Submit", variant="primary") - - with gr.Column(scale=2): - with gr.Accordion("Image is here") as acc_image: - out_image = gr.Image(type='pil', elem_id="out_image") - - out_info = gr.Textbox(label="Generation Parameters") - - out_notes = gr.TextArea(label="Custom Notes", interactive=True) - # download_file = gr.File() - btn_save_notes = gr.Button("Save Notes") - # btn_download = gr.Button("Download Image") - - with 
gr.Accordion("Metadata", open=False): - out_meta = gr.Textbox() - - btn_submit.click(fn=display_image_from_url, - inputs=[in_url, in_image], - outputs=[out_image, out_info, out_notes, out_meta]) - - def save_notes(image, custom_notes): - print(custom_notes) - image.info["custom_notes"] = custom_notes - return image - - btn_save_notes.click(fn=save_notes,inputs=[out_image, out_notes], outputs=[out_image]) - - # def download_image(image: Image): - # print(image.info["custom_notes"]) - # image.save() - - # btn_download.click(None, [out_image], _js="(image)=>{gradioApp().getElementById('out_image')}") - -png_info.launch() diff --git a/spaces/aodianyun/stable-diffusion-webui/webui-user.sh b/spaces/aodianyun/stable-diffusion-webui/webui-user.sh deleted file mode 100644 index bfa53cb7c67083ec0a01bfa420269af4d85c6c94..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/webui-user.sh +++ /dev/null @@ -1,46 +0,0 @@ -#!/bin/bash -######################################################### -# Uncomment and change the variables below to your need:# -######################################################### - -# Install directory without trailing slash -#install_dir="/home/$(whoami)" - -# Name of the subdirectory -#clone_dir="stable-diffusion-webui" - -# Commandline arguments for webui.py, for example: export COMMANDLINE_ARGS="--medvram --opt-split-attention" -#export COMMANDLINE_ARGS="" - -# python3 executable -#python_cmd="python3" - -# git executable -#export GIT="git" - -# python3 venv without trailing slash (defaults to ${install_dir}/${clone_dir}/venv) -#venv_dir="venv" - -# script to launch to start the app -#export LAUNCH_SCRIPT="launch.py" - -# install command for torch -#export TORCH_COMMAND="pip install torch==1.12.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113" - -# Requirements file to use for stable-diffusion-webui -#export REQS_FILE="requirements_versions.txt" - -# Fixed git repos -#export K_DIFFUSION_PACKAGE="" -#export GFPGAN_PACKAGE="" - -# Fixed git commits -#export STABLE_DIFFUSION_COMMIT_HASH="" -#export TAMING_TRANSFORMERS_COMMIT_HASH="" -#export CODEFORMER_COMMIT_HASH="" -#export BLIP_COMMIT_HASH="" - -# Uncomment to enable accelerated launch -#export ACCELERATE="True" - -########################################### diff --git a/spaces/argilla/argilla-streamlit-customs/my_app/wip/guideline-and-comment-ability.py b/spaces/argilla/argilla-streamlit-customs/my_app/wip/guideline-and-comment-ability.py deleted file mode 100644 index 187fd05d3ef4098ef5bd2094d01bce30ae7bbf93..0000000000000000000000000000000000000000 --- a/spaces/argilla/argilla-streamlit-customs/my_app/wip/guideline-and-comment-ability.py +++ /dev/null @@ -1,177 +0,0 @@ -# import os - -# import argilla as rg -# import streamlit as st -# import streamlit_analytics -# from _utils import login_workflow -# from text_highlighter import text_highlighter - -# st.set_page_config( -# page_title="Argilla Annotation Guideline and Comment Ability", -# page_icon=":memo:", -# layout="wide", -# ) - -# # st.image("https://docs.argilla.io/en/latest/_static/images/logo-light-mode.svg") -# st.title("Annotation Comment and Note support") - -# # login workflow -# login_workflow() - -# st.error( -# "WIP: Work in progress. Check our https://github.com/argilla-io/argilla-streamlit" -# " to open a PR." 
-# ) -# st.stop() -# dataset = st.text_input("Dataset Name") - -# if dataset: -# records = rg.load(name=dataset, limit=1) - -# if records: -# record = records[0] -# if isinstance(record, rg.TokenClassificationRecord) or isinstance( -# record, rg.TextClassificationRecord -# ): -# labels = st.text_input("Labels") -# split_labels = labels.split(",") -# split_labels = [label.strip() for label in split_labels] - -# if not any(split_labels): -# st.warning("No labels provided") -# st.stop() -# if isinstance(record, rg.TokenClassificationRecord): -# multi_label = st.radio("multi label", [False, True], horizontal=True) -# else: -# multi_label = False -# else: -# st.warning("No dataset provided") -# st.stop() - -# st.write("This is an annotation guideline. Label A is for cats, label B is for dogs.") -# query = st.text_input("Query", value="status: Default", key="query") -# if not query: -# query = None - -# records = rg.load(name=dataset, limit=1, query=query) - - -# def form_callback(dataset, query): -# rg.log(st.session_state.rec, dataset) -# st.session_state.rec = rg.load(name=dataset, limit=1, query=query)[0] -# if st.session_state.rec.inputs is not None: -# st.session_state.inputs = "\n".join( -# [ -# f"**{key}** \n\n {value}" -# for key, value in st.session_state.rec.inputs.items() -# ] -# ) -# else: -# st.session_state.inputs = st.session_state.rec.text -# st.session_state.comment = st.session_state.rec.metadata.get("comment", "") -# if st.session_state.rec.annotation: -# st.session_state["annotation"] = st.session_state.rec.annotation - -# st.success("Saved") - - -# if records: -# with st.form(key="my_form"): -# records = records[0] -# st.session_state.rec = records -# if isinstance(st.session_state.rec, rg.TokenClassificationRecord): -# if st.session_state.rec.annotation: -# old_annotation = [ -# { -# "start": an[1], -# "end": an[2], -# "tag": an[0], -# "text": st.session_state.rec.text[an[1] : an[2]], -# } -# for an in st.session_state.rec.annotation -# ] -# else: -# old_annotation = None -# annotation = text_highlighter( -# text=st.session_state.rec.text, -# labels=split_labels, -# annotations=old_annotation, -# ) -# formatted_annotation = [ -# (an["tag"], an["start"], an["end"]) for an in annotation -# ] - -# elif isinstance(st.session_state.rec, rg.TextClassificationRecord): -# if st.session_state.rec.inputs is not None: -# st.text_area( -# "Text", -# value="\n".join( -# [ -# f"{key}: {value}" -# for key, value in st.session_state.rec.inputs.items() -# ] -# ), -# key="inputs", -# disabled=True, -# ) -# else: -# st.text_area( -# "Text", value=st.session_state.rec.text, key="inputs", disabled=True -# ) - -# if st.session_state.rec.multi_label: -# annotation = st.multiselect( -# "annotation", -# split_labels, -# st.session_state.rec.annotation, -# key="annotation", -# ) -# else: -# if st.session_state.rec.annotation: -# if st.session_state.rec.annotation in split_labels: -# index = split_labels.index(st.session_state.rec.annotation) -# else: -# st.error(st.session_state.rec.annotation + " not in labels") -# else: -# index = 0 -# annotation = st.radio( -# "annotation", -# split_labels, -# index, -# horizontal=True, -# key="annotation", -# ) - -# elif isinstance(st.session_state.rec, rg.Text2TextRecord): -# st.write(st.session_state.rec.text) -# st.text_area(st.session_state.rec.annotation) - -# try: -# st.session_state.rec.__class__(**st.session_state.rec.__dict__) -# st.session_state.rec.annotation = annotation -# except Exception as e: -# st.write(e) - -# if st.session_state.rec.metadata: 
-# if "comment" in st.session_state.rec.metadata: -# input_comment = st.session_state.rec.metadata["comment"] -# else: -# input_comment = "" -# else: -# input_comment = "" - -# comment = st.text_input("comment", value=input_comment, key="comment") -# if st.session_state.rec.metadata: -# st.session_state.rec.metadata["comment/note"] = comment -# else: -# st.session_state.rec.metadata = {"comment": comment} - -# save = st.form_submit_button( -# "Save", on_click=form_callback, args=(dataset, query) -# ) - -# else: -# st.warning("No records found") - - -# \ No newline at end of file diff --git a/spaces/arnold-anand/chat-with-pdf/README.md b/spaces/arnold-anand/chat-with-pdf/README.md deleted file mode 100644 index cb7e43391015859d7f5cf03f7778064edd3ea8e1..0000000000000000000000000000000000000000 --- a/spaces/arnold-anand/chat-with-pdf/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Chat With Pdf -emoji: 🚀 -colorFrom: indigo -colorTo: purple -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: gpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/__init__.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/selection_histogram.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/selection_histogram.py deleted file mode 100644 index 936217814022de54fff1484b238a8fa0da21368e..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/selection_histogram.py +++ /dev/null @@ -1,32 +0,0 @@ -""" -Selection Histogram -=================== -This chart shows an example of using an interval selection to filter the -contents of an attached histogram, allowing the user to see the proportion -of items in each category within the selection. -""" -# category: interactive charts -import altair as alt -from vega_datasets import data - -source = data.cars() - -brush = alt.selection(type='interval') - -points = alt.Chart(source).mark_point().encode( - x='Horsepower:Q', - y='Miles_per_Gallon:Q', - color=alt.condition(brush, 'Origin:N', alt.value('lightgray')) -).add_selection( - brush -) - -bars = alt.Chart(source).mark_bar().encode( - y='Origin:N', - color='Origin:N', - x='count(Origin):Q' -).transform_filter( - brush -) - -points & bars diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/trellis_stacked_bar_chart.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/trellis_stacked_bar_chart.py deleted file mode 100644 index f7d65a75f9ab21a95745c0bb95bc863e407b925e..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/trellis_stacked_bar_chart.py +++ /dev/null @@ -1,17 +0,0 @@ -""" -Trellis Stacked Bar Chart -========================= -This is an example of a horizontal stacked bar chart using data which contains crop yields over different regions and different years in the 1930s. 
-""" -# category: bar charts -import altair as alt -from vega_datasets import data - -source = data.barley() - -alt.Chart(source).mark_bar().encode( - column='year', - x='yield', - y='variety', - color='site' -).properties(width=220) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/anyio/streams/stapled.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/anyio/streams/stapled.py deleted file mode 100644 index a71ffb0dff230c599fa97d1e4e4556c524624493..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/anyio/streams/stapled.py +++ /dev/null @@ -1,138 +0,0 @@ -from dataclasses import dataclass -from typing import Any, Callable, Generic, List, Mapping, Optional, Sequence, TypeVar - -from ..abc import ( - ByteReceiveStream, - ByteSendStream, - ByteStream, - Listener, - ObjectReceiveStream, - ObjectSendStream, - ObjectStream, - TaskGroup, -) - -T_Item = TypeVar("T_Item") -T_Stream = TypeVar("T_Stream") - - -@dataclass(eq=False) -class StapledByteStream(ByteStream): - """ - Combines two byte streams into a single, bidirectional byte stream. - - Extra attributes will be provided from both streams, with the receive stream providing the - values in case of a conflict. - - :param ByteSendStream send_stream: the sending byte stream - :param ByteReceiveStream receive_stream: the receiving byte stream - """ - - send_stream: ByteSendStream - receive_stream: ByteReceiveStream - - async def receive(self, max_bytes: int = 65536) -> bytes: - return await self.receive_stream.receive(max_bytes) - - async def send(self, item: bytes) -> None: - await self.send_stream.send(item) - - async def send_eof(self) -> None: - await self.send_stream.aclose() - - async def aclose(self) -> None: - await self.send_stream.aclose() - await self.receive_stream.aclose() - - @property - def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]: - return { - **self.send_stream.extra_attributes, - **self.receive_stream.extra_attributes, - } - - -@dataclass(eq=False) -class StapledObjectStream(Generic[T_Item], ObjectStream[T_Item]): - """ - Combines two object streams into a single, bidirectional object stream. - - Extra attributes will be provided from both streams, with the receive stream providing the - values in case of a conflict. - - :param ObjectSendStream send_stream: the sending object stream - :param ObjectReceiveStream receive_stream: the receiving object stream - """ - - send_stream: ObjectSendStream[T_Item] - receive_stream: ObjectReceiveStream[T_Item] - - async def receive(self) -> T_Item: - return await self.receive_stream.receive() - - async def send(self, item: T_Item) -> None: - await self.send_stream.send(item) - - async def send_eof(self) -> None: - await self.send_stream.aclose() - - async def aclose(self) -> None: - await self.send_stream.aclose() - await self.receive_stream.aclose() - - @property - def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]: - return { - **self.send_stream.extra_attributes, - **self.receive_stream.extra_attributes, - } - - -@dataclass(eq=False) -class MultiListener(Generic[T_Stream], Listener[T_Stream]): - """ - Combines multiple listeners into one, serving connections from all of them at once. - - Any MultiListeners in the given collection of listeners will have their listeners moved into - this one. - - Extra attributes are provided from each listener, with each successive listener overriding any - conflicting attributes from the previous one. 
- - :param listeners: listeners to serve - :type listeners: Sequence[Listener[T_Stream]] - """ - - listeners: Sequence[Listener[T_Stream]] - - def __post_init__(self) -> None: - listeners: List[Listener[T_Stream]] = [] - for listener in self.listeners: - if isinstance(listener, MultiListener): - listeners.extend(listener.listeners) - del listener.listeners[:] # type: ignore[attr-defined] - else: - listeners.append(listener) - - self.listeners = listeners - - async def serve( - self, handler: Callable[[T_Stream], Any], task_group: Optional[TaskGroup] = None - ) -> None: - from .. import create_task_group - - async with create_task_group() as tg: - for listener in self.listeners: - tg.start_soon(listener.serve, handler, task_group) - - async def aclose(self) -> None: - for listener in self.listeners: - await listener.aclose() - - @property - def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]: - attributes: dict = {} - for listener in self.listeners: - attributes.update(listener.extra_attributes) - - return attributes diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/colorama/tests/winterm_test.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/colorama/tests/winterm_test.py deleted file mode 100644 index d0955f9e608377940f0d548576964f2fcf3caf48..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/colorama/tests/winterm_test.py +++ /dev/null @@ -1,131 +0,0 @@ -# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file. -import sys -from unittest import TestCase, main, skipUnless - -try: - from unittest.mock import Mock, patch -except ImportError: - from mock import Mock, patch - -from ..winterm import WinColor, WinStyle, WinTerm - - -class WinTermTest(TestCase): - - @patch('colorama.winterm.win32') - def testInit(self, mockWin32): - mockAttr = Mock() - mockAttr.wAttributes = 7 + 6 * 16 + 8 - mockWin32.GetConsoleScreenBufferInfo.return_value = mockAttr - term = WinTerm() - self.assertEqual(term._fore, 7) - self.assertEqual(term._back, 6) - self.assertEqual(term._style, 8) - - @skipUnless(sys.platform.startswith("win"), "requires Windows") - def testGetAttrs(self): - term = WinTerm() - - term._fore = 0 - term._back = 0 - term._style = 0 - self.assertEqual(term.get_attrs(), 0) - - term._fore = WinColor.YELLOW - self.assertEqual(term.get_attrs(), WinColor.YELLOW) - - term._back = WinColor.MAGENTA - self.assertEqual( - term.get_attrs(), - WinColor.YELLOW + WinColor.MAGENTA * 16) - - term._style = WinStyle.BRIGHT - self.assertEqual( - term.get_attrs(), - WinColor.YELLOW + WinColor.MAGENTA * 16 + WinStyle.BRIGHT) - - @patch('colorama.winterm.win32') - def testResetAll(self, mockWin32): - mockAttr = Mock() - mockAttr.wAttributes = 1 + 2 * 16 + 8 - mockWin32.GetConsoleScreenBufferInfo.return_value = mockAttr - term = WinTerm() - - term.set_console = Mock() - term._fore = -1 - term._back = -1 - term._style = -1 - - term.reset_all() - - self.assertEqual(term._fore, 1) - self.assertEqual(term._back, 2) - self.assertEqual(term._style, 8) - self.assertEqual(term.set_console.called, True) - - @skipUnless(sys.platform.startswith("win"), "requires Windows") - def testFore(self): - term = WinTerm() - term.set_console = Mock() - term._fore = 0 - - term.fore(5) - - self.assertEqual(term._fore, 5) - self.assertEqual(term.set_console.called, True) - - @skipUnless(sys.platform.startswith("win"), "requires Windows") - def testBack(self): - term = WinTerm() - term.set_console = Mock() - term._back = 0 - - 
term.back(5) - - self.assertEqual(term._back, 5) - self.assertEqual(term.set_console.called, True) - - @skipUnless(sys.platform.startswith("win"), "requires Windows") - def testStyle(self): - term = WinTerm() - term.set_console = Mock() - term._style = 0 - - term.style(22) - - self.assertEqual(term._style, 22) - self.assertEqual(term.set_console.called, True) - - @patch('colorama.winterm.win32') - def testSetConsole(self, mockWin32): - mockAttr = Mock() - mockAttr.wAttributes = 0 - mockWin32.GetConsoleScreenBufferInfo.return_value = mockAttr - term = WinTerm() - term.windll = Mock() - - term.set_console() - - self.assertEqual( - mockWin32.SetConsoleTextAttribute.call_args, - ((mockWin32.STDOUT, term.get_attrs()), {}) - ) - - @patch('colorama.winterm.win32') - def testSetConsoleOnStderr(self, mockWin32): - mockAttr = Mock() - mockAttr.wAttributes = 0 - mockWin32.GetConsoleScreenBufferInfo.return_value = mockAttr - term = WinTerm() - term.windll = Mock() - - term.set_console(on_stderr=True) - - self.assertEqual( - mockWin32.SetConsoleTextAttribute.call_args, - ((mockWin32.STDERR, term.get_attrs()), {}) - ) - - -if __name__ == '__main__': - main() diff --git a/spaces/asciicorp/Legal-ai/similarity.py b/spaces/asciicorp/Legal-ai/similarity.py deleted file mode 100644 index 7164a89931720a3b3bf5ff4db108f2fd25f1e20e..0000000000000000000000000000000000000000 --- a/spaces/asciicorp/Legal-ai/similarity.py +++ /dev/null @@ -1,74 +0,0 @@ -import streamlit as st -import nltk -from nltk.tokenize import word_tokenize -from nltk.corpus import stopwords -from nltk.stem import WordNetLemmatizer -from nltk.corpus import wordnet -from sklearn.feature_extraction.text import TfidfVectorizer -from sklearn.metrics.pairwise import cosine_similarity - -nltk.download('punkt') -nltk.download('stopwords') -nltk.download('wordnet') -nltk.download('averaged_perceptron_tagger') - -# Function to calculate Textual Similarity -def calculate_textual_similarity(text1, text2): - tokens1 = word_tokenize(text1) - tokens2 = word_tokenize(text2) - return 100 - (nltk.edit_distance(tokens1, tokens2) * 100) / max(len(tokens1), len(tokens2)) - -# Function to calculate Linguistic Similarity -def calculate_linguistic_similarity(text1, text2): - stop_words = set(stopwords.words('english')) - lemmatizer = WordNetLemmatizer() - - def get_wordnet_pos(treebank_tag): - if treebank_tag.startswith('J'): - return wordnet.ADJ - elif treebank_tag.startswith('V'): - return wordnet.VERB - elif treebank_tag.startswith('N'): - return wordnet.NOUN - elif treebank_tag.startswith('R'): - return wordnet.ADV - else: - return wordnet.NOUN - - def preprocess_text(text): - tokens = word_tokenize(text.lower()) - tokens = [token for token in tokens if token.isalpha()] - tokens = [token for token in tokens if token not in stop_words] - tokens = [lemmatizer.lemmatize(token, get_wordnet_pos(nltk.pos_tag([token])[0][1])) for token in tokens] - return tokens - - tokens1 = preprocess_text(text1) - tokens2 = preprocess_text(text2) - vectorizer = TfidfVectorizer(tokenizer=preprocess_text) - vectors = vectorizer.fit_transform([text1, text2]) - cosine_similarities = cosine_similarity(vectors)[0, 1] - return round(cosine_similarities * 100, 2) - -# Function to calculate Semantic Similarity -def calculate_semantic_similarity(text1, text2): - return 0 # todo - -def highlight_text_differences(text1, text2): - tokens1 = word_tokenize(text1) - tokens2 = word_tokenize(text2) - common_tokens = set(tokens1).intersection(tokens2) - new_text1 = [] - new_text2 = [] - for token in 
tokens1: - if token in common_tokens: - new_text1.append("{}".format(token)) - else: - new_text1.append("<mark>{}</mark>".format(token)) # assumption: differing tokens are wrapped in <mark> for highlighting; the original markup was stripped from this copy - for token in tokens2: - if token in common_tokens: - new_text2.append("{}".format(token)) - else: - new_text2.append("<mark>{}</mark>".format(token)) - new_text1 = " ".join(new_text1) - new_text2 = " ".join(new_text2) - return new_text1, new_text2 \ No newline at end of file diff --git a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Justin Smith.html b/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Justin Smith.html deleted file mode 100644 index 0f591482f34ab2effecf970e8af3f0c0687edf82..0000000000000000000000000000000000000000 --- a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Justin Smith.html +++ /dev/null @@ -1,132 +0,0 @@ - - - - Justin Smith - - - - -
-

Justin Smith

- -
-
Mentee to mentor

1- What's your motivation to become a mentor with SharpestMinds?
- It is a very rewarding experience to help people progress. Was a teacher in the past and has a strong education and teaching background. Has helped people with personal and professional growth. 

2- What's your career journey in the Data field been like? 
- Has a previous background in a non-technical field and worked in the non-profit sector. 
- Education also in a non-tech field, but got interested in the ML and technical side; was more drawn to data and software engineering, since most of the work done by DS folks is also D.E.
- After the master's, a PhD didn't make sense - didn't want to spend a lot on pursuing it. 
- Discovered SM through a podcast. 
- Landed a job as a software engineer working in a D.S. / M.L. group. It involved building a recommendation product that takes inputs and recommends for profitability. Working in a third-party logistics supply chain company - deploying models for forecasting profitability across the supply chain. 

3- How was your experience as a SM Mentee? Is there any improvements that can happen?
- It was really good. It was a new experience to rely on someone to guide and help. 
- Needed to build some confidence in interviewing and how to communicate technical knowledge properly. Mentorship helped with this. 
- Improvement - the platform lacks context for SWE / D.E. roles. 
- Worked on a project but didn't finish it - got a job before completing it. The project was forecasting cryptocurrency prices from publicly available coin prices.

4- According to you, what's the biggest challenge faced by someone trying to land a SWE or D.E. role? How can you help them with this?
- The biggest challenge for a newcomer is having the right network: getting in front of the right people to showcase knowledge and skills, and being able to keep networking. 
- Will help mentees with how to reach out to professionals and encourage them when there are no responses. Make them understand and normalize the process of networking. Help them with the potential burnout that can happen when not hearing back or getting responses. 

5- Do you have any questions for me regarding the platform?
- What does onboarding look like?
- Aware that SWE mentorship was rolled out - is there a mentee pool already available on the platform for this?
- Is there a plan to start marketing mentorships for SWE?
-
- -
- - - \ No newline at end of file diff --git a/spaces/awacke1/Daredevil-Text-Generation/README.md b/spaces/awacke1/Daredevil-Text-Generation/README.md deleted file mode 100644 index fd399887cfc313f8d454cafee64461e53936adde..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Daredevil-Text-Generation/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 🔥Daredevil Text Generation🔥 -emoji: 🔥Text🔥 -colorFrom: pink -colorTo: green -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/awacke1/Data-Augmentation/app.py b/spaces/awacke1/Data-Augmentation/app.py deleted file mode 100644 index 309cdc9cffc3e888ada75ffba22664f19f246ef6..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Data-Augmentation/app.py +++ /dev/null @@ -1,101 +0,0 @@ -import streamlit as st -import pandas as pd -import numpy as np - -# Define the states and their populations and health concerns -states = { - 'Minnesota': { - 'population': 5700000, - 'health_concerns': ['obesity', 'diabetes', 'heart disease'] - }, - 'Wisconsin': { - 'population': 5850000, - 'health_concerns': ['cancer', 'alcoholism', 'depression'] - }, - 'Texas': { - 'population': 29000000, - 'health_concerns': ['obesity', 'diabetes', 'heart disease'] - }, - 'Florida': { - 'population': 21500000, - 'health_concerns': ['cancer', 'alcoholism', 'depression'] - }, - 'California': { - 'population': 39500000, - 'health_concerns': ['obesity', 'diabetes', 'heart disease'] - }, - 'New York': { - 'population': 19500000, - 'health_concerns': ['cancer', 'alcoholism', 'depression'] - } -} - -# Augment the data by adding random noise and additional columns -for state in states: - states[state]['population'] += int(np.random.normal(0, 500000)) - states[state]['climate'] = np.random.choice(['cold', 'moderate', 'hot']) - states[state]['geography'] = np.random.choice(['coastal', 'inland', 'mountainous']) - states[state]['economy'] = np.random.choice(['agriculture', 'manufacturing', 'services']) - -# Create a pandas dataframe from the augmented data -df = pd.DataFrame.from_dict(states, orient='index') -df = df[['population', 'climate', 'geography', 'economy', 'health_concerns']] - -# Define the top 3 health concerns by state -top_health_concerns = { - 'Minnesota': ['obesity', 'diabetes', 'heart disease'], - 'Wisconsin': ['cancer', 'alcoholism', 'depression'], - 'Texas': ['obesity', 'diabetes', 'heart disease'], - 'Florida': ['cancer', 'alcoholism', 'depression'], - 'California': ['obesity', 'diabetes', 'heart disease'], - 'New York': ['cancer', 'alcoholism', 'depression'] -} - -# Define the statistics for each health concern and cite references -statistics = { - 'obesity': { - 'prevalence': '32.4%', - 'source': 'https://www.cdc.gov/obesity/data/prevalence-maps.html' - }, - 'diabetes': { - 'prevalence': '10.7%', - 'source': 'https://www.cdc.gov/diabetes/data/statistics-report/index.html' - }, - 'heart disease': { - 'prevalence': '12.1%', - 'source': 'https://www.cdc.gov/heartdisease/facts.htm' - }, - 'cancer': { - 'prevalence': '38.4%', - 'source': 'https://www.cdc.gov/cancer/dcpc/data/types.htm' - }, - 'alcoholism': { - 'prevalence': '14.5%', - 'source': 'https://www.niaaa.nih.gov/publications/brochures-and-fact-sheets/alcohol-facts-and-statistics' - }, - 'depression': { - 'prevalence': '7.6%', - 'source': 'https://www.nimh.nih.gov/health/statistics/major-depression.shtml' - } 
-} - -# Define the streamlit app -def app(): - st.title('Data Augmentation Example') - st.write('This app demonstrates data augmentation by adding random noise and additional columns to a short python dictionary list of the states.') - -# Display the augmented data -st.header('Augmented Data') -st.write(df) - -# Display the top 3 health concerns by state and their statistics -st.header('Top 3 Health Concerns by State') -for state in top_health_concerns: - st.subheader(state) - for health_concern in top_health_concerns[state]: - st.write(health_concern) - st.write('Prevalence:', statistics[health_concern]['prevalence']) - st.write('Source:', statistics[health_concern]['source']) - st.write('---') - -app() diff --git a/spaces/awacke1/Generative-AI-EACN/style.css b/spaces/awacke1/Generative-AI-EACN/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Generative-AI-EACN/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/banana-projects/convai/server/lib/Utils.ts b/spaces/banana-projects/convai/server/lib/Utils.ts deleted file mode 100644 index 233f47b66c4e7ea561bc579c4eb5dc49d1b48750..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/convai/server/lib/Utils.ts +++ /dev/null @@ -1,4 +0,0 @@ - -export function capitalize(s: string): string { - return s.charAt(0).toUpperCase() + s.slice(1); -} diff --git a/spaces/bhvsh/stroke-prediction/apps/model.py b/spaces/bhvsh/stroke-prediction/apps/model.py deleted file mode 100644 index 0f9ef07a982c808dc9b0dd382f7d0d34763910e7..0000000000000000000000000000000000000000 --- a/spaces/bhvsh/stroke-prediction/apps/model.py +++ /dev/null @@ -1,54 +0,0 @@ -import streamlit as st -import pickle -import lightgbm -from sklearn.metrics import classification_report,plot_precision_recall_curve,plot_confusion_matrix,precision_recall_fscore_support,plot_roc_curve - -def app(): - with st.sidebar: - st.title('Stroke Prediction using Machine Learning') - - st.write('This model which predicts whether a patient is likely to get a stroke based on the parameters like gender, age various diseases and smoking status.') - st.markdown('_For Machine Learning - 19CS601_') - - st.title('Model Overview') - st.write('The model performance of the dataset is presented below.') - - # Retreving model and it's components for performance metric - model = pickle.load(open("/home/user/app/apps/models/gbm/gbm-model-pickle.sav", 'rb')) - X_test = pickle.load(open("/home/user/app/apps/models/gbm/gbm-xtest.sav", 'rb')) - Y_test = pickle.load(open("/home/user/app/apps/models/gbm/gbm-ytest.sav", 'rb')) - Y_pred = model.predict(X_test) - - st.header('Model performance') - #result = model.score(X_test, Y_test) - - precision,recall,f1_sc,support=precision_recall_fscore_support(Y_test,Y_pred) - accuracy=model.score(X_test,Y_test) - - col1, col2, col3, col4 = st.columns(4) - col1.metric("Accuracy", round(accuracy,4), "") - col2.metric("Recall", round(recall[0],4), "") - col3.metric("F-measure", round(f1_sc[0],4), "") - col4.metric("Support", support[0], "") - - 
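-        # note: precision_recall_fscore_support returns per-class arrays; index [0]
-        # picks the first class (displayed below as "NoStroke"), so Recall, F-measure
-        # and Support here describe that single class rather than a class average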
st.subheader("Model type: ") - st.write(model) - - st.set_option('deprecation.showPyplotGlobalUse', False) - st.subheader("Confusion Matrix: ") - plot_confusion_matrix(model, X_test, Y_test, display_labels=['NoStroke','Stroke']) - st.pyplot() - #st.table(confusion_matrix(Y_test, Y_pred)) - - st.subheader("ROC Curve") - plot_roc_curve(model, X_test, Y_test) - st.set_option('deprecation.showPyplotGlobalUse', False) - st.pyplot() - - st.subheader("Precision-Recall Curve") - plot_precision_recall_curve(model, X_test, Y_test) - st.pyplot() - - st.subheader('Other metrics:') - report=classification_report(Y_test, Y_pred, target_names=None) - st.code(report) \ No newline at end of file diff --git a/spaces/bigjoker/stable-diffusion-webui/scripts/prompts_from_file.py b/spaces/bigjoker/stable-diffusion-webui/scripts/prompts_from_file.py deleted file mode 100644 index 17c9c967ffeeb3f538c6f95d93ae79c32ca17828..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/scripts/prompts_from_file.py +++ /dev/null @@ -1,177 +0,0 @@ -import copy -import math -import os -import random -import sys -import traceback -import shlex - -import modules.scripts as scripts -import gradio as gr - -from modules import sd_samplers -from modules.processing import Processed, process_images -from PIL import Image -from modules.shared import opts, cmd_opts, state - - -def process_string_tag(tag): - return tag - - -def process_int_tag(tag): - return int(tag) - - -def process_float_tag(tag): - return float(tag) - - -def process_boolean_tag(tag): - return True if (tag == "true") else False - - -prompt_tags = { - "sd_model": None, - "outpath_samples": process_string_tag, - "outpath_grids": process_string_tag, - "prompt_for_display": process_string_tag, - "prompt": process_string_tag, - "negative_prompt": process_string_tag, - "styles": process_string_tag, - "seed": process_int_tag, - "subseed_strength": process_float_tag, - "subseed": process_int_tag, - "seed_resize_from_h": process_int_tag, - "seed_resize_from_w": process_int_tag, - "sampler_index": process_int_tag, - "sampler_name": process_string_tag, - "batch_size": process_int_tag, - "n_iter": process_int_tag, - "steps": process_int_tag, - "cfg_scale": process_float_tag, - "width": process_int_tag, - "height": process_int_tag, - "restore_faces": process_boolean_tag, - "tiling": process_boolean_tag, - "do_not_save_samples": process_boolean_tag, - "do_not_save_grid": process_boolean_tag -} - - -def cmdargs(line): - args = shlex.split(line) - pos = 0 - res = {} - - while pos < len(args): - arg = args[pos] - - assert arg.startswith("--"), f'must start with "--": {arg}' - assert pos+1 < len(args), f'missing argument for command line option {arg}' - - tag = arg[2:] - - if tag == "prompt" or tag == "negative_prompt": - pos += 1 - prompt = args[pos] - pos += 1 - while pos < len(args) and not args[pos].startswith("--"): - prompt += " " - prompt += args[pos] - pos += 1 - res[tag] = prompt - continue - - - func = prompt_tags.get(tag, None) - assert func, f'unknown commandline option: {arg}' - - val = args[pos+1] - if tag == "sampler_name": - val = sd_samplers.samplers_map.get(val.lower(), None) - - res[tag] = func(val) - - pos += 2 - - return res - - -def load_prompt_file(file): - if file is None: - lines = [] - else: - lines = [x.strip() for x in file.decode('utf8', errors='ignore').split("\n")] - - return None, "\n".join(lines), gr.update(lines=7) - - -class Script(scripts.Script): - def title(self): - return "Prompts from file or textbox" - - def 
ui(self, is_img2img): - checkbox_iterate = gr.Checkbox(label="Iterate seed every line", value=False, elem_id=self.elem_id("checkbox_iterate")) - checkbox_iterate_batch = gr.Checkbox(label="Use same random seed for all lines", value=False, elem_id=self.elem_id("checkbox_iterate_batch")) - - prompt_txt = gr.Textbox(label="List of prompt inputs", lines=1, elem_id=self.elem_id("prompt_txt")) - file = gr.File(label="Upload prompt inputs", type='binary', elem_id=self.elem_id("file")) - - file.change(fn=load_prompt_file, inputs=[file], outputs=[file, prompt_txt, prompt_txt]) - - # We start at one line. When the text changes, we jump to seven lines, or two lines if no \n. - # We don't shrink back to 1, because that causes the control to ignore [enter], and it may - # be unclear to the user that shift-enter is needed. - prompt_txt.change(lambda tb: gr.update(lines=7) if ("\n" in tb) else gr.update(lines=2), inputs=[prompt_txt], outputs=[prompt_txt]) - return [checkbox_iterate, checkbox_iterate_batch, prompt_txt] - - def run(self, p, checkbox_iterate, checkbox_iterate_batch, prompt_txt: str): - lines = [x.strip() for x in prompt_txt.splitlines()] - lines = [x for x in lines if len(x) > 0] - - p.do_not_save_grid = True - - job_count = 0 - jobs = [] - - for line in lines: - if "--" in line: - try: - args = cmdargs(line) - except Exception: - print(f"Error parsing line {line} as commandline:", file=sys.stderr) - print(traceback.format_exc(), file=sys.stderr) - args = {"prompt": line} - else: - args = {"prompt": line} - - job_count += args.get("n_iter", p.n_iter) - - jobs.append(args) - - print(f"Will process {len(lines)} lines in {job_count} jobs.") - if (checkbox_iterate or checkbox_iterate_batch) and p.seed == -1: - p.seed = int(random.randrange(4294967294)) - - state.job_count = job_count - - images = [] - all_prompts = [] - infotexts = [] - for n, args in enumerate(jobs): - state.job = f"{state.job_no + 1} out of {state.job_count}" - - copy_p = copy.copy(p) - for k, v in args.items(): - setattr(copy_p, k, v) - - proc = process_images(copy_p) - images += proc.images - - if checkbox_iterate: - p.seed = p.seed + (p.batch_size * p.n_iter) - all_prompts += proc.all_prompts - infotexts += proc.infotexts - - return Processed(p, images, p.seed, "", all_prompts=all_prompts, infotexts=infotexts) diff --git a/spaces/bigjoker/stable-diffusion-webui/webui.sh b/spaces/bigjoker/stable-diffusion-webui/webui.sh deleted file mode 100644 index 8cdad22d310fed20f229b09d7a3160aeb1731a85..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/webui.sh +++ /dev/null @@ -1,186 +0,0 @@ -#!/usr/bin/env bash -################################################# -# Please do not make any changes to this file, # -# change the variables in webui-user.sh instead # -################################################# - -# If run from macOS, load defaults from webui-macos-env.sh -if [[ "$OSTYPE" == "darwin"* ]]; then - if [[ -f webui-macos-env.sh ]] - then - source ./webui-macos-env.sh - fi -fi - -# Read variables from webui-user.sh -# shellcheck source=/dev/null -if [[ -f webui-user.sh ]] -then - source ./webui-user.sh -fi - -# Set defaults -# Install directory without trailing slash -if [[ -z "${install_dir}" ]] -then - install_dir="/home/$(whoami)" -fi - -# Name of the subdirectory (defaults to stable-diffusion-webui) -if [[ -z "${clone_dir}" ]] -then - clone_dir="stable-diffusion-webui" -fi - -# python3 executable -if [[ -z "${python_cmd}" ]] -then - python_cmd="python3" -fi - -# git executable -if [[ 
-z "${GIT}" ]] -then - export GIT="git" -fi - -# python3 venv without trailing slash (defaults to ${install_dir}/${clone_dir}/venv) -if [[ -z "${venv_dir}" ]] -then - venv_dir="venv" -fi - -if [[ -z "${LAUNCH_SCRIPT}" ]] -then - LAUNCH_SCRIPT="launch.py" -fi - -# this script cannot be run as root by default -can_run_as_root=0 - -# read any command line flags to the webui.sh script -while getopts "f" flag > /dev/null 2>&1 -do - case ${flag} in - f) can_run_as_root=1;; - *) break;; - esac -done - -# Disable sentry logging -export ERROR_REPORTING=FALSE - -# Do not reinstall existing pip packages on Debian/Ubuntu -export PIP_IGNORE_INSTALLED=0 - -# Pretty print -delimiter="################################################################" - -printf "\n%s\n" "${delimiter}" -printf "\e[1m\e[32mInstall script for stable-diffusion + Web UI\n" -printf "\e[1m\e[34mTested on Debian 11 (Bullseye)\e[0m" -printf "\n%s\n" "${delimiter}" - -# Do not run as root -if [[ $(id -u) -eq 0 && can_run_as_root -eq 0 ]] -then - printf "\n%s\n" "${delimiter}" - printf "\e[1m\e[31mERROR: This script must not be launched as root, aborting...\e[0m" - printf "\n%s\n" "${delimiter}" - exit 1 -else - printf "\n%s\n" "${delimiter}" - printf "Running on \e[1m\e[32m%s\e[0m user" "$(whoami)" - printf "\n%s\n" "${delimiter}" -fi - -if [[ -d .git ]] -then - printf "\n%s\n" "${delimiter}" - printf "Repo already cloned, using it as install directory" - printf "\n%s\n" "${delimiter}" - install_dir="${PWD}/../" - clone_dir="${PWD##*/}" -fi - -# Check prerequisites -gpu_info=$(lspci 2>/dev/null | grep VGA) -case "$gpu_info" in - *"Navi 1"*|*"Navi 2"*) export HSA_OVERRIDE_GFX_VERSION=10.3.0 - ;; - *"Renoir"*) export HSA_OVERRIDE_GFX_VERSION=9.0.0 - printf "\n%s\n" "${delimiter}" - printf "Experimental support for Renoir: make sure to have at least 4GB of VRAM and 10GB of RAM or enable cpu mode: --use-cpu all --no-half" - printf "\n%s\n" "${delimiter}" - ;; - *) - ;; -esac -if echo "$gpu_info" | grep -q "AMD" && [[ -z "${TORCH_COMMAND}" ]] -then - export TORCH_COMMAND="pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.2" -fi - -for preq in "${GIT}" "${python_cmd}" -do - if ! hash "${preq}" &>/dev/null - then - printf "\n%s\n" "${delimiter}" - printf "\e[1m\e[31mERROR: %s is not installed, aborting...\e[0m" "${preq}" - printf "\n%s\n" "${delimiter}" - exit 1 - fi -done - -if ! "${python_cmd}" -c "import venv" &>/dev/null -then - printf "\n%s\n" "${delimiter}" - printf "\e[1m\e[31mERROR: python3-venv is not installed, aborting...\e[0m" - printf "\n%s\n" "${delimiter}" - exit 1 -fi - -cd "${install_dir}"/ || { printf "\e[1m\e[31mERROR: Can't cd to %s/, aborting...\e[0m" "${install_dir}"; exit 1; } -if [[ -d "${clone_dir}" ]] -then - cd "${clone_dir}"/ || { printf "\e[1m\e[31mERROR: Can't cd to %s/%s/, aborting...\e[0m" "${install_dir}" "${clone_dir}"; exit 1; } -else - printf "\n%s\n" "${delimiter}" - printf "Clone stable-diffusion-webui" - printf "\n%s\n" "${delimiter}" - "${GIT}" clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git "${clone_dir}" - cd "${clone_dir}"/ || { printf "\e[1m\e[31mERROR: Can't cd to %s/%s/, aborting...\e[0m" "${install_dir}" "${clone_dir}"; exit 1; } -fi - -printf "\n%s\n" "${delimiter}" -printf "Create and activate python venv" -printf "\n%s\n" "${delimiter}" -cd "${install_dir}"/"${clone_dir}"/ || { printf "\e[1m\e[31mERROR: Can't cd to %s/%s/, aborting...\e[0m" "${install_dir}" "${clone_dir}"; exit 1; } -if [[ ! 
-d "${venv_dir}" ]] -then - "${python_cmd}" -m venv "${venv_dir}" - first_launch=1 -fi -# shellcheck source=/dev/null -if [[ -f "${venv_dir}"/bin/activate ]] -then - source "${venv_dir}"/bin/activate -else - printf "\n%s\n" "${delimiter}" - printf "\e[1m\e[31mERROR: Cannot activate python venv, aborting...\e[0m" - printf "\n%s\n" "${delimiter}" - exit 1 -fi - -if [[ ! -z "${ACCELERATE}" ]] && [ ${ACCELERATE}="True" ] && [ -x "$(command -v accelerate)" ] -then - printf "\n%s\n" "${delimiter}" - printf "Accelerating launch.py..." - printf "\n%s\n" "${delimiter}" - exec accelerate launch --num_cpu_threads_per_process=6 "${LAUNCH_SCRIPT}" "$@" -else - printf "\n%s\n" "${delimiter}" - printf "Launching launch.py..." - printf "\n%s\n" "${delimiter}" - exec "${python_cmd}" "${LAUNCH_SCRIPT}" "$@" -fi diff --git a/spaces/biingshanak/vits-uma-genshin-honkai/modules.py b/spaces/biingshanak/vits-uma-genshin-honkai/modules.py deleted file mode 100644 index 56ea4145eddf19dd330a3a41ab0183efc1686d83..0000000000000000000000000000000000000000 --- a/spaces/biingshanak/vits-uma-genshin-honkai/modules.py +++ /dev/null @@ -1,388 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) 
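-            # cond_layer projects g once to 2*hidden_channels*n_layers channels; the
-            # loop below slices out the 2*hidden_channels block for layer i (cond_offset)
-            # and fuses it with x_in via fused_add_tanh_sigmoid_multiply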
- - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = 
torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
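-        # the trailing dimension now holds the num_bins*3 - 1 spline parameters per
-        # half-channel: num_bins widths, num_bins heights, and num_bins - 1 interior
-        # knot derivatives for piecewise_rational_quadratic_transform below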
- - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/binery/Table_Transformer_PaddleOCR/app.py b/spaces/binery/Table_Transformer_PaddleOCR/app.py deleted file mode 100644 index 0cc94ad56ecd84e9eb9df5c9fa80bfe07788dcb4..0000000000000000000000000000000000000000 --- a/spaces/binery/Table_Transformer_PaddleOCR/app.py +++ /dev/null @@ -1,533 +0,0 @@ -import streamlit as st -from PIL import Image, ImageEnhance -import statistics -import os -import string -from collections import Counter -from itertools import tee, count -# import TDTSR -import pytesseract -from pytesseract import Output -import json -import pandas as pd -import matplotlib.pyplot as plt -import cv2 -import numpy as np -# from transformers import TrOCRProcessor, VisionEncoderDecoderModel -# from cv2 import dnn_superres -from transformers import DetrFeatureExtractor -#from transformers import DetrForObjectDetection -from transformers import TableTransformerForObjectDetection -import torch -import asyncio -import paddlehub as hub -from paddleocr import PaddleOCR,draw_ocr -# pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe' - - -st.set_option('deprecation.showPyplotGlobalUse', False) -st.set_page_config(layout='wide') -st.title("Table Detection and Table Structure Recognition") -st.write("Implemented by MSFT team: https://github.com/microsoft/table-transformer") - - -def PIL_to_cv(pil_img): - return cv2.cvtColor(np.array(pil_img), cv2.COLOR_RGB2BGR) - -def cv_to_PIL(cv_img): - return Image.fromarray(cv2.cvtColor(cv_img, cv2.COLOR_BGR2RGB)) - - -async def pytess(cell_pil_img): - #pytess_output=' '.join(pytesseract.image_to_data(cell_pil_img, output_type=Output.DICT, config='-c tessedit_char_blacklist=œ˜â€œï¬â™Ã©œ¢!|”?«“¥ --psm 6 preserve_interword_spaces')['text']).strip() - #print("pytess_output######################################") - #print(pytess_output) - #print("pytess_output@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@") - - ###paddleocr - paddle_output=' ' - cell_cv_img=PIL_to_cv(cell_pil_img) - height, width, channels = cell_cv_img.shape - st.text('height:'+str(height)+'/n'+'width:'+str(width)) - if height>=10 and width>=10: - ocr = PaddleOCR(use_angle_cls=True,use_space_char=True) # need to run only once to download and load model into memory - result = ocr.ocr(cell_cv_img,cls=True) - print(result) - print("___________________________________________________________") - for idx in range(len(result)): - res = result[idx] - for line in res: - print(line) - print(line[1][0]) - print("____________________________@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@") - paddle_output=paddle_output+' '+line[1][0] - paddle_output=paddle_output+' ' - print("paddleocr@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@") - print(paddle_output) - print("paddleocr$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$") - st.image(cell_pil_img, caption=paddle_output) - return str(paddle_output) - -# def super_res(pil_img): - # ''' - # Useful for low-res docs - # ''' - # requires 
opencv-contrib-python installed without the opencv-python - # sr = dnn_superres.DnnSuperResImpl_create() - # image = PIL_to_cv(pil_img) - # model_path = "/data/Salman/TRD/code/table-transformer/transformers/LapSRN_x2.pb" - # model_name = 'lapsrn' - # model_scale = 2 - # sr.readModel(model_path) - # sr.setModel(model_name, model_scale) - # final_img = sr.upsample(image) - # final_img = cv_to_PIL(final_img) - - # return final_img - - -def sharpen_image(pil_img): - - img = PIL_to_cv(pil_img) - sharpen_kernel = np.array([[-1, -1, -1], - [-1, 9, -1], - [-1, -1, -1]]) - - sharpen = cv2.filter2D(img, -1, sharpen_kernel) - pil_img = cv_to_PIL(sharpen) - return pil_img - - -def uniquify(seq, suffs = count(1)): - """Make all the items unique by adding a suffix (1, 2, etc). - Credit: https://stackoverflow.com/questions/30650474/python-rename-duplicates-in-list-with-progressive-numbers-without-sorting-list - `seq` is mutable sequence of strings. - `suffs` is an optional alternative suffix iterable. - """ - not_unique = [k for k,v in Counter(seq).items() if v>1] - - suff_gens = dict(zip(not_unique, tee(suffs, len(not_unique)))) - for idx,s in enumerate(seq): - try: - suffix = str(next(suff_gens[s])) - except KeyError: - continue - else: - seq[idx] += suffix - - return seq - -def binarizeBlur_image(pil_img): - image = PIL_to_cv(pil_img) - thresh = cv2.threshold(image, 150, 255, cv2.THRESH_BINARY_INV)[1] - - result = cv2.GaussianBlur(thresh, (5,5), 0) - result = 255 - result - return cv_to_PIL(result) - - - -def td_postprocess(pil_img): - ''' - Removes gray background from tables - ''' - img = PIL_to_cv(pil_img) - - hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) - mask = cv2.inRange(hsv, (0, 0, 100), (255, 5, 255)) # (0, 0, 100), (255, 5, 255) - nzmask = cv2.inRange(hsv, (0, 0, 5), (255, 255, 255)) # (0, 0, 5), (255, 255, 255)) - nzmask = cv2.erode(nzmask, np.ones((3,3))) # (3,3) - mask = mask & nzmask - - new_img = img.copy() - new_img[np.where(mask)] = 255 - - - return cv_to_PIL(new_img) - -# def super_res(pil_img): -# # requires opencv-contrib-python installed without the opencv-python -# sr = dnn_superres.DnnSuperResImpl_create() -# image = PIL_to_cv(pil_img) -# model_path = "./LapSRN_x8.pb" -# model_name = model_path.split('/')[1].split('_')[0].lower() -# model_scale = int(model_path.split('/')[1].split('_')[1].split('.')[0][1]) - -# sr.readModel(model_path) -# sr.setModel(model_name, model_scale) -# final_img = sr.upsample(image) -# final_img = cv_to_PIL(final_img) - -# return final_img - -def table_detector(image, THRESHOLD_PROBA): - ''' - Table detection using DEtect-object TRansformer pre-trained on 1 million tables - ''' - - feature_extractor = DetrFeatureExtractor(do_resize=True, size=800, max_size=800) - encoding = feature_extractor(image, return_tensors="pt") - - model = TableTransformerForObjectDetection.from_pretrained("microsoft/table-transformer-detection") - - with torch.no_grad(): - outputs = model(**encoding) - - probas = outputs.logits.softmax(-1)[0, :, :-1] - keep = probas.max(-1).values > THRESHOLD_PROBA - - target_sizes = torch.tensor(image.size[::-1]).unsqueeze(0) - postprocessed_outputs = feature_extractor.post_process(outputs, target_sizes) - bboxes_scaled = postprocessed_outputs[0]['boxes'][keep] - - return (model, probas[keep], bboxes_scaled) - - -def table_struct_recog(image, THRESHOLD_PROBA): - ''' - Table structure recognition using DEtect-object TRansformer pre-trained on 1 million tables - ''' - - feature_extractor = DetrFeatureExtractor(do_resize=True, size=1000, 
max_size=1000) - encoding = feature_extractor(image, return_tensors="pt") - - model = TableTransformerForObjectDetection.from_pretrained("microsoft/table-transformer-structure-recognition") - with torch.no_grad(): - outputs = model(**encoding) - - probas = outputs.logits.softmax(-1)[0, :, :-1] - keep = probas.max(-1).values > THRESHOLD_PROBA - - target_sizes = torch.tensor(image.size[::-1]).unsqueeze(0) - postprocessed_outputs = feature_extractor.post_process(outputs, target_sizes) - bboxes_scaled = postprocessed_outputs[0]['boxes'][keep] - - return (model, probas[keep], bboxes_scaled) - - - - - -class TableExtractionPipeline(): - - colors = ["red", "blue", "green", "yellow", "orange", "violet"] - - # colors = ["red", "blue", "green", "red", "red", "red"] - - def add_padding(self, pil_img, top, right, bottom, left, color=(255,255,255)): - ''' - Image padding as part of TSR pre-processing to prevent missing table edges - ''' - width, height = pil_img.size - new_width = width + right + left - new_height = height + top + bottom - result = Image.new(pil_img.mode, (new_width, new_height), color) - result.paste(pil_img, (left, top)) - return result - - def plot_results_detection(self, c1, model, pil_img, prob, boxes, delta_xmin, delta_ymin, delta_xmax, delta_ymax): - ''' - crop_tables and plot_results_detection must have same co-ord shifts because 1 only plots the other one updates co-ordinates - ''' - # st.write('img_obj') - # st.write(pil_img) - plt.imshow(pil_img) - ax = plt.gca() - - for p, (xmin, ymin, xmax, ymax) in zip(prob, boxes.tolist()): - cl = p.argmax() - xmin, ymin, xmax, ymax = xmin-delta_xmin, ymin-delta_ymin, xmax+delta_xmax, ymax+delta_ymax - ax.add_patch(plt.Rectangle((xmin, ymin), xmax - xmin, ymax - ymin,fill=False, color='red', linewidth=3)) - text = f'{model.config.id2label[cl.item()]}: {p[cl]:0.2f}' - ax.text(xmin-20, ymin-50, text, fontsize=10,bbox=dict(facecolor='yellow', alpha=0.5)) - plt.axis('off') - c1.pyplot() - - - def crop_tables(self, pil_img, prob, boxes, delta_xmin, delta_ymin, delta_xmax, delta_ymax): - ''' - crop_tables and plot_results_detection must have same co-ord shifts because 1 only plots the other one updates co-ordinates - ''' - cropped_img_list = [] - - for p, (xmin, ymin, xmax, ymax) in zip(prob, boxes.tolist()): - - xmin, ymin, xmax, ymax = xmin-delta_xmin, ymin-delta_ymin, xmax+delta_xmax, ymax+delta_ymax - cropped_img = pil_img.crop((xmin, ymin, xmax, ymax)) - cropped_img_list.append(cropped_img) - - - return cropped_img_list - - def generate_structure(self, c2, model, pil_img, prob, boxes, expand_rowcol_bbox_top, expand_rowcol_bbox_bottom): - ''' - Co-ordinates are adjusted here by 3 'pixels' - To plot table pillow image and the TSR bounding boxes on the table - ''' - # st.write('img_obj') - # st.write(pil_img) - plt.figure(figsize=(32,20)) - plt.imshow(pil_img) - ax = plt.gca() - rows = {} - cols = {} - idx = 0 - - - for p, (xmin, ymin, xmax, ymax) in zip(prob, boxes.tolist()): - - xmin, ymin, xmax, ymax = xmin, ymin, xmax, ymax - cl = p.argmax() - class_text = model.config.id2label[cl.item()] - text = f'{class_text}: {p[cl]:0.2f}' - # or (class_text == 'table column') - if (class_text == 'table row') or (class_text =='table projected row header') or (class_text == 'table column'): - ax.add_patch(plt.Rectangle((xmin, ymin), xmax - xmin, ymax - ymin,fill=False, color=self.colors[cl.item()], linewidth=2)) - ax.text(xmin-10, ymin-10, text, fontsize=5, bbox=dict(facecolor='yellow', alpha=0.5)) - - if class_text == 'table row': - rows['table 
row.'+str(idx)] = (xmin, ymin-expand_rowcol_bbox_top, xmax, ymax+expand_rowcol_bbox_bottom) - if class_text == 'table column': - cols['table column.'+str(idx)] = (xmin, ymin-expand_rowcol_bbox_top, xmax, ymax+expand_rowcol_bbox_bottom) - - idx += 1 - - - plt.axis('on') - c2.pyplot() - return rows, cols - - def sort_table_featuresv2(self, rows:dict, cols:dict): - # Sometimes the header and first row overlap, and we need the header bbox not to have first row's bbox inside the headers bbox - rows_ = {table_feature : (xmin, ymin, xmax, ymax) for table_feature, (xmin, ymin, xmax, ymax) in sorted(rows.items(), key=lambda tup: tup[1][1])} - cols_ = {table_feature : (xmin, ymin, xmax, ymax) for table_feature, (xmin, ymin, xmax, ymax) in sorted(cols.items(), key=lambda tup: tup[1][0])} - - return rows_, cols_ - - def individual_table_featuresv2(self, pil_img, rows:dict, cols:dict): - - for k, v in rows.items(): - xmin, ymin, xmax, ymax = v - cropped_img = pil_img.crop((xmin, ymin, xmax, ymax)) - rows[k] = xmin, ymin, xmax, ymax, cropped_img - - for k, v in cols.items(): - xmin, ymin, xmax, ymax = v - cropped_img = pil_img.crop((xmin, ymin, xmax, ymax)) - cols[k] = xmin, ymin, xmax, ymax, cropped_img - - return rows, cols - - - def object_to_cellsv2(self, master_row:dict, cols:dict, expand_rowcol_bbox_top, expand_rowcol_bbox_bottom, padd_left): - '''Removes redundant bbox for rows&columns and divides each row into cells from columns - Args: - Returns: - - ''' - cells_img = {} - header_idx = 0 - row_idx = 0 - previous_xmax_col = 0 - new_cols = {} - new_master_row = {} - previous_ymin_row = 0 - new_cols = cols - new_master_row = master_row - ## Below 2 for loops remove redundant bounding boxes ### - # for k_col, v_col in cols.items(): - # xmin_col, _, xmax_col, _, col_img = v_col - # if (np.isclose(previous_xmax_col, xmax_col, atol=5)) or (xmin_col >= xmax_col): - # print('Found a column with double bbox') - # continue - # previous_xmax_col = xmax_col - # new_cols[k_col] = v_col - - # for k_row, v_row in master_row.items(): - # _, ymin_row, _, ymax_row, row_img = v_row - # if (np.isclose(previous_ymin_row, ymin_row, atol=5)) or (ymin_row >= ymax_row): - # print('Found a row with double bbox') - # continue - # previous_ymin_row = ymin_row - # new_master_row[k_row] = v_row - ###################################################### - for k_row, v_row in new_master_row.items(): - - _, _, _, _, row_img = v_row - xmax, ymax = row_img.size - xa, ya, xb, yb = 0, 0, 0, ymax - row_img_list = [] - # plt.imshow(row_img) - # st.pyplot() - for idx, kv in enumerate(new_cols.items()): - k_col, v_col = kv - xmin_col, _, xmax_col, _, col_img = v_col - xmin_col, xmax_col = xmin_col - padd_left - 10, xmax_col - padd_left - # plt.imshow(col_img) - # st.pyplot() - # xa + 3 : to remove borders on the left side of the cropped cell - # yb = 3: to remove row information from the above row of the cropped cell - # xb - 3: to remove borders on the right side of the cropped cell - xa = xmin_col - xb = xmax_col - if idx == 0: - xa = 0 - if idx == len(new_cols)-1: - xb = xmax - xa, ya, xb, yb = xa, ya, xb, yb - - row_img_cropped = row_img.crop((xa, ya, xb, yb)) - row_img_list.append(row_img_cropped) - - cells_img[k_row+'.'+str(row_idx)] = row_img_list - row_idx += 1 - - return cells_img, len(new_cols), len(new_master_row)-1 - - def clean_dataframe(self, df): - ''' - Remove irrelevant symbols that appear with tesseractOCR - ''' - # df.columns = [col.replace('|', '') for col in df.columns] - - for col in df.columns: - - 
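-            # note: '[' and the other brackets are regex metacharacters, so with
-            # regex=True the '[' pattern raises re.error; literal removal needs
-            # regex=False (or an escaped pattern)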
df[col]=df[col].str.replace("'", '', regex=True) - df[col]=df[col].str.replace('"', '', regex=True) - df[col]=df[col].str.replace(']', '', regex=True) - df[col]=df[col].str.replace('[', '', regex=True) - df[col]=df[col].str.replace('{', '', regex=True) - df[col]=df[col].str.replace('}', '', regex=True) - return df - - @st.cache - def convert_df(self, df): - return df.to_csv().encode('utf-8') - - - def create_dataframe(self, c3, cells_pytess_result:list, max_cols:int, max_rows:int): - '''Create dataframe using list of cell values of the table, also checks for valid header of dataframe - Args: - cells_pytess_result: list of strings, each element representing a cell in a table - max_cols, max_rows: number of columns and rows - Returns: - dataframe : final dataframe after all pre-processing - ''' - - headers = cells_pytess_result[:max_cols] - new_headers = uniquify(headers, (f' {x!s}' for x in string.ascii_lowercase)) - counter = 0 - - cells_list = cells_pytess_result[max_cols:] - df = pd.DataFrame("", index=range(0, max_rows), columns=new_headers) - - cell_idx = 0 - for nrows in range(max_rows): - for ncols in range(max_cols): - df.iat[nrows, ncols] = str(cells_list[cell_idx]) - cell_idx += 1 - - ## To check if there are duplicate headers if result of uniquify+col == col - ## This check removes headers when all headers are empty or if median of header word count is less than 6 - for x, col in zip(string.ascii_lowercase, new_headers): - if f' {x!s}' == col: - counter += 1 - header_char_count = [len(col) for col in new_headers] - - # if (counter == len(new_headers)) or (statistics.median(header_char_count) < 6): - # st.write('woooot') - # df.columns = uniquify(df.iloc[0], (f' {x!s}' for x in string.ascii_lowercase)) - # df = df.iloc[1:,:] - - df = self.clean_dataframe(df) - - c3.dataframe(df) - csv = self.convert_df(df) - c3.download_button("Download table", csv, "file.csv", "text/csv", key='download-csv') - - return df - - - - - - - async def start_process(self, image_path:str, TD_THRESHOLD, TSR_THRESHOLD, padd_top, padd_left, padd_bottom, padd_right, delta_xmin, delta_ymin, delta_xmax, delta_ymax, expand_rowcol_bbox_top, expand_rowcol_bbox_bottom): - ''' - Initiates process of generating pandas dataframes from raw pdf-page images - ''' - image = Image.open(image_path).convert("RGB") - model, probas, bboxes_scaled = table_detector(image, THRESHOLD_PROBA=TD_THRESHOLD) - - if bboxes_scaled.nelement() == 0: - st.write('No table found in the pdf-page image') - return '' - - # try: - # st.write('Document: '+image_path.split('/')[-1]) - c1, c2, c3 = st.columns((1,1,1)) - - self.plot_results_detection(c1, model, image, probas, bboxes_scaled, delta_xmin, delta_ymin, delta_xmax, delta_ymax) - cropped_img_list = self.crop_tables(image, probas, bboxes_scaled, delta_xmin, delta_ymin, delta_xmax, delta_ymax) - - for unpadded_table in cropped_img_list: - - table = self.add_padding(unpadded_table, padd_top, padd_right, padd_bottom, padd_left) - # table = super_res(table) - # table = binarizeBlur_image(table) - # table = sharpen_image(table) # Test sharpen image next - # table = td_postprocess(table) - - model, probas, bboxes_scaled = table_struct_recog(table, THRESHOLD_PROBA=TSR_THRESHOLD) - rows, cols = self.generate_structure(c2, model, table, probas, bboxes_scaled, expand_rowcol_bbox_top, expand_rowcol_bbox_bottom) - # st.write(len(rows), len(cols)) - rows, cols = self.sort_table_featuresv2(rows, cols) - master_row, cols = self.individual_table_featuresv2(table, rows, cols) - - cells_img, max_cols, 
max_rows = self.object_to_cellsv2(master_row, cols, expand_rowcol_bbox_top, expand_rowcol_bbox_bottom, padd_left) - - sequential_cell_img_list = [] - for k, img_list in cells_img.items(): - for img in img_list: - # img = super_res(img) - # img = sharpen_image(img) # Test sharpen image next - # img = binarizeBlur_image(img) - # img = self.add_padding(img, 10,10,10,10) - # plt.imshow(img) - # c3.pyplot() - sequential_cell_img_list.append(pytess(img)) - - cells_pytess_result = await asyncio.gather(*sequential_cell_img_list) - - - self.create_dataframe(c3, cells_pytess_result, max_cols, max_rows) - st.write('Errors in OCR is due to either quality of the image or performance of the OCR') - # except: - # st.write('Either incorrectly identified table or no table, to debug remove try/except') - # break - # break - - - - -if __name__ == "__main__": - - img_name = st.file_uploader("Upload an image with table(s)") - st1, st2 = st.columns((1,1)) - TD_th = st1.slider('Table detection threshold', 0.0, 1.0, 0.6) - TSR_th = st2.slider('Table structure recognition threshold', 0.0, 1.0, 0.8) - - st1, st2, st3, st4 = st.columns((1,1,1,1)) - - padd_top = st1.slider('Padding top', 0, 200, 20) - padd_left = st2.slider('Padding left', 0, 200, 20) - padd_right = st3.slider('Padding right', 0, 200, 20) - padd_bottom = st4.slider('Padding bottom', 0, 200, 20) - - te = TableExtractionPipeline() - # for img in image_list: - if img_name is not None: - asyncio.run(te.start_process(img_name, TD_THRESHOLD=TD_th , TSR_THRESHOLD=TSR_th , padd_top=padd_top, padd_left=padd_left, padd_bottom=padd_bottom, padd_right=padd_right, delta_xmin=0, delta_ymin=0, delta_xmax=0, delta_ymax=0, expand_rowcol_bbox_top=0, expand_rowcol_bbox_bottom=0)) - - - diff --git a/spaces/bioriAsaeru/text-to-voice/Any Video Converter 6.3.8 Crack Plus Serial Key 2020 Free.md b/spaces/bioriAsaeru/text-to-voice/Any Video Converter 6.3.8 Crack Plus Serial Key 2020 Free.md deleted file mode 100644 index 99b47d32305c886321d9136cf93b0e3441194cd2..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Any Video Converter 6.3.8 Crack Plus Serial Key 2020 Free.md +++ /dev/null @@ -1,6 +0,0 @@ -

Any Video Converter 6.3.8 Crack Plus Serial Key 2020 Free

Download: https://urloso.com/2uyQaJ

Movavi Video Converter Crack Full Version Torrent Free 2020. Any Video Converter Ultimate 6.3.8 Crack With Full License Key Free Download. Freemake Video Converter Key plus Full Crack Gold Version 2020.

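Returning to the Streamlit table-extraction app deleted above: its `create_dataframe` calls a `uniquify` helper that is referenced but never defined in this hunk. A minimal sketch consistent with how it is called — empty or duplicated headers get letter suffixes, which the later `f' {x!s}' == col` check then detects — might look like the following; the helper in the original file may differ in details:

```python
import string

def uniquify(seq, suffixes):
    """De-duplicate header names by appending suffixes to repeats.

    `seq` is the raw header row; `suffixes` is an iterator of suffix
    strings (here ' a', ' b', ...). Sketch only, inferred from the
    call site in `create_dataframe`.
    """
    suffixes = iter(suffixes)
    seen = set()
    out = []
    for name in seq:
        new_name = name
        while new_name == '' or new_name in seen:
            new_name = name + next(suffixes)
        seen.add(new_name)
        out.append(new_name)
    return out

# Empty and duplicate headers receive bare letter suffixes:
print(uniquify(['', 'qty', 'qty'], (f' {x!s}' for x in string.ascii_lowercase)))
# -> [' a', 'qty', 'qty b']
```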
diff --git a/spaces/bioriAsaeru/text-to-voice/Autodesk Vehicle Tracking 2020 (x64) Crack [Latest].md b/spaces/bioriAsaeru/text-to-voice/Autodesk Vehicle Tracking 2020 (x64) Crack [Latest].md deleted file mode 100644 index 023c778a50e07c2a760f17a50ef041780e7b0640..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Autodesk Vehicle Tracking 2020 (x64) Crack [Latest].md +++ /dev/null @@ -1,6 +0,0 @@ -

Autodesk Vehicle Tracking 2020 (x64) Crack [Latest]

Download: https://urloso.com/2uyRck

Autodesk Vehicle Tracking 2021 (x64) with Crack | 4HowCrack. June 2020. Autodesk Vehicle Tracking Crack Free Download: specialized programs to simulate ...

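One genuine bug worth flagging in the table-extraction app's `clean_dataframe`: `Series.str.replace('[', '', regex=True)` raises `re.error: unterminated character set`, because a lone `[` opens a regex character class (`]`, `{` and `}` happen to compile as literals; `[` does not). Passing `regex=False` treats every token literally and sidesteps the problem; a corrected sketch:

```python
import pandas as pd

def clean_dataframe(df: pd.DataFrame) -> pd.DataFrame:
    """Strip stray tesseract symbols; regex=False means literal replacement."""
    for token in ("'", '"', '[', ']', '{', '}'):
        for col in df.columns:
            df[col] = df[col].str.replace(token, '', regex=False)
    return df

df = pd.DataFrame({'a': ["va[l'ue"]})
print(clean_dataframe(df)['a'][0])  # -> value
```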
diff --git a/spaces/bioriAsaeru/text-to-voice/Hindi Movie Lafangey Parindey Full Movie Hd 1080p Watch the Inspiring Story of Two Friends on Skates.md b/spaces/bioriAsaeru/text-to-voice/Hindi Movie Lafangey Parindey Full Movie Hd 1080p Watch the Inspiring Story of Two Friends on Skates.md deleted file mode 100644 index 58053e5d58c4fc8fa672a74b090ad2fe269fde82..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Hindi Movie Lafangey Parindey Full Movie Hd 1080p Watch the Inspiring Story of Two Friends on Skates.md +++ /dev/null @@ -1,6 +0,0 @@ -

Hindi Movie Lafangey Parindey Full Movie Hd 1080p

Download: https://urloso.com/2uyS3q

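`start_process` in the same app builds a list of `pytess(img)` coroutines and awaits them with `asyncio.gather`, but `pytess` itself lives outside this hunk. A plausible stand-in, assuming it wraps a blocking pytesseract call in a worker thread so cell crops are OCR'd concurrently (requires Python 3.9+ for `asyncio.to_thread`):

```python
import asyncio

import pytesseract  # assumed OCR backend; the original app may configure it differently
from PIL import Image

async def pytess(cell_img: Image.Image) -> str:
    # Run the blocking tesseract call in a worker thread so the event
    # loop can overlap many cell crops at once via asyncio.gather.
    text = await asyncio.to_thread(
        pytesseract.image_to_string, cell_img, config="--psm 6"
    )
    return text.strip()

async def ocr_cells(cell_images):
    return await asyncio.gather(*(pytess(img) for img in cell_images))

# Usage sketch: texts = asyncio.run(ocr_cells(list_of_pil_crops))
```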
diff --git a/spaces/bioriAsaeru/text-to-voice/Ifinger Dictionary Licence Key Crack A Comprehensive Review.md b/spaces/bioriAsaeru/text-to-voice/Ifinger Dictionary Licence Key Crack A Comprehensive Review.md deleted file mode 100644 index b10fb5264536c3625ad78b94a2af38813aad5f37..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Ifinger Dictionary Licence Key Crack A Comprehensive Review.md +++ /dev/null @@ -1,6 +0,0 @@ -

Ifinger Dictionary Licence Key Crack

Download: https://urloso.com/2uyRN5

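Likewise, `add_padding` is invoked in the table-extraction app as `self.add_padding(table, padd_top, padd_right, padd_bottom, padd_left)` but defined outside this hunk. A sketch matching that argument order, with the white fill color as an assumption:

```python
from PIL import Image

def add_padding(pil_img, top, right, bottom, left, color=(255, 255, 255)):
    """Pad a PIL image with a solid margin on each side (sketch only)."""
    width, height = pil_img.size
    padded = Image.new(pil_img.mode, (width + left + right, height + top + bottom), color)
    padded.paste(pil_img, (left, top))
    return padded
```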
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/utils/serialize.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/utils/serialize.py deleted file mode 100644 index 0b38862804b70cf1159a9bc93acdef73c184d883..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/utils/serialize.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import cloudpickle - - -class PicklableWrapper(object): - """ - Wrap an object to make it more picklable, note that it uses - heavy weight serialization libraries that are slower than pickle. - It's best to use it only on closures (which are usually not picklable). - - This is a simplified version of - https://github.com/joblib/joblib/blob/master/joblib/externals/loky/cloudpickle_wrapper.py - """ - - def __init__(self, obj): - while isinstance(obj, PicklableWrapper): - # Wrapping an object twice is no-op - obj = obj._obj - self._obj = obj - - def __reduce__(self): - s = cloudpickle.dumps(self._obj) - return cloudpickle.loads, (s,) - - def __call__(self, *args, **kwargs): - return self._obj(*args, **kwargs) - - def __getattr__(self, attr): - # Ensure that the wrapped object can be used seamlessly as the previous object. - if attr not in ["_obj"]: - return getattr(self._obj, attr) - return getattr(self, attr) diff --git a/spaces/caslabs/midi-autocompletion/musicautobot/music_transformer/learner.py b/spaces/caslabs/midi-autocompletion/musicautobot/music_transformer/learner.py deleted file mode 100644 index 49efebd8ebc173c453ef0ae5b1a82f25ca04dfa2..0000000000000000000000000000000000000000 --- a/spaces/caslabs/midi-autocompletion/musicautobot/music_transformer/learner.py +++ /dev/null @@ -1,171 +0,0 @@ -from fastai.basics import * -from fastai.text.learner import LanguageLearner, get_language_model, _model_meta -from .model import * -from .transform import MusicItem -from ..numpy_encode import SAMPLE_FREQ -from ..utils.top_k_top_p import top_k_top_p -from ..utils.midifile import is_empty_midi - -_model_meta[MusicTransformerXL] = _model_meta[TransformerXL] # copy over fastai's model metadata - -def music_model_learner(data:DataBunch, arch=MusicTransformerXL, config:dict=None, drop_mult:float=1., - pretrained_path:PathOrStr=None, **learn_kwargs) -> 'LanguageLearner': - "Create a `Learner` with a language model from `data` and `arch`." - meta = _model_meta[arch] - - if pretrained_path: - state = torch.load(pretrained_path, map_location='cpu') - if config is None: config = state['config'] - - model = get_language_model(arch, len(data.vocab.itos), config=config, drop_mult=drop_mult) - learn = MusicLearner(data, model, split_func=meta['split_lm'], **learn_kwargs) - - if pretrained_path: - get_model(model).load_state_dict(state['model'], strict=False) - if not hasattr(learn, 'opt'): learn.create_opt(defaults.lr, learn.wd) - try: learn.opt.load_state_dict(state['opt']) - except: pass - del state - gc.collect() - - return learn - -# Predictions -from fastai import basic_train # for predictions -class MusicLearner(LanguageLearner): - def save(self, file:PathLikeOrBinaryStream=None, with_opt:bool=True, config=None): - "Save model and optimizer state (if `with_opt`) with `file` to `self.model_dir`. 
`file` can be file-like (file or buffer)" - out_path = super().save(file, return_path=True, with_opt=with_opt) - if config and out_path: - state = torch.load(out_path) - state['config'] = config - torch.save(state, out_path) - del state - gc.collect() - return out_path - - def beam_search(self, xb:Tensor, n_words:int, top_k:int=10, beam_sz:int=10, temperature:float=1., - ): - "Return the `n_words` that come after `text` using beam search." - self.model.reset() - self.model.eval() - xb_length = xb.shape[-1] - if xb.shape[0] > 1: xb = xb[0][None] - yb = torch.ones_like(xb) - - nodes = None - xb = xb.repeat(top_k, 1) - nodes = xb.clone() - scores = xb.new_zeros(1).float() - with torch.no_grad(): - for k in progress_bar(range(n_words), leave=False): - out = F.log_softmax(self.model(xb)[0][:,-1], dim=-1) - values, indices = out.topk(top_k, dim=-1) - scores = (-values + scores[:,None]).view(-1) - indices_idx = torch.arange(0,nodes.size(0))[:,None].expand(nodes.size(0), top_k).contiguous().view(-1) - sort_idx = scores.argsort()[:beam_sz] - scores = scores[sort_idx] - nodes = torch.cat([nodes[:,None].expand(nodes.size(0),top_k,nodes.size(1)), - indices[:,:,None].expand(nodes.size(0),top_k,1),], dim=2) - nodes = nodes.view(-1, nodes.size(2))[sort_idx] - self.model[0].select_hidden(indices_idx[sort_idx]) - xb = nodes[:,-1][:,None] - if temperature != 1.: scores.div_(temperature) - node_idx = torch.multinomial(torch.exp(-scores), 1).item() - return [i.item() for i in nodes[node_idx][xb_length:] ] - - def predict(self, item:MusicItem, n_words:int=128, - temperatures:float=(1.0,1.0), min_bars=4, - top_k=30, top_p=0.6): - "Return the `n_words` that come after `text`." - self.model.reset() - new_idx = [] - vocab = self.data.vocab - x, pos = item.to_tensor(), item.get_pos_tensor() - last_pos = pos[-1] if len(pos) else 0 - y = torch.tensor([0]) - - start_pos = last_pos - - sep_count = 0 - bar_len = SAMPLE_FREQ * 4 # assuming 4/4 time - vocab = self.data.vocab - - repeat_count = 0 - if hasattr(self.model[0], 'encode_position'): - encode_position = self.model[0].encode_position - else: encode_position = False - - for i in progress_bar(range(n_words), leave=True): - with torch.no_grad(): - if encode_position: - batch = { 'x': x[None], 'pos': pos[None] } - logits = self.model(batch)[0][-1][-1] - else: - logits = self.model(x[None])[0][-1][-1] - - prev_idx = new_idx[-1] if len(new_idx) else vocab.pad_idx - - # Temperature - # Use first temperatures value if last prediction was duration - temperature = temperatures[0] if vocab.is_duration_or_pad(prev_idx) else temperatures[1] - repeat_penalty = max(0, np.log((repeat_count+1)/4)/5) * temperature - temperature += repeat_penalty - if temperature != 1.: logits = logits / temperature - - - # Filter - # bar = 16 beats - filter_value = -float('Inf') - if ((last_pos - start_pos) // 16) <= min_bars: logits[vocab.bos_idx] = filter_value - - logits = filter_invalid_indexes(logits, prev_idx, vocab, filter_value=filter_value) - logits = top_k_top_p(logits, top_k=top_k, top_p=top_p, filter_value=filter_value) - - # Sample - probs = F.softmax(logits, dim=-1) - idx = torch.multinomial(probs, 1).item() - - # Update repeat count - num_choices = len(probs.nonzero().view(-1)) - if num_choices <= 2: repeat_count += 1 - else: repeat_count = repeat_count // 2 - - if prev_idx==vocab.sep_idx: - duration = idx - vocab.dur_range[0] - last_pos = last_pos + duration - - bars_pred = (last_pos - start_pos) // 16 - abs_bar = last_pos // 16 - # if (bars % 8 == 0) and (bars_pred > min_bars): 
break - if (i / n_words > 0.80) and (abs_bar % 4 == 0): break - - - if idx==vocab.bos_idx: - print('Predicted BOS token. Returning prediction...') - break - - new_idx.append(idx) - x = x.new_tensor([idx]) - pos = pos.new_tensor([last_pos]) - - pred = vocab.to_music_item(np.array(new_idx)) - full = item.append(pred) - return pred, full - -# High level prediction functions from midi file -def predict_from_midi(learn, midi=None, n_words=400, - temperatures=(1.0,1.0), top_k=30, top_p=0.6, seed_len=None, **kwargs): - vocab = learn.data.vocab - seed = MusicItem.from_file(midi, vocab) if not is_empty_midi(midi) else MusicItem.empty(vocab) - if seed_len is not None: seed = seed.trim_to_beat(seed_len) - - pred, full = learn.predict(seed, n_words=n_words, temperatures=temperatures, top_k=top_k, top_p=top_p, **kwargs) - return full - -def filter_invalid_indexes(res, prev_idx, vocab, filter_value=-float('Inf')): - if vocab.is_duration_or_pad(prev_idx): - res[list(range(*vocab.dur_range))] = filter_value - else: - res[list(range(*vocab.note_range))] = filter_value - return res diff --git a/spaces/ceckenrode/PunctuationTokenClassification/README.md b/spaces/ceckenrode/PunctuationTokenClassification/README.md deleted file mode 100644 index 5168b35ed38b28d5eac4e223ec7287bfe6d818d4..0000000000000000000000000000000000000000 --- a/spaces/ceckenrode/PunctuationTokenClassification/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: PunctuationTokenClassification -emoji: 🚀 -colorFrom: pink -colorTo: gray -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/chaowei100/ChatGPT_Taiyi-Stable-Diffusion/functional.py b/spaces/chaowei100/ChatGPT_Taiyi-Stable-Diffusion/functional.py deleted file mode 100644 index eccc0ac251784f4611c60ae754194448fca2e9e8..0000000000000000000000000000000000000000 --- a/spaces/chaowei100/ChatGPT_Taiyi-Stable-Diffusion/functional.py +++ /dev/null @@ -1,70 +0,0 @@ -# 'primary' 颜色对应 theme.py 中的 primary_hue -# 'secondary' 颜色对应 theme.py 中的 neutral_hue -# 'stop' 颜色对应 theme.py 中的 color_er -# 默认按钮颜色是 secondary -from toolbox import clear_line_break - -def get_functionals(): - return { - "英语学术润色": { - # 前言 - "Prefix": r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, " + - r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. " + - r"Furthermore, list all modification and explain the reasons to do so in markdown table." + "\n\n", - # 后语 - "Suffix": r"", - "Color": r"secondary", # 按钮颜色 - }, - "中文学术润色": { - "Prefix": r"作为一名中文学术论文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性," + - r"同时分解长句,减少重复,并提供改进建议。请只提供文本的更正版本,避免包括解释。请编辑以下文本" + "\n\n", - "Suffix": r"", - }, - "查找语法错误": { - "Prefix": r"Can you help me ensure that the grammar and the spelling is correct? " + - r"Do not try to polish the text, if no mistake is found, tell me that this paragraph is good." + - r"If you find grammar or spelling mistakes, please list mistakes you find in a two-column markdown table, " + - r"put the original text the first column, " + - r"put the corrected text in the second column and highlight the key words you fixed.""\n" - r"Example:""\n" - r"Paragraph: How is you? Do you knows what is it?""\n" - r"| Original sentence | Corrected sentence |""\n" - r"| :--- | :--- |""\n" - r"| How **is** you? | How **are** you? |""\n" - r"| Do you **knows** what **is** **it**? 
| Do you **know** what **it** **is** ? |""\n" - r"Below is a paragraph from an academic paper. " - r"You need to report all grammar and spelling mistakes as the example before." - + "\n\n", - "Suffix": r"", - "PreProcess": clear_line_break, # 预处理:清除换行符 - }, - "中译英": { - "Prefix": r"Please translate following sentence to English:" + "\n\n", - "Suffix": r"", - }, - "学术中英互译": { - "Prefix": r"I want you to act as a scientific English-Chinese translator, " + - r"I will provide you with some paragraphs in one language " + - r"and your task is to accurately and academically translate the paragraphs only into the other language. " + - r"Do not repeat the original provided paragraphs after translation. " + - r"You should use artificial intelligence tools, " + - r"such as natural language processing, and rhetorical knowledge " + - r"and experience about effective writing techniques to reply. " + - r"I'll give you my paragraphs as follows, tell me what language it is written in, and then translate:" + "\n\n", - "Suffix": "", - "Color": "secondary", - }, - "英译中": { - "Prefix": r"请翻译成中文:" + "\n\n", - "Suffix": r"", - }, - "找图片": { - "Prefix": r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL," + - r"然后请使用Markdown格式封装,并且不要有反斜线,不要用代码块。现在,请按以下描述给我发送图片:" + "\n\n", - "Suffix": r"", - }, - "解释代码": { - "Prefix": r"请解释以下代码:" + "\n```\n", - "Suffix": "\n```\n", - }, - } diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/docs/assignment_visualization.md b/spaces/chendl/compositional_test/multimodal/YOLOX/docs/assignment_visualization.md deleted file mode 100644 index 4bc7791f92ad58f7071d25bb668a18d144a4b6c4..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/YOLOX/docs/assignment_visualization.md +++ /dev/null @@ -1,29 +0,0 @@ -# Visualize label assignment - -This tutorial explains how to visualize your label asssignment result when training with YOLOX. - -## 1. Visualization command - -We provide a visualization tool to help you visualize your label assignment result. You can find it in [`tools/visualize_assignment.py`](../tools/visualize_assign.py). - -Here is an example of command to visualize your label assignment result: - -```shell -python3 tools/visualize_assign.py -f /path/to/your/exp.py yolox-s -d 1 -b 8 --max-batch 2 -``` - -`max-batch` here means the maximum number of batches to visualize. The default value is 1, which the tool means only visualize the first batch. - -By the way, the mosaic augmentation is used in default dataloader, so you can also see the mosaic result here. - -After running the command, the logger will show you where the visualization result is saved, let's open it and into the step 2. - -## 2. Check the visualization result - -Here is an example of visualization result: -
- -Those dots in one box is the matched anchor of gt box. **The color of dots is the same as the color of the box** to help you determine which object is assigned to the anchor. Note the box and dots are **instance level** visualization, which means the same class may have different colors. -**If the gt box doesn't match any anchor, the box will be marked as red and the red text "unmatched" will be drawn over the box**. - -Please feel free to open an issue if you have any questions. diff --git a/spaces/chendl/compositional_test/multimodal/open_flamingo/src/flamingo.py b/spaces/chendl/compositional_test/multimodal/open_flamingo/src/flamingo.py deleted file mode 100644 index da54f8f02a046fad7dfcfe32fb59092b24d2f9da..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/open_flamingo/src/flamingo.py +++ /dev/null @@ -1,637 +0,0 @@ -import torch -import torchvision -from einops import rearrange -from torch import nn -from yolox.models.yolo_head import YOLOXHead -from yolox.utils.boxes import xyxy2cxcywh, cxcywh2xyxy -from yolox.utils.demo_utils import nms -# import matplotlib.pyplot as plt -# import seaborn as sns -import numpy as np -import logging -from open_flamingo.src.gcn import GCN -from transformers import LogitsProcessorList -logging.basicConfig( - level=logging.INFO, - format='%(asctime)s %(message)s', - datefmt='%m/%d %I:%M:%S', -) - - -# class PositionEncodingModule(nn.Module): -# def __init__(self, dim, pos_dim=128): -# super().__init__() -# self.encode = nn.Sequential( -# nn.Linear(5, pos_dim // 2), -# nn.BatchNorm1d(pos_dim // 2), -# nn.GELU(), -# nn.Linear(pos_dim // 2, pos_dim), -# nn.BatchNorm1d(pos_dim), -# nn.GELU(), -# ) -# self.merge = nn.Sequential( -# nn.Linear(dim + pos_dim, dim), -# nn.BatchNorm1d(dim), -# nn.GELU(), -# ) - -# def forward(self, x, box): -# box = self.encode(box) -# x = torch.cat([x, box], dim=-1) -# x = self.merge(x) -# return x - - -# class PositionEncodingModule(nn.Module): -# def __init__(self, dim): -# super().__init__() -# self.encode = nn.Sequential( -# nn.Linear(5, dim), -# nn.GELU(), -# ) - -# def forward(self, x, box): -# box = self.encode(box) -# x = x + box -# return x - - -# class PositionEncodingModule2(nn.Module): -# def __init__(self, dim): -# super().__init__() -# self.encode = nn.Sequential( -# nn.Linear(5 + dim, dim), -# nn.ELU(), -# ) - -# def forward(self, x, box): -# x = torch.cat([x, box], dim=-1) -# x = self.encode(x) -# return x - - -# class RelationHead(nn.Module): -# def __init__(self, dim): -# super().__init__() -# self.encode = nn.Sequential( -# nn.LayerNorm(dim), -# nn.Linear(dim, 128), -# nn.ELU(), -# ) -# self.classifier = nn.Linear(256, 51) - -# def forward(self, x1, x2): -# x1 = self.encode(x1) -# x2 = self.encode(x2) -# x = torch.cat([x1, x2], dim=-1) -# x = self.classifier(x) -# return x - - -class Flamingo(nn.Module): - def __init__( - self, - vision_encoder: nn.Module, - lang_encoder: nn.Module, - eoc_token_id: int, - media_token_id: int, - image_end_token_id: int, - visual_token_id: int, - previsual_token_id: int, - box_token_id: int, - prebox_token_id: int, - nothing_token_id: int, - endofobject_token_id: int, - vis_dim: int, - vis_embed_size: int, - lang_dim: int, - hidden_state_dim: int, - image_size: int, - patch_size: int, - use_media_placement_augmentation: bool = False, - add_visual_token: bool = False, - add_pe: bool = False, - add_relation: bool = False, - use_format_v2: bool = False, - roi_align: bool = False, - roi_output_size: int = 4, - apply_mask: bool = False, - 
): - """ - Args: - vision_encoder (nn.Module): HF CLIPModel - lang_encoder (nn.Module): HF causal language model - eoc_token_id (int): Token id for eos token - media_token_id (int): Token id for <|#image#|> - vis_dim (int): Dimension of the visual features. - Visual features are projected to match this shape along the last dimension. - cross_attn_every_n_layers (int, optional): How often to apply cross attention after transformer layer. Defaults to 1. - use_media_placement_augmentation (bool, optional): Whether to randomly assign images to the preceding or following text in training. Defaults to False. - """ - super().__init__() - self.image_end_token_id = image_end_token_id - self.eoc_token_id = eoc_token_id - self.media_token_id = media_token_id - self.use_media_placement_augmentation = use_media_placement_augmentation - self.vis_dim = vis_dim - self.lang_dim = lang_dim - # inner_dim = self.lang_dim * 4 - # self.vis_proj = nn.Sequential( - # nn.LayerNorm(self.vis_dim), - # nn.Linear(self.vis_dim, inner_dim, bias=False), - # nn.GELU(), - # nn.Linear(inner_dim, self.lang_dim, bias=False), - # ) - self.vis_proj = nn.Linear(self.vis_dim, self.lang_dim) - self.vision_encoder = vision_encoder - self.num_positions = vis_embed_size - self.lang_encoder = lang_encoder - self.lang_encoder.init_flamingo( - media_token_id=media_token_id, - use_media_placement_augmentation=self.use_media_placement_augmentation, - ) - first_layer = self.lang_encoder._get_decoder_layers()[0] - first_layer.add_visual_token = add_visual_token - first_layer.visual_token_id = visual_token_id - first_layer.media_token_id = media_token_id - first_layer.box_token_id = box_token_id - # first_layer.pos_enc = PositionEncodingModule(self.lang_dim) if add_pe else None - # assert not (add_pe and add_relation) - # self.pos_enc = PositionEncodingModule(self.lang_dim) if add_pe else None - # first_layer.pos_enc = self.pos_enc - self.box_token_id = box_token_id - self.prebox_token_id = prebox_token_id - self.media_token_id = media_token_id - self.visual_token_id = visual_token_id - self.previsual_token_id = previsual_token_id - self.hidden_state_dim = hidden_state_dim - self.image_size = image_size - self.patch_size = patch_size - self.patch_num = self.image_size // self.patch_size - self.detection_head = YOLOXHead( - num_classes=1, - strides=[patch_size], - in_channels=[self.hidden_state_dim + self.lang_dim], - ) - self.use_format_v2 = use_format_v2 - self.nothing_token_id = nothing_token_id - self.roi_align = roi_align - self.roi_output_size = roi_output_size if roi_align else None - self.apply_mask = apply_mask - self.endofobject_token_id = endofobject_token_id - - - def _get_detection_batch( - self, - visual_token_id, - previsual_token_id, - input_ids: torch.Tensor, - hidden_states: torch.Tensor, - added_bbox_list, - box_num = 100, - ): - select_mask = torch.logical_or(input_ids == visual_token_id, input_ids == previsual_token_id) - visual_token_position = select_mask.nonzero() - visual_token_hidden_states = hidden_states[select_mask] - prev_batch_idx = -1 - media_idx = [] - cnt = 0 - assert len(visual_token_hidden_states) == len(visual_token_position) - if len(added_bbox_list) != len(visual_token_position): - msg = f"ERROR: {len(added_bbox_list)}:{len(visual_token_position)}\n{added_bbox_list}\n{visual_token_position}" - logging.info(msg) - alpha = 0.0 - else: - alpha = 1.0 - visual_batches = [] - previsual_batches = [] - for (batch_idx, idx), visual_token_hidden_state, bbox in zip( - visual_token_position, 
visual_token_hidden_states, added_bbox_list, - ): - # ! VERY IMPORTANT BUG ! - bbox = bbox.clone() - # ! VERY IMPORTANT BUG ! - batch_idx = batch_idx.item() - idx = idx.item() - if batch_idx != prev_batch_idx: - prev_batch_idx = batch_idx - this_input_ids = input_ids[batch_idx] - cnt += len(media_idx) - media_idx = (this_input_ids == self.media_token_id).nonzero().reshape(-1).tolist() - for i in range(len(media_idx)): - if i == len(media_idx) - 1 or idx > media_idx[i] and idx < media_idx[i+1]: - break - image_index = cnt + i - size = int(self.image_embedding[image_index].shape[0] ** 0.5) - image_embedding = self.image_embedding[image_index] - # inplace xyxy2cxcywh - # print(bbox) - # TODO: CHECK self.image_size. Is it 224? - bbox = xyxy2cxcywh(bbox) * self.image_size - # print(bbox) - concat_image_visual_embedding = torch.cat([image_embedding, visual_token_hidden_state.unsqueeze(0).repeat(image_embedding.shape[0], 1)], dim=-1).reshape(size, size, -1) - label = torch.cat([torch.zeros(bbox.shape[0], 1, device=bbox.device), bbox], dim=-1) - label = torch.cat([label, torch.zeros(box_num - label.shape[0], label.shape[1], device=label.device)], dim=0) - if input_ids[batch_idx, idx] == previsual_token_id: - previsual_batches.append([concat_image_visual_embedding, label]) - elif input_ids[batch_idx, idx] == visual_token_id: - visual_batches.append([concat_image_visual_embedding, label]) - else: - logging.info(f"WARNING... NOT visual nor previsual. it is {input_ids[batch_idx, idx]}") - return visual_batches, previsual_batches, alpha, alpha - - def get_detection_losses( - self, - input_ids: torch.Tensor, - hidden_states: torch.Tensor, - added_bbox_list, - box_num = 100, - ): - visual_token_batches, previsual_token_batches, alpha1, alpha2 = self._get_detection_batch( - visual_token_id=self.visual_token_id, - previsual_token_id=self.previsual_token_id, - input_ids=input_ids, - hidden_states=hidden_states, - added_bbox_list=added_bbox_list, - box_num=box_num, - ) - loss_dict = [] - for batches, alpha in zip([visual_token_batches, previsual_token_batches], [alpha1, alpha2]): - # x: [B, C, H, W] - if len(batches) != 0: - x = torch.cat([batch[0].unsqueeze(0) for batch in batches], dim=0).permute(0,3,1,2) - labels = torch.cat([batch[1].unsqueeze(0) for batch in batches], dim=0) - else: - x = None - labels = None - if x is not None: - losses = self.detection_head(xin=[x], labels=labels) - loss, loss_iou, loss_obj, loss_cls, loss_l1, _ = losses - else: - loss = torch.tensor(0.0).cuda() - loss_iou = loss - loss_obj = loss - loss_cls = loss - loss_l1 = loss - - loss_dict.append(dict( - loss=loss * alpha, - loss_iou=loss_iou * alpha, - loss_obj=loss_obj * alpha, - loss_cls=loss_cls * alpha, - loss_l1=loss_l1 * alpha, - )) - ret_loss = {} - for key in loss_dict[0].keys(): - ret_loss[key] = 0.0 - for d in loss_dict: - ret_loss[key] += d[key] - return ret_loss, loss_dict - - def get_detection_result( - self, - input_ids: torch.Tensor, - hidden_states: torch.Tensor, - nms_thr: float = 0.45, - score_thr: float = 0.01, - debug_id: int = 0, - debug_mode: bool = False, - ): - assert len(input_ids) == 1, "only batch size = 1 is supported yet" - # assert len(self.image_embedding) == 1, "only one image is supported yet" - # assert (input_ids[..., -1] == self.visual_token_id).all(), "the last token should be visual token" - visual_token_hidden_state = hidden_states[..., -1, :] - boxes_list = [] - scores_list = [] - for image_embedding in self.image_embedding: - size = int(image_embedding.shape[0] ** 0.5) - x = 
torch.cat([image_embedding, visual_token_hidden_state.repeat(image_embedding.shape[0], 1)], dim=-1).reshape(size, size, -1).unsqueeze(0).permute(0,3,1,2) - with torch.no_grad(): - outputs = self.detection_head(xin=[x], labels=None) - boxes = outputs[0,:,:4].cpu().numpy() - scores = outputs[0,:,4].cpu().numpy() - scores_mask = scores > score_thr - boxes = boxes[scores_mask] - boxes = cxcywh2xyxy(boxes) - scores = scores[scores_mask] - keep = nms(boxes, scores, nms_thr=nms_thr) - boxes = boxes[keep] - scores = scores[keep] - if debug_mode: - obj_heatmap = outputs[0,:, -2].reshape(size, size).cpu().numpy() - import matplotlib.pyplot as plt - import seaborn as sns - plt.figure() - sns_plot = sns.heatmap(obj_heatmap) - plt.savefig(f"heatmap_{debug_id}.jpg") - debug_id += 1 - boxes_list.append(boxes) - scores_list.append(scores) - if len(boxes_list) == 1: - boxes_list = boxes_list[0] - scores_list = scores_list[0] - return boxes_list, scores_list - - def _condition_attention(self, loc_list = None): - for i in range(len(self.lang_encoder.gpt_neox.layers)): - self.lang_encoder.gpt_neox.layers[i].decoder_layer.attention.loc_list = loc_list - - def forward( - self, - vision_x: torch.Tensor, - lang_x: torch.Tensor, - attention_mask: torch.Tensor = None, - labels: torch.Tensor = None, - use_cached_vision_x: bool = False, - clear_conditioned_layers: bool = True, - past_key_values=None, - use_cache: bool = False, - image_nums=None, - image_start_index_list=None, - added_bbox_list=None, - add_box: bool = False, - relations=None, - debug_mode: bool = False, - ): - """ - Forward pass of Flamingo. - - Args: - vision_x (torch.Tensor): Vision input - shape (B, T_img, F, C, H, W) with F=1 - lang_x (torch.Tensor): Language input ids - shape (B, T_txt) - attention_mask (torch.Tensor, optional): Attention mask. Defaults to None. - labels (torch.Tensor, optional): Labels. Defaults to None. - clear_conditioned_layers: if True, clear the conditioned layers - once the foward pass is completed. Set this to false if the - same set of images will be reused in another subsequent - forward pass. - past_key_values: pre-computed values to pass to language model. - See past_key_values documentation in Hugging Face - CausalLM models. - use_cache: whether to use cached key values. See use_cache - documentation in Hugging Face CausalLM models. - """ - self.valid = True - self.lang_encoder.loc_list = None - if use_cached_vision_x: - # Case: use cached; vision_x should be cached and other - # vision-related inputs should not be provided. - assert ( - vision_x is None - ), "Expect vision_x to be None when use_cached_vision_x is True." - assert self.lang_encoder.is_conditioned() - else: - # Case: do not use caching (i.e. 
this is a standard forward pass); - self._encode_vision_x( - vision_x=vision_x, - image_nums=image_nums, - image_start_index_list=image_start_index_list, - added_bbox_list=added_bbox_list if add_box else None, - input_ids=lang_x, - relations=relations, - ) - if self.apply_mask: - if self.roi_align: - attend_length = 1 + self.roi_output_size ** 2 - else: - attend_length = 2 - prebox_loc = (lang_x == self.prebox_token_id).nonzero() - loc_list = [] - for (x, y) in prebox_loc: - x = x.item() - y = y.item() - for yy in range(y+1, lang_x.shape[1]): - if lang_x[x, yy] == self.endofobject_token_id: - # [batch_idx, [previsual:prebox], [object:endofobject-1]] - loc_list.append([x, [y-attend_length+1, y], [y+1, yy-1]]) - self._condition_attention(loc_list=loc_list) - else: - self._condition_attention(None) - - output = self.lang_encoder( - input_ids=lang_x, - attention_mask=attention_mask, - labels=labels, - past_key_values=past_key_values, - use_cache=use_cache, - output_hidden_states=True, - ) - if vision_x is None: - output['loss'][0] += 0.0 * self.vis_proj(self.vision_encoder.visual(torch.randn(1, 3, 224, 224, device=lang_x.device, dtype=output['loss'].dtype))[1]).mean() - - hidden_states = output["hidden_states"][-1] - if self.training and added_bbox_list is not None: - detection_losses, loss_dict = self.get_detection_losses( - input_ids=lang_x, - hidden_states=hidden_states, - added_bbox_list=added_bbox_list, - ) - output["detection_losses"] = detection_losses - output["loss_dict"] = loss_dict - elif labels is None: - boxes, scores = self.get_detection_result( - input_ids=lang_x, - hidden_states=hidden_states, - debug_id=self.debug_id if hasattr(self, "debug_id") else None, - debug_mode=debug_mode, - ) - output["boxes"] = boxes - output["scores"] = scores - - if clear_conditioned_layers: - self.lang_encoder.clear_conditioned_layers() - self._condition_attention(None) - return output - - def generate( - self, - vision_x: torch.Tensor, - lang_x: torch.Tensor, - attention_mask: torch.Tensor = None, - added_bbox_list=None, - num_beams=1, - max_new_tokens=None, - temperature=1.0, - top_k=0, - top_p=1.0, - no_repeat_ngram_size=0, - prefix_allowed_tokens_fn=None, - length_penalty=1.0, - num_return_sequences=1, - do_sample=False, - early_stopping=False, - bad_words_ids=None, - force_words_ids=None, - image_start_index_list=None, - image_nums=None, - min_length=None, - return_dict_in_generate=False, - output_hidden_states=False, - output_scores=False, - logits_processor_list=None, - eos_token_id=None, - ): - """ - Generate text conditioned on vision and language inputs. - - Args: - vision_x (torch.Tensor): Vision input - shape (B, T_img, F, C, H, W) - images in the same chunk are collated along T_img, and frames are collated along F - currently only F=1 is supported (single-frame videos) - lang_x (torch.Tensor): Language input - shape (B, T_txt) - max_length (int, optional): Maximum length of the output. Defaults to None. - attention_mask (torch.Tensor, optional): Attention mask. Defaults to None. - num_beams (int, optional): Number of beams. Defaults to 1. - max_new_tokens (int, optional): Maximum new tokens. Defaults to None. - temperature (float, optional): Temperature. Defaults to 1.0. - top_k (int, optional): Top k. Defaults to 0. - top_p (float, optional): Top p. Defaults to 1.0. - no_repeat_ngram_size (int, optional): No repeat ngram size. Defaults to 0. - length_penalty (float, optional): Length penalty. Defaults to 1.0. - num_return_sequences (int, optional): Number of return sequences. 
Defaults to 1. - do_sample (bool, optional): Do sample. Defaults to False. - early_stopping (bool, optional): Early stopping. Defaults to False. - Returns: - torch.Tensor: lang_x with generated tokens appended to it - """ - if num_beams > 1: - vision_x = vision_x.repeat_interleave(num_beams, dim=0) - image_start_index_list = torch.tensor(image_start_index_list).repeat_interleave(num_beams, dim=0).tolist() - image_nums = torch.tensor(image_nums).repeat_interleave(num_beams, dim=0).tolist() - if added_bbox_list is not None and len(added_bbox_list) != 0: - added_bbox_list = added_bbox_list * num_beams - - self._encode_vision_x(vision_x=vision_x, image_nums=image_nums, image_start_index_list=image_start_index_list, num_beams=num_beams, added_bbox_list=added_bbox_list, input_ids=lang_x.repeat_interleave(num_beams, dim=0)) - - if logits_processor_list is not None: - assert isinstance(logits_processor_list, list) - logits_processor_list = LogitsProcessorList(logits_processor_list) - output = self.lang_encoder.generate( - input_ids=lang_x, - attention_mask=attention_mask, - eos_token_id=(self.eoc_token_id) if eos_token_id is None else eos_token_id, - num_beams=num_beams, - max_new_tokens=max_new_tokens, - min_length=min_length, - length_penalty=length_penalty, - logits_processor=logits_processor_list, - return_dict_in_generate=return_dict_in_generate, - output_scores=output_scores, - ) - self.lang_encoder.clear_conditioned_layers() - return output - - def _get_data_list_and_visual_tokens( - self, - all_box_list, - box_token_id, - prebox_token_id, - input_ids, - vision_x, - nothing_embedding = None, - ): - box_locations = (torch.logical_or(input_ids == box_token_id, input_ids == prebox_token_id)).nonzero() - prev_batch_idx = -1 - media_idx = [] - cnt = 0 - data_list = [] - visual_tokens = [] - if len(all_box_list) != len(box_locations): - logging.info(f"WARNING. 
len(all_box_list) != len(box_locations) {len(all_box_list)} vs {len(box_locations)}") - self.valid = False - for III, (batch_idx, idx) in enumerate(box_locations): - batch_idx = batch_idx.item() - idx = idx.item() - if batch_idx != prev_batch_idx: - prev_batch_idx = batch_idx - this_input_ids = input_ids[batch_idx] - cnt += len(media_idx) - media_idx = (this_input_ids == self.media_token_id).nonzero().reshape(-1).tolist() - for i in range(len(media_idx)): - if i == len(media_idx) - 1 or idx > media_idx[i] and idx < media_idx[i+1]: - break - image_index = cnt + i - size = int(vision_x[image_index].shape[0] ** 0.5) - image_feature = vision_x[image_index].reshape(size, size, -1) - try: - raw_xyxy = all_box_list[III] - except: - logging.info("out of scope for all_box_list") - raw_xyxy = all_box_list[-1] - region_xyxy = np.array(raw_xyxy) * size - x1, y1, x2, y2 = region_xyxy.astype(int).clip(0, size-1).tolist() - x2 = max(x1, x2) - y2 = max(y1, y2) - if x1 + y1 + x2 + y2 == 0.0 and nothing_embedding is not None: - visual_token = nothing_embedding - else: - if self.roi_align: - visual_token = torchvision.ops.roi_align( - image_feature.permute(2, 0, 1).unsqueeze(0), - [torch.tensor(region_xyxy.astype(np.float32)).unsqueeze(0).cuda()], - output_size=self.roi_output_size, - spatial_scale=1.0, - ) - visual_token = visual_token.squeeze(0).flatten(1).permute(1, 0) - else: - visual_token = image_feature[y1:y2+1, x1:x2+1].reshape(-1, image_feature.shape[-1]).mean(0) - box = torch.tensor([0] + raw_xyxy, device=visual_token.device, dtype=visual_token.dtype) - data_list.append([visual_token, box, batch_idx, idx, i]) - visual_tokens.append(visual_token) - return data_list, visual_tokens - - def _encode_vision_x(self, vision_x: torch.Tensor, image_nums=None, image_start_index_list=None, added_bbox_list=None, num_beams=None, input_ids=None, relations=None): - """ - Compute media tokens from vision input by passing it through vision encoder and conditioning language model. 
- Args: - vision_x (torch.Tensor): Vision input - shape (B, T_img, F, C, H, W) - Images in the same chunk are collated along T_img, and frames are collated along F - Currently only F=1 is supported (single-frame videos) - - rearrange code based on https://github.com/dhansmair/flamingo-mini - """ - assert vision_x.ndim == 6, "vision_x should be of shape (b, T_img, F, C, H, W)" - b, T, F = vision_x.shape[:3] - assert F == 1, "Only single frame supported" - - vision_x = rearrange(vision_x, "b T F c h w -> (b T F) c h w") - if hasattr(self.vision_encoder, "visual"): - vision_x = self.vision_encoder.visual(vision_x)[1] - else: - vision_x = self.vision_encoder(vision_x).flatten(2).permute(0, 2, 1) - vision_x = rearrange(vision_x, "(b T F) v d -> b T F v d", b=b, T=T, F=F) - - # print(vision_x[0,0,0]) - # # DEBUG HERE - # if torch.distributed.get_rank() == 0: - # import pdb; pdb.set_trace() - # else: - # torch.distributed.barrier() - vision_x = vision_x.mean(2) - # vision_x = self.perceiver(vision_x) # reshapes to (b, T, n, d) - # vision_x = self.vis_proj(vision_x) + self.vis_position_embedding(self.vis_position_ids).unsqueeze(0) - vision_x = self.vis_proj(vision_x).squeeze(1) - self.image_embedding = vision_x - - data_list = None - visual_tokens = None - if added_bbox_list is not None and input_ids is not None: - all_box_list = added_bbox_list[0].tolist() - for list in added_bbox_list[1:]: - all_box_list.extend(list.tolist()) - data_list, visual_tokens = self._get_data_list_and_visual_tokens( - all_box_list=all_box_list, - box_token_id=self.box_token_id, - prebox_token_id=self.prebox_token_id, - input_ids=input_ids, - vision_x=vision_x, - nothing_embedding=self.lang_encoder.gpt_neox.embed_in(torch.tensor(self.nothing_token_id).to(self.lang_encoder.gpt_neox.embed_in.weight.device)) if self.nothing_token_id is not None else None, - ) - - first_layer = self.lang_encoder._get_decoder_layers()[0] - first_layer.condition_vis_x(vision_x, image_nums, image_start_index_list, num_beams=num_beams, visual_tokens=visual_tokens, data_list=[[d[2], d[3]] for d in data_list] if data_list is not None else data_list) diff --git a/spaces/chronopt-research/ViTExCo/src/losses.py b/spaces/chronopt-research/ViTExCo/src/losses.py deleted file mode 100644 index dd78f9226bdee39354fa8fb31a05e4aefeb9e55d..0000000000000000000000000000000000000000 --- a/spaces/chronopt-research/ViTExCo/src/losses.py +++ /dev/null @@ -1,277 +0,0 @@ -import torch -import torch.nn as nn -from src.utils import feature_normalize - - -### START### CONTEXTUAL LOSS #### -class ContextualLoss(nn.Module): - """ - input is Al, Bl, channel = 1, range ~ [0, 255] - """ - - def __init__(self): - super(ContextualLoss, self).__init__() - return None - - def forward(self, X_features, Y_features, h=0.1, feature_centering=True): - """ - X_features&Y_features are are feature vectors or feature 2d array - h: bandwidth - return the per-sample loss - """ - batch_size = X_features.shape[0] - feature_depth = X_features.shape[1] - - # to normalized feature vectors - if feature_centering: - X_features = X_features - Y_features.view(batch_size, feature_depth, -1).mean(dim=-1).unsqueeze(dim=-1).unsqueeze( - dim=-1 - ) - Y_features = Y_features - Y_features.view(batch_size, feature_depth, -1).mean(dim=-1).unsqueeze(dim=-1).unsqueeze( - dim=-1 - ) - X_features = feature_normalize(X_features).view( - batch_size, feature_depth, -1 - ) # batch_size * feature_depth * feature_size^2 - Y_features = feature_normalize(Y_features).view( - batch_size, feature_depth, -1 - ) # 
batch_size * feature_depth * feature_size^2 - - # conine distance = 1 - similarity - X_features_permute = X_features.permute(0, 2, 1) # batch_size * feature_size^2 * feature_depth - d = 1 - torch.matmul(X_features_permute, Y_features) # batch_size * feature_size^2 * feature_size^2 - - # normalized distance: dij_bar - d_norm = d / (torch.min(d, dim=-1, keepdim=True)[0] + 1e-5) # batch_size * feature_size^2 * feature_size^2 - - # pairwise affinity - w = torch.exp((1 - d_norm) / h) - A_ij = w / torch.sum(w, dim=-1, keepdim=True) - - # contextual loss per sample - CX = torch.mean(torch.max(A_ij, dim=1)[0], dim=-1) - return -torch.log(CX) - - -class ContextualLoss_forward(nn.Module): - """ - input is Al, Bl, channel = 1, range ~ [0, 255] - """ - - def __init__(self): - super(ContextualLoss_forward, self).__init__() - return None - - def forward(self, X_features, Y_features, h=0.1, feature_centering=True): - """ - X_features&Y_features are are feature vectors or feature 2d array - h: bandwidth - return the per-sample loss - """ - batch_size = X_features.shape[0] - feature_depth = X_features.shape[1] - - # to normalized feature vectors - if feature_centering: - X_features = X_features - Y_features.view(batch_size, feature_depth, -1).mean(dim=-1).unsqueeze(dim=-1).unsqueeze( - dim=-1 - ) - Y_features = Y_features - Y_features.view(batch_size, feature_depth, -1).mean(dim=-1).unsqueeze(dim=-1).unsqueeze( - dim=-1 - ) - X_features = feature_normalize(X_features).view( - batch_size, feature_depth, -1 - ) # batch_size * feature_depth * feature_size^2 - Y_features = feature_normalize(Y_features).view( - batch_size, feature_depth, -1 - ) # batch_size * feature_depth * feature_size^2 - - # conine distance = 1 - similarity - X_features_permute = X_features.permute(0, 2, 1) # batch_size * feature_size^2 * feature_depth - d = 1 - torch.matmul(X_features_permute, Y_features) # batch_size * feature_size^2 * feature_size^2 - - # normalized distance: dij_bar - d_norm = d / (torch.min(d, dim=-1, keepdim=True)[0] + 1e-5) # batch_size * feature_size^2 * feature_size^2 - - # pairwise affinity - w = torch.exp((1 - d_norm) / h) - A_ij = w / torch.sum(w, dim=-1, keepdim=True) - - # contextual loss per sample - CX = torch.mean(torch.max(A_ij, dim=-1)[0], dim=1) - return -torch.log(CX) - - -### END### CONTEXTUAL LOSS #### - - -########################## - - -def mse_loss_fn(input, target=0): - return torch.mean((input - target) ** 2) - - -### START### PERCEPTUAL LOSS ### -def Perceptual_loss(domain_invariant, weight_perceptual): - instancenorm = nn.InstanceNorm2d(512, affine=False) - - def __call__(A_relu5_1, predict_relu5_1): - if domain_invariant: - feat_loss = ( - mse_loss_fn(instancenorm(predict_relu5_1), instancenorm(A_relu5_1.detach())) * weight_perceptual * 1e5 * 0.2 - ) - else: - feat_loss = mse_loss_fn(predict_relu5_1, A_relu5_1.detach()) * weight_perceptual - return feat_loss - - return __call__ - - -### END### PERCEPTUAL LOSS ### - - -def l1_loss_fn(input, target=0): - return torch.mean(torch.abs(input - target)) - - -### END################# - - -### START### ADVERSIAL LOSS ### -def generator_loss_fn(real_data_lab, fake_data_lab, discriminator, weight_gan, device): - if weight_gan > 0: - y_pred_fake, _ = discriminator(fake_data_lab) - y_pred_real, _ = discriminator(real_data_lab) - - y = torch.ones_like(y_pred_real) - generator_loss = ( - ( - torch.mean((y_pred_real - torch.mean(y_pred_fake) + y) ** 2) - + torch.mean((y_pred_fake - torch.mean(y_pred_real) - y) ** 2) - ) - / 2 - * weight_gan - ) - return 
generator_loss - - return torch.Tensor([0]).to(device) - - -def discriminator_loss_fn(real_data_lab, fake_data_lab, discriminator): - y_pred_fake, _ = discriminator(fake_data_lab.detach()) - y_pred_real, _ = discriminator(real_data_lab.detach()) - - y = torch.ones_like(y_pred_real) - discriminator_loss = ( - torch.mean((y_pred_real - torch.mean(y_pred_fake) - y) ** 2) - + torch.mean((y_pred_fake - torch.mean(y_pred_real) + y) ** 2) - ) / 2 - return discriminator_loss - - -### END### ADVERSIAL LOSS ##### - - -def consistent_loss_fn( - I_current_lab_predict, - I_last_ab_predict, - I_current_nonlocal_lab_predict, - I_last_nonlocal_lab_predict, - flow_forward, - mask, - warping_layer, - weight_consistent=0.02, - weight_nonlocal_consistent=0.0, - device="cuda", -): - def weighted_mse_loss(input, target, weights): - out = (input - target) ** 2 - out = out * weights.expand_as(out) - return out.mean() - - def consistent(): - I_current_lab_predict_warp = warping_layer(I_current_lab_predict, flow_forward) - I_current_ab_predict_warp = I_current_lab_predict_warp[:, 1:3, :, :] - consistent_loss = weighted_mse_loss(I_current_ab_predict_warp, I_last_ab_predict, mask) * weight_consistent - return consistent_loss - - def nonlocal_consistent(): - I_current_nonlocal_lab_predict_warp = warping_layer(I_current_nonlocal_lab_predict, flow_forward) - nonlocal_consistent_loss = ( - weighted_mse_loss( - I_current_nonlocal_lab_predict_warp[:, 1:3, :, :], - I_last_nonlocal_lab_predict[:, 1:3, :, :], - mask, - ) - * weight_nonlocal_consistent - ) - - return nonlocal_consistent_loss - - consistent_loss = consistent() if weight_consistent else torch.Tensor([0]).to(device) - nonlocal_consistent_loss = nonlocal_consistent() if weight_nonlocal_consistent else torch.Tensor([0]).to(device) - - return consistent_loss + nonlocal_consistent_loss - - -### END### CONSISTENCY LOSS ##### - - -### START### SMOOTHNESS LOSS ### -def smoothness_loss_fn( - I_current_l, - I_current_lab, - I_current_ab_predict, - A_relu2_1, - weighted_layer_color, - nonlocal_weighted_layer, - weight_smoothness=5.0, - weight_nonlocal_smoothness=0.0, - device="cuda", -): - def smoothness(scale_factor=1.0): - I_current_lab_predict = torch.cat((I_current_l, I_current_ab_predict), dim=1) - IA_ab_weighed = weighted_layer_color( - I_current_lab, - I_current_lab_predict, - patch_size=3, - alpha=10, - scale_factor=scale_factor, - ) - smoothness_loss = ( - mse_loss_fn( - nn.functional.interpolate(I_current_ab_predict, scale_factor=scale_factor), - IA_ab_weighed, - ) - * weight_smoothness - ) - - return smoothness_loss - - def nonlocal_smoothness(scale_factor=0.25, alpha_nonlocal_smoothness=0.5): - nonlocal_smooth_feature = feature_normalize(A_relu2_1) - I_current_lab_predict = torch.cat((I_current_l, I_current_ab_predict), dim=1) - I_current_ab_weighted_nonlocal = nonlocal_weighted_layer( - I_current_lab_predict, - nonlocal_smooth_feature.detach(), - patch_size=3, - alpha=alpha_nonlocal_smoothness, - scale_factor=scale_factor, - ) - nonlocal_smoothness_loss = ( - mse_loss_fn( - nn.functional.interpolate(I_current_ab_predict, scale_factor=scale_factor), - I_current_ab_weighted_nonlocal, - ) - * weight_nonlocal_smoothness - ) - return nonlocal_smoothness_loss - - smoothness_loss = smoothness() if weight_smoothness else torch.Tensor([0]).to(device) - nonlocal_smoothness_loss = nonlocal_smoothness() if weight_nonlocal_smoothness else torch.Tensor([0]).to(device) - - return smoothness_loss + nonlocal_smoothness_loss - - -### END### SMOOTHNESS LOSS ##### diff --git 
a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/utils.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/utils.py deleted file mode 100644 index 71916816844020a3fe6f0d8d395031946098cabd..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/utils.py +++ /dev/null @@ -1,130 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. - -from __future__ import annotations - -import enum -import sys -import types -import typing -import warnings - - -# We use a UserWarning subclass, instead of DeprecationWarning, because CPython -# decided deprecation warnings should be invisble by default. -class CryptographyDeprecationWarning(UserWarning): - pass - - -# Several APIs were deprecated with no specific end-of-life date because of the -# ubiquity of their use. They should not be removed until we agree on when that -# cycle ends. -DeprecatedIn36 = CryptographyDeprecationWarning -DeprecatedIn37 = CryptographyDeprecationWarning -DeprecatedIn40 = CryptographyDeprecationWarning -DeprecatedIn41 = CryptographyDeprecationWarning - - -def _check_bytes(name: str, value: bytes) -> None: - if not isinstance(value, bytes): - raise TypeError(f"{name} must be bytes") - - -def _check_byteslike(name: str, value: bytes) -> None: - try: - memoryview(value) - except TypeError: - raise TypeError(f"{name} must be bytes-like") - - -def int_to_bytes(integer: int, length: typing.Optional[int] = None) -> bytes: - return integer.to_bytes( - length or (integer.bit_length() + 7) // 8 or 1, "big" - ) - - -def _extract_buffer_length(obj: typing.Any) -> typing.Tuple[typing.Any, int]: - from cryptography.hazmat.bindings._rust import _openssl - - buf = _openssl.ffi.from_buffer(obj) - return buf, int(_openssl.ffi.cast("uintptr_t", buf)) - - -class InterfaceNotImplemented(Exception): - pass - - -class _DeprecatedValue: - def __init__(self, value: object, message: str, warning_class): - self.value = value - self.message = message - self.warning_class = warning_class - - -class _ModuleWithDeprecations(types.ModuleType): - def __init__(self, module: types.ModuleType): - super().__init__(module.__name__) - self.__dict__["_module"] = module - - def __getattr__(self, attr: str) -> object: - obj = getattr(self._module, attr) - if isinstance(obj, _DeprecatedValue): - warnings.warn(obj.message, obj.warning_class, stacklevel=2) - obj = obj.value - return obj - - def __setattr__(self, attr: str, value: object) -> None: - setattr(self._module, attr, value) - - def __delattr__(self, attr: str) -> None: - obj = getattr(self._module, attr) - if isinstance(obj, _DeprecatedValue): - warnings.warn(obj.message, obj.warning_class, stacklevel=2) - - delattr(self._module, attr) - - def __dir__(self) -> typing.Sequence[str]: - return ["_module"] + dir(self._module) - - -def deprecated( - value: object, - module_name: str, - message: str, - warning_class: typing.Type[Warning], - name: typing.Optional[str] = None, -) -> _DeprecatedValue: - module = sys.modules[module_name] - if not isinstance(module, _ModuleWithDeprecations): - sys.modules[module_name] = module = _ModuleWithDeprecations(module) - dv = _DeprecatedValue(value, message, warning_class) - # Maintain backwards compatibility with `name is None` for pyOpenSSL. 
- if name is not None: - setattr(module, name, dv) - return dv - - -def cached_property(func: typing.Callable) -> property: - cached_name = f"_cached_{func}" - sentinel = object() - - def inner(instance: object): - cache = getattr(instance, cached_name, sentinel) - if cache is not sentinel: - return cache - result = func(instance) - setattr(instance, cached_name, result) - return result - - return property(inner) - - -# Python 3.10 changed representation of enums. We use well-defined object -# representation and string representation from Python 3.9. -class Enum(enum.Enum): - def __repr__(self) -> str: - return f"<{self.__class__.__name__}.{self._name_}: {self._value_!r}>" - - def __str__(self) -> str: - return f"{self.__class__.__name__}.{self._name_}" diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/opc/rel.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/opc/rel.py deleted file mode 100644 index 7dba2af8eef9c8a6949c76e03b0fd64047083952..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/opc/rel.py +++ /dev/null @@ -1,170 +0,0 @@ -# encoding: utf-8 - -""" -Relationship-related objects. -""" - -from __future__ import ( - absolute_import, division, print_function, unicode_literals -) - -from .oxml import CT_Relationships - - -class Relationships(dict): - """ - Collection object for |_Relationship| instances, having list semantics. - """ - def __init__(self, baseURI): - super(Relationships, self).__init__() - self._baseURI = baseURI - self._target_parts_by_rId = {} - - def add_relationship(self, reltype, target, rId, is_external=False): - """ - Return a newly added |_Relationship| instance. - """ - rel = _Relationship(rId, reltype, target, self._baseURI, is_external) - self[rId] = rel - if not is_external: - self._target_parts_by_rId[rId] = target - return rel - - def get_or_add(self, reltype, target_part): - """ - Return relationship of *reltype* to *target_part*, newly added if not - already present in collection. - """ - rel = self._get_matching(reltype, target_part) - if rel is None: - rId = self._next_rId - rel = self.add_relationship(reltype, target_part, rId) - return rel - - def get_or_add_ext_rel(self, reltype, target_ref): - """ - Return rId of external relationship of *reltype* to *target_ref*, - newly added if not already present in collection. - """ - rel = self._get_matching(reltype, target_ref, is_external=True) - if rel is None: - rId = self._next_rId - rel = self.add_relationship( - reltype, target_ref, rId, is_external=True - ) - return rel.rId - - def part_with_reltype(self, reltype): - """ - Return target part of rel with matching *reltype*, raising |KeyError| - if not found and |ValueError| if more than one matching relationship - is found. - """ - rel = self._get_rel_of_type(reltype) - return rel.target_part - - @property - def related_parts(self): - """ - dict mapping rIds to target parts for all the internal relationships - in the collection. - """ - return self._target_parts_by_rId - - @property - def xml(self): - """ - Serialize this relationship collection into XML suitable for storage - as a .rels file in an OPC package. 
- """ - rels_elm = CT_Relationships.new() - for rel in self.values(): - rels_elm.add_rel( - rel.rId, rel.reltype, rel.target_ref, rel.is_external - ) - return rels_elm.xml - - def _get_matching(self, reltype, target, is_external=False): - """ - Return relationship of matching *reltype*, *target*, and - *is_external* from collection, or None if not found. - """ - def matches(rel, reltype, target, is_external): - if rel.reltype != reltype: - return False - if rel.is_external != is_external: - return False - rel_target = rel.target_ref if rel.is_external else rel.target_part - if rel_target != target: - return False - return True - - for rel in self.values(): - if matches(rel, reltype, target, is_external): - return rel - return None - - def _get_rel_of_type(self, reltype): - """ - Return single relationship of type *reltype* from the collection. - Raises |KeyError| if no matching relationship is found. Raises - |ValueError| if more than one matching relationship is found. - """ - matching = [rel for rel in self.values() if rel.reltype == reltype] - if len(matching) == 0: - tmpl = "no relationship of type '%s' in collection" - raise KeyError(tmpl % reltype) - if len(matching) > 1: - tmpl = "multiple relationships of type '%s' in collection" - raise ValueError(tmpl % reltype) - return matching[0] - - @property - def _next_rId(self): - """ - Next available rId in collection, starting from 'rId1' and making use - of any gaps in numbering, e.g. 'rId2' for rIds ['rId1', 'rId3']. - """ - for n in range(1, len(self)+2): - rId_candidate = 'rId%d' % n # like 'rId19' - if rId_candidate not in self: - return rId_candidate - - -class _Relationship(object): - """ - Value object for relationship to part. - """ - def __init__(self, rId, reltype, target, baseURI, external=False): - super(_Relationship, self).__init__() - self._rId = rId - self._reltype = reltype - self._target = target - self._baseURI = baseURI - self._is_external = bool(external) - - @property - def is_external(self): - return self._is_external - - @property - def reltype(self): - return self._reltype - - @property - def rId(self): - return self._rId - - @property - def target_part(self): - if self._is_external: - raise ValueError("target_part property on _Relationship is undef" - "ined when target mode is External") - return self._target - - @property - def target_ref(self): - if self._is_external: - return self._target - else: - return self._target.partname.relative_ref(self._baseURI) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/text/parfmt.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/text/parfmt.py deleted file mode 100644 index 37206729cb4c9a2fa338e0e512d645c07345fb22..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/text/parfmt.py +++ /dev/null @@ -1,303 +0,0 @@ -# encoding: utf-8 - -""" -Paragraph-related proxy types. -""" - -from __future__ import ( - absolute_import, division, print_function, unicode_literals -) - -from ..enum.text import WD_LINE_SPACING -from ..shared import ElementProxy, Emu, lazyproperty, Length, Pt, Twips -from .tabstops import TabStops - - -class ParagraphFormat(ElementProxy): - """ - Provides access to paragraph formatting such as justification, - indentation, line spacing, space before and after, and widow/orphan - control. 
- """ - - __slots__ = ('_tab_stops',) - - @property - def alignment(self): - """ - A member of the :ref:`WdParagraphAlignment` enumeration specifying - the justification setting for this paragraph. A value of |None| - indicates paragraph alignment is inherited from the style hierarchy. - """ - pPr = self._element.pPr - if pPr is None: - return None - return pPr.jc_val - - @alignment.setter - def alignment(self, value): - pPr = self._element.get_or_add_pPr() - pPr.jc_val = value - - @property - def first_line_indent(self): - """ - |Length| value specifying the relative difference in indentation for - the first line of the paragraph. A positive value causes the first - line to be indented. A negative value produces a hanging indent. - |None| indicates first line indentation is inherited from the style - hierarchy. - """ - pPr = self._element.pPr - if pPr is None: - return None - return pPr.first_line_indent - - @first_line_indent.setter - def first_line_indent(self, value): - pPr = self._element.get_or_add_pPr() - pPr.first_line_indent = value - - @property - def keep_together(self): - """ - |True| if the paragraph should be kept "in one piece" and not broken - across a page boundary when the document is rendered. |None| - indicates its effective value is inherited from the style hierarchy. - """ - pPr = self._element.pPr - if pPr is None: - return None - return pPr.keepLines_val - - @keep_together.setter - def keep_together(self, value): - self._element.get_or_add_pPr().keepLines_val = value - - @property - def keep_with_next(self): - """ - |True| if the paragraph should be kept on the same page as the - subsequent paragraph when the document is rendered. For example, this - property could be used to keep a section heading on the same page as - its first paragraph. |None| indicates its effective value is - inherited from the style hierarchy. - """ - pPr = self._element.pPr - if pPr is None: - return None - return pPr.keepNext_val - - @keep_with_next.setter - def keep_with_next(self, value): - self._element.get_or_add_pPr().keepNext_val = value - - @property - def left_indent(self): - """ - |Length| value specifying the space between the left margin and the - left side of the paragraph. |None| indicates the left indent value is - inherited from the style hierarchy. Use an |Inches| value object as - a convenient way to apply indentation in units of inches. - """ - pPr = self._element.pPr - if pPr is None: - return None - return pPr.ind_left - - @left_indent.setter - def left_indent(self, value): - pPr = self._element.get_or_add_pPr() - pPr.ind_left = value - - @property - def line_spacing(self): - """ - |float| or |Length| value specifying the space between baselines in - successive lines of the paragraph. A value of |None| indicates line - spacing is inherited from the style hierarchy. A float value, e.g. - ``2.0`` or ``1.75``, indicates spacing is applied in multiples of - line heights. A |Length| value such as ``Pt(12)`` indicates spacing - is a fixed height. The |Pt| value class is a convenient way to apply - line spacing in units of points. Assigning |None| resets line spacing - to inherit from the style hierarchy. 
- """ - pPr = self._element.pPr - if pPr is None: - return None - return self._line_spacing(pPr.spacing_line, pPr.spacing_lineRule) - - @line_spacing.setter - def line_spacing(self, value): - pPr = self._element.get_or_add_pPr() - if value is None: - pPr.spacing_line = None - pPr.spacing_lineRule = None - elif isinstance(value, Length): - pPr.spacing_line = value - if pPr.spacing_lineRule != WD_LINE_SPACING.AT_LEAST: - pPr.spacing_lineRule = WD_LINE_SPACING.EXACTLY - else: - pPr.spacing_line = Emu(value * Twips(240)) - pPr.spacing_lineRule = WD_LINE_SPACING.MULTIPLE - - @property - def line_spacing_rule(self): - """ - A member of the :ref:`WdLineSpacing` enumeration indicating how the - value of :attr:`line_spacing` should be interpreted. Assigning any of - the :ref:`WdLineSpacing` members :attr:`SINGLE`, :attr:`DOUBLE`, or - :attr:`ONE_POINT_FIVE` will cause the value of :attr:`line_spacing` - to be updated to produce the corresponding line spacing. - """ - pPr = self._element.pPr - if pPr is None: - return None - return self._line_spacing_rule( - pPr.spacing_line, pPr.spacing_lineRule - ) - - @line_spacing_rule.setter - def line_spacing_rule(self, value): - pPr = self._element.get_or_add_pPr() - if value == WD_LINE_SPACING.SINGLE: - pPr.spacing_line = Twips(240) - pPr.spacing_lineRule = WD_LINE_SPACING.MULTIPLE - elif value == WD_LINE_SPACING.ONE_POINT_FIVE: - pPr.spacing_line = Twips(360) - pPr.spacing_lineRule = WD_LINE_SPACING.MULTIPLE - elif value == WD_LINE_SPACING.DOUBLE: - pPr.spacing_line = Twips(480) - pPr.spacing_lineRule = WD_LINE_SPACING.MULTIPLE - else: - pPr.spacing_lineRule = value - - @property - def page_break_before(self): - """ - |True| if the paragraph should appear at the top of the page - following the prior paragraph. |None| indicates its effective value - is inherited from the style hierarchy. - """ - pPr = self._element.pPr - if pPr is None: - return None - return pPr.pageBreakBefore_val - - @page_break_before.setter - def page_break_before(self, value): - self._element.get_or_add_pPr().pageBreakBefore_val = value - - @property - def right_indent(self): - """ - |Length| value specifying the space between the right margin and the - right side of the paragraph. |None| indicates the right indent value - is inherited from the style hierarchy. Use a |Cm| value object as - a convenient way to apply indentation in units of centimeters. - """ - pPr = self._element.pPr - if pPr is None: - return None - return pPr.ind_right - - @right_indent.setter - def right_indent(self, value): - pPr = self._element.get_or_add_pPr() - pPr.ind_right = value - - @property - def space_after(self): - """ - |Length| value specifying the spacing to appear between this - paragraph and the subsequent paragraph. |None| indicates this value - is inherited from the style hierarchy. |Length| objects provide - convenience properties, such as :attr:`~.Length.pt` and - :attr:`~.Length.inches`, that allow easy conversion to various length - units. - """ - pPr = self._element.pPr - if pPr is None: - return None - return pPr.spacing_after - - @space_after.setter - def space_after(self, value): - self._element.get_or_add_pPr().spacing_after = value - - @property - def space_before(self): - """ - |Length| value specifying the spacing to appear between this - paragraph and the prior paragraph. |None| indicates this value is - inherited from the style hierarchy. 
|Length| objects provide - convenience properties, such as :attr:`~.Length.pt` and - :attr:`~.Length.cm`, that allow easy conversion to various length - units. - """ - pPr = self._element.pPr - if pPr is None: - return None - return pPr.spacing_before - - @space_before.setter - def space_before(self, value): - self._element.get_or_add_pPr().spacing_before = value - - @lazyproperty - def tab_stops(self): - """ - |TabStops| object providing access to the tab stops defined for this - paragraph format. - """ - pPr = self._element.get_or_add_pPr() - return TabStops(pPr) - - @property - def widow_control(self): - """ - |True| if the first and last lines in the paragraph remain on the - same page as the rest of the paragraph when Word repaginates the - document. |None| indicates its effective value is inherited from the - style hierarchy. - """ - pPr = self._element.pPr - if pPr is None: - return None - return pPr.widowControl_val - - @widow_control.setter - def widow_control(self, value): - self._element.get_or_add_pPr().widowControl_val = value - - @staticmethod - def _line_spacing(spacing_line, spacing_lineRule): - """ - Return the line spacing value calculated from the combination of - *spacing_line* and *spacing_lineRule*. Returns a |float| number of - lines when *spacing_lineRule* is ``WD_LINE_SPACING.MULTIPLE``, - otherwise a |Length| object of absolute line height is returned. - Returns |None| when *spacing_line* is |None|. - """ - if spacing_line is None: - return None - if spacing_lineRule == WD_LINE_SPACING.MULTIPLE: - return spacing_line / Pt(12) - return spacing_line - - @staticmethod - def _line_spacing_rule(line, lineRule): - """ - Return the line spacing rule value calculated from the combination of - *line* and *lineRule*. Returns special members of the - :ref:`WdLineSpacing` enumeration when line spacing is single, double, - or 1.5 lines. 
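The mapping implemented below rests on Word's unit conventions: line heights are stored in twips (twentieths of a point), and a 12 pt line, i.e. 240 twips, is the single-spacing baseline. A small sketch of the same arithmetic:

    def lines_from_twips(twips):
        # 240 twips == Pt(12) == one single-spaced line
        return twips / 240.0

    assert lines_from_twips(240) == 1.0   # WD_LINE_SPACING.SINGLE
    assert lines_from_twips(360) == 1.5   # WD_LINE_SPACING.ONE_POINT_FIVE
    assert lines_from_twips(480) == 2.0   # WD_LINE_SPACING.DOUBLE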
- """ - if lineRule == WD_LINE_SPACING.MULTIPLE: - if line == Twips(240): - return WD_LINE_SPACING.SINGLE - if line == Twips(360): - return WD_LINE_SPACING.ONE_POINT_FIVE - if line == Twips(480): - return WD_LINE_SPACING.DOUBLE - return lineRule diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/DefaultTable.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/DefaultTable.py deleted file mode 100644 index 32a4b1f258f54d78ad39eb764867a6c354939743..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/DefaultTable.py +++ /dev/null @@ -1,50 +0,0 @@ -from fontTools.misc.textTools import Tag -from fontTools.ttLib import getClassTag - - -class DefaultTable(object): - - dependencies = [] - - def __init__(self, tag=None): - if tag is None: - tag = getClassTag(self.__class__) - self.tableTag = Tag(tag) - - def decompile(self, data, ttFont): - self.data = data - - def compile(self, ttFont): - return self.data - - def toXML(self, writer, ttFont, **kwargs): - if hasattr(self, "ERROR"): - writer.comment("An error occurred during the decompilation of this table") - writer.newline() - writer.comment(self.ERROR) - writer.newline() - writer.begintag("hexdata") - writer.newline() - writer.dumphex(self.compile(ttFont)) - writer.endtag("hexdata") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - from fontTools.misc.textTools import readHex - from fontTools import ttLib - - if name != "hexdata": - raise ttLib.TTLibError("can't handle '%s' element" % name) - self.decompile(readHex(content), ttFont) - - def __repr__(self): - return "<'%s' table at %x>" % (self.tableTag, id(self)) - - def __eq__(self, other): - if type(self) != type(other): - return NotImplemented - return self.__dict__ == other.__dict__ - - def __ne__(self, other): - result = self.__eq__(other) - return result if result is NotImplemented else not result diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/compiler/plugin_pb2.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/compiler/plugin_pb2.py deleted file mode 100644 index 3e3a36de677288d766c62d994e7b5fef354251de..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/compiler/plugin_pb2.py +++ /dev/null @@ -1,36 +0,0 @@ -# -*- coding: utf-8 -*- -# Generated by the protocol buffer compiler. DO NOT EDIT! 
-# source: google/protobuf/compiler/plugin.proto -"""Generated protocol buffer code.""" -from google.protobuf import descriptor as _descriptor -from google.protobuf import descriptor_pool as _descriptor_pool -from google.protobuf import symbol_database as _symbol_database -from google.protobuf.internal import builder as _builder -# @@protoc_insertion_point(imports) - -_sym_db = _symbol_database.Default() - - -from google.protobuf import descriptor_pb2 as google_dot_protobuf_dot_descriptor__pb2 - - -DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n%google/protobuf/compiler/plugin.proto\x12\x18google.protobuf.compiler\x1a google/protobuf/descriptor.proto\"c\n\x07Version\x12\x14\n\x05major\x18\x01 \x01(\x05R\x05major\x12\x14\n\x05minor\x18\x02 \x01(\x05R\x05minor\x12\x14\n\x05patch\x18\x03 \x01(\x05R\x05patch\x12\x16\n\x06suffix\x18\x04 \x01(\tR\x06suffix\"\xf1\x01\n\x14\x43odeGeneratorRequest\x12(\n\x10\x66ile_to_generate\x18\x01 \x03(\tR\x0e\x66ileToGenerate\x12\x1c\n\tparameter\x18\x02 \x01(\tR\tparameter\x12\x43\n\nproto_file\x18\x0f \x03(\x0b\x32$.google.protobuf.FileDescriptorProtoR\tprotoFile\x12L\n\x10\x63ompiler_version\x18\x03 \x01(\x0b\x32!.google.protobuf.compiler.VersionR\x0f\x63ompilerVersion\"\x94\x03\n\x15\x43odeGeneratorResponse\x12\x14\n\x05\x65rror\x18\x01 \x01(\tR\x05\x65rror\x12-\n\x12supported_features\x18\x02 \x01(\x04R\x11supportedFeatures\x12H\n\x04\x66ile\x18\x0f \x03(\x0b\x32\x34.google.protobuf.compiler.CodeGeneratorResponse.FileR\x04\x66ile\x1a\xb1\x01\n\x04\x46ile\x12\x12\n\x04name\x18\x01 \x01(\tR\x04name\x12\'\n\x0finsertion_point\x18\x02 \x01(\tR\x0einsertionPoint\x12\x18\n\x07\x63ontent\x18\x0f \x01(\tR\x07\x63ontent\x12R\n\x13generated_code_info\x18\x10 \x01(\x0b\x32\".google.protobuf.GeneratedCodeInfoR\x11generatedCodeInfo\"8\n\x07\x46\x65\x61ture\x12\x10\n\x0c\x46\x45\x41TURE_NONE\x10\x00\x12\x1b\n\x17\x46\x45\x41TURE_PROTO3_OPTIONAL\x10\x01\x42r\n\x1c\x63om.google.protobuf.compilerB\x0cPluginProtosZ)google.golang.org/protobuf/types/pluginpb\xaa\x02\x18Google.Protobuf.Compiler') - -_globals = globals() -_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals) -_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.compiler.plugin_pb2', _globals) -if _descriptor._USE_C_DESCRIPTORS == False: - - DESCRIPTOR._options = None - DESCRIPTOR._serialized_options = b'\n\034com.google.protobuf.compilerB\014PluginProtosZ)google.golang.org/protobuf/types/pluginpb\252\002\030Google.Protobuf.Compiler' - _globals['_VERSION']._serialized_start=101 - _globals['_VERSION']._serialized_end=200 - _globals['_CODEGENERATORREQUEST']._serialized_start=203 - _globals['_CODEGENERATORREQUEST']._serialized_end=444 - _globals['_CODEGENERATORRESPONSE']._serialized_start=447 - _globals['_CODEGENERATORRESPONSE']._serialized_end=851 - _globals['_CODEGENERATORRESPONSE_FILE']._serialized_start=616 - _globals['_CODEGENERATORRESPONSE_FILE']._serialized_end=793 - _globals['_CODEGENERATORRESPONSE_FEATURE']._serialized_start=795 - _globals['_CODEGENERATORRESPONSE_FEATURE']._serialized_end=851 -# @@protoc_insertion_point(module_scope) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/type_pb2.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/type_pb2.py deleted file mode 100644 index ca8a4e20eb1f72feb66261973848b3e16515fef5..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/type_pb2.py +++ 
/dev/null @@ -1,43 +0,0 @@ -# -*- coding: utf-8 -*- -# Generated by the protocol buffer compiler. DO NOT EDIT! -# source: google/protobuf/type.proto -"""Generated protocol buffer code.""" -from google.protobuf import descriptor as _descriptor -from google.protobuf import descriptor_pool as _descriptor_pool -from google.protobuf import symbol_database as _symbol_database -from google.protobuf.internal import builder as _builder -# @@protoc_insertion_point(imports) - -_sym_db = _symbol_database.Default() - - -from google.protobuf import any_pb2 as google_dot_protobuf_dot_any__pb2 -from google.protobuf import source_context_pb2 as google_dot_protobuf_dot_source__context__pb2 - - -DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1agoogle/protobuf/type.proto\x12\x0fgoogle.protobuf\x1a\x19google/protobuf/any.proto\x1a$google/protobuf/source_context.proto\"\xa7\x02\n\x04Type\x12\x12\n\x04name\x18\x01 \x01(\tR\x04name\x12.\n\x06\x66ields\x18\x02 \x03(\x0b\x32\x16.google.protobuf.FieldR\x06\x66ields\x12\x16\n\x06oneofs\x18\x03 \x03(\tR\x06oneofs\x12\x31\n\x07options\x18\x04 \x03(\x0b\x32\x17.google.protobuf.OptionR\x07options\x12\x45\n\x0esource_context\x18\x05 \x01(\x0b\x32\x1e.google.protobuf.SourceContextR\rsourceContext\x12/\n\x06syntax\x18\x06 \x01(\x0e\x32\x17.google.protobuf.SyntaxR\x06syntax\x12\x18\n\x07\x65\x64ition\x18\x07 \x01(\tR\x07\x65\x64ition\"\xb4\x06\n\x05\x46ield\x12/\n\x04kind\x18\x01 \x01(\x0e\x32\x1b.google.protobuf.Field.KindR\x04kind\x12\x44\n\x0b\x63\x61rdinality\x18\x02 \x01(\x0e\x32\".google.protobuf.Field.CardinalityR\x0b\x63\x61rdinality\x12\x16\n\x06number\x18\x03 \x01(\x05R\x06number\x12\x12\n\x04name\x18\x04 \x01(\tR\x04name\x12\x19\n\x08type_url\x18\x06 \x01(\tR\x07typeUrl\x12\x1f\n\x0boneof_index\x18\x07 \x01(\x05R\noneofIndex\x12\x16\n\x06packed\x18\x08 \x01(\x08R\x06packed\x12\x31\n\x07options\x18\t \x03(\x0b\x32\x17.google.protobuf.OptionR\x07options\x12\x1b\n\tjson_name\x18\n \x01(\tR\x08jsonName\x12#\n\rdefault_value\x18\x0b \x01(\tR\x0c\x64\x65\x66\x61ultValue\"\xc8\x02\n\x04Kind\x12\x10\n\x0cTYPE_UNKNOWN\x10\x00\x12\x0f\n\x0bTYPE_DOUBLE\x10\x01\x12\x0e\n\nTYPE_FLOAT\x10\x02\x12\x0e\n\nTYPE_INT64\x10\x03\x12\x0f\n\x0bTYPE_UINT64\x10\x04\x12\x0e\n\nTYPE_INT32\x10\x05\x12\x10\n\x0cTYPE_FIXED64\x10\x06\x12\x10\n\x0cTYPE_FIXED32\x10\x07\x12\r\n\tTYPE_BOOL\x10\x08\x12\x0f\n\x0bTYPE_STRING\x10\t\x12\x0e\n\nTYPE_GROUP\x10\n\x12\x10\n\x0cTYPE_MESSAGE\x10\x0b\x12\x0e\n\nTYPE_BYTES\x10\x0c\x12\x0f\n\x0bTYPE_UINT32\x10\r\x12\r\n\tTYPE_ENUM\x10\x0e\x12\x11\n\rTYPE_SFIXED32\x10\x0f\x12\x11\n\rTYPE_SFIXED64\x10\x10\x12\x0f\n\x0bTYPE_SINT32\x10\x11\x12\x0f\n\x0bTYPE_SINT64\x10\x12\"t\n\x0b\x43\x61rdinality\x12\x17\n\x13\x43\x41RDINALITY_UNKNOWN\x10\x00\x12\x18\n\x14\x43\x41RDINALITY_OPTIONAL\x10\x01\x12\x18\n\x14\x43\x41RDINALITY_REQUIRED\x10\x02\x12\x18\n\x14\x43\x41RDINALITY_REPEATED\x10\x03\"\x99\x02\n\x04\x45num\x12\x12\n\x04name\x18\x01 \x01(\tR\x04name\x12\x38\n\tenumvalue\x18\x02 \x03(\x0b\x32\x1a.google.protobuf.EnumValueR\tenumvalue\x12\x31\n\x07options\x18\x03 \x03(\x0b\x32\x17.google.protobuf.OptionR\x07options\x12\x45\n\x0esource_context\x18\x04 \x01(\x0b\x32\x1e.google.protobuf.SourceContextR\rsourceContext\x12/\n\x06syntax\x18\x05 \x01(\x0e\x32\x17.google.protobuf.SyntaxR\x06syntax\x12\x18\n\x07\x65\x64ition\x18\x06 \x01(\tR\x07\x65\x64ition\"j\n\tEnumValue\x12\x12\n\x04name\x18\x01 \x01(\tR\x04name\x12\x16\n\x06number\x18\x02 \x01(\x05R\x06number\x12\x31\n\x07options\x18\x03 
\x03(\x0b\x32\x17.google.protobuf.OptionR\x07options\"H\n\x06Option\x12\x12\n\x04name\x18\x01 \x01(\tR\x04name\x12*\n\x05value\x18\x02 \x01(\x0b\x32\x14.google.protobuf.AnyR\x05value*C\n\x06Syntax\x12\x11\n\rSYNTAX_PROTO2\x10\x00\x12\x11\n\rSYNTAX_PROTO3\x10\x01\x12\x13\n\x0fSYNTAX_EDITIONS\x10\x02\x42{\n\x13\x63om.google.protobufB\tTypeProtoP\x01Z-google.golang.org/protobuf/types/known/typepb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3') - -_globals = globals() -_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals) -_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.type_pb2', _globals) -if _descriptor._USE_C_DESCRIPTORS == False: - - DESCRIPTOR._options = None - DESCRIPTOR._serialized_options = b'\n\023com.google.protobufB\tTypeProtoP\001Z-google.golang.org/protobuf/types/known/typepb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes' - _globals['_SYNTAX']._serialized_start=1699 - _globals['_SYNTAX']._serialized_end=1766 - _globals['_TYPE']._serialized_start=113 - _globals['_TYPE']._serialized_end=408 - _globals['_FIELD']._serialized_start=411 - _globals['_FIELD']._serialized_end=1231 - _globals['_FIELD_KIND']._serialized_start=785 - _globals['_FIELD_KIND']._serialized_end=1113 - _globals['_FIELD_CARDINALITY']._serialized_start=1115 - _globals['_FIELD_CARDINALITY']._serialized_end=1231 - _globals['_ENUM']._serialized_start=1234 - _globals['_ENUM']._serialized_end=1515 - _globals['_ENUMVALUE']._serialized_start=1517 - _globals['_ENUMVALUE']._serialized_end=1623 - _globals['_OPTION']._serialized_start=1625 - _globals['_OPTION']._serialized_end=1697 -# @@protoc_insertion_point(module_scope) diff --git a/spaces/cihyFjudo/fairness-paper-search/ Asus N76v .md b/spaces/cihyFjudo/fairness-paper-search/ Asus N76v .md deleted file mode 100644 index f26b954749159eb3fa77b43781be7e69b8737574..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/ Asus N76v .md +++ /dev/null @@ -1,8 +0,0 @@ - -

This error is due to the service being started before asus-wmi could be loaded by the kernel (noted as kernel: battery: new extension: ASUS Battery Extension in the journal), making it impossible to write there.

-

The battery's charge_control_end_threshold power supply class attribute does not initially exist. It is added to the sysfs(5) directory by the asus-nb-wmi kernel module. Create a udev rule for asus-nb-wmi to set the battery's charge threshold:

-

Asus N76v laptop drivers (Драйвера Для Ноутбука Asus N76v)


Download Zip: https://tinurli.com/2uwkmd



-

Another, simpler way to force the charging threshold is to use bat-asus-battery-binAUR, which provides a bat-boot.service systemd service and an intuitive terminal interface for changing the threshold by typing

-

asusctlAUR (or asusctl-gitAUR) implements functionality specific to the ROG line of laptops, such as backlit keyboards, fan profiles, and the AniMe LED matrix. Check the project's official site for usage: -linux.org/

aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Artcam Pro 2010 SP4 Full Version Download and Install the Software in a Few Simple Steps.md b/spaces/cihyFjudo/fairness-paper-search/Artcam Pro 2010 SP4 Full Version Download and Install the Software in a Few Simple Steps.md deleted file mode 100644 index 5693666b76ad05e114c40a16df6a51cbc7a93987..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Artcam Pro 2010 SP4 Full Version Download and Install the Software in a Few Simple Steps.md +++ /dev/null @@ -1,6 +0,0 @@ -

artcam pro 2010 sp4 download full version


Download File: https://tinurli.com/2uwjpN



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/cihyFjudo/fairness-paper-search/Grand Theft Auto 4 The Lost And Damned NO-CD KEY-GEN UNLOCK.rar - Unlock the Full Potential of the Game.md b/spaces/cihyFjudo/fairness-paper-search/Grand Theft Auto 4 The Lost And Damned NO-CD KEY-GEN UNLOCK.rar - Unlock the Full Potential of the Game.md deleted file mode 100644 index ea8e0e8b2a78de0b1c0a0c59576dd85d2005ba51..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Grand Theft Auto 4 The Lost And Damned NO-CD KEY-GEN UNLOCK.rar - Unlock the Full Potential of the Game.md +++ /dev/null @@ -1,6 +0,0 @@ -

Grand Theft Auto 4: The Lost And Damned NO-CD KEY-GEN UNLOCK.rar


Download File ✦✦✦ https://tinurli.com/2uwkUk



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/cihyFjudo/fairness-paper-search/Klanghelm MJUC variable-tube compressor 1.4.1 VST AAX AU WIN.OSX x64 A tone shaper with unique TIMBRE and DRIVE knobs.md b/spaces/cihyFjudo/fairness-paper-search/Klanghelm MJUC variable-tube compressor 1.4.1 VST AAX AU WIN.OSX x64 A tone shaper with unique TIMBRE and DRIVE knobs.md deleted file mode 100644 index a37d3a27f34ff2c6802d8bc9a7628e587a4829ec..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Klanghelm MJUC variable-tube compressor 1.4.1 VST AAX AU WIN.OSX x64 A tone shaper with unique TIMBRE and DRIVE knobs.md +++ /dev/null @@ -1,6 +0,0 @@ -

Klanghelm – MJUC variable-tube compressor 1.4.1 VST, AAX, AU WIN.OSX x64


Download Zip: https://tinurli.com/2uwhDF



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/clip-italian/clip-italian-demo/configuration_hybrid_clip.py b/spaces/clip-italian/clip-italian-demo/configuration_hybrid_clip.py deleted file mode 100644 index 5272ac44a1a884eaf9b058c9e29729bfaec29a58..0000000000000000000000000000000000000000 --- a/spaces/clip-italian/clip-italian-demo/configuration_hybrid_clip.py +++ /dev/null @@ -1,112 +0,0 @@ -import copy - -from transformers.configuration_utils import PretrainedConfig -from transformers.utils import logging - - -logger = logging.get_logger(__name__) - - -class HybridCLIPConfig(PretrainedConfig): - r""" - :class:`HybridCLIPConfig` is the configuration class to store the configuration of a - :class:`~HybridCLIPModel`. It is used to instantiate a HybridCLIPModel according to the specified arguments, - defining the text model and vision model configs. - - Configuration objects inherit from :class:`~transformers.PretrainedConfig` and can be used to control the model - outputs. Read the documentation from :class:`~transformers.PretrainedConfig` for more information. - - Args: - text_config_dict (:obj:`dict`): - Dictionary of configuration options that defines the text model config. - vision_config_dict (:obj:`dict`): - Dictionary of configuration options that defines the vision model config. - projection_dim (:obj:`int`, `optional`, defaults to 512): - Dimensionality of the text and vision projection layers. - kwargs (`optional`): - Dictionary of keyword arguments. - - Examples:: - - >>> from transformers import BertConfig, CLIPConfig, HybridCLIPConfig, FlaxHybridCLIP - - >>> # Initializing a BERT and CLIP configuration - >>> config_text = BertConfig() - >>> config_vision = CLIPConfig() - - >>> config = HybridCLIPConfig.from_text_vision_configs(config_text, config_vision, projection_dim=512) - - >>> # Initializing a BERT and CLIPVision model - >>> model = FlaxHybridCLIP(config=config) - - >>> # Accessing the model configuration - >>> config_text = model.config.text_config - >>> config_vision = model.config.vision_config - - >>> # Saving the model, including its configuration - >>> model.save_pretrained('my-model') - - >>> # loading model and config from pretrained folder - >>> hybrid_config = HybridCLIPConfig.from_pretrained('my-model') - >>> model = FlaxHybridCLIP.from_pretrained('my-model', config=hybrid_config) - """ - - model_type = "hybrid-clip" - is_composition = True - - def __init__(self, projection_dim=512, **kwargs): - super().__init__(**kwargs) - - if "text_config" not in kwargs: - raise ValueError("`text_config` can not be `None`.") - - if "vision_config" not in kwargs: - raise ValueError("`vision_config` can not be `None`.") - - text_config = kwargs.pop("text_config") - vision_config = kwargs.pop("vision_config") - - text_model_type = text_config.pop("model_type") - vision_model_type = vision_config.pop("model_type") - - from transformers import AutoConfig - - self.text_config = AutoConfig.for_model(text_model_type, **text_config) - - if vision_model_type == "clip": - self.vision_config = AutoConfig.for_model(vision_model_type, **vision_config).vision_config - elif vision_model_type == "clip_vision_model": - from transformers import CLIPVisionConfig - - self.vision_config = CLIPVisionConfig(**vision_config) - else: - self.vision_config = AutoConfig.for_model(vision_model_type, **vision_config) - - self.projection_dim = projection_dim - self.initializer_factor = 1.0 - - @classmethod - def from_text_vision_configs(cls, text_config: PretrainedConfig, vision_config: PretrainedConfig, **kwargs): -
r""" - Instantiate a :class:`HybridCLIPConfig` (or a derived class) from text model configuration and - vision model configuration. - - Returns: - :class:`HybridCLIPConfig`: An instance of a configuration object - """ - - return cls(text_config=text_config.to_dict(), vision_config=vision_config.to_dict(), **kwargs) - - def to_dict(self): - """ - Serializes this instance to a Python dictionary. Override the default - :meth:`~transformers.PretrainedConfig.to_dict`. - - Returns: - :obj:`Dict[str, any]`: Dictionary of all the attributes that make up this configuration instance, - """ - output = copy.deepcopy(self.__dict__) - output["text_config"] = self.text_config.to_dict() - output["vision_config"] = self.vision_config.to_dict() - output["model_type"] = self.__class__.model_type - return output diff --git a/spaces/clip-italian/clip-italian-demo/modeling_hybrid_clip.py b/spaces/clip-italian/clip-italian-demo/modeling_hybrid_clip.py deleted file mode 100644 index 49cf0b4d99a87f63d6be51093a971c512f6f6055..0000000000000000000000000000000000000000 --- a/spaces/clip-italian/clip-italian-demo/modeling_hybrid_clip.py +++ /dev/null @@ -1,422 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -from typing import Optional, Tuple - -import flax.linen as nn -import jax -import jax.numpy as jnp -from configuration_hybrid_clip import HybridCLIPConfig -from flax.core.frozen_dict import FrozenDict -from transformers import FLAX_MODEL_MAPPING, FlaxCLIPVisionModel -from transformers.modeling_flax_utils import FlaxPreTrainedModel -from transformers.models.clip.modeling_flax_clip import FlaxCLIPOutput -from transformers.utils import logging - - -logger = logging.get_logger(__name__) - - -class FlaxHybridCLIPModule(nn.Module): - config: HybridCLIPConfig - dtype: jnp.dtype = jnp.float32 - - def setup(self): - text_config = self.config.text_config - vision_config = self.config.vision_config - - self.projection_dim = self.config.projection_dim - self.text_embed_dim = text_config.hidden_size - self.vision_embed_dim = vision_config.hidden_size - - text_module = FLAX_MODEL_MAPPING[self.config.text_config.__class__].module_class - vision_module = FLAX_MODEL_MAPPING.get(self.config.vision_config.__class__, FlaxCLIPVisionModel).module_class - - self.text_model = text_module(text_config, dtype=self.dtype) - self.vision_model = vision_module(vision_config, dtype=self.dtype) - - self.visual_projection = nn.Dense( - self.projection_dim, - dtype=self.dtype, - kernel_init=jax.nn.initializers.normal(0.02, dtype=self.dtype), - use_bias=False, - ) - self.text_projection = nn.Dense( - self.projection_dim, - dtype=self.dtype, - kernel_init=jax.nn.initializers.normal(0.02, dtype=self.dtype), - use_bias=False, - ) - self.logit_scale = self.param("logit_scale", jax.nn.initializers.ones, []) - - def __call__( - self, - input_ids=None, - pixel_values=None, - attention_mask=None, - position_ids=None, - token_type_ids=None, - deterministic: bool = True, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - ): - return_dict = return_dict if return_dict is not None else self.config.return_dict - - vision_outputs = self.vision_model( - pixel_values=pixel_values, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - text_outputs = self.text_model( - input_ids=input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - image_embeds = vision_outputs[1] - image_embeds = self.visual_projection(image_embeds) - - text_embeds = text_outputs[1] - text_embeds = self.text_projection(text_embeds) - - # normalized features - image_embeds = image_embeds / jnp.linalg.norm(image_embeds, axis=-1, keepdims=True) - text_embeds = text_embeds / jnp.linalg.norm(text_embeds, axis=-1, keepdims=True) - - # cosine similarity as logits - logit_scale = jnp.exp(self.logit_scale) - logits_per_text = jnp.matmul(text_embeds, image_embeds.T) * logit_scale - logits_per_image = logits_per_text.T - - if not return_dict: - return (logits_per_image, logits_per_text, text_embeds, image_embeds, text_outputs, vision_outputs) - - return FlaxCLIPOutput( - logits_per_image=logits_per_image, - logits_per_text=logits_per_text, - text_embeds=text_embeds, - image_embeds=image_embeds, - text_model_output=text_outputs, - vision_model_output=vision_outputs, - ) - - -class FlaxHybridCLIP(FlaxPreTrainedModel): - config_class = HybridCLIPConfig - module_class = FlaxHybridCLIPModule - - def __init__( - self, - config: HybridCLIPConfig, - input_shape: 
Optional[Tuple] = None, - seed: int = 0, - dtype: jnp.dtype = jnp.float32, - **kwargs - ): - if input_shape is None: - input_shape = ((1, 1), (1, config.vision_config.image_size, config.vision_config.image_size, 3)) - - print(kwargs) - - module = self.module_class(config=config, dtype=dtype) # , **kwargs) - super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype) - - def init_weights(self, rng: jax.random.PRNGKey, input_shape: Tuple) -> FrozenDict: - # init input tensor - input_ids = jnp.zeros(input_shape[0], dtype="i4") - position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_shape[0]) - token_type_ids = jnp.ones_like(input_ids) - attention_mask = jnp.ones_like(input_ids) - - pixel_values = jax.random.normal(rng, input_shape[1]) - - params_rng, dropout_rng = jax.random.split(rng) - rngs = {"params": params_rng, "dropout": dropout_rng} - - return self.module.init(rngs, input_ids, pixel_values, attention_mask, position_ids, token_type_ids)["params"] - - def __call__( - self, - input_ids, - pixel_values, - attention_mask=None, - position_ids=None, - token_type_ids=None, - params: dict = None, - dropout_rng: jax.random.PRNGKey = None, - train: bool = False, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ): - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.return_dict - - if position_ids is None: - position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_ids.shape) - - if token_type_ids is None: - token_type_ids = jnp.zeros_like(input_ids) - - if attention_mask is None: - attention_mask = jnp.ones_like(input_ids) - - # Handle any PRNG if needed - rngs = {} - if dropout_rng is not None: - rngs["dropout"] = dropout_rng - - return self.module.apply( - {"params": params or self.params}, - jnp.array(input_ids, dtype="i4"), - jnp.array(pixel_values, dtype=jnp.float32), - jnp.array(attention_mask, dtype="i4"), - jnp.array(position_ids, dtype="i4"), - jnp.array(token_type_ids, dtype="i4"), - not train, - output_attentions, - output_hidden_states, - return_dict, - rngs=rngs, - ) - - def get_text_features( - self, - input_ids, - attention_mask=None, - position_ids=None, - token_type_ids=None, - dropout_rng: jax.random.PRNGKey = None, - train=False, - ): - r""" - Args: - input_ids (:obj:`numpy.ndarray` of shape :obj:`(batch_size, sequence_length)`): - Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you - provide it. - - Indices can be obtained using :class:`~transformers.PreTrainedTokenizer`. See - :meth:`transformers.PreTrainedTokenizer.encode` and :meth:`transformers.PreTrainedTokenizer.__call__` - for details. - - `What are input IDs? <../glossary.html#input-ids>`__ - - Returns: - text_features (:obj:`jax_xla.DeviceArray` of shape :obj:`(batch_size, output_dim`): The text embeddings - obtained by applying the projection layer to the pooled output of text model. 
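Together with `get_image_features` below, this method yields the projected embeddings; scoring then mirrors the module's forward pass. A sketch, assuming `model` is a loaded FlaxHybridCLIP and `input_ids`, `pixel_values` come from a tokenizer and image processor (the learned `logit_scale` factor is omitted here):

    import jax.numpy as jnp

    text_emb = model.get_text_features(input_ids)
    image_emb = model.get_image_features(pixel_values)

    # L2-normalize, then cosine similarity, as in FlaxHybridCLIPModule.__call__
    text_emb = text_emb / jnp.linalg.norm(text_emb, axis=-1, keepdims=True)
    image_emb = image_emb / jnp.linalg.norm(image_emb, axis=-1, keepdims=True)
    similarity = jnp.matmul(text_emb, image_emb.T)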
- """ - if position_ids is None: - position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_ids.shape) - - if token_type_ids is None: - token_type_ids = jnp.zeros_like(input_ids) - - if attention_mask is None: - attention_mask = jnp.ones_like(input_ids) - - # Handle any PRNG if needed - rngs = {} - if dropout_rng is not None: - rngs["dropout"] = dropout_rng - - def _get_features(module, input_ids, attention_mask, position_ids, token_type_ids, deterministic): - text_outputs = module.text_model( - input_ids=input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - token_type_ids=token_type_ids, - deterministic=deterministic, - ) - pooled_output = text_outputs[1] - text_features = module.text_projection(pooled_output) - return text_features - - return self.module.apply( - {"params": self.params}, - jnp.array(input_ids, dtype="i4"), - jnp.array(attention_mask, dtype="i4"), - jnp.array(position_ids, dtype="i4"), - jnp.array(token_type_ids, dtype="i4"), - not train, - method=_get_features, - rngs=rngs, - ) - - def get_image_features(self, pixel_values, dropout_rng: jax.random.PRNGKey = None, train=False): - r""" - Args: - pixel_values (:obj:`numpy.ndarray` of shape :obj:`(batch_size, num_channels, height, width)`): - Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained - using :class:`~transformers.ImageFeatureExtractionMixin`. See - :meth:`transformers.ImageFeatureExtractionMixin.__call__` for details. - - Returns: - image_features (:obj:`jax_xla.DeviceArray` of shape :obj:`(batch_size, output_dim`): The image embeddings - obtained by applying the projection layer to the pooled output of vision model. - """ - - # Handle any PRNG if needed - rngs = {} - if dropout_rng is not None: - rngs["dropout"] = dropout_rng - - def _get_features(module, pixel_values, deterministic): - vision_outputs = module.vision_model(pixel_values=pixel_values, deterministic=deterministic) - pooled_output = vision_outputs[1] # pooled_output - image_features = module.visual_projection(pooled_output) - return image_features - - return self.module.apply( - {"params": self.params}, - jnp.array(pixel_values, dtype=jnp.float32), - not train, - method=_get_features, - rngs=rngs, - ) - - @classmethod - def from_text_vision_pretrained( - cls, - text_model_name_or_path: str = None, - vision_model_name_or_path: str = None, - *model_args, - **kwargs, - ) -> FlaxPreTrainedModel: - """ - Params: - text_model_name_or_path (:obj: `str`, `optional`): - Information necessary to initiate the text model. Can be either: - - - A string, the `model id` of a pretrained model hosted inside a model repo on huggingface.co. - Valid model ids can be located at the root-level, like ``bert-base-uncased``, or namespaced under - a user or organization name, like ``dbmdz/bert-base-german-cased``. - - A path to a `directory` containing model weights saved using - :func:`~transformers.FlaxPreTrainedModel.save_pretrained`, e.g., ``./my_model_directory/``. - - A path or url to a `PyTorch checkpoint folder` (e.g, ``./pt_model``). In - this case, ``from_pt`` should be set to :obj:`True` and a configuration object should be provided - as ``config`` argument. This loading path is slower than converting the PyTorch checkpoint in - a Flax model using the provided conversion scripts and loading the Flax model afterwards. - - vision_model_name_or_path (:obj: `str`, `optional`, defaults to `None`): - Information necessary to initiate the vision model. 
Can be either: - - A string, the `model id` of a pretrained model hosted inside a model repo on huggingface.co. - Valid model ids can be located at the root-level, like ``bert-base-uncased``, or namespaced under - a user or organization name, like ``dbmdz/bert-base-german-cased``. - - A path to a `directory` containing model weights saved using - :func:`~transformers.FlaxPreTrainedModel.save_pretrained`, e.g., ``./my_model_directory/``. - - A path or url to a `PyTorch checkpoint folder` (e.g, ``./pt_model``). In - this case, ``from_pt`` should be set to :obj:`True` and a configuration object should be provided - as ``config`` argument. This loading path is slower than converting the PyTorch checkpoint in - a Flax model using the provided conversion scripts and loading the Flax model afterwards. - - model_args (remaining positional arguments, `optional`): - All remaining positional arguments will be passed to the underlying model's ``__init__`` method. - - kwargs (remaining dictionary of keyword arguments, `optional`): - Can be used to update the configuration object (after it is loaded) and initiate the model (e.g., - :obj:`output_attentions=True`). - - - To update the text configuration, use the prefix `text_` for each configuration parameter. - - To update the vision configuration, use the prefix `vision_` for each configuration parameter. - - To update the parent model configuration, do not use a prefix for each configuration parameter. - - Behaves differently depending on whether a :obj:`config` is provided or automatically loaded. - - Example:: - - >>> from transformers import FlaxHybridCLIP - >>> # initialize a model from pretrained BERT and CLIP models. Note that the projection layers will be randomly initialized. - >>> # If using CLIP's vision model the vision projection layer will be initialized using pre-trained weights - >>> model = FlaxHybridCLIP.from_text_vision_pretrained('bert-base-uncased', 'openai/clip-vit-base-patch32') - >>> # saving model after fine-tuning - >>> model.save_pretrained("./bert-clip") - >>> # load fine-tuned model - >>> model = FlaxHybridCLIP.from_pretrained("./bert-clip") - """ - - kwargs_text = { - argument[len("text_") :]: value for argument, value in kwargs.items() if argument.startswith("text_") - } - - kwargs_vision = { - argument[len("vision_") :]: value for argument, value in kwargs.items() if argument.startswith("vision_") - } - - # remove text, vision kwargs from kwargs - for key in kwargs_text.keys(): - del kwargs["text_" + key] - for key in kwargs_vision.keys(): - del kwargs["vision_" + key] - - # Load and initialize the text and vision model - text_model = kwargs_text.pop("model", None) - if text_model is None: - assert ( - text_model_name_or_path is not None - ), "If `model` is not defined as an argument, a `text_model_name_or_path` has to be defined" - from transformers import FlaxAutoModel - - if "config" not in kwargs_text: - from transformers import AutoConfig - - text_config = AutoConfig.from_pretrained(text_model_name_or_path) - kwargs_text["config"] = text_config - - text_model = FlaxAutoModel.from_pretrained(text_model_name_or_path, *model_args, **kwargs_text) - - vision_model = kwargs_vision.pop("model", None) - if vision_model is None: - assert ( - vision_model_name_or_path is not None - ), "If `model` is not defined as an argument, a `vision_model_name_or_path` has to be defined" - from transformers import FlaxAutoModel - - if "config" not in kwargs_vision: - from transformers import AutoConfig - - vision_config =
AutoConfig.from_pretrained(vision_model_name_or_path) - kwargs_vision["config"] = vision_config - - vision_model = FlaxAutoModel.from_pretrained(vision_model_name_or_path, *model_args, **kwargs_vision) - - # instantiate config with corresponding kwargs - dtype = kwargs.pop("dtype", jnp.float32) - config = HybridCLIPConfig.from_text_vision_configs(text_model.config, vision_model.config, **kwargs) - - # init model - model = cls(config, *model_args, dtype=dtype, **kwargs) - - if vision_config.model_type == "clip": - model.params["vision_model"]["vision_model"] = vision_model.params["vision_model"] - model.params["visual_projection"]["kernel"] = vision_model.params["visual_projection"]["kernel"] - else: - model.params["vision_model"] = vision_model.params - - model.params["text_model"] = text_model.params - - return model diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/cffi/parse_c_type.h b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/cffi/parse_c_type.h deleted file mode 100644 index 84e4ef85659eb63e6453d8af9f024f1866182342..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/cffi/parse_c_type.h +++ /dev/null @@ -1,181 +0,0 @@ - -/* This part is from file 'cffi/parse_c_type.h'. It is copied at the - beginning of C sources generated by CFFI's ffi.set_source(). */ - -typedef void *_cffi_opcode_t; - -#define _CFFI_OP(opcode, arg) (_cffi_opcode_t)(opcode | (((uintptr_t)(arg)) << 8)) -#define _CFFI_GETOP(cffi_opcode) ((unsigned char)(uintptr_t)cffi_opcode) -#define _CFFI_GETARG(cffi_opcode) (((intptr_t)cffi_opcode) >> 8) - -#define _CFFI_OP_PRIMITIVE 1 -#define _CFFI_OP_POINTER 3 -#define _CFFI_OP_ARRAY 5 -#define _CFFI_OP_OPEN_ARRAY 7 -#define _CFFI_OP_STRUCT_UNION 9 -#define _CFFI_OP_ENUM 11 -#define _CFFI_OP_FUNCTION 13 -#define _CFFI_OP_FUNCTION_END 15 -#define _CFFI_OP_NOOP 17 -#define _CFFI_OP_BITFIELD 19 -#define _CFFI_OP_TYPENAME 21 -#define _CFFI_OP_CPYTHON_BLTN_V 23 // varargs -#define _CFFI_OP_CPYTHON_BLTN_N 25 // noargs -#define _CFFI_OP_CPYTHON_BLTN_O 27 // O (i.e. 
a single arg) -#define _CFFI_OP_CONSTANT 29 -#define _CFFI_OP_CONSTANT_INT 31 -#define _CFFI_OP_GLOBAL_VAR 33 -#define _CFFI_OP_DLOPEN_FUNC 35 -#define _CFFI_OP_DLOPEN_CONST 37 -#define _CFFI_OP_GLOBAL_VAR_F 39 -#define _CFFI_OP_EXTERN_PYTHON 41 - -#define _CFFI_PRIM_VOID 0 -#define _CFFI_PRIM_BOOL 1 -#define _CFFI_PRIM_CHAR 2 -#define _CFFI_PRIM_SCHAR 3 -#define _CFFI_PRIM_UCHAR 4 -#define _CFFI_PRIM_SHORT 5 -#define _CFFI_PRIM_USHORT 6 -#define _CFFI_PRIM_INT 7 -#define _CFFI_PRIM_UINT 8 -#define _CFFI_PRIM_LONG 9 -#define _CFFI_PRIM_ULONG 10 -#define _CFFI_PRIM_LONGLONG 11 -#define _CFFI_PRIM_ULONGLONG 12 -#define _CFFI_PRIM_FLOAT 13 -#define _CFFI_PRIM_DOUBLE 14 -#define _CFFI_PRIM_LONGDOUBLE 15 - -#define _CFFI_PRIM_WCHAR 16 -#define _CFFI_PRIM_INT8 17 -#define _CFFI_PRIM_UINT8 18 -#define _CFFI_PRIM_INT16 19 -#define _CFFI_PRIM_UINT16 20 -#define _CFFI_PRIM_INT32 21 -#define _CFFI_PRIM_UINT32 22 -#define _CFFI_PRIM_INT64 23 -#define _CFFI_PRIM_UINT64 24 -#define _CFFI_PRIM_INTPTR 25 -#define _CFFI_PRIM_UINTPTR 26 -#define _CFFI_PRIM_PTRDIFF 27 -#define _CFFI_PRIM_SIZE 28 -#define _CFFI_PRIM_SSIZE 29 -#define _CFFI_PRIM_INT_LEAST8 30 -#define _CFFI_PRIM_UINT_LEAST8 31 -#define _CFFI_PRIM_INT_LEAST16 32 -#define _CFFI_PRIM_UINT_LEAST16 33 -#define _CFFI_PRIM_INT_LEAST32 34 -#define _CFFI_PRIM_UINT_LEAST32 35 -#define _CFFI_PRIM_INT_LEAST64 36 -#define _CFFI_PRIM_UINT_LEAST64 37 -#define _CFFI_PRIM_INT_FAST8 38 -#define _CFFI_PRIM_UINT_FAST8 39 -#define _CFFI_PRIM_INT_FAST16 40 -#define _CFFI_PRIM_UINT_FAST16 41 -#define _CFFI_PRIM_INT_FAST32 42 -#define _CFFI_PRIM_UINT_FAST32 43 -#define _CFFI_PRIM_INT_FAST64 44 -#define _CFFI_PRIM_UINT_FAST64 45 -#define _CFFI_PRIM_INTMAX 46 -#define _CFFI_PRIM_UINTMAX 47 -#define _CFFI_PRIM_FLOATCOMPLEX 48 -#define _CFFI_PRIM_DOUBLECOMPLEX 49 -#define _CFFI_PRIM_CHAR16 50 -#define _CFFI_PRIM_CHAR32 51 - -#define _CFFI__NUM_PRIM 52 -#define _CFFI__UNKNOWN_PRIM (-1) -#define _CFFI__UNKNOWN_FLOAT_PRIM (-2) -#define _CFFI__UNKNOWN_LONG_DOUBLE (-3) - -#define _CFFI__IO_FILE_STRUCT (-1) - - -struct _cffi_global_s { - const char *name; - void *address; - _cffi_opcode_t type_op; - void *size_or_direct_fn; // OP_GLOBAL_VAR: size, or 0 if unknown - // OP_CPYTHON_BLTN_*: addr of direct function -}; - -struct _cffi_getconst_s { - unsigned long long value; - const struct _cffi_type_context_s *ctx; - int gindex; -}; - -struct _cffi_struct_union_s { - const char *name; - int type_index; // -> _cffi_types, on a OP_STRUCT_UNION - int flags; // _CFFI_F_* flags below - size_t size; - int alignment; - int first_field_index; // -> _cffi_fields array - int num_fields; -}; -#define _CFFI_F_UNION 0x01 // is a union, not a struct -#define _CFFI_F_CHECK_FIELDS 0x02 // complain if fields are not in the - // "standard layout" or if some are missing -#define _CFFI_F_PACKED 0x04 // for CHECK_FIELDS, assume a packed struct -#define _CFFI_F_EXTERNAL 0x08 // in some other ffi.include() -#define _CFFI_F_OPAQUE 0x10 // opaque - -struct _cffi_field_s { - const char *name; - size_t field_offset; - size_t field_size; - _cffi_opcode_t field_type_op; -}; - -struct _cffi_enum_s { - const char *name; - int type_index; // -> _cffi_types, on a OP_ENUM - int type_prim; // _CFFI_PRIM_xxx - const char *enumerators; // comma-delimited string -}; - -struct _cffi_typename_s { - const char *name; - int type_index; /* if opaque, points to a possibly artificial - OP_STRUCT which is itself opaque */ -}; - -struct _cffi_type_context_s { - _cffi_opcode_t *types; - const struct _cffi_global_s *globals; - 
const struct _cffi_field_s *fields; - const struct _cffi_struct_union_s *struct_unions; - const struct _cffi_enum_s *enums; - const struct _cffi_typename_s *typenames; - int num_globals; - int num_struct_unions; - int num_enums; - int num_typenames; - const char *const *includes; - int num_types; - int flags; /* future extension */ -}; - -struct _cffi_parse_info_s { - const struct _cffi_type_context_s *ctx; - _cffi_opcode_t *output; - unsigned int output_size; - size_t error_location; - const char *error_message; -}; - -struct _cffi_externpy_s { - const char *name; - size_t size_of_result; - void *reserved1, *reserved2; -}; - -#ifdef _CFFI_INTERNAL -static int parse_c_type(struct _cffi_parse_info_s *info, const char *input); -static int search_in_globals(const struct _cffi_type_context_s *ctx, - const char *search, size_t search_len); -static int search_in_struct_unions(const struct _cffi_type_context_s *ctx, - const char *search, size_t search_len); -#endif diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/aac.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/aac.h deleted file mode 100644 index cafa881fc7d244ec8e69a28fa445f9ee653f49f7..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/aac.h +++ /dev/null @@ -1,143 +0,0 @@ -/* - * Copyright (c) 2010 Mans Rullgard - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_ARM_AAC_H -#define AVCODEC_ARM_AAC_H - -#include "config.h" - -#if HAVE_NEON_INLINE - -#define VMUL2 VMUL2 -static inline float *VMUL2(float *dst, const float *v, unsigned idx, - const float *scale) -{ - unsigned v0, v1; - __asm__ ("ubfx %0, %6, #0, #4 \n\t" - "ubfx %1, %6, #4, #4 \n\t" - "ldr %0, [%5, %0, lsl #2] \n\t" - "ldr %1, [%5, %1, lsl #2] \n\t" - "vld1.32 {d1[]}, [%7,:32] \n\t" - "vmov d0, %0, %1 \n\t" - "vmul.f32 d0, d0, d1 \n\t" - "vst1.32 {d0}, [%2,:64]! \n\t" - : "=&r"(v0), "=&r"(v1), "+r"(dst), "=m"(dst[0]), "=m"(dst[1]) - : "r"(v), "r"(idx), "r"(scale) - : "d0", "d1"); - return dst; -} - -#define VMUL4 VMUL4 -static inline float *VMUL4(float *dst, const float *v, unsigned idx, - const float *scale) -{ - unsigned v0, v1, v2, v3; - __asm__ ("ubfx %0, %10, #0, #2 \n\t" - "ubfx %1, %10, #2, #2 \n\t" - "ldr %0, [%9, %0, lsl #2] \n\t" - "ubfx %2, %10, #4, #2 \n\t" - "ldr %1, [%9, %1, lsl #2] \n\t" - "ubfx %3, %10, #6, #2 \n\t" - "ldr %2, [%9, %2, lsl #2] \n\t" - "vmov d0, %0, %1 \n\t" - "ldr %3, [%9, %3, lsl #2] \n\t" - "vld1.32 {d2[],d3[]},[%11,:32] \n\t" - "vmov d1, %2, %3 \n\t" - "vmul.f32 q0, q0, q1 \n\t" - "vst1.32 {q0}, [%4,:128]! 
\n\t" - : "=&r"(v0), "=&r"(v1), "=&r"(v2), "=&r"(v3), "+r"(dst), - "=m"(dst[0]), "=m"(dst[1]), "=m"(dst[2]), "=m"(dst[3]) - : "r"(v), "r"(idx), "r"(scale) - : "d0", "d1", "d2", "d3"); - return dst; -} - -#define VMUL2S VMUL2S -static inline float *VMUL2S(float *dst, const float *v, unsigned idx, - unsigned sign, const float *scale) -{ - unsigned v0, v1, v2, v3; - __asm__ ("ubfx %0, %8, #0, #4 \n\t" - "ubfx %1, %8, #4, #4 \n\t" - "ldr %0, [%7, %0, lsl #2] \n\t" - "lsl %2, %10, #30 \n\t" - "ldr %1, [%7, %1, lsl #2] \n\t" - "lsl %3, %10, #31 \n\t" - "vmov d0, %0, %1 \n\t" - "bic %2, %2, #1<<30 \n\t" - "vld1.32 {d1[]}, [%9,:32] \n\t" - "vmov d2, %2, %3 \n\t" - "veor d0, d0, d2 \n\t" - "vmul.f32 d0, d0, d1 \n\t" - "vst1.32 {d0}, [%4,:64]! \n\t" - : "=&r"(v0), "=&r"(v1), "=&r"(v2), "=&r"(v3), "+r"(dst), - "=m"(dst[0]), "=m"(dst[1]) - : "r"(v), "r"(idx), "r"(scale), "r"(sign) - : "d0", "d1", "d2"); - return dst; -} - -#define VMUL4S VMUL4S -static inline float *VMUL4S(float *dst, const float *v, unsigned idx, - unsigned sign, const float *scale) -{ - unsigned v0, v1, v2, v3, nz; - __asm__ ("vld1.32 {d2[],d3[]},[%13,:32] \n\t" - "ubfx %0, %12, #0, #2 \n\t" - "ubfx %1, %12, #2, #2 \n\t" - "ldr %0, [%11,%0, lsl #2] \n\t" - "ubfx %2, %12, #4, #2 \n\t" - "ldr %1, [%11,%1, lsl #2] \n\t" - "ubfx %3, %12, #6, #2 \n\t" - "ldr %2, [%11,%2, lsl #2] \n\t" - "vmov d0, %0, %1 \n\t" - "ldr %3, [%11,%3, lsl #2] \n\t" - "lsr %6, %12, #12 \n\t" - "rbit %6, %6 \n\t" - "vmov d1, %2, %3 \n\t" - "lsls %6, %6, #1 \n\t" - "and %0, %5, #1<<31 \n\t" - "it cs \n\t" - "lslcs %5, %5, #1 \n\t" - "lsls %6, %6, #1 \n\t" - "and %1, %5, #1<<31 \n\t" - "it cs \n\t" - "lslcs %5, %5, #1 \n\t" - "lsls %6, %6, #1 \n\t" - "and %2, %5, #1<<31 \n\t" - "it cs \n\t" - "lslcs %5, %5, #1 \n\t" - "vmov d4, %0, %1 \n\t" - "and %3, %5, #1<<31 \n\t" - "vmov d5, %2, %3 \n\t" - "veor q0, q0, q2 \n\t" - "vmul.f32 q0, q0, q1 \n\t" - "vst1.32 {q0}, [%4,:128]! \n\t" - : "=&r"(v0), "=&r"(v1), "=&r"(v2), "=&r"(v3), "+r"(dst), - "+r"(sign), "=r"(nz), - "=m"(dst[0]), "=m"(dst[1]), "=m"(dst[2]), "=m"(dst[3]) - : "r"(v), "r"(idx), "r"(scale) - : "cc", "d0", "d1", "d2", "d3", "d4", "d5"); - return dst; -} - -#endif /* HAVE_NEON_INLINE */ - -#endif /* AVCODEC_ARM_AAC_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/videodsp_init_armv5te.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/videodsp_init_armv5te.c deleted file mode 100644 index eaa8c5bbf8ea24599d2ef96b4890ca2c6b7184a2..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/videodsp_init_armv5te.c +++ /dev/null @@ -1,33 +0,0 @@ -/* - * Copyright (C) 2012 Ronald S. Bultje - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "libavutil/attributes.h" -#include "libavutil/arm/cpu.h" -#include "libavcodec/videodsp.h" -#include "videodsp_arm.h" - -void ff_prefetch_arm(const uint8_t *mem, ptrdiff_t stride, int h); - -av_cold void ff_videodsp_init_armv5te(VideoDSPContext *ctx, int bpc) -{ -#if HAVE_ARMV5TE_EXTERNAL - ctx->prefetch = ff_prefetch_arm; -#endif -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/av1_frame_split_bsf.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/av1_frame_split_bsf.c deleted file mode 100644 index 5f6a40316cb58a9c3721a662565eb9330d57eebb..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/av1_frame_split_bsf.c +++ /dev/null @@ -1,261 +0,0 @@ -/* - * Copyright (c) 2019 James Almer - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * This bitstream filter splits AV1 Temporal Units into packets containing - * just one frame, plus any leading and trailing OBUs that may be present at - * the beginning or end, respectively. - * - * Temporal Units already containing only one frame will be passed through - * unchanged. When splitting can't be performed, the Temporal Unit will be - * passed through containing only the remaining OBUs starting from the first - * one after the last successfully split frame. 
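Conceptually, the filter walks the Temporal Unit's OBUs and cuts a packet each time a frame completes, letting the last frame absorb any trailing OBUs. A pure-Python paraphrase of that idea (not the FFmpeg API; `ends_a_frame` is a hypothetical predicate standing in for the Frame / Frame Header / last Tile Group checks in the code below):

    def split_temporal_unit(units, nb_frames):
        packets, start, seen = [], 0, 0
        for i, unit in enumerate(units):
            if unit.ends_a_frame:
                seen += 1
                if seen < nb_frames:          # the last frame keeps trailing OBUs
                    packets.append(units[start:i + 1])
                    start = i + 1
        packets.append(units[start:])         # last frame plus anything after it
        return packets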
- */ - -#include "libavutil/avassert.h" - -#include "bsf.h" -#include "bsf_internal.h" -#include "cbs.h" -#include "cbs_av1.h" - -typedef struct AV1FSplitContext { - AVPacket *buffer_pkt; - CodedBitstreamContext *cbc; - CodedBitstreamFragment temporal_unit; - - int nb_frames; - int cur_frame; - int cur_frame_idx; - int last_frame_idx; -} AV1FSplitContext; - -static int av1_frame_split_filter(AVBSFContext *ctx, AVPacket *out) -{ - AV1FSplitContext *s = ctx->priv_data; - CodedBitstreamFragment *td = &s->temporal_unit; - int i, ret; - int split = !!s->buffer_pkt->data; - - if (!s->buffer_pkt->data) { - int nb_frames = 0; - - ret = ff_bsf_get_packet_ref(ctx, s->buffer_pkt); - if (ret < 0) - return ret; - - ret = ff_cbs_read_packet(s->cbc, td, s->buffer_pkt); - if (ret < 0) { - av_log(ctx, AV_LOG_WARNING, "Failed to parse temporal unit.\n"); - goto passthrough; - } - - for (i = 0; i < td->nb_units; i++) { - CodedBitstreamUnit *unit = &td->units[i]; - - if (unit->type == AV1_OBU_FRAME || - unit->type == AV1_OBU_FRAME_HEADER) - nb_frames++; - else if (unit->type == AV1_OBU_TILE_LIST) { - av_log(ctx, AV_LOG_VERBOSE, "Large scale tiles are unsupported.\n"); - goto passthrough; - } - } - if (nb_frames > 1) { - s->cur_frame = 0; - s->cur_frame_idx = s->last_frame_idx = 0; - s->nb_frames = nb_frames; - split = 1; - } - } - - if (split) { - AV1RawFrameHeader *frame = NULL; - int cur_frame_type = -1, size = 0; - - for (i = s->cur_frame_idx; i < td->nb_units; i++) { - CodedBitstreamUnit *unit = &td->units[i]; - - size += unit->data_size; - if (unit->type == AV1_OBU_FRAME) { - AV1RawOBU *obu = unit->content; - - if (frame) { - av_log(ctx, AV_LOG_WARNING, "Frame OBU found when Tile data for a " - "previous frame was expected.\n"); - goto passthrough; - } - - frame = &obu->obu.frame.header; - cur_frame_type = obu->header.obu_type; - s->last_frame_idx = s->cur_frame_idx; - s->cur_frame_idx = i + 1; - s->cur_frame++; - - // split here unless it's the last frame, in which case - // include every trailing OBU - if (s->cur_frame < s->nb_frames) - break; - } else if (unit->type == AV1_OBU_FRAME_HEADER) { - AV1RawOBU *obu = unit->content; - - if (frame) { - av_log(ctx, AV_LOG_WARNING, "Frame Header OBU found when Tile data for a " - "previous frame was expected.\n"); - goto passthrough; - } - - frame = &obu->obu.frame_header; - cur_frame_type = obu->header.obu_type; - s->last_frame_idx = s->cur_frame_idx; - s->cur_frame++; - - // split here if show_existing_frame unless it's the last - // frame, in which case include every trailing OBU - if (frame->show_existing_frame && - s->cur_frame < s->nb_frames) { - s->cur_frame_idx = i + 1; - break; - } - } else if (unit->type == AV1_OBU_TILE_GROUP) { - AV1RawOBU *obu = unit->content; - AV1RawTileGroup *group = &obu->obu.tile_group; - - if (!frame || cur_frame_type != AV1_OBU_FRAME_HEADER) { - av_log(ctx, AV_LOG_WARNING, "Unexpected Tile Group OBU found before a " - "Frame Header.\n"); - goto passthrough; - } - - if ((group->tg_end == (frame->tile_cols * frame->tile_rows) - 1) && - // include every trailing OBU with the last frame - s->cur_frame < s->nb_frames) { - s->cur_frame_idx = i + 1; - break; - } - } - } - av_assert0(frame && s->cur_frame <= s->nb_frames); - - ret = av_packet_ref(out, s->buffer_pkt); - if (ret < 0) - goto fail; - - out->data = (uint8_t *)td->units[s->last_frame_idx].data; - out->size = size; - - // skip the frame in the buffer packet if it's split successfully, so it's not present - // if the packet is passed through in case of failure when splitting 
another frame. - s->buffer_pkt->data += size; - s->buffer_pkt->size -= size; - - if (!frame->show_existing_frame && !frame->show_frame) - out->pts = AV_NOPTS_VALUE; - - if (s->cur_frame == s->nb_frames) { - av_packet_unref(s->buffer_pkt); - ff_cbs_fragment_reset(td); - } - - return 0; - } - -passthrough: - av_packet_move_ref(out, s->buffer_pkt); - - ret = 0; -fail: - if (ret < 0) { - av_packet_unref(out); - av_packet_unref(s->buffer_pkt); - } - ff_cbs_fragment_reset(td); - - return ret; -} - -static const CodedBitstreamUnitType decompose_unit_types[] = { - AV1_OBU_TEMPORAL_DELIMITER, - AV1_OBU_SEQUENCE_HEADER, - AV1_OBU_FRAME_HEADER, - AV1_OBU_TILE_GROUP, - AV1_OBU_FRAME, -}; - -static int av1_frame_split_init(AVBSFContext *ctx) -{ - AV1FSplitContext *s = ctx->priv_data; - CodedBitstreamFragment *td = &s->temporal_unit; - int ret; - - s->buffer_pkt = av_packet_alloc(); - if (!s->buffer_pkt) - return AVERROR(ENOMEM); - - ret = ff_cbs_init(&s->cbc, AV_CODEC_ID_AV1, ctx); - if (ret < 0) - return ret; - - s->cbc->decompose_unit_types = decompose_unit_types; - s->cbc->nb_decompose_unit_types = FF_ARRAY_ELEMS(decompose_unit_types); - - if (!ctx->par_in->extradata_size) - return 0; - - ret = ff_cbs_read_extradata(s->cbc, td, ctx->par_in); - if (ret < 0) - av_log(ctx, AV_LOG_WARNING, "Failed to parse extradata.\n"); - - ff_cbs_fragment_reset(td); - - return 0; -} - -static void av1_frame_split_flush(AVBSFContext *ctx) -{ - AV1FSplitContext *s = ctx->priv_data; - - av_packet_unref(s->buffer_pkt); - ff_cbs_fragment_reset(&s->temporal_unit); -} - -static void av1_frame_split_close(AVBSFContext *ctx) -{ - AV1FSplitContext *s = ctx->priv_data; - - av_packet_free(&s->buffer_pkt); - ff_cbs_fragment_free(&s->temporal_unit); - ff_cbs_close(&s->cbc); -} - -static const enum AVCodecID av1_frame_split_codec_ids[] = { - AV_CODEC_ID_AV1, AV_CODEC_ID_NONE, -}; - -const FFBitStreamFilter ff_av1_frame_split_bsf = { - .p.name = "av1_frame_split", - .p.codec_ids = av1_frame_split_codec_ids, - .priv_data_size = sizeof(AV1FSplitContext), - .init = av1_frame_split_init, - .flush = av1_frame_split_flush, - .close = av1_frame_split_close, - .filter = av1_frame_split_filter, -}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dca.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dca.c deleted file mode 100644 index fb359b2ff3be1d252cc95dcc84ecc7b3876abf16..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dca.c +++ /dev/null @@ -1,157 +0,0 @@ -/* - * DCA compatible decoder data - * Copyright (C) 2004 Gildas Bazin - * Copyright (C) 2004 Benjamin Zores - * Copyright (C) 2006 Benjamin Larsson - * Copyright (C) 2007 Konstantin Shishkov - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include <stdint.h>
-#include <string.h>
-
-#include "libavutil/error.h"
-
-#include "dca.h"
-#include "dca_core.h"
-#include "dca_syncwords.h"
-#include "get_bits.h"
-#include "put_bits.h"
-
-const uint32_t ff_dca_sampling_freqs[16] = {
-      8000,  16000,  32000,  64000, 128000,  22050,  44100,  88200,
-    176400, 352800,  12000,  24000,  48000,  96000, 192000, 384000,
-};
-
-const uint8_t ff_dca_freq_ranges[16] = {
-    0, 1, 2, 3, 4, 1, 2, 3, 4, 4, 0, 1, 2, 3, 4, 4
-};
-
-const uint8_t ff_dca_bits_per_sample[8] = {
-    16, 16, 20, 20, 0, 24, 24, 0
-};
-
-int avpriv_dca_convert_bitstream(const uint8_t *src, int src_size, uint8_t *dst,
-                                 int max_size)
-{
-    uint32_t mrk;
-    int i, tmp;
-    PutBitContext pb;
-
-    if ((unsigned) src_size > (unsigned) max_size)
-        src_size = max_size;
-
-    mrk = AV_RB32(src);
-    switch (mrk) {
-    case DCA_SYNCWORD_CORE_BE:
-    case DCA_SYNCWORD_SUBSTREAM:
-        memcpy(dst, src, src_size);
-        return src_size;
-    case DCA_SYNCWORD_CORE_LE:
-        for (i = 0; i < (src_size + 1) >> 1; i++) {
-            AV_WB16(dst, AV_RL16(src));
-            src += 2;
-            dst += 2;
-        }
-        return src_size;
-    case DCA_SYNCWORD_CORE_14B_BE:
-    case DCA_SYNCWORD_CORE_14B_LE:
-        init_put_bits(&pb, dst, max_size);
-        for (i = 0; i < (src_size + 1) >> 1; i++, src += 2) {
-            tmp = ((mrk == DCA_SYNCWORD_CORE_14B_BE) ? AV_RB16(src) : AV_RL16(src)) & 0x3FFF;
-            put_bits(&pb, 14, tmp);
-        }
-        flush_put_bits(&pb);
-        return put_bytes_output(&pb);
-    default:
-        return AVERROR_INVALIDDATA;
-    }
-}
-
-int ff_dca_parse_core_frame_header(DCACoreFrameHeader *h, GetBitContext *gb)
-{
-    if (get_bits_long(gb, 32) != DCA_SYNCWORD_CORE_BE)
-        return DCA_PARSE_ERROR_SYNC_WORD;
-
-    h->normal_frame = get_bits1(gb);
-    h->deficit_samples = get_bits(gb, 5) + 1;
-    if (h->deficit_samples != DCA_PCMBLOCK_SAMPLES)
-        return DCA_PARSE_ERROR_DEFICIT_SAMPLES;
-
-    h->crc_present = get_bits1(gb);
-    h->npcmblocks = get_bits(gb, 7) + 1;
-    if (h->npcmblocks & (DCA_SUBBAND_SAMPLES - 1))
-        return DCA_PARSE_ERROR_PCM_BLOCKS;
-
-    h->frame_size = get_bits(gb, 14) + 1;
-    if (h->frame_size < 96)
-        return DCA_PARSE_ERROR_FRAME_SIZE;
-
-    h->audio_mode = get_bits(gb, 6);
-    if (h->audio_mode >= DCA_AMODE_COUNT)
-        return DCA_PARSE_ERROR_AMODE;
-
-    h->sr_code = get_bits(gb, 4);
-    if (!ff_dca_sample_rates[h->sr_code])
-        return DCA_PARSE_ERROR_SAMPLE_RATE;
-
-    h->br_code = get_bits(gb, 5);
-    if (get_bits1(gb))
-        return DCA_PARSE_ERROR_RESERVED_BIT;
-
-    h->drc_present = get_bits1(gb);
-    h->ts_present = get_bits1(gb);
-    h->aux_present = get_bits1(gb);
-    h->hdcd_master = get_bits1(gb);
-    h->ext_audio_type = get_bits(gb, 3);
-    h->ext_audio_present = get_bits1(gb);
-    h->sync_ssf = get_bits1(gb);
-    h->lfe_present = get_bits(gb, 2);
-    if (h->lfe_present == DCA_LFE_FLAG_INVALID)
-        return DCA_PARSE_ERROR_LFE_FLAG;
-
-    h->predictor_history = get_bits1(gb);
-    if (h->crc_present)
-        skip_bits(gb, 16);
-    h->filter_perfect = get_bits1(gb);
-    h->encoder_rev = get_bits(gb, 4);
-    h->copy_hist = get_bits(gb, 2);
-    h->pcmr_code = get_bits(gb, 3);
-    if (!ff_dca_bits_per_sample[h->pcmr_code])
-        return DCA_PARSE_ERROR_PCM_RES;
-
-    h->sumdiff_front = get_bits1(gb);
-    h->sumdiff_surround = get_bits1(gb);
-    h->dn_code = get_bits(gb, 4);
-    return 0;
-}
-
-int avpriv_dca_parse_core_frame_header(DCACoreFrameHeader *h, const uint8_t *buf, int size)
-{
-    GetBitContext gb;
-    int ret;
-
-    ret = init_get_bits8(&gb, buf, size);
-    if (ret <
0) - return ret; - - if (ff_dca_parse_core_frame_header(h, &gb) < 0) - return AVERROR_INVALIDDATA; - - return 0; -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libwebpenc.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libwebpenc.c deleted file mode 100644 index d6edd866037b6e1e6a0d0a8a93416dc7320768b3..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libwebpenc.c +++ /dev/null @@ -1,105 +0,0 @@ -/* - * WebP encoding support via libwebp - * Copyright (c) 2013 Justin Ruggles - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * WebP encoder using libwebp (WebPEncode API) - */ - -#include "codec_internal.h" -#include "encode.h" -#include "libwebpenc_common.h" - -typedef LibWebPContextCommon LibWebPContext; - -static av_cold int libwebp_encode_init(AVCodecContext *avctx) -{ - return ff_libwebp_encode_init_common(avctx); -} - -static int libwebp_encode_frame(AVCodecContext *avctx, AVPacket *pkt, - const AVFrame *frame, int *got_packet) -{ - LibWebPContext *s = avctx->priv_data; - WebPPicture *pic = NULL; - AVFrame *alt_frame = NULL; - WebPMemoryWriter mw = { 0 }; - - int ret = ff_libwebp_get_frame(avctx, s, frame, &alt_frame, &pic); - if (ret < 0) - goto end; - - WebPMemoryWriterInit(&mw); - pic->custom_ptr = &mw; - pic->writer = WebPMemoryWrite; - - ret = WebPEncode(&s->config, pic); - if (!ret) { - av_log(avctx, AV_LOG_ERROR, "WebPEncode() failed with error: %d\n", - pic->error_code); - ret = ff_libwebp_error_to_averror(pic->error_code); - goto end; - } - - ret = ff_get_encode_buffer(avctx, pkt, mw.size, 0); - if (ret < 0) - goto end; - memcpy(pkt->data, mw.mem, mw.size); - - *got_packet = 1; - -end: -#if (WEBP_ENCODER_ABI_VERSION > 0x0203) - WebPMemoryWriterClear(&mw); -#else - free(mw.mem); /* must use free() according to libwebp documentation */ -#endif - WebPPictureFree(pic); - av_freep(&pic); - av_frame_free(&alt_frame); - - return ret; -} - -static int libwebp_encode_close(AVCodecContext *avctx) -{ - LibWebPContextCommon *s = avctx->priv_data; - av_frame_free(&s->ref); - - return 0; -} - -const FFCodec ff_libwebp_encoder = { - .p.name = "libwebp", - CODEC_LONG_NAME("libwebp WebP image"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_WEBP, - .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE, - .p.pix_fmts = ff_libwebpenc_pix_fmts, - .p.priv_class = &ff_libwebpenc_class, - .p.wrapper_name = "libwebp", - .caps_internal = FF_CODEC_CAP_NOT_INIT_THREADSAFE, - .priv_data_size = sizeof(LibWebPContext), - .defaults = ff_libwebp_defaults, - .init = libwebp_encode_init, - FF_CODEC_ENCODE_CB(libwebp_encode_frame), - .close = libwebp_encode_close, -}; diff --git a/spaces/congsaPfin/Manga-OCR/logs/Experience Realistic Drifting and Racing in Car 2 
- Download Now.md b/spaces/congsaPfin/Manga-OCR/logs/Experience Realistic Drifting and Racing in Car 2 - Download Now.md deleted file mode 100644 index 876c8e86bdcdc0bfaf4e2ffde41b493859ac807a..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Experience Realistic Drifting and Racing in Car 2 - Download Now.md +++ /dev/null @@ -1,119 +0,0 @@ -
-

Racing in Car 2: A Realistic and Fun Driving Simulator

-

If you are looking for a racing game that gives you a first-person perspective of driving a car, then you should check out Racing in Car 2. This game lets you drive your car in cockpit view through the endless traffic and realistic environment. You can go as fast as possible, overtake traffic cars, earn coins and buy new cars. You can also compete with other players on the global leaderboards and become the king of the road.

-

racing in car 2 app download


Download File ——— https://urlca.com/2uO91F



-

In this article, we will tell you more about the features of Racing in Car 2, how to download and play it on your device, some tips and tricks to improve your driving skills, and some alternatives to this game that you might also enjoy. So, buckle up and get ready for some adrenaline-pumping action!

-

Features of Racing in Car 2

-

Racing in Car 2 is a game that offers you a realistic and fun driving experience. Here are some of the features that make this game stand out:

-

3D realistic cockpit view

-

Unlike most racing games that use a third-person perspective, Racing in Car 2 puts you in the driver's seat. You can see the dashboard, the steering wheel, the mirrors, and the road ahead of you. You can also switch between different camera angles to find the best view for you.

-

Endless game mode

-

Racing in Car 2 has an endless game mode that lets you drive as long as you want without any limits. You can choose from different locations such as city, desert, or snow. You can also adjust the traffic density and speed to suit your level of difficulty. The game will keep track of your distance, speed, time, and coins earned.

-

Different locations and cars to choose from

-

Racing in Car 2 has a variety of locations and cars to choose from. You can drive in different environments such as the city, desert, or snow. You can also unlock and buy new cars with different performance characteristics and looks. You can customize your car with body kits, rims, vinyls, and more.

-

Simulator-like controls

-

Racing in Car 2 has simulator-like controls that make you feel like you are driving a real car. You can use the tilt or touch steering option according to your preference. You can also use the accelerator and brake pedals to control your speed. The game also has realistic physics and sound effects that add to the immersion.

-

How to Download and Play Racing in Car 2 on Your Device

-

Racing in Car 2 is available for both Android and iOS devices. You can download it from the Google Play Store or the App Store for free. However, if you want to play it on your PC or Mac, you will need an emulator such as BlueStacks. Here are the steps to download and play Racing in Car 2 on your device:

-

racing in car 2 game download
-racing in car 2 apk download
-racing in car 2 mod apk download
-racing in car 2 free download
-racing in car 2 download for pc
-racing in car 2 download for android
-racing in car 2 download for ios
-racing in car 2 download for windows
-racing in car 2 download for mac
-racing in car 2 download for laptop
-racing in car 2 online play
-racing in car 2 offline play
-racing in car 2 gameplay
-racing in car 2 review
-racing in car 2 tips and tricks
-racing in car 2 cheats and hacks
-racing in car 2 best cars
-racing in car 2 new update
-racing in car 2 multiplayer mode
-racing in car 2 simulator game
-racing in car 2 cockpit view
-racing in car 2 realistic graphics
-racing in car 2 endless mode
-racing in car 2 different locations
-racing in car 2 traffic cars
-racing in car 2 earn coins
-racing in car 2 buy new cars
-racing in car 2 leaderboards
-racing in car 2 vs CarX Drift Racing 2
-CarX Drift Racing 2 app download
-CarX Drift Racing 2 game download
-CarX Drift Racing 2 apk download
-CarX Drift Racing 2 mod apk download
-CarX Drift Racing 2 free download
-CarX Drift Racing 2 download for pc
-CarX Drift Racing 2 download for android
-CarX Drift Racing 2 download for ios
-CarX Drift Racing 2 download for windows
-CarX Drift Racing 2 download for mac
-CarX Drift Racing 2 download for laptop
-CarX Drift Racing 2 online play
-CarX Drift Racing 2 offline play
-CarX Drift Racing 2 gameplay
-CarX Drift Racing 2 review
-CarX Drift Racing 2 tips and tricks
-CarX Drift Racing 2 cheats and hacks
-CarX Drift Racing 2 best cars
-CarX Drift Racing 2 new update
-CarX Drift Racing 2 multiplayer mode

-

Download from Google Play Store or App Store

-
    -
  • Open the Google Play Store or the App Store on your device and search for Racing in Car 2.
  • -
  • Tap on the game icon and then tap on Install or Get to download the game.
  • -
  • Wait for the game to finish downloading and then tap on Open or Launch to start the game.
  • -
  • Enjoy driving your car in cockpit view and overtaking traffic cars.
  • -
-

Download from BlueStacks Emulator

-
    -
  • Download and install BlueStacks Emulator on your PC or Mac from https://www.bluestacks.com/.
  • -
  • Launch BlueStacks and sign in with your Google account.
  • -
  • Go to the Google Play Store on BlueStacks and search for Racing in Car 2.
  • -
  • Click on the game icon and then click on Install to download the game.
  • -
  • Wait for the game to finish downloading and then click on Open to start the game.
  • -
  • Enjoy driving your car in cockpit view and overtaking traffic cars on a bigger screen.
  • -
-

Tips and Tricks for Racing in Car 2

-

Racing in Car 2 is a game that requires skill and strategy to master. Here are some tips and tricks that can help you improve your driving performance and score higher:

-

Overtake traffic cars to earn coins and bonuses

-

The main objective of Racing in Car 2 is to overtake as many traffic cars as possible without crashing. The more cars you overtake, the more coins and bonuses you earn. Coins can be used to buy new cars and upgrade your existing ones. Bonuses can give you extra speed, time, or coins. Try to overtake cars from a close distance and avoid hitting them to get more rewards.

-

Upgrade your car with performance and visual tuning

-

Racing in Car 2 allows you to upgrade your car with performance and visual tuning. Performance tuning can improve your car's speed, acceleration, handling, and braking. Visual tuning can change your car's body kit, rims, vinyls, and more. Upgrading your car can help you drive faster, smoother, and more stylishly.

-

Use the tilt or touch steering option according to your preference

-

Racing in Car 2 gives you two options to control your car: tilt or touch steering. Tilt steering uses the accelerometer of your device to steer your car by tilting it left or right. Touch steering uses buttons on the screen to steer your car by tapping them. You can choose the option that suits your preference and comfort level. You can also adjust the sensitivity of the steering in the settings menu.

-

Try different camera angles to find the best view

-

Racing in Car 2 has different camera angles that you can switch between during the game. You can use the cockpit view, the hood view, or the rear view. Each view has its own advantages and disadvantages. The cockpit view gives you a realistic feeling of driving a car, but it may also limit your visibility of the road. The hood view gives you a clear view of the road ahead, but it may also make you feel detached from the car. The rear view gives you a wider view of the road behind, but it may also make you lose focus of the road ahead. Try different camera angles to find the best view for you.

-

Alternatives to Racing in Car 2

-

If you like Racing in Car 2, you might also like some other racing games that offer similar or different features. Here are some alternatives to Racing in Car 2 that you can try:

-

CarX Drift Racing 2

-

If you are into drifting, then you should check out CarX Drift Racing 2. This game lets you drive powerful sports cars and perform amazing drifts on various tracks. You can customize your car with different parts, paint jobs, vinyls, and decals. You can also compete with other players online or offline in different modes such as solo run, tandem drift, or drift battles.

-

Real Racing 3

-

If you are into realistic racing, then you should check out Real Racing 3. This game lets you drive over 250 authentic cars from top manufacturers such as Ferrari, Porsche, Lamborghini, and more. You can race on over 40 real tracks from around the world such as Silverstone, Le Mans, Dubai Autodrome, and more. You can also challenge other players online or offline in different modes such as time trials, cup races, endurance races, or multiplayer races.

-

Asphalt 9: Legends

-

If you are into arcade racing, then you should check out Asphalt 9: Legends. This game lets you drive over 80 dream cars from top brands such as Ferrari, Lamborghini, Bugatti, and more. You can race on over 60 stunning tracks from around the world such as New York, Paris, Tokyo, and more. You can also perform amazing stunts and tricks such as barrel rolls, 360° spins, and nitro boosts.

-

Conclusion

-

Racing in Car 2 is a game that offers you a realistic and fun driving simulator. You can drive your car in cockpit view through the endless traffic and realistic environment. You can also customize your car with different performance and visual tuning. You can also compete with other players on the global leaderboards and become the king of the road.

-

If you are looking for a racing game that gives you a first-person perspective of driving a car, then you should download Racing in Car 2 today. You can download it from the Google Play Store or the App Store for free. You can also download it from BlueStacks Emulator if you want to play it on your PC or Mac.

-

So, what are you waiting for? Download Racing in Car 2 now and enjoy the thrill of driving a car in cockpit view!

-

FAQs

-
    -
  • Q: Is Racing in Car 2 free to play?
  • -
• A: Yes, Racing in Car 2 is free to play. However, it contains ads, which you can disable, and optional in-app purchases, which you can buy if you want.
  • -
  • Q: How can I earn more coins in Racing in Car 2?
  • -
  • A: You can earn more coins by overtaking traffic cars from a close distance, collecting bonuses, completing missions, and watching ads.
  • -
  • Q: How can I unlock new cars in Racing in Car 2?
  • -
  • A: You can unlock new cars by earning enough coins to buy them. You can also unlock some cars by completing certain missions or achievements.
  • -
  • Q: How can I play Racing in Car 2 with my friends?
  • -
  • A: You can play Racing in Car 2 with your friends by connecting your game to Facebook or Google Play Games. You can then see your friends' scores on the leaderboards and challenge them to beat your records.
  • -
  • Q: What are the minimum requirements to play Racing in Car 2 on my device?
  • -
• A: The minimum requirements to play Racing in Car 2 on your device are:
- Android: Android 4.4 or higher, 1 GB of RAM, and 100 MB of free storage space.
- iOS: iOS 9.0 or later, iPhone 5S or newer, iPad Air or newer, iPod touch (6th generation) or newer, and 200 MB of free storage space.
- PC or Mac: Windows 7 or higher, Mac OS X 10.11 or higher, Intel or AMD processor, 4 GB of RAM, and 500 MB of free storage space.
  • -

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Home Design Online Express Your Style with a Catalog of Branded Products.md b/spaces/congsaPfin/Manga-OCR/logs/Home Design Online Express Your Style with a Catalog of Branded Products.md deleted file mode 100644 index 221f19eb0e931011c0aeddce17121c258970366b..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Home Design Online Express Your Style with a Catalog of Branded Products.md +++ /dev/null @@ -1,183 +0,0 @@ -
-

Home Design Online: How to Create Your Dream House in 3D

-

Have you ever dreamed of designing your own house, but felt overwhelmed by the complexity and cost of traditional methods? Do you want to unleash your creativity and express your personal style in your home? If so, you might want to try home design online, a modern and easy way to create your dream house in 3D.

-

Introduction

-

What is home design online?

-

Home design online is the process of creating floor plans, layouts, furniture arrangements, decorations, and other aspects of a house using online software. Home design online software allows you to design your house in 2D or 3D, using a variety of tools and features. You can also edit colors, patterns, materials, sizes, and shapes of different items, as well as apply realistic lighting and shadows. Home design online software can help you create realistic images of your project, as well as share it online with others.

-

home design online


Download Zip ——— https://urlca.com/2uO9MY



-

Why use home design online?

-

There are many benefits of using home design online software, such as:

-
    -
  • It is easy and intuitive. You don't need any professional skills or experience to use home design online software. You can simply drag and drop items, adjust settings, and switch between views. You can also access tutorials and instructions if you need help.
  • -
  • It is flexible and customizable. You can design your house according to your preferences and needs. You can choose from a wide range of branded products, or create your own custom items. You can also change the dimensions, colors, textures, and styles of any item.
  • -
  • It is affordable and convenient. You don't need to spend money on hiring an architect, a designer, or a contractor. You also don't need to buy any materials or tools. You can design your house from the comfort of your own home, at any time and pace.
  • -
• It is fun and rewarding. You can enjoy the creative process of designing your house, as well as the satisfaction of seeing your vision come to life. You can also share your project with others, and get feedback and suggestions.
  • -
-

How to get started with home design online?

-

To start designing your house online, you will need to choose a home design online software that suits your needs and preferences. There are many options available on the market, each with its own features, pros and cons, pricing and plans. In this article, we will review three of the best home design online software: Planner 5D, HomeByMe, and RoomSketcher.

-

Best Home Design Online Software

-

Planner 5D

-

Planner 5D is one of the most popular home design online software in the world. It has over 91 million users, who have created over 70 million projects. Planner 5D allows you to create floor plans in 2D or 3D, as well as furnish and decorate your house with over 5000 items.

-

Features

-
    -
  • You can use the 2D mode to create floor plans and layouts with simple and intuitive tools. You can draw walls, doors, windows, stairs, and other elements. You can also import existing floor plans from images or PDF files.
  • -
  • You can use the 3D mode to view your project from different angles and perspectives. You can also apply realistic lighting and shadows, as well as adjust the time of day and the weather.
  • -
  • You can furnish and decorate your house with over 5000 items, including furniture, appliances, accessories, plants, and more. You can also customize the colors, patterns, materials, sizes, and shapes of any item.
  • -
  • You can create your own items using the 3D editor. You can import models from other sources, or create them from scratch. You can also edit the textures, colors, and dimensions of your items.
  • -
  • You can share your project online with other users, or download it as an image or a video. You can also export your project as a PDF file or a DWG file.
  • -
-

Pros and cons

-
    -
  • Pros:
      -
    • It is easy to use and has a user-friendly interface.
    • -
    • It has a large and diverse catalog of items.
    • -
    • It has a powerful 3D editor that allows you to create your own items.
    • -
    • It has realistic rendering and animation options.
    • -
    • It is compatible with multiple devices and platforms, including web browsers, Windows, Mac, iOS, Android, and VR.
    • -
    -
  • -
  • Cons:
      -
    • It requires an internet connection to access all the features and items.
    • -
    • It has some limitations in the free version, such as the number of projects, items, and renders you can create.
    • -
    • It has some bugs and glitches that may affect the performance and quality of your project.
    • -
    -
  • -
-

Pricing and plans

-

Planner 5D offers a free version that allows you to create up to 3 projects with limited items and features. You can also upgrade to a premium version that gives you unlimited access to all the features and items for a monthly or yearly fee. The premium version costs $6.99 per month or $29.99 per year. You can also purchase additional items or renders separately.

-

HomeByMe

-

HomeByMe is another popular home design online software that lets you create floor plans in 2D or 3D, as well as furnish and decorate your house with over 20,000 items. HomeByMe allows you to design your house in a realistic and immersive way, using high-quality renders and 360° views.

-

Features

-
    -
  • You can use the 2D mode to create floor plans and layouts with simple and precise tools. You can draw walls, doors, windows, stairs, and other elements. You can also import existing floor plans from images or PDF files.
  • -
  • You can use the 3D mode to view your project from different angles and perspectives. You can also apply realistic lighting and shadows, as well as adjust the time of day and the weather.
  • -
  • You can furnish and decorate your house with over 20,000 items, including furniture, appliances, accessories, plants, and more. You can choose from a wide range of branded products, or create your own custom items. You can also customize the colors, patterns, materials, sizes, and shapes of any item.
  • -
  • You can create high-quality renders of your project in HD or 4K resolution. You can also create 360° views that allow you to explore your project in a virtual reality mode.
  • -
  • You can share your project online with other users, or download it as an image or a video. You can also export your project as a PDF file or a DWG file.
  • -
-

Pros and cons

-
    -
  • Pros:
      -
    • It is easy to use and has a user-friendly interface.
    • -
    • It has a large and diverse catalog of items.
    • -
    • It has realistic rendering and animation options.
    • -
    • It has a virtual reality mode that allows you to experience your project in an immersive way.
    • -
    -
  • -
  • Cons:
      -
    • It requires an internet connection to access all the features and items.
    • -
    • It has some limitations in the free version, such as the number of projects, items, renders, and 360° views you can create.
    • -
    • It has some bugs and glitches that may affect the performance and quality of your project.
    • -
    -
  • -
-

Pricing and plans

-

HomeByMe offers a free version that allows you to create up to 3 projects with limited items and features. You can also upgrade to a premium version that gives you unlimited access to all the features and items for a monthly or yearly fee. The premium version costs $14.99 per month or $119.88 per year. You can also purchase additional items, renders, or 360° views separately.

-

home design online free
-home design online tool
-home design online 3d
-home design online software
-home design online app
-home design online game
-home design online course
-home design online planner
-home design online magazine
-home design online store
-home design online consultation
-home design online classes
-home design online program
-home design online tutorial
-home design online service
-home design online shop
-home design online simulator
-home design online platform
-home design online inspiration
-home design online community
-home design online rendering
-home design online degree
-home design online portfolio
-home design online quiz
-home design online blog
-home design online challenge
-home design online certification
-home design online reviews
-home design online ideas
-home design online tips
-home design online trends
-home design online projects
-home design online videos
-home design online books
-home design online courses free
-home design online tools free
-home design online 3d free
-home design online software free
-home design online app free
-home design online game free
-best home design online software
-best home design online app
-best home design online game
-best home design online tool
-best home design online 3d
-best home design online free
-best home design online service
-best home design online platform

-

RoomSketcher

-

RoomSketcher is another home design online software that enables you to create floor plans in 2D or 3D, as well as furnish and decorate your house with over 10,000 items. RoomSketcher allows you to design your house in a simple and fun way, using interactive features and tools.

-

Features

-
    -
  • You can use the 2D mode to create floor plans and layouts with simple and intuitive tools. You can draw walls, doors, windows, stairs, and other elements. You can also import existing floor plans from images or PDF files.
  • -
  • You can use the 3D mode to view your project from different angles and perspectives. You can also apply realistic lighting and shadows, as well as adjust the time of day and the weather.
  • -
  • You can furnish and decorate your house with over 10,000 items, including furniture, appliances, accessories, plants, and more. You can also customize the colors, patterns, materials, sizes, and shapes of any item.
  • -
  • You can create interactive floor plans that allow you to walk through your project in a virtual reality mode. You can also create live 3D floor plans that allow you to view your project in real time.
  • -
  • You can share your project online with other users, or download it as an image or a video. You can also export your project as a PDF file or a DWG file.
  • -
-

Pros and cons

-
    -
  • Pros:
      -
    • It is easy to use and has a user-friendly interface.
    • -
    • It has a large and diverse catalog of items.
    • -
    • It has interactive and live 3D floor plans that allow you to experience your project in a dynamic way.
    • -
    • It is compatible with multiple devices and platforms, including web browsers, Windows, Mac, iOS, Android, and VR.
    • -
    -
  • -
  • Cons:
      -
    • It requires an internet connection to access all the features and items.
    • -
    • It has some limitations in the free version, such as the number of projects, items, renders, and live 3D floor plans you can create.
    • -
    • It has some bugs and glitches that may affect the performance and quality of your project.
    • -
    -
  • -
-

Pricing and plans

-

RoomSketcher offers a free version that allows you to create up to 5 projects with limited items and features. You can also upgrade to a premium version that gives you unlimited access to all the features and items for a monthly or yearly fee. The premium version costs $49 per year for personal use or $99 per year for professional use. You can also purchase additional items or renders separately.

-

Tips and Tricks for Home Design Online

-

To make the most out of your home design online experience, here are some tips and tricks that you can follow:

-

Choose the right software for your needs

-

Before you start designing your house online, you should consider your needs and preferences. Do you want a simple or a complex program? A free or a paid one? A realistic or a stylized look? One with many features and items, or just the basics? Does it need to be compatible with your device and platform? And do you want to share your project online, or keep it offline?

-

To help you choose the right software for your needs, you can compare different options based on their features, pros and cons, pricing and plans. You can also read reviews from other users, watch tutorials and demos, or try out free versions before you buy.

-

Plan your layout and design in 2D first

-

Before you jump into the 3D mode of your home design online software, you should plan your layout and design in 2D first. This will help you create a clear and accurate floor plan of your house, as well as arrange the furniture and other items in a logical and functional way. You can use the 2D mode of your home design online software to draw the walls, doors, windows, stairs, and other elements of your house. You can also import an existing floor plan from an image or a PDF file, or use a template or a sample project. You can then drag and drop items from the catalog onto your floor plan, and adjust their positions, orientations, and dimensions. You can also add labels, dimensions, and notes to your floor plan, as well as change the scale and the units.

-

Express your style with branded products and custom colors

-

One of the advantages of home design online software is that you can express your personal style and taste in your house. You can choose from a wide range of branded products that are available in the catalog of your home design online software, such as IKEA, Pottery Barn, West Elm, and more. You can also create your own custom items using the 3D editor or the color picker. You can change the colors, patterns, materials, sizes, and shapes of any item in your house, as well as apply different finishes and effects. You can also mix and match different styles and themes, such as modern, rustic, vintage, or eclectic. You can also add some personal touches, such as photos, artworks, or souvenirs.

-

Use renders and 3D views to visualize your project

-

Another benefit of home design online software is that you can visualize your project in a realistic and immersive way. You can use the 3D mode of your home design online software to view your project from different angles and perspectives, apply realistic lighting and shadows, and adjust the time of day and the weather. You can also create high-quality renders of your project in HD or 4K resolution, as well as 360° views or interactive floor plans that allow you to walk through your project in a virtual reality mode. These features will help you see what your project will look like in real life, as well as spot any errors or areas for improvement.

-

Share your project online and get feedback

-

The final step of home design online is to share your project online with others. You can use the sharing options of your home design online software to upload your project to its website or app, or to social media platforms such as Facebook, Instagram, Pinterest, or YouTube. You can also download your project as an image or a video, or export it as a PDF file or a DWG file. You can then share your project with your friends, family, or clients, and get feedback and suggestions. You can also browse other users' projects and get inspired by their ideas.

-

Conclusion

-

In conclusion, home design online is a modern and easy way to create your dream house in 3D. You can use home design online software to create floor plans in 2D or 3D, as well as furnish and decorate your house with thousands of items. You can also customize the colors, patterns, materials, sizes, and shapes of any item in your house. You can also create realistic images of your project, as well as share it online with others.

-

If you want to try home design online software for yourself, you can choose one of the three best home design online tools that we reviewed in this article: Planner 5D, HomeByMe, or RoomSketcher. You can also follow the tips and tricks that we shared to make the most out of your home design online experience. We hope that this article has helped you to learn more about home design online and inspired you to create your own dream house in 3D.

-

Here are some FAQs that you might have about home design online:

-

FAQs

-
    -
  1. What are the advantages of home design online over traditional methods?
  2. -

    Home design online has many advantages over traditional methods, such as being easy, flexible, affordable, convenient, fun, and rewarding. You don't need any professional skills or experience to use home design online software. You can also design your house according to your preferences and needs, without spending money on hiring an architect, a designer, or a contractor. You can also enjoy the creative process of designing your house, as well as the satisfaction of seeing your vision come to life.

    -
  3. What are the disadvantages of home design online?
  4. -

    Home design online also has some disadvantages, such as requiring an internet connection, having some limitations in the free version, and having some bugs and glitches. You will need an internet connection to access all the features and items of your home design online software. You will also have some restrictions in the number of projects, items, and renders you can create in the free version. You may also encounter some errors or problems that may affect the performance and quality of your project.

    -
  5. How can I choose the best home design online software for me?
  6. -

    To choose the best home design online software for you, you should consider your needs and preferences. You should compare different options based on their features, pros and cons, pricing and plans. You should also read reviews from other users, watch tutorials and demos, or try out free versions before you buy.

    -
  7. How can I improve my home design online skills?
  8. -

To improve your home design online skills, you should practice and experiment with different tools and features of your home design online software. You should also learn from other users' projects and get feedback and suggestions. You should also follow some tips and tricks that we shared in this article, such as planning your layout and design in 2D first, expressing your style with branded products and custom colors, using renders and 3D views to visualize your project, and sharing your project online.

    -
  9. Where can I find more resources and inspiration for home design online?
  10. -

You can find more resources and inspiration for home design online on the website or app of your home design online software. You can also find them on social media platforms such as Facebook, Instagram, Pinterest, or YouTube, as well as in blogs, magazines, and books, or in podcasts related to home design.

    -

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Solo Piano Music Royalty Free Download - Pixabay.md b/spaces/congsaPfin/Manga-OCR/logs/Solo Piano Music Royalty Free Download - Pixabay.md deleted file mode 100644 index 85903650a677242d8c983cc8c6bdba61b1f6da3d..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Solo Piano Music Royalty Free Download - Pixabay.md +++ /dev/null @@ -1,100 +0,0 @@ - -

Piano Free Download Music: How to Enjoy Beautiful Piano Music Without Paying a Dime

-

Do you love piano music? Do you wish you could listen to it anytime and anywhere without spending any money? If so, you are in luck. There is a way to enjoy beautiful piano music without paying a dime. It is called piano free download music.

-

piano free download music


Download Zip ———>>> https://urlca.com/2uOekD



-

Piano free download music is music that you can download from the internet for free. You can find thousands of piano tracks in various genres, moods, styles, and lengths. You can use them for your personal or commercial projects, as long as you follow the license and attribution rules.

-

In this article, we will show you how to find and download piano free download music from different sources. We will also tell you about the benefits of listening to piano music and how it can enhance your life. Let's get started.

-

The Benefits of Piano Free Download Music

-

Piano music is one of the most popular and versatile types of music in the world. It can express a wide range of emotions, from joy and happiness to sadness and sorrow. It can also inspire you, relax you, educate you, and entertain you. Here are some of the benefits of listening to piano free download music:

-
    -
  • Relaxation: Piano music can help you relax and reduce stress. It can lower your blood pressure, heart rate, and cortisol levels. It can also improve your mood and sleep quality. Piano music can be especially soothing when you are feeling anxious, depressed, or overwhelmed.
  • -
  • Inspiration: Piano music can stimulate your creativity and imagination. It can help you generate new ideas, solve problems, and express yourself. Piano music can also enhance your memory, concentration, and learning abilities.
  • -
  • Education: Piano music can teach you about musical theory and history. You can learn about different musical elements, such as melody, harmony, rhythm, tempo, dynamics, and timbre. You can also learn about different piano composers, styles, periods, and genres.
  • -
  • Entertainment: Piano music can provide you with hours of enjoyment and fun. You can listen to it while working, studying, exercising, or relaxing. You can also sing along, dance along, or play along with it. You can also share it with your friends and family.
  • -
-

The Best Sources of Piano Free Download Music

-

There are many sources of piano free download music on the internet. You can find them on websites, apps, podcasts, and online courses. Here are some of the best sources that we recommend:

-

piano background music free download
-royalty free piano music mp3
-free piano stock music tracks
-solo piano music free download
-piano music free download for youtube
-relaxing piano music free download
-classical piano music free download
-sad piano music free download
-romantic piano music free download
-jazz piano music free download
-piano sheet music free download
-piano instrumental music free download
-piano cover music free download
-piano meditation music free download
-piano study music free download
-piano ambient music free download
-piano cinematic music free download
-piano pop music free download
-piano trap music free download
-piano r&b music free download
-piano house music free download
-piano lo-fi music free download
-piano rap music free download
-piano rock music free download
-piano gospel music free download
-piano christmas music free download
-piano halloween music free download
-piano wedding music free download
-piano anime music free download
-piano game music free download
-easy listening piano music free download
-beautiful piano music free download
-emotional piano music free download
-inspiring piano music free download
-uplifting piano music free download
-dramatic piano music free download
-mysterious piano music free download
-creepy piano music free download
-happy piano music free download
-mellow piano music free download
-experimental piano music free download
-electronic piano music free download
-fusion piano music free download
-blues piano music free download
-country piano music free download
-folk piano music free download
-reggae piano music free download
-latin piano music free download
-indian piano music free download

-
    -
• Websites: There are many websites that offer piano free download music in various formats (mp3, wav, ogg, etc.). Some of the best ones are Chosic, Pixabay, Mixkit, etc. These websites have a large collection of piano tracks that you can browse by genre (solo piano, classical piano, jazz piano), mood (calm, relaxing), artist (Mozart), or keyword (lullaby). You can preview the tracks before downloading them, and download them either individually or in bulk.
  • -
• Apps: There are many apps that offer piano free download music in various formats (mp3, wav, ogg, etc.). Some of the best ones are Spotify, SoundCloud, YouTube Music, etc. These apps have a large collection of piano tracks that you can stream or download for offline listening. You can also create your own playlists, follow your favorite artists, and discover new music.
  • -
  • Podcasts: There are many podcasts that offer piano free download music in various formats (mp3, wav, ogg, etc.). Some of the best ones are Piano Relaxation, Piano Stories, Piano Jazz, etc. These podcasts have a large collection of piano tracks that you can listen to on your phone, computer, or smart speaker. You can also subscribe to them, rate them, and leave reviews.
  • -
  • Online Courses: There are many online courses that offer piano free download music in various formats (mp3, wav, ogg, etc.). Some of the best ones are Learn Piano Online, Piano for Beginners, Piano Masterclass, etc. These online courses have a large collection of piano tracks that you can learn from and play along with. You can also access video lessons, quizzes, exercises, and certificates.
  • -
-

The Tips for Finding and Downloading Piano Free Download Music

-

Now that you know the best sources of piano free download music, you might be wondering how to find and download the tracks that you like. Here are some tips that can help you:

-
    -
  • Search by genre, mood, artist, or keyword: You can use the search function on the websites, apps, podcasts, or online courses to find the piano tracks that suit your preferences. You can also use filters or categories to narrow down your results.
  • -
  • Check the license and attribution requirements: Before you download any piano track, make sure you check the license and attribution requirements. Some tracks are free to use for any purpose, while others require you to give credit to the original creator or pay a fee. You can usually find this information on the source page or in the file description.
  • -
  • Use a reliable and secure downloader tool: To download the piano tracks from the websites, apps, podcasts, or online courses, you need to use a reliable and secure downloader tool. You can find many such tools online, but make sure you choose one that is compatible with your device and source. You can also read reviews and ratings to find the best one.
  • -
  • Organize and manage your downloaded files: After you download the piano tracks, you need to organize and manage them properly. You can use folders, labels, tags, or playlists to sort them by genre, mood, artist, or keyword. You can also use a media player or an editor to play or edit them.
  • -
-

The Conclusion

-

Piano free download music is a great way to enjoy beautiful piano music without paying a dime. You can find thousands of piano tracks in various genres, moods, styles, and lengths from different sources on the internet. You can use them for your personal or commercial projects, as long as you follow the license and attribution rules.

-

Piano free download music can also benefit you in many ways. It can help you relax, inspire you, educate you, and entertain you. It can also improve your mood, sleep quality, creativity, memory, concentration, and learning abilities.

-

If you want to find and download piano free download music from different sources, you need to follow some tips. You need to search by genre, mood, artist, or keyword, check the license and attribution requirements, use a reliable and secure downloader tool, and organize and manage your downloaded files.

-

So, what are you waiting for? Start exploring the world of piano free download music today and enjoy the beauty and magic of piano music. You will be amazed by how much it can enrich your life.

-

FAQs

-

Here are some of the frequently asked questions about piano free download music:

-
    -
  1. What is the difference between piano free download music and piano royalty-free music?
  2. -

    Piano free download music is music that you can download from the internet for free. Piano royalty-free music is music that you can use for your projects without paying any royalties to the original creator. However, some piano royalty-free music may require you to pay a one-time fee or give credit to the original creator.

    -
  3. How can I use piano free download music for my projects?
  4. -

    You can use piano free download music for your personal or commercial projects, such as videos, podcasts, games, apps, websites, presentations, etc. However, you need to follow the license and attribution rules of the source. Some sources may allow you to use the music for any purpose, while others may have some restrictions or conditions.

    -
  5. How can I find the best piano free download music for my projects?
  6. -

    You can find the best piano free download music for your projects by searching by genre, mood, artist, or keyword on the websites, apps, podcasts, or online courses that offer piano free download music. You can also use filters or categories to narrow down your results. You can also preview the tracks before downloading them.

    -
  7. How can I download piano free download music from different sources?
  8. -

    You can download piano free download music from different sources by using a reliable and secure downloader tool. You can find many such tools online, but make sure you choose one that is compatible with your device and source. You can also read reviews and ratings to find the best one.

    -
  9. How can I organize and manage my downloaded piano free download music files?
  10. -

    You can organize and manage your downloaded piano free download music files by using folders, labels, tags, or playlists to sort them by genre, mood, artist, or keyword. You can also use a media player or an editor to play or edit them.

    -

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/TikTok The app that lets you express yourself with music filters and effects.md b/spaces/congsaPfin/Manga-OCR/logs/TikTok The app that lets you express yourself with music filters and effects.md deleted file mode 100644 index f7e0310e9fdc07a2f88d3106b41d07fe6026ae68..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/TikTok The app that lets you express yourself with music filters and effects.md +++ /dev/null @@ -1,153 +0,0 @@ - -

TikTok: The Ultimate Guide for Beginners

-

If you are looking for a fun and easy way to express yourself, connect with others, and discover new things, you might want to give TikTok a try. TikTok is a video-sharing app that has taken the world by storm, with over 1 billion monthly active users and millions of videos uploaded every day. But what is TikTok exactly, how do you use it, and why is it so popular? In this guide, we will answer these questions and more, as well as give you some tips and tricks to get the most out of your TikTok experience.

-

What is TikTok?

-

A brief introduction to the app

-

TikTok is a social media app that allows users to create and share short-form videos on any topic. It’s mainly mobile-based, although you can still watch TikTok videos using the web app. The platform allows users to get creative with their content using filters, stickers, voiceovers, sound effects, and background music.

-

tik tok app


Download ––– https://urlca.com/2uOgxz



-

TikTok was launched in China in 2016 as Douyin, and then expanded internationally in 2017 as TikTok. In 2018, it merged with Musical.ly, another popular video app that focused on lip-syncing and dancing. Since then, TikTok has grown into a fully-fledged video service, with content ranging from comedy, gaming, DIY, food, sports, memes, pets, to oddly satisfying, ASMR, and everything in between.

-

TikTok is owned by ByteDance, a Chinese internet company that also operates other apps such as Toutiao (a news aggregator) and Helo (a social networking app for India). ByteDance has faced some controversies over its data privacy and security practices, as well as its alleged censorship of content that is sensitive to the Chinese government. However, TikTok has denied these allegations and has tried to distance itself from ByteDance.

-

How to use TikTok

-

How to create an account

-

To start using TikTok, you need to download the app from the App Store or Google Play Store. You can sign up using your phone number, email address, or a third-party account such as Facebook or Google. You can also choose a username and a profile picture for your account.

-

Once you have an account, you can decide if you want to make it private or public. A private account means that only people who follow you can see your videos and send you messages. A public account means that anyone can see your videos and send you messages. You can change your privacy settings at any time from your profile page.

-

How to watch videos

-

When you open the app, you will see two tabs at the top: Following and For You. The Following tab shows you videos from the users you follow. The For You tab shows you videos that are recommended for you by TikTok’s algorithm based on your preferences and behavior.

-

You can swipe up or down to scroll through the videos. You can also tap on the video to pause or resume it. You can also double-tap on the video to like it, or swipe left to see the user’s profile and other videos.

-

On the right side of the screen, you will see some icons that let you interact with the video. You can tap on the heart icon to like the video, the comment icon to leave a comment, the share icon to share the video with others, and the record icon to create a duet or a reaction video. You can also tap on the spinning record icon at the bottom right to see the sound or song used in the video, and use it for your own videos.

-


-

On the left side of the screen, you will see some information about the video. You can tap on the user’s name to see their profile and follow them, or tap on the caption to see more details about the video. You can also tap on the hashtags or mentions to see more videos related to them.

-

How to make your own videos

-

If you want to create your own videos, you need to tap on the plus icon at the bottom of the screen. This will open the camera mode, where you can choose from various options to make your video.

-

You can either record a video using your phone’s camera, or upload a video from your gallery. You can also choose a sound or a song from TikTok’s library, or use your own voice or music. You can adjust the speed, timer, filters, effects, and beauty mode of your video before or after recording it.

-

Once you have recorded or uploaded your video, you can edit it further using TikTok’s editing tools. You can trim, cut, split, merge, duplicate, or reverse your video clips. You can also add stickers, text, emojis, filters, effects, transitions, and voice effects to your video. You can also adjust the volume, pitch, and speed of your sound or music.

-

When you are done editing your video, you can add a caption, hashtags, mentions, and location to your video. You can also choose who can view your video (public, friends only, or private), who can comment on your video (everyone, friends only, or no one), who can duet or react to your video (everyone, friends only, or no one), and who can download your video (on or off). You can also save your video to your phone or share it with other apps. Finally, you can tap on Post to upload your video to TikTok.

-

How to interact with other users

-

TikTok is not only a platform for creating and watching videos, but also a community for connecting and engaging with other users. There are many ways you can interact with other users on TikTok.

-

You can follow other users that you like or find interesting by tapping on their name and then tapping on Follow. You can also unfollow them at any time by tapping on Following and then tapping on Unfollow. You can see who you are following and who is following you from your profile page.

-

You can send messages to other users by tapping on the message icon at the bottom of the screen. You can either start a new conversation with someone by tapping on New Message and then typing their name or username, or continue an existing conversation with someone by tapping on their name from the list. You can also send messages to multiple users by creating a group chat. You can send text messages, voice messages, photos, videos, stickers, emojis, and GIFs in your messages.

-

You can comment on other users’ videos by tapping on the comment icon below their video and then typing your comment. You can also reply to other users’ comments by tapping on their comment and then typing your reply. You can like other users’ comments by tapping on the heart icon next to their comment.

-

You can duet or react to other users’ videos by tapping on the record icon below their video and then choosing Duet or React. A duet is when you create a split-screen video with another user’s video playing alongside yours. A reaction is when you create a picture-in-picture video with another user’s video playing in a small window while you record yourself reacting to it. You can edit your duet or reaction video using TikTok’s editing tools before posting it.

-

Why is TikTok so popular?

-

The features that make TikTok stand out

-

TikTok has many features that make it different from other social media apps. Some of these features are:

-
  • The short-form format: TikTok videos are usually 15 seconds long, although you can make videos of up to 60 seconds by combining multiple clips. This makes TikTok videos easy to consume and create.
  • The algorithm: TikTok's algorithm is very powerful and personalized, as it learns from your preferences and behavior and shows you videos that you are likely to enjoy and engage with. You can also discover new videos and users by exploring different categories, hashtags, and trends.
  • The sound and music: TikTok has a huge library of sounds and songs that you can use for your videos, or you can use your own voice or music. You can also see what sounds or songs are popular or trending, and use them for your own videos. You can also create your own sounds or songs and share them with others.
  • The editing tools: TikTok has a variety of editing tools that let you customize your videos and make them more creative and fun. You can add filters, stickers, text, emojis, effects, transitions, voice effects, and more to your videos. You can also trim, cut, split, merge, duplicate, or reverse your video clips.
  • The community and culture: TikTok has a vibrant and diverse community of users who share their passions, talents, opinions, humor, and more through their videos. You can connect and interact with other users who have similar interests or tastes as you. You can also join or create challenges, trends, memes, hashtags, and more that are unique to TikTok.

The trends that drive TikTok culture

-

TikTok is also known for its viral trends that shape its culture and influence other platforms. Some of these trends are:

-
  • The dances: TikTok is famous for its dance challenges, where users create or copy dance moves to a specific song or sound. Some of the most popular dance challenges on TikTok are the Renegade, the Savage, the Say So, the WAP, the Blinding Lights, and the Toosie Slide.
  • The lip-syncs: TikTok is also known for its lip-syncs, where users mimic the words or lyrics of a song, a movie scene, a comedy sketch, or anything else. Some of the most popular lip-syncs on TikTok are the I'm Already Tracer, the Hit or Miss, the Can I Pet That Dog?, the I'm Not Like Other Girls, and the I'm an Accountant.
  • The pranks: TikTok is also a platform for pranks, where users trick or scare their friends, family members, strangers, or themselves. Some of the most popular pranks on TikTok are the Invisible Challenge, the Zoom Prank, the Pregnancy Prank, the Shampoo Prank, and the Spider Prank.
  • The transformations: TikTok is also a place for transformations, where users show their before and after changes in appearance, mood, style, or anything else. Some of the most popular transformations on TikTok are the Glow Up Challenge, the Don't Rush Challenge, the Flip The Switch Challenge, the Buss It Challenge, and the Silhouette Challenge.
  • The duets and reactions: TikTok is also a platform for duets and reactions, where users create videos in response to other users' videos, either by adding their own content or by showing their reaction. Some of the most popular duets and reactions on TikTok are the Old Town Road Duet, the Ratatouille Musical, the Hamilton Reaction, the Kombucha Girl, and the Try Not To Laugh Challenge.

The challenges that TikTok faces

-

Despite its popularity and success, TikTok also faces some challenges that threaten its future. Some of these challenges are:

-
  • The legal issues: TikTok has been involved in several legal disputes and investigations over its data privacy and security practices, its content moderation policies, its alleged censorship of content that is sensitive to the Chinese government, and its potential influence on elections and public opinion. TikTok has also been banned or restricted in some countries, such as India, Pakistan, Indonesia, and the United States.
  • The competition: TikTok has to compete with other social media platforms that offer similar or alternative features and services, such as Instagram, YouTube, Snapchat, Facebook, Twitter, and Triller. Some of these platforms have also copied or integrated some of TikTok's features, such as Instagram Reels, YouTube Shorts, Snapchat Spotlight, and Facebook Lasso.
  • The sustainability: TikTok has to maintain its growth and relevance in a fast-changing and crowded market, where user preferences and behaviors can shift quickly and unpredictably. TikTok has to constantly innovate and adapt to keep its users engaged and satisfied, as well as attract new users and advertisers.

How to get the most out of TikTok

-

Tips and tricks for viewers and lurkers

-

If you are a viewer or a lurker on TikTok, meaning that you mainly watch videos without creating or interacting with them, here are some tips and tricks to enhance your experience:

-
  • Customize your For You page: The For You page is where you can discover new videos and users that match your interests and tastes. You can customize it by liking, commenting on, sharing, or following the videos and users that you enjoy, or by tapping on Not Interested or reporting the videos and users that you don't like. You can also use the Discover tab to search for specific categories, hashtags, sounds, or users.
  • Use filters and effects: You can use filters and effects to change the appearance of the videos that you watch. You can access them by tapping on the filter icon at the top right of the screen. You can choose from different categories such as Beauty, Funny, Scary, Trending, etc. You can also use the slider at the bottom to adjust the intensity of the filter or effect.
  • Save videos to your favorites: You can save videos that you like or want to watch later to your favorites. You can access them by tapping on the bookmark icon at the bottom right of the screen. You can also create folders to organize your favorites by tapping on the plus icon at the top right of the screen.
  • Download videos to your phone: You can download videos that you like or want to share with others to your phone. You can do this by tapping on the share icon below the video and then tapping on Save Video. However, this option is only available if the user allows it in their settings.
  • Watch live streams: You can watch live streams from other users who are broadcasting in real time. You can access them by tapping on the Live tab at the top of the screen, and see who is live from the users you follow by tapping on the Following tab. You can interact with the live streamers by sending messages, gifts, or emojis in the chat box.

Tips and tricks for creators and influencers

-

If you are a creator or an influencer on TikTok, meaning that you regularly create and share videos with a large or loyal audience, here are some tips and tricks to boost your performance:

-
  • Know your niche: You should have a clear idea of what kind of content you want to create and who your target audience is. You should also research what topics, hashtags, sounds, or trends are popular or relevant to your niche, and use them for your videos.
  • Optimize your profile: You should have a catchy and memorable username, a high-quality and attractive profile picture, and a concise and informative bio that describes who you are and what you do. You should also link your other social media accounts or websites to your profile, if you have any.
  • Use hashtags and captions: You should use hashtags and captions to make your videos more discoverable and engaging. You should use relevant and specific hashtags that match your content and niche, as well as trending or viral hashtags that can attract more viewers. You should also write captions that summarize or explain your videos, or ask questions or invite feedback from your viewers.
  • Engage with your audience: You should interact with your audience by responding to their comments, messages, duets, or reactions. You should also thank them for their support, ask them for their opinions or suggestions, or invite them to participate in your challenges or contests. You should also follow or shout out some of your fans or fellow creators who inspire you or collaborate with you.
  • Analyze your analytics: You should monitor and analyze your analytics to measure your performance and improve your strategy. You can access your analytics by tapping on the three dots icon at the top right of your profile page and then tapping on Analytics. You can see data such as your video views, likes, comments, shares, followers, watch time, audience demographics, and traffic sources. A toy example of working with these numbers follows this list.
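To make that last tip concrete, here is a small Python sketch of one figure creators often derive from those numbers: an engagement rate. This is an illustrative formula, not an official TikTok metric, and the input values are invented; in practice you would read the counts off the Analytics screen.

```python
# Illustrative only: TikTok exposes these counts in the in-app Analytics
# screen, not through a public API call used here.
def engagement_rate(views: int, likes: int, comments: int, shares: int) -> float:
    """Interactions per view, as a percentage."""
    if views == 0:
        return 0.0
    return 100.0 * (likes + comments + shares) / views

# Example: 12,000 views with 1,500 likes, 90 comments and 60 shares
print(f"{engagement_rate(12_000, 1_500, 90, 60):.2f}%")  # -> 13.75%
```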

Tips and tricks for marketers and businesses

-

If you are a marketer or a business owner who wants to use TikTok for promoting your brand, product, or service, here are some tips and tricks to achieve your goals:

-
  • Create a business account: You should create a business account instead of a personal account to access more features and tools for marketing purposes. You can create one by tapping on the three dots icon at the top right of your profile page and then tapping on Manage Account. You can then switch to a Pro Account and choose Business as your category, and verify your account by providing some information about your business.
  • Use TikTok Ads: You can use TikTok Ads to create and run paid campaigns to reach more potential customers on TikTok. You can access TikTok Ads by visiting ads.tiktok.com and signing up for an account. You can choose from different types of ads such as In-Feed Ads, TopView Ads, Brand Takeover Ads, Branded Hashtag Challenge Ads, Branded Effects Ads, etc. You can also set your budget, target audience, schedule, creative assets, etc.
  • Collaborate with influencers: You can collaborate with influencers who have a large or loyal following on TikTok and who are relevant to your niche or industry. You can ask them to review, endorse, or feature your brand, product, or service in their videos, or to create a challenge, trend, or hashtag related to it. You can find and contact influencers by using platforms such as TikTok Creator Marketplace, FameBit, AspireIQ, etc.
  • Create engaging content: You can also create your own content to showcase your brand, product, or service on TikTok. You should create content that is entertaining, informative, authentic, and relevant to your niche or industry, using the features and tools that TikTok offers, such as filters, effects, sounds, and hashtags. You should also follow the trends and challenges that are popular or related to your niche or industry.
  • Build a community: You can also build a community of loyal and engaged customers on TikTok. You should interact with your followers and potential customers by responding to their comments, messages, duets, or reactions. You should also thank them for their support, ask them for their feedback or testimonials, or invite them to join your loyalty program or newsletter. You should also follow or shout out some of your customers or partners who support you or collaborate with you.

Conclusion

-

TikTok is a video-sharing app that has become one of the most popular and influential social media platforms in the world. It allows users to create and share short-form videos on any topic, using various features and tools to make them more creative and fun. It also allows users to discover and interact with other users who share their interests and tastes.

-

TikTok is not only a platform for entertainment and expression, but also a platform for learning and marketing. Users can learn new skills, ideas, information, or perspectives from other users’ videos. Marketers and businesses can use TikTok to promote their brand, product, or service to a large and diverse audience.

-

TikTok is also a platform for challenges and opportunities. Users can join or create challenges, trends, memes, hashtags, and more that are unique to TikTok culture. Marketers and businesses can face challenges such as legal issues, competition, and sustainability.

-

If you want to get the most out of TikTok, whether you are a viewer, a creator, an influencer, a marketer, or a business owner, you should follow the tips and tricks that we have shared in this guide. We hope that this guide has helped you understand what TikTok is, how to use it, and why it is so popular.

-

FAQs

-

Here are some frequently asked questions about TikTok:

-
  1. How do I get more followers on TikTok?

     There is no magic formula for getting more followers on TikTok, but there are some best practices you can follow: create high-quality and original content that showcases your personality and talent; use relevant and specific hashtags that match your content and niche; follow the trends and challenges that are popular or related to your niche; collaborate with other users who have similar or complementary content or audiences; engage with your existing and potential followers by liking, commenting, sharing, or following their videos; and analyze your analytics to see what works and what doesn't for your content and audience.

  2. How do I make money on TikTok?

     There are several ways to make money on TikTok, depending on your goals and skills: join the TikTok Creator Fund, which pays eligible creators based on their video views and engagement; join the TikTok Live program, which allows you to receive gifts from your viewers during your live streams; join the TikTok Affiliate program, which allows you to earn commissions by promoting products or services from TikTok's partners; create sponsored content for brands or businesses that match your niche or audience; sell your own products or services through your videos or links; or offer your skills or services as a freelancer or consultant to other users who need them.

  3. How do I delete my TikTok account?

     If you want to delete your TikTok account, follow these steps: tap on the three dots icon at the top right of your profile page and then tap on Manage Account; tap on Delete Account at the bottom of the screen and follow the instructions; verify your identity using your phone number, email address, or third-party account; and confirm your decision by tapping on Delete Account. Note that deleting your account will remove all your videos, messages, comments, likes, followers, and other data from TikTok. You will also lose access to the TikTok Creator Fund, TikTok Live, TikTok Affiliate, and other programs that you have joined. You can still restore your account within 30 days of deletion by logging in with your credentials, but after that period, your account will be permanently deleted.

  4. How do I report a problem or a user on TikTok?

     If you encounter a problem or a user that violates TikTok's Community Guidelines or Terms of Service, you can report it to TikTok's team. Tap on the three dots icon at the top right of the video or profile that you want to report and then tap on Report. You can then choose the reason for your report and provide more details if needed. You can also block or mute the user by tapping on their name and then tapping on Block or Mute.

  5. How do I contact TikTok's customer service?

     If you have any questions, feedback, suggestions, or complaints about TikTok's app or service, you can contact TikTok's customer service team. Tap on the three dots icon at the top right of your profile page and then tap on Report a Problem. You can then choose the category and subcategory of your issue, provide more details if needed, and attach screenshots or videos to illustrate your issue. You can also email TikTok's customer service team at feedback@tiktok.com.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/What You Need to Know About Video Live Wallpaper Maker Premium APK.md b/spaces/congsaPfin/Manga-OCR/logs/What You Need to Know About Video Live Wallpaper Maker Premium APK.md deleted file mode 100644 index c2a5c0da14fbc0c89ebcb07cebe6b1a18c6c6f86..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/What You Need to Know About Video Live Wallpaper Maker Premium APK.md +++ /dev/null @@ -1,135 +0,0 @@ - -

    Video Live Wallpaper Maker Premium APK: How to Create Stunning Wallpapers for Your Phone

    -

    Do you want to make your phone look more lively and attractive? Do you want to express your personality and mood with your wallpaper? Do you want to have fun and be creative with your videos? If you answered yes to any of these questions, then you need to try Video Live Wallpaper Maker Premium APK.

    -

    What is Video Live Wallpaper Maker Premium APK?

    -

    Video Live Wallpaper Maker Premium APK is a powerful and easy-to-use app that lets you create amazing live wallpapers from your videos. You can use any video from your gallery or record your own with the built-in camera. You can also edit and customize your video with various filters, effects, stickers, text, and music. You can then set your video as a live wallpaper on your home screen or lock screen, and enjoy watching it every time you use your phone.

    -

    video live wallpaper maker premium apk


    Download File ——— https://urlca.com/2uO5zo



    -

    Features of Video Live Wallpaper Maker Premium APK

    -

    Video Live Wallpaper Maker Premium APK has many features that make it stand out from other similar apps. Some of these features are:

    -
  • It supports all common video formats, including MP4, AVI, MKV, MOV, FLV, and more.
  • It has a simple and intuitive interface that makes it easy to use for anyone.
  • It has a premium version that unlocks all the features and removes all the ads and watermarks.
  • It has a large collection of filters, effects, stickers, text, and music that you can apply to your video.
  • It has a preview mode that lets you see how your video will look as a live wallpaper before setting it.
  • It has a low battery consumption mode that saves your battery life while running your live wallpaper.
  • It has a community feature that lets you share your creations with other users and discover new wallpapers.

    Benefits of Video Live Wallpaper Maker Premium APK

    -

    Video Live Wallpaper Maker Premium APK has many benefits that make it worth downloading and using. Some of these benefits are:

    -
  • It enhances the appearance and functionality of your phone by adding dynamic and interactive wallpapers.
  • It lets you express yourself and show off your style and taste with your wallpaper.
  • It lets you have fun and be creative with your videos by adding various elements and effects.
  • It lets you enjoy your favorite videos and memories on your phone screen every day.
  • It lets you impress your friends and family with your unique and stunning wallpapers.

    How to Download and Install Video Live Wallpaper Maker Premium APK?

    -

    If you are interested in trying out Video Live Wallpaper Maker Premium APK, you need to download and install it on your phone. Here are the steps to do so:

    -

    Steps to Download and Install Video Live Wallpaper Maker Premium APK

    -
  1. Go to the official website of Video Live Wallpaper Maker Premium APK or click on this link: .
  2. Select the download button and wait for the file to be downloaded on your phone.
  3. Go to your file manager and locate the downloaded file. Tap on it to start the installation process.
  4. If you see a warning message that says "Install blocked", go to your settings and enable "Unknown sources" or "Allow from this source".
  5. Follow the instructions on the screen and complete the installation process.
  6. Launch the app and enjoy creating your live wallpapers.

    Tips and Tricks for Using Video Live Wallpaper Maker Premium APK

    -

    To make the most out of Video Live Wallpaper Maker Premium APK, here are some tips and tricks that you can follow:

    -
  • Use high-quality videos that have good resolution and frame rate for better results.
  • Trim your videos to the desired length and remove any unwanted parts.
  • Adjust the brightness, contrast, saturation, and hue of your videos to match your preference (a small sketch of what such an adjustment does follows this list).
  • Use filters and effects that suit your theme and mood. You can also combine different filters and effects for more variety.
  • Add stickers and text that complement your video. You can also change the size, color, font, and position of your stickers and text.
  • Add music that matches your video. You can choose from the app's library or use your own music from your phone.
  • Preview your video before setting it as a live wallpaper. You can also change the playback speed and direction of your video.
  • Share your live wallpapers with other users and get inspired by their creations.
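As a peek under the hood, the brightness and contrast sliders in editors like this one do something close to the following Pillow sketch on each frame. This is a simplified illustration, not the app's actual code: "frame.jpg" stands in for a frame exported from your video, and the enhancement factors are arbitrary examples.

```python
# Minimal sketch of a brightness/contrast adjustment on a single frame.
from PIL import Image, ImageEnhance

frame = Image.open("frame.jpg")                           # placeholder file
brighter = ImageEnhance.Brightness(frame).enhance(1.2)    # +20% brightness
punchier = ImageEnhance.Contrast(brighter).enhance(1.1)   # +10% contrast
punchier.save("frame_edited.jpg")
```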

    How to Create Amazing Wallpapers with Video Live Wallpaper Maker Premium APK?

    -

    Now that you have downloaded and installed Video Live Wallpaper Maker Premium APK, you are ready to create amazing wallpapers with it. Here are the steps to do so:

    -

    Choose a Video or Record Your Own

    -

    The first step is to choose a video that you want to use as a live wallpaper. You can select any video from your gallery or record a new one with the app's camera. You can also browse through the app's community and download any video that you like.

    -

    Edit and Customize Your Video

    -

    The second step is to edit and customize your video according to your liking. You can use the app's tools to trim, crop, rotate, flip, and zoom your video. You can also add filters, effects, stickers, text, and music to your video. You can adjust the settings of each element and preview the changes in real time.

    -

    Set Your Video as Live Wallpaper

    -

    The final step is to set your video as a live wallpaper on your phone. You can choose whether you want to set it as a home screen wallpaper, a lock screen wallpaper, or both. You can also adjust the quality and battery consumption of your live wallpaper. Once you are done, you can enjoy watching your video on your phone screen.

    -


    -

    Conclusion

    -

    Video Live Wallpaper Maker Premium APK is a great app that lets you create stunning live wallpapers from your videos. You can use any video from your gallery or record your own with the app's camera. You can also edit and customize your video with various filters, effects, stickers, text, and music. You can then set your video as a live wallpaper on your home screen or lock screen, and enjoy watching it every time you use your phone.

    -

    Summary of the Main Points

    -

    In this article, we have covered the following points:

    -
  • What is Video Live Wallpaper Maker Premium APK?
  • What are the features and benefits of Video Live Wallpaper Maker Premium APK?
  • How to download and install Video Live Wallpaper Maker Premium APK?
  • How to create amazing wallpapers with Video Live Wallpaper Maker Premium APK?

    Call to Action

    -

    If you are looking for a way to make your phone look more lively and attractive, you should definitely try Video Live Wallpaper Maker Premium APK. It is a powerful and easy-to-use app that lets you create amazing live wallpapers from your videos. You can have fun and be creative with your videos by adding various elements and effects. You can also express yourself and show off your style and taste with your wallpaper. You can impress your friends and family with your unique and stunning wallpapers.

    -

    So what are you waiting for? Download Video Live Wallpaper Maker Premium APK now and start creating your own live wallpapers!

    -

    FAQs

    -

    Here are some frequently asked questions about Video Live Wallpaper Maker Premium APK:

    -
  1. Is Video Live Wallpaper Maker Premium APK safe to use?

     Yes, Video Live Wallpaper Maker Premium APK is safe to use. It does not contain any viruses or malware that can harm your phone or data. It also does not require any root access or permissions that can compromise your privacy or security.

  2. Is Video Live Wallpaper Maker Premium APK free to use?

     Yes, Video Live Wallpaper Maker Premium APK is free to use. However, it has a premium version that unlocks all the features and removes all the ads and watermarks. You can download the premium version from the official website or click on this link: .

  3. How can I share my live wallpapers with other users?

     You can share your live wallpapers with other users by using the app's community feature. You can upload your creations to the app's gallery and browse through other users' wallpapers. You can also rate, comment on, and download other users' wallpapers.

  4. How can I change the quality and battery consumption of my live wallpaper?

     You can change the quality and battery consumption of your live wallpaper in the app's settings. You can choose from low, medium, high, and ultra quality options. You can also enable or disable the low battery consumption mode that saves your battery life while running your live wallpaper.

  5. How can I contact the developers of Video Live Wallpaper Maker Premium APK?

     You can contact the developers of Video Live Wallpaper Maker Premium APK by using the app's feedback feature. You can send them your suggestions, questions, or issues. You can also follow them on their social media accounts for more updates and news.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/WhatsApp estilo iPhone disfruta de los emojis y el diseo de iOS en tu WhatsApp para Android.md b/spaces/congsaPfin/Manga-OCR/logs/WhatsApp estilo iPhone disfruta de los emojis y el diseo de iOS en tu WhatsApp para Android.md deleted file mode 100644 index bae8bf3f5527163b756f4c5c9ad1e09bb5fb31c0..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/WhatsApp estilo iPhone disfruta de los emojis y el diseo de iOS en tu WhatsApp para Android.md +++ /dev/null @@ -1,100 +0,0 @@ - -

    How to Download WhatsApp with iPhone Style for Android in 2021

    -

    WhatsApp is one of the most popular messaging apps in the world, with over two billion users. However, not everyone is satisfied with the default look and functionality of WhatsApp, especially if they have switched from an iPhone to an Android device or vice versa. If you are one of those people who want to have a WhatsApp experience that resembles the iOS version, then this article is for you. We will show you how to download and install WhatsApp with iPhone style for Android in 2021, a mod that will give you a theme that mimics the iOS appearance and emojis, as well as some extra features that will enhance your WhatsApp usage.

    -

    What is WhatsApp with iPhone Style for Android?

    -

    WhatsApp with iPhone style for Android is a mod, or a modified version, of WhatsApp that is based on Fouad WhatsApp, one of the most popular and trusted WhatsApp mods available. A mod is an unofficial app that offers some features and options that are not present in the official app, such as customization, privacy, security, and more. However, not all mods are safe or reliable, so you should always download them from trusted sources and at your own risk.

    -

    whatsapp estilo iphone descargar 2021 apk malavida


DOWNLOAD: https://urlca.com/2uO5oy



    -

    A mod based on Fouad WhatsApp

    -

    Fouad WhatsApp is a mod that is known for its stability, performance, and updates. It has a lot of features that make it stand out from other mods, such as themes, fonts, colors, wallpapers, stickers, emojis, and more. It also has some advanced options that let you control your privacy and security settings, such as hiding your online status, disabling forwarded messages, locking chats with passwords or fingerprints, and more.

    -

    A theme that mimics iOS appearance and emojis

    -

    WhatsApp with iPhone style for Android is a mod that uses Fouad WhatsApp as its base, but adds a theme that makes it look like the iOS version of WhatsApp. This means that you will have a WhatsApp app that has the same layout, icons, buttons, menus, notifications, and animations as the iPhone version. You will also have access to the iOS emojis, which are different from the Android ones. This way, you can enjoy a different look and feel of WhatsApp on your Android device.

    -

    A way to customize and enhance WhatsApp features

    -

    WhatsApp with iPhone style for Android is not only a theme, but also a way to customize and enhance your WhatsApp features. You can change the fonts, colors, wallpapers, stickers, emojis, and more according to your preferences. You can also access some extra functions and options that are not available in the official app, such as call blocker, voice modulator, anti-delete messages and status, customizable chats and contacts list, and more.

    -

    Why Download WhatsApp with iPhone Style for Android?

    -

    There are many reasons why you might want to download WhatsApp with iPhone style for Android. Here are some of them:

    -


    -

    To enjoy a different look and feel of WhatsApp

    -

    If you are bored with the default look and feel of WhatsApp on your Android device, you might want to try WhatsApp with iPhone style for Android. This mod will give you a fresh and new look of WhatsApp that resembles the iOS version. You will be able to enjoy the same design, layout, icons, buttons, menus, notifications, and animations as the iPhone users. You will also be able to use the iOS emojis, which are different from the Android ones. This way, you can have a more fun and exciting WhatsApp experience on your Android device.

    -

    To protect your privacy and security

    -

    If you are concerned about your privacy and security on WhatsApp, you might want to download WhatsApp with iPhone style for Android. This mod will give you more control over your privacy and security settings, such as hiding your online status, disabling forwarded messages, locking chats with passwords or fingerprints, and more. You will also be able to prevent others from deleting messages or status that they have sent to you. This way, you can have a more secure and private WhatsApp experience on your Android device.

    -

    To access extra functions and options

    -

    If you are looking for more functions and options on WhatsApp, you might want to download WhatsApp with iPhone style for Android. This mod will give you access to some extra features that are not available in the official app, such as call blocker, voice modulator, customizable fonts and colors, and more. You will also be able to customize your chats and contacts list according to your preferences. This way, you can have a more functional and personalized WhatsApp experience on your Android device.

    -

    How to Download and Install WhatsApp with iPhone Style for Android?

    -

    If you are interested in downloading and installing WhatsApp with iPhone style for Android, you will need to follow these steps:

    -

    Step 1: Backup your chats and uninstall the official WhatsApp app

    -

    Before you download and install WhatsApp with iPhone style for Android, you will need to backup your chats and uninstall the official WhatsApp app from your device. To backup your chats, go to Settings > Chats > Chat backup and tap on Backup. To uninstall the official WhatsApp app, go to Settings > Apps > WhatsApp and tap on Uninstall. This is necessary because you cannot have two WhatsApp apps with the same phone number on the same device.

    -

    Step 2: Download the APK file from a reliable source

    -

    After you have backed up your chats and uninstalled the official WhatsApp app, you will need to download the APK file of WhatsApp with iPhone style for Android from a reliable source. An APK file is an installer file that allows you to install apps that are not available on the Google Play Store. However, not all APK files are safe or reliable, so you should always download them from trusted sources and at your own risk. One of the sources that you can use is Malavida, a website that offers safe and verified APK files of various apps and games. To download the APK file of WhatsApp with iPhone style for Android from Malavida, go to [this link] and tap on Download.

    -

    Step 3: Enable unknown sources and install the APK file

    -

    After you have downloaded the APK file of WhatsApp with iPhone style for Android from Malavida, you will need to enable unknown sources and install the APK file on your device. Unknown sources are sources that are not authorized by Google Play Store, such as APK files from websites or third-party app stores. To enable unknown sources, go to Settings > Security > Unknown sources and toggle it on. To install the APK file of WhatsApp with iPhone style for Android on your device, go to the folder where you have saved the APK file and tap on it. Follow the instructions on the screen to complete the installation.

    -

    Step 4: Verify your phone number and restore your chats

    -

    After you have installed the APK file of WhatsApp with iPhone style for Android on your device, you will need to verify your phone number and restore your chats. To verify your phone number, open the app and enter your phone number that you have used for the official WhatsApp app. You will receive a verification code via SMS or call that you will need to enter in the app. To restore your chats, tap on Restore when prompted and wait for the process to finish.

    -

    Step 5: Choose the iOS theme and enjoy

    -

    After you have verified your phone number and restored your chats, you will need to choose the iOS theme and enjoy WhatsApp with iPhone style for Android on your device. To choose the iOS theme, go to Settings > Fouad Mods > Themes > Load Theme > iOS Theme.zip and tap on Apply. You will see a message that says Theme applied successfully. Restart WhatsApp now. Tap on OK and wait for the app to restart. You will now see that your WhatsApp app has the same appearance and emojis as the iOS version. You can also explore the other features and options that WhatsApp with iPhone style for Android offers. Enjoy!

    -

    What are the Main Features of WhatsApp with iPhone Style for Android?

    -

    WhatsApp with iPhone style for Android is not only a theme, but also a mod that offers some amazing features that are not present in the official app. Here are some of the main features that you can enjoy with this mod:

    -

    Call blocker

    -

    With this feature, you can block unwanted calls from anyone on WhatsApp. You can choose to block all calls, calls from unknown numbers, or calls from specific contacts. You can also enable or disable the call blocker at any time. To access this feature, go to Settings > Fouad Mods > Privacy > Call Blocker.

    -

    Anti-delete messages and status

    -

    With this feature, you can prevent others from deleting messages or status that they have sent to you. This means that even if they delete them for everyone, you will still be able to see them on your device. You will also see a message that says This message was deleted next to the deleted message or status. To access this feature, go to Settings > Fouad Mods > Privacy > Anti-Delete Messages and Anti-Delete Status.

    -

    Voice modulator

    -

    With this feature, you can change your voice when sending voice notes on WhatsApp. You can choose from different voice effects, such as chipmunk, robot, alien, drunk, and more. You can also adjust the pitch and speed of your voice. To access this feature, go to Settings > Fouad Mods > Voice Changer.

    -

    Customizable fonts and colors

    -

    With this feature, you can change the fonts and colors of your WhatsApp app according to your preferences. You can choose from different fonts, such as Arial, Comic Sans, Helvetica, and more. You can also change the colors of the text, background, header, footer, and more. To access this feature, go to Settings > Fouad Mods > Universal > Colors and Fonts.

    -

    And more

    -

    WhatsApp with iPhone style for Android has many more features that you can explore and enjoy, such as customizable chats and contacts list, wallpapers, stickers, emojis, media mods, lock mods, conversation mods, and more. To access these features, go to Settings > Fouad Mods and browse through the different categories.

    -

    Conclusion

    -

    WhatsApp with iPhone style for Android is a mod that lets you have a WhatsApp experience that resembles the iOS version on your Android device. It is based on Fouad WhatsApp, one of the most popular and trusted WhatsApp mods available. It offers a theme that mimics the iOS appearance and emojis, as well as some extra features and options that enhance your WhatsApp usage. To download and install WhatsApp with iPhone style for Android in 2021, you will need to backup your chats and uninstall the official WhatsApp app, download the APK file from a reliable source such as Malavida, enable unknown sources and install the APK file on your device, verify your phone number and restore your chats, and choose the iOS theme and enjoy. In this article, we have shown you what WhatsApp with iPhone style for Android is, why you might want to download it, how to download and install it, and what are the main features that it offers. We hope that you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below.

    -

    FAQs

    -

    Here are some frequently asked questions about WhatsApp with iPhone style for Android:

    -

    Is WhatsApp with iPhone style for Android safe?

    -

    WhatsApp with iPhone style for Android is a mod that is based on Fouad WhatsApp, one of the most popular and trusted WhatsApp mods available. However, it is not an official app and it is not authorized by WhatsApp or Google Play Store. Therefore, there is always a risk of downloading and installing mods from unknown sources, such as malware, viruses, data theft, account ban, and more. You should always download mods from trusted sources and at your own risk.

    -

    Is WhatsApp with iPhone style for Android updated?

    -

    WhatsApp with iPhone style for Android is a mod that is updated regularly by its developers. However, it is not always compatible with the latest version of the official WhatsApp app. Therefore, you might experience some bugs, glitches, or errors when using the mod. You should always check for updates and download them from reliable sources.

    -

    Can I use WhatsApp with iPhone style for Android with the official WhatsApp app?

    -

    No, you cannot use WhatsApp with iPhone style for Android with the official WhatsApp app on the same device. This is because you cannot have two WhatsApp apps with the same phone number on the same device. You will need to backup your chats and uninstall the official WhatsApp app before you can download and install WhatsApp with iPhone style for Android.

    -

    Can I use WhatsApp with iPhone style for Android on an iPhone?

    -

    No, you cannot use WhatsApp with iPhone style for Android on an iPhone. This is because this mod is only compatible with Android devices. If you want to use a mod on an iPhone, you will need to jailbreak your device and download a mod that is compatible with iOS devices.

    -

    Can I switch back to the official WhatsApp app after using WhatsApp with iPhone style for Android?

    -

    Yes, you can switch back to the official WhatsApp app after using WhatsApp with iPhone style for Android. However, you will need to backup your chats and uninstall the mod before you can download and install the official WhatsApp app from the Google Play Store. You will also lose some of the features and options that the mod offers.

    -
    -
\ No newline at end of file diff --git a/spaces/cooelf/Multimodal-CoT/timm/data/constants.py b/spaces/cooelf/Multimodal-CoT/timm/data/constants.py deleted file mode 100644 index d6d4a01b0316989a3f5142167f1e384b098bc930..0000000000000000000000000000000000000000 --- a/spaces/cooelf/Multimodal-CoT/timm/data/constants.py +++ /dev/null @@ -1,7 +0,0 @@
-DEFAULT_CROP_PCT = 0.875
-IMAGENET_DEFAULT_MEAN = (0.485, 0.456, 0.406)
-IMAGENET_DEFAULT_STD = (0.229, 0.224, 0.225)
-IMAGENET_INCEPTION_MEAN = (0.5, 0.5, 0.5)
-IMAGENET_INCEPTION_STD = (0.5, 0.5, 0.5)
-IMAGENET_DPN_MEAN = (124 / 255, 117 / 255, 104 / 255)
-IMAGENET_DPN_STD = tuple([1 / (.0167 * 255)] * 3)
diff --git a/spaces/daibs/bananafreshnessclass/app.py b/spaces/daibs/bananafreshnessclass/app.py deleted file mode 100644 index 779fd0f4c9338586b17248e18ecf971f77a7979f..0000000000000000000000000000000000000000 --- a/spaces/daibs/bananafreshnessclass/app.py +++ /dev/null @@ -1,35 +0,0 @@
-import numpy as np
-import gradio as gr
-from tensorflow.keras.models import load_model
-import imutils
-import matplotlib.pyplot as plt
-import cv2
-import numpy as np
-from tensorflow.keras.preprocessing.image import img_to_array
-model = load_model("pisang.h5")
-
-def prosesgambar(gambar):
-    # load the image
-    image = gambar
-    output = imutils.resize(image, width=400)
-
-    # pre-process the image for classification
-    image = cv2.resize(image, (94, 94))
-    image = image.astype("float") / 255.0
-    image = img_to_array(image)
-    image = np.expand_dims(image, axis=0)
-    return image
-
-
-def prediksi(gambar):
-    a = np.round(model.predict(prosesgambar(gambar)), 4)[0].tolist()
-    if a.index(max(a)) == 1:
-        pred = "Segar"
-    else:
-        pred = "Busuk"
-    return pred
-
-demo = gr.Interface(prediksi, gr.Image(shape=(200, 200)), "text")
-demo.launch()
\ No newline at end of file diff --git a/spaces/dammasimbung/Cardiovascular-Detecting-App/setup.sh b/spaces/dammasimbung/Cardiovascular-Detecting-App/setup.sh deleted file mode 100644 index c8650a8b74a58d9a5f53b185fd711c5668e1cd52..0000000000000000000000000000000000000000 --- a/spaces/dammasimbung/Cardiovascular-Detecting-App/setup.sh +++ /dev/null @@ -1,13 +0,0 @@
-mkdir -p ~/.streamlit/
-
-echo "\
-[general]\n\
-email = \"your-email@domain.com\"\n\
-" > ~/.streamlit/credentials.toml
-
-echo "\
-[server]\n\
-headless = true\n\
-enableCORS=false\n\
-port = $PORT\n\
-" > ~/.streamlit/config.toml
\ No newline at end of file diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/McIdasImagePlugin.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/McIdasImagePlugin.py deleted file mode 100644 index 17c008b9a6a1218f6e51add4fda83acb92ee06ce..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/McIdasImagePlugin.py +++ /dev/null @@ -1,75 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# Basic McIdas support for PIL
-#
-# History:
-# 1997-05-05 fl   Created (8-bit images only)
-# 2009-03-08 fl   Added 16/32-bit support.
-#
-# Thanks to Richard Jones and Craig Swank for specs and samples.
-#
-# Copyright (c) Secret Labs AB 1997.
-# Copyright (c) Fredrik Lundh 1997.
-#
-# See the README file for information on usage and redistribution.
-#
-
-import struct
-
-from . import Image, ImageFile
-
-
-def _accept(s):
-    return s[:8] == b"\x00\x00\x00\x00\x00\x00\x00\x04"
-
-
-##
-# Image plugin for McIdas area images.
-
-
-class McIdasImageFile(ImageFile.ImageFile):
-    format = "MCIDAS"
-    format_description = "McIdas area file"
-
-    def _open(self):
-        # parse area file directory
-        s = self.fp.read(256)
-        if not _accept(s) or len(s) != 256:
-            msg = "not an McIdas area file"
-            raise SyntaxError(msg)
-
-        self.area_descriptor_raw = s
-        self.area_descriptor = w = [0] + list(struct.unpack("!64i", s))
-
-        # get mode
-        if w[11] == 1:
-            mode = rawmode = "L"
-        elif w[11] == 2:
-            # FIXME: add memory map support
-            mode = "I"
-            rawmode = "I;16B"
-        elif w[11] == 4:
-            # FIXME: add memory map support
-            mode = "I"
-            rawmode = "I;32B"
-        else:
-            msg = "unsupported McIdas format"
-            raise SyntaxError(msg)
-
-        self.mode = mode
-        self._size = w[10], w[9]
-
-        offset = w[34] + w[15]
-        stride = w[15] + w[10] * w[11] * w[14]
-
-        self.tile = [("raw", (0, 0) + self.size, offset, (rawmode, stride, 1))]
-
-
-# --------------------------------------------------------------------
-# registry
-
-Image.register_open(McIdasImageFile.format, McIdasImageFile, _accept)
-
-# no default extension
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/altair/expr/__init__.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/altair/expr/__init__.py deleted file mode 100644 index 6ba7f8b8b96e28e4f0f7f143f29023d1bc0e58ba..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/altair/expr/__init__.py +++ /dev/null @@ -1,19 +0,0 @@
-"""Tools for creating transform & filter expressions with a python syntax"""
-# ruff: noqa
-from typing import Any
-
-from .core import datum, Expression
-from .funcs import *
-from .consts import *
-from ..vegalite.v5.schema.core import ExprRef as _ExprRef
-
-
-class _ExprType:
-    def __init__(self, expr):
-        vars(self).update(expr)
-
-    def __call__(self, expr, **kwargs):
-        return _ExprRef(expr, **kwargs)
-
-
-expr: Any = _ExprType(globals())
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/G_S_U_B_.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/G_S_U_B_.py deleted file mode 100644 index bb8375a5f83029d2b05388d5c882edd9c4aba95c..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/G_S_U_B_.py +++ /dev/null @@ -1,5 +0,0 @@
-from .otBase import BaseTTXConverter
-
-
-class table_G_S_U_B_(BaseTTXConverter):
-    pass
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/ttProgram.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/ttProgram.py deleted file mode 100644 index 84aa63f36301ec9a4ae21acff0cbc95010d956b7..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/ttProgram.py +++ /dev/null @@ -1,593 +0,0 @@
-"""ttLib.tables.ttProgram.py -- Assembler/disassembler for TrueType bytecode programs."""
-from __future__ import annotations
-
-from fontTools.misc.textTools import num2binary, binary2num, readHex, strjoin
-import array
-from io import StringIO
-from typing import List
-import re
-import logging
-
-
-log = logging.getLogger(__name__)
-
-# fmt: off
-
-# first, the list of instructions that eat bytes or words from the instruction stream
-
-streamInstructions = [
-#
-#   opcode  mnemonic  argBits  descriptive name  pops  pushes  eats from instruction stream / pushes
-#
-    (0x40, 'NPUSHB', 0, 'PushNBytes', 0, -1),  # n, b1, b2,...bn  b1,b2...bn
-    (0x41, 'NPUSHW', 0, 'PushNWords', 0, -1),  # n, w1, w2,...w  w1,w2...wn
-    (0xb0, 'PUSHB', 3, 'PushBytes', 0, -1),  # b0, b1,..bn  b0, b1, ...,bn
-    (0xb8, 'PUSHW', 3, 'PushWords', 0, -1),  # w0,w1,..wn  w0 ,w1, ...wn
-]
-
-
-# next, the list of "normal" instructions
-
-instructions = [
-#
-#   opcode  mnemonic  argBits  descriptive name  pops  pushes  pops / pushes
-#
-    (0x7f, 'AA', 0, 'AdjustAngle', 1, 0),  # p  -
-    (0x64, 'ABS', 0, 'Absolute', 1, 1),  # n  |n|
-    (0x60, 'ADD', 0, 'Add', 2, 1),  # n2, n1  (n1 + n2)
-    (0x27, 'ALIGNPTS', 0, 'AlignPts', 2, 0),  # p2, p1  -
-    (0x3c, 'ALIGNRP', 0, 'AlignRelativePt', -1, 0),  # p1, p2, ... , ploopvalue  -
-    (0x5a, 'AND', 0, 'LogicalAnd', 2, 1),  # e2, e1  b
-    (0x2b, 'CALL', 0, 'CallFunction', 1, 0),  # f  -
-    (0x67, 'CEILING', 0, 'Ceiling', 1, 1),  # n  ceil(n)
-    (0x25, 'CINDEX', 0, 'CopyXToTopStack', 1, 1),  # k  ek
-    (0x22, 'CLEAR', 0, 'ClearStack', -1, 0),  # all items on the stack  -
-    (0x4f, 'DEBUG', 0, 'DebugCall', 1, 0),  # n  -
-    (0x73, 'DELTAC1', 0, 'DeltaExceptionC1', -1, 0),  # argn, cn, argn-1,cn-1, , arg1, c1  -
-    (0x74, 'DELTAC2', 0, 'DeltaExceptionC2', -1, 0),  # argn, cn, argn-1,cn-1, , arg1, c1  -
-    (0x75, 'DELTAC3', 0, 'DeltaExceptionC3', -1, 0),  # argn, cn, argn-1,cn-1, , arg1, c1  -
-    (0x5d, 'DELTAP1', 0, 'DeltaExceptionP1', -1, 0),  # argn, pn, argn-1, pn-1, , arg1, p1  -
-    (0x71, 'DELTAP2', 0, 'DeltaExceptionP2', -1, 0),  # argn, pn, argn-1, pn-1, , arg1, p1  -
-    (0x72, 'DELTAP3', 0, 'DeltaExceptionP3', -1, 0),  # argn, pn, argn-1, pn-1, , arg1, p1  -
-    (0x24, 'DEPTH', 0, 'GetDepthStack', 0, 1),  # -  n
-    (0x62, 'DIV', 0, 'Divide', 2, 1),  # n2, n1  (n1 * 64)/ n2
-    (0x20, 'DUP', 0, 'DuplicateTopStack', 1, 2),  # e  e, e
-    (0x59, 'EIF', 0, 'EndIf', 0, 0),  # -  -
-    (0x1b, 'ELSE', 0, 'Else', 0, 0),  # -  -
-    (0x2d, 'ENDF', 0, 'EndFunctionDefinition', 0, 0),  # -  -
-    (0x54, 'EQ', 0, 'Equal', 2, 1),  # e2, e1  b
-    (0x57, 'EVEN', 0, 'Even', 1, 1),  # e  b
-    (0x2c, 'FDEF', 0, 'FunctionDefinition', 1, 0),  # f  -
-    (0x4e, 'FLIPOFF', 0, 'SetAutoFlipOff', 0, 0),  # -  -
-    (0x4d, 'FLIPON', 0, 'SetAutoFlipOn', 0, 0),  # -  -
-    (0x80, 'FLIPPT', 0, 'FlipPoint', -1, 0),  # p1, p2, ..., ploopvalue  -
-    (0x82, 'FLIPRGOFF', 0, 'FlipRangeOff', 2, 0),  # h, l  -
-    (0x81, 'FLIPRGON', 0, 'FlipRangeOn', 2, 0),  # h, l  -
-    (0x66, 'FLOOR', 0, 'Floor', 1, 1),  # n  floor(n)
-    (0x46, 'GC', 1, 'GetCoordOnPVector', 1, 1),  # p  c
-    (0x88, 'GETINFO', 0, 'GetInfo', 1, 1),  # selector  result
-    (0x91, 'GETVARIATION', 0, 'GetVariation', 0, -1),  # -  a1,..,an
-    (0x0d, 'GFV', 0, 'GetFVector', 0, 2),  # -  px, py
-    (0x0c, 'GPV', 0, 'GetPVector', 0, 2),  # -  px, py
-    (0x52, 'GT', 0, 'GreaterThan', 2, 1),  # e2, e1  b
-    (0x53, 'GTEQ', 0, 'GreaterThanOrEqual', 2, 1),  # e2, e1  b
-    (0x89, 'IDEF', 0, 'InstructionDefinition', 1, 0),  # f  -
-    (0x58, 'IF', 0, 'If', 1, 0),  # e  -
-    (0x8e, 'INSTCTRL', 0, 'SetInstrExecControl', 2, 0),  # s, v  -
-    (0x39, 'IP', 0, 'InterpolatePts', -1, 0),  # p1, p2, ...
, ploopvalue - - (0x0f, 'ISECT', 0, 'MovePtToIntersect', 5, 0), # a1, a0, b1, b0, p - - (0x30, 'IUP', 1, 'InterpolateUntPts', 0, 0), # - - - (0x1c, 'JMPR', 0, 'Jump', 1, 0), # offset - - (0x79, 'JROF', 0, 'JumpRelativeOnFalse', 2, 0), # e, offset - - (0x78, 'JROT', 0, 'JumpRelativeOnTrue', 2, 0), # e, offset - - (0x2a, 'LOOPCALL', 0, 'LoopAndCallFunction', 2, 0), # f, count - - (0x50, 'LT', 0, 'LessThan', 2, 1), # e2, e1 b - (0x51, 'LTEQ', 0, 'LessThenOrEqual', 2, 1), # e2, e1 b - (0x8b, 'MAX', 0, 'Maximum', 2, 1), # e2, e1 max(e1, e2) - (0x49, 'MD', 1, 'MeasureDistance', 2, 1), # p2,p1 d - (0x2e, 'MDAP', 1, 'MoveDirectAbsPt', 1, 0), # p - - (0xc0, 'MDRP', 5, 'MoveDirectRelPt', 1, 0), # p - - (0x3e, 'MIAP', 1, 'MoveIndirectAbsPt', 2, 0), # n, p - - (0x8c, 'MIN', 0, 'Minimum', 2, 1), # e2, e1 min(e1, e2) - (0x26, 'MINDEX', 0, 'MoveXToTopStack', 1, 1), # k ek - (0xe0, 'MIRP', 5, 'MoveIndirectRelPt', 2, 0), # n, p - - (0x4b, 'MPPEM', 0, 'MeasurePixelPerEm', 0, 1), # - ppem - (0x4c, 'MPS', 0, 'MeasurePointSize', 0, 1), # - pointSize - (0x3a, 'MSIRP', 1, 'MoveStackIndirRelPt', 2, 0), # d, p - - (0x63, 'MUL', 0, 'Multiply', 2, 1), # n2, n1 (n1 * n2)/64 - (0x65, 'NEG', 0, 'Negate', 1, 1), # n -n - (0x55, 'NEQ', 0, 'NotEqual', 2, 1), # e2, e1 b - (0x5c, 'NOT', 0, 'LogicalNot', 1, 1), # e ( not e ) - (0x6c, 'NROUND', 2, 'NoRound', 1, 1), # n1 n2 - (0x56, 'ODD', 0, 'Odd', 1, 1), # e b - (0x5b, 'OR', 0, 'LogicalOr', 2, 1), # e2, e1 b - (0x21, 'POP', 0, 'PopTopStack', 1, 0), # e - - (0x45, 'RCVT', 0, 'ReadCVT', 1, 1), # location value - (0x7d, 'RDTG', 0, 'RoundDownToGrid', 0, 0), # - - - (0x7a, 'ROFF', 0, 'RoundOff', 0, 0), # - - - (0x8a, 'ROLL', 0, 'RollTopThreeStack', 3, 3), # a,b,c b,a,c - (0x68, 'ROUND', 2, 'Round', 1, 1), # n1 n2 - (0x43, 'RS', 0, 'ReadStore', 1, 1), # n v - (0x3d, 'RTDG', 0, 'RoundToDoubleGrid', 0, 0), # - - - (0x18, 'RTG', 0, 'RoundToGrid', 0, 0), # - - - (0x19, 'RTHG', 0, 'RoundToHalfGrid', 0, 0), # - - - (0x7c, 'RUTG', 0, 'RoundUpToGrid', 0, 0), # - - - (0x77, 'S45ROUND', 0, 'SuperRound45Degrees', 1, 0), # n - - (0x7e, 'SANGW', 0, 'SetAngleWeight', 1, 0), # weight - - (0x85, 'SCANCTRL', 0, 'ScanConversionControl', 1, 0), # n - - (0x8d, 'SCANTYPE', 0, 'ScanType', 1, 0), # n - - (0x48, 'SCFS', 0, 'SetCoordFromStackFP', 2, 0), # c, p - - (0x1d, 'SCVTCI', 0, 'SetCVTCutIn', 1, 0), # n - - (0x5e, 'SDB', 0, 'SetDeltaBaseInGState', 1, 0), # n - - (0x86, 'SDPVTL', 1, 'SetDualPVectorToLine', 2, 0), # p2, p1 - - (0x5f, 'SDS', 0, 'SetDeltaShiftInGState', 1, 0), # n - - (0x0b, 'SFVFS', 0, 'SetFVectorFromStack', 2, 0), # y, x - - (0x04, 'SFVTCA', 1, 'SetFVectorToAxis', 0, 0), # - - - (0x08, 'SFVTL', 1, 'SetFVectorToLine', 2, 0), # p2, p1 - - (0x0e, 'SFVTPV', 0, 'SetFVectorToPVector', 0, 0), # - - - (0x34, 'SHC', 1, 'ShiftContourByLastPt', 1, 0), # c - - (0x32, 'SHP', 1, 'ShiftPointByLastPoint', -1, 0), # p1, p2, ..., ploopvalue - - (0x38, 'SHPIX', 0, 'ShiftZoneByPixel', -1, 0), # d, p1, p2, ..., ploopvalue - - (0x36, 'SHZ', 1, 'ShiftZoneByLastPoint', 1, 0), # e - - (0x17, 'SLOOP', 0, 'SetLoopVariable', 1, 0), # n - - (0x1a, 'SMD', 0, 'SetMinimumDistance', 1, 0), # distance - - (0x0a, 'SPVFS', 0, 'SetPVectorFromStack', 2, 0), # y, x - - (0x02, 'SPVTCA', 1, 'SetPVectorToAxis', 0, 0), # - - - (0x06, 'SPVTL', 1, 'SetPVectorToLine', 2, 0), # p2, p1 - - (0x76, 'SROUND', 0, 'SuperRound', 1, 0), # n - - (0x10, 'SRP0', 0, 'SetRefPoint0', 1, 0), # p - - (0x11, 'SRP1', 0, 'SetRefPoint1', 1, 0), # p - - (0x12, 'SRP2', 0, 'SetRefPoint2', 1, 0), # p - - (0x1f, 'SSW', 0, 'SetSingleWidth', 1, 0), # n - - 
(0x1e, 'SSWCI', 0, 'SetSingleWidthCutIn', 1, 0), # n - - (0x61, 'SUB', 0, 'Subtract', 2, 1), # n2, n1 (n1 - n2) - (0x00, 'SVTCA', 1, 'SetFPVectorToAxis', 0, 0), # - - - (0x23, 'SWAP', 0, 'SwapTopStack', 2, 2), # e2, e1 e1, e2 - (0x13, 'SZP0', 0, 'SetZonePointer0', 1, 0), # n - - (0x14, 'SZP1', 0, 'SetZonePointer1', 1, 0), # n - - (0x15, 'SZP2', 0, 'SetZonePointer2', 1, 0), # n - - (0x16, 'SZPS', 0, 'SetZonePointerS', 1, 0), # n - - (0x29, 'UTP', 0, 'UnTouchPt', 1, 0), # p - - (0x70, 'WCVTF', 0, 'WriteCVTInFUnits', 2, 0), # n, l - - (0x44, 'WCVTP', 0, 'WriteCVTInPixels', 2, 0), # v, l - - (0x42, 'WS', 0, 'WriteStore', 2, 0), # v, l - -] - -# fmt: on - - -def bitRepr(value, bits): - s = "" - for i in range(bits): - s = "01"[value & 0x1] + s - value = value >> 1 - return s - - -_mnemonicPat = re.compile(r"[A-Z][A-Z0-9]*$") - - -def _makeDict(instructionList): - opcodeDict = {} - mnemonicDict = {} - for op, mnemonic, argBits, name, pops, pushes in instructionList: - assert _mnemonicPat.match(mnemonic) - mnemonicDict[mnemonic] = op, argBits, name - if argBits: - argoffset = op - for i in range(1 << argBits): - opcodeDict[op + i] = mnemonic, argBits, argoffset, name - else: - opcodeDict[op] = mnemonic, 0, 0, name - return opcodeDict, mnemonicDict - - -streamOpcodeDict, streamMnemonicDict = _makeDict(streamInstructions) -opcodeDict, mnemonicDict = _makeDict(instructions) - - -class tt_instructions_error(Exception): - def __init__(self, error): - self.error = error - - def __str__(self): - return "TT instructions error: %s" % repr(self.error) - - -_comment = r"/\*.*?\*/" -_instruction = r"([A-Z][A-Z0-9]*)\s*\[(.*?)\]" -_number = r"-?[0-9]+" -_token = "(%s)|(%s)|(%s)" % (_instruction, _number, _comment) - -_tokenRE = re.compile(_token) -_whiteRE = re.compile(r"\s*") - -_pushCountPat = re.compile(r"[A-Z][A-Z0-9]*\s*\[.*?\]\s*/\* ([0-9]+).*?\*/") - -_indentRE = re.compile(r"^FDEF|IF|ELSE\[ \]\t.+") -_unindentRE = re.compile(r"^ELSE|ENDF|EIF\[ \]\t.+") - - -def _skipWhite(data, pos): - m = _whiteRE.match(data, pos) - newPos = m.regs[0][1] - assert newPos >= pos - return newPos - - -class Program(object): - def __init__(self) -> None: - pass - - def fromBytecode(self, bytecode: bytes) -> None: - self.bytecode = array.array("B", bytecode) - if hasattr(self, "assembly"): - del self.assembly - - def fromAssembly(self, assembly: List[str] | str) -> None: - if isinstance(assembly, list): - self.assembly = assembly - elif isinstance(assembly, str): - self.assembly = assembly.splitlines() - else: - raise TypeError(f"expected str or List[str], got {type(assembly).__name__}") - if hasattr(self, "bytecode"): - del self.bytecode - - def getBytecode(self) -> bytes: - if not hasattr(self, "bytecode"): - self._assemble() - return self.bytecode.tobytes() - - def getAssembly(self, preserve=True) -> List[str]: - if not hasattr(self, "assembly"): - self._disassemble(preserve=preserve) - return self.assembly - - def toXML(self, writer, ttFont) -> None: - if ( - not hasattr(ttFont, "disassembleInstructions") - or ttFont.disassembleInstructions - ): - try: - assembly = self.getAssembly() - except: - import traceback - - tmp = StringIO() - traceback.print_exc(file=tmp) - msg = "An exception occurred during the decompilation of glyph program:\n\n" - msg += tmp.getvalue() - log.error(msg) - writer.begintag("bytecode") - writer.newline() - writer.comment(msg.strip()) - writer.newline() - writer.dumphex(self.getBytecode()) - writer.endtag("bytecode") - writer.newline() - else: - if not assembly: - return - 
writer.begintag("assembly") - writer.newline() - i = 0 - indent = 0 - nInstr = len(assembly) - while i < nInstr: - instr = assembly[i] - if _unindentRE.match(instr): - indent -= 1 - writer.write(writer.indentwhite * indent) - writer.write(instr) - writer.newline() - m = _pushCountPat.match(instr) - i = i + 1 - if m: - nValues = int(m.group(1)) - line: List[str] = [] - j = 0 - for j in range(nValues): - if j and not (j % 25): - writer.write(writer.indentwhite * indent) - writer.write(" ".join(line)) - writer.newline() - line = [] - line.append(assembly[i + j]) - writer.write(writer.indentwhite * indent) - writer.write(" ".join(line)) - writer.newline() - i = i + j + 1 - if _indentRE.match(instr): - indent += 1 - writer.endtag("assembly") - writer.newline() - else: - bytecode = self.getBytecode() - if not bytecode: - return - writer.begintag("bytecode") - writer.newline() - writer.dumphex(bytecode) - writer.endtag("bytecode") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont) -> None: - if name == "assembly": - self.fromAssembly(strjoin(content)) - self._assemble() - del self.assembly - else: - assert name == "bytecode" - self.fromBytecode(readHex(content)) - - def _assemble(self) -> None: - assembly = " ".join(getattr(self, "assembly", [])) - bytecode: List[int] = [] - push = bytecode.append - lenAssembly = len(assembly) - pos = _skipWhite(assembly, 0) - while pos < lenAssembly: - m = _tokenRE.match(assembly, pos) - if m is None: - raise tt_instructions_error( - "Syntax error in TT program (%s)" % assembly[pos - 5 : pos + 15] - ) - dummy, mnemonic, arg, number, comment = m.groups() - pos = m.regs[0][1] - if comment: - pos = _skipWhite(assembly, pos) - continue - - arg = arg.strip() - if mnemonic.startswith("INSTR"): - # Unknown instruction - op = int(mnemonic[5:]) - push(op) - elif mnemonic not in ("PUSH", "NPUSHB", "NPUSHW", "PUSHB", "PUSHW"): - op, argBits, name = mnemonicDict[mnemonic] - if len(arg) != argBits: - raise tt_instructions_error( - "Incorrect number of argument bits (%s[%s])" % (mnemonic, arg) - ) - if arg: - arg = binary2num(arg) - push(op + arg) - else: - push(op) - else: - args = [] - pos = _skipWhite(assembly, pos) - while pos < lenAssembly: - m = _tokenRE.match(assembly, pos) - if m is None: - raise tt_instructions_error( - "Syntax error in TT program (%s)" % assembly[pos : pos + 15] - ) - dummy, _mnemonic, arg, number, comment = m.groups() - if number is None and comment is None: - break - pos = m.regs[0][1] - pos = _skipWhite(assembly, pos) - if comment is not None: - continue - args.append(int(number)) - nArgs = len(args) - if mnemonic == "PUSH": - # Automatically choose the most compact representation - nWords = 0 - while nArgs: - while ( - nWords < nArgs - and nWords < 255 - and not (0 <= args[nWords] <= 255) - ): - nWords += 1 - nBytes = 0 - while ( - nWords + nBytes < nArgs - and nBytes < 255 - and 0 <= args[nWords + nBytes] <= 255 - ): - nBytes += 1 - if ( - nBytes < 2 - and nWords + nBytes < 255 - and nWords + nBytes != nArgs - ): - # Will write bytes as words - nWords += nBytes - continue - - # Write words - if nWords: - if nWords <= 8: - op, argBits, name = streamMnemonicDict["PUSHW"] - op = op + nWords - 1 - push(op) - else: - op, argBits, name = streamMnemonicDict["NPUSHW"] - push(op) - push(nWords) - for value in args[:nWords]: - assert -32768 <= value < 32768, ( - "PUSH value out of range %d" % value - ) - push((value >> 8) & 0xFF) - push(value & 0xFF) - - # Write bytes - if nBytes: - pass - if nBytes <= 8: - op, argBits, name = 
streamMnemonicDict["PUSHB"] - op = op + nBytes - 1 - push(op) - else: - op, argBits, name = streamMnemonicDict["NPUSHB"] - push(op) - push(nBytes) - for value in args[nWords : nWords + nBytes]: - push(value) - - nTotal = nWords + nBytes - args = args[nTotal:] - nArgs -= nTotal - nWords = 0 - else: - # Write exactly what we've been asked to - words = mnemonic[-1] == "W" - op, argBits, name = streamMnemonicDict[mnemonic] - if mnemonic[0] != "N": - assert nArgs <= 8, nArgs - op = op + nArgs - 1 - push(op) - else: - assert nArgs < 256 - push(op) - push(nArgs) - if words: - for value in args: - assert -32768 <= value < 32768, ( - "PUSHW value out of range %d" % value - ) - push((value >> 8) & 0xFF) - push(value & 0xFF) - else: - for value in args: - assert 0 <= value < 256, ( - "PUSHB value out of range %d" % value - ) - push(value) - - pos = _skipWhite(assembly, pos) - - if bytecode: - assert max(bytecode) < 256 and min(bytecode) >= 0 - self.bytecode = array.array("B", bytecode) - - def _disassemble(self, preserve=False) -> None: - assembly = [] - i = 0 - bytecode = getattr(self, "bytecode", []) - numBytecode = len(bytecode) - while i < numBytecode: - op = bytecode[i] - try: - mnemonic, argBits, argoffset, name = opcodeDict[op] - except KeyError: - if op in streamOpcodeDict: - values = [] - - # Merge consecutive PUSH operations - while bytecode[i] in streamOpcodeDict: - op = bytecode[i] - mnemonic, argBits, argoffset, name = streamOpcodeDict[op] - words = mnemonic[-1] == "W" - if argBits: - nValues = op - argoffset + 1 - else: - i = i + 1 - nValues = bytecode[i] - i = i + 1 - assert nValues > 0 - if not words: - for j in range(nValues): - value = bytecode[i] - values.append(repr(value)) - i = i + 1 - else: - for j in range(nValues): - # cast to signed int16 - value = (bytecode[i] << 8) | bytecode[i + 1] - if value >= 0x8000: - value = value - 0x10000 - values.append(repr(value)) - i = i + 2 - if preserve: - break - - if not preserve: - mnemonic = "PUSH" - nValues = len(values) - if nValues == 1: - assembly.append("%s[ ] /* 1 value pushed */" % mnemonic) - else: - assembly.append( - "%s[ ] /* %s values pushed */" % (mnemonic, nValues) - ) - assembly.extend(values) - else: - assembly.append("INSTR%d[ ]" % op) - i = i + 1 - else: - if argBits: - assembly.append( - mnemonic - + "[%s] /* %s */" % (num2binary(op - argoffset, argBits), name) - ) - else: - assembly.append(mnemonic + "[ ] /* %s */" % name) - i = i + 1 - self.assembly = assembly - - def __bool__(self) -> bool: - """ - >>> p = Program() - >>> bool(p) - False - >>> bc = array.array("B", [0]) - >>> p.fromBytecode(bc) - >>> bool(p) - True - >>> p.bytecode.pop() - 0 - >>> bool(p) - False - - >>> p = Program() - >>> asm = ['SVTCA[0]'] - >>> p.fromAssembly(asm) - >>> bool(p) - True - >>> p.assembly.pop() - 'SVTCA[0]' - >>> bool(p) - False - """ - return (hasattr(self, "assembly") and len(self.assembly) > 0) or ( - hasattr(self, "bytecode") and len(self.bytecode) > 0 - ) - - __nonzero__ = __bool__ - - def __eq__(self, other) -> bool: - if type(self) != type(other): - return NotImplemented - return self.__dict__ == other.__dict__ - - def __ne__(self, other) -> bool: - result = self.__eq__(other) - return result if result is NotImplemented else not result - - -def _test(): - """ - >>> _test() - True - """ - - bc = b"""@;:9876543210/.-,+*)(\'&%$#"! 
\037\036\035\034\033\032\031\030\027\026\025\024\023\022\021\020\017\016\015\014\013\012\011\010\007\006\005\004\003\002\001\000,\001\260\030CXEj\260\031C`\260F#D#\020 \260FN\360M/\260\000\022\033!#\0213Y-,\001\260\030CX\260\005+\260\000\023K\260\024PX\261\000@8Y\260\006+\033!#\0213Y-,\001\260\030CXN\260\003%\020\362!\260\000\022M\033 E\260\004%\260\004%#Jad\260(RX!#\020\326\033\260\003%\020\362!\260\000\022YY-,\260\032CX!!\033\260\002%\260\002%I\260\003%\260\003%Ja d\260\020PX!!!\033\260\003%\260\003%I\260\000PX\260\000PX\270\377\3428!\033\260\0208!Y\033\260\000RX\260\0368!\033\270\377\3608!YYYY-,\001\260\030CX\260\005+\260\000\023K\260\024PX\271\000\000\377\3008Y\260\006+\033!#\0213Y-,N\001\212\020\261F\031CD\260\000\024\261\000F\342\260\000\025\271\000\000\377\3608\000\260\000<\260(+\260\002%\020\260\000<-,\001\030\260\000/\260\001\024\362\260\001\023\260\001\025M\260\000\022-,\001\260\030CX\260\005+\260\000\023\271\000\000\377\3408\260\006+\033!#\0213Y-,\001\260\030CXEdj#Edi\260\031Cd``\260F#D#\020 \260F\360/\260\000\022\033!! \212 \212RX\0213\033!!YY-,\001\261\013\012C#Ce\012-,\000\261\012\013C#C\013-,\000\260F#p\261\001F>\001\260F#p\261\002FE:\261\002\000\010\015-,\260\022+\260\002%E\260\002%Ej\260@\213`\260\002%#D!!!-,\260\023+\260\002%E\260\002%Ej\270\377\300\214`\260\002%#D!!!-,\260\000\260\022+!!!-,\260\000\260\023+!!!-,\001\260\006C\260\007Ce\012-, i\260@a\260\000\213 \261,\300\212\214\270\020\000b`+\014d#da\\X\260\003aY-,\261\000\003%EhT\260\034KPZX\260\003%E\260\003%E`h \260\004%#D\260\004%#D\033\260\003% Eh \212#D\260\003%Eh`\260\003%#DY-,\260\003% Eh \212#D\260\003%Edhe`\260\004%\260\001`#D-,\260\011CX\207!\300\033\260\022CX\207E\260\021+\260G#D\260Gz\344\033\003\212E\030i \260G#D\212\212\207 \260\240QX\260\021+\260G#D\260Gz\344\033!\260Gz\344YYY\030-, \212E#Eh`D-,EjB-,\001\030/-,\001\260\030CX\260\004%\260\004%Id#Edi\260@\213a \260\200bj\260\002%\260\002%a\214\260\031C`\260F#D!\212\020\260F\366!\033!!!!Y-,\001\260\030CX\260\002%E\260\002%Ed`j\260\003%Eja \260\004%Ej \212\213e\260\004%#D\214\260\003%#D!!\033 EjD EjDY-,\001 E\260\000U\260\030CZXEh#Ei\260@\213a \260\200bj \212#a \260\003%\213e\260\004%#D\214\260\003%#D!!\033!!\260\031+Y-,\001\212\212Ed#EdadB-,\260\004%\260\004%\260\031+\260\030CX\260\004%\260\004%\260\003%\260\033+\001\260\002%C\260@T\260\002%C\260\000TZX\260\003% E\260@aDY\260\002%C\260\000T\260\002%C\260@TZX\260\004% E\260@`DYY!!!!-,\001KRXC\260\002%E#aD\033!!Y-,\001KRXC\260\002%E#`D\033!!Y-,KRXED\033!!Y-,\001 \260\003%#I\260@`\260 c \260\000RX#\260\002%8#\260\002%e8\000\212c8\033!!!!!Y\001-,KPXED\033!!Y-,\001\260\005%\020# \212\365\000\260\001`#\355\354-,\001\260\005%\020# \212\365\000\260\001a#\355\354-,\001\260\006%\020\365\000\355\354-,F#F`\212\212F# F\212`\212a\270\377\200b# \020#\212\261KK\212pE` \260\000PX\260\001a\270\377\272\213\033\260F\214Y\260\020`h\001:-, E\260\003%FRX\260\002%F ha\260\003%\260\003%?#!8\033!\021Y-, E\260\003%FPX\260\002%F ha\260\003%\260\003%?#!8\033!\021Y-,\000\260\007C\260\006C\013-,\212\020\354-,\260\014CX!\033 F\260\000RX\270\377\3608\033\260\0208YY-, \260\000UX\270\020\000c\260\003%Ed\260\003%Eda\260\000SX\260\002\033\260@a\260\003Y%EiSXED\033!!Y\033!\260\002%E\260\002%Ead\260(QXED\033!!YY-,!!\014d#d\213\270@\000b-,!\260\200QX\014d#d\213\270 \000b\033\262\000@/+Y\260\002`-,!\260\300QX\014d#d\213\270\025Ub\033\262\000\200/+Y\260\002`-,\014d#d\213\270@\000b`#!-,KSX\260\004%\260\004%Id#Edi\260@\213a \260\200bj\260\002%\260\002%a\214\260F#D!\212\020\260F\366!\033!\212\021#\022 
9/Y-,\260\002%\260\002%Id\260\300TX\270\377\3708\260\0108\033!!Y-,\260\023CX\003\033\002Y-,\260\023CX\002\033\003Y-,\260\012+#\020 <\260\027+-,\260\002%\270\377\3608\260(+\212\020# \320#\260\020+\260\005CX\300\033 pd.DataFrame | None: - """ - Parameters: - x: Dict with keys 'data': 2D array of str, numeric, or bool data, 'headers': list of strings for header names, 'range': optional two element list designating start of end of subrange. - Returns: - Dataframe of timeseries data - """ - if x is None: - return x - elif x.get("is_file"): - dataframe = pd.read_csv(x["name"]) - else: - dataframe = pd.DataFrame(data=x["data"], columns=x["headers"]) - if x.get("range") is not None: - dataframe = dataframe.loc[dataframe[self.x or 0] >= x["range"][0]] - dataframe = dataframe.loc[dataframe[self.x or 0] <= x["range"][1]] - return dataframe - - def postprocess(self, y: str | pd.DataFrame | None) -> dict | None: - """ - Parameters: - y: csv or dataframe with timeseries data - Returns: - JSON object with key 'headers' for list of header names, 'data' for 2D array of string or numeric data - """ - if y is None: - return None - if isinstance(y, str): - dataframe = pd.read_csv(y) - return { - "headers": dataframe.columns.values.tolist(), - "data": dataframe.values.tolist(), - } - if isinstance(y, pd.DataFrame): - return {"headers": y.columns.values.tolist(), "data": y.values.tolist()} - raise ValueError("Cannot process value as Timeseries data") - - def as_example(self, input_data: str | None) -> str: - return Path(input_data).name if input_data else "" diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/_multi_commits.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/_multi_commits.py deleted file mode 100644 index c41d2a36fc0971ad031e05d851e632b263f10e48..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/_multi_commits.py +++ /dev/null @@ -1,305 +0,0 @@ -# coding=utf-8 -# Copyright 2023-present, the HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Contains utilities to multi-commits (i.e. 
push changes iteratively on a PR).""" -import re -from dataclasses import dataclass, field -from hashlib import sha256 -from typing import TYPE_CHECKING, Iterable, List, Optional, Set, Tuple, Union - -from ._commit_api import CommitOperationAdd, CommitOperationDelete -from .community import DiscussionWithDetails -from .utils import experimental -from .utils._cache_manager import _format_size - - -if TYPE_CHECKING: - from .hf_api import HfApi - - -class MultiCommitException(Exception): - """Base exception for any exception happening while doing a multi-commit.""" - - -MULTI_COMMIT_PR_DESCRIPTION_TEMPLATE = """ -## {commit_message} - -{commit_description} - -**Multi commit ID:** {multi_commit_id} - -Scheduled commits: - -{multi_commit_strategy} - -_This is a PR opened using the `huggingface_hub` library in the context of a multi-commit. PR can be commented as a usual PR. However, please be aware that manually updating the PR description, changing the PR status, or pushing new commits, is not recommended as it might corrupt the commit process. Learn more about multi-commits [in this guide](https://huggingface.co/docs/huggingface_hub/main/guides/upload)._ -""" - -MULTI_COMMIT_PR_COMPLETION_COMMENT_TEMPLATE = """ -Multi-commit is now completed! You can ping the repo owner to review the changes. This PR can now be commented or modified without risking to corrupt it. - -_This is a comment posted using the `huggingface_hub` library in the context of a multi-commit. Learn more about multi-commits [in this guide](https://huggingface.co/docs/huggingface_hub/main/guides/upload)._ -""" - -MULTI_COMMIT_PR_CLOSING_COMMENT_TEMPLATE = """ -`create_pr=False` has been passed so PR is automatically merged. - -_This is a comment posted using the `huggingface_hub` library in the context of a multi-commit. Learn more about multi-commits [in this guide](https://huggingface.co/docs/huggingface_hub/main/guides/upload)._ -""" - -MULTI_COMMIT_PR_CLOSE_COMMENT_FAILURE_NO_CHANGES_TEMPLATE = """ -Cannot merge Pull Requests as no changes are associated. This PR will be closed automatically. - -_This is a comment posted using the `huggingface_hub` library in the context of a multi-commit. Learn more about multi-commits [in this guide](https://huggingface.co/docs/huggingface_hub/main/guides/upload)._ -""" - -MULTI_COMMIT_PR_CLOSE_COMMENT_FAILURE_BAD_REQUEST_TEMPLATE = """ -An error occurred while trying to merge the Pull Request: `{error_message}`. - -_This is a comment posted using the `huggingface_hub` library in the context of a multi-commit. Learn more about multi-commits [in this guide](https://huggingface.co/docs/huggingface_hub/main/guides/upload)._ -""" - - -STEP_ID_REGEX = re.compile(r"- \[(?P[ |x])\].*(?P[a-fA-F0-9]{64})", flags=re.MULTILINE) - - -@experimental -def plan_multi_commits( - operations: Iterable[Union[CommitOperationAdd, CommitOperationDelete]], - max_operations_per_commit: int = 50, - max_upload_size_per_commit: int = 2 * 1024 * 1024 * 1024, -) -> Tuple[List[List[CommitOperationAdd]], List[List[CommitOperationDelete]]]: - """Split a list of operations in a list of commits to perform. - - Implementation follows a sub-optimal (yet simple) algorithm: - 1. Delete operations are grouped together by commits of maximum `max_operations_per_commits` operations. - 2. All additions exceeding `max_upload_size_per_commit` are committed 1 by 1. - 3. All remaining additions are grouped together and split each time the `max_operations_per_commit` or the - `max_upload_size_per_commit` limit is reached. 
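    A worked illustration (the sizes and limits here are hypothetical, chosen
    only to show the splitting): with max_operations_per_commit=2 and
    max_upload_size_per_commit=100, additions of sizes [40, 150, 30, 30, 30]
    are planned as [150] (committed alone by step 2), then [40, 30] (flushed
    when the operation count reaches 2), then [30, 30].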
- - We do not try to optimize the splitting to get the lowest number of commits as this is a NP-hard problem (see - [bin packing problem](https://en.wikipedia.org/wiki/Bin_packing_problem)). For our use case, it is not problematic - to use a sub-optimal solution so we favored an easy-to-explain implementation. - - Args: - operations (`List` of [`~hf_api.CommitOperation`]): - The list of operations to split into commits. - max_operations_per_commit (`int`): - Maximum number of operations in a single commit. Defaults to 50. - max_upload_size_per_commit (`int`): - Maximum size to upload (in bytes) in a single commit. Defaults to 2GB. Files bigger than this limit are - uploaded, 1 per commit. - - Returns: - `Tuple[List[List[CommitOperationAdd]], List[List[CommitOperationDelete]]]`: a tuple. First item is a list of - lists of [`CommitOperationAdd`] representing the addition commits to push. The second item is a list of lists - of [`CommitOperationDelete`] representing the deletion commits. - - - - `plan_multi_commits` is experimental. Its API and behavior is subject to change in the future without prior notice. - - - - Example: - ```python - >>> from huggingface_hub import HfApi, plan_multi_commits - >>> addition_commits, deletion_commits = plan_multi_commits( - ... operations=[ - ... CommitOperationAdd(...), - ... CommitOperationAdd(...), - ... CommitOperationDelete(...), - ... CommitOperationDelete(...), - ... CommitOperationAdd(...), - ... ], - ... ) - >>> HfApi().create_commits_on_pr( - ... repo_id="my-cool-model", - ... addition_commits=addition_commits, - ... deletion_commits=deletion_commits, - ... (...) - ... verbose=True, - ... ) - ``` - - - - The initial order of the operations is not guaranteed! All deletions will be performed before additions. If you are - not updating multiple times the same file, you are fine. - - - """ - addition_commits: List[List[CommitOperationAdd]] = [] - deletion_commits: List[List[CommitOperationDelete]] = [] - - additions: List[CommitOperationAdd] = [] - additions_size = 0 - deletions: List[CommitOperationDelete] = [] - for op in operations: - if isinstance(op, CommitOperationDelete): - # Group delete operations together - deletions.append(op) - if len(deletions) >= max_operations_per_commit: - deletion_commits.append(deletions) - deletions = [] - - elif op.upload_info.size >= max_upload_size_per_commit: - # Upload huge files 1 by 1 - addition_commits.append([op]) - - elif additions_size + op.upload_info.size < max_upload_size_per_commit: - # Group other additions and split if size limit is reached (either max_nb_files or max_upload_size) - additions.append(op) - additions_size += op.upload_info.size - - else: - addition_commits.append(additions) - additions = [op] - additions_size = op.upload_info.size - - if len(additions) >= max_operations_per_commit: - addition_commits.append(additions) - additions = [] - additions_size = 0 - - if len(additions) > 0: - addition_commits.append(additions) - if len(deletions) > 0: - deletion_commits.append(deletions) - - return addition_commits, deletion_commits - - -@dataclass -class MultiCommitStep: - """Dataclass containing a list of CommitOperation to commit at once. - - A [`MultiCommitStep`] is one atomic part of a [`MultiCommitStrategy`]. Each step is identified by its own - deterministic ID based on the list of commit operations (hexadecimal sha256). ID is persistent between re-runs if - the list of commits is kept the same. 
- """ - - operations: List[Union[CommitOperationAdd, CommitOperationDelete]] - - id: str = field(init=False) - completed: bool = False - - def __post_init__(self) -> None: - if len(self.operations) == 0: - raise ValueError("A MultiCommitStep must have at least 1 commit operation, got 0.") - - # Generate commit id - sha = sha256() - for op in self.operations: - if isinstance(op, CommitOperationAdd): - sha.update(b"ADD") - sha.update(op.path_in_repo.encode()) - sha.update(op.upload_info.sha256) - elif isinstance(op, CommitOperationDelete): - sha.update(b"DELETE") - sha.update(op.path_in_repo.encode()) - sha.update(str(op.is_folder).encode()) - else: - NotImplementedError() - self.id = sha.hexdigest() - - def __str__(self) -> str: - """Format a step for PR description. - - Formatting can be changed in the future as long as it is single line, starts with `- [ ]`/`- [x]` and contains - `self.id`. Must be able to match `STEP_ID_REGEX`. - """ - additions = [op for op in self.operations if isinstance(op, CommitOperationAdd)] - file_deletions = [op for op in self.operations if isinstance(op, CommitOperationDelete) and not op.is_folder] - folder_deletions = [op for op in self.operations if isinstance(op, CommitOperationDelete) and op.is_folder] - if len(additions) > 0: - return ( - f"- [{'x' if self.completed else ' '}] Upload {len(additions)} file(s) " - f"totalling {_format_size(sum(add.upload_info.size for add in additions))}" - f" ({self.id})" - ) - else: - return ( - f"- [{'x' if self.completed else ' '}] Delete {len(file_deletions)} file(s) and" - f" {len(folder_deletions)} folder(s) ({self.id})" - ) - - -@dataclass -class MultiCommitStrategy: - """Dataclass containing a list of [`MultiCommitStep`] to commit iteratively. - - A strategy is identified by its own deterministic ID based on the list of its steps (hexadecimal sha256). ID is - persistent between re-runs if the list of commits is kept the same. - """ - - addition_commits: List[MultiCommitStep] - deletion_commits: List[MultiCommitStep] - - id: str = field(init=False) - all_steps: Set[str] = field(init=False) - - def __post_init__(self) -> None: - self.all_steps = {step.id for step in self.addition_commits + self.deletion_commits} - if len(self.all_steps) < len(self.addition_commits) + len(self.deletion_commits): - raise ValueError("Got duplicate commits in MultiCommitStrategy. 
All commits must be unique.") - - if len(self.all_steps) == 0: - raise ValueError("A MultiCommitStrategy must have at least 1 commit, got 0.") - - # Generate strategy id - sha = sha256() - for step in self.addition_commits + self.deletion_commits: - sha.update("new step".encode()) - sha.update(step.id.encode()) - self.id = sha.hexdigest() - - -def multi_commit_create_pull_request( - api: "HfApi", - repo_id: str, - commit_message: str, - commit_description: Optional[str], - strategy: MultiCommitStrategy, - token: Optional[str], - repo_type: Optional[str], -) -> DiscussionWithDetails: - return api.create_pull_request( - repo_id=repo_id, - title=f"[WIP] {commit_message} (multi-commit {strategy.id})", - description=multi_commit_generate_comment( - commit_message=commit_message, commit_description=commit_description, strategy=strategy - ), - token=token, - repo_type=repo_type, - ) - - -def multi_commit_generate_comment( - commit_message: str, - commit_description: Optional[str], - strategy: MultiCommitStrategy, -) -> str: - return MULTI_COMMIT_PR_DESCRIPTION_TEMPLATE.format( - commit_message=commit_message, - commit_description=commit_description or "", - multi_commit_id=strategy.id, - multi_commit_strategy="\n".join( - str(commit) for commit in strategy.deletion_commits + strategy.addition_commits - ), - ) - - -def multi_commit_parse_pr_description(description: str) -> Set[str]: - return {match[1] for match in STEP_ID_REGEX.findall(description)} diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/utils/sha.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/utils/sha.py deleted file mode 100644 index 157ccb0379eb1c80389d8e06135f305d11889caf..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/utils/sha.py +++ /dev/null @@ -1,27 +0,0 @@ -"""Utilities to efficiently compute the SHA 256 hash of a bunch of bytes.""" -from hashlib import sha256 -from typing import BinaryIO, Optional - - -def sha_fileobj(fileobj: BinaryIO, chunk_size: Optional[int] = None) -> bytes: - """ - Computes the sha256 hash of the given file object, by chunks of size `chunk_size`. - - Args: - fileobj (file-like object): - The File object to compute sha256 for, typically obtained with `open(path, "rb")` - chunk_size (`int`, *optional*): - The number of bytes to read from `fileobj` at once, defaults to 1MB. - - Returns: - `bytes`: `fileobj`'s sha256 hash as bytes - """ - chunk_size = chunk_size if chunk_size is not None else 1024 * 1024 - - sha = sha256() - while True: - chunk = fileobj.read(chunk_size) - sha.update(chunk) - if not chunk: - break - return sha.digest() diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/tree.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/tree.py deleted file mode 100644 index 6641e5a44654c9414cff07b6abbc633de7108ecb..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/tree.py +++ /dev/null @@ -1,345 +0,0 @@ -"""A tree representation of a linear markdown-it token stream. - -This module is not part of upstream JavaScript markdown-it. 
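Example (a minimal sketch; the Markdown snippet is an arbitrary illustration):

    >>> from markdown_it import MarkdownIt
    >>> from markdown_it.tree import SyntaxTreeNode
    >>> tokens = MarkdownIt().parse("# Heading")
    >>> node = SyntaxTreeNode(tokens)
    >>> [child.type for child in node.children]
    ['heading']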
-""" -from __future__ import annotations - -from collections.abc import Generator, Sequence -import textwrap -from typing import Any, NamedTuple, TypeVar, overload - -from .token import Token - - -class _NesterTokens(NamedTuple): - opening: Token - closing: Token - - -_NodeType = TypeVar("_NodeType", bound="SyntaxTreeNode") - - -class SyntaxTreeNode: - """A Markdown syntax tree node. - - A class that can be used to construct a tree representation of a linear - `markdown-it-py` token stream. - - Each node in the tree represents either: - - root of the Markdown document - - a single unnested `Token` - - a `Token` "_open" and "_close" token pair, and the tokens nested in - between - """ - - def __init__( - self, tokens: Sequence[Token] = (), *, create_root: bool = True - ) -> None: - """Initialize a `SyntaxTreeNode` from a token stream. - - If `create_root` is True, create a root node for the document. - """ - # Only nodes representing an unnested token have self.token - self.token: Token | None = None - - # Only containers have nester tokens - self.nester_tokens: _NesterTokens | None = None - - # Root node does not have self.parent - self._parent: Any = None - - # Empty list unless a non-empty container, or unnested token that has - # children (i.e. inline or img) - self._children: list[Any] = [] - - if create_root: - self._set_children_from_tokens(tokens) - return - - if not tokens: - raise ValueError( - "Can only create root from empty token sequence." - " Set `create_root=True`." - ) - elif len(tokens) == 1: - inline_token = tokens[0] - if inline_token.nesting: - raise ValueError( - "Unequal nesting level at the start and end of token stream." - ) - self.token = inline_token - if inline_token.children: - self._set_children_from_tokens(inline_token.children) - else: - self.nester_tokens = _NesterTokens(tokens[0], tokens[-1]) - self._set_children_from_tokens(tokens[1:-1]) - - def __repr__(self) -> str: - return f"{type(self).__name__}({self.type})" - - @overload - def __getitem__(self: _NodeType, item: int) -> _NodeType: - ... - - @overload - def __getitem__(self: _NodeType, item: slice) -> list[_NodeType]: - ... - - def __getitem__(self: _NodeType, item: int | slice) -> _NodeType | list[_NodeType]: - return self.children[item] - - def to_tokens(self: _NodeType) -> list[Token]: - """Recover the linear token stream.""" - - def recursive_collect_tokens(node: _NodeType, token_list: list[Token]) -> None: - if node.type == "root": - for child in node.children: - recursive_collect_tokens(child, token_list) - elif node.token: - token_list.append(node.token) - else: - assert node.nester_tokens - token_list.append(node.nester_tokens.opening) - for child in node.children: - recursive_collect_tokens(child, token_list) - token_list.append(node.nester_tokens.closing) - - tokens: list[Token] = [] - recursive_collect_tokens(self, tokens) - return tokens - - @property - def children(self: _NodeType) -> list[_NodeType]: - return self._children - - @children.setter - def children(self: _NodeType, value: list[_NodeType]) -> None: - self._children = value - - @property - def parent(self: _NodeType) -> _NodeType | None: - return self._parent # type: ignore - - @parent.setter - def parent(self: _NodeType, value: _NodeType | None) -> None: - self._parent = value - - @property - def is_root(self) -> bool: - """Is the node a special root node?""" - return not (self.token or self.nester_tokens) - - @property - def is_nested(self) -> bool: - """Is this node nested?. 
- - Returns `True` if the node represents a `Token` pair and tokens in the - sequence between them, where `Token.nesting` of the first `Token` in - the pair is 1 and nesting of the other `Token` is -1. - """ - return bool(self.nester_tokens) - - @property - def siblings(self: _NodeType) -> Sequence[_NodeType]: - """Get siblings of the node. - - Gets the whole group of siblings, including self. - """ - if not self.parent: - return [self] - return self.parent.children - - @property - def type(self) -> str: - """Get a string type of the represented syntax. - - - "root" for root nodes - - `Token.type` if the node represents an unnested token - - `Token.type` of the opening token, with "_open" suffix stripped, if - the node represents a nester token pair - """ - if self.is_root: - return "root" - if self.token: - return self.token.type - assert self.nester_tokens - return _removesuffix(self.nester_tokens.opening.type, "_open") - - @property - def next_sibling(self: _NodeType) -> _NodeType | None: - """Get the next node in the sequence of siblings. - - Returns `None` if this is the last sibling. - """ - self_index = self.siblings.index(self) - if self_index + 1 < len(self.siblings): - return self.siblings[self_index + 1] - return None - - @property - def previous_sibling(self: _NodeType) -> _NodeType | None: - """Get the previous node in the sequence of siblings. - - Returns `None` if this is the first sibling. - """ - self_index = self.siblings.index(self) - if self_index - 1 >= 0: - return self.siblings[self_index - 1] - return None - - def _add_child( - self, - tokens: Sequence[Token], - ) -> None: - """Make a child node for `self`.""" - child = type(self)(tokens, create_root=False) - child.parent = self - self.children.append(child) - - def _set_children_from_tokens(self, tokens: Sequence[Token]) -> None: - """Convert the token stream to a tree structure and set the resulting - nodes as children of `self`.""" - reversed_tokens = list(reversed(tokens)) - while reversed_tokens: - token = reversed_tokens.pop() - - if not token.nesting: - self._add_child([token]) - continue - if token.nesting != 1: - raise ValueError("Invalid token nesting") - - nested_tokens = [token] - nesting = 1 - while reversed_tokens and nesting: - token = reversed_tokens.pop() - nested_tokens.append(token) - nesting += token.nesting - if nesting: - raise ValueError(f"unclosed tokens starting {nested_tokens[0]}") - - self._add_child(nested_tokens) - - def pretty( - self, *, indent: int = 2, show_text: bool = False, _current: int = 0 - ) -> str: - """Create an XML style string of the tree.""" - prefix = " " * _current - text = prefix + f"<{self.type}" - if not self.is_root and self.attrs: - text += " " + " ".join(f"{k}={v!r}" for k, v in self.attrs.items()) - text += ">" - if ( - show_text - and not self.is_root - and self.type in ("text", "text_special") - and self.content - ): - text += "\n" + textwrap.indent(self.content, prefix + " " * indent) - for child in self.children: - text += "\n" + child.pretty( - indent=indent, show_text=show_text, _current=_current + indent - ) - return text - - def walk( - self: _NodeType, *, include_self: bool = True - ) -> Generator[_NodeType, None, None]: - """Recursively yield all descendant nodes in the tree starting at self. - - The order mimics the order of the underlying linear token - stream (i.e. depth first). 
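        Example (illustrative; assumes the tree was built from the Markdown
        source "*hi*"):

            >>> [n.type for n in node.walk()]
            ['root', 'paragraph', 'inline', 'em', 'text']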
- """ - if include_self: - yield self - for child in self.children: - yield from child.walk(include_self=True) - - # NOTE: - # The values of the properties defined below directly map to properties - # of the underlying `Token`s. A root node does not translate to a `Token` - # object, so calling these property getters on a root node will raise an - # `AttributeError`. - # - # There is no mapping for `Token.nesting` because the `is_nested` property - # provides that data, and can be called on any node type, including root. - - def _attribute_token(self) -> Token: - """Return the `Token` that is used as the data source for the - properties defined below.""" - if self.token: - return self.token - if self.nester_tokens: - return self.nester_tokens.opening - raise AttributeError("Root node does not have the accessed attribute") - - @property - def tag(self) -> str: - """html tag name, e.g. \"p\" """ - return self._attribute_token().tag - - @property - def attrs(self) -> dict[str, str | int | float]: - """Html attributes.""" - return self._attribute_token().attrs - - def attrGet(self, name: str) -> None | str | int | float: - """Get the value of attribute `name`, or null if it does not exist.""" - return self._attribute_token().attrGet(name) - - @property - def map(self) -> tuple[int, int] | None: - """Source map info. Format: `tuple[ line_begin, line_end ]`""" - map_ = self._attribute_token().map - if map_: - # Type ignore because `Token`s attribute types are not perfect - return tuple(map_) # type: ignore - return None - - @property - def level(self) -> int: - """nesting level, the same as `state.level`""" - return self._attribute_token().level - - @property - def content(self) -> str: - """In a case of self-closing tag (code, html, fence, etc.), it - has contents of this tag.""" - return self._attribute_token().content - - @property - def markup(self) -> str: - """'*' or '_' for emphasis, fence string for fence, etc.""" - return self._attribute_token().markup - - @property - def info(self) -> str: - """fence infostring""" - return self._attribute_token().info - - @property - def meta(self) -> dict[Any, Any]: - """A place for plugins to store an arbitrary data.""" - return self._attribute_token().meta - - @property - def block(self) -> bool: - """True for block-level tokens, false for inline tokens.""" - return self._attribute_token().block - - @property - def hidden(self) -> bool: - """If it's true, ignore this element when rendering. - Used for tight lists to hide paragraphs.""" - return self._attribute_token().hidden - - -def _removesuffix(string: str, suffix: str) -> str: - """Remove a suffix from a string. - - Replace this with str.removesuffix() from stdlib when minimum Python - version is 3.9. 
- """ - if suffix and string.endswith(suffix): - return string[: -len(suffix)] - return string diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/axes/_axes.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/axes/_axes.py deleted file mode 100644 index dd09f988f43da3f627a96151dbc617be940bd4e9..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/axes/_axes.py +++ /dev/null @@ -1,8284 +0,0 @@ -import functools -import itertools -import logging -import math -from numbers import Integral, Number - -import numpy as np -from numpy import ma - -import matplotlib as mpl -import matplotlib.category # Register category unit converter as side effect. -import matplotlib.cbook as cbook -import matplotlib.collections as mcoll -import matplotlib.colors as mcolors -import matplotlib.contour as mcontour -import matplotlib.dates # noqa # Register date unit converter as side effect. -import matplotlib.image as mimage -import matplotlib.legend as mlegend -import matplotlib.lines as mlines -import matplotlib.markers as mmarkers -import matplotlib.mlab as mlab -import matplotlib.patches as mpatches -import matplotlib.path as mpath -import matplotlib.quiver as mquiver -import matplotlib.stackplot as mstack -import matplotlib.streamplot as mstream -import matplotlib.table as mtable -import matplotlib.text as mtext -import matplotlib.ticker as mticker -import matplotlib.transforms as mtransforms -import matplotlib.tri as mtri -import matplotlib.units as munits -from matplotlib import _api, _docstring, _preprocess_data -from matplotlib.axes._base import ( - _AxesBase, _TransformedBoundsLocator, _process_plot_format) -from matplotlib.axes._secondary_axes import SecondaryAxis -from matplotlib.container import BarContainer, ErrorbarContainer, StemContainer - -_log = logging.getLogger(__name__) - - -# The axes module contains all the wrappers to plotting functions. -# All the other methods should go in the _AxesBase class. - - -@_docstring.interpd -class Axes(_AxesBase): - """ - An Axes object encapsulates all the elements of an individual (sub-)plot in - a figure. - - It contains most of the (sub-)plot elements: `~.axis.Axis`, - `~.axis.Tick`, `~.lines.Line2D`, `~.text.Text`, `~.patches.Polygon`, etc., - and sets the coordinate system. - - Like all visible elements in a figure, Axes is an `.Artist` subclass. - - The `Axes` instance supports callbacks through a callbacks attribute which - is a `~.cbook.CallbackRegistry` instance. The events you can connect to - are 'xlim_changed' and 'ylim_changed' and the callback will be called with - func(*ax*) where *ax* is the `Axes` instance. - - .. note:: - - As a user, you do not instantiate Axes directly, but use Axes creation - methods instead; e.g. from `.pyplot` or `.Figure`: - `~.pyplot.subplots`, `~.pyplot.subplot_mosaic` or `.Figure.add_axes`. - - Attributes - ---------- - dataLim : `.Bbox` - The bounding box enclosing all data displayed in the Axes. - viewLim : `.Bbox` - The view limits in data coordinates. - - """ - ### Labelling, legend and texts - - def get_title(self, loc="center"): - """ - Get an Axes title. - - Get one of the three available Axes titles. The available titles - are positioned above the Axes in the center, flush with the left - edge, and flush with the right edge. - - Parameters - ---------- - loc : {'center', 'left', 'right'}, str, default: 'center' - Which title to return. 
- - Returns - ------- - str - The title text string. - - """ - titles = {'left': self._left_title, - 'center': self.title, - 'right': self._right_title} - title = _api.check_getitem(titles, loc=loc.lower()) - return title.get_text() - - def set_title(self, label, fontdict=None, loc=None, pad=None, *, y=None, - **kwargs): - """ - Set a title for the Axes. - - Set one of the three available Axes titles. The available titles - are positioned above the Axes in the center, flush with the left - edge, and flush with the right edge. - - Parameters - ---------- - label : str - Text to use for the title - - fontdict : dict - A dictionary controlling the appearance of the title text, - the default *fontdict* is:: - - {'fontsize': rcParams['axes.titlesize'], - 'fontweight': rcParams['axes.titleweight'], - 'color': rcParams['axes.titlecolor'], - 'verticalalignment': 'baseline', - 'horizontalalignment': loc} - - loc : {'center', 'left', 'right'}, default: :rc:`axes.titlelocation` - Which title to set. - - y : float, default: :rc:`axes.titley` - Vertical Axes location for the title (1.0 is the top). If - None (the default) and :rc:`axes.titley` is also None, y is - determined automatically to avoid decorators on the Axes. - - pad : float, default: :rc:`axes.titlepad` - The offset of the title from the top of the Axes, in points. - - Returns - ------- - `.Text` - The matplotlib text instance representing the title - - Other Parameters - ---------------- - **kwargs : `~matplotlib.text.Text` properties - Other keyword arguments are text properties, see `.Text` for a list - of valid text properties. - """ - if loc is None: - loc = mpl.rcParams['axes.titlelocation'] - - if y is None: - y = mpl.rcParams['axes.titley'] - if y is None: - y = 1.0 - else: - self._autotitlepos = False - kwargs['y'] = y - - titles = {'left': self._left_title, - 'center': self.title, - 'right': self._right_title} - title = _api.check_getitem(titles, loc=loc.lower()) - default = { - 'fontsize': mpl.rcParams['axes.titlesize'], - 'fontweight': mpl.rcParams['axes.titleweight'], - 'verticalalignment': 'baseline', - 'horizontalalignment': loc.lower()} - titlecolor = mpl.rcParams['axes.titlecolor'] - if not cbook._str_lower_equal(titlecolor, 'auto'): - default["color"] = titlecolor - if pad is None: - pad = mpl.rcParams['axes.titlepad'] - self._set_title_offset_trans(float(pad)) - title.set_text(label) - title.update(default) - if fontdict is not None: - title.update(fontdict) - title._internal_update(kwargs) - return title - - def get_legend_handles_labels(self, legend_handler_map=None): - """ - Return handles and labels for legend - - ``ax.legend()`` is equivalent to :: - - h, l = ax.get_legend_handles_labels() - ax.legend(h, l) - """ - # pass through to legend. - handles, labels = mlegend._get_legend_handles_labels( - [self], legend_handler_map) - return handles, labels - - @_docstring.dedent_interpd - def legend(self, *args, **kwargs): - """ - Place a legend on the Axes. - - Call signatures:: - - legend() - legend(handles, labels) - legend(handles=handles) - legend(labels) - - The call signatures correspond to the following different ways to use - this method: - - **1. Automatic detection of elements to be shown in the legend** - - The elements to be added to the legend are automatically determined, - when you do not pass in any extra arguments. - - In this case, the labels are taken from the artist. 
You can specify - them either at artist creation or by calling the - :meth:`~.Artist.set_label` method on the artist:: - - ax.plot([1, 2, 3], label='Inline label') - ax.legend() - - or:: - - line, = ax.plot([1, 2, 3]) - line.set_label('Label via method') - ax.legend() - - .. note:: - Specific artists can be excluded from the automatic legend element - selection by using a label starting with an underscore, "_". - A string starting with an underscore is the default label for all - artists, so calling `.Axes.legend` without any arguments and - without setting the labels manually will result in no legend being - drawn. - - - **2. Explicitly listing the artists and labels in the legend** - - For full control of which artists have a legend entry, it is possible - to pass an iterable of legend artists followed by an iterable of - legend labels respectively:: - - ax.legend([line1, line2, line3], ['label1', 'label2', 'label3']) - - - **3. Explicitly listing the artists in the legend** - - This is similar to 2, but the labels are taken from the artists' - label properties. Example:: - - line1, = ax.plot([1, 2, 3], label='label1') - line2, = ax.plot([1, 2, 3], label='label2') - ax.legend(handles=[line1, line2]) - - - **4. Labeling existing plot elements** - - .. admonition:: Discouraged - - This call signature is discouraged, because the relation between - plot elements and labels is only implicit by their order and can - easily be mixed up. - - To make a legend for all artists on an Axes, call this function with - an iterable of strings, one for each legend item. For example:: - - ax.plot([1, 2, 3]) - ax.plot([5, 6, 7]) - ax.legend(['First line', 'Second line']) - - - Parameters - ---------- - handles : sequence of `.Artist`, optional - A list of Artists (lines, patches) to be added to the legend. - Use this together with *labels*, if you need full control on what - is shown in the legend and the automatic mechanism described above - is not sufficient. - - The length of handles and labels should be the same in this - case. If they are not, they are truncated to the smaller length. - - labels : list of str, optional - A list of labels to show next to the artists. - Use this together with *handles*, if you need full control on what - is shown in the legend and the automatic mechanism described above - is not sufficient. - - Returns - ------- - `~matplotlib.legend.Legend` - - Other Parameters - ---------------- - %(_legend_kw_axes)s - - See Also - -------- - .Figure.legend - - Notes - ----- - Some artists are not supported by this function. See - :doc:`/tutorials/intermediate/legend_guide` for details. - - Examples - -------- - .. plot:: gallery/text_labels_and_annotations/legend.py - """ - handles, labels, extra_args, kwargs = mlegend._parse_legend_args( - [self], - *args, - **kwargs) - if len(extra_args): - raise TypeError('legend only accepts two non-keyword arguments') - self.legend_ = mlegend.Legend(self, handles, labels, **kwargs) - self.legend_._remove_method = self._remove_legend - return self.legend_ - - def _remove_legend(self, legend): - self.legend_ = None - - def inset_axes(self, bounds, *, transform=None, zorder=5, **kwargs): - """ - Add a child inset Axes to this existing Axes. - - Warnings - -------- - This method is experimental as of 3.0, and the API may change. - - Parameters - ---------- - bounds : [x0, y0, width, height] - Lower-left corner of inset Axes, and its width and height. - - transform : `.Transform` - Defaults to `ax.transAxes`, i.e. 
the units of *rect* are in - Axes-relative coordinates. - - projection : {None, 'aitoff', 'hammer', 'lambert', 'mollweide', \ -'polar', 'rectilinear', str}, optional - The projection type of the inset `~.axes.Axes`. *str* is the name - of a custom projection, see `~matplotlib.projections`. The default - None results in a 'rectilinear' projection. - - polar : bool, default: False - If True, equivalent to projection='polar'. - - axes_class : subclass type of `~.axes.Axes`, optional - The `.axes.Axes` subclass that is instantiated. This parameter - is incompatible with *projection* and *polar*. See - :ref:`axisartist_users-guide-index` for examples. - - zorder : number - Defaults to 5 (same as `.Axes.legend`). Adjust higher or lower - to change whether it is above or below data plotted on the - parent Axes. - - **kwargs - Other keyword arguments are passed on to the inset Axes class. - - Returns - ------- - ax - The created `~.axes.Axes` instance. - - Examples - -------- - This example makes two inset Axes, the first is in Axes-relative - coordinates, and the second in data-coordinates:: - - fig, ax = plt.subplots() - ax.plot(range(10)) - axin1 = ax.inset_axes([0.8, 0.1, 0.15, 0.15]) - axin2 = ax.inset_axes( - [5, 7, 2.3, 2.3], transform=ax.transData) - - """ - if transform is None: - transform = self.transAxes - kwargs.setdefault('label', 'inset_axes') - - # This puts the rectangle into figure-relative coordinates. - inset_locator = _TransformedBoundsLocator(bounds, transform) - bounds = inset_locator(self, None).bounds - projection_class, pkw = self.figure._process_projection_requirements( - bounds, **kwargs) - inset_ax = projection_class(self.figure, bounds, zorder=zorder, **pkw) - - # this locator lets the axes move if in data coordinates. - # it gets called in `ax.apply_aspect() (of all places) - inset_ax.set_axes_locator(inset_locator) - - self.add_child_axes(inset_ax) - - return inset_ax - - @_docstring.dedent_interpd - def indicate_inset(self, bounds, inset_ax=None, *, transform=None, - facecolor='none', edgecolor='0.5', alpha=0.5, - zorder=4.99, **kwargs): - """ - Add an inset indicator to the Axes. This is a rectangle on the plot - at the position indicated by *bounds* that optionally has lines that - connect the rectangle to an inset Axes (`.Axes.inset_axes`). - - Warnings - -------- - This method is experimental as of 3.0, and the API may change. - - Parameters - ---------- - bounds : [x0, y0, width, height] - Lower-left corner of rectangle to be marked, and its width - and height. - - inset_ax : `.Axes` - An optional inset Axes to draw connecting lines to. Two lines are - drawn connecting the indicator box to the inset Axes on corners - chosen so as to not overlap with the indicator box. - - transform : `.Transform` - Transform for the rectangle coordinates. Defaults to - `ax.transAxes`, i.e. the units of *rect* are in Axes-relative - coordinates. - - facecolor : color, default: 'none' - Facecolor of the rectangle. - - edgecolor : color, default: '0.5' - Color of the rectangle and color of the connecting lines. - - alpha : float, default: 0.5 - Transparency of the rectangle and connector lines. - - zorder : float, default: 4.99 - Drawing order of the rectangle and connector lines. The default, - 4.99, is just below the default level of inset Axes. - - **kwargs - Other keyword arguments are passed on to the `.Rectangle` patch: - - %(Rectangle:kwdoc)s - - Returns - ------- - rectangle_patch : `.patches.Rectangle` - The indicator frame. 
- - connector_lines : 4-tuple of `.patches.ConnectionPatch` - The four connector lines connecting to (lower_left, upper_left, - lower_right upper_right) corners of *inset_ax*. Two lines are - set with visibility to *False*, but the user can set the - visibility to True if the automatic choice is not deemed correct. - - """ - # to make the axes connectors work, we need to apply the aspect to - # the parent axes. - self.apply_aspect() - - if transform is None: - transform = self.transData - kwargs.setdefault('label', '_indicate_inset') - - x, y, width, height = bounds - rectangle_patch = mpatches.Rectangle( - (x, y), width, height, - facecolor=facecolor, edgecolor=edgecolor, alpha=alpha, - zorder=zorder, transform=transform, **kwargs) - self.add_patch(rectangle_patch) - - connects = [] - - if inset_ax is not None: - # connect the inset_axes to the rectangle - for xy_inset_ax in [(0, 0), (0, 1), (1, 0), (1, 1)]: - # inset_ax positions are in axes coordinates - # The 0, 1 values define the four edges if the inset_ax - # lower_left, upper_left, lower_right upper_right. - ex, ey = xy_inset_ax - if self.xaxis.get_inverted(): - ex = 1 - ex - if self.yaxis.get_inverted(): - ey = 1 - ey - xy_data = x + ex * width, y + ey * height - p = mpatches.ConnectionPatch( - xyA=xy_inset_ax, coordsA=inset_ax.transAxes, - xyB=xy_data, coordsB=self.transData, - arrowstyle="-", zorder=zorder, - edgecolor=edgecolor, alpha=alpha) - connects.append(p) - self.add_patch(p) - - # decide which two of the lines to keep visible.... - pos = inset_ax.get_position() - bboxins = pos.transformed(self.figure.transSubfigure) - rectbbox = mtransforms.Bbox.from_bounds( - *bounds - ).transformed(transform) - x0 = rectbbox.x0 < bboxins.x0 - x1 = rectbbox.x1 < bboxins.x1 - y0 = rectbbox.y0 < bboxins.y0 - y1 = rectbbox.y1 < bboxins.y1 - connects[0].set_visible(x0 ^ y0) - connects[1].set_visible(x0 == y1) - connects[2].set_visible(x1 == y0) - connects[3].set_visible(x1 ^ y1) - - return rectangle_patch, tuple(connects) if connects else None - - def indicate_inset_zoom(self, inset_ax, **kwargs): - """ - Add an inset indicator rectangle to the Axes based on the axis - limits for an *inset_ax* and draw connectors between *inset_ax* - and the rectangle. - - Warnings - -------- - This method is experimental as of 3.0, and the API may change. - - Parameters - ---------- - inset_ax : `.Axes` - Inset Axes to draw connecting lines to. Two lines are - drawn connecting the indicator box to the inset Axes on corners - chosen so as to not overlap with the indicator box. - - **kwargs - Other keyword arguments are passed on to `.Axes.indicate_inset` - - Returns - ------- - rectangle_patch : `.patches.Rectangle` - Rectangle artist. - - connector_lines : 4-tuple of `.patches.ConnectionPatch` - Each of four connector lines coming from the rectangle drawn on - this axis, in the order lower left, upper left, lower right, - upper right. - Two are set with visibility to *False*, but the user can - set the visibility to *True* if the automatic choice is not deemed - correct. - """ - - xlim = inset_ax.get_xlim() - ylim = inset_ax.get_ylim() - rect = (xlim[0], ylim[0], xlim[1] - xlim[0], ylim[1] - ylim[0]) - return self.indicate_inset(rect, inset_ax, **kwargs) - - @_docstring.dedent_interpd - def secondary_xaxis(self, location, *, functions=None, **kwargs): - """ - Add a second x-axis to this `~.axes.Axes`. - - For example if we want to have a second scale for the data plotted on - the xaxis. 
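-
- As a quick illustrative sketch (the Celsius/Fahrenheit pair below is
- an assumption for demonstration, not part of this method's defaults),
- *functions* is a (forward, inverse) pair mapping parent-axis data to
- secondary-axis data and back::
-
- def c_to_f(c): # forward: parent units -> secondary units
- return c * 9 / 5 + 32
-
- def f_to_c(f): # inverse: secondary units -> parent units
- return (f - 32) * 5 / 9
-
- secax = ax.secondary_xaxis('top', functions=(c_to_f, f_to_c))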
- - %(_secax_docstring)s - - Examples - -------- - The main axis shows frequency, and the secondary axis shows period. - - .. plot:: - - fig, ax = plt.subplots() - ax.loglog(range(1, 360, 5), range(1, 360, 5)) - ax.set_xlabel('frequency [Hz]') - - def invert(x): - # 1/x with special treatment of x == 0 - x = np.array(x).astype(float) - near_zero = np.isclose(x, 0) - x[near_zero] = np.inf - x[~near_zero] = 1 / x[~near_zero] - return x - - # the inverse of 1/x is itself - secax = ax.secondary_xaxis('top', functions=(invert, invert)) - secax.set_xlabel('Period [s]') - plt.show() - """ - if location in ['top', 'bottom'] or isinstance(location, Number): - secondary_ax = SecondaryAxis(self, 'x', location, functions, - **kwargs) - self.add_child_axes(secondary_ax) - return secondary_ax - else: - raise ValueError('secondary_xaxis location must be either ' - 'a float or "top"/"bottom"') - - @_docstring.dedent_interpd - def secondary_yaxis(self, location, *, functions=None, **kwargs): - """ - Add a second y-axis to this `~.axes.Axes`. - - For example if we want to have a second scale for the data plotted on - the yaxis. - - %(_secax_docstring)s - - Examples - -------- - Add a secondary Axes that converts from radians to degrees - - .. plot:: - - fig, ax = plt.subplots() - ax.plot(range(1, 360, 5), range(1, 360, 5)) - ax.set_ylabel('degrees') - secax = ax.secondary_yaxis('right', functions=(np.deg2rad, - np.rad2deg)) - secax.set_ylabel('radians') - """ - if location in ['left', 'right'] or isinstance(location, Number): - secondary_ax = SecondaryAxis(self, 'y', location, - functions, **kwargs) - self.add_child_axes(secondary_ax) - return secondary_ax - else: - raise ValueError('secondary_yaxis location must be either ' - 'a float or "left"/"right"') - - @_docstring.dedent_interpd - def text(self, x, y, s, fontdict=None, **kwargs): - """ - Add text to the Axes. - - Add the text *s* to the Axes at location *x*, *y* in data coordinates. - - Parameters - ---------- - x, y : float - The position to place the text. By default, this is in data - coordinates. The coordinate system can be changed using the - *transform* parameter. - - s : str - The text. - - fontdict : dict, default: None - A dictionary to override the default text properties. If fontdict - is None, the defaults are determined by `.rcParams`. - - Returns - ------- - `.Text` - The created `.Text` instance. - - Other Parameters - ---------------- - **kwargs : `~matplotlib.text.Text` properties. - Other miscellaneous text parameters. - - %(Text:kwdoc)s - - Examples - -------- - Individual keyword arguments can be used to override any given - parameter:: - - >>> text(x, y, s, fontsize=12) - - The default transform specifies that text is in data coords, - alternatively, you can specify text in axis coords ((0, 0) is - lower-left and (1, 1) is upper-right). The example below places - text in the center of the Axes:: - - >>> text(0.5, 0.5, 'matplotlib', horizontalalignment='center', - ... verticalalignment='center', transform=ax.transAxes) - - You can put a rectangular box around the text instance (e.g., to - set a background color) by using the keyword *bbox*. *bbox* is - a dictionary of `~matplotlib.patches.Rectangle` - properties. 
For example:: - - >>> text(x, y, s, bbox=dict(facecolor='red', alpha=0.5)) - """ - effective_kwargs = { - 'verticalalignment': 'baseline', - 'horizontalalignment': 'left', - 'transform': self.transData, - 'clip_on': False, - **(fontdict if fontdict is not None else {}), - **kwargs, - } - t = mtext.Text(x, y, text=s, **effective_kwargs) - t.set_clip_path(self.patch) - self._add_text(t) - return t - - @_docstring.dedent_interpd - def annotate(self, text, xy, xytext=None, xycoords='data', textcoords=None, - arrowprops=None, annotation_clip=None, **kwargs): - # Signature must match Annotation. This is verified in - # test_annotate_signature(). - a = mtext.Annotation(text, xy, xytext=xytext, xycoords=xycoords, - textcoords=textcoords, arrowprops=arrowprops, - annotation_clip=annotation_clip, **kwargs) - a.set_transform(mtransforms.IdentityTransform()) - if 'clip_on' in kwargs: - a.set_clip_path(self.patch) - self._add_text(a) - return a - annotate.__doc__ = mtext.Annotation.__init__.__doc__ - #### Lines and spans - - @_docstring.dedent_interpd - def axhline(self, y=0, xmin=0, xmax=1, **kwargs): - """ - Add a horizontal line across the Axes. - - Parameters - ---------- - y : float, default: 0 - y position in data coordinates of the horizontal line. - - xmin : float, default: 0 - Should be between 0 and 1, 0 being the far left of the plot, 1 the - far right of the plot. - - xmax : float, default: 1 - Should be between 0 and 1, 0 being the far left of the plot, 1 the - far right of the plot. - - Returns - ------- - `~matplotlib.lines.Line2D` - - Other Parameters - ---------------- - **kwargs - Valid keyword arguments are `.Line2D` properties, except for - 'transform': - - %(Line2D:kwdoc)s - - See Also - -------- - hlines : Add horizontal lines in data coordinates. - axhspan : Add a horizontal span (rectangle) across the axis. - axline : Add a line with an arbitrary slope. - - Examples - -------- - * draw a thick red hline at 'y' = 0 that spans the xrange:: - - >>> axhline(linewidth=4, color='r') - - * draw a default hline at 'y' = 1 that spans the xrange:: - - >>> axhline(y=1) - - * draw a default hline at 'y' = .5 that spans the middle half of - the xrange:: - - >>> axhline(y=.5, xmin=0.25, xmax=0.75) - """ - self._check_no_units([xmin, xmax], ['xmin', 'xmax']) - if "transform" in kwargs: - raise ValueError("'transform' is not allowed as a keyword " - "argument; axhline generates its own transform.") - ymin, ymax = self.get_ybound() - - # Strip away the units for comparison with non-unitized bounds. - yy, = self._process_unit_info([("y", y)], kwargs) - scaley = (yy < ymin) or (yy > ymax) - - trans = self.get_yaxis_transform(which='grid') - l = mlines.Line2D([xmin, xmax], [y, y], transform=trans, **kwargs) - self.add_line(l) - if scaley: - self._request_autoscale_view("y") - return l - - @_docstring.dedent_interpd - def axvline(self, x=0, ymin=0, ymax=1, **kwargs): - """ - Add a vertical line across the Axes. - - Parameters - ---------- - x : float, default: 0 - x position in data coordinates of the vertical line. - - ymin : float, default: 0 - Should be between 0 and 1, 0 being the bottom of the plot, 1 the - top of the plot. - - ymax : float, default: 1 - Should be between 0 and 1, 0 being the bottom of the plot, 1 the - top of the plot. 
- - Returns - ------- - `~matplotlib.lines.Line2D` - - Other Parameters - ---------------- - **kwargs - Valid keyword arguments are `.Line2D` properties, except for - 'transform': - - %(Line2D:kwdoc)s - - See Also - -------- - vlines : Add vertical lines in data coordinates. - axvspan : Add a vertical span (rectangle) across the axis. - axline : Add a line with an arbitrary slope. - - Examples - -------- - * draw a thick red vline at *x* = 0 that spans the yrange:: - - >>> axvline(linewidth=4, color='r') - - * draw a default vline at *x* = 1 that spans the yrange:: - - >>> axvline(x=1) - - * draw a default vline at *x* = .5 that spans the middle half of - the yrange:: - - >>> axvline(x=.5, ymin=0.25, ymax=0.75) - """ - self._check_no_units([ymin, ymax], ['ymin', 'ymax']) - if "transform" in kwargs: - raise ValueError("'transform' is not allowed as a keyword " - "argument; axvline generates its own transform.") - xmin, xmax = self.get_xbound() - - # Strip away the units for comparison with non-unitized bounds. - xx, = self._process_unit_info([("x", x)], kwargs) - scalex = (xx < xmin) or (xx > xmax) - - trans = self.get_xaxis_transform(which='grid') - l = mlines.Line2D([x, x], [ymin, ymax], transform=trans, **kwargs) - self.add_line(l) - if scalex: - self._request_autoscale_view("x") - return l - - @staticmethod - def _check_no_units(vals, names): - # Helper method to check that vals are not unitized - for val, name in zip(vals, names): - if not munits._is_natively_supported(val): - raise ValueError(f"{name} must be a single scalar value, " - f"but got {val}") - - @_docstring.dedent_interpd - def axline(self, xy1, xy2=None, *, slope=None, **kwargs): - """ - Add an infinitely long straight line. - - The line can be defined either by two points *xy1* and *xy2*, or - by one point *xy1* and a *slope*. - - This draws a straight line "on the screen", regardless of the x and y - scales, and is thus also suitable for drawing exponential decays in - semilog plots, power laws in loglog plots, etc. However, *slope* - should only be used with linear scales; It has no clear meaning for - all other scales, and thus the behavior is undefined. Please specify - the line using the points *xy1*, *xy2* for non-linear scales. - - The *transform* keyword argument only applies to the points *xy1*, - *xy2*. The *slope* (if given) is always in data coordinates. This can - be used e.g. with ``ax.transAxes`` for drawing grid lines with a fixed - slope. - - Parameters - ---------- - xy1, xy2 : (float, float) - Points for the line to pass through. - Either *xy2* or *slope* has to be given. - slope : float, optional - The slope of the line. Either *xy2* or *slope* has to be given. - - Returns - ------- - `.Line2D` - - Other Parameters - ---------------- - **kwargs - Valid kwargs are `.Line2D` properties - - %(Line2D:kwdoc)s - - See Also - -------- - axhline : for horizontal lines - axvline : for vertical lines - - Examples - -------- - Draw a thick red line passing through (0, 0) and (1, 1):: - - >>> axline((0, 0), (1, 1), linewidth=4, color='r') - """ - if slope is not None and (self.get_xscale() != 'linear' or - self.get_yscale() != 'linear'): - raise TypeError("'slope' cannot be used with non-linear scales") - - datalim = [xy1] if xy2 is None else [xy1, xy2] - if "transform" in kwargs: - # if a transform is passed (i.e. line points not in data space), - # data limits should not be adjusted. - datalim = [] - - line = mlines._AxLine(xy1, xy2, slope, **kwargs) - # Like add_line, but correctly handling data limits. 
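- # (The steps below mirror add_line(): set artist props, clip to the
- # Axes patch, auto-label the child and register a remove method --
- # but only the explicitly supplied anchor points were collected in
- # *datalim* above, so the infinite line itself never inflates the
- # data limits used for autoscaling.)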
- self._set_artist_props(line) - if line.get_clip_path() is None: - line.set_clip_path(self.patch) - if not line.get_label(): - line.set_label(f"_child{len(self._children)}") - self._children.append(line) - line._remove_method = self._children.remove - self.update_datalim(datalim) - - self._request_autoscale_view() - return line - - @_docstring.dedent_interpd - def axhspan(self, ymin, ymax, xmin=0, xmax=1, **kwargs): - """ - Add a horizontal span (rectangle) across the Axes. - - The rectangle spans from *ymin* to *ymax* vertically, and, by default, - the whole x-axis horizontally. The x-span can be set using *xmin* - (default: 0) and *xmax* (default: 1) which are in axis units; e.g. - ``xmin = 0.5`` always refers to the middle of the x-axis regardless of - the limits set by `~.Axes.set_xlim`. - - Parameters - ---------- - ymin : float - Lower y-coordinate of the span, in data units. - ymax : float - Upper y-coordinate of the span, in data units. - xmin : float, default: 0 - Lower x-coordinate of the span, in x-axis (0-1) units. - xmax : float, default: 1 - Upper x-coordinate of the span, in x-axis (0-1) units. - - Returns - ------- - `~matplotlib.patches.Polygon` - Horizontal span (rectangle) from (xmin, ymin) to (xmax, ymax). - - Other Parameters - ---------------- - **kwargs : `~matplotlib.patches.Polygon` properties - - %(Polygon:kwdoc)s - - See Also - -------- - axvspan : Add a vertical span across the Axes. - """ - # Strip units away. - self._check_no_units([xmin, xmax], ['xmin', 'xmax']) - (ymin, ymax), = self._process_unit_info([("y", [ymin, ymax])], kwargs) - - verts = (xmin, ymin), (xmin, ymax), (xmax, ymax), (xmax, ymin) - p = mpatches.Polygon(verts, **kwargs) - p.set_transform(self.get_yaxis_transform(which="grid")) - self.add_patch(p) - self._request_autoscale_view("y") - return p - - @_docstring.dedent_interpd - def axvspan(self, xmin, xmax, ymin=0, ymax=1, **kwargs): - """ - Add a vertical span (rectangle) across the Axes. - - The rectangle spans from *xmin* to *xmax* horizontally, and, by - default, the whole y-axis vertically. The y-span can be set using - *ymin* (default: 0) and *ymax* (default: 1) which are in axis units; - e.g. ``ymin = 0.5`` always refers to the middle of the y-axis - regardless of the limits set by `~.Axes.set_ylim`. - - Parameters - ---------- - xmin : float - Lower x-coordinate of the span, in data units. - xmax : float - Upper x-coordinate of the span, in data units. - ymin : float, default: 0 - Lower y-coordinate of the span, in y-axis units (0-1). - ymax : float, default: 1 - Upper y-coordinate of the span, in y-axis units (0-1). - - Returns - ------- - `~matplotlib.patches.Polygon` - Vertical span (rectangle) from (xmin, ymin) to (xmax, ymax). - - Other Parameters - ---------------- - **kwargs : `~matplotlib.patches.Polygon` properties - - %(Polygon:kwdoc)s - - See Also - -------- - axhspan : Add a horizontal span across the Axes. - - Examples - -------- - Draw a vertical, green, translucent rectangle from x = 1.25 to - x = 1.55 that spans the yrange of the Axes. - - >>> axvspan(1.25, 1.55, facecolor='g', alpha=0.5) - - """ - # Strip units away. 
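- # (xmin/xmax live in data space and may carry units such as dates,
- # hence the conversion below; ymin/ymax are axes-fraction values in
- # [0, 1] and must therefore be plain, unit-less scalars.)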
- self._check_no_units([ymin, ymax], ['ymin', 'ymax']) - (xmin, xmax), = self._process_unit_info([("x", [xmin, xmax])], kwargs) - - verts = [(xmin, ymin), (xmin, ymax), (xmax, ymax), (xmax, ymin)] - p = mpatches.Polygon(verts, **kwargs) - p.set_transform(self.get_xaxis_transform(which="grid")) - p.get_path()._interpolation_steps = 100 - self.add_patch(p) - self._request_autoscale_view("x") - return p - - @_preprocess_data(replace_names=["y", "xmin", "xmax", "colors"], - label_namer="y") - def hlines(self, y, xmin, xmax, colors=None, linestyles='solid', - label='', **kwargs): - """ - Plot horizontal lines at each *y* from *xmin* to *xmax*. - - Parameters - ---------- - y : float or array-like - y-indexes where to plot the lines. - - xmin, xmax : float or array-like - Respective beginning and end of each line. If scalars are - provided, all lines will have the same length. - - colors : list of colors, default: :rc:`lines.color` - - linestyles : {'solid', 'dashed', 'dashdot', 'dotted'}, optional - - label : str, default: '' - - Returns - ------- - `~matplotlib.collections.LineCollection` - - Other Parameters - ---------------- - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - **kwargs : `~matplotlib.collections.LineCollection` properties. - - See Also - -------- - vlines : vertical lines - axhline : horizontal line across the Axes - """ - - # We do the conversion first since not all unitized data is uniform - xmin, xmax, y = self._process_unit_info( - [("x", xmin), ("x", xmax), ("y", y)], kwargs) - - if not np.iterable(y): - y = [y] - if not np.iterable(xmin): - xmin = [xmin] - if not np.iterable(xmax): - xmax = [xmax] - - # Create and combine masked_arrays from input - y, xmin, xmax = cbook._combine_masks(y, xmin, xmax) - y = np.ravel(y) - xmin = np.ravel(xmin) - xmax = np.ravel(xmax) - - masked_verts = np.ma.empty((len(y), 2, 2)) - masked_verts[:, 0, 0] = xmin - masked_verts[:, 0, 1] = y - masked_verts[:, 1, 0] = xmax - masked_verts[:, 1, 1] = y - - lines = mcoll.LineCollection(masked_verts, colors=colors, - linestyles=linestyles, label=label) - self.add_collection(lines, autolim=False) - lines._internal_update(kwargs) - - if len(y) > 0: - # Extreme values of xmin/xmax/y. Using masked_verts here handles - # the case of y being a masked *object* array (as can be generated - # e.g. by errorbar()), which would make nanmin/nanmax stumble. - minx = np.nanmin(masked_verts[..., 0]) - maxx = np.nanmax(masked_verts[..., 0]) - miny = np.nanmin(masked_verts[..., 1]) - maxy = np.nanmax(masked_verts[..., 1]) - corners = (minx, miny), (maxx, maxy) - self.update_datalim(corners) - self._request_autoscale_view() - - return lines - - @_preprocess_data(replace_names=["x", "ymin", "ymax", "colors"], - label_namer="x") - def vlines(self, x, ymin, ymax, colors=None, linestyles='solid', - label='', **kwargs): - """ - Plot vertical lines at each *x* from *ymin* to *ymax*. - - Parameters - ---------- - x : float or array-like - x-indexes where to plot the lines. - - ymin, ymax : float or array-like - Respective beginning and end of each line. If scalars are - provided, all lines will have the same length. - - colors : list of colors, default: :rc:`lines.color` - - linestyles : {'solid', 'dashed', 'dashdot', 'dotted'}, optional - - label : str, default: '' - - Returns - ------- - `~matplotlib.collections.LineCollection` - - Other Parameters - ---------------- - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - **kwargs : `~matplotlib.collections.LineCollection` properties. 
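-
- For instance, a minimal sketch (the values are illustrative only)::
-
- # three dashed vertical lines of differing heights
- ax.vlines([1, 2, 3], ymin=0, ymax=[0.5, 1.0, 1.5],
- colors='k', linestyles='dashed')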
-
-
- See Also
- --------
- hlines : horizontal lines
- axvline : vertical line across the Axes
- """
-
- # We do the conversion first since not all unitized data is uniform
- x, ymin, ymax = self._process_unit_info(
- [("x", x), ("y", ymin), ("y", ymax)], kwargs)
-
- if not np.iterable(x):
- x = [x]
- if not np.iterable(ymin):
- ymin = [ymin]
- if not np.iterable(ymax):
- ymax = [ymax]
-
- # Create and combine masked_arrays from input
- x, ymin, ymax = cbook._combine_masks(x, ymin, ymax)
- x = np.ravel(x)
- ymin = np.ravel(ymin)
- ymax = np.ravel(ymax)
-
- masked_verts = np.ma.empty((len(x), 2, 2))
- masked_verts[:, 0, 0] = x
- masked_verts[:, 0, 1] = ymin
- masked_verts[:, 1, 0] = x
- masked_verts[:, 1, 1] = ymax
-
- lines = mcoll.LineCollection(masked_verts, colors=colors,
- linestyles=linestyles, label=label)
- self.add_collection(lines, autolim=False)
- lines._internal_update(kwargs)
-
- if len(x) > 0:
- # Extreme values of x/ymin/ymax. Using masked_verts here handles
- # the case of x being a masked *object* array (as can be generated
- # e.g. by errorbar()), which would make nanmin/nanmax stumble.
- minx = np.nanmin(masked_verts[..., 0])
- maxx = np.nanmax(masked_verts[..., 0])
- miny = np.nanmin(masked_verts[..., 1])
- maxy = np.nanmax(masked_verts[..., 1])
- corners = (minx, miny), (maxx, maxy)
- self.update_datalim(corners)
- self._request_autoscale_view()
-
- return lines
-
- @_preprocess_data(replace_names=["positions", "lineoffsets",
- "linelengths", "linewidths",
- "colors", "linestyles"])
- @_docstring.dedent_interpd
- def eventplot(self, positions, orientation='horizontal', lineoffsets=1,
- linelengths=1, linewidths=None, colors=None, alpha=None,
- linestyles='solid', **kwargs):
- """
- Plot identical parallel lines at the given positions.
-
- This type of plot is commonly used in neuroscience for representing
- neural events, where it is usually called a spike raster, dot raster,
- or raster plot.
-
- However, it is useful in any situation where you wish to show the
- timing or position of multiple sets of discrete events, such as the
- arrival times of people to a business on each day of the month or the
- date of hurricanes each year of the last century.
-
- Parameters
- ----------
- positions : array-like or list of array-like
- A 1D array-like defines the positions of one sequence of events.
-
- Multiple groups of events may be passed as a list of array-likes.
- Each group can be styled independently by passing lists of values
- to *lineoffsets*, *linelengths*, *linewidths*, *colors* and
- *linestyles*.
-
- Note that *positions* can be a 2D array, but in practice different
- event groups usually have different counts so that one will use a
- list of different-length arrays rather than a 2D array.
-
- orientation : {'horizontal', 'vertical'}, default: 'horizontal'
- The direction of the event sequence:
-
- - 'horizontal': the events are arranged horizontally.
- The indicator lines are vertical.
- - 'vertical': the events are arranged vertically.
- The indicator lines are horizontal.
-
- lineoffsets : float or array-like, default: 1
- The offset of the center of the lines from the origin, in the
- direction orthogonal to *orientation*.
-
- If *positions* is 2D, this can be a sequence with length matching
- the length of *positions*.
-
- linelengths : float or array-like, default: 1
- The total height of the lines (i.e. the lines stretch from
- ``lineoffset - linelength/2`` to ``lineoffset + linelength/2``).
- - If *positions* is 2D, this can be a sequence with length matching - the length of *positions*. - - linewidths : float or array-like, default: :rc:`lines.linewidth` - The line width(s) of the event lines, in points. - - If *positions* is 2D, this can be a sequence with length matching - the length of *positions*. - - colors : color or list of colors, default: :rc:`lines.color` - The color(s) of the event lines. - - If *positions* is 2D, this can be a sequence with length matching - the length of *positions*. - - alpha : float or array-like, default: 1 - The alpha blending value(s), between 0 (transparent) and 1 - (opaque). - - If *positions* is 2D, this can be a sequence with length matching - the length of *positions*. - - linestyles : str or tuple or list of such values, default: 'solid' - Default is 'solid'. Valid strings are ['solid', 'dashed', - 'dashdot', 'dotted', '-', '--', '-.', ':']. Dash tuples - should be of the form:: - - (offset, onoffseq), - - where *onoffseq* is an even length tuple of on and off ink - in points. - - If *positions* is 2D, this can be a sequence with length matching - the length of *positions*. - - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - - **kwargs - Other keyword arguments are line collection properties. See - `.LineCollection` for a list of the valid properties. - - Returns - ------- - list of `.EventCollection` - The `.EventCollection` that were added. - - Notes - ----- - For *linelengths*, *linewidths*, *colors*, *alpha* and *linestyles*, if - only a single value is given, that value is applied to all lines. If an - array-like is given, it must have the same length as *positions*, and - each value will be applied to the corresponding row of the array. - - Examples - -------- - .. plot:: gallery/lines_bars_and_markers/eventplot_demo.py - """ - - lineoffsets, linelengths = self._process_unit_info( - [("y", lineoffsets), ("y", linelengths)], kwargs) - - # fix positions, noting that it can be a list of lists: - if not np.iterable(positions): - positions = [positions] - elif any(np.iterable(position) for position in positions): - positions = [np.asanyarray(position) for position in positions] - else: - positions = [np.asanyarray(positions)] - - if len(positions) == 0: - return [] - - poss = [] - for position in positions: - poss += self._process_unit_info([("x", position)], kwargs) - positions = poss - - # prevent 'singular' keys from **kwargs dict from overriding the effect - # of 'plural' keyword arguments (e.g. 
'color' overriding 'colors')
- colors = cbook._local_over_kwdict(colors, kwargs, 'color')
- linewidths = cbook._local_over_kwdict(linewidths, kwargs, 'linewidth')
- linestyles = cbook._local_over_kwdict(linestyles, kwargs, 'linestyle')
-
- if not np.iterable(lineoffsets):
- lineoffsets = [lineoffsets]
- if not np.iterable(linelengths):
- linelengths = [linelengths]
- if not np.iterable(linewidths):
- linewidths = [linewidths]
- if not np.iterable(colors):
- colors = [colors]
- if not np.iterable(alpha):
- alpha = [alpha]
- if hasattr(linestyles, 'lower') or not np.iterable(linestyles):
- linestyles = [linestyles]
-
- lineoffsets = np.asarray(lineoffsets)
- linelengths = np.asarray(linelengths)
- linewidths = np.asarray(linewidths)
-
- if len(lineoffsets) == 0:
- lineoffsets = [None]
- if len(linelengths) == 0:
- linelengths = [None]
- if len(linewidths) == 0:
- linewidths = [None]
- if len(colors) == 0:
- colors = [None]
- try:
- # Early conversion of the colors into RGBA values to take care
- # of cases like colors='0.5' or colors='C1'. (Issue #8193)
- colors = mcolors.to_rgba_array(colors)
- except ValueError:
- # Will fail if any element of *colors* is None. But as long
- # as len(colors) == 1 or len(positions), the rest of the
- # code should process *colors* properly.
- pass
-
- if len(lineoffsets) == 1 and len(positions) != 1:
- lineoffsets = np.tile(lineoffsets, len(positions))
- lineoffsets[0] = 0
- lineoffsets = np.cumsum(lineoffsets)
- if len(linelengths) == 1:
- linelengths = np.tile(linelengths, len(positions))
- if len(linewidths) == 1:
- linewidths = np.tile(linewidths, len(positions))
- if len(colors) == 1:
- colors = list(colors) * len(positions)
- if len(alpha) == 1:
- alpha = list(alpha) * len(positions)
- if len(linestyles) == 1:
- linestyles = [linestyles] * len(positions)
-
- if len(lineoffsets) != len(positions):
- raise ValueError('lineoffsets and positions are unequal sized '
- 'sequences')
- if len(linelengths) != len(positions):
- raise ValueError('linelengths and positions are unequal sized '
- 'sequences')
- if len(linewidths) != len(positions):
- raise ValueError('linewidths and positions are unequal sized '
- 'sequences')
- if len(colors) != len(positions):
- raise ValueError('colors and positions are unequal sized '
- 'sequences')
- if len(alpha) != len(positions):
- raise ValueError('alpha and positions are unequal sized '
- 'sequences')
- if len(linestyles) != len(positions):
- raise ValueError('linestyles and positions are unequal sized '
- 'sequences')
-
- colls = []
- for position, lineoffset, linelength, linewidth, color, alpha_, \
- linestyle in \
- zip(positions, lineoffsets, linelengths, linewidths,
- colors, alpha, linestyles):
- coll = mcoll.EventCollection(position,
- orientation=orientation,
- lineoffset=lineoffset,
- linelength=linelength,
- linewidth=linewidth,
- color=color,
- alpha=alpha_,
- linestyle=linestyle)
- self.add_collection(coll, autolim=False)
- coll._internal_update(kwargs)
- colls.append(coll)
-
- if len(positions) > 0:
- # try to get min/max
- min_max = [(np.min(_p), np.max(_p)) for _p in positions
- if len(_p) > 0]
- # if we have any non-empty positions, try to autoscale
- if len(min_max) > 0:
- mins, maxes = zip(*min_max)
- minpos = np.min(mins)
- maxpos = np.max(maxes)
-
- minline = (lineoffsets - linelengths).min()
- maxline = (lineoffsets + linelengths).max()
-
- if orientation == "vertical":
- corners = (minline, minpos), (maxline, maxpos)
- else: # "horizontal"
- corners 
= (minpos, minline), (maxpos, maxline)
- self.update_datalim(corners)
- self._request_autoscale_view()
-
- return colls
-
- #### Basic plotting
-
- # Uses a custom implementation of data-kwarg handling in
- # _process_plot_var_args.
- @_docstring.dedent_interpd
- def plot(self, *args, scalex=True, scaley=True, data=None, **kwargs):
- """
- Plot y versus x as lines and/or markers.
-
- Call signatures::
-
- plot([x], y, [fmt], *, data=None, **kwargs)
- plot([x], y, [fmt], [x2], y2, [fmt2], ..., **kwargs)
-
- The coordinates of the points or line nodes are given by *x*, *y*.
-
- The optional parameter *fmt* is a convenient way for defining basic
- formatting like color, marker and linestyle. It's a shortcut string
- notation described in the *Notes* section below.
-
- >>> plot(x, y) # plot x and y using default line style and color
- >>> plot(x, y, 'bo') # plot x and y using blue circle markers
- >>> plot(y) # plot y using x as index array 0..N-1
- >>> plot(y, 'r+') # ditto, but with red plusses
-
- You can use `.Line2D` properties as keyword arguments for more
- control on the appearance. Line properties and *fmt* can be mixed.
- The following two calls yield identical results:
-
- >>> plot(x, y, 'go--', linewidth=2, markersize=12)
- >>> plot(x, y, color='green', marker='o', linestyle='dashed',
- ... linewidth=2, markersize=12)
-
- When conflicting with *fmt*, keyword arguments take precedence.
-
-
- **Plotting labelled data**
-
- There's a convenient way for plotting objects with labelled data (i.e.
- data that can be accessed by index ``obj['y']``). Instead of giving
- the data in *x* and *y*, you can provide the object in the *data*
- parameter and just give the labels for *x* and *y*::
-
- >>> plot('xlabel', 'ylabel', data=obj)
-
- All indexable objects are supported. This could e.g. be a `dict`, a
- `pandas.DataFrame` or a structured numpy array.
-
-
- **Plotting multiple sets of data**
-
- There are various ways to plot multiple sets of data.
-
- - The most straightforward way is just to call `plot` multiple times.
- Example:
-
- >>> plot(x1, y1, 'bo')
- >>> plot(x2, y2, 'go')
-
- - If *x* and/or *y* are 2D arrays a separate data set will be drawn
- for every column. If both *x* and *y* are 2D, they must have the
- same shape. If only one of them is 2D with shape (N, m) the other
- must have length N and will be used for every data set m.
-
- Example:
-
- >>> x = [1, 2, 3]
- >>> y = np.array([[1, 2], [3, 4], [5, 6]])
- >>> plot(x, y)
-
- is equivalent to:
-
- >>> for col in range(y.shape[1]):
- ... plot(x, y[:, col])
-
- - The third way is to specify multiple sets of *[x]*, *y*, *[fmt]*
- groups::
-
- >>> plot(x1, y1, 'g^', x2, y2, 'g-')
-
- In this case, any additional keyword argument applies to all
- datasets. Also, this syntax cannot be combined with the *data*
- parameter.
-
- By default, each line is assigned a different style specified by a
- 'style cycle'. The *fmt* and line property parameters are only
- necessary if you want explicit deviations from these defaults.
- Alternatively, you can also change the style cycle using
- :rc:`axes.prop_cycle`.
-
-
- Parameters
- ----------
- x, y : array-like or scalar
- The horizontal / vertical coordinates of the data points.
- *x* values are optional and default to ``range(len(y))``.
-
- Commonly, these parameters are 1D arrays.
-
- They can also be scalars, or two-dimensional (in that case, the
- columns represent separate data sets).
-
- These arguments cannot be passed as keywords.
-
- fmt : str, optional
- A format string, e.g. 
'ro' for red circles. See the *Notes* - section for a full description of the format strings. - - Format strings are just an abbreviation for quickly setting - basic line properties. All of these and more can also be - controlled by keyword arguments. - - This argument cannot be passed as keyword. - - data : indexable object, optional - An object with labelled data. If given, provide the label names to - plot in *x* and *y*. - - .. note:: - Technically there's a slight ambiguity in calls where the - second label is a valid *fmt*. ``plot('n', 'o', data=obj)`` - could be ``plt(x, y)`` or ``plt(y, fmt)``. In such cases, - the former interpretation is chosen, but a warning is issued. - You may suppress the warning by adding an empty format string - ``plot('n', 'o', '', data=obj)``. - - Returns - ------- - list of `.Line2D` - A list of lines representing the plotted data. - - Other Parameters - ---------------- - scalex, scaley : bool, default: True - These parameters determine if the view limits are adapted to the - data limits. The values are passed on to - `~.axes.Axes.autoscale_view`. - - **kwargs : `~matplotlib.lines.Line2D` properties, optional - *kwargs* are used to specify properties like a line label (for - auto legends), linewidth, antialiasing, marker face color. - Example:: - - >>> plot([1, 2, 3], [1, 2, 3], 'go-', label='line 1', linewidth=2) - >>> plot([1, 2, 3], [1, 4, 9], 'rs', label='line 2') - - If you specify multiple lines with one plot call, the kwargs apply - to all those lines. In case the label object is iterable, each - element is used as labels for each set of data. - - Here is a list of available `.Line2D` properties: - - %(Line2D:kwdoc)s - - See Also - -------- - scatter : XY scatter plot with markers of varying size and/or color ( - sometimes also called bubble chart). - - Notes - ----- - **Format Strings** - - A format string consists of a part for color, marker and line:: - - fmt = '[marker][line][color]' - - Each of them is optional. If not provided, the value from the style - cycle is used. Exception: If ``line`` is given, but no ``marker``, - the data will be a line without markers. - - Other combinations such as ``[color][marker][line]`` are also - supported, but note that their parsing may be ambiguous. 
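-
- For instance, the following two calls request the same style, written
- once as a format string and once as explicit keywords (an
- illustrative pairing, not an exhaustive parsing rule)::
-
- plot(x, y, 'o--r')
- plot(x, y, marker='o', linestyle='--', color='r')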
-
-
- **Markers**
-
- ============= ===============================
- character description
- ============= ===============================
- ``'.'`` point marker
- ``','`` pixel marker
- ``'o'`` circle marker
- ``'v'`` triangle_down marker
- ``'^'`` triangle_up marker
- ``'<'`` triangle_left marker
- ``'>'`` triangle_right marker
- ``'1'`` tri_down marker
- ``'2'`` tri_up marker
- ``'3'`` tri_left marker
- ``'4'`` tri_right marker
- ``'8'`` octagon marker
- ``'s'`` square marker
- ``'p'`` pentagon marker
- ``'P'`` plus (filled) marker
- ``'*'`` star marker
- ``'h'`` hexagon1 marker
- ``'H'`` hexagon2 marker
- ``'+'`` plus marker
- ``'x'`` x marker
- ``'X'`` x (filled) marker
- ``'D'`` diamond marker
- ``'d'`` thin_diamond marker
- ``'|'`` vline marker
- ``'_'`` hline marker
- ============= ===============================
-
- **Line Styles**
-
- ============= ===============================
- character description
- ============= ===============================
- ``'-'`` solid line style
- ``'--'`` dashed line style
- ``'-.'`` dash-dot line style
- ``':'`` dotted line style
- ============= ===============================
-
- Example format strings::
-
- 'b' # blue markers with default shape
- 'or' # red circles
- '-g' # green solid line
- '--' # dashed line with default color
- '^k:' # black triangle_up markers connected by a dotted line
-
- **Colors**
-
- The supported color abbreviations are the single letter codes
-
- ============= ===============================
- character color
- ============= ===============================
- ``'b'`` blue
- ``'g'`` green
- ``'r'`` red
- ``'c'`` cyan
- ``'m'`` magenta
- ``'y'`` yellow
- ``'k'`` black
- ``'w'`` white
- ============= ===============================
-
- and the ``'CN'`` colors that index into the default property cycle.
-
- If the color is the only part of the format string, you can
- additionally use any `matplotlib.colors` spec, e.g. full names
- (``'green'``) or hex strings (``'#008000'``).
- """
- kwargs = cbook.normalize_kwargs(kwargs, mlines.Line2D)
- lines = [*self._get_lines(*args, data=data, **kwargs)]
- for line in lines:
- self.add_line(line)
- if scalex:
- self._request_autoscale_view("x")
- if scaley:
- self._request_autoscale_view("y")
- return lines
-
- @_preprocess_data(replace_names=["x", "y"], label_namer="y")
- @_docstring.dedent_interpd
- def plot_date(self, x, y, fmt='o', tz=None, xdate=True, ydate=False,
- **kwargs):
- """
- [*Discouraged*] Plot coercing the axis to treat floats as dates.
-
- .. admonition:: Discouraged
-
- This method exists for historic reasons and will be deprecated in
- the future.
-
- - ``datetime``-like data should directly be plotted using
- `~.Axes.plot`.
- - If you need to plot plain numeric data as :ref:`date-format` or
- need to set a timezone, call ``ax.xaxis.axis_date`` /
- ``ax.yaxis.axis_date`` before `~.Axes.plot`. See
- `.Axis.axis_date`.
-
- Similar to `.plot`, this plots *y* vs. *x* as lines or markers.
- However, the axis labels are formatted as dates depending on *xdate*
- and *ydate*. Note that `.plot` will work with `datetime` and
- `numpy.datetime64` objects without resorting to this method.
-
- Parameters
- ----------
- x, y : array-like
- The coordinates of the data points. If *xdate* or *ydate* is
- *True*, the respective values *x* or *y* are interpreted as
- :ref:`Matplotlib dates <date-format>`.
-
- fmt : str, optional
- The plot format string. For details, see the corresponding
- parameter in `.plot`. 
- - tz : timezone string or `datetime.tzinfo`, default: :rc:`timezone` - The time zone to use in labeling dates. - - xdate : bool, default: True - If *True*, the *x*-axis will be interpreted as Matplotlib dates. - - ydate : bool, default: False - If *True*, the *y*-axis will be interpreted as Matplotlib dates. - - Returns - ------- - list of `.Line2D` - Objects representing the plotted data. - - Other Parameters - ---------------- - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - **kwargs - Keyword arguments control the `.Line2D` properties: - - %(Line2D:kwdoc)s - - See Also - -------- - matplotlib.dates : Helper functions on dates. - matplotlib.dates.date2num : Convert dates to num. - matplotlib.dates.num2date : Convert num to dates. - matplotlib.dates.drange : Create an equally spaced sequence of dates. - - Notes - ----- - If you are using custom date tickers and formatters, it may be - necessary to set the formatters/locators after the call to - `.plot_date`. `.plot_date` will set the default tick locator to - `.AutoDateLocator` (if the tick locator is not already set to a - `.DateLocator` instance) and the default tick formatter to - `.AutoDateFormatter` (if the tick formatter is not already set to a - `.DateFormatter` instance). - """ - if xdate: - self.xaxis_date(tz) - if ydate: - self.yaxis_date(tz) - return self.plot(x, y, fmt, **kwargs) - - # @_preprocess_data() # let 'plot' do the unpacking.. - @_docstring.dedent_interpd - def loglog(self, *args, **kwargs): - """ - Make a plot with log scaling on both the x- and y-axis. - - Call signatures:: - - loglog([x], y, [fmt], data=None, **kwargs) - loglog([x], y, [fmt], [x2], y2, [fmt2], ..., **kwargs) - - This is just a thin wrapper around `.plot` which additionally changes - both the x-axis and the y-axis to log scaling. All the concepts and - parameters of plot can be used here as well. - - The additional parameters *base*, *subs* and *nonpositive* control the - x/y-axis properties. They are just forwarded to `.Axes.set_xscale` and - `.Axes.set_yscale`. To use different properties on the x-axis and the - y-axis, use e.g. - ``ax.set_xscale("log", base=10); ax.set_yscale("log", base=2)``. - - Parameters - ---------- - base : float, default: 10 - Base of the logarithm. - - subs : sequence, optional - The location of the minor ticks. If *None*, reasonable locations - are automatically chosen depending on the number of decades in the - plot. See `.Axes.set_xscale`/`.Axes.set_yscale` for details. - - nonpositive : {'mask', 'clip'}, default: 'clip' - Non-positive values can be masked as invalid, or clipped to a very - small positive number. - - **kwargs - All parameters supported by `.plot`. - - Returns - ------- - list of `.Line2D` - Objects representing the plotted data. - """ - dx = {k: v for k, v in kwargs.items() - if k in ['base', 'subs', 'nonpositive', - 'basex', 'subsx', 'nonposx']} - self.set_xscale('log', **dx) - dy = {k: v for k, v in kwargs.items() - if k in ['base', 'subs', 'nonpositive', - 'basey', 'subsy', 'nonposy']} - self.set_yscale('log', **dy) - return self.plot( - *args, **{k: v for k, v in kwargs.items() if k not in {*dx, *dy}}) - - # @_preprocess_data() # let 'plot' do the unpacking.. - @_docstring.dedent_interpd - def semilogx(self, *args, **kwargs): - """ - Make a plot with log scaling on the x-axis. 
- - Call signatures:: - - semilogx([x], y, [fmt], data=None, **kwargs) - semilogx([x], y, [fmt], [x2], y2, [fmt2], ..., **kwargs) - - This is just a thin wrapper around `.plot` which additionally changes - the x-axis to log scaling. All the concepts and parameters of plot can - be used here as well. - - The additional parameters *base*, *subs*, and *nonpositive* control the - x-axis properties. They are just forwarded to `.Axes.set_xscale`. - - Parameters - ---------- - base : float, default: 10 - Base of the x logarithm. - - subs : array-like, optional - The location of the minor xticks. If *None*, reasonable locations - are automatically chosen depending on the number of decades in the - plot. See `.Axes.set_xscale` for details. - - nonpositive : {'mask', 'clip'}, default: 'clip' - Non-positive values in x can be masked as invalid, or clipped to a - very small positive number. - - **kwargs - All parameters supported by `.plot`. - - Returns - ------- - list of `.Line2D` - Objects representing the plotted data. - """ - d = {k: v for k, v in kwargs.items() - if k in ['base', 'subs', 'nonpositive', - 'basex', 'subsx', 'nonposx']} - self.set_xscale('log', **d) - return self.plot( - *args, **{k: v for k, v in kwargs.items() if k not in d}) - - # @_preprocess_data() # let 'plot' do the unpacking.. - @_docstring.dedent_interpd - def semilogy(self, *args, **kwargs): - """ - Make a plot with log scaling on the y-axis. - - Call signatures:: - - semilogy([x], y, [fmt], data=None, **kwargs) - semilogy([x], y, [fmt], [x2], y2, [fmt2], ..., **kwargs) - - This is just a thin wrapper around `.plot` which additionally changes - the y-axis to log scaling. All the concepts and parameters of plot can - be used here as well. - - The additional parameters *base*, *subs*, and *nonpositive* control the - y-axis properties. They are just forwarded to `.Axes.set_yscale`. - - Parameters - ---------- - base : float, default: 10 - Base of the y logarithm. - - subs : array-like, optional - The location of the minor yticks. If *None*, reasonable locations - are automatically chosen depending on the number of decades in the - plot. See `.Axes.set_yscale` for details. - - nonpositive : {'mask', 'clip'}, default: 'clip' - Non-positive values in y can be masked as invalid, or clipped to a - very small positive number. - - **kwargs - All parameters supported by `.plot`. - - Returns - ------- - list of `.Line2D` - Objects representing the plotted data. - """ - d = {k: v for k, v in kwargs.items() - if k in ['base', 'subs', 'nonpositive', - 'basey', 'subsy', 'nonposy']} - self.set_yscale('log', **d) - return self.plot( - *args, **{k: v for k, v in kwargs.items() if k not in d}) - - @_preprocess_data(replace_names=["x"], label_namer="x") - def acorr(self, x, **kwargs): - """ - Plot the autocorrelation of *x*. - - Parameters - ---------- - x : array-like - - detrend : callable, default: `.mlab.detrend_none` (no detrending) - A detrending function applied to *x*. It must have the - signature :: - - detrend(x: np.ndarray) -> np.ndarray - - normed : bool, default: True - If ``True``, input vectors are normalised to unit length. - - usevlines : bool, default: True - Determines the plot style. - - If ``True``, vertical lines are plotted from 0 to the acorr value - using `.Axes.vlines`. Additionally, a horizontal line is plotted - at y=0 using `.Axes.axhline`. - - If ``False``, markers are plotted at the acorr values using - `.Axes.plot`. - - maxlags : int, default: 10 - Number of lags to show. 
If ``None``, will return all
- ``2 * len(x) - 1`` lags.
-
- Returns
- -------
- lags : array (length ``2*maxlags+1``)
- The lag vector.
- c : array (length ``2*maxlags+1``)
- The auto correlation vector.
- line : `.LineCollection` or `.Line2D`
- `.Artist` added to the Axes of the correlation:
-
- - `.LineCollection` if *usevlines* is True.
- - `.Line2D` if *usevlines* is False.
- b : `~matplotlib.lines.Line2D` or None
- Horizontal line at 0 if *usevlines* is True
- None if *usevlines* is False.
-
- Other Parameters
- ----------------
- linestyle : `~matplotlib.lines.Line2D` property, optional
- The linestyle for plotting the data points.
- Only used if *usevlines* is ``False``.
-
- marker : str, default: 'o'
- The marker for plotting the data points.
- Only used if *usevlines* is ``False``.
-
- data : indexable object, optional
- DATA_PARAMETER_PLACEHOLDER
-
- **kwargs
- Additional parameters are passed to `.Axes.vlines` and
- `.Axes.axhline` if *usevlines* is ``True``; otherwise they are
- passed to `.Axes.plot`.
-
- Notes
- -----
- The cross correlation is performed with `numpy.correlate` with
- ``mode = "full"``.
- """
- return self.xcorr(x, x, **kwargs)
-
- @_preprocess_data(replace_names=["x", "y"], label_namer="y")
- def xcorr(self, x, y, normed=True, detrend=mlab.detrend_none,
- usevlines=True, maxlags=10, **kwargs):
- r"""
- Plot the cross correlation between *x* and *y*.
-
- The correlation with lag k is defined as
- :math:`\sum_n x[n+k] \cdot y^*[n]`, where :math:`y^*` is the complex
- conjugate of :math:`y`.
-
- Parameters
- ----------
- x, y : array-like of length n
-
- detrend : callable, default: `.mlab.detrend_none` (no detrending)
- A detrending function applied to *x* and *y*. It must have the
- signature ::
-
- detrend(x: np.ndarray) -> np.ndarray
-
- normed : bool, default: True
- If ``True``, input vectors are normalised to unit length.
-
- usevlines : bool, default: True
- Determines the plot style.
-
- If ``True``, vertical lines are plotted from 0 to the xcorr value
- using `.Axes.vlines`. Additionally, a horizontal line is plotted
- at y=0 using `.Axes.axhline`.
-
- If ``False``, markers are plotted at the xcorr values using
- `.Axes.plot`.
-
- maxlags : int, default: 10
- Number of lags to show. If None, will return all ``2 * len(x) - 1``
- lags.
-
- Returns
- -------
- lags : array (length ``2*maxlags+1``)
- The lag vector.
- c : array (length ``2*maxlags+1``)
- The cross correlation vector.
- line : `.LineCollection` or `.Line2D`
- `.Artist` added to the Axes of the correlation:
-
- - `.LineCollection` if *usevlines* is True.
- - `.Line2D` if *usevlines* is False.
- b : `~matplotlib.lines.Line2D` or None
- Horizontal line at 0 if *usevlines* is True
- None if *usevlines* is False.
-
- Other Parameters
- ----------------
- linestyle : `~matplotlib.lines.Line2D` property, optional
- The linestyle for plotting the data points.
- Only used if *usevlines* is ``False``.
-
- marker : str, default: 'o'
- The marker for plotting the data points.
- Only used if *usevlines* is ``False``.
-
- data : indexable object, optional
- DATA_PARAMETER_PLACEHOLDER
-
- **kwargs
- Additional parameters are passed to `.Axes.vlines` and
- `.Axes.axhline` if *usevlines* is ``True``; otherwise they are
- passed to `.Axes.plot`.
-
- Notes
- -----
- The cross correlation is performed with `numpy.correlate` with
- ``mode = "full"``. 
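-
- Examples
- --------
- A minimal sketch; the lagged sine pair below is illustrative::
-
- t = np.arange(200)
- x = np.sin(0.1 * t)
- y = np.sin(0.1 * (t - 5)) # roughly a 5-sample lag of *x*
- lags, c, line, b = ax.xcorr(x, y, maxlags=20)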
- """ - Nx = len(x) - if Nx != len(y): - raise ValueError('x and y must be equal length') - - x = detrend(np.asarray(x)) - y = detrend(np.asarray(y)) - - correls = np.correlate(x, y, mode="full") - - if normed: - correls = correls / np.sqrt(np.dot(x, x) * np.dot(y, y)) - - if maxlags is None: - maxlags = Nx - 1 - - if maxlags >= Nx or maxlags < 1: - raise ValueError('maxlags must be None or strictly ' - 'positive < %d' % Nx) - - lags = np.arange(-maxlags, maxlags + 1) - correls = correls[Nx - 1 - maxlags:Nx + maxlags] - - if usevlines: - a = self.vlines(lags, [0], correls, **kwargs) - # Make label empty so only vertical lines get a legend entry - kwargs.pop('label', '') - b = self.axhline(**kwargs) - else: - kwargs.setdefault('marker', 'o') - kwargs.setdefault('linestyle', 'None') - a, = self.plot(lags, correls, **kwargs) - b = None - return lags, correls, a, b - - #### Specialized plotting - - # @_preprocess_data() # let 'plot' do the unpacking.. - def step(self, x, y, *args, where='pre', data=None, **kwargs): - """ - Make a step plot. - - Call signatures:: - - step(x, y, [fmt], *, data=None, where='pre', **kwargs) - step(x, y, [fmt], x2, y2, [fmt2], ..., *, where='pre', **kwargs) - - This is just a thin wrapper around `.plot` which changes some - formatting options. Most of the concepts and parameters of plot can be - used here as well. - - .. note:: - - This method uses a standard plot with a step drawstyle: The *x* - values are the reference positions and steps extend left/right/both - directions depending on *where*. - - For the common case where you know the values and edges of the - steps, use `~.Axes.stairs` instead. - - Parameters - ---------- - x : array-like - 1D sequence of x positions. It is assumed, but not checked, that - it is uniformly increasing. - - y : array-like - 1D sequence of y levels. - - fmt : str, optional - A format string, e.g. 'g' for a green line. See `.plot` for a more - detailed description. - - Note: While full format strings are accepted, it is recommended to - only specify the color. Line styles are currently ignored (use - the keyword argument *linestyle* instead). Markers are accepted - and plotted on the given positions, however, this is a rarely - needed feature for step plots. - - where : {'pre', 'post', 'mid'}, default: 'pre' - Define where the steps should be placed: - - - 'pre': The y value is continued constantly to the left from - every *x* position, i.e. the interval ``(x[i-1], x[i]]`` has the - value ``y[i]``. - - 'post': The y value is continued constantly to the right from - every *x* position, i.e. the interval ``[x[i], x[i+1])`` has the - value ``y[i]``. - - 'mid': Steps occur half-way between the *x* positions. - - data : indexable object, optional - An object with labelled data. If given, provide the label names to - plot in *x* and *y*. - - **kwargs - Additional parameters are the same as those for `.plot`. - - Returns - ------- - list of `.Line2D` - Objects representing the plotted data. - """ - _api.check_in_list(('pre', 'post', 'mid'), where=where) - kwargs['drawstyle'] = 'steps-' + where - return self.plot(x, y, *args, data=data, **kwargs) - - @staticmethod - def _convert_dx(dx, x0, xconv, convert): - """ - Small helper to do logic of width conversion flexibly. - - *dx* and *x0* have units, but *xconv* has already been converted - to unitless (and is an ndarray). This allows the *dx* to have units - that are different from *x0*, but are still accepted by the - ``__add__`` operator of *x0*. - """ - - # x should be an array... 
- assert type(xconv) is np.ndarray - - if xconv.size == 0: - # xconv has already been converted, but maybe empty... - return convert(dx) - - try: - # attempt to add the width to x0; this works for - # datetime+timedelta, for instance - - # only use the first element of x and x0. This saves - # having to be sure addition works across the whole - # vector. This is particularly an issue if - # x0 and dx are lists so x0 + dx just concatenates the lists. - # We can't just cast x0 and dx to numpy arrays because that - # removes the units from unit packages like `pint` that - # wrap numpy arrays. - try: - x0 = cbook._safe_first_finite(x0) - except (TypeError, IndexError, KeyError): - pass - - try: - x = cbook._safe_first_finite(xconv) - except (TypeError, IndexError, KeyError): - x = xconv - - delist = False - if not np.iterable(dx): - dx = [dx] - delist = True - dx = [convert(x0 + ddx) - x for ddx in dx] - if delist: - dx = dx[0] - except (ValueError, TypeError, AttributeError): - # if the above fails (for any reason) just fallback to what - # we do by default and convert dx by itself. - dx = convert(dx) - return dx - - @_preprocess_data() - @_docstring.dedent_interpd - def bar(self, x, height, width=0.8, bottom=None, *, align="center", - **kwargs): - r""" - Make a bar plot. - - The bars are positioned at *x* with the given *align*\ment. Their - dimensions are given by *height* and *width*. The vertical baseline - is *bottom* (default 0). - - Many parameters can take either a single value applying to all bars - or a sequence of values, one for each bar. - - Parameters - ---------- - x : float or array-like - The x coordinates of the bars. See also *align* for the - alignment of the bars to the coordinates. - - height : float or array-like - The height(s) of the bars. - - width : float or array-like, default: 0.8 - The width(s) of the bars. - - bottom : float or array-like, default: 0 - The y coordinate(s) of the bottom side(s) of the bars. - - align : {'center', 'edge'}, default: 'center' - Alignment of the bars to the *x* coordinates: - - - 'center': Center the base on the *x* positions. - - 'edge': Align the left edges of the bars with the *x* positions. - - To align the bars on the right edge pass a negative *width* and - ``align='edge'``. - - Returns - ------- - `.BarContainer` - Container with all the bars and optionally errorbars. - - Other Parameters - ---------------- - color : color or list of color, optional - The colors of the bar faces. - - edgecolor : color or list of color, optional - The colors of the bar edges. - - linewidth : float or array-like, optional - Width of the bar edge(s). If 0, don't draw edges. - - tick_label : str or list of str, optional - The tick labels of the bars. - Default: None (Use default numeric labels.) - - label : str or list of str, optional - A single label is attached to the resulting `.BarContainer` as a - label for the whole dataset. - If a list is provided, it must be the same length as *x* and - labels the individual bars. Repeated labels are not de-duplicated - and will cause repeated label entries, so this is best used when - bars also differ in style (e.g., by passing a list to *color*.) - - xerr, yerr : float or array-like of shape(N,) or shape(2, N), optional - If not *None*, add horizontal / vertical errorbars to the bar tips. - The values are +/- sizes relative to the data: - - - scalar: symmetric +/- values for all bars - - shape(N,): symmetric +/- values for each bar - - shape(2, N): Separate - and + values for each bar. 
First row - contains the lower errors, the second row contains the upper - errors. - - *None*: No errorbar. (Default) - - See :doc:`/gallery/statistics/errorbar_features` for an example on - the usage of *xerr* and *yerr*. - - ecolor : color or list of color, default: 'black' - The line color of the errorbars. - - capsize : float, default: :rc:`errorbar.capsize` - The length of the error bar caps in points. - - error_kw : dict, optional - Dictionary of keyword arguments to be passed to the - `~.Axes.errorbar` method. Values of *ecolor* or *capsize* defined - here take precedence over the independent keyword arguments. - - log : bool, default: False - If *True*, set the y-axis to be log scale. - - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - - **kwargs : `.Rectangle` properties - - %(Rectangle:kwdoc)s - - See Also - -------- - barh : Plot a horizontal bar plot. - - Notes - ----- - Stacked bars can be achieved by passing individual *bottom* values per - bar. See :doc:`/gallery/lines_bars_and_markers/bar_stacked`. - """ - kwargs = cbook.normalize_kwargs(kwargs, mpatches.Patch) - color = kwargs.pop('color', None) - if color is None: - color = self._get_patches_for_fill.get_next_color() - edgecolor = kwargs.pop('edgecolor', None) - linewidth = kwargs.pop('linewidth', None) - hatch = kwargs.pop('hatch', None) - - # Because xerr and yerr will be passed to errorbar, most dimension - # checking and processing will be left to the errorbar method. - xerr = kwargs.pop('xerr', None) - yerr = kwargs.pop('yerr', None) - error_kw = kwargs.pop('error_kw', {}) - ezorder = error_kw.pop('zorder', None) - if ezorder is None: - ezorder = kwargs.get('zorder', None) - if ezorder is not None: - # If using the bar zorder, increment slightly to make sure - # errorbars are drawn on top of bars - ezorder += 0.01 - error_kw.setdefault('zorder', ezorder) - ecolor = kwargs.pop('ecolor', 'k') - capsize = kwargs.pop('capsize', mpl.rcParams["errorbar.capsize"]) - error_kw.setdefault('ecolor', ecolor) - error_kw.setdefault('capsize', capsize) - - # The keyword argument *orientation* is used by barh() to defer all - # logic and drawing to bar(). It is considered internal and is - # intentionally not mentioned in the docstring. - orientation = kwargs.pop('orientation', 'vertical') - _api.check_in_list(['vertical', 'horizontal'], orientation=orientation) - log = kwargs.pop('log', False) - label = kwargs.pop('label', '') - tick_labels = kwargs.pop('tick_label', None) - - y = bottom # Matches barh call signature. 
- if orientation == 'vertical': - if y is None: - y = 0 - else: # horizontal - if x is None: - x = 0 - - if orientation == 'vertical': - self._process_unit_info( - [("x", x), ("y", height)], kwargs, convert=False) - if log: - self.set_yscale('log', nonpositive='clip') - else: # horizontal - self._process_unit_info( - [("x", width), ("y", y)], kwargs, convert=False) - if log: - self.set_xscale('log', nonpositive='clip') - - # lets do some conversions now since some types cannot be - # subtracted uniformly - if self.xaxis is not None: - x0 = x - x = np.asarray(self.convert_xunits(x)) - width = self._convert_dx(width, x0, x, self.convert_xunits) - if xerr is not None: - xerr = self._convert_dx(xerr, x0, x, self.convert_xunits) - if self.yaxis is not None: - y0 = y - y = np.asarray(self.convert_yunits(y)) - height = self._convert_dx(height, y0, y, self.convert_yunits) - if yerr is not None: - yerr = self._convert_dx(yerr, y0, y, self.convert_yunits) - - x, height, width, y, linewidth, hatch = np.broadcast_arrays( - # Make args iterable too. - np.atleast_1d(x), height, width, y, linewidth, hatch) - - # Now that units have been converted, set the tick locations. - if orientation == 'vertical': - tick_label_axis = self.xaxis - tick_label_position = x - else: # horizontal - tick_label_axis = self.yaxis - tick_label_position = y - - if not isinstance(label, str) and np.iterable(label): - bar_container_label = '_nolegend_' - patch_labels = label - else: - bar_container_label = label - patch_labels = ['_nolegend_'] * len(x) - if len(patch_labels) != len(x): - raise ValueError(f'number of labels ({len(patch_labels)}) ' - f'does not match number of bars ({len(x)}).') - - linewidth = itertools.cycle(np.atleast_1d(linewidth)) - hatch = itertools.cycle(np.atleast_1d(hatch)) - color = itertools.chain(itertools.cycle(mcolors.to_rgba_array(color)), - # Fallback if color == "none". - itertools.repeat('none')) - if edgecolor is None: - edgecolor = itertools.repeat(None) - else: - edgecolor = itertools.chain( - itertools.cycle(mcolors.to_rgba_array(edgecolor)), - # Fallback if edgecolor == "none". 
-                    itertools.repeat('none'))
-
-        # We will now resolve the alignment and really have
-        # left, bottom, width, height vectors
-        _api.check_in_list(['center', 'edge'], align=align)
-        if align == 'center':
-            if orientation == 'vertical':
-                try:
-                    left = x - width / 2
-                except TypeError as e:
-                    raise TypeError(f'the dtypes of parameters x ({x.dtype}) '
-                                    f'and width ({width.dtype}) '
-                                    f'are incompatible') from e
-                bottom = y
-            else:  # horizontal
-                try:
-                    bottom = y - height / 2
-                except TypeError as e:
-                    raise TypeError(f'the dtypes of parameters y ({y.dtype}) '
-                                    f'and height ({height.dtype}) '
-                                    f'are incompatible') from e
-                left = x
-        else:  # edge
-            left = x
-            bottom = y
-
-        patches = []
-        args = zip(left, bottom, width, height, color, edgecolor, linewidth,
-                   hatch, patch_labels)
-        for l, b, w, h, c, e, lw, htch, lbl in args:
-            r = mpatches.Rectangle(
-                xy=(l, b), width=w, height=h,
-                facecolor=c,
-                edgecolor=e,
-                linewidth=lw,
-                label=lbl,
-                hatch=htch,
-                )
-            r._internal_update(kwargs)
-            r.get_path()._interpolation_steps = 100
-            if orientation == 'vertical':
-                r.sticky_edges.y.append(b)
-            else:  # horizontal
-                r.sticky_edges.x.append(l)
-            self.add_patch(r)
-            patches.append(r)
-
-        if xerr is not None or yerr is not None:
-            if orientation == 'vertical':
-                # using list comps rather than arrays to preserve unit info
-                ex = [l + 0.5 * w for l, w in zip(left, width)]
-                ey = [b + h for b, h in zip(bottom, height)]
-
-            else:  # horizontal
-                # using list comps rather than arrays to preserve unit info
-                ex = [l + w for l, w in zip(left, width)]
-                ey = [b + 0.5 * h for b, h in zip(bottom, height)]
-
-            error_kw.setdefault("label", '_nolegend_')
-
-            errorbar = self.errorbar(ex, ey,
-                                     yerr=yerr, xerr=xerr,
-                                     fmt='none', **error_kw)
-        else:
-            errorbar = None
-
-        self._request_autoscale_view()
-
-        if orientation == 'vertical':
-            datavalues = height
-        else:  # horizontal
-            datavalues = width
-
-        bar_container = BarContainer(patches, errorbar, datavalues=datavalues,
-                                     orientation=orientation,
-                                     label=bar_container_label)
-        self.add_container(bar_container)
-
-        if tick_labels is not None:
-            tick_labels = np.broadcast_to(tick_labels, len(patches))
-            tick_label_axis.set_ticks(tick_label_position)
-            tick_label_axis.set_ticklabels(tick_labels)
-
-        return bar_container
-
-    # @_preprocess_data() # let 'bar' do the unpacking..
-    @_docstring.dedent_interpd
-    def barh(self, y, width, height=0.8, left=None, *, align="center",
-             data=None, **kwargs):
-        r"""
-        Make a horizontal bar plot.
-
-        The bars are positioned at *y* with the given *align*\ment. Their
-        dimensions are given by *width* and *height*. The horizontal baseline
-        is *left* (default 0).
-
-        Many parameters can take either a single value applying to all bars
-        or a sequence of values, one for each bar.
-
-        Parameters
-        ----------
-        y : float or array-like
-            The y coordinates of the bars. See also *align* for the
-            alignment of the bars to the coordinates.
-
-        width : float or array-like
-            The width(s) of the bars.
-
-        height : float or array-like, default: 0.8
-            The heights of the bars.
-
-        left : float or array-like, default: 0
-            The x coordinates of the left side(s) of the bars.
-
-        align : {'center', 'edge'}, default: 'center'
-            Alignment of the bars to the *y* coordinates:
-
-            - 'center': Center the bars on the *y* positions.
-            - 'edge': Align the bottom edges of the bars with the *y*
-              positions.
-
-            To align the bars on the top edge pass a negative *height* and
-            ``align='edge'``.
- - Returns - ------- - `.BarContainer` - Container with all the bars and optionally errorbars. - - Other Parameters - ---------------- - color : color or list of color, optional - The colors of the bar faces. - - edgecolor : color or list of color, optional - The colors of the bar edges. - - linewidth : float or array-like, optional - Width of the bar edge(s). If 0, don't draw edges. - - tick_label : str or list of str, optional - The tick labels of the bars. - Default: None (Use default numeric labels.) - - label : str or list of str, optional - A single label is attached to the resulting `.BarContainer` as a - label for the whole dataset. - If a list is provided, it must be the same length as *y* and - labels the individual bars. Repeated labels are not de-duplicated - and will cause repeated label entries, so this is best used when - bars also differ in style (e.g., by passing a list to *color*.) - - xerr, yerr : float or array-like of shape(N,) or shape(2, N), optional - If not *None*, add horizontal / vertical errorbars to the bar tips. - The values are +/- sizes relative to the data: - - - scalar: symmetric +/- values for all bars - - shape(N,): symmetric +/- values for each bar - - shape(2, N): Separate - and + values for each bar. First row - contains the lower errors, the second row contains the upper - errors. - - *None*: No errorbar. (default) - - See :doc:`/gallery/statistics/errorbar_features` for an example on - the usage of *xerr* and *yerr*. - - ecolor : color or list of color, default: 'black' - The line color of the errorbars. - - capsize : float, default: :rc:`errorbar.capsize` - The length of the error bar caps in points. - - error_kw : dict, optional - Dictionary of keyword arguments to be passed to the - `~.Axes.errorbar` method. Values of *ecolor* or *capsize* defined - here take precedence over the independent keyword arguments. - - log : bool, default: False - If ``True``, set the x-axis to be log scale. - - data : indexable object, optional - If given, all parameters also accept a string ``s``, which is - interpreted as ``data[s]`` (unless this raises an exception). - - **kwargs : `.Rectangle` properties - - %(Rectangle:kwdoc)s - - See Also - -------- - bar : Plot a vertical bar plot. - - Notes - ----- - Stacked bars can be achieved by passing individual *left* values per - bar. See - :doc:`/gallery/lines_bars_and_markers/horizontal_barchart_distribution`. - """ - kwargs.setdefault('orientation', 'horizontal') - patches = self.bar(x=left, height=height, width=width, bottom=y, - align=align, data=data, **kwargs) - return patches - - def bar_label(self, container, labels=None, *, fmt="%g", label_type="edge", - padding=0, **kwargs): - """ - Label a bar plot. - - Adds labels to bars in the given `.BarContainer`. - You may need to adjust the axis limits to fit the labels. - - Parameters - ---------- - container : `.BarContainer` - Container with all the bars and optionally errorbars, likely - returned from `.bar` or `.barh`. - - labels : array-like, optional - A list of label texts, that should be displayed. If not given, the - label texts will be the data values formatted with *fmt*. - - fmt : str or callable, default: '%g' - An unnamed %-style or {}-style format string for the label or a - function to call with the value as the first argument. - When *fmt* is a string and can be interpreted in both formats, - %-style takes precedence over {}-style. - - .. versionadded:: 3.7 - Support for {}-style format string and callables. 
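A short usage sketch for `bar`/`barh` as documented above; sample data is invented, and `barh` simply defers to `bar` with `orientation='horizontal'`:

```python
import matplotlib.pyplot as plt

labels = ['a', 'b', 'c']
values = [3, 7, 5]

fig, (ax1, ax2) = plt.subplots(1, 2)
# Vertical bars; yerr adds symmetric errorbars, capsize sets cap length.
ax1.bar(labels, values, yerr=[0.5, 1.0, 0.3], capsize=4)
# Horizontal bars: *width* carries the data values and *height*
# controls the bar thickness.
ax2.barh(labels, values, height=0.6)
plt.show()
```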
- - label_type : {'edge', 'center'}, default: 'edge' - The label type. Possible values: - - - 'edge': label placed at the end-point of the bar segment, and the - value displayed will be the position of that end-point. - - 'center': label placed in the center of the bar segment, and the - value displayed will be the length of that segment. - (useful for stacked bars, i.e., - :doc:`/gallery/lines_bars_and_markers/bar_label_demo`) - - padding : float, default: 0 - Distance of label from the end of the bar, in points. - - **kwargs - Any remaining keyword arguments are passed through to - `.Axes.annotate`. The alignment parameters ( - *horizontalalignment* / *ha*, *verticalalignment* / *va*) are - not supported because the labels are automatically aligned to - the bars. - - Returns - ------- - list of `.Text` - A list of `.Text` instances for the labels. - """ - for key in ['horizontalalignment', 'ha', 'verticalalignment', 'va']: - if key in kwargs: - raise ValueError( - f"Passing {key!r} to bar_label() is not supported.") - - a, b = self.yaxis.get_view_interval() - y_inverted = a > b - c, d = self.xaxis.get_view_interval() - x_inverted = c > d - - # want to know whether to put label on positive or negative direction - # cannot use np.sign here because it will return 0 if x == 0 - def sign(x): - return 1 if x >= 0 else -1 - - _api.check_in_list(['edge', 'center'], label_type=label_type) - - bars = container.patches - errorbar = container.errorbar - datavalues = container.datavalues - orientation = container.orientation - - if errorbar: - # check "ErrorbarContainer" for the definition of these elements - lines = errorbar.lines # attribute of "ErrorbarContainer" (tuple) - barlinecols = lines[2] # 0: data_line, 1: caplines, 2: barlinecols - barlinecol = barlinecols[0] # the "LineCollection" of error bars - errs = barlinecol.get_segments() - else: - errs = [] - - if labels is None: - labels = [] - - annotations = [] - - for bar, err, dat, lbl in itertools.zip_longest( - bars, errs, datavalues, labels - ): - (x0, y0), (x1, y1) = bar.get_bbox().get_points() - xc, yc = (x0 + x1) / 2, (y0 + y1) / 2 - - if orientation == "vertical": - extrema = max(y0, y1) if dat >= 0 else min(y0, y1) - length = abs(y0 - y1) - else: # horizontal - extrema = max(x0, x1) if dat >= 0 else min(x0, x1) - length = abs(x0 - x1) - - if err is None or np.size(err) == 0: - endpt = extrema - elif orientation == "vertical": - endpt = err[:, 1].max() if dat >= 0 else err[:, 1].min() - else: # horizontal - endpt = err[:, 0].max() if dat >= 0 else err[:, 0].min() - - if label_type == "center": - value = sign(dat) * length - else: # edge - value = extrema - - if label_type == "center": - xy = (0.5, 0.5) - kwargs["xycoords"] = ( - lambda r, b=bar: - mtransforms.Bbox.intersection( - b.get_window_extent(r), b.get_clip_box() - ) or mtransforms.Bbox.null() - ) - else: # edge - if orientation == "vertical": - xy = xc, endpt - else: # horizontal - xy = endpt, yc - - if orientation == "vertical": - y_direction = -1 if y_inverted else 1 - xytext = 0, y_direction * sign(dat) * padding - else: # horizontal - x_direction = -1 if x_inverted else 1 - xytext = x_direction * sign(dat) * padding, 0 - - if label_type == "center": - ha, va = "center", "center" - else: # edge - if orientation == "vertical": - ha = 'center' - if y_inverted: - va = 'top' if dat > 0 else 'bottom' # also handles NaN - else: - va = 'top' if dat < 0 else 'bottom' # also handles NaN - else: # horizontal - if x_inverted: - ha = 'right' if dat > 0 else 'left' # also handles NaN - 
else: - ha = 'right' if dat < 0 else 'left' # also handles NaN - va = 'center' - - if np.isnan(dat): - lbl = '' - - if lbl is None: - if isinstance(fmt, str): - lbl = cbook._auto_format_str(fmt, value) - elif callable(fmt): - lbl = fmt(value) - else: - raise TypeError("fmt must be a str or callable") - annotation = self.annotate(lbl, - xy, xytext, textcoords="offset points", - ha=ha, va=va, **kwargs) - annotations.append(annotation) - - return annotations - - @_preprocess_data() - @_docstring.dedent_interpd - def broken_barh(self, xranges, yrange, **kwargs): - """ - Plot a horizontal sequence of rectangles. - - A rectangle is drawn for each element of *xranges*. All rectangles - have the same vertical position and size defined by *yrange*. - - Parameters - ---------- - xranges : sequence of tuples (*xmin*, *xwidth*) - The x-positions and extents of the rectangles. For each tuple - (*xmin*, *xwidth*) a rectangle is drawn from *xmin* to *xmin* + - *xwidth*. - yrange : (*ymin*, *yheight*) - The y-position and extent for all the rectangles. - - Returns - ------- - `~.collections.PolyCollection` - - Other Parameters - ---------------- - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - **kwargs : `.PolyCollection` properties - - Each *kwarg* can be either a single argument applying to all - rectangles, e.g.:: - - facecolors='black' - - or a sequence of arguments over which is cycled, e.g.:: - - facecolors=('black', 'blue') - - would create interleaving black and blue rectangles. - - Supported keywords: - - %(PolyCollection:kwdoc)s - """ - # process the unit information - xdata = cbook._safe_first_finite(xranges) if len(xranges) else None - ydata = cbook._safe_first_finite(yrange) if len(yrange) else None - self._process_unit_info( - [("x", xdata), ("y", ydata)], kwargs, convert=False) - - vertices = [] - y0, dy = yrange - y0, y1 = self.convert_yunits((y0, y0 + dy)) - for xr in xranges: # convert the absolute values, not the x and dx - try: - x0, dx = xr - except Exception: - raise ValueError( - "each range in xrange must be a sequence with two " - "elements (i.e. xrange must be an (N, 2) array)") from None - x0, x1 = self.convert_xunits((x0, x0 + dx)) - vertices.append([(x0, y0), (x0, y1), (x1, y1), (x1, y0)]) - - col = mcoll.PolyCollection(np.array(vertices), **kwargs) - self.add_collection(col, autolim=True) - self._request_autoscale_view() - - return col - - @_preprocess_data() - @_api.delete_parameter("3.6", "use_line_collection") - def stem(self, *args, linefmt=None, markerfmt=None, basefmt=None, bottom=0, - label=None, use_line_collection=True, orientation='vertical'): - """ - Create a stem plot. - - A stem plot draws lines perpendicular to a baseline at each location - *locs* from the baseline to *heads*, and places a marker there. For - vertical stem plots (the default), the *locs* are *x* positions, and - the *heads* are *y* values. For horizontal stem plots, the *locs* are - *y* positions, and the *heads* are *x* values. - - Call signature:: - - stem([locs,] heads, linefmt=None, markerfmt=None, basefmt=None) - - The *locs*-positions are optional. *linefmt* may be provided as - positional, but all other formats must be provided as keyword - arguments. - - Parameters - ---------- - locs : array-like, default: (0, 1, ..., len(heads) - 1) - For vertical stem plots, the x-positions of the stems. - For horizontal stem plots, the y-positions of the stems. - - heads : array-like - For vertical stem plots, the y-values of the stem heads. 
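A sketch of `bar_label` as defined above (illustrative data; the {}-style *fmt* assumes matplotlib 3.7+ per the versionadded note in the docstring):

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
bars = ax.bar(['x', 'y', 'z'], [2, -1, 4])
# 'edge' annotates the end-point of each bar; padding is in points.
ax.bar_label(bars, fmt='%.1f', label_type='edge', padding=3)
# 'center' annotates the segment length, useful for stacked bars.
ax.bar_label(bars, label_type='center', fmt='{:.0f}')
plt.show()
```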
- For horizontal stem plots, the x-values of the stem heads. - - linefmt : str, optional - A string defining the color and/or linestyle of the vertical lines: - - ========= ============= - Character Line Style - ========= ============= - ``'-'`` solid line - ``'--'`` dashed line - ``'-.'`` dash-dot line - ``':'`` dotted line - ========= ============= - - Default: 'C0-', i.e. solid line with the first color of the color - cycle. - - Note: Markers specified through this parameter (e.g. 'x') will be - silently ignored (unless using ``use_line_collection=False``). - Instead, markers should be specified using *markerfmt*. - - markerfmt : str, optional - A string defining the color and/or shape of the markers at the stem - heads. If the marker is not given, use the marker 'o', i.e. filled - circles. If the color is not given, use the color from *linefmt*. - - basefmt : str, default: 'C3-' ('C2-' in classic mode) - A format string defining the properties of the baseline. - - orientation : {'vertical', 'horizontal'}, default: 'vertical' - If 'vertical', will produce a plot with stems oriented vertically, - If 'horizontal', the stems will be oriented horizontally. - - bottom : float, default: 0 - The y/x-position of the baseline (depending on orientation). - - label : str, default: None - The label to use for the stems in legends. - - use_line_collection : bool, default: True - *Deprecated since 3.6* - - If ``True``, store and plot the stem lines as a - `~.collections.LineCollection` instead of individual lines, which - significantly increases performance. If ``False``, defaults to the - old behavior of using a list of `.Line2D` objects. - - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - - Returns - ------- - `.StemContainer` - The container may be treated like a tuple - (*markerline*, *stemlines*, *baseline*) - - Notes - ----- - .. seealso:: - The MATLAB function - `stem `_ - which inspired this method. 
- """ - if not 1 <= len(args) <= 3: - raise TypeError('stem expected between 1 or 3 positional ' - 'arguments, got {}'.format(args)) - _api.check_in_list(['horizontal', 'vertical'], orientation=orientation) - - if len(args) == 1: - heads, = args - locs = np.arange(len(heads)) - args = () - elif isinstance(args[1], str): - heads, *args = args - locs = np.arange(len(heads)) - else: - locs, heads, *args = args - - if orientation == 'vertical': - locs, heads = self._process_unit_info([("x", locs), ("y", heads)]) - else: # horizontal - heads, locs = self._process_unit_info([("x", heads), ("y", locs)]) - - # resolve line format - if linefmt is None: - linefmt = args[0] if len(args) > 0 else "C0-" - linestyle, linemarker, linecolor = _process_plot_format(linefmt) - - # resolve marker format - if markerfmt is None: - # if not given as kwarg, fall back to 'o' - markerfmt = "o" - if markerfmt == '': - markerfmt = ' ' # = empty line style; '' would resolve rcParams - markerstyle, markermarker, markercolor = \ - _process_plot_format(markerfmt) - if markermarker is None: - markermarker = 'o' - if markerstyle is None: - markerstyle = 'None' - if markercolor is None: - markercolor = linecolor - - # resolve baseline format - if basefmt is None: - basefmt = ("C2-" if mpl.rcParams["_internal.classic_mode"] else - "C3-") - basestyle, basemarker, basecolor = _process_plot_format(basefmt) - - # New behaviour in 3.1 is to use a LineCollection for the stemlines - if use_line_collection: - if linestyle is None: - linestyle = mpl.rcParams['lines.linestyle'] - xlines = self.vlines if orientation == "vertical" else self.hlines - stemlines = xlines( - locs, bottom, heads, - colors=linecolor, linestyles=linestyle, label="_nolegend_") - # Old behaviour is to plot each of the lines individually - else: - stemlines = [] - for loc, head in zip(locs, heads): - if orientation == 'horizontal': - xs = [bottom, head] - ys = [loc, loc] - else: - xs = [loc, loc] - ys = [bottom, head] - l, = self.plot(xs, ys, - color=linecolor, linestyle=linestyle, - marker=linemarker, label="_nolegend_") - stemlines.append(l) - - if orientation == 'horizontal': - marker_x = heads - marker_y = locs - baseline_x = [bottom, bottom] - baseline_y = [np.min(locs), np.max(locs)] - else: - marker_x = locs - marker_y = heads - baseline_x = [np.min(locs), np.max(locs)] - baseline_y = [bottom, bottom] - - markerline, = self.plot(marker_x, marker_y, - color=markercolor, linestyle=markerstyle, - marker=markermarker, label="_nolegend_") - - baseline, = self.plot(baseline_x, baseline_y, - color=basecolor, linestyle=basestyle, - marker=basemarker, label="_nolegend_") - - stem_container = StemContainer((markerline, stemlines, baseline), - label=label) - self.add_container(stem_container) - return stem_container - - @_preprocess_data(replace_names=["x", "explode", "labels", "colors"]) - def pie(self, x, explode=None, labels=None, colors=None, - autopct=None, pctdistance=0.6, shadow=False, labeldistance=1.1, - startangle=0, radius=1, counterclock=True, - wedgeprops=None, textprops=None, center=(0, 0), - frame=False, rotatelabels=False, *, normalize=True, hatch=None): - """ - Plot a pie chart. - - Make a pie chart of array *x*. The fractional area of each wedge is - given by ``x/sum(x)``. - - The wedges are plotted counterclockwise, by default starting from the - x-axis. - - Parameters - ---------- - x : 1D array-like - The wedge sizes. 
- - explode : array-like, default: None - If not *None*, is a ``len(x)`` array which specifies the fraction - of the radius with which to offset each wedge. - - labels : list, default: None - A sequence of strings providing the labels for each wedge - - colors : array-like, default: None - A sequence of colors through which the pie chart will cycle. If - *None*, will use the colors in the currently active cycle. - - hatch : str or list, default: None - Hatching pattern applied to all pie wedges or sequence of patterns - through which the chart will cycle. For a list of valid patterns, - see :doc:`/gallery/shapes_and_collections/hatch_style_reference`. - - .. versionadded:: 3.7 - - autopct : None or str or callable, default: None - If not *None*, *autopct* is a string or function used to label the - wedges with their numeric value. The label will be placed inside - the wedge. If *autopct* is a format string, the label will be - ``fmt % pct``. If *autopct* is a function, then it will be called. - - pctdistance : float, default: 0.6 - The relative distance along the radius at which the text - generated by *autopct* is drawn. To draw the text outside the pie, - set *pctdistance* > 1. This parameter is ignored if *autopct* is - ``None``. - - labeldistance : float or None, default: 1.1 - The relative distance along the radius at which the labels are - drawn. To draw the labels inside the pie, set *labeldistance* < 1. - If set to ``None``, labels are not drawn but are still stored for - use in `.legend`. - - shadow : bool, default: False - Draw a shadow beneath the pie. - - startangle : float, default: 0 degrees - The angle by which the start of the pie is rotated, - counterclockwise from the x-axis. - - radius : float, default: 1 - The radius of the pie. - - counterclock : bool, default: True - Specify fractions direction, clockwise or counterclockwise. - - wedgeprops : dict, default: None - Dict of arguments passed to each `.patches.Wedge` of the pie. - For example, ``wedgeprops = {'linewidth': 3}`` sets the width of - the wedge border lines equal to 3. By default, ``clip_on=False``. - When there is a conflict between these properties and other - keywords, properties passed to *wedgeprops* take precedence. - - textprops : dict, default: None - Dict of arguments to pass to the text objects. - - center : (float, float), default: (0, 0) - The coordinates of the center of the chart. - - frame : bool, default: False - Plot Axes frame with the chart if true. - - rotatelabels : bool, default: False - Rotate each label to the angle of the corresponding slice if true. - - normalize : bool, default: True - When *True*, always make a full pie by normalizing x so that - ``sum(x) == 1``. *False* makes a partial pie if ``sum(x) <= 1`` - and raises a `ValueError` for ``sum(x) > 1``. - - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - - Returns - ------- - patches : list - A sequence of `matplotlib.patches.Wedge` instances - - texts : list - A list of the label `.Text` instances. - - autotexts : list - A list of `.Text` instances for the numeric labels. This will only - be returned if the parameter *autopct* is not *None*. - - Notes - ----- - The pie chart will probably look best if the figure and Axes are - square, or the Axes aspect is equal. - This method sets the aspect ratio of the axis to "equal". - The Axes aspect ratio can be controlled with `.Axes.set_aspect`. 
- """ - self.set_aspect('equal') - # The use of float32 is "historical", but can't be changed without - # regenerating the test baselines. - x = np.asarray(x, np.float32) - if x.ndim > 1: - raise ValueError("x must be 1D") - - if np.any(x < 0): - raise ValueError("Wedge sizes 'x' must be non negative values") - - sx = x.sum() - - if normalize: - x = x / sx - elif sx > 1: - raise ValueError('Cannot plot an unnormalized pie with sum(x) > 1') - if labels is None: - labels = [''] * len(x) - if explode is None: - explode = [0] * len(x) - if len(x) != len(labels): - raise ValueError("'label' must be of length 'x'") - if len(x) != len(explode): - raise ValueError("'explode' must be of length 'x'") - if colors is None: - get_next_color = self._get_patches_for_fill.get_next_color - else: - color_cycle = itertools.cycle(colors) - - def get_next_color(): - return next(color_cycle) - - hatch_cycle = itertools.cycle(np.atleast_1d(hatch)) - - _api.check_isinstance(Number, radius=radius, startangle=startangle) - if radius <= 0: - raise ValueError(f'radius must be a positive number, not {radius}') - - # Starting theta1 is the start fraction of the circle - theta1 = startangle / 360 - - if wedgeprops is None: - wedgeprops = {} - if textprops is None: - textprops = {} - - texts = [] - slices = [] - autotexts = [] - - for frac, label, expl in zip(x, labels, explode): - x, y = center - theta2 = (theta1 + frac) if counterclock else (theta1 - frac) - thetam = 2 * np.pi * 0.5 * (theta1 + theta2) - x += expl * math.cos(thetam) - y += expl * math.sin(thetam) - - w = mpatches.Wedge((x, y), radius, 360. * min(theta1, theta2), - 360. * max(theta1, theta2), - facecolor=get_next_color(), - hatch=next(hatch_cycle), - clip_on=False, - label=label) - w.set(**wedgeprops) - slices.append(w) - self.add_patch(w) - - if shadow: - # Make sure to add a shadow after the call to add_patch so the - # figure and transform props will be set. - shad = mpatches.Shadow(w, -0.02, -0.02, label='_nolegend_') - self.add_patch(shad) - - if labeldistance is not None: - xt = x + labeldistance * radius * math.cos(thetam) - yt = y + labeldistance * radius * math.sin(thetam) - label_alignment_h = 'left' if xt > 0 else 'right' - label_alignment_v = 'center' - label_rotation = 'horizontal' - if rotatelabels: - label_alignment_v = 'bottom' if yt > 0 else 'top' - label_rotation = (np.rad2deg(thetam) - + (0 if xt > 0 else 180)) - t = self.text(xt, yt, label, - clip_on=False, - horizontalalignment=label_alignment_h, - verticalalignment=label_alignment_v, - rotation=label_rotation, - size=mpl.rcParams['xtick.labelsize']) - t.set(**textprops) - texts.append(t) - - if autopct is not None: - xt = x + pctdistance * radius * math.cos(thetam) - yt = y + pctdistance * radius * math.sin(thetam) - if isinstance(autopct, str): - s = autopct % (100. * frac) - elif callable(autopct): - s = autopct(100. 
* frac) - else: - raise TypeError( - 'autopct must be callable or a format string') - t = self.text(xt, yt, s, - clip_on=False, - horizontalalignment='center', - verticalalignment='center') - t.set(**textprops) - autotexts.append(t) - - theta1 = theta2 - - if frame: - self._request_autoscale_view() - else: - self.set(frame_on=False, xticks=[], yticks=[], - xlim=(-1.25 + center[0], 1.25 + center[0]), - ylim=(-1.25 + center[1], 1.25 + center[1])) - - if autopct is None: - return slices, texts - else: - return slices, texts, autotexts - - @staticmethod - def _errorevery_to_mask(x, errorevery): - """ - Normalize `errorbar`'s *errorevery* to be a boolean mask for data *x*. - - This function is split out to be usable both by 2D and 3D errorbars. - """ - if isinstance(errorevery, Integral): - errorevery = (0, errorevery) - if isinstance(errorevery, tuple): - if (len(errorevery) == 2 and - isinstance(errorevery[0], Integral) and - isinstance(errorevery[1], Integral)): - errorevery = slice(errorevery[0], None, errorevery[1]) - else: - raise ValueError( - f'{errorevery=!r} is a not a tuple of two integers') - elif isinstance(errorevery, slice): - pass - elif not isinstance(errorevery, str) and np.iterable(errorevery): - try: - x[errorevery] # fancy indexing - except (ValueError, IndexError) as err: - raise ValueError( - f"{errorevery=!r} is iterable but not a valid NumPy fancy " - "index to match 'xerr'/'yerr'") from err - else: - raise ValueError(f"{errorevery=!r} is not a recognized value") - everymask = np.zeros(len(x), bool) - everymask[errorevery] = True - return everymask - - @_preprocess_data(replace_names=["x", "y", "xerr", "yerr"], - label_namer="y") - @_docstring.dedent_interpd - def errorbar(self, x, y, yerr=None, xerr=None, - fmt='', ecolor=None, elinewidth=None, capsize=None, - barsabove=False, lolims=False, uplims=False, - xlolims=False, xuplims=False, errorevery=1, capthick=None, - **kwargs): - """ - Plot y versus x as lines and/or markers with attached errorbars. - - *x*, *y* define the data locations, *xerr*, *yerr* define the errorbar - sizes. By default, this draws the data markers/lines as well the - errorbars. Use fmt='none' to draw errorbars without any data markers. - - .. versionadded:: 3.7 - Caps and error lines are drawn in polar coordinates on polar plots. - - - Parameters - ---------- - x, y : float or array-like - The data positions. - - xerr, yerr : float or array-like, shape(N,) or shape(2, N), optional - The errorbar sizes: - - - scalar: Symmetric +/- values for all data points. - - shape(N,): Symmetric +/-values for each data point. - - shape(2, N): Separate - and + values for each bar. First row - contains the lower errors, the second row contains the upper - errors. - - *None*: No errorbar. - - All values must be >= 0. - - See :doc:`/gallery/statistics/errorbar_features` - for an example on the usage of ``xerr`` and ``yerr``. - - fmt : str, default: '' - The format for the data points / data lines. See `.plot` for - details. - - Use 'none' (case-insensitive) to plot errorbars without any data - markers. - - ecolor : color, default: None - The color of the errorbar lines. If None, use the color of the - line connecting the markers. - - elinewidth : float, default: None - The linewidth of the errorbar lines. If None, the linewidth of - the current style is used. - - capsize : float, default: :rc:`errorbar.capsize` - The length of the error bar caps in points. - - capthick : float, default: None - An alias to the keyword argument *markeredgewidth* (a.k.a. *mew*). 
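A small, illustrative `pie` example exercising the parameters documented above (values and labels are made up):

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# autopct labels each wedge with its percentage; explode offsets the
# second wedge by 10% of the radius along its bisector.
ax.pie([30, 45, 25], labels=['A', 'B', 'C'], autopct='%1.1f%%',
       explode=(0, 0.1, 0), startangle=90, counterclock=False)
plt.show()
```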
- This setting is a more sensible name for the property that - controls the thickness of the error bar cap in points. For - backwards compatibility, if *mew* or *markeredgewidth* are given, - then they will over-ride *capthick*. This may change in future - releases. - - barsabove : bool, default: False - If True, will plot the errorbars above the plot - symbols. Default is below. - - lolims, uplims, xlolims, xuplims : bool, default: False - These arguments can be used to indicate that a value gives only - upper/lower limits. In that case a caret symbol is used to - indicate this. *lims*-arguments may be scalars, or array-likes of - the same length as *xerr* and *yerr*. To use limits with inverted - axes, `~.Axes.set_xlim` or `~.Axes.set_ylim` must be called before - :meth:`errorbar`. Note the tricky parameter names: setting e.g. - *lolims* to True means that the y-value is a *lower* limit of the - True value, so, only an *upward*-pointing arrow will be drawn! - - errorevery : int or (int, int), default: 1 - draws error bars on a subset of the data. *errorevery* =N draws - error bars on the points (x[::N], y[::N]). - *errorevery* =(start, N) draws error bars on the points - (x[start::N], y[start::N]). e.g. errorevery=(6, 3) - adds error bars to the data at (x[6], x[9], x[12], x[15], ...). - Used to avoid overlapping error bars when two series share x-axis - values. - - Returns - ------- - `.ErrorbarContainer` - The container contains: - - - plotline: `~matplotlib.lines.Line2D` instance of x, y plot markers - and/or line. - - caplines: A tuple of `~matplotlib.lines.Line2D` instances of the error - bar caps. - - barlinecols: A tuple of `.LineCollection` with the horizontal and - vertical error ranges. - - Other Parameters - ---------------- - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - - **kwargs - All other keyword arguments are passed on to the `~.Axes.plot` call - drawing the markers. For example, this code makes big red squares - with thick green edges:: - - x, y, yerr = rand(3, 10) - errorbar(x, y, yerr, marker='s', mfc='red', - mec='green', ms=20, mew=4) - - where *mfc*, *mec*, *ms* and *mew* are aliases for the longer - property names, *markerfacecolor*, *markeredgecolor*, *markersize* - and *markeredgewidth*. - - Valid kwargs for the marker properties are: - - - *dashes* - - *dash_capstyle* - - *dash_joinstyle* - - *drawstyle* - - *fillstyle* - - *linestyle* - - *marker* - - *markeredgecolor* - - *markeredgewidth* - - *markerfacecolor* - - *markerfacecoloralt* - - *markersize* - - *markevery* - - *solid_capstyle* - - *solid_joinstyle* - - Refer to the corresponding `.Line2D` property for more details: - - %(Line2D:kwdoc)s - """ - kwargs = cbook.normalize_kwargs(kwargs, mlines.Line2D) - # Drop anything that comes in as None to use the default instead. - kwargs = {k: v for k, v in kwargs.items() if v is not None} - kwargs.setdefault('zorder', 2) - - # Casting to object arrays preserves units. - if not isinstance(x, np.ndarray): - x = np.asarray(x, dtype=object) - if not isinstance(y, np.ndarray): - y = np.asarray(y, dtype=object) - - def _upcast_err(err): - """ - Safely handle tuple of containers that carry units. - - This function covers the case where the input to the xerr/yerr is a - length 2 tuple of equal length ndarray-subclasses that carry the - unit information in the container. - - If we have a tuple of nested numpy array (subclasses), we defer - coercing the units to be consistent to the underlying unit - library (and implicitly the broadcasting). 
- - Otherwise, fallback to casting to an object array. - """ - - if ( - # make sure it is not a scalar - np.iterable(err) and - # and it is not empty - len(err) > 0 and - # and the first element is an array sub-class use - # safe_first_element because getitem is index-first not - # location first on pandas objects so err[0] almost always - # fails. - isinstance(cbook._safe_first_finite(err), np.ndarray) - ): - # Get the type of the first element - atype = type(cbook._safe_first_finite(err)) - # Promote the outer container to match the inner container - if atype is np.ndarray: - # Converts using np.asarray, because data cannot - # be directly passed to init of np.ndarray - return np.asarray(err, dtype=object) - # If atype is not np.ndarray, directly pass data to init. - # This works for types such as unyts and astropy units - return atype(err) - # Otherwise wrap it in an object array - return np.asarray(err, dtype=object) - - if xerr is not None and not isinstance(xerr, np.ndarray): - xerr = _upcast_err(xerr) - if yerr is not None and not isinstance(yerr, np.ndarray): - yerr = _upcast_err(yerr) - x, y = np.atleast_1d(x, y) # Make sure all the args are iterable. - if len(x) != len(y): - raise ValueError("'x' and 'y' must have the same size") - - everymask = self._errorevery_to_mask(x, errorevery) - - label = kwargs.pop("label", None) - kwargs['label'] = '_nolegend_' - - # Create the main line and determine overall kwargs for child artists. - # We avoid calling self.plot() directly, or self._get_lines(), because - # that would call self._process_unit_info again, and do other indirect - # data processing. - (data_line, base_style), = self._get_lines._plot_args( - (x, y) if fmt == '' else (x, y, fmt), kwargs, return_kwargs=True) - - # Do this after creating `data_line` to avoid modifying `base_style`. - if barsabove: - data_line.set_zorder(kwargs['zorder'] - .1) - else: - data_line.set_zorder(kwargs['zorder'] + .1) - - # Add line to plot, or throw it away and use it to determine kwargs. - if fmt.lower() != 'none': - self.add_line(data_line) - else: - data_line = None - # Remove alpha=0 color that _get_lines._plot_args returns for - # 'none' format, and replace it with user-specified color, if - # supplied. - base_style.pop('color') - if 'color' in kwargs: - base_style['color'] = kwargs.pop('color') - - if 'color' not in base_style: - base_style['color'] = 'C0' - if ecolor is None: - ecolor = base_style['color'] - - # Eject any line-specific information from format string, as it's not - # needed for bars or caps. - for key in ['marker', 'markersize', 'markerfacecolor', - 'markerfacecoloralt', - 'markeredgewidth', 'markeredgecolor', 'markevery', - 'linestyle', 'fillstyle', 'drawstyle', 'dash_capstyle', - 'dash_joinstyle', 'solid_capstyle', 'solid_joinstyle', - 'dashes']: - base_style.pop(key, None) - - # Make the style dict for the line collections (the bars). - eb_lines_style = {**base_style, 'color': ecolor} - - if elinewidth is not None: - eb_lines_style['linewidth'] = elinewidth - elif 'linewidth' in kwargs: - eb_lines_style['linewidth'] = kwargs['linewidth'] - - for key in ('transform', 'alpha', 'zorder', 'rasterized'): - if key in kwargs: - eb_lines_style[key] = kwargs[key] - - # Make the style dict for caps (the "hats"). - eb_cap_style = {**base_style, 'linestyle': 'none'} - if capsize is None: - capsize = mpl.rcParams["errorbar.capsize"] - if capsize > 0: - eb_cap_style['markersize'] = 2. 
* capsize - if capthick is not None: - eb_cap_style['markeredgewidth'] = capthick - - # For backwards-compat, allow explicit setting of - # 'markeredgewidth' to over-ride capthick. - for key in ('markeredgewidth', 'transform', 'alpha', - 'zorder', 'rasterized'): - if key in kwargs: - eb_cap_style[key] = kwargs[key] - eb_cap_style['color'] = ecolor - - barcols = [] - caplines = {'x': [], 'y': []} - - # Vectorized fancy-indexer. - def apply_mask(arrays, mask): - return [array[mask] for array in arrays] - - # dep: dependent dataset, indep: independent dataset - for (dep_axis, dep, err, lolims, uplims, indep, lines_func, - marker, lomarker, himarker) in [ - ("x", x, xerr, xlolims, xuplims, y, self.hlines, - "|", mlines.CARETRIGHTBASE, mlines.CARETLEFTBASE), - ("y", y, yerr, lolims, uplims, x, self.vlines, - "_", mlines.CARETUPBASE, mlines.CARETDOWNBASE), - ]: - if err is None: - continue - lolims = np.broadcast_to(lolims, len(dep)).astype(bool) - uplims = np.broadcast_to(uplims, len(dep)).astype(bool) - try: - np.broadcast_to(err, (2, len(dep))) - except ValueError: - raise ValueError( - f"'{dep_axis}err' (shape: {np.shape(err)}) must be a " - f"scalar or a 1D or (2, n) array-like whose shape matches " - f"'{dep_axis}' (shape: {np.shape(dep)})") from None - res = np.zeros(err.shape, dtype=bool) # Default in case of nan - if np.any(np.less(err, -err, out=res, where=(err == err))): - # like err<0, but also works for timedelta and nan. - raise ValueError( - f"'{dep_axis}err' must not contain negative values") - # This is like - # elow, ehigh = np.broadcast_to(...) - # return dep - elow * ~lolims, dep + ehigh * ~uplims - # except that broadcast_to would strip units. - low, high = dep + np.row_stack([-(1 - lolims), 1 - uplims]) * err - barcols.append(lines_func( - *apply_mask([indep, low, high], everymask), **eb_lines_style)) - if self.name == "polar" and dep_axis == "x": - for b in barcols: - for p in b.get_paths(): - p._interpolation_steps = 2 - # Normal errorbars for points without upper/lower limits. - nolims = ~(lolims | uplims) - if nolims.any() and capsize > 0: - indep_masked, lo_masked, hi_masked = apply_mask( - [indep, low, high], nolims & everymask) - for lh_masked in [lo_masked, hi_masked]: - # Since this has to work for x and y as dependent data, we - # first set both x and y to the independent variable and - # overwrite the respective dependent data in a second step. - line = mlines.Line2D(indep_masked, indep_masked, - marker=marker, **eb_cap_style) - line.set(**{f"{dep_axis}data": lh_masked}) - caplines[dep_axis].append(line) - for idx, (lims, hl) in enumerate([(lolims, high), (uplims, low)]): - if not lims.any(): - continue - hlmarker = ( - himarker - if getattr(self, f"{dep_axis}axis").get_inverted() ^ idx - else lomarker) - x_masked, y_masked, hl_masked = apply_mask( - [x, y, hl], lims & everymask) - # As above, we set the dependent data in a second step. 
- line = mlines.Line2D(x_masked, y_masked, - marker=hlmarker, **eb_cap_style) - line.set(**{f"{dep_axis}data": hl_masked}) - caplines[dep_axis].append(line) - if capsize > 0: - caplines[dep_axis].append(mlines.Line2D( - x_masked, y_masked, marker=marker, **eb_cap_style)) - if self.name == 'polar': - for axis in caplines: - for l in caplines[axis]: - # Rotate caps to be perpendicular to the error bars - for theta, r in zip(l.get_xdata(), l.get_ydata()): - rotation = mtransforms.Affine2D().rotate(theta) - if axis == 'y': - rotation.rotate(-np.pi / 2) - ms = mmarkers.MarkerStyle(marker=marker, - transform=rotation) - self.add_line(mlines.Line2D([theta], [r], marker=ms, - **eb_cap_style)) - else: - for axis in caplines: - for l in caplines[axis]: - self.add_line(l) - - self._request_autoscale_view() - caplines = caplines['x'] + caplines['y'] - errorbar_container = ErrorbarContainer( - (data_line, tuple(caplines), tuple(barcols)), - has_xerr=(xerr is not None), has_yerr=(yerr is not None), - label=label) - self.containers.append(errorbar_container) - - return errorbar_container # (l0, caplines, barcols) - - @_preprocess_data() - def boxplot(self, x, notch=None, sym=None, vert=None, whis=None, - positions=None, widths=None, patch_artist=None, - bootstrap=None, usermedians=None, conf_intervals=None, - meanline=None, showmeans=None, showcaps=None, - showbox=None, showfliers=None, boxprops=None, - labels=None, flierprops=None, medianprops=None, - meanprops=None, capprops=None, whiskerprops=None, - manage_ticks=True, autorange=False, zorder=None, - capwidths=None): - """ - Draw a box and whisker plot. - - The box extends from the first quartile (Q1) to the third - quartile (Q3) of the data, with a line at the median. The - whiskers extend from the box by 1.5x the inter-quartile range - (IQR). Flier points are those past the end of the whiskers. - See https://en.wikipedia.org/wiki/Box_plot for reference. - - .. code-block:: none - - Q1-1.5IQR Q1 median Q3 Q3+1.5IQR - |-----:-----| - o |--------| : |--------| o o - |-----:-----| - flier <-----------> fliers - IQR - - - Parameters - ---------- - x : Array or a sequence of vectors. - The input data. If a 2D array, a boxplot is drawn for each column - in *x*. If a sequence of 1D arrays, a boxplot is drawn for each - array in *x*. - - notch : bool, default: False - Whether to draw a notched boxplot (`True`), or a rectangular - boxplot (`False`). The notches represent the confidence interval - (CI) around the median. The documentation for *bootstrap* - describes how the locations of the notches are computed by - default, but their locations may also be overridden by setting the - *conf_intervals* parameter. - - .. note:: - - In cases where the values of the CI are less than the - lower quartile or greater than the upper quartile, the - notches will extend beyond the box, giving it a - distinctive "flipped" appearance. This is expected - behavior and consistent with other statistical - visualization packages. - - sym : str, optional - The default symbol for flier points. An empty string ('') hides - the fliers. If `None`, then the fliers default to 'b+'. More - control is provided by the *flierprops* parameter. - - vert : bool, default: True - If `True`, draws vertical boxes. - If `False`, draw horizontal boxes. - - whis : float or (float, float), default: 1.5 - The position of the whiskers. 
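A brief sketch of `errorbar` with the (2, N) asymmetric-error shape and the *errorevery* subsetting described above (sample data invented):

```python
import matplotlib.pyplot as plt
import numpy as np

x = np.arange(10)
y = x ** 1.5
fig, ax = plt.subplots()
# Asymmetric errors as a (2, N) array: row 0 lower, row 1 upper.
yerr = np.vstack([0.5 * np.ones_like(y), np.linspace(0.5, 2, 10)])
# errorevery=(1, 2) draws bars only at x[1::2] to reduce clutter.
ax.errorbar(x, y, yerr=yerr, fmt='o-', capsize=3, errorevery=(1, 2))
plt.show()
```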
- - If a float, the lower whisker is at the lowest datum above - ``Q1 - whis*(Q3-Q1)``, and the upper whisker at the highest datum - below ``Q3 + whis*(Q3-Q1)``, where Q1 and Q3 are the first and - third quartiles. The default value of ``whis = 1.5`` corresponds - to Tukey's original definition of boxplots. - - If a pair of floats, they indicate the percentiles at which to - draw the whiskers (e.g., (5, 95)). In particular, setting this to - (0, 100) results in whiskers covering the whole range of the data. - - In the edge case where ``Q1 == Q3``, *whis* is automatically set - to (0, 100) (cover the whole range of the data) if *autorange* is - True. - - Beyond the whiskers, data are considered outliers and are plotted - as individual points. - - bootstrap : int, optional - Specifies whether to bootstrap the confidence intervals - around the median for notched boxplots. If *bootstrap* is - None, no bootstrapping is performed, and notches are - calculated using a Gaussian-based asymptotic approximation - (see McGill, R., Tukey, J.W., and Larsen, W.A., 1978, and - Kendall and Stuart, 1967). Otherwise, bootstrap specifies - the number of times to bootstrap the median to determine its - 95% confidence intervals. Values between 1000 and 10000 are - recommended. - - usermedians : 1D array-like, optional - A 1D array-like of length ``len(x)``. Each entry that is not - `None` forces the value of the median for the corresponding - dataset. For entries that are `None`, the medians are computed - by Matplotlib as normal. - - conf_intervals : array-like, optional - A 2D array-like of shape ``(len(x), 2)``. Each entry that is not - None forces the location of the corresponding notch (which is - only drawn if *notch* is `True`). For entries that are `None`, - the notches are computed by the method specified by the other - parameters (e.g., *bootstrap*). - - positions : array-like, optional - The positions of the boxes. The ticks and limits are - automatically set to match the positions. Defaults to - ``range(1, N+1)`` where N is the number of boxes to be drawn. - - widths : float or array-like - The widths of the boxes. The default is 0.5, or ``0.15*(distance - between extreme positions)``, if that is smaller. - - patch_artist : bool, default: False - If `False` produces boxes with the Line2D artist. Otherwise, - boxes are drawn with Patch artists. - - labels : sequence, optional - Labels for each dataset (one per dataset). - - manage_ticks : bool, default: True - If True, the tick locations and labels will be adjusted to match - the boxplot positions. - - autorange : bool, default: False - When `True` and the data are distributed such that the 25th and - 75th percentiles are equal, *whis* is set to (0, 100) such - that the whisker ends are at the minimum and maximum of the data. - - meanline : bool, default: False - If `True` (and *showmeans* is `True`), will try to render the - mean as a line spanning the full width of the box according to - *meanprops* (see below). Not recommended if *shownotches* is also - True. Otherwise, means will be shown as points. - - zorder : float, default: ``Line2D.zorder = 2`` - The zorder of the boxplot. - - Returns - ------- - dict - A dictionary mapping each component of the boxplot to a list - of the `.Line2D` instances created. That dictionary has the - following keys (assuming vertical boxplots): - - - ``boxes``: the main body of the boxplot showing the - quartiles and the median's confidence intervals if - enabled. 
- - - ``medians``: horizontal lines at the median of each box. - - - ``whiskers``: the vertical lines extending to the most - extreme, non-outlier data points. - - - ``caps``: the horizontal lines at the ends of the - whiskers. - - - ``fliers``: points representing data that extend beyond - the whiskers (fliers). - - - ``means``: points or lines representing the means. - - Other Parameters - ---------------- - showcaps : bool, default: True - Show the caps on the ends of whiskers. - showbox : bool, default: True - Show the central box. - showfliers : bool, default: True - Show the outliers beyond the caps. - showmeans : bool, default: False - Show the arithmetic means. - capprops : dict, default: None - The style of the caps. - capwidths : float or array, default: None - The widths of the caps. - boxprops : dict, default: None - The style of the box. - whiskerprops : dict, default: None - The style of the whiskers. - flierprops : dict, default: None - The style of the fliers. - medianprops : dict, default: None - The style of the median. - meanprops : dict, default: None - The style of the mean. - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - - See Also - -------- - violinplot : Draw an estimate of the probability density function. - """ - - # Missing arguments default to rcParams. - if whis is None: - whis = mpl.rcParams['boxplot.whiskers'] - if bootstrap is None: - bootstrap = mpl.rcParams['boxplot.bootstrap'] - - bxpstats = cbook.boxplot_stats(x, whis=whis, bootstrap=bootstrap, - labels=labels, autorange=autorange) - if notch is None: - notch = mpl.rcParams['boxplot.notch'] - if vert is None: - vert = mpl.rcParams['boxplot.vertical'] - if patch_artist is None: - patch_artist = mpl.rcParams['boxplot.patchartist'] - if meanline is None: - meanline = mpl.rcParams['boxplot.meanline'] - if showmeans is None: - showmeans = mpl.rcParams['boxplot.showmeans'] - if showcaps is None: - showcaps = mpl.rcParams['boxplot.showcaps'] - if showbox is None: - showbox = mpl.rcParams['boxplot.showbox'] - if showfliers is None: - showfliers = mpl.rcParams['boxplot.showfliers'] - - if boxprops is None: - boxprops = {} - if whiskerprops is None: - whiskerprops = {} - if capprops is None: - capprops = {} - if medianprops is None: - medianprops = {} - if meanprops is None: - meanprops = {} - if flierprops is None: - flierprops = {} - - if patch_artist: - boxprops['linestyle'] = 'solid' # Not consistent with bxp. - if 'color' in boxprops: - boxprops['edgecolor'] = boxprops.pop('color') - - # if non-default sym value, put it into the flier dictionary - # the logic for providing the default symbol ('b+') now lives - # in bxp in the initial value of flierkw - # handle all of the *sym* related logic here so we only have to pass - # on the flierprops dict. 
- if sym is not None: - # no-flier case, which should really be done with - # 'showfliers=False' but none-the-less deal with it to keep back - # compatibility - if sym == '': - # blow away existing dict and make one for invisible markers - flierprops = dict(linestyle='none', marker='', color='none') - # turn the fliers off just to be safe - showfliers = False - # now process the symbol string - else: - # process the symbol string - # discarded linestyle - _, marker, color = _process_plot_format(sym) - # if we have a marker, use it - if marker is not None: - flierprops['marker'] = marker - # if we have a color, use it - if color is not None: - # assume that if color is passed in the user want - # filled symbol, if the users want more control use - # flierprops - flierprops['color'] = color - flierprops['markerfacecolor'] = color - flierprops['markeredgecolor'] = color - - # replace medians if necessary: - if usermedians is not None: - if (len(np.ravel(usermedians)) != len(bxpstats) or - np.shape(usermedians)[0] != len(bxpstats)): - raise ValueError( - "'usermedians' and 'x' have different lengths") - else: - # reassign medians as necessary - for stats, med in zip(bxpstats, usermedians): - if med is not None: - stats['med'] = med - - if conf_intervals is not None: - if len(conf_intervals) != len(bxpstats): - raise ValueError( - "'conf_intervals' and 'x' have different lengths") - else: - for stats, ci in zip(bxpstats, conf_intervals): - if ci is not None: - if len(ci) != 2: - raise ValueError('each confidence interval must ' - 'have two values') - else: - if ci[0] is not None: - stats['cilo'] = ci[0] - if ci[1] is not None: - stats['cihi'] = ci[1] - - artists = self.bxp(bxpstats, positions=positions, widths=widths, - vert=vert, patch_artist=patch_artist, - shownotches=notch, showmeans=showmeans, - showcaps=showcaps, showbox=showbox, - boxprops=boxprops, flierprops=flierprops, - medianprops=medianprops, meanprops=meanprops, - meanline=meanline, showfliers=showfliers, - capprops=capprops, whiskerprops=whiskerprops, - manage_ticks=manage_ticks, zorder=zorder, - capwidths=capwidths) - return artists - - def bxp(self, bxpstats, positions=None, widths=None, vert=True, - patch_artist=False, shownotches=False, showmeans=False, - showcaps=True, showbox=True, showfliers=True, - boxprops=None, whiskerprops=None, flierprops=None, - medianprops=None, capprops=None, meanprops=None, - meanline=False, manage_ticks=True, zorder=None, - capwidths=None): - """ - Drawing function for box and whisker plots. - - Make a box and whisker plot for each column of *x* or each - vector in sequence *x*. The box extends from the lower to - upper quartile values of the data, with a line at the median. - The whiskers extend from the box to show the range of the - data. Flier points are those past the end of the whiskers. - - Parameters - ---------- - bxpstats : list of dicts - A list of dictionaries containing stats for each boxplot. - Required keys are: - - - ``med``: Median (scalar). - - ``q1``, ``q3``: First & third quartiles (scalars). - - ``whislo``, ``whishi``: Lower & upper whisker positions (scalars). - - Optional keys are: - - - ``mean``: Mean (scalar). Needed if ``showmeans=True``. - - ``fliers``: Data beyond the whiskers (array-like). - Needed if ``showfliers=True``. - - ``cilo``, ``cihi``: Lower & upper confidence intervals - about the median. Needed if ``shownotches=True``. - - ``label``: Name of the dataset (str). 
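An illustrative `boxplot` call using the options documented above (random sample data; per the docstring, *bootstrap* sets how many resamples determine the notch confidence intervals):

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
data = [rng.normal(0, s, 100) for s in (1, 2, 3)]
fig, ax = plt.subplots()
# notch=True draws median CIs; patch_artist fills boxes with Patches.
ax.boxplot(data, notch=True, bootstrap=5000, patch_artist=True,
           labels=['s=1', 's=2', 's=3'], showmeans=True)
plt.show()
```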
If available, - this will be used as a tick label for the boxplot - - positions : array-like, default: [1, 2, ..., n] - The positions of the boxes. The ticks and limits - are automatically set to match the positions. - - widths : float or array-like, default: None - The widths of the boxes. The default is - ``clip(0.15*(distance between extreme positions), 0.15, 0.5)``. - - capwidths : float or array-like, default: None - Either a scalar or a vector; sets the width of each cap. - The default is ``0.5*(width of the box)``, see *widths*. - - vert : bool, default: True - If `True` (default), makes the boxes vertical. - If `False`, makes horizontal boxes. - - patch_artist : bool, default: False - If `False` produces boxes with the `.Line2D` artist. - If `True` produces boxes with the `~matplotlib.patches.Patch` artist. - - shownotches, showmeans, showcaps, showbox, showfliers : bool - Whether to draw the CI notches, the mean value (both default to - False), the caps, the box, and the fliers (all three default to - True). - - boxprops, whiskerprops, capprops, flierprops, medianprops, meanprops :\ - dict, optional - Artist properties for the boxes, whiskers, caps, fliers, medians, and - means. - - meanline : bool, default: False - If `True` (and *showmeans* is `True`), will try to render the mean - as a line spanning the full width of the box according to - *meanprops*. Not recommended if *shownotches* is also True. - Otherwise, means will be shown as points. - - manage_ticks : bool, default: True - If True, the tick locations and labels will be adjusted to match the - boxplot positions. - - zorder : float, default: ``Line2D.zorder = 2`` - The zorder of the resulting boxplot. - - Returns - ------- - dict - A dictionary mapping each component of the boxplot to a list - of the `.Line2D` instances created. That dictionary has the - following keys (assuming vertical boxplots): - - - ``boxes``: main bodies of the boxplot showing the quartiles, and - the median's confidence intervals if enabled. - - ``medians``: horizontal lines at the median of each box. - - ``whiskers``: vertical lines up to the last non-outlier data. - - ``caps``: horizontal lines at the ends of the whiskers. - - ``fliers``: points representing data beyond the whiskers (fliers). - - ``means``: points or lines representing the means. - - Examples - -------- - ..
plot:: gallery/statistics/bxp.py - """ - - # lists of artists to be output - whiskers = [] - caps = [] - boxes = [] - medians = [] - means = [] - fliers = [] - - # empty list of xticklabels - datalabels = [] - - # Use default zorder if none specified - if zorder is None: - zorder = mlines.Line2D.zorder - - zdelta = 0.1 - - def merge_kw_rc(subkey, explicit, zdelta=0, usemarker=True): - d = {k.split('.')[-1]: v for k, v in mpl.rcParams.items() - if k.startswith(f'boxplot.{subkey}props')} - d['zorder'] = zorder + zdelta - if not usemarker: - d['marker'] = '' - d.update(cbook.normalize_kwargs(explicit, mlines.Line2D)) - return d - - box_kw = { - 'linestyle': mpl.rcParams['boxplot.boxprops.linestyle'], - 'linewidth': mpl.rcParams['boxplot.boxprops.linewidth'], - 'edgecolor': mpl.rcParams['boxplot.boxprops.color'], - 'facecolor': ('white' if mpl.rcParams['_internal.classic_mode'] - else mpl.rcParams['patch.facecolor']), - 'zorder': zorder, - **cbook.normalize_kwargs(boxprops, mpatches.PathPatch) - } if patch_artist else merge_kw_rc('box', boxprops, usemarker=False) - whisker_kw = merge_kw_rc('whisker', whiskerprops, usemarker=False) - cap_kw = merge_kw_rc('cap', capprops, usemarker=False) - flier_kw = merge_kw_rc('flier', flierprops) - median_kw = merge_kw_rc('median', medianprops, zdelta, usemarker=False) - mean_kw = merge_kw_rc('mean', meanprops, zdelta) - removed_prop = 'marker' if meanline else 'linestyle' - # Only remove the property if it's not set explicitly as a parameter. - if meanprops is None or removed_prop not in meanprops: - mean_kw[removed_prop] = '' - - # vertical or horizontal plot? - maybe_swap = slice(None) if vert else slice(None, None, -1) - - def do_plot(xs, ys, **kwargs): - return self.plot(*[xs, ys][maybe_swap], **kwargs)[0] - - def do_patch(xs, ys, **kwargs): - path = mpath.Path._create_closed( - np.column_stack([xs, ys][maybe_swap])) - patch = mpatches.PathPatch(path, **kwargs) - self.add_artist(patch) - return patch - - # input validation - N = len(bxpstats) - datashape_message = ("List of boxplot statistics and `{0}` " - "values must have same the length") - # check position - if positions is None: - positions = list(range(1, N + 1)) - elif len(positions) != N: - raise ValueError(datashape_message.format("positions")) - - positions = np.array(positions) - if len(positions) > 0 and not isinstance(positions[0], Number): - raise TypeError("positions should be an iterable of numbers") - - # width - if widths is None: - widths = [np.clip(0.15 * np.ptp(positions), 0.15, 0.5)] * N - elif np.isscalar(widths): - widths = [widths] * N - elif len(widths) != N: - raise ValueError(datashape_message.format("widths")) - - # capwidth - if capwidths is None: - capwidths = 0.5 * np.array(widths) - elif np.isscalar(capwidths): - capwidths = [capwidths] * N - elif len(capwidths) != N: - raise ValueError(datashape_message.format("capwidths")) - - for pos, width, stats, capwidth in zip(positions, widths, bxpstats, - capwidths): - # try to find a new label - datalabels.append(stats.get('label', pos)) - - # whisker coords - whis_x = [pos, pos] - whislo_y = [stats['q1'], stats['whislo']] - whishi_y = [stats['q3'], stats['whishi']] - # cap coords - cap_left = pos - capwidth * 0.5 - cap_right = pos + capwidth * 0.5 - cap_x = [cap_left, cap_right] - cap_lo = np.full(2, stats['whislo']) - cap_hi = np.full(2, stats['whishi']) - # box and median coords - box_left = pos - width * 0.5 - box_right = pos + width * 0.5 - med_y = [stats['med'], stats['med']] - # notched boxes - if shownotches: - 
notch_left = pos - width * 0.25 - notch_right = pos + width * 0.25 - box_x = [box_left, box_right, box_right, notch_right, - box_right, box_right, box_left, box_left, notch_left, - box_left, box_left] - box_y = [stats['q1'], stats['q1'], stats['cilo'], - stats['med'], stats['cihi'], stats['q3'], - stats['q3'], stats['cihi'], stats['med'], - stats['cilo'], stats['q1']] - med_x = [notch_left, notch_right] - # plain boxes - else: - box_x = [box_left, box_right, box_right, box_left, box_left] - box_y = [stats['q1'], stats['q1'], stats['q3'], stats['q3'], - stats['q1']] - med_x = [box_left, box_right] - - # maybe draw the box - if showbox: - do_box = do_patch if patch_artist else do_plot - boxes.append(do_box(box_x, box_y, **box_kw)) - # draw the whiskers - whiskers.append(do_plot(whis_x, whislo_y, **whisker_kw)) - whiskers.append(do_plot(whis_x, whishi_y, **whisker_kw)) - # maybe draw the caps - if showcaps: - caps.append(do_plot(cap_x, cap_lo, **cap_kw)) - caps.append(do_plot(cap_x, cap_hi, **cap_kw)) - # draw the medians - medians.append(do_plot(med_x, med_y, **median_kw)) - # maybe draw the means - if showmeans: - if meanline: - means.append(do_plot( - [box_left, box_right], [stats['mean'], stats['mean']], - **mean_kw - )) - else: - means.append(do_plot([pos], [stats['mean']], **mean_kw)) - # maybe draw the fliers - if showfliers: - flier_x = np.full(len(stats['fliers']), pos, dtype=np.float64) - flier_y = stats['fliers'] - fliers.append(do_plot(flier_x, flier_y, **flier_kw)) - - if manage_ticks: - axis_name = "x" if vert else "y" - interval = getattr(self.dataLim, f"interval{axis_name}") - axis = getattr(self, f"{axis_name}axis") - positions = axis.convert_units(positions) - # The 0.5 additional padding ensures reasonable-looking boxes - # even when drawing a single box. We set the sticky edge to - # prevent margins expansion, in order to match old behavior (back - # when separate calls to boxplot() would completely reset the axis - # limits regardless of what was drawn before). The sticky edges - # are attached to the median lines, as they are always present. - interval[:] = (min(interval[0], min(positions) - .5), - max(interval[1], max(positions) + .5)) - for median, position in zip(medians, positions): - getattr(median.sticky_edges, axis_name).extend( - [position - .5, position + .5]) - # Modified from Axis.set_ticks and Axis.set_ticklabels. - locator = axis.get_major_locator() - if not isinstance(axis.get_major_locator(), - mticker.FixedLocator): - locator = mticker.FixedLocator([]) - axis.set_major_locator(locator) - locator.locs = np.array([*locator.locs, *positions]) - formatter = axis.get_major_formatter() - if not isinstance(axis.get_major_formatter(), - mticker.FixedFormatter): - formatter = mticker.FixedFormatter([]) - axis.set_major_formatter(formatter) - formatter.seq = [*formatter.seq, *datalabels] - - self._request_autoscale_view() - - return dict(whiskers=whiskers, caps=caps, boxes=boxes, - medians=medians, fliers=fliers, means=means) - - @staticmethod - def _parse_scatter_color_args(c, edgecolors, kwargs, xsize, - get_next_color_func): - """ - Helper function to process color related arguments of `.Axes.scatter`. 
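Because ``bxp`` consumes precomputed statistics rather than raw data, it can be driven directly with hand-built dicts following the ``bxpstats`` format documented above; a minimal sketch in which all of the stats values are made up for illustration:

```python
import matplotlib.pyplot as plt

# One dict per box; required keys are med, q1, q3, whislo, whishi.
stats = [{
    "med": 5.0, "q1": 3.0, "q3": 7.0,
    "whislo": 1.0, "whishi": 9.0,
    "mean": 5.2,                # needed because showmeans=True below
    "fliers": [0.2, 10.5],      # needed because showfliers defaults to True
    "label": "sample A",        # used as the tick label
}]

fig, ax = plt.subplots()
ax.bxp(stats, showmeans=True)
plt.show()
```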
- - Argument precedence for facecolors: - - - c (if not None) - - kwargs['facecolor'] - - kwargs['facecolors'] - - kwargs['color'] (==kwcolor) - - 'b' if in classic mode else the result of ``get_next_color_func()`` - - Argument precedence for edgecolors: - - - kwargs['edgecolor'] - - edgecolors (is an explicit kw argument in scatter()) - - kwargs['color'] (==kwcolor) - - 'face' if not in classic mode else None - - Parameters - ---------- - c : color or sequence or sequence of color or None - See argument description of `.Axes.scatter`. - edgecolors : color or sequence of color or {'face', 'none'} or None - See argument description of `.Axes.scatter`. - kwargs : dict - Additional kwargs. If these keys exist, we pop and process them: - 'facecolors', 'facecolor', 'edgecolor', 'color' - Note: The dict is modified by this function. - xsize : int - The size of the x and y arrays passed to `.Axes.scatter`. - get_next_color_func : callable - A callable that returns a color. This color is used as facecolor - if no other color is provided. - - Note, that this is a function rather than a fixed color value to - support conditional evaluation of the next color. As of the - current implementation obtaining the next color from the - property cycle advances the cycle. This must only happen if we - actually use the color, which will only be decided within this - method. - - Returns - ------- - c - The input *c* if it was not *None*, else a color derived from the - other inputs or defaults. - colors : array(N, 4) or None - The facecolors as RGBA values, or *None* if a colormap is used. - edgecolors - The edgecolor. - - """ - facecolors = kwargs.pop('facecolors', None) - facecolors = kwargs.pop('facecolor', facecolors) - edgecolors = kwargs.pop('edgecolor', edgecolors) - - kwcolor = kwargs.pop('color', None) - - if kwcolor is not None and c is not None: - raise ValueError("Supply a 'c' argument or a 'color'" - " kwarg but not both; they differ but" - " their functionalities overlap.") - - if kwcolor is not None: - try: - mcolors.to_rgba_array(kwcolor) - except ValueError as err: - raise ValueError( - "'color' kwarg must be a color or sequence of color " - "specs. For a sequence of values to be color-mapped, use " - "the 'c' argument instead.") from err - if edgecolors is None: - edgecolors = kwcolor - if facecolors is None: - facecolors = kwcolor - - if edgecolors is None and not mpl.rcParams['_internal.classic_mode']: - edgecolors = mpl.rcParams['scatter.edgecolors'] - - c_was_none = c is None - if c is None: - c = (facecolors if facecolors is not None - else "b" if mpl.rcParams['_internal.classic_mode'] - else get_next_color_func()) - c_is_string_or_strings = ( - isinstance(c, str) - or (np.iterable(c) and len(c) > 0 - and isinstance(cbook._safe_first_finite(c), str))) - - def invalid_shape_exception(csize, xsize): - return ValueError( - f"'c' argument has {csize} elements, which is inconsistent " - f"with 'x' and 'y' with size {xsize}.") - - c_is_mapped = False # Unless proven otherwise below. - valid_shape = True # Unless proven otherwise below. - if not c_was_none and kwcolor is None and not c_is_string_or_strings: - try: # First, does 'c' look suitable for value-mapping? - c = np.asanyarray(c, dtype=float) - except ValueError: - pass # Failed to convert to float array; must be color specs. - else: - # handle the documented special case of a 2D array with 1 - # row which as RGB(A) to broadcast. 
- if c.shape == (1, 4) or c.shape == (1, 3): - c_is_mapped = False - if c.size != xsize: - valid_shape = False - # If c can be either mapped values or an RGB(A) color, prefer - # the former if shapes match, the latter otherwise. - elif c.size == xsize: - c = c.ravel() - c_is_mapped = True - else: # Wrong size; it must not be intended for mapping. - if c.shape in ((3,), (4,)): - _api.warn_external( - "*c* argument looks like a single numeric RGB or " - "RGBA sequence, which should be avoided as value-" - "mapping will have precedence in case its length " - "matches with *x* & *y*. Please use the *color* " - "keyword-argument or provide a 2D array " - "with a single row if you intend to specify " - "the same RGB or RGBA value for all points.") - valid_shape = False - if not c_is_mapped: - try: # Is 'c' acceptable as PathCollection facecolors? - colors = mcolors.to_rgba_array(c) - except (TypeError, ValueError) as err: - if "RGBA values should be within 0-1 range" in str(err): - raise - else: - if not valid_shape: - raise invalid_shape_exception(c.size, xsize) from err - # Both the mapping *and* the RGBA conversion failed: pretty - # severe failure => one may appreciate a verbose feedback. - raise ValueError( - f"'c' argument must be a color, a sequence of colors, " - f"or a sequence of numbers, not {c!r}") from err - else: - if len(colors) not in (0, 1, xsize): - # NB: remember that a single color is also acceptable. - # Besides *colors* will be an empty array if c == 'none'. - raise invalid_shape_exception(len(colors), xsize) - else: - colors = None # use cmap, norm after collection is created - return c, colors, edgecolors - - @_preprocess_data(replace_names=["x", "y", "s", "linewidths", - "edgecolors", "c", "facecolor", - "facecolors", "color"], - label_namer="y") - @_docstring.interpd - def scatter(self, x, y, s=None, c=None, marker=None, cmap=None, norm=None, - vmin=None, vmax=None, alpha=None, linewidths=None, *, - edgecolors=None, plotnonfinite=False, **kwargs): - """ - A scatter plot of *y* vs. *x* with varying marker size and/or color. - - Parameters - ---------- - x, y : float or array-like, shape (n, ) - The data positions. - - s : float or array-like, shape (n, ), optional - The marker size in points**2 (typographic points are 1/72 in.). - Default is ``rcParams['lines.markersize'] ** 2``. - - c : array-like or list of colors or color, optional - The marker colors. Possible values: - - - A scalar or sequence of n numbers to be mapped to colors using - *cmap* and *norm*. - - A 2D array in which the rows are RGB or RGBA. - - A sequence of colors of length n. - - A single color format string. - - Note that *c* should not be a single numeric RGB or RGBA sequence - because that is indistinguishable from an array of values to be - colormapped. If you want to specify the same RGB or RGBA value for - all points, use a 2D array with a single row. Otherwise, - value-matching will have precedence in case of a size matching with - *x* and *y*. - - If you wish to specify a single color for all points - prefer the *color* keyword argument. - - Defaults to `None`. In that case the marker color is determined - by the value of *color*, *facecolor* or *facecolors*. In case - those are not specified or `None`, the marker color is determined - by the next color of the ``Axes``' current "shape and fill" color - cycle. This cycle defaults to :rc:`axes.prop_cycle`. - - marker : `~.markers.MarkerStyle`, default: :rc:`scatter.marker` - The marker style. 
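The single-RGB ambiguity handled above is easiest to see in code; a minimal sketch contrasting the two unambiguous ways to pass one uniform color to ``scatter`` (the color values are arbitrary):

```python
import matplotlib.pyplot as plt
import numpy as np

x = np.arange(3)
y = np.arange(3)

fig, ax = plt.subplots()
# Ambiguous case avoided: a flat length-3 'c' matching len(x) would be
# value-mapped, so a uniform RGB color is passed as a single-row 2D array...
ax.scatter(x, y, c=[[0.1, 0.5, 0.8]])
# ...or, more simply, via the 'color' keyword.
ax.scatter(x, y + 1, color=(0.8, 0.2, 0.1))
plt.show()
```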
*marker* can be either an instance of the class - or the text shorthand for a particular marker. - See :mod:`matplotlib.markers` for more information about marker - styles. - - %(cmap_doc)s - - This parameter is ignored if *c* is RGB(A). - - %(norm_doc)s - - This parameter is ignored if *c* is RGB(A). - - %(vmin_vmax_doc)s - - This parameter is ignored if *c* is RGB(A). - - alpha : float, default: None - The alpha blending value, between 0 (transparent) and 1 (opaque). - - linewidths : float or array-like, default: :rc:`lines.linewidth` - The linewidth of the marker edges. Note: The default *edgecolors* - is 'face'. You may want to change this as well. - - edgecolors : {'face', 'none', *None*} or color or sequence of color, \ -default: :rc:`scatter.edgecolors` - The edge color of the marker. Possible values: - - - 'face': The edge color will always be the same as the face color. - - 'none': No patch boundary will be drawn. - - A color or sequence of colors. - - For non-filled markers, *edgecolors* is ignored. Instead, the color - is determined like with 'face', i.e. from *c*, *colors*, or - *facecolors*. - - plotnonfinite : bool, default: False - Whether to plot points with nonfinite *c* (i.e. ``inf``, ``-inf`` - or ``nan``). If ``True`` the points are drawn with the *bad* - colormap color (see `.Colormap.set_bad`). - - Returns - ------- - `~matplotlib.collections.PathCollection` - - Other Parameters - ---------------- - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - **kwargs : `~matplotlib.collections.Collection` properties - - See Also - -------- - plot : To plot scatter plots when markers are identical in size and - color. - - Notes - ----- - * The `.plot` function will be faster for scatterplots where markers - don't vary in size or color. - - * Any or all of *x*, *y*, *s*, and *c* may be masked arrays, in which - case all masks will be combined and only unmasked points will be - plotted. - - * Fundamentally, scatter works with 1D arrays; *x*, *y*, *s*, and *c* - may be input as N-D arrays, but within scatter they will be - flattened. The exception is *c*, which will be flattened only if its - size matches the size of *x* and *y*. - - """ - # Process **kwargs to handle aliases, conflicts with explicit kwargs: - x, y = self._process_unit_info([("x", x), ("y", y)], kwargs) - # np.ma.ravel yields an ndarray, not a masked array, - # unless its argument is a masked array. 
- x = np.ma.ravel(x) - y = np.ma.ravel(y) - if x.size != y.size: - raise ValueError("x and y must be the same size") - - if s is None: - s = (20 if mpl.rcParams['_internal.classic_mode'] else - mpl.rcParams['lines.markersize'] ** 2.0) - s = np.ma.ravel(s) - if (len(s) not in (1, x.size) or - (not np.issubdtype(s.dtype, np.floating) and - not np.issubdtype(s.dtype, np.integer))): - raise ValueError( - "s must be a scalar, " - "or float array-like with the same size as x and y") - - # get the original edgecolor the user passed before we normalize - orig_edgecolor = edgecolors - if edgecolors is None: - orig_edgecolor = kwargs.get('edgecolor', None) - c, colors, edgecolors = \ - self._parse_scatter_color_args( - c, edgecolors, kwargs, x.size, - get_next_color_func=self._get_patches_for_fill.get_next_color) - - if plotnonfinite and colors is None: - c = np.ma.masked_invalid(c) - x, y, s, edgecolors, linewidths = \ - cbook._combine_masks(x, y, s, edgecolors, linewidths) - else: - x, y, s, c, colors, edgecolors, linewidths = \ - cbook._combine_masks( - x, y, s, c, colors, edgecolors, linewidths) - # Unmask edgecolors if it was actually a single RGB or RGBA. - if (x.size in (3, 4) - and np.ma.is_masked(edgecolors) - and not np.ma.is_masked(orig_edgecolor)): - edgecolors = edgecolors.data - - scales = s # Renamed for readability below. - - # load default marker from rcParams - if marker is None: - marker = mpl.rcParams['scatter.marker'] - - if isinstance(marker, mmarkers.MarkerStyle): - marker_obj = marker - else: - marker_obj = mmarkers.MarkerStyle(marker) - - path = marker_obj.get_path().transformed( - marker_obj.get_transform()) - if not marker_obj.is_filled(): - if orig_edgecolor is not None: - _api.warn_external( - f"You passed an edgecolor/edgecolors ({orig_edgecolor!r}) " - f"for an unfilled marker ({marker!r}). Matplotlib is " - "ignoring the edgecolor in favor of the facecolor. This " - "behavior may change in the future." - ) - # We need to handle markers that cannot be filled (like - # '+' and 'x') differently than markers that can be - # filled, but have their fillstyle set to 'none'. This is - # to get: - # - # - respecting the fillstyle if set - # - maintaining back-compatibility for querying the facecolor of - # the un-fillable markers. - # - # While not an ideal situation, this is better than the - # alternatives. - if marker_obj.get_fillstyle() == 'none': - # promote the facecolor to be the edgecolor - edgecolors = colors - # set the facecolor to 'none' (at the last chance) because - # we cannot fill a path if the facecolor is non-null - # (which is defendable at the renderer level).
- colors = 'none' - else: - # if we are not nulling the face color we can do this - # simpler - edgecolors = 'face' - - if linewidths is None: - linewidths = mpl.rcParams['lines.linewidth'] - elif np.iterable(linewidths): - linewidths = [ - lw if lw is not None else mpl.rcParams['lines.linewidth'] - for lw in linewidths] - - offsets = np.ma.column_stack([x, y]) - - collection = mcoll.PathCollection( - (path,), scales, - facecolors=colors, - edgecolors=edgecolors, - linewidths=linewidths, - offsets=offsets, - offset_transform=kwargs.pop('transform', self.transData), - alpha=alpha, - ) - collection.set_transform(mtransforms.IdentityTransform()) - if colors is None: - collection.set_array(c) - collection.set_cmap(cmap) - collection.set_norm(norm) - collection._scale_norm(norm, vmin, vmax) - else: - extra_kwargs = { - 'cmap': cmap, 'norm': norm, 'vmin': vmin, 'vmax': vmax - } - extra_keys = [k for k, v in extra_kwargs.items() if v is not None] - if any(extra_keys): - keys_str = ", ".join(f"'{k}'" for k in extra_keys) - _api.warn_external( - "No data for colormapping provided via 'c'. " - f"Parameters {keys_str} will be ignored") - collection._internal_update(kwargs) - - # Classic mode only: - # ensure there are margins to allow for the - # finite size of the symbols. In v2.x, margins - # are present by default, so we disable this - # scatter-specific override. - if mpl.rcParams['_internal.classic_mode']: - if self._xmargin < 0.05 and x.size > 0: - self.set_xmargin(0.05) - if self._ymargin < 0.05 and x.size > 0: - self.set_ymargin(0.05) - - self.add_collection(collection) - self._request_autoscale_view() - - return collection - - @_preprocess_data(replace_names=["x", "y", "C"], label_namer="y") - @_docstring.dedent_interpd - def hexbin(self, x, y, C=None, gridsize=100, bins=None, - xscale='linear', yscale='linear', extent=None, - cmap=None, norm=None, vmin=None, vmax=None, - alpha=None, linewidths=None, edgecolors='face', - reduce_C_function=np.mean, mincnt=None, marginals=False, - **kwargs): - """ - Make a 2D hexagonal binning plot of points *x*, *y*. - - If *C* is *None*, the value of the hexagon is determined by the number - of points in the hexagon. Otherwise, *C* specifies values at the - coordinate (x[i], y[i]). For each hexagon, these values are reduced - using *reduce_C_function*. - - Parameters - ---------- - x, y : array-like - The data positions. *x* and *y* must be of the same length. - - C : array-like, optional - If given, these values are accumulated in the bins. Otherwise, - every point has a value of 1. Must be of the same length as *x* - and *y*. - - gridsize : int or (int, int), default: 100 - If a single int, the number of hexagons in the *x*-direction. - The number of hexagons in the *y*-direction is chosen such that - the hexagons are approximately regular. - - Alternatively, if a tuple (*nx*, *ny*), the number of hexagons - in the *x*-direction and the *y*-direction. In the - *y*-direction, counting is done along vertically aligned - hexagons, not along the zig-zag chains of hexagons; see the - following illustration. - - .. 
plot:: - - import numpy as np - import matplotlib.pyplot as plt - - np.random.seed(19680801) - n = 300 - x = np.random.standard_normal(n) - y = np.random.standard_normal(n) - - fig, ax = plt.subplots(figsize=(4, 4)) - h = ax.hexbin(x, y, gridsize=(5, 3)) - hx, hy = h.get_offsets().T - ax.plot(hx[24::3], hy[24::3], 'ro-') - ax.plot(hx[-3:], hy[-3:], 'ro-') - ax.set_title('gridsize=(5, 3)') - ax.axis('off') - - To get approximately regular hexagons, choose - :math:`n_x = \\sqrt{3}\\,n_y`. - - bins : 'log' or int or sequence, default: None - Discretization of the hexagon values. - - - If *None*, no binning is applied; the color of each hexagon - directly corresponds to its count value. - - If 'log', use a logarithmic scale for the colormap. - Internally, :math:`log_{10}(i+1)` is used to determine the - hexagon color. This is equivalent to ``norm=LogNorm()``. - - If an integer, divide the counts into the specified number - of bins, and color the hexagons accordingly. - - If a sequence of values, the values of the lower bound of - the bins to be used. - - xscale : {'linear', 'log'}, default: 'linear' - Use a linear or log10 scale on the horizontal axis. - - yscale : {'linear', 'log'}, default: 'linear' - Use a linear or log10 scale on the vertical axis. - - mincnt : int > 0, default: *None* - If not *None*, only display cells with more than *mincnt* - points in the cell. - - marginals : bool, default: *False* - If *True*, plot the marginal density as - colormapped rectangles along the bottom of the x-axis and - left of the y-axis. - - extent : 4-tuple of float, default: *None* - The limits of the bins (xmin, xmax, ymin, ymax). - The default assigns the limits based on - *gridsize*, *x*, *y*, *xscale* and *yscale*. - - If *xscale* or *yscale* is set to 'log', the limits are - expected to be the exponent for a power of 10. E.g. for - x-limits of 1 and 50 in 'linear' scale and y-limits - of 10 and 1000 in 'log' scale, enter (1, 50, 1, 3). - - Returns - ------- - `~matplotlib.collections.PolyCollection` - A `.PolyCollection` defining the hexagonal bins. - - - `.PolyCollection.get_offsets` contains a Mx2 array containing - the x, y positions of the M hexagon centers. - - `.PolyCollection.get_array` contains the values of the M - hexagons. - - If *marginals* is *True*, a horizontal - bar and a vertical bar (both PolyCollections) will be attached - to the returned collection as attributes *hbar* and *vbar*. - - Other Parameters - ---------------- - %(cmap_doc)s - - %(norm_doc)s - - %(vmin_vmax_doc)s - - alpha : float between 0 and 1, optional - The alpha blending value, between 0 (transparent) and 1 (opaque). - - linewidths : float, default: *None* - If *None*, defaults to 1.0. - - edgecolors : {'face', 'none', *None*} or color, default: 'face' - The color of the hexagon edges. Possible values are: - - - 'face': Draw the edges in the same color as the fill color. - - 'none': No edges are drawn. This can sometimes lead to unsightly - unpainted pixels between the hexagons. - - *None*: Draw outlines in the default color. - - An explicit color. - - reduce_C_function : callable, default: `numpy.mean` - The function to aggregate *C* within the bins. It is ignored if - *C* is not given.
This must have the signature:: - - def reduce_C_function(C: array) -> float - - Commonly used functions are: - - - `numpy.mean`: average of the points - - `numpy.sum`: integral of the point values - - `numpy.amax`: value taken from the largest point - - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - - **kwargs : `~matplotlib.collections.PolyCollection` properties - All other keyword arguments are passed on to `.PolyCollection`: - - %(PolyCollection:kwdoc)s - - See Also - -------- - hist2d : 2D histogram rectangular bins - """ - self._process_unit_info([("x", x), ("y", y)], kwargs, convert=False) - - x, y, C = cbook.delete_masked_points(x, y, C) - - # Set the size of the hexagon grid - if np.iterable(gridsize): - nx, ny = gridsize - else: - nx = gridsize - ny = int(nx / math.sqrt(3)) - # Count the number of data in each hexagon - x = np.asarray(x, float) - y = np.asarray(y, float) - - # Will be log()'d if necessary, and then rescaled. - tx = x - ty = y - - if xscale == 'log': - if np.any(x <= 0.0): - raise ValueError("x contains non-positive values, so can not " - "be log-scaled") - tx = np.log10(tx) - if yscale == 'log': - if np.any(y <= 0.0): - raise ValueError("y contains non-positive values, so can not " - "be log-scaled") - ty = np.log10(ty) - if extent is not None: - xmin, xmax, ymin, ymax = extent - else: - xmin, xmax = (tx.min(), tx.max()) if len(x) else (0, 1) - ymin, ymax = (ty.min(), ty.max()) if len(y) else (0, 1) - - # to avoid issues with singular data, expand the min/max pairs - xmin, xmax = mtransforms.nonsingular(xmin, xmax, expander=0.1) - ymin, ymax = mtransforms.nonsingular(ymin, ymax, expander=0.1) - - nx1 = nx + 1 - ny1 = ny + 1 - nx2 = nx - ny2 = ny - n = nx1 * ny1 + nx2 * ny2 - - # In the x-direction, the hexagons exactly cover the region from - # xmin to xmax. Need some padding to avoid roundoff errors. - padding = 1.e-9 * (xmax - xmin) - xmin -= padding - xmax += padding - sx = (xmax - xmin) / nx - sy = (ymax - ymin) / ny - # Positions in hexagon index coordinates. - ix = (tx - xmin) / sx - iy = (ty - ymin) / sy - ix1 = np.round(ix).astype(int) - iy1 = np.round(iy).astype(int) - ix2 = np.floor(ix).astype(int) - iy2 = np.floor(iy).astype(int) - # flat indices, plus one so that out-of-range points go to position 0. - i1 = np.where((0 <= ix1) & (ix1 < nx1) & (0 <= iy1) & (iy1 < ny1), - ix1 * ny1 + iy1 + 1, 0) - i2 = np.where((0 <= ix2) & (ix2 < nx2) & (0 <= iy2) & (iy2 < ny2), - ix2 * ny2 + iy2 + 1, 0) - - d1 = (ix - ix1) ** 2 + 3.0 * (iy - iy1) ** 2 - d2 = (ix - ix2 - 0.5) ** 2 + 3.0 * (iy - iy2 - 0.5) ** 2 - bdist = (d1 < d2) - - if C is None: # [1:] drops out-of-range points. - counts1 = np.bincount(i1[bdist], minlength=1 + nx1 * ny1)[1:] - counts2 = np.bincount(i2[~bdist], minlength=1 + nx2 * ny2)[1:] - accum = np.concatenate([counts1, counts2]).astype(float) - if mincnt is not None: - accum[accum < mincnt] = np.nan - C = np.ones(len(x)) - else: - # store the C values in a list per hexagon index - Cs_at_i1 = [[] for _ in range(1 + nx1 * ny1)] - Cs_at_i2 = [[] for _ in range(1 + nx2 * ny2)] - for i in range(len(x)): - if bdist[i]: - Cs_at_i1[i1[i]].append(C[i]) - else: - Cs_at_i2[i2[i]].append(C[i]) - if mincnt is None: - mincnt = 0 - accum = np.array( - [reduce_C_function(acc) if len(acc) > mincnt else np.nan - for Cs_at_i in [Cs_at_i1, Cs_at_i2] - for acc in Cs_at_i[1:]], # [1:] drops out-of-range points. 
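A minimal sketch of the *C*/*reduce_C_function* aggregation path described above, using only the public ``hexbin`` API (the data, the aggregator, and the *mincnt* value are arbitrary choices for illustration):

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.standard_normal((2, 1000))
c = rng.uniform(0, 10, 1000)  # a value attached to each point

fig, ax = plt.subplots()
# Sum the C values per hexagon instead of averaging them; hide
# hexagons containing fewer than one point.
hb = ax.hexbin(x, y, C=c, gridsize=20,
               reduce_C_function=np.sum, mincnt=1)
fig.colorbar(hb, ax=ax, label="sum of C per bin")
plt.show()
```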
- float) - - good_idxs = ~np.isnan(accum) - - offsets = np.zeros((n, 2), float) - offsets[:nx1 * ny1, 0] = np.repeat(np.arange(nx1), ny1) - offsets[:nx1 * ny1, 1] = np.tile(np.arange(ny1), nx1) - offsets[nx1 * ny1:, 0] = np.repeat(np.arange(nx2) + 0.5, ny2) - offsets[nx1 * ny1:, 1] = np.tile(np.arange(ny2), nx2) + 0.5 - offsets[:, 0] *= sx - offsets[:, 1] *= sy - offsets[:, 0] += xmin - offsets[:, 1] += ymin - # remove accumulation bins with no data - offsets = offsets[good_idxs, :] - accum = accum[good_idxs] - - polygon = [sx, sy / 3] * np.array( - [[.5, -.5], [.5, .5], [0., 1.], [-.5, .5], [-.5, -.5], [0., -1.]]) - - if linewidths is None: - linewidths = [1.0] - - if xscale == 'log' or yscale == 'log': - polygons = np.expand_dims(polygon, 0) + np.expand_dims(offsets, 1) - if xscale == 'log': - polygons[:, :, 0] = 10.0 ** polygons[:, :, 0] - xmin = 10.0 ** xmin - xmax = 10.0 ** xmax - self.set_xscale(xscale) - if yscale == 'log': - polygons[:, :, 1] = 10.0 ** polygons[:, :, 1] - ymin = 10.0 ** ymin - ymax = 10.0 ** ymax - self.set_yscale(yscale) - collection = mcoll.PolyCollection( - polygons, - edgecolors=edgecolors, - linewidths=linewidths, - ) - else: - collection = mcoll.PolyCollection( - [polygon], - edgecolors=edgecolors, - linewidths=linewidths, - offsets=offsets, - offset_transform=mtransforms.AffineDeltaTransform( - self.transData), - ) - - # Set normalizer if bins is 'log' - if bins == 'log': - if norm is not None: - _api.warn_external("Only one of 'bins' and 'norm' arguments " - f"can be supplied, ignoring bins={bins}") - else: - norm = mcolors.LogNorm(vmin=vmin, vmax=vmax) - vmin = vmax = None - bins = None - - # autoscale the norm with current accum values if it hasn't been set - if norm is not None: - if norm.vmin is None and norm.vmax is None: - norm.autoscale(accum) - - if bins is not None: - if not np.iterable(bins): - minimum, maximum = min(accum), max(accum) - bins -= 1 # one less edge than bins - bins = minimum + (maximum - minimum) * np.arange(bins) / bins - bins = np.sort(bins) - accum = bins.searchsorted(accum) - - collection.set_array(accum) - collection.set_cmap(cmap) - collection.set_norm(norm) - collection.set_alpha(alpha) - collection._internal_update(kwargs) - collection._scale_norm(norm, vmin, vmax) - - corners = ((xmin, ymin), (xmax, ymax)) - self.update_datalim(corners) - self._request_autoscale_view(tight=True) - - # add the collection last - self.add_collection(collection, autolim=False) - if not marginals: - return collection - - # Process marginals - bars = [] - for zname, z, zmin, zmax, zscale, nbins in [ - ("x", x, xmin, xmax, xscale, nx), - ("y", y, ymin, ymax, yscale, 2 * ny), - ]: - - if zscale == "log": - bin_edges = np.geomspace(zmin, zmax, nbins + 1) - else: - bin_edges = np.linspace(zmin, zmax, nbins + 1) - - verts = np.empty((nbins, 4, 2)) - verts[:, 0, 0] = verts[:, 1, 0] = bin_edges[:-1] - verts[:, 2, 0] = verts[:, 3, 0] = bin_edges[1:] - verts[:, 0, 1] = verts[:, 3, 1] = .00 - verts[:, 1, 1] = verts[:, 2, 1] = .05 - if zname == "y": - verts = verts[:, :, ::-1] # Swap x and y. - - # Sort z-values into bins defined by bin_edges. - bin_idxs = np.searchsorted(bin_edges, z) - 1 - values = np.empty(nbins) - for i in range(nbins): - # Get C-values for each bin, and compute bin value with - # reduce_C_function. 
- ci = C[bin_idxs == i] - values[i] = reduce_C_function(ci) if len(ci) > 0 else np.nan - - mask = ~np.isnan(values) - verts = verts[mask] - values = values[mask] - - trans = getattr(self, f"get_{zname}axis_transform")(which="grid") - bar = mcoll.PolyCollection( - verts, transform=trans, edgecolors="face") - bar.set_array(values) - bar.set_cmap(cmap) - bar.set_norm(norm) - bar.set_alpha(alpha) - bar._internal_update(kwargs) - bars.append(self.add_collection(bar, autolim=False)) - - collection.hbar, collection.vbar = bars - - def on_changed(collection): - collection.hbar.set_cmap(collection.get_cmap()) - collection.hbar.set_clim(collection.get_clim()) - collection.vbar.set_cmap(collection.get_cmap()) - collection.vbar.set_clim(collection.get_clim()) - - collection.callbacks.connect('changed', on_changed) - - return collection - - @_docstring.dedent_interpd - def arrow(self, x, y, dx, dy, **kwargs): - """ - Add an arrow to the Axes. - - This draws an arrow from ``(x, y)`` to ``(x+dx, y+dy)``. - - Parameters - ---------- - %(FancyArrow)s - - Returns - ------- - `.FancyArrow` - The created `.FancyArrow` object. - - Notes - ----- - The resulting arrow is affected by the Axes aspect ratio and limits. - This may produce an arrow whose head is not square with its stem. To - create an arrow whose head is square with its stem, - use :meth:`annotate` for example: - - >>> ax.annotate("", xy=(0.5, 0.5), xytext=(0, 0), - ... arrowprops=dict(arrowstyle="->")) - - """ - # Strip away units for the underlying patch since units - # do not make sense to most patch-like code - x = self.convert_xunits(x) - y = self.convert_yunits(y) - dx = self.convert_xunits(dx) - dy = self.convert_yunits(dy) - - a = mpatches.FancyArrow(x, y, dx, dy, **kwargs) - self.add_patch(a) - self._request_autoscale_view() - return a - - @_docstring.copy(mquiver.QuiverKey.__init__) - def quiverkey(self, Q, X, Y, U, label, **kwargs): - qk = mquiver.QuiverKey(Q, X, Y, U, label, **kwargs) - self.add_artist(qk) - return qk - - # Handle units for x and y, if they've been passed - def _quiver_units(self, args, kwargs): - if len(args) > 3: - x, y = args[0:2] - x, y = self._process_unit_info([("x", x), ("y", y)], kwargs) - return (x, y) + args[2:] - return args - - # args can be a combination of X, Y, U, V, C and all should be replaced - @_preprocess_data() - @_docstring.dedent_interpd - def quiver(self, *args, **kwargs): - """%(quiver_doc)s""" - # Make sure units are handled for x and y values - args = self._quiver_units(args, kwargs) - q = mquiver.Quiver(self, *args, **kwargs) - self.add_collection(q, autolim=True) - self._request_autoscale_view() - return q - - # args can be some combination of X, Y, U, V, C and all should be replaced - @_preprocess_data() - @_docstring.dedent_interpd - def barbs(self, *args, **kwargs): - """%(barbs_doc)s""" - # Make sure units are handled for x and y values - args = self._quiver_units(args, kwargs) - b = mquiver.Barbs(self, *args, **kwargs) - self.add_collection(b, autolim=True) - self._request_autoscale_view() - return b - - # Uses a custom implementation of data-kwarg handling in - # _process_plot_var_args. - def fill(self, *args, data=None, **kwargs): - """ - Plot filled polygons. - - Parameters - ---------- - *args : sequence of x, y, [color] - Each polygon is defined by the lists of *x* and *y* positions of - its nodes, optionally followed by a *color* specifier. See - :mod:`matplotlib.colors` for supported color specifiers. The - standard color cycle is used for polygons without a color - specifier.
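The ``quiver``/``quiverkey`` pair defined above is typically used together; a minimal sketch (the grid, the vector field, and the key placement are arbitrary):

```python
import matplotlib.pyplot as plt
import numpy as np

X, Y = np.meshgrid(np.arange(5), np.arange(5))
U, V = np.cos(X), np.sin(Y)

fig, ax = plt.subplots()
q = ax.quiver(X, Y, U, V)
# Reference arrow of length 1 in data units, placed above the axes
# (quiverkey X/Y default to axes coordinates).
ax.quiverkey(q, X=0.85, Y=1.05, U=1, label="1 unit", labelpos="E")
plt.show()
```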
- - You can plot multiple polygons by providing multiple *x*, *y*, - *[color]* groups. - - For example, each of the following is legal:: - - ax.fill(x, y) # a polygon with default color - ax.fill(x, y, "b") # a blue polygon - ax.fill(x, y, x2, y2) # two polygons - ax.fill(x, y, "b", x2, y2, "r") # a blue and a red polygon - - data : indexable object, optional - An object with labelled data. If given, provide the label names to - plot in *x* and *y*, e.g.:: - - ax.fill("time", "signal", - data={"time": [0, 1, 2], "signal": [0, 1, 0]}) - - Returns - ------- - list of `~matplotlib.patches.Polygon` - - Other Parameters - ---------------- - **kwargs : `~matplotlib.patches.Polygon` properties - - Notes - ----- - Use :meth:`fill_between` if you would like to fill the region between - two curves. - """ - # For compatibility(!), get aliases from Line2D rather than Patch. - kwargs = cbook.normalize_kwargs(kwargs, mlines.Line2D) - # _get_patches_for_fill returns a generator, convert it to a list. - patches = [*self._get_patches_for_fill(*args, data=data, **kwargs)] - for poly in patches: - self.add_patch(poly) - self._request_autoscale_view() - return patches - - def _fill_between_x_or_y( - self, ind_dir, ind, dep1, dep2=0, *, - where=None, interpolate=False, step=None, **kwargs): - # Common implementation between fill_between (*ind_dir*="x") and - # fill_betweenx (*ind_dir*="y"). *ind* is the independent variable, - # *dep* the dependent variable. The docstring below is interpolated - # to generate both methods' docstrings. - """ - Fill the area between two {dir} curves. - - The curves are defined by the points (*{ind}*, *{dep}1*) and (*{ind}*, - *{dep}2*). This creates one or multiple polygons describing the filled - area. - - You may exclude some {dir} sections from filling using *where*. - - By default, the edges connect the given points directly. Use *step* - if the filling should be a step function, i.e. constant in between - *{ind}*. - - Parameters - ---------- - {ind} : array (length N) - The {ind} coordinates of the nodes defining the curves. - - {dep}1 : array (length N) or scalar - The {dep} coordinates of the nodes defining the first curve. - - {dep}2 : array (length N) or scalar, default: 0 - The {dep} coordinates of the nodes defining the second curve. - - where : array of bool (length N), optional - Define *where* to exclude some {dir} regions from being filled. - The filled regions are defined by the coordinates ``{ind}[where]``. - More precisely, fill between ``{ind}[i]`` and ``{ind}[i+1]`` if - ``where[i] and where[i+1]``. Note that this definition implies - that an isolated *True* value between two *False* values in *where* - will not result in filling. Both sides of the *True* position - remain unfilled due to the adjacent *False* values. - - interpolate : bool, default: False - This option is only relevant if *where* is used and the two curves - are crossing each other. - - Semantically, *where* is often used for *{dep}1* > *{dep}2* or - similar. By default, the nodes of the polygon defining the filled - region will only be placed at the positions in the *{ind}* array. - Such a polygon cannot describe the above semantics close to the - intersection. The {ind}-sections containing the intersection are - simply clipped. - - Setting *interpolate* to *True* will calculate the actual - intersection point and extend the filled region up to this point. - - step : {{'pre', 'post', 'mid'}}, optional - Define *step* if the filling should be a step function, - i.e. 
constant in between *{ind}*. The value determines where the - step will occur: - - - 'pre': The y value is continued constantly to the left from - every *x* position, i.e. the interval ``(x[i-1], x[i]]`` has the - value ``y[i]``. - - 'post': The y value is continued constantly to the right from - every *x* position, i.e. the interval ``[x[i], x[i+1])`` has the - value ``y[i]``. - - 'mid': Steps occur half-way between the *x* positions. - - Returns - ------- - `.PolyCollection` - A `.PolyCollection` containing the plotted polygons. - - Other Parameters - ---------------- - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - - **kwargs - All other keyword arguments are passed on to `.PolyCollection`. - They control the `.Polygon` properties: - - %(PolyCollection:kwdoc)s - - See Also - -------- - fill_between : Fill between two sets of y-values. - fill_betweenx : Fill between two sets of x-values. - """ - - dep_dir = {"x": "y", "y": "x"}[ind_dir] - - if not mpl.rcParams["_internal.classic_mode"]: - kwargs = cbook.normalize_kwargs(kwargs, mcoll.Collection) - if not any(c in kwargs for c in ("color", "facecolor")): - kwargs["facecolor"] = \ - self._get_patches_for_fill.get_next_color() - - # Handle united data, such as dates - ind, dep1, dep2 = map( - ma.masked_invalid, self._process_unit_info( - [(ind_dir, ind), (dep_dir, dep1), (dep_dir, dep2)], kwargs)) - - for name, array in [ - (ind_dir, ind), (f"{dep_dir}1", dep1), (f"{dep_dir}2", dep2)]: - if array.ndim > 1: - raise ValueError(f"{name!r} is not 1-dimensional") - - if where is None: - where = True - else: - where = np.asarray(where, dtype=bool) - if where.size != ind.size: - raise ValueError(f"where size ({where.size}) does not match " - f"{ind_dir} size ({ind.size})") - where = where & ~functools.reduce( - np.logical_or, map(np.ma.getmaskarray, [ind, dep1, dep2])) - - ind, dep1, dep2 = np.broadcast_arrays( - np.atleast_1d(ind), dep1, dep2, subok=True) - - polys = [] - for idx0, idx1 in cbook.contiguous_regions(where): - indslice = ind[idx0:idx1] - dep1slice = dep1[idx0:idx1] - dep2slice = dep2[idx0:idx1] - if step is not None: - step_func = cbook.STEP_LOOKUP_MAP["steps-" + step] - indslice, dep1slice, dep2slice = \ - step_func(indslice, dep1slice, dep2slice) - - if not len(indslice): - continue - - N = len(indslice) - pts = np.zeros((2 * N + 2, 2)) - - if interpolate: - def get_interp_point(idx): - im1 = max(idx - 1, 0) - ind_values = ind[im1:idx+1] - diff_values = dep1[im1:idx+1] - dep2[im1:idx+1] - dep1_values = dep1[im1:idx+1] - - if len(diff_values) == 2: - if np.ma.is_masked(diff_values[1]): - return ind[im1], dep1[im1] - elif np.ma.is_masked(diff_values[0]): - return ind[idx], dep1[idx] - - diff_order = diff_values.argsort() - diff_root_ind = np.interp( - 0, diff_values[diff_order], ind_values[diff_order]) - ind_order = ind_values.argsort() - diff_root_dep = np.interp( - diff_root_ind, - ind_values[ind_order], dep1_values[ind_order]) - return diff_root_ind, diff_root_dep - - start = get_interp_point(idx0) - end = get_interp_point(idx1) - else: - # Handle scalar dep2 (e.g. 0): the fill should go all - # the way down to 0 even if none of the dep1 sample points do. 
- start = indslice[0], dep2slice[0] - end = indslice[-1], dep2slice[-1] - - pts[0] = start - pts[N + 1] = end - - pts[1:N+1, 0] = indslice - pts[1:N+1, 1] = dep1slice - pts[N+2:, 0] = indslice[::-1] - pts[N+2:, 1] = dep2slice[::-1] - - if ind_dir == "y": - pts = pts[:, ::-1] - - polys.append(pts) - - collection = mcoll.PolyCollection(polys, **kwargs) - - # now update the datalim and autoscale - pts = np.row_stack([np.column_stack([ind[where], dep1[where]]), - np.column_stack([ind[where], dep2[where]])]) - if ind_dir == "y": - pts = pts[:, ::-1] - self.update_datalim(pts, updatex=True, updatey=True) - self.add_collection(collection, autolim=False) - self._request_autoscale_view() - return collection - - def fill_between(self, x, y1, y2=0, where=None, interpolate=False, - step=None, **kwargs): - return self._fill_between_x_or_y( - "x", x, y1, y2, - where=where, interpolate=interpolate, step=step, **kwargs) - - if _fill_between_x_or_y.__doc__: - fill_between.__doc__ = _fill_between_x_or_y.__doc__.format( - dir="horizontal", ind="x", dep="y" - ) - fill_between = _preprocess_data( - _docstring.dedent_interpd(fill_between), - replace_names=["x", "y1", "y2", "where"]) - - def fill_betweenx(self, y, x1, x2=0, where=None, - step=None, interpolate=False, **kwargs): - return self._fill_between_x_or_y( - "y", y, x1, x2, - where=where, interpolate=interpolate, step=step, **kwargs) - - if _fill_between_x_or_y.__doc__: - fill_betweenx.__doc__ = _fill_between_x_or_y.__doc__.format( - dir="vertical", ind="y", dep="x" - ) - fill_betweenx = _preprocess_data( - _docstring.dedent_interpd(fill_betweenx), - replace_names=["y", "x1", "x2", "where"]) - - #### plotting z(x, y): imshow, pcolor and relatives, contour - - @_preprocess_data() - @_docstring.interpd - def imshow(self, X, cmap=None, norm=None, *, aspect=None, - interpolation=None, alpha=None, - vmin=None, vmax=None, origin=None, extent=None, - interpolation_stage=None, filternorm=True, filterrad=4.0, - resample=None, url=None, **kwargs): - """ - Display data as an image, i.e., on a 2D regular raster. - - The input may either be actual RGB(A) data, or 2D scalar data, which - will be rendered as a pseudocolor image. For displaying a grayscale - image set up the colormapping using the parameters - ``cmap='gray', vmin=0, vmax=255``. - - The number of pixels used to render an image is set by the Axes size - and the *dpi* of the figure. This can lead to aliasing artifacts when - the image is resampled because the displayed image size will usually - not match the size of *X* (see - :doc:`/gallery/images_contours_and_fields/image_antialiasing`). - The resampling can be controlled via the *interpolation* parameter - and/or :rc:`image.interpolation`. - - Parameters - ---------- - X : array-like or PIL image - The image data. Supported array shapes are: - - - (M, N): an image with scalar data. The values are mapped to - colors using normalization and a colormap. See parameters *norm*, - *cmap*, *vmin*, *vmax*. - - (M, N, 3): an image with RGB values (0-1 float or 0-255 int). - - (M, N, 4): an image with RGBA values (0-1 float or 0-255 int), - i.e. including transparency. - - The first two dimensions (M, N) define the rows and columns of - the image. - - Out-of-range RGB(A) values are clipped. - - %(cmap_doc)s - - This parameter is ignored if *X* is RGB(A). - - %(norm_doc)s - - This parameter is ignored if *X* is RGB(A). - - %(vmin_vmax_doc)s - - This parameter is ignored if *X* is RGB(A). 
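A minimal sketch of the *where*/*interpolate* behavior documented above, via the public ``fill_between`` wrapper (the two curves are arbitrary):

```python
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 2 * np.pi, 100)
y1, y2 = np.sin(x), 0.5 * np.sin(2 * x)

fig, ax = plt.subplots()
ax.plot(x, y1, x, y2)
# Only fill where the first curve lies above the second;
# interpolate=True extends the fill to the exact crossing points.
ax.fill_between(x, y1, y2, where=y1 > y2, interpolate=True, alpha=0.3)
plt.show()
```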
- - aspect : {'equal', 'auto'} or float, default: :rc:`image.aspect` - The aspect ratio of the Axes. This parameter is particularly - relevant for images since it determines whether data pixels are - square. - - This parameter is a shortcut for explicitly calling - `.Axes.set_aspect`. See there for further details. - - - 'equal': Ensures an aspect ratio of 1. Pixels will be square - (unless pixel sizes are explicitly made non-square in data - coordinates using *extent*). - - 'auto': The Axes is kept fixed and the aspect is adjusted so - that the data fit in the Axes. In general, this will result in - non-square pixels. - - interpolation : str, default: :rc:`image.interpolation` - The interpolation method used. - - Supported values are 'none', 'antialiased', 'nearest', 'bilinear', - 'bicubic', 'spline16', 'spline36', 'hanning', 'hamming', 'hermite', - 'kaiser', 'quadric', 'catrom', 'gaussian', 'bessel', 'mitchell', - 'sinc', 'lanczos', 'blackman'. - - The data *X* is resampled to the pixel size of the image on the - figure canvas, using the interpolation method to either up- or - downsample the data. - - If *interpolation* is 'none', then for the ps, pdf, and svg - backends no down- or upsampling occurs, and the image data is - passed to the backend as a native image. Note that different ps, - pdf, and svg viewers may display these raw pixels differently. On - other backends, 'none' is the same as 'nearest'. - - If *interpolation* is the default 'antialiased', then 'nearest' - interpolation is used if the image is upsampled by more than a - factor of three (i.e. the number of display pixels is at least - three times the size of the data array). If the upsampling rate is - smaller than 3, or the image is downsampled, then 'hanning' - interpolation is used to act as an anti-aliasing filter, unless the - image happens to be upsampled by exactly a factor of two or one. - - See - :doc:`/gallery/images_contours_and_fields/interpolation_methods` - for an overview of the supported interpolation methods, and - :doc:`/gallery/images_contours_and_fields/image_antialiasing` for - a discussion of image antialiasing. - - Some interpolation methods require an additional radius parameter, - which can be set by *filterrad*. Additionally, the antigrain image - resize filter is controlled by the parameter *filternorm*. - - interpolation_stage : {'data', 'rgba'}, default: 'data' - If 'data', interpolation - is carried out on the data provided by the user. If 'rgba', the - interpolation is carried out after the colormapping has been - applied (visual interpolation). - - alpha : float or array-like, optional - The alpha blending value, between 0 (transparent) and 1 (opaque). - If *alpha* is an array, the alpha blending values are applied pixel - by pixel, and *alpha* must have the same shape as *X*. - - origin : {'upper', 'lower'}, default: :rc:`image.origin` - Place the [0, 0] index of the array in the upper left or lower - left corner of the Axes. The convention (the default) 'upper' is - typically used for matrices and images. - - Note that the vertical axis points upward for 'lower' - but downward for 'upper'. - - See the :doc:`/tutorials/intermediate/imshow_extent` tutorial for - examples and a more detailed description. - - extent : floats (left, right, bottom, top), optional - The bounding box in data coordinates that the image will fill. - These values may be unitful and match the units of the Axes. - The image is stretched individually along x and y to fill the box. 
- - The default extent is determined by the following conditions. - Pixels have unit size in data coordinates. Their centers are on - integer coordinates, and their center coordinates range from 0 to - columns-1 horizontally and from 0 to rows-1 vertically. - - Note that the direction of the vertical axis and thus the default - values for top and bottom depend on *origin*: - - - For ``origin == 'upper'`` the default is - ``(-0.5, numcols-0.5, numrows-0.5, -0.5)``. - - For ``origin == 'lower'`` the default is - ``(-0.5, numcols-0.5, -0.5, numrows-0.5)``. - - See the :doc:`/tutorials/intermediate/imshow_extent` tutorial for - examples and a more detailed description. - - filternorm : bool, default: True - A parameter for the antigrain image resize filter (see the - antigrain documentation). If *filternorm* is set, the filter - normalizes integer values and corrects the rounding errors. It - doesn't do anything with the source floating point values, it - corrects only integers according to the rule of 1.0 which means - that any sum of pixel weights must be equal to 1.0. So, the - filter function must produce a graph of the proper shape. - - filterrad : float > 0, default: 4.0 - The filter radius for filters that have a radius parameter, i.e. - when interpolation is one of: 'sinc', 'lanczos' or 'blackman'. - - resample : bool, default: :rc:`image.resample` - When *True*, use a full resampling method. When *False*, only - resample when the output image is larger than the input image. - - url : str, optional - Set the url of the created `.AxesImage`. See `.Artist.set_url`. - - Returns - ------- - `~matplotlib.image.AxesImage` - - Other Parameters - ---------------- - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - - **kwargs : `~matplotlib.artist.Artist` properties - These parameters are passed on to the constructor of the - `.AxesImage` artist. - - See Also - -------- - matshow : Plot a matrix or an array as an image. - - Notes - ----- - Unless *extent* is used, pixel centers will be located at integer - coordinates. In other words: the origin will coincide with the center - of pixel (0, 0). - - There are two common representations for RGB images with an alpha - channel: - - - Straight (unassociated) alpha: R, G, and B channels represent the - color of the pixel, disregarding its opacity. - - Premultiplied (associated) alpha: R, G, and B channels represent - the color of the pixel, adjusted for its opacity by multiplication. - - `~matplotlib.pyplot.imshow` expects RGB images adopting the straight - (unassociated) alpha representation. - """ - if aspect is None: - aspect = mpl.rcParams['image.aspect'] - self.set_aspect(aspect) - im = mimage.AxesImage(self, cmap=cmap, norm=norm, - interpolation=interpolation, origin=origin, - extent=extent, filternorm=filternorm, - filterrad=filterrad, resample=resample, - interpolation_stage=interpolation_stage, - **kwargs) - - im.set_data(X) - im.set_alpha(alpha) - if im.get_clip_path() is None: - # image does not already have clipping set, clip to axes patch - im.set_clip_path(self.patch) - im._scale_norm(norm, vmin, vmax) - im.set_url(url) - - # update ax.dataLim, and, if autoscaling, set viewLim - # to tightly fit the image, regardless of dataLim. 
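A minimal sketch of the *origin*/*extent* interaction described in the docstring above (the array contents and the extent rectangle are arbitrary):

```python
import matplotlib.pyplot as plt
import numpy as np

data = np.arange(100).reshape(10, 10)

fig, ax = plt.subplots()
# origin='lower' puts row 0 at the bottom; extent maps the pixel grid
# onto the data rectangle (left, right, bottom, top).
im = ax.imshow(data, origin="lower", extent=(0, 5, 0, 5),
               cmap="viridis", interpolation="nearest")
fig.colorbar(im, ax=ax)
plt.show()
```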
- im.set_extent(im.get_extent()) - - self.add_image(im) - return im - - def _pcolorargs(self, funcname, *args, shading='auto', **kwargs): - # - create X and Y if not present; - # - reshape X and Y as needed if they are 1-D; - # - check for proper sizes based on `shading` kwarg; - # - reset shading if shading='auto' to flat or nearest - # depending on size; - - _valid_shading = ['gouraud', 'nearest', 'flat', 'auto'] - try: - _api.check_in_list(_valid_shading, shading=shading) - except ValueError: - _api.warn_external(f"shading value '{shading}' not in list of " - f"valid values {_valid_shading}. Setting " - "shading='auto'.") - shading = 'auto' - - if len(args) == 1: - C = np.asanyarray(args[0]) - nrows, ncols = C.shape[:2] - if shading in ['gouraud', 'nearest']: - X, Y = np.meshgrid(np.arange(ncols), np.arange(nrows)) - else: - X, Y = np.meshgrid(np.arange(ncols + 1), np.arange(nrows + 1)) - shading = 'flat' - C = cbook.safe_masked_invalid(C, copy=True) - return X, Y, C, shading - - if len(args) == 3: - # Check x and y for bad data... - C = np.asanyarray(args[2]) - # unit conversion allows e.g. datetime objects as axis values - X, Y = args[:2] - X, Y = self._process_unit_info([("x", X), ("y", Y)], kwargs) - X, Y = [cbook.safe_masked_invalid(a, copy=True) for a in [X, Y]] - - if funcname == 'pcolormesh': - if np.ma.is_masked(X) or np.ma.is_masked(Y): - raise ValueError( - 'x and y arguments to pcolormesh cannot have ' - 'non-finite values or be of type ' - 'numpy.ma.core.MaskedArray with masked values') - # safe_masked_invalid() returns an ndarray for dtypes other - # than floating point. - if isinstance(X, np.ma.core.MaskedArray): - X = X.data # strip mask as downstream doesn't like it... - if isinstance(Y, np.ma.core.MaskedArray): - Y = Y.data - nrows, ncols = C.shape[:2] - else: - raise _api.nargs_error(funcname, takes="1 or 3", given=len(args)) - - Nx = X.shape[-1] - Ny = Y.shape[0] - if X.ndim != 2 or X.shape[0] == 1: - x = X.reshape(1, Nx) - X = x.repeat(Ny, axis=0) - if Y.ndim != 2 or Y.shape[1] == 1: - y = Y.reshape(Ny, 1) - Y = y.repeat(Nx, axis=1) - if X.shape != Y.shape: - raise TypeError(f'Incompatible X, Y inputs to {funcname}; ' - f'see help({funcname})') - - if shading == 'auto': - if ncols == Nx and nrows == Ny: - shading = 'nearest' - else: - shading = 'flat' - - if shading == 'flat': - if (Nx, Ny) != (ncols + 1, nrows + 1): - raise TypeError(f"Dimensions of C {C.shape} should" - f" be one smaller than X({Nx}) and Y({Ny})" - f" while using shading='flat'" - f" see help({funcname})") - else: # ['nearest', 'gouraud']: - if (Nx, Ny) != (ncols, nrows): - raise TypeError('Dimensions of C %s are incompatible with' - ' X (%d) and/or Y (%d); see help(%s)' % ( - C.shape, Nx, Ny, funcname)) - if shading == 'nearest': - # grid is specified at the center, so define corners - # at the midpoints between the grid centers and then use the - # flat algorithm. - def _interp_grid(X): - # helper for below - if np.shape(X)[1] > 1: - dX = np.diff(X, axis=1)/2. - if not (np.all(dX >= 0) or np.all(dX <= 0)): - _api.warn_external( - f"The input coordinates to {funcname} are " - "interpreted as cell centers, but are not " - "monotonically increasing or decreasing. " - "This may lead to incorrectly calculated cell " - "edges, in which case, please supply " - f"explicit cell edges to {funcname}.") - X = np.hstack((X[:, [0]] - dX[:, [0]], - X[:, :-1] + dX, - X[:, [-1]] + dX[:, [-1]])) - else: - # This is just degenerate, but we can't reliably guess - # a dX if there is just one value. 
- X = np.hstack((X, X)) - return X - - if ncols == Nx: - X = _interp_grid(X) - Y = _interp_grid(Y) - if nrows == Ny: - X = _interp_grid(X.T).T - Y = _interp_grid(Y.T).T - shading = 'flat' - - C = cbook.safe_masked_invalid(C, copy=True) - return X, Y, C, shading - - @_preprocess_data() - @_docstring.dedent_interpd - def pcolor(self, *args, shading=None, alpha=None, norm=None, cmap=None, - vmin=None, vmax=None, **kwargs): - r""" - Create a pseudocolor plot with a non-regular rectangular grid. - - Call signature:: - - pcolor([X, Y,] C, **kwargs) - - *X* and *Y* can be used to specify the corners of the quadrilaterals. - - .. hint:: - - ``pcolor()`` can be very slow for large arrays. In most - cases you should use the similar but much faster - `~.Axes.pcolormesh` instead. See - :ref:`Differences between pcolor() and pcolormesh() - ` for a discussion of the - differences. - - Parameters - ---------- - C : 2D array-like - The color-mapped values. Color-mapping is controlled by *cmap*, - *norm*, *vmin*, and *vmax*. - - X, Y : array-like, optional - The coordinates of the corners of quadrilaterals of a pcolormesh:: - - (X[i+1, j], Y[i+1, j]) (X[i+1, j+1], Y[i+1, j+1]) - ●╶───╴● - │ │ - ●╶───╴● - (X[i, j], Y[i, j]) (X[i, j+1], Y[i, j+1]) - - Note that the column index corresponds to the x-coordinate, and - the row index corresponds to y. For details, see the - :ref:`Notes ` section below. - - If ``shading='flat'`` the dimensions of *X* and *Y* should be one - greater than those of *C*, and the quadrilateral is colored due - to the value at ``C[i, j]``. If *X*, *Y* and *C* have equal - dimensions, a warning will be raised and the last row and column - of *C* will be ignored. - - If ``shading='nearest'``, the dimensions of *X* and *Y* should be - the same as those of *C* (if not, a ValueError will be raised). The - color ``C[i, j]`` will be centered on ``(X[i, j], Y[i, j])``. - - If *X* and/or *Y* are 1-D arrays or column vectors they will be - expanded as needed into the appropriate 2D arrays, making a - rectangular grid. - - shading : {'flat', 'nearest', 'auto'}, default: :rc:`pcolor.shading` - The fill style for the quadrilateral. Possible values: - - - 'flat': A solid color is used for each quad. The color of the - quad (i, j), (i+1, j), (i, j+1), (i+1, j+1) is given by - ``C[i, j]``. The dimensions of *X* and *Y* should be - one greater than those of *C*; if they are the same as *C*, - then a deprecation warning is raised, and the last row - and column of *C* are dropped. - - 'nearest': Each grid point will have a color centered on it, - extending halfway between the adjacent grid centers. The - dimensions of *X* and *Y* must be the same as *C*. - - 'auto': Choose 'flat' if dimensions of *X* and *Y* are one - larger than *C*. Choose 'nearest' if dimensions are the same. - - See :doc:`/gallery/images_contours_and_fields/pcolormesh_grids` - for more description. - - %(cmap_doc)s - - %(norm_doc)s - - %(vmin_vmax_doc)s - - edgecolors : {'none', None, 'face', color, color sequence}, optional - The color of the edges. Defaults to 'none'. Possible values: - - - 'none' or '': No edge. - - *None*: :rc:`patch.edgecolor` will be used. Note that currently - :rc:`patch.force_edgecolor` has to be True for this to work. - - 'face': Use the adjacent face color. - - A color or sequence of colors will set the edge color. - - The singular form *edgecolor* works as an alias. - - alpha : float, default: None - The alpha blending value of the face color, between 0 (transparent) - and 1 (opaque). 
Note: The edgecolor is currently not affected by - this. - - snap : bool, default: False - Whether to snap the mesh to pixel boundaries. - - Returns - ------- - `matplotlib.collections.Collection` - - Other Parameters - ---------------- - antialiaseds : bool, default: False - The default *antialiaseds* is False if the default - *edgecolors*\ ="none" is used. This eliminates artificial lines - at patch boundaries, and works regardless of the value of alpha. - If *edgecolors* is not "none", then the default *antialiaseds* - is taken from :rc:`patch.antialiased`. - Stroking the edges may be preferred if *alpha* is 1, but will - cause artifacts otherwise. - - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - - **kwargs - Additionally, the following arguments are allowed. They are passed - along to the `~matplotlib.collections.PolyCollection` constructor: - - %(PolyCollection:kwdoc)s - - See Also - -------- - pcolormesh : for an explanation of the differences between - pcolor and pcolormesh. - imshow : If *X* and *Y* are each equidistant, `~.Axes.imshow` can be a - faster alternative. - - Notes - ----- - **Masked arrays** - - *X*, *Y* and *C* may be masked arrays. If either ``C[i, j]``, or one - of the vertices surrounding ``C[i, j]`` (*X* or *Y* at - ``[i, j], [i+1, j], [i, j+1], [i+1, j+1]``) is masked, nothing is - plotted. - - .. _axes-pcolor-grid-orientation: - - **Grid orientation** - - The grid orientation follows the standard matrix convention: An array - *C* with shape (nrows, ncolumns) is plotted with the column number as - *X* and the row number as *Y*. - """ - - if shading is None: - shading = mpl.rcParams['pcolor.shading'] - shading = shading.lower() - X, Y, C, shading = self._pcolorargs('pcolor', *args, shading=shading, - kwargs=kwargs) - Ny, Nx = X.shape - - # convert to MA, if necessary. - C = ma.asarray(C) - X = ma.asarray(X) - Y = ma.asarray(Y) - - mask = ma.getmaskarray(X) + ma.getmaskarray(Y) - xymask = (mask[0:-1, 0:-1] + mask[1:, 1:] + - mask[0:-1, 1:] + mask[1:, 0:-1]) - # don't plot if C or any of the surrounding vertices are masked. - mask = ma.getmaskarray(C) + xymask - - unmask = ~mask - X1 = ma.filled(X[:-1, :-1])[unmask] - Y1 = ma.filled(Y[:-1, :-1])[unmask] - X2 = ma.filled(X[1:, :-1])[unmask] - Y2 = ma.filled(Y[1:, :-1])[unmask] - X3 = ma.filled(X[1:, 1:])[unmask] - Y3 = ma.filled(Y[1:, 1:])[unmask] - X4 = ma.filled(X[:-1, 1:])[unmask] - Y4 = ma.filled(Y[:-1, 1:])[unmask] - npoly = len(X1) - - xy = np.stack([X1, Y1, X2, Y2, X3, Y3, X4, Y4, X1, Y1], axis=-1) - verts = xy.reshape((npoly, 5, 2)) - - C = ma.filled(C[:Ny - 1, :Nx - 1])[unmask] - - linewidths = (0.25,) - if 'linewidth' in kwargs: - kwargs['linewidths'] = kwargs.pop('linewidth') - kwargs.setdefault('linewidths', linewidths) - - if 'edgecolor' in kwargs: - kwargs['edgecolors'] = kwargs.pop('edgecolor') - ec = kwargs.setdefault('edgecolors', 'none') - - # aa setting will default via collections to patch.antialiased - # unless the boundary is not stroked, in which case the - # default will be False; with unstroked boundaries, aa - # makes artifacts that are often disturbing. 
- if 'antialiased' in kwargs: - kwargs['antialiaseds'] = kwargs.pop('antialiased') - if 'antialiaseds' not in kwargs and cbook._str_lower_equal(ec, "none"): - kwargs['antialiaseds'] = False - - kwargs.setdefault('snap', False) - - collection = mcoll.PolyCollection( - verts, array=C, cmap=cmap, norm=norm, alpha=alpha, **kwargs) - collection._scale_norm(norm, vmin, vmax) - - x = X.compressed() - y = Y.compressed() - - # Transform from native to data coordinates? - t = collection._transform - if (not isinstance(t, mtransforms.Transform) and - hasattr(t, '_as_mpl_transform')): - t = t._as_mpl_transform(self.axes) - - if t and any(t.contains_branch_seperately(self.transData)): - trans_to_data = t - self.transData - pts = np.vstack([x, y]).T.astype(float) - transformed_pts = trans_to_data.transform(pts) - x = transformed_pts[..., 0] - y = transformed_pts[..., 1] - - self.add_collection(collection, autolim=False) - - minx = np.min(x) - maxx = np.max(x) - miny = np.min(y) - maxy = np.max(y) - collection.sticky_edges.x[:] = [minx, maxx] - collection.sticky_edges.y[:] = [miny, maxy] - corners = (minx, miny), (maxx, maxy) - self.update_datalim(corners) - self._request_autoscale_view() - return collection - - @_preprocess_data() - @_docstring.dedent_interpd - def pcolormesh(self, *args, alpha=None, norm=None, cmap=None, vmin=None, - vmax=None, shading=None, antialiased=False, **kwargs): - """ - Create a pseudocolor plot with a non-regular rectangular grid. - - Call signature:: - - pcolormesh([X, Y,] C, **kwargs) - - *X* and *Y* can be used to specify the corners of the quadrilaterals. - - .. hint:: - - `~.Axes.pcolormesh` is similar to `~.Axes.pcolor`. It is much faster - and preferred in most cases. For a detailed discussion on the - differences see :ref:`Differences between pcolor() and pcolormesh() - `. - - Parameters - ---------- - C : array-like - The mesh data. Supported array shapes are: - - - (M, N) or M*N: a mesh with scalar data. The values are mapped to - colors using normalization and a colormap. See parameters *norm*, - *cmap*, *vmin*, *vmax*. - - (M, N, 3): an image with RGB values (0-1 float or 0-255 int). - - (M, N, 4): an image with RGBA values (0-1 float or 0-255 int), - i.e. including transparency. - - The first two dimensions (M, N) define the rows and columns of - the mesh data. - - X, Y : array-like, optional - The coordinates of the corners of quadrilaterals of a pcolormesh:: - - (X[i+1, j], Y[i+1, j]) (X[i+1, j+1], Y[i+1, j+1]) - ●╶───╴● - │ │ - ●╶───╴● - (X[i, j], Y[i, j]) (X[i, j+1], Y[i, j+1]) - - Note that the column index corresponds to the x-coordinate, and - the row index corresponds to y. For details, see the - :ref:`Notes ` section below. - - If ``shading='flat'`` the dimensions of *X* and *Y* should be one - greater than those of *C*, and the quadrilateral is colored according - to the value at ``C[i, j]``. If *X*, *Y* and *C* have equal - dimensions, a warning will be raised and the last row and column - of *C* will be ignored. - - If ``shading='nearest'`` or ``'gouraud'``, the dimensions of *X* - and *Y* should be the same as those of *C* (if not, a ValueError - will be raised). For ``'nearest'`` the color ``C[i, j]`` is - centered on ``(X[i, j], Y[i, j])``. For ``'gouraud'``, a smooth - interpolation is carried out between the quadrilateral corners. - - If *X* and/or *Y* are 1-D arrays or column vectors they will be - expanded as needed into the appropriate 2D arrays, making a - rectangular grid.
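# An illustrative sketch of the size rules above (made-up data): with
# shading='flat', X and Y carry the N+1 / M+1 cell edges; with
# shading='nearest', they carry the N / M cell centers.
import matplotlib.pyplot as plt
import numpy as np

C = np.random.default_rng(0).random((3, 4))                       # M=3, N=4
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.pcolormesh(np.arange(5), np.arange(4), C, shading='flat')     # edges
ax2.pcolormesh(np.arange(4), np.arange(3), C, shading='nearest')  # centers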
- - %(cmap_doc)s - - %(norm_doc)s - - %(vmin_vmax_doc)s - - edgecolors : {'none', None, 'face', color, color sequence}, optional - The color of the edges. Defaults to 'none'. Possible values: - - - 'none' or '': No edge. - - *None*: :rc:`patch.edgecolor` will be used. Note that currently - :rc:`patch.force_edgecolor` has to be True for this to work. - - 'face': Use the adjacent face color. - - A color or sequence of colors will set the edge color. - - The singular form *edgecolor* works as an alias. - - alpha : float, default: None - The alpha blending value, between 0 (transparent) and 1 (opaque). - - shading : {'flat', 'nearest', 'gouraud', 'auto'}, optional - The fill style for the quadrilateral; defaults to - :rc:`pcolor.shading`. Possible values: - - - 'flat': A solid color is used for each quad. The color of the - quad (i, j), (i+1, j), (i, j+1), (i+1, j+1) is given by - ``C[i, j]``. The dimensions of *X* and *Y* should be - one greater than those of *C*; if they are the same as *C*, - then a deprecation warning is raised, and the last row - and column of *C* are dropped. - - 'nearest': Each grid point will have a color centered on it, - extending halfway between the adjacent grid centers. The - dimensions of *X* and *Y* must be the same as *C*. - - 'gouraud': Each quad will be Gouraud shaded: The colors of the - corners (i', j') are given by ``C[i', j']``. The color values of - the area in between are interpolated from the corner values. - The dimensions of *X* and *Y* must be the same as *C*. When - Gouraud shading is used, *edgecolors* is ignored. - - 'auto': Choose 'flat' if dimensions of *X* and *Y* are one - larger than *C*. Choose 'nearest' if dimensions are the same. - - See :doc:`/gallery/images_contours_and_fields/pcolormesh_grids` - for more description. - - snap : bool, default: False - Whether to snap the mesh to pixel boundaries. - - rasterized : bool, optional - Rasterize the pcolormesh when drawing vector graphics. This can - speed up rendering and produce smaller files for large data sets. - See also :doc:`/gallery/misc/rasterization_demo`. - - Returns - ------- - `matplotlib.collections.QuadMesh` - - Other Parameters - ---------------- - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - - **kwargs - Additionally, the following arguments are allowed. They are passed - along to the `~matplotlib.collections.QuadMesh` constructor: - - %(QuadMesh:kwdoc)s - - See Also - -------- - pcolor : An alternative implementation with slightly different - features. For a detailed discussion on the differences see - :ref:`Differences between pcolor() and pcolormesh() - `. - imshow : If *X* and *Y* are each equidistant, `~.Axes.imshow` can be a - faster alternative. - - Notes - ----- - **Masked arrays** - - *C* may be a masked array. If ``C[i, j]`` is masked, the corresponding - quadrilateral will be transparent. Masking of *X* and *Y* is not - supported. Use `~.Axes.pcolor` if you need this functionality. - - .. _axes-pcolormesh-grid-orientation: - - **Grid orientation** - - The grid orientation follows the standard matrix convention: An array - *C* with shape (nrows, ncolumns) is plotted with the column number as - *X* and the row number as *Y*. - - .. _differences-pcolor-pcolormesh: - - **Differences between pcolor() and pcolormesh()** - - Both methods are used to create a pseudocolor plot of a 2D array - using quadrilaterals.
- - The main difference lies in the created object and internal data - handling: - While `~.Axes.pcolor` returns a `.PolyCollection`, `~.Axes.pcolormesh` - returns a `.QuadMesh`. The latter is more specialized for the given - purpose and thus is faster. It should almost always be preferred. - - There is also a slight difference in the handling of masked arrays. - Both `~.Axes.pcolor` and `~.Axes.pcolormesh` support masked arrays - for *C*. However, only `~.Axes.pcolor` supports masked arrays for *X* - and *Y*. The reason lies in the internal handling of the masked values. - `~.Axes.pcolor` leaves out the respective polygons from the - PolyCollection. `~.Axes.pcolormesh` sets the facecolor of the masked - elements to transparent. You can see the difference when using - edgecolors. While all edges are drawn irrespective of masking in a - QuadMesh, the edge between two adjacent masked quadrilaterals in - `~.Axes.pcolor` is not drawn as the corresponding polygons do not - exist in the PolyCollection. - - Another difference is the support of Gouraud shading in - `~.Axes.pcolormesh`, which is not available with `~.Axes.pcolor`. - - """ - if shading is None: - shading = mpl.rcParams['pcolor.shading'] - shading = shading.lower() - kwargs.setdefault('edgecolors', 'none') - - X, Y, C, shading = self._pcolorargs('pcolormesh', *args, - shading=shading, kwargs=kwargs) - coords = np.stack([X, Y], axis=-1) - # convert to one dimensional array, except for 3D RGB(A) arrays - if C.ndim != 3: - C = C.ravel() - - kwargs.setdefault('snap', mpl.rcParams['pcolormesh.snap']) - - collection = mcoll.QuadMesh( - coords, antialiased=antialiased, shading=shading, - array=C, cmap=cmap, norm=norm, alpha=alpha, **kwargs) - collection._scale_norm(norm, vmin, vmax) - - coords = coords.reshape(-1, 2) # flatten the grid structure; keep x, y - - # Transform from native to data coordinates? - t = collection._transform - if (not isinstance(t, mtransforms.Transform) and - hasattr(t, '_as_mpl_transform')): - t = t._as_mpl_transform(self.axes) - - if t and any(t.contains_branch_seperately(self.transData)): - trans_to_data = t - self.transData - coords = trans_to_data.transform(coords) - - self.add_collection(collection, autolim=False) - - minx, miny = np.min(coords, axis=0) - maxx, maxy = np.max(coords, axis=0) - collection.sticky_edges.x[:] = [minx, maxx] - collection.sticky_edges.y[:] = [miny, maxy] - corners = (minx, miny), (maxx, maxy) - self.update_datalim(corners) - self._request_autoscale_view() - return collection - - @_preprocess_data() - @_docstring.dedent_interpd - def pcolorfast(self, *args, alpha=None, norm=None, cmap=None, vmin=None, - vmax=None, **kwargs): - """ - Create a pseudocolor plot with a non-regular rectangular grid. - - Call signature:: - - ax.pcolorfast([X, Y], C, /, **kwargs) - - This method is similar to `~.Axes.pcolor` and `~.Axes.pcolormesh`. - It's designed to provide the fastest pcolor-type plotting with the - Agg backend. To achieve this, it uses different algorithms internally - depending on the complexity of the input grid (regular rectangular, - non-regular rectangular or arbitrary quadrilateral). - - .. warning:: - - This method is experimental. Compared to `~.Axes.pcolor` or - `~.Axes.pcolormesh` it has some limitations: - - - It supports only flat shading (no outlines) - - It lacks support for log scaling of the axes. - - It does not have a pyplot wrapper. - - Parameters - ---------- - C : array-like - The image data. Supported array shapes are: - - - (M, N): an image with scalar data. 
Color-mapping is controlled - by *cmap*, *norm*, *vmin*, and *vmax*. - - (M, N, 3): an image with RGB values (0-1 float or 0-255 int). - - (M, N, 4): an image with RGBA values (0-1 float or 0-255 int), - i.e. including transparency. - - The first two dimensions (M, N) define the rows and columns of - the image. - - This parameter can only be passed positionally. - - X, Y : tuple or array-like, default: ``(0, N)``, ``(0, M)`` - *X* and *Y* are used to specify the coordinates of the - quadrilaterals. There are different ways to do this: - - - Use tuples ``X=(xmin, xmax)`` and ``Y=(ymin, ymax)`` to define - a *uniform rectangular grid*. - - The tuples define the outer edges of the grid. All individual - quadrilaterals will be of the same size. This is the fastest - version. - - - Use 1D arrays *X*, *Y* to specify a *non-uniform rectangular - grid*. - - In this case *X* and *Y* have to be monotonic 1D arrays of length - *N+1* and *M+1*, specifying the x and y boundaries of the cells. - - The speed is intermediate. Note: The grid is checked, and if - found to be uniform the fast version is used. - - - Use 2D arrays *X*, *Y* if you need an *arbitrary quadrilateral - grid* (i.e. if the quadrilaterals are not rectangular). - - In this case *X* and *Y* are 2D arrays with shape (M + 1, N + 1), - specifying the x and y coordinates of the corners of the colored - quadrilaterals. - - This is the most general, but the slowest to render. It may - produce faster and more compact output using ps, pdf, and - svg backends, however. - - These arguments can only be passed positionally. - - %(cmap_doc)s - - This parameter is ignored if *C* is RGB(A). - - %(norm_doc)s - - This parameter is ignored if *C* is RGB(A). - - %(vmin_vmax_doc)s - - This parameter is ignored if *C* is RGB(A). - - alpha : float, default: None - The alpha blending value, between 0 (transparent) and 1 (opaque). - - snap : bool, default: False - Whether to snap the mesh to pixel boundaries. - - Returns - ------- - `.AxesImage` or `.PcolorImage` or `.QuadMesh` - The return type depends on the type of grid: - - - `.AxesImage` for a regular rectangular grid. - - `.PcolorImage` for a non-regular rectangular grid. - - `.QuadMesh` for a non-rectangular grid. - - Other Parameters - ---------------- - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - - **kwargs - Supported additional parameters depend on the type of grid. - See return types of *image* for further description. 
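# A sketch of the three grid flavours accepted above, with made-up data;
# the return type differs per branch (AxesImage, PcolorImage, QuadMesh).
import matplotlib.pyplot as plt
import numpy as np

C = np.random.default_rng(1).random((4, 6))            # M=4, N=6
fig, axs = plt.subplots(3, 1)
axs[0].pcolorfast(C)                                   # uniform grid, fastest
axs[1].pcolorfast(np.array([0, 1, 2, 4, 8, 16, 32.]),  # 1-D edges, length N+1
                  np.array([0, 1, 3, 6, 10.]), C)      # 1-D edges, length M+1
Xq, Yq = np.meshgrid(np.arange(7.), np.arange(5.))     # 2-D corners, (M+1, N+1)
axs[2].pcolorfast(Xq + 0.1 * Yq, Yq, C)                # arbitrary quadrilaterals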
- """ - - C = args[-1] - nr, nc = np.shape(C)[:2] - if len(args) == 1: - style = "image" - x = [0, nc] - y = [0, nr] - elif len(args) == 3: - x, y = args[:2] - x = np.asarray(x) - y = np.asarray(y) - if x.ndim == 1 and y.ndim == 1: - if x.size == 2 and y.size == 2: - style = "image" - else: - dx = np.diff(x) - dy = np.diff(y) - if (np.ptp(dx) < 0.01 * abs(dx.mean()) and - np.ptp(dy) < 0.01 * abs(dy.mean())): - style = "image" - else: - style = "pcolorimage" - elif x.ndim == 2 and y.ndim == 2: - style = "quadmesh" - else: - raise TypeError("arguments do not match valid signatures") - else: - raise TypeError("need 1 argument or 3 arguments") - - if style == "quadmesh": - # data point in each cell is value at lower left corner - coords = np.stack([x, y], axis=-1) - if np.ndim(C) not in {2, 3}: - raise ValueError("C must be 2D or 3D") - collection = mcoll.QuadMesh( - coords, array=C, - alpha=alpha, cmap=cmap, norm=norm, - antialiased=False, edgecolors="none") - self.add_collection(collection, autolim=False) - xl, xr, yb, yt = x.min(), x.max(), y.min(), y.max() - ret = collection - - else: # It's one of the two image styles. - extent = xl, xr, yb, yt = x[0], x[-1], y[0], y[-1] - if style == "image": - im = mimage.AxesImage( - self, cmap=cmap, norm=norm, - data=C, alpha=alpha, extent=extent, - interpolation='nearest', origin='lower', - **kwargs) - elif style == "pcolorimage": - im = mimage.PcolorImage( - self, x, y, C, - cmap=cmap, norm=norm, alpha=alpha, extent=extent, - **kwargs) - self.add_image(im) - ret = im - - if np.ndim(C) == 2: # C.ndim == 3 is RGB(A) so doesn't need scaling. - ret._scale_norm(norm, vmin, vmax) - - if ret.get_clip_path() is None: - # image does not already have clipping set, clip to axes patch - ret.set_clip_path(self.patch) - - ret.sticky_edges.x[:] = [xl, xr] - ret.sticky_edges.y[:] = [yb, yt] - self.update_datalim(np.array([[xl, yb], [xr, yt]])) - self._request_autoscale_view(tight=True) - return ret - - @_preprocess_data() - @_docstring.dedent_interpd - def contour(self, *args, **kwargs): - """ - Plot contour lines. - - Call signature:: - - contour([X, Y,] Z, [levels], **kwargs) - %(contour_doc)s - """ - kwargs['filled'] = False - contours = mcontour.QuadContourSet(self, *args, **kwargs) - self._request_autoscale_view() - return contours - - @_preprocess_data() - @_docstring.dedent_interpd - def contourf(self, *args, **kwargs): - """ - Plot filled contours. - - Call signature:: - - contourf([X, Y,] Z, [levels], **kwargs) - %(contour_doc)s - """ - kwargs['filled'] = True - contours = mcontour.QuadContourSet(self, *args, **kwargs) - self._request_autoscale_view() - return contours - - def clabel(self, CS, levels=None, **kwargs): - """ - Label a contour plot. - - Adds labels to line contours in given `.ContourSet`. - - Parameters - ---------- - CS : `.ContourSet` instance - Line contours to label. - - levels : array-like, optional - A list of level values, that should be labeled. The list must be - a subset of ``CS.levels``. If not given, all levels are labeled. - - **kwargs - All other parameters are documented in `~.ContourLabeler.clabel`. - """ - return CS.clabel(levels, **kwargs) - - #### Data analysis - - @_preprocess_data(replace_names=["x", 'weights'], label_namer="x") - def hist(self, x, bins=None, range=None, density=False, weights=None, - cumulative=False, bottom=None, histtype='bar', align='mid', - orientation='vertical', rwidth=None, log=False, - color=None, label=None, stacked=False, **kwargs): - """ - Compute and plot a histogram. 
- - This method uses `numpy.histogram` to bin the data in *x* and count the - number of values in each bin, then draws the distribution either as a - `.BarContainer` or `.Polygon`. The *bins*, *range*, *density*, and - *weights* parameters are forwarded to `numpy.histogram`. - - If the data has already been binned and counted, use `~.bar` or - `~.stairs` to plot the distribution:: - - counts, bins = np.histogram(x) - plt.stairs(counts, bins) - - Alternatively, plot pre-computed bins and counts using ``hist()`` by - treating each bin as a single point with a weight equal to its count:: - - plt.hist(bins[:-1], bins, weights=counts) - - The data input *x* can be a singular array, a list of datasets of - potentially different lengths ([*x0*, *x1*, ...]), or a 2D ndarray in - which each column is a dataset. Note that the ndarray form is - transposed relative to the list form. If the input is an array, then - the return value is a tuple (*n*, *bins*, *patches*); if the input is a - sequence of arrays, then the return value is a tuple - ([*n0*, *n1*, ...], *bins*, [*patches0*, *patches1*, ...]). - - Masked arrays are not supported. - - Parameters - ---------- - x : (n,) array or sequence of (n,) arrays - Input values, this takes either a single array or a sequence of - arrays which are not required to be of the same length. - - bins : int or sequence or str, default: :rc:`hist.bins` - If *bins* is an integer, it defines the number of equal-width bins - in the range. - - If *bins* is a sequence, it defines the bin edges, including the - left edge of the first bin and the right edge of the last bin; - in this case, bins may be unequally spaced. All but the last - (righthand-most) bin is half-open. In other words, if *bins* is:: - - [1, 2, 3, 4] - - then the first bin is ``[1, 2)`` (including 1, but excluding 2) and - the second ``[2, 3)``. The last bin, however, is ``[3, 4]``, which - *includes* 4. - - If *bins* is a string, it is one of the binning strategies - supported by `numpy.histogram_bin_edges`: 'auto', 'fd', 'doane', - 'scott', 'stone', 'rice', 'sturges', or 'sqrt'. - - range : tuple or None, default: None - The lower and upper range of the bins. Lower and upper outliers - are ignored. If not provided, *range* is ``(x.min(), x.max())``. - Range has no effect if *bins* is a sequence. - - If *bins* is a sequence or *range* is specified, autoscaling - is based on the specified bin range instead of the - range of x. - - density : bool, default: False - If ``True``, draw and return a probability density: each bin - will display the bin's raw count divided by the total number of - counts *and the bin width* - (``density = counts / (sum(counts) * np.diff(bins))``), - so that the area under the histogram integrates to 1 - (``np.sum(density * np.diff(bins)) == 1``). - - If *stacked* is also ``True``, the sum of the histograms is - normalized to 1. - - weights : (n,) array-like or None, default: None - An array of weights, of the same shape as *x*. Each value in - *x* only contributes its associated weight towards the bin count - (instead of 1). If *density* is ``True``, the weights are - normalized, so that the integral of the density over the range - remains 1. - - cumulative : bool or -1, default: False - If ``True``, then a histogram is computed where each bin gives the - counts in that bin plus all bins for smaller values. The last bin - gives the total number of datapoints. - - If *density* is also ``True`` then the histogram is normalized such - that the last bin equals 1. 
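# A quick numeric check of the *density* semantics described above
# (illustrative data, not part of the module):
import matplotlib.pyplot as plt
import numpy as np

n, bins, _ = plt.hist(np.random.default_rng(42).normal(size=1000),
                      bins=30, density=True)
print(np.sum(n * np.diff(bins)))   # -> 1.0, the area under the histogram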
- - If *cumulative* is a number less than 0 (e.g., -1), the direction - of accumulation is reversed. In this case, if *density* is also - ``True``, then the histogram is normalized such that the first bin - equals 1. - - bottom : array-like, scalar, or None, default: None - Location of the bottom of each bin, i.e. bins are drawn from - ``bottom`` to ``bottom + hist(x, bins)``. If a scalar, the bottom - of each bin is shifted by the same amount. If an array, each bin - is shifted independently and the length of bottom must match the - number of bins. If None, defaults to 0. - - histtype : {'bar', 'barstacked', 'step', 'stepfilled'}, default: 'bar' - The type of histogram to draw. - - - 'bar' is a traditional bar-type histogram. If multiple data - are given the bars are arranged side by side. - - 'barstacked' is a bar-type histogram where multiple - data are stacked on top of each other. - - 'step' generates a lineplot that is by default unfilled. - - 'stepfilled' generates a lineplot that is by default filled. - - align : {'left', 'mid', 'right'}, default: 'mid' - The horizontal alignment of the histogram bars. - - - 'left': bars are centered on the left bin edges. - - 'mid': bars are centered between the bin edges. - - 'right': bars are centered on the right bin edges. - - orientation : {'vertical', 'horizontal'}, default: 'vertical' - If 'horizontal', `~.Axes.barh` will be used for bar-type histograms - and the *bottom* kwarg will be the left edges. - - rwidth : float or None, default: None - The relative width of the bars as a fraction of the bin width. If - ``None``, automatically compute the width. - - Ignored if *histtype* is 'step' or 'stepfilled'. - - log : bool, default: False - If ``True``, the histogram axis will be set to a log scale. - - color : color or array-like of colors or None, default: None - Color or sequence of colors, one per dataset. Default (``None``) - uses the standard line color sequence. - - label : str or None, default: None - String, or sequence of strings to match multiple datasets. Bar - charts yield multiple patches per dataset, but only the first gets - the label, so that `~.Axes.legend` will work as expected. - - stacked : bool, default: False - If ``True``, multiple data are stacked on top of each other. If - ``False``, multiple data are arranged side by side if histtype is - 'bar', or on top of each other if histtype is 'step'. - - Returns - ------- - n : array or list of arrays - The values of the histogram bins. See *density* and *weights* for a - description of the possible semantics. If input *x* is an array, - then this is an array of length *nbins*. If input is a sequence of - arrays ``[data1, data2, ...]``, then this is a list of arrays with - the values of the histograms for each of the arrays in the same - order. The dtype of the array *n* (or of its element arrays) will - always be float even if no weighting or normalization is used. - - bins : array - The edges of the bins. Length nbins + 1 (nbins left edges and right - edge of last bin). Always a single array even when multiple data - sets are passed in. - - patches : `.BarContainer` or list of a single `.Polygon` or list of \ -such objects - Container of individual artists used to create the histogram - or list of such containers if there are multiple input datasets.
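# A sketch of the two pre-binned workflows recommended above: plot the
# counts directly with stairs(), or feed them back through hist() via
# *weights* (made-up data).
import matplotlib.pyplot as plt
import numpy as np

data = np.random.default_rng(7).exponential(size=500)
counts, edges = np.histogram(data, bins=25)
plt.stairs(counts, edges)                       # direct
plt.hist(edges[:-1], edges, weights=counts)     # equivalent, via hist()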
- - Other Parameters - ---------------- - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - - **kwargs - `~matplotlib.patches.Patch` properties - - See Also - -------- - hist2d : 2D histogram with rectangular bins - hexbin : 2D histogram with hexagonal bins - stairs : Plot a pre-computed histogram - bar : Plot a pre-computed histogram - - Notes - ----- - For large numbers of bins (>1000), plotting can be significantly - accelerated by using `~.Axes.stairs` to plot a pre-computed histogram - (``plt.stairs(*np.histogram(data))``), or by setting *histtype* to - 'step' or 'stepfilled' rather than 'bar' or 'barstacked'. - """ - # Avoid shadowing the builtin. - bin_range = range - from builtins import range - - if np.isscalar(x): - x = [x] - - if bins is None: - bins = mpl.rcParams['hist.bins'] - - # Validate string inputs here to avoid cluttering subsequent code. - _api.check_in_list(['bar', 'barstacked', 'step', 'stepfilled'], - histtype=histtype) - _api.check_in_list(['left', 'mid', 'right'], align=align) - _api.check_in_list(['horizontal', 'vertical'], orientation=orientation) - - if histtype == 'barstacked' and not stacked: - stacked = True - - # Massage 'x' for processing. - x = cbook._reshape_2D(x, 'x') - nx = len(x) # number of datasets - - # Process unit information. _process_unit_info sets the unit and - # converts the first dataset; then we convert each following dataset - # one at a time. - if orientation == "vertical": - convert_units = self.convert_xunits - x = [*self._process_unit_info([("x", x[0])], kwargs), - *map(convert_units, x[1:])] - else: # horizontal - convert_units = self.convert_yunits - x = [*self._process_unit_info([("y", x[0])], kwargs), - *map(convert_units, x[1:])] - - if bin_range is not None: - bin_range = convert_units(bin_range) - - if not cbook.is_scalar_or_string(bins): - bins = convert_units(bins) - - # We need to do to 'weights' what was done to 'x' - if weights is not None: - w = cbook._reshape_2D(weights, 'weights') - else: - w = [None] * nx - - if len(w) != nx: - raise ValueError('weights should have the same shape as x') - - input_empty = True - for xi, wi in zip(x, w): - len_xi = len(xi) - if wi is not None and len(wi) != len_xi: - raise ValueError('weights should have the same shape as x') - if len_xi: - input_empty = False - - if color is None: - colors = [self._get_lines.get_next_color() for i in range(nx)] - else: - colors = mcolors.to_rgba_array(color) - if len(colors) != nx: - raise ValueError(f"The 'color' keyword argument must have one " - f"color per dataset, but {nx} datasets and " - f"{len(colors)} colors were provided") - - hist_kwargs = dict() - - # If bin_range is not given, compute it ignoring NaNs; numpy - # does not do this for us when guessing the range (but will - # happily ignore nans when computing the histogram). - if bin_range is None: - xmin = np.inf - xmax = -np.inf - for xi in x: - if len(xi): - # python's min/max ignore nan, - # np.nanmin returns nan for all-nan input - xmin = min(xmin, np.nanmin(xi)) - xmax = max(xmax, np.nanmax(xi)) - if xmin <= xmax: # Only happens if we have seen a finite value. - bin_range = (xmin, xmax) - - # If bins are not specified either explicitly or via range, - # we need to figure out the range required for all datasets, - # and supply that to np.histogram.
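# Standalone illustration of the step above: when several datasets share
# one histogram, common edges are computed once so every dataset is binned
# identically (np.histogram_bin_edges is the public equivalent).
import numpy as np

rng = np.random.default_rng(3)
a, b = rng.normal(0, 1, 300), rng.normal(2, 0.5, 200)
edges = np.histogram_bin_edges(np.concatenate([a, b]), bins=20)
na, _ = np.histogram(a, edges)
nb, _ = np.histogram(b, edges)   # same edges for both datasets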
- if not input_empty and len(x) > 1: - if weights is not None: - _w = np.concatenate(w) - else: - _w = None - bins = np.histogram_bin_edges( - np.concatenate(x), bins, bin_range, _w) - else: - hist_kwargs['range'] = bin_range - - density = bool(density) - if density and not stacked: - hist_kwargs['density'] = density - - # List to store all the top coordinates of the histograms - tops = [] # Will have shape (n_datasets, n_bins). - # Loop through datasets - for i in range(nx): - # this will automatically overwrite bins, - # so that each histogram uses the same bins - m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs) - tops.append(m) - tops = np.array(tops, float) # causes problems later if it's an int - bins = np.array(bins, float) # causes problems if float16 - if stacked: - tops = tops.cumsum(axis=0) - # If a stacked density plot, normalize so the area of all the - # stacked histograms together is 1 - if density: - tops = (tops / np.diff(bins)) / tops[-1].sum() - if cumulative: - slc = slice(None) - if isinstance(cumulative, Number) and cumulative < 0: - slc = slice(None, None, -1) - if density: - tops = (tops * np.diff(bins))[:, slc].cumsum(axis=1)[:, slc] - else: - tops = tops[:, slc].cumsum(axis=1)[:, slc] - - patches = [] - - if histtype.startswith('bar'): - - totwidth = np.diff(bins) - - if rwidth is not None: - dr = np.clip(rwidth, 0, 1) - elif (len(tops) > 1 and - ((not stacked) or mpl.rcParams['_internal.classic_mode'])): - dr = 0.8 - else: - dr = 1.0 - - if histtype == 'bar' and not stacked: - width = dr * totwidth / nx - dw = width - boffset = -0.5 * dr * totwidth * (1 - 1 / nx) - elif histtype == 'barstacked' or stacked: - width = dr * totwidth - boffset, dw = 0.0, 0.0 - - if align == 'mid': - boffset += 0.5 * totwidth - elif align == 'right': - boffset += totwidth - - if orientation == 'horizontal': - _barfunc = self.barh - bottom_kwarg = 'left' - else: # orientation == 'vertical' - _barfunc = self.bar - bottom_kwarg = 'bottom' - - for top, color in zip(tops, colors): - if bottom is None: - bottom = np.zeros(len(top)) - if stacked: - height = top - bottom - else: - height = top - bars = _barfunc(bins[:-1]+boffset, height, width, - align='center', log=log, - color=color, **{bottom_kwarg: bottom}) - patches.append(bars) - if stacked: - bottom = top - boffset += dw - # Remove stickies from all bars but the lowest ones, as otherwise - # margin expansion would be unable to cross the stickies in the - # middle of the bars. 
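# Standalone sketch of the side-by-side layout arithmetic above: for nx
# unstacked datasets each bar gets dr/nx of the bin width, and the group is
# centred by the initial offset (hypothetical numbers, two unit-width bins).
import numpy as np

totwidth, nx, dr = np.array([1.0, 1.0]), 3, 0.8
width = dr * totwidth / nx                      # -> [0.267, 0.267] per bar
boffset = -0.5 * dr * totwidth * (1 - 1 / nx)   # -> [-0.267, -0.267]
# each successive dataset is then shifted right by `width`.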
- for bars in patches[1:]: - for patch in bars: - patch.sticky_edges.x[:] = patch.sticky_edges.y[:] = [] - - elif histtype.startswith('step'): - # these define the perimeter of the polygon - x = np.zeros(4 * len(bins) - 3) - y = np.zeros(4 * len(bins) - 3) - - x[0:2*len(bins)-1:2], x[1:2*len(bins)-1:2] = bins, bins[:-1] - x[2*len(bins)-1:] = x[1:2*len(bins)-1][::-1] - - if bottom is None: - bottom = 0 - - y[1:2*len(bins)-1:2] = y[2:2*len(bins):2] = bottom - y[2*len(bins)-1:] = y[1:2*len(bins)-1][::-1] - - if log: - if orientation == 'horizontal': - self.set_xscale('log', nonpositive='clip') - else: # orientation == 'vertical' - self.set_yscale('log', nonpositive='clip') - - if align == 'left': - x -= 0.5*(bins[1]-bins[0]) - elif align == 'right': - x += 0.5*(bins[1]-bins[0]) - - # If fill kwarg is set, it will be passed to the patch collection, - # overriding this - fill = (histtype == 'stepfilled') - - xvals, yvals = [], [] - for top in tops: - if stacked: - # top of the previous polygon becomes the bottom - y[2*len(bins)-1:] = y[1:2*len(bins)-1][::-1] - # set the top of this polygon - y[1:2*len(bins)-1:2] = y[2:2*len(bins):2] = top + bottom - - # The starting point of the polygon has not yet been - # updated. So far only the endpoint was adjusted. This - # assignment closes the polygon. The redundant endpoint is - # later discarded (for step and stepfilled). - y[0] = y[-1] - - if orientation == 'horizontal': - xvals.append(y.copy()) - yvals.append(x.copy()) - else: - xvals.append(x.copy()) - yvals.append(y.copy()) - - # stepfill is closed, step is not - split = -1 if fill else 2 * len(bins) - # add patches in reverse order so that when stacking, - # items lower in the stack are plotted on top of - # items higher in the stack - for x, y, color in reversed(list(zip(xvals, yvals, colors))): - patches.append(self.fill( - x[:split], y[:split], - closed=True if fill else None, - facecolor=color, - edgecolor=None if fill else color, - fill=fill if fill else None, - zorder=None if fill else mlines.Line2D.zorder)) - for patch_list in patches: - for patch in patch_list: - if orientation == 'vertical': - patch.sticky_edges.y.append(0) - elif orientation == 'horizontal': - patch.sticky_edges.x.append(0) - - # we return patches, so put it back in the expected order - patches.reverse() - - # If None, make all labels None (via zip_longest below); otherwise, - # cast each element to str, but keep a single str as is. - labels = [] if label is None else np.atleast_1d(np.asarray(label, str)) - for patch, lbl in itertools.zip_longest(patches, labels): - if patch: - p = patch[0] - p._internal_update(kwargs) - if lbl is not None: - p.set_label(lbl) - for p in patch[1:]: - p._internal_update(kwargs) - p.set_label('_nolegend_') - - if nx == 1: - return tops[0], bins, patches[0] - else: - patch_type = ("BarContainer" if histtype.startswith("bar") - else "list[Polygon]") - return tops, bins, cbook.silent_list(patch_type, patches) - - @_preprocess_data() - def stairs(self, values, edges=None, *, - orientation='vertical', baseline=0, fill=False, **kwargs): - """ - A stepwise constant function as a line with bounding edges - or a filled plot. - - Parameters - ---------- - values : array-like - The step heights. - - edges : array-like - The edge positions, with ``len(edges) == len(values) + 1``, - between which the curve takes on the *values*. - - orientation : {'vertical', 'horizontal'}, default: 'vertical' - The direction of the steps.
Vertical means that *values* are along - the y-axis, and edges are along the x-axis. - - baseline : float, array-like or None, default: 0 - The bottom value of the bounding edges or when - ``fill=True``, position of lower edge. If *fill* is - True or an array is passed to *baseline*, a closed - path is drawn. - - fill : bool, default: False - Whether the area under the step curve should be filled. - - Returns - ------- - StepPatch : `~matplotlib.patches.StepPatch` - - Other Parameters - ---------------- - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - - **kwargs - `~matplotlib.patches.StepPatch` properties - - """ - - if 'color' in kwargs: - _color = kwargs.pop('color') - else: - _color = self._get_lines.get_next_color() - if fill: - kwargs.setdefault('linewidth', 0) - kwargs.setdefault('facecolor', _color) - else: - kwargs.setdefault('edgecolor', _color) - - if edges is None: - edges = np.arange(len(values) + 1) - - edges, values, baseline = self._process_unit_info( - [("x", edges), ("y", values), ("y", baseline)], kwargs) - - patch = mpatches.StepPatch(values, - edges, - baseline=baseline, - orientation=orientation, - fill=fill, - **kwargs) - self.add_patch(patch) - if baseline is None: - baseline = 0 - if orientation == 'vertical': - patch.sticky_edges.y.append(np.min(baseline)) - self.update_datalim([(edges[0], np.min(baseline))]) - else: - patch.sticky_edges.x.append(np.min(baseline)) - self.update_datalim([(np.min(baseline), edges[0])]) - self._request_autoscale_view() - return patch - - @_preprocess_data(replace_names=["x", "y", "weights"]) - @_docstring.dedent_interpd - def hist2d(self, x, y, bins=10, range=None, density=False, weights=None, - cmin=None, cmax=None, **kwargs): - """ - Make a 2D histogram plot. - - Parameters - ---------- - x, y : array-like, shape (n, ) - Input values - - bins : None or int or [int, int] or array-like or [array, array] - - The bin specification: - - - If int, the number of bins for the two dimensions - (nx=ny=bins). - - If ``[int, int]``, the number of bins in each dimension - (nx, ny = bins). - - If array-like, the bin edges for the two dimensions - (x_edges=y_edges=bins). - - If ``[array, array]``, the bin edges in each dimension - (x_edges, y_edges = bins). - - The default value is 10. - - range : array-like shape(2, 2), optional - The leftmost and rightmost edges of the bins along each dimension - (if not specified explicitly in the bins parameters): ``[[xmin, - xmax], [ymin, ymax]]``. All values outside of this range will be - considered outliers and not tallied in the histogram. - - density : bool, default: False - Normalize histogram. See the documentation for the *density* - parameter of `~.Axes.hist` for more details. - - weights : array-like, shape (n, ), optional - An array of values w_i weighing each sample (x_i, y_i). - - cmin, cmax : float, default: None - All bins that have a count less than *cmin* or more than *cmax* will - not be displayed (set to NaN before passing to imshow), and these - count values will also be set to nan in the returned count - histogram. - - Returns - ------- - h : 2D array - The bi-dimensional histogram of samples x and y. Values in x are - histogrammed along the first dimension and values in y are - histogrammed along the second dimension. - xedges : 1D array - The bin edges along the x-axis. - yedges : 1D array - The bin edges along the y-axis.
- image : `~.matplotlib.collections.QuadMesh` - - Other Parameters - ---------------- - %(cmap_doc)s - - %(norm_doc)s - - %(vmin_vmax_doc)s - - alpha : ``0 <= scalar <= 1`` or ``None``, optional - The alpha blending value. - - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - - **kwargs - Additional parameters are passed along to the - `~.Axes.pcolormesh` method and `~matplotlib.collections.QuadMesh` - constructor. - - See Also - -------- - hist : 1D histogram plotting - hexbin : 2D histogram with hexagonal bins - - Notes - ----- - - Currently ``hist2d`` calculates its own axis limits, and any limits - previously set are ignored. - - Rendering the histogram with a logarithmic color scale is - accomplished by passing a `.colors.LogNorm` instance to the *norm* - keyword argument. Likewise, power-law normalization (similar - in effect to gamma correction) can be accomplished with - `.colors.PowerNorm`. - """ - - h, xedges, yedges = np.histogram2d(x, y, bins=bins, range=range, - density=density, weights=weights) - - if cmin is not None: - h[h < cmin] = None - if cmax is not None: - h[h > cmax] = None - - pc = self.pcolormesh(xedges, yedges, h.T, **kwargs) - self.set_xlim(xedges[0], xedges[-1]) - self.set_ylim(yedges[0], yedges[-1]) - - return h, xedges, yedges, pc - - @_preprocess_data(replace_names=["x"]) - @_docstring.dedent_interpd - def psd(self, x, NFFT=None, Fs=None, Fc=None, detrend=None, - window=None, noverlap=None, pad_to=None, - sides=None, scale_by_freq=None, return_line=None, **kwargs): - r""" - Plot the power spectral density. - - The power spectral density :math:`P_{xx}` is computed by Welch's average - periodogram method. The vector *x* is divided into *NFFT* length - segments. Each segment is detrended by function *detrend* and - windowed by function *window*. *noverlap* gives the length of - the overlap between segments. The :math:`|\mathrm{fft}(i)|^2` - of each segment :math:`i` are averaged to compute :math:`P_{xx}`, - with a scaling to correct for power loss due to windowing. - - If len(*x*) < *NFFT*, it will be zero padded to *NFFT*. - - Parameters - ---------- - x : 1-D array or sequence - Array or sequence containing the data - - %(Spectral)s - - %(PSD)s - - noverlap : int, default: 0 (no overlap) - The number of points of overlap between segments. - - Fc : int, default: 0 - The center frequency of *x*, which offsets the x extents of the - plot to reflect the frequency range used when a signal is acquired - and then filtered and downsampled to baseband. - - return_line : bool, default: False - Whether to include the line object plotted in the returned values. - - Returns - ------- - Pxx : 1-D array - The values for the power spectrum :math:`P_{xx}` before scaling - (real valued). - - freqs : 1-D array - The frequencies corresponding to the elements in *Pxx*. - - line : `~matplotlib.lines.Line2D` - The line created by this function. - Only returned if *return_line* is True. - - Other Parameters - ---------------- - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - - **kwargs - Keyword arguments control the `.Line2D` properties: - - %(Line2D:kwdoc)s - - See Also - -------- - specgram - Differs in the default overlap; in not returning the mean of the - segment periodograms; in returning the times of the segments; and - in plotting a colormap instead of a line. - magnitude_spectrum - Plots the magnitude spectrum. - csd - Plots the spectral density between two signals.
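# A minimal usage sketch for the PSD plot described above (assumed test
# signal: a 50 Hz tone in noise; expect a peak near 50 Hz).
import matplotlib.pyplot as plt
import numpy as np

fs = 500.0
t = np.arange(0, 4, 1 / fs)
x = (np.sin(2 * np.pi * 50 * t)
     + 0.5 * np.random.default_rng(0).normal(size=t.size))
Pxx, freqs = plt.psd(x, NFFT=256, Fs=fs, noverlap=128)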
- - Notes - ----- - For plotting, the power is plotted as - :math:`10\log_{10}(P_{xx})` for decibels, though *Pxx* itself - is returned. - - References - ---------- - Bendat & Piersol -- Random Data: Analysis and Measurement Procedures, - John Wiley & Sons (1986) - """ - if Fc is None: - Fc = 0 - - pxx, freqs = mlab.psd(x=x, NFFT=NFFT, Fs=Fs, detrend=detrend, - window=window, noverlap=noverlap, pad_to=pad_to, - sides=sides, scale_by_freq=scale_by_freq) - freqs += Fc - - if scale_by_freq in (None, True): - psd_units = 'dB/Hz' - else: - psd_units = 'dB' - - line = self.plot(freqs, 10 * np.log10(pxx), **kwargs) - self.set_xlabel('Frequency') - self.set_ylabel('Power Spectral Density (%s)' % psd_units) - self.grid(True) - - vmin, vmax = self.get_ybound() - step = max(10 * int(np.log10(vmax - vmin)), 1) - ticks = np.arange(math.floor(vmin), math.ceil(vmax) + 1, step) - self.set_yticks(ticks) - - if return_line is None or not return_line: - return pxx, freqs - else: - return pxx, freqs, line - - @_preprocess_data(replace_names=["x", "y"], label_namer="y") - @_docstring.dedent_interpd - def csd(self, x, y, NFFT=None, Fs=None, Fc=None, detrend=None, - window=None, noverlap=None, pad_to=None, - sides=None, scale_by_freq=None, return_line=None, **kwargs): - r""" - Plot the cross-spectral density. - - The cross spectral density :math:`P_{xy}` is computed by Welch's average - periodogram method. The vectors *x* and *y* are divided into - *NFFT* length segments. Each segment is detrended by function - *detrend* and windowed by function *window*. *noverlap* gives - the length of the overlap between segments. The product of - the direct FFTs of *x* and *y* is averaged over each segment - to compute :math:`P_{xy}`, with a scaling to correct for power - loss due to windowing. - - If len(*x*) < *NFFT* or len(*y*) < *NFFT*, they will be zero - padded to *NFFT*. - - Parameters - ---------- - x, y : 1-D arrays or sequences - Arrays or sequences containing the data. - - %(Spectral)s - - %(PSD)s - - noverlap : int, default: 0 (no overlap) - The number of points of overlap between segments. - - Fc : int, default: 0 - The center frequency of *x*, which offsets the x extents of the - plot to reflect the frequency range used when a signal is acquired - and then filtered and downsampled to baseband. - - return_line : bool, default: False - Whether to include the line object plotted in the returned values. - - Returns - ------- - Pxy : 1-D array - The values for the cross spectrum :math:`P_{xy}` before scaling - (complex valued). - - freqs : 1-D array - The frequencies corresponding to the elements in *Pxy*. - - line : `~matplotlib.lines.Line2D` - The line created by this function. - Only returned if *return_line* is True. - - Other Parameters - ---------------- - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - - **kwargs - Keyword arguments control the `.Line2D` properties: - - %(Line2D:kwdoc)s - - See Also - -------- - psd : is equivalent to setting ``y = x``. - - Notes - ----- - For plotting, the power is plotted as - :math:`10 \log_{10}(P_{xy})` for decibels, though :math:`P_{xy}` itself - is returned.
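# A companion sketch for csd(): two noisy signals sharing a 50 Hz
# component produce a cross-spectrum peak there (illustrative data only).
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(1)
fs = 500.0
t = np.arange(0, 4, 1 / fs)
s = np.sin(2 * np.pi * 50 * t)
Pxy, freqs = plt.csd(s + 0.3 * rng.normal(size=t.size),
                     s + 0.3 * rng.normal(size=t.size), NFFT=256, Fs=fs)
# Pxy is complex; plt.csd plots 10*log10(|Pxy|).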
- - References - ---------- - Bendat & Piersol -- Random Data: Analysis and Measurement Procedures, - John Wiley & Sons (1986) - """ - if Fc is None: - Fc = 0 - - pxy, freqs = mlab.csd(x=x, y=y, NFFT=NFFT, Fs=Fs, detrend=detrend, - window=window, noverlap=noverlap, pad_to=pad_to, - sides=sides, scale_by_freq=scale_by_freq) - # pxy is complex - freqs += Fc - - line = self.plot(freqs, 10 * np.log10(np.abs(pxy)), **kwargs) - self.set_xlabel('Frequency') - self.set_ylabel('Cross Spectrum Magnitude (dB)') - self.grid(True) - - vmin, vmax = self.get_ybound() - step = max(10 * int(np.log10(vmax - vmin)), 1) - ticks = np.arange(math.floor(vmin), math.ceil(vmax) + 1, step) - self.set_yticks(ticks) - - if return_line is None or not return_line: - return pxy, freqs - else: - return pxy, freqs, line - - @_preprocess_data(replace_names=["x"]) - @_docstring.dedent_interpd - def magnitude_spectrum(self, x, Fs=None, Fc=None, window=None, - pad_to=None, sides=None, scale=None, - **kwargs): - """ - Plot the magnitude spectrum. - - Compute the magnitude spectrum of *x*. Data is padded to a - length of *pad_to* and the windowing function *window* is applied to - the signal. - - Parameters - ---------- - x : 1-D array or sequence - Array or sequence containing the data. - - %(Spectral)s - - %(Single_Spectrum)s - - scale : {'default', 'linear', 'dB'} - The scaling of the values in the *spec*. 'linear' is no scaling. - 'dB' returns the values in dB scale, i.e., the dB amplitude - (20 * log10). 'default' is 'linear'. - - Fc : int, default: 0 - The center frequency of *x*, which offsets the x extents of the - plot to reflect the frequency range used when a signal is acquired - and then filtered and downsampled to baseband. - - Returns - ------- - spectrum : 1-D array - The values for the magnitude spectrum before scaling (real valued). - - freqs : 1-D array - The frequencies corresponding to the elements in *spectrum*. - - line : `~matplotlib.lines.Line2D` - The line created by this function. - - Other Parameters - ---------------- - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - - **kwargs - Keyword arguments control the `.Line2D` properties: - - %(Line2D:kwdoc)s - - See Also - -------- - psd - Plots the power spectral density. - angle_spectrum - Plots the angles of the corresponding frequencies. - phase_spectrum - Plots the phase (unwrapped angle) of the corresponding frequencies. - specgram - Can plot the magnitude spectrum of segments within the signal in a - colormap. - """ - if Fc is None: - Fc = 0 - - spec, freqs = mlab.magnitude_spectrum(x=x, Fs=Fs, window=window, - pad_to=pad_to, sides=sides) - freqs += Fc - - yunits = _api.check_getitem( - {None: 'energy', 'default': 'energy', 'linear': 'energy', - 'dB': 'dB'}, - scale=scale) - if yunits == 'energy': - Z = spec - else: # yunits == 'dB' - Z = 20. * np.log10(spec) - - line, = self.plot(freqs, Z, **kwargs) - self.set_xlabel('Frequency') - self.set_ylabel('Magnitude (%s)' % yunits) - - return spec, freqs, line - - @_preprocess_data(replace_names=["x"]) - @_docstring.dedent_interpd - def angle_spectrum(self, x, Fs=None, Fc=None, window=None, - pad_to=None, sides=None, **kwargs): - """ - Plot the angle spectrum. - - Compute the angle spectrum (wrapped phase spectrum) of *x*. - Data is padded to a length of *pad_to* and the windowing function - *window* is applied to the signal. - - Parameters - ---------- - x : 1-D array or sequence - Array or sequence containing the data. 
- - %(Spectral)s - - %(Single_Spectrum)s - - Fc : int, default: 0 - The center frequency of *x*, which offsets the x extents of the - plot to reflect the frequency range used when a signal is acquired - and then filtered and downsampled to baseband. - - Returns - ------- - spectrum : 1-D array - The values for the angle spectrum in radians (real valued). - - freqs : 1-D array - The frequencies corresponding to the elements in *spectrum*. - - line : `~matplotlib.lines.Line2D` - The line created by this function. - - Other Parameters - ---------------- - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - - **kwargs - Keyword arguments control the `.Line2D` properties: - - %(Line2D:kwdoc)s - - See Also - -------- - magnitude_spectrum - Plots the magnitudes of the corresponding frequencies. - phase_spectrum - Plots the unwrapped version of this function. - specgram - Can plot the angle spectrum of segments within the signal in a - colormap. - """ - if Fc is None: - Fc = 0 - - spec, freqs = mlab.angle_spectrum(x=x, Fs=Fs, window=window, - pad_to=pad_to, sides=sides) - freqs += Fc - - lines = self.plot(freqs, spec, **kwargs) - self.set_xlabel('Frequency') - self.set_ylabel('Angle (radians)') - - return spec, freqs, lines[0] - - @_preprocess_data(replace_names=["x"]) - @_docstring.dedent_interpd - def phase_spectrum(self, x, Fs=None, Fc=None, window=None, - pad_to=None, sides=None, **kwargs): - """ - Plot the phase spectrum. - - Compute the phase spectrum (unwrapped angle spectrum) of *x*. - Data is padded to a length of *pad_to* and the windowing function - *window* is applied to the signal. - - Parameters - ---------- - x : 1-D array or sequence - Array or sequence containing the data - - %(Spectral)s - - %(Single_Spectrum)s - - Fc : int, default: 0 - The center frequency of *x*, which offsets the x extents of the - plot to reflect the frequency range used when a signal is acquired - and then filtered and downsampled to baseband. - - Returns - ------- - spectrum : 1-D array - The values for the phase spectrum in radians (real valued). - - freqs : 1-D array - The frequencies corresponding to the elements in *spectrum*. - - line : `~matplotlib.lines.Line2D` - The line created by this function. - - Other Parameters - ---------------- - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - - **kwargs - Keyword arguments control the `.Line2D` properties: - - %(Line2D:kwdoc)s - - See Also - -------- - magnitude_spectrum - Plots the magnitudes of the corresponding frequencies. - angle_spectrum - Plots the wrapped version of this function. - specgram - Can plot the phase spectrum of segments within the signal in a - colormap. - """ - if Fc is None: - Fc = 0 - - spec, freqs = mlab.phase_spectrum(x=x, Fs=Fs, window=window, - pad_to=pad_to, sides=sides) - freqs += Fc - - lines = self.plot(freqs, spec, **kwargs) - self.set_xlabel('Frequency') - self.set_ylabel('Phase (radians)') - - return spec, freqs, lines[0] - - @_preprocess_data(replace_names=["x", "y"]) - @_docstring.dedent_interpd - def cohere(self, x, y, NFFT=256, Fs=2, Fc=0, detrend=mlab.detrend_none, - window=mlab.window_hanning, noverlap=0, pad_to=None, - sides='default', scale_by_freq=None, **kwargs): - r""" - Plot the coherence between *x* and *y*. - - Coherence is the normalized cross spectral density: - - .. math:: - - C_{xy} = \frac{|P_{xy}|^2}{P_{xx}P_{yy}} - - Parameters - ---------- - %(Spectral)s - - %(PSD)s - - noverlap : int, default: 0 (no overlap) - The number of points of overlap between blocks. 
- - Fc : int, default: 0 - The center frequency of *x*, which offsets the x extents of the - plot to reflect the frequency range used when a signal is acquired - and then filtered and downsampled to baseband. - - Returns - ------- - Cxy : 1-D array - The coherence vector. - - freqs : 1-D array - The frequencies for the elements in *Cxy*. - - Other Parameters - ---------------- - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - - **kwargs - Keyword arguments control the `.Line2D` properties: - - %(Line2D:kwdoc)s - - References - ---------- - Bendat & Piersol -- Random Data: Analysis and Measurement Procedures, - John Wiley & Sons (1986) - """ - cxy, freqs = mlab.cohere(x=x, y=y, NFFT=NFFT, Fs=Fs, detrend=detrend, - window=window, noverlap=noverlap, - scale_by_freq=scale_by_freq, sides=sides, - pad_to=pad_to) - freqs += Fc - - self.plot(freqs, cxy, **kwargs) - self.set_xlabel('Frequency') - self.set_ylabel('Coherence') - self.grid(True) - - return cxy, freqs - - @_preprocess_data(replace_names=["x"]) - @_docstring.dedent_interpd - def specgram(self, x, NFFT=None, Fs=None, Fc=None, detrend=None, - window=None, noverlap=None, - cmap=None, xextent=None, pad_to=None, sides=None, - scale_by_freq=None, mode=None, scale=None, - vmin=None, vmax=None, **kwargs): - """ - Plot a spectrogram. - - Compute and plot a spectrogram of data in *x*. Data are split into - *NFFT* length segments and the spectrum of each section is - computed. The windowing function *window* is applied to each - segment, and the amount of overlap of each segment is - specified with *noverlap*. The spectrogram is plotted as a colormap - (using imshow). - - Parameters - ---------- - x : 1-D array or sequence - Array or sequence containing the data. - - %(Spectral)s - - %(PSD)s - - mode : {'default', 'psd', 'magnitude', 'angle', 'phase'} - What sort of spectrum to use. Default is 'psd', which takes the - power spectral density. 'magnitude' returns the magnitude - spectrum. 'angle' returns the phase spectrum without unwrapping. - 'phase' returns the phase spectrum with unwrapping. - - noverlap : int, default: 128 - The number of points of overlap between blocks. - - scale : {'default', 'linear', 'dB'} - The scaling of the values in the *spec*. 'linear' is no scaling. - 'dB' returns the values in dB scale. When *mode* is 'psd', - this is dB power (10 * log10). Otherwise, this is dB amplitude - (20 * log10). 'default' is 'dB' if *mode* is 'psd' or - 'magnitude' and 'linear' otherwise. This must be 'linear' - if *mode* is 'angle' or 'phase'. - - Fc : int, default: 0 - The center frequency of *x*, which offsets the x extents of the - plot to reflect the frequency range used when a signal is acquired - and then filtered and downsampled to baseband. - - cmap : `.Colormap`, default: :rc:`image.cmap` - - xextent : *None* or (xmin, xmax) - The image extent along the x-axis. The default sets *xmin* to the - left border of the first bin (*spectrum* column) and *xmax* to the - right border of the last bin. Note that for *noverlap>0* the width - of the bins is smaller than those of the segments. - - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - - **kwargs - Additional keyword arguments are passed on to `~.axes.Axes.imshow` - which makes the specgram image. The origin keyword argument - is not supported. - - Returns - ------- - spectrum : 2D array - Columns are the periodograms of successive segments. - - freqs : 1-D array - The frequencies corresponding to the rows in *spectrum*. 
- - t : 1-D array - The times corresponding to midpoints of segments (i.e., the columns - in *spectrum*). - - im : `.AxesImage` - The image created by imshow containing the spectrogram. - - See Also - -------- - psd - Differs in the default overlap; in returning the mean of the - segment periodograms; in not returning times; and in generating a - line plot instead of colormap. - magnitude_spectrum - A single spectrum, similar to having a single segment when *mode* - is 'magnitude'. Plots a line instead of a colormap. - angle_spectrum - A single spectrum, similar to having a single segment when *mode* - is 'angle'. Plots a line instead of a colormap. - phase_spectrum - A single spectrum, similar to having a single segment when *mode* - is 'phase'. Plots a line instead of a colormap. - - Notes - ----- - The parameters *detrend* and *scale_by_freq* do only apply when *mode* - is set to 'psd'. - """ - if NFFT is None: - NFFT = 256 # same default as in mlab.specgram() - if Fc is None: - Fc = 0 # same default as in mlab._spectral_helper() - if noverlap is None: - noverlap = 128 # same default as in mlab.specgram() - if Fs is None: - Fs = 2 # same default as in mlab._spectral_helper() - - if mode == 'complex': - raise ValueError('Cannot plot a complex specgram') - - if scale is None or scale == 'default': - if mode in ['angle', 'phase']: - scale = 'linear' - else: - scale = 'dB' - elif mode in ['angle', 'phase'] and scale == 'dB': - raise ValueError('Cannot use dB scale with angle or phase mode') - - spec, freqs, t = mlab.specgram(x=x, NFFT=NFFT, Fs=Fs, - detrend=detrend, window=window, - noverlap=noverlap, pad_to=pad_to, - sides=sides, - scale_by_freq=scale_by_freq, - mode=mode) - - if scale == 'linear': - Z = spec - elif scale == 'dB': - if mode is None or mode == 'default' or mode == 'psd': - Z = 10. * np.log10(spec) - else: - Z = 20. * np.log10(spec) - else: - raise ValueError(f'Unknown scale {scale!r}') - - Z = np.flipud(Z) - - if xextent is None: - # padding is needed for first and last segment: - pad_xextent = (NFFT-noverlap) / Fs / 2 - xextent = np.min(t) - pad_xextent, np.max(t) + pad_xextent - xmin, xmax = xextent - freqs += Fc - extent = xmin, xmax, freqs[0], freqs[-1] - - if 'origin' in kwargs: - raise _api.kwarg_error("specgram", "origin") - - im = self.imshow(Z, cmap, extent=extent, vmin=vmin, vmax=vmax, - origin='upper', **kwargs) - self.axis('auto') - - return spec, freqs, t, im - - @_docstring.dedent_interpd - def spy(self, Z, precision=0, marker=None, markersize=None, - aspect='equal', origin="upper", **kwargs): - """ - Plot the sparsity pattern of a 2D array. - - This visualizes the non-zero values of the array. - - Two plotting styles are available: image and marker. Both - are available for full arrays, but only the marker style - works for `scipy.sparse.spmatrix` instances. - - **Image style** - - If *marker* and *markersize* are *None*, `~.Axes.imshow` is used. Any - extra remaining keyword arguments are passed to this method. - - **Marker style** - - If *Z* is a `scipy.sparse.spmatrix` or *marker* or *markersize* are - *None*, a `.Line2D` object will be returned with the value of marker - determining the marker type, and any remaining keyword arguments - passed to `~.Axes.plot`. - - Parameters - ---------- - Z : (M, N) array-like - The array to be plotted. - - precision : float or 'present', default: 0 - If *precision* is 0, any non-zero value will be plotted. Otherwise, - values of :math:`|Z| > precision` will be plotted. 
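Note the asymmetry in the dB conversion implemented above: 'psd' mode uses 10 * log10 (power) while amplitude-like modes use 20 * log10. A quick sketch contrasting the two on an illustrative chirp:

```python
import numpy as np
import matplotlib.pyplot as plt

fs = 1024
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * (50 + 100 * t) * t)   # chirp sweeping upward in frequency

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3))
ax1.specgram(x, NFFT=256, Fs=fs, noverlap=128, mode='psd', scale='dB')        # 10 * log10
ax2.specgram(x, NFFT=256, Fs=fs, noverlap=128, mode='magnitude', scale='dB')  # 20 * log10
plt.show()
```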
- - For `scipy.sparse.spmatrix` instances, you can also - pass 'present'. In this case any value present in the array - will be plotted, even if it is identically zero. - - aspect : {'equal', 'auto', None} or float, default: 'equal' - The aspect ratio of the Axes. This parameter is particularly - relevant for images since it determines whether data pixels are - square. - - This parameter is a shortcut for explicitly calling - `.Axes.set_aspect`. See there for further details. - - - 'equal': Ensures an aspect ratio of 1. Pixels will be square. - - 'auto': The Axes is kept fixed and the aspect is adjusted so - that the data fit in the Axes. In general, this will result in - non-square pixels. - - *None*: Use :rc:`image.aspect`. - - origin : {'upper', 'lower'}, default: :rc:`image.origin` - Place the [0, 0] index of the array in the upper left or lower left - corner of the Axes. The convention 'upper' is typically used for - matrices and images. - - Returns - ------- - `~matplotlib.image.AxesImage` or `.Line2D` - The return type depends on the plotting style (see above). - - Other Parameters - ---------------- - **kwargs - The supported additional parameters depend on the plotting style. - - For the image style, you can pass the following additional - parameters of `~.Axes.imshow`: - - - *cmap* - - *alpha* - - *url* - - any `.Artist` properties (passed on to the `.AxesImage`) - - For the marker style, you can pass any `.Line2D` property except - for *linestyle*: - - %(Line2D:kwdoc)s - """ - if marker is None and markersize is None and hasattr(Z, 'tocoo'): - marker = 's' - _api.check_in_list(["upper", "lower"], origin=origin) - if marker is None and markersize is None: - Z = np.asarray(Z) - mask = np.abs(Z) > precision - - if 'cmap' not in kwargs: - kwargs['cmap'] = mcolors.ListedColormap(['w', 'k'], - name='binary') - if 'interpolation' in kwargs: - raise _api.kwarg_error("spy", "interpolation") - if 'norm' not in kwargs: - kwargs['norm'] = mcolors.NoNorm() - ret = self.imshow(mask, interpolation='nearest', - aspect=aspect, origin=origin, - **kwargs) - else: - if hasattr(Z, 'tocoo'): - c = Z.tocoo() - if precision == 'present': - y = c.row - x = c.col - else: - nonzero = np.abs(c.data) > precision - y = c.row[nonzero] - x = c.col[nonzero] - else: - Z = np.asarray(Z) - nonzero = np.abs(Z) > precision - y, x = np.nonzero(nonzero) - if marker is None: - marker = 's' - if markersize is None: - markersize = 10 - if 'linestyle' in kwargs: - raise _api.kwarg_error("spy", "linestyle") - ret = mlines.Line2D( - x, y, linestyle='None', marker=marker, markersize=markersize, - **kwargs) - self.add_line(ret) - nr, nc = Z.shape - self.set_xlim(-0.5, nc - 0.5) - if origin == "upper": - self.set_ylim(nr - 0.5, -0.5) - else: - self.set_ylim(-0.5, nr - 0.5) - self.set_aspect(aspect) - self.title.set_y(1.05) - if origin == "upper": - self.xaxis.tick_top() - else: # lower - self.xaxis.tick_bottom() - self.xaxis.set_ticks_position('both') - self.xaxis.set_major_locator( - mticker.MaxNLocator(nbins=9, steps=[1, 2, 5, 10], integer=True)) - self.yaxis.set_major_locator( - mticker.MaxNLocator(nbins=9, steps=[1, 2, 5, 10], integer=True)) - return ret - - def matshow(self, Z, **kwargs): - """ - Plot the values of a 2D matrix or array as color-coded image. - - The matrix will be shown the way it would be printed, with the first - row at the top. Row and column numbering is zero-based. - - Parameters - ---------- - Z : (M, N) array-like - The matrix to be displayed. 
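A minimal sketch of the two `spy` styles described above, image versus marker, on a mostly-zero array (the data is illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
Z = rng.random((20, 20))
Z[Z < 0.9] = 0                               # keep roughly 10% of entries non-zero

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.spy(Z)                                   # image style: imshow of the non-zero mask
ax2.spy(Z, marker='o', markersize=4)         # marker style: a Line2D at (col, row)
plt.show()
```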
- - Returns - ------- - `~matplotlib.image.AxesImage` - - Other Parameters - ---------------- - **kwargs : `~matplotlib.axes.Axes.imshow` arguments - - See Also - -------- - imshow : More general function to plot data on a 2D regular raster. - - Notes - ----- - This is just a convenience function wrapping `.imshow` to set useful - defaults for displaying a matrix. In particular: - - - Set ``origin='upper'``. - - Set ``interpolation='nearest'``. - - Set ``aspect='equal'``. - - Ticks are placed to the left and above. - - Ticks are formatted to show integer indices. - - """ - Z = np.asanyarray(Z) - kw = {'origin': 'upper', - 'interpolation': 'nearest', - 'aspect': 'equal', # (already the imshow default) - **kwargs} - im = self.imshow(Z, **kw) - self.title.set_y(1.05) - self.xaxis.tick_top() - self.xaxis.set_ticks_position('both') - self.xaxis.set_major_locator( - mticker.MaxNLocator(nbins=9, steps=[1, 2, 5, 10], integer=True)) - self.yaxis.set_major_locator( - mticker.MaxNLocator(nbins=9, steps=[1, 2, 5, 10], integer=True)) - return im - - @_preprocess_data(replace_names=["dataset"]) - def violinplot(self, dataset, positions=None, vert=True, widths=0.5, - showmeans=False, showextrema=True, showmedians=False, - quantiles=None, points=100, bw_method=None): - """ - Make a violin plot. - - Make a violin plot for each column of *dataset* or each vector in - sequence *dataset*. Each filled area extends to represent the - entire data range, with optional lines at the mean, the median, - the minimum, the maximum, and user-specified quantiles. - - Parameters - ---------- - dataset : Array or a sequence of vectors. - The input data. - - positions : array-like, default: [1, 2, ..., n] - The positions of the violins. The ticks and limits are - automatically set to match the positions. - - vert : bool, default: True. - If true, creates a vertical violin plot. - Otherwise, creates a horizontal violin plot. - - widths : array-like, default: 0.5 - Either a scalar or a vector that sets the maximal width of - each violin. The default is 0.5, which uses about half of the - available horizontal space. - - showmeans : bool, default: False - If `True`, will toggle rendering of the means. - - showextrema : bool, default: True - If `True`, will toggle rendering of the extrema. - - showmedians : bool, default: False - If `True`, will toggle rendering of the medians. - - quantiles : array-like, default: None - If not None, set a list of floats in interval [0, 1] for each violin, - which stands for the quantiles that will be rendered for that - violin. - - points : int, default: 100 - Defines the number of points to evaluate each of the - gaussian kernel density estimations at. - - bw_method : str, scalar or callable, optional - The method used to calculate the estimator bandwidth. This can be - 'scott', 'silverman', a scalar constant or a callable. If a - scalar, this will be used directly as `kde.factor`. If a - callable, it should take a `matplotlib.mlab.GaussianKDE` instance as - its only parameter and return a scalar. If None (default), 'scott' - is used. - - data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - - Returns - ------- - dict - A dictionary mapping each component of the violinplot to a - list of the corresponding collection instances created. The - dictionary has the following keys: - - - ``bodies``: A list of the `~.collections.PolyCollection` - instances containing the filled area of each violin. 
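A one-line exercise of the `matshow` defaults listed above (first row at the top, nearest interpolation, square pixels, integer ticks placed above):

```python
import numpy as np
import matplotlib.pyplot as plt

A = np.arange(12).reshape(3, 4)
fig, ax = plt.subplots()
ax.matshow(A)                                # shown the way the matrix prints
plt.show()
```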
- - - ``cmeans``: A `~.collections.LineCollection` instance that marks - the mean values of each of the violin's distribution. - - - ``cmins``: A `~.collections.LineCollection` instance that marks - the bottom of each violin's distribution. - - - ``cmaxes``: A `~.collections.LineCollection` instance that marks - the top of each violin's distribution. - - - ``cbars``: A `~.collections.LineCollection` instance that marks - the centers of each violin's distribution. - - - ``cmedians``: A `~.collections.LineCollection` instance that - marks the median values of each of the violin's distribution. - - - ``cquantiles``: A `~.collections.LineCollection` instance created - to identify the quantile values of each of the violin's - distribution. - - """ - - def _kde_method(X, coords): - # Unpack in case of e.g. Pandas or xarray object - X = cbook._unpack_to_numpy(X) - # fallback gracefully if the vector contains only one value - if np.all(X[0] == X): - return (X[0] == coords).astype(float) - kde = mlab.GaussianKDE(X, bw_method) - return kde.evaluate(coords) - - vpstats = cbook.violin_stats(dataset, _kde_method, points=points, - quantiles=quantiles) - return self.violin(vpstats, positions=positions, vert=vert, - widths=widths, showmeans=showmeans, - showextrema=showextrema, showmedians=showmedians) - - def violin(self, vpstats, positions=None, vert=True, widths=0.5, - showmeans=False, showextrema=True, showmedians=False): - """ - Drawing function for violin plots. - - Draw a violin plot for each column of *vpstats*. Each filled area - extends to represent the entire data range, with optional lines at the - mean, the median, the minimum, the maximum, and the quantiles values. - - Parameters - ---------- - vpstats : list of dicts - A list of dictionaries containing stats for each violin plot. - Required keys are: - - - ``coords``: A list of scalars containing the coordinates that - the violin's kernel density estimate were evaluated at. - - - ``vals``: A list of scalars containing the values of the - kernel density estimate at each of the coordinates given - in *coords*. - - - ``mean``: The mean value for this violin's dataset. - - - ``median``: The median value for this violin's dataset. - - - ``min``: The minimum value for this violin's dataset. - - - ``max``: The maximum value for this violin's dataset. - - Optional keys are: - - - ``quantiles``: A list of scalars containing the quantile values - for this violin's dataset. - - positions : array-like, default: [1, 2, ..., n] - The positions of the violins. The ticks and limits are - automatically set to match the positions. - - vert : bool, default: True. - If true, plots the violins vertically. - Otherwise, plots the violins horizontally. - - widths : array-like, default: 0.5 - Either a scalar or a vector that sets the maximal width of - each violin. The default is 0.5, which uses about half of the - available horizontal space. - - showmeans : bool, default: False - If true, will toggle rendering of the means. - - showextrema : bool, default: True - If true, will toggle rendering of the extrema. - - showmedians : bool, default: False - If true, will toggle rendering of the medians. - - Returns - ------- - dict - A dictionary mapping each component of the violinplot to a - list of the corresponding collection instances created. The - dictionary has the following keys: - - - ``bodies``: A list of the `~.collections.PolyCollection` - instances containing the filled area of each violin. 
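The returned dictionary makes post-styling straightforward; a small `violinplot` sketch (sample data and colors are illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
data = [rng.normal(loc, 1.0, size=200) for loc in (0, 1, 2)]

fig, ax = plt.subplots()
parts = ax.violinplot(data, showmeans=True, showmedians=True,
                      quantiles=[[0.1, 0.9]] * 3)
for body in parts['bodies']:                 # one PolyCollection per violin
    body.set_facecolor('tab:blue')
    body.set_alpha(0.4)
plt.show()
```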
- - - ``cmeans``: A `~.collections.LineCollection` instance that marks - the mean values of each of the violin's distribution. - - - ``cmins``: A `~.collections.LineCollection` instance that marks - the bottom of each violin's distribution. - - - ``cmaxes``: A `~.collections.LineCollection` instance that marks - the top of each violin's distribution. - - - ``cbars``: A `~.collections.LineCollection` instance that marks - the centers of each violin's distribution. - - - ``cmedians``: A `~.collections.LineCollection` instance that - marks the median values of each of the violin's distribution. - - - ``cquantiles``: A `~.collections.LineCollection` instance created - to identify the quantiles values of each of the violin's - distribution. - """ - - # Statistical quantities to be plotted on the violins - means = [] - mins = [] - maxes = [] - medians = [] - quantiles = [] - - qlens = [] # Number of quantiles in each dataset. - - artists = {} # Collections to be returned - - N = len(vpstats) - datashape_message = ("List of violinplot statistics and `{0}` " - "values must have the same length") - - # Validate positions - if positions is None: - positions = range(1, N + 1) - elif len(positions) != N: - raise ValueError(datashape_message.format("positions")) - - # Validate widths - if np.isscalar(widths): - widths = [widths] * N - elif len(widths) != N: - raise ValueError(datashape_message.format("widths")) - - # Calculate ranges for statistics lines (shape (2, N)). - line_ends = [[-0.25], [0.25]] * np.array(widths) + positions - - # Colors. - if mpl.rcParams['_internal.classic_mode']: - fillcolor = 'y' - linecolor = 'r' - else: - fillcolor = linecolor = self._get_lines.get_next_color() - - # Check whether we are rendering vertically or horizontally - if vert: - fill = self.fill_betweenx - perp_lines = functools.partial(self.hlines, colors=linecolor) - par_lines = functools.partial(self.vlines, colors=linecolor) - else: - fill = self.fill_between - perp_lines = functools.partial(self.vlines, colors=linecolor) - par_lines = functools.partial(self.hlines, colors=linecolor) - - # Render violins - bodies = [] - for stats, pos, width in zip(vpstats, positions, widths): - # The 0.5 factor reflects the fact that we plot from v-p to v+p. - vals = np.array(stats['vals']) - vals = 0.5 * width * vals / vals.max() - bodies += [fill(stats['coords'], -vals + pos, vals + pos, - facecolor=fillcolor, alpha=0.3)] - means.append(stats['mean']) - mins.append(stats['min']) - maxes.append(stats['max']) - medians.append(stats['median']) - q = stats.get('quantiles') # a list of floats, or None - if q is None: - q = [] - quantiles.extend(q) - qlens.append(len(q)) - artists['bodies'] = bodies - - if showmeans: # Render means - artists['cmeans'] = perp_lines(means, *line_ends) - if showextrema: # Render extrema - artists['cmaxes'] = perp_lines(maxes, *line_ends) - artists['cmins'] = perp_lines(mins, *line_ends) - artists['cbars'] = par_lines(positions, mins, maxes) - if showmedians: # Render medians - artists['cmedians'] = perp_lines(medians, *line_ends) - if quantiles: # Render quantiles: each width is repeated qlen times. - artists['cquantiles'] = perp_lines( - quantiles, *np.repeat(line_ends, qlens, axis=1)) - - return artists - - # Methods that are entirely implemented in other modules. - - table = mtable.table - - # args can be either Y or y1, y2, ... 
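The `line_ends` expression inside `violin` above packs both ends of every statistics line into one (2, N) array by broadcasting; a standalone check with illustrative numbers:

```python
import numpy as np

widths = np.array([0.5, 0.8, 0.5])           # per-violin maximal widths
positions = np.array([1.0, 2.0, 3.0])        # per-violin centers

# (2, 1) * (3,) broadcasts to (2, 3): row 0 holds the left ends, row 1 the
# right ends, so each statistics line spans pos +/- 0.25 * width.
line_ends = [[-0.25], [0.25]] * widths + positions
print(line_ends)                             # [[0.875 1.8 2.875], [1.125 2.2 3.125]]
```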
and all should be replaced - stackplot = _preprocess_data()(mstack.stackplot) - - streamplot = _preprocess_data( - replace_names=["x", "y", "u", "v", "start_points"])(mstream.streamplot) - - tricontour = mtri.tricontour - tricontourf = mtri.tricontourf - tripcolor = mtri.tripcolor - triplot = mtri.triplot - - def _get_aspect_ratio(self): - """ - Convenience method to calculate the aspect ratio of the axes in - the display coordinate system. - """ - figure_size = self.get_figure().get_size_inches() - ll, ur = self.get_position() * figure_size - width, height = ur - ll - return height / (width * self.get_data_ratio()) diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/paint_by_example/image_encoder.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/paint_by_example/image_encoder.py deleted file mode 100644 index 831489eefed167264c8fd8f57e1ed59610ebb858..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/paint_by_example/image_encoder.py +++ /dev/null @@ -1,67 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -import torch -from torch import nn -from transformers import CLIPPreTrainedModel, CLIPVisionModel - -from ...models.attention import BasicTransformerBlock -from ...utils import logging - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -class PaintByExampleImageEncoder(CLIPPreTrainedModel): - def __init__(self, config, proj_size=768): - super().__init__(config) - self.proj_size = proj_size - - self.model = CLIPVisionModel(config) - self.mapper = PaintByExampleMapper(config) - self.final_layer_norm = nn.LayerNorm(config.hidden_size) - self.proj_out = nn.Linear(config.hidden_size, self.proj_size) - - # uncondition for scaling - self.uncond_vector = nn.Parameter(torch.randn((1, 1, self.proj_size))) - - def forward(self, pixel_values, return_uncond_vector=False): - clip_output = self.model(pixel_values=pixel_values) - latent_states = clip_output.pooler_output - latent_states = self.mapper(latent_states[:, None]) - latent_states = self.final_layer_norm(latent_states) - latent_states = self.proj_out(latent_states) - if return_uncond_vector: - return latent_states, self.uncond_vector - - return latent_states - - -class PaintByExampleMapper(nn.Module): - def __init__(self, config): - super().__init__() - num_layers = (config.num_hidden_layers + 1) // 5 - hid_size = config.hidden_size - num_heads = 1 - self.blocks = nn.ModuleList( - [ - BasicTransformerBlock(hid_size, num_heads, hid_size, activation_fn="gelu", attention_bias=True) - for _ in range(num_layers) - ] - ) - - def forward(self, hidden_states): - for block in self.blocks: - hidden_states = block(hidden_states) - - return hidden_states diff --git a/spaces/deelerb/3dselfie/PIFu/lib/model/ConvPIFuNet.py b/spaces/deelerb/3dselfie/PIFu/lib/model/ConvPIFuNet.py deleted file mode 100644 index 
1d43d262aa237d03db0cf329b4d199061ee6a006..0000000000000000000000000000000000000000
--- a/spaces/deelerb/3dselfie/PIFu/lib/model/ConvPIFuNet.py
+++ /dev/null
@@ -1,99 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from .BasePIFuNet import BasePIFuNet
-from .SurfaceClassifier import SurfaceClassifier
-from .DepthNormalizer import DepthNormalizer
-from .ConvFilters import *
-from ..net_util import init_net
-
-class ConvPIFuNet(BasePIFuNet):
-    '''
-    Conv Piximp network is the standard 3-phase network that we will use.
-    The image filter is a pure multi-layer convolutional network,
-    while during the feature extraction phase all features in the pyramid
-    at the projected location will be aggregated.
-    It does the following:
-        1. Compute image feature pyramids and store them in self.im_feat_list
-        2. Calculate calibration, index into each feature map, and append the results together
-        3. Classification.
-    '''
-
-    def __init__(self,
-                 opt,
-                 projection_mode='orthogonal',
-                 error_term=nn.MSELoss(),
-                 ):
-        super(ConvPIFuNet, self).__init__(
-            projection_mode=projection_mode,
-            error_term=error_term)
-
-        self.name = 'convpifu'
-
-        self.opt = opt
-        self.num_views = self.opt.num_views
-
-        self.image_filter = self.define_imagefilter(opt)
-
-        self.surface_classifier = SurfaceClassifier(
-            filter_channels=self.opt.mlp_dim,
-            num_views=self.opt.num_views,
-            no_residual=self.opt.no_residual,
-            last_op=nn.Sigmoid())
-
-        self.normalizer = DepthNormalizer(opt)
-
-        # This is a list of [B x Feat_i x H x W] features
-        self.im_feat_list = []
-
-        init_net(self)
-
-    def define_imagefilter(self, opt):
-        net = None
-        if opt.netIMF == 'multiconv':
-            net = MultiConv(opt.enc_dim)
-        elif 'resnet' in opt.netIMF:
-            net = ResNet(model=opt.netIMF)
-        elif opt.netIMF == 'vgg16':
-            net = Vgg16()
-        else:
-            raise NotImplementedError('model name [%s] is not recognized' % opt.netIMF)
-
-        return net
-
-    def filter(self, images):
-        '''
-        Filter the input images and store all intermediate features.
-        :param images: [B, C, H, W] input images
-        '''
-        self.im_feat_list = self.image_filter(images)
-
-    def query(self, points, calibs, transforms=None, labels=None):
-        '''
-        Given 3D points, query the network predictions for each point.
-        Image features should be pre-computed before this call;
-        all intermediate features are stored.
-        query() may behave differently during training and testing.
- :param points: [B, 3, N] world space coordinates of points - :param calibs: [B, 3, 4] calibration matrices for each image - :param transforms: Optional [B, 2, 3] image space coordinate transforms - :param labels: Optional [B, Res, N] gt labeling - :return: [B, Res, N] predictions for each point - ''' - if labels is not None: - self.labels = labels - - xyz = self.projection(points, calibs, transforms) - xy = xyz[:, :2, :] - z = xyz[:, 2:3, :] - - z_feat = self.normalizer(z) - - # This is a list of [B, Feat_i, N] features - point_local_feat_list = [self.index(im_feat, xy) for im_feat in self.im_feat_list] - point_local_feat_list.append(z_feat) - # [B, Feat_all, N] - point_local_feat = torch.cat(point_local_feat_list, 1) - - self.preds = self.surface_classifier(point_local_feat) diff --git a/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/latent_diffusion/openaimodel.py b/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/latent_diffusion/openaimodel.py deleted file mode 100644 index 831d7aafb36bba16888e4389153979a6c13639f5..0000000000000000000000000000000000000000 --- a/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/latent_diffusion/openaimodel.py +++ /dev/null @@ -1,1069 +0,0 @@ -from abc import abstractmethod -import math - -import numpy as np -import torch as th -import torch.nn as nn -import torch.nn.functional as F - -from audioldm.latent_diffusion.util import ( - checkpoint, - conv_nd, - linear, - avg_pool_nd, - zero_module, - normalization, - timestep_embedding, -) -from audioldm.latent_diffusion.attention import SpatialTransformer - - -# dummy replace -def convert_module_to_f16(x): - pass - - -def convert_module_to_f32(x): - pass - - -## go -class AttentionPool2d(nn.Module): - """ - Adapted from CLIP: https://github.com/openai/CLIP/blob/main/clip/model.py - """ - - def __init__( - self, - spacial_dim: int, - embed_dim: int, - num_heads_channels: int, - output_dim: int = None, - ): - super().__init__() - self.positional_embedding = nn.Parameter( - th.randn(embed_dim, spacial_dim**2 + 1) / embed_dim**0.5 - ) - self.qkv_proj = conv_nd(1, embed_dim, 3 * embed_dim, 1) - self.c_proj = conv_nd(1, embed_dim, output_dim or embed_dim, 1) - self.num_heads = embed_dim // num_heads_channels - self.attention = QKVAttention(self.num_heads) - - def forward(self, x): - b, c, *_spatial = x.shape - x = x.reshape(b, c, -1).contiguous() # NC(HW) - x = th.cat([x.mean(dim=-1, keepdim=True), x], dim=-1) # NC(HW+1) - x = x + self.positional_embedding[None, :, :].to(x.dtype) # NC(HW+1) - x = self.qkv_proj(x) - x = self.attention(x) - x = self.c_proj(x) - return x[:, :, 0] - - -class TimestepBlock(nn.Module): - """ - Any module where forward() takes timestep embeddings as a second argument. - """ - - @abstractmethod - def forward(self, x, emb): - """ - Apply the module to `x` given `emb` timestep embeddings. - """ - - -class TimestepEmbedSequential(nn.Sequential, TimestepBlock): - """ - A sequential module that passes timestep embeddings to the children that - support it as an extra input. - """ - - def forward(self, x, emb, context=None): - for layer in self: - if isinstance(layer, TimestepBlock): - x = layer(x, emb) - elif isinstance(layer, SpatialTransformer): - x = layer(x, context) - else: - x = layer(x) - return x - - -class Upsample(nn.Module): - """ - An upsampling layer with an optional convolution. - :param channels: channels in the inputs and outputs. - :param use_conv: a bool determining if a convolution is applied. 
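For context on `query` above: `self.index(im_feat, xy)` (inherited from `BasePIFuNet`, which is not shown in this diff) samples each feature map at the projected xy locations. A hedged sketch of how such per-point indexing is typically implemented with `grid_sample`; the helper body and the [-1, 1] normalization convention are assumptions, not code from the deleted file:

```python
import torch
import torch.nn.functional as F

def index(feat, xy):
    # feat: [B, C, H, W] feature map; xy: [B, 2, N] points in [-1, 1] coords
    grid = xy.transpose(1, 2).unsqueeze(2)                    # [B, N, 1, 2]
    samples = F.grid_sample(feat, grid, align_corners=True)   # [B, C, N, 1]
    return samples[:, :, :, 0]                                # [B, C, N]

feat = torch.randn(2, 64, 128, 128)
xy = torch.rand(2, 2, 500) * 2 - 1
print(index(feat, xy).shape)                                  # torch.Size([2, 64, 500])
```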
- :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then - upsampling occurs in the inner-two dimensions. - """ - - def __init__(self, channels, use_conv, dims=2, out_channels=None, padding=1): - super().__init__() - self.channels = channels - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.dims = dims - if use_conv: - self.conv = conv_nd( - dims, self.channels, self.out_channels, 3, padding=padding - ) - - def forward(self, x): - assert x.shape[1] == self.channels - if self.dims == 3: - x = F.interpolate( - x, (x.shape[2], x.shape[3] * 2, x.shape[4] * 2), mode="nearest" - ) - else: - x = F.interpolate(x, scale_factor=2, mode="nearest") - if self.use_conv: - x = self.conv(x) - return x - - -class TransposedUpsample(nn.Module): - "Learned 2x upsampling without padding" - - def __init__(self, channels, out_channels=None, ks=5): - super().__init__() - self.channels = channels - self.out_channels = out_channels or channels - - self.up = nn.ConvTranspose2d( - self.channels, self.out_channels, kernel_size=ks, stride=2 - ) - - def forward(self, x): - return self.up(x) - - -class Downsample(nn.Module): - """ - A downsampling layer with an optional convolution. - :param channels: channels in the inputs and outputs. - :param use_conv: a bool determining if a convolution is applied. - :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then - downsampling occurs in the inner-two dimensions. - """ - - def __init__(self, channels, use_conv, dims=2, out_channels=None, padding=1): - super().__init__() - self.channels = channels - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.dims = dims - stride = 2 if dims != 3 else (1, 2, 2) - if use_conv: - self.op = conv_nd( - dims, - self.channels, - self.out_channels, - 3, - stride=stride, - padding=padding, - ) - else: - assert self.channels == self.out_channels - self.op = avg_pool_nd(dims, kernel_size=stride, stride=stride) - - def forward(self, x): - assert x.shape[1] == self.channels - return self.op(x) - - -class ResBlock(TimestepBlock): - """ - A residual block that can optionally change the number of channels. - :param channels: the number of input channels. - :param emb_channels: the number of timestep embedding channels. - :param dropout: the rate of dropout. - :param out_channels: if specified, the number of out channels. - :param use_conv: if True and out_channels is specified, use a spatial - convolution instead of a smaller 1x1 convolution to change the - channels in the skip connection. - :param dims: determines if the signal is 1D, 2D, or 3D. - :param use_checkpoint: if True, use gradient checkpointing on this module. - :param up: if True, use this block for upsampling. - :param down: if True, use this block for downsampling. 
- """ - - def __init__( - self, - channels, - emb_channels, - dropout, - out_channels=None, - use_conv=False, - use_scale_shift_norm=False, - dims=2, - use_checkpoint=False, - up=False, - down=False, - ): - super().__init__() - self.channels = channels - self.emb_channels = emb_channels - self.dropout = dropout - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.use_checkpoint = use_checkpoint - self.use_scale_shift_norm = use_scale_shift_norm - - self.in_layers = nn.Sequential( - normalization(channels), - nn.SiLU(), - conv_nd(dims, channels, self.out_channels, 3, padding=1), - ) - - self.updown = up or down - - if up: - self.h_upd = Upsample(channels, False, dims) - self.x_upd = Upsample(channels, False, dims) - elif down: - self.h_upd = Downsample(channels, False, dims) - self.x_upd = Downsample(channels, False, dims) - else: - self.h_upd = self.x_upd = nn.Identity() - - self.emb_layers = nn.Sequential( - nn.SiLU(), - linear( - emb_channels, - 2 * self.out_channels if use_scale_shift_norm else self.out_channels, - ), - ) - self.out_layers = nn.Sequential( - normalization(self.out_channels), - nn.SiLU(), - nn.Dropout(p=dropout), - zero_module( - conv_nd(dims, self.out_channels, self.out_channels, 3, padding=1) - ), - ) - - if self.out_channels == channels: - self.skip_connection = nn.Identity() - elif use_conv: - self.skip_connection = conv_nd( - dims, channels, self.out_channels, 3, padding=1 - ) - else: - self.skip_connection = conv_nd(dims, channels, self.out_channels, 1) - - def forward(self, x, emb): - """ - Apply the block to a Tensor, conditioned on a timestep embedding. - :param x: an [N x C x ...] Tensor of features. - :param emb: an [N x emb_channels] Tensor of timestep embeddings. - :return: an [N x C x ...] Tensor of outputs. - """ - return checkpoint( - self._forward, (x, emb), self.parameters(), self.use_checkpoint - ) - - def _forward(self, x, emb): - if self.updown: - in_rest, in_conv = self.in_layers[:-1], self.in_layers[-1] - h = in_rest(x) - h = self.h_upd(h) - x = self.x_upd(x) - h = in_conv(h) - else: - h = self.in_layers(x) - emb_out = self.emb_layers(emb).type(h.dtype) - while len(emb_out.shape) < len(h.shape): - emb_out = emb_out[..., None] - if self.use_scale_shift_norm: - out_norm, out_rest = self.out_layers[0], self.out_layers[1:] - scale, shift = th.chunk(emb_out, 2, dim=1) - h = out_norm(h) * (1 + scale) + shift - h = out_rest(h) - else: - h = h + emb_out - h = self.out_layers(h) - return self.skip_connection(x) + h - - -class AttentionBlock(nn.Module): - """ - An attention block that allows spatial positions to attend to each other. - Originally ported from here, but adapted to the N-d case. - https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/models/unet.py#L66. 
-    """
-
-    def __init__(
-        self,
-        channels,
-        num_heads=1,
-        num_head_channels=-1,
-        use_checkpoint=False,
-        use_new_attention_order=False,
-    ):
-        super().__init__()
-        self.channels = channels
-        if num_head_channels == -1:
-            self.num_heads = num_heads
-        else:
-            assert (
-                channels % num_head_channels == 0
-            ), f"q,k,v channels {channels} is not divisible by num_head_channels {num_head_channels}"
-            self.num_heads = channels // num_head_channels
-        self.use_checkpoint = use_checkpoint
-        self.norm = normalization(channels)
-        self.qkv = conv_nd(1, channels, channels * 3, 1)
-        if use_new_attention_order:
-            # split qkv before split heads
-            self.attention = QKVAttention(self.num_heads)
-        else:
-            # split heads before split qkv
-            self.attention = QKVAttentionLegacy(self.num_heads)
-
-        self.proj_out = zero_module(conv_nd(1, channels, channels, 1))
-
-    def forward(self, x):
-        return checkpoint(
-            self._forward, (x,), self.parameters(), True
-        )  # TODO: check checkpoint usage, is True # TODO: fix the .half call!!!
-        # return pt_checkpoint(self._forward, x)  # pytorch
-
-    def _forward(self, x):
-        b, c, *spatial = x.shape
-        x = x.reshape(b, c, -1).contiguous()
-        qkv = self.qkv(self.norm(x)).contiguous()
-        h = self.attention(qkv).contiguous()
-        h = self.proj_out(h).contiguous()
-        return (x + h).reshape(b, c, *spatial).contiguous()
-
-
-def count_flops_attn(model, _x, y):
-    """
-    A counter for the `thop` package to count the operations in an
-    attention operation.
-    Meant to be used like:
-        macs, params = thop.profile(
-            model,
-            inputs=(inputs, timestamps),
-            custom_ops={QKVAttention: QKVAttention.count_flops},
-        )
-    """
-    b, c, *spatial = y[0].shape
-    num_spatial = int(np.prod(spatial))
-    # We perform two matmuls with the same number of ops.
-    # The first computes the weight matrix, the second computes
-    # the combination of the value vectors.
-    matmul_ops = 2 * b * (num_spatial**2) * c
-    model.total_ops += th.DoubleTensor([matmul_ops])
-
-
-class QKVAttentionLegacy(nn.Module):
-    """
-    A module which performs QKV attention. Matches legacy QKVAttention + input/output heads shaping
-    """
-
-    def __init__(self, n_heads):
-        super().__init__()
-        self.n_heads = n_heads
-
-    def forward(self, qkv):
-        """
-        Apply QKV attention.
-        :param qkv: an [N x (H * 3 * C) x T] tensor of Qs, Ks, and Vs.
-        :return: an [N x (H * C) x T] tensor after attention.
-        """
-        bs, width, length = qkv.shape
-        assert width % (3 * self.n_heads) == 0
-        ch = width // (3 * self.n_heads)
-        q, k, v = (
-            qkv.reshape(bs * self.n_heads, ch * 3, length).contiguous().split(ch, dim=1)
-        )
-        scale = 1 / math.sqrt(math.sqrt(ch))
-        weight = th.einsum(
-            "bct,bcs->bts", q * scale, k * scale
-        )  # More stable with f16 than dividing afterwards
-        weight = th.softmax(weight.float(), dim=-1).type(weight.dtype)
-        a = th.einsum("bts,bcs->bct", weight, v)
-        return a.reshape(bs, -1, length).contiguous()
-
-    @staticmethod
-    def count_flops(model, _x, y):
-        return count_flops_attn(model, _x, y)
-
-
-class QKVAttention(nn.Module):
-    """
-    A module which performs QKV attention and splits in a different order.
-    """
-
-    def __init__(self, n_heads):
-        super().__init__()
-        self.n_heads = n_heads
-
-    def forward(self, qkv):
-        """
-        Apply QKV attention.
-        :param qkv: an [N x (3 * H * C) x T] tensor of Qs, Ks, and Vs.
-        :return: an [N x (H * C) x T] tensor after attention.
- """ - bs, width, length = qkv.shape - assert width % (3 * self.n_heads) == 0 - ch = width // (3 * self.n_heads) - q, k, v = qkv.chunk(3, dim=1) - scale = 1 / math.sqrt(math.sqrt(ch)) - weight = th.einsum( - "bct,bcs->bts", - (q * scale).view(bs * self.n_heads, ch, length), - (k * scale).view(bs * self.n_heads, ch, length), - ) # More stable with f16 than dividing afterwards - weight = th.softmax(weight.float(), dim=-1).type(weight.dtype) - a = th.einsum( - "bts,bcs->bct", - weight, - v.reshape(bs * self.n_heads, ch, length).contiguous(), - ) - return a.reshape(bs, -1, length).contiguous() - - @staticmethod - def count_flops(model, _x, y): - return count_flops_attn(model, _x, y) - - -class UNetModel(nn.Module): - """ - The full UNet model with attention and timestep embedding. - :param in_channels: channels in the input Tensor. - :param model_channels: base channel count for the model. - :param out_channels: channels in the output Tensor. - :param num_res_blocks: number of residual blocks per downsample. - :param attention_resolutions: a collection of downsample rates at which - attention will take place. May be a set, list, or tuple. - For example, if this contains 4, then at 4x downsampling, attention - will be used. - :param dropout: the dropout probability. - :param channel_mult: channel multiplier for each level of the UNet. - :param conv_resample: if True, use learned convolutions for upsampling and - downsampling. - :param dims: determines if the signal is 1D, 2D, or 3D. - :param num_classes: if specified (as an int), then this model will be - class-conditional with `num_classes` classes. - :param use_checkpoint: use gradient checkpointing to reduce memory usage. - :param num_heads: the number of attention heads in each attention layer. - :param num_heads_channels: if specified, ignore num_heads and instead use - a fixed channel width per attention head. - :param num_heads_upsample: works with num_heads to set a different number - of heads for upsampling. Deprecated. - :param use_scale_shift_norm: use a FiLM-like conditioning mechanism. - :param resblock_updown: use residual blocks for up/downsampling. - :param use_new_attention_order: use a different attention pattern for potentially - increased efficiency. 
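The `1 / sqrt(sqrt(ch))` factor applied to both q and k above is algebraically the usual `1 / sqrt(ch)` attention scaling, just split across the two operands so intermediate values stay small in float16. A quick float32 equivalence check (shapes are illustrative):

```python
import math
import torch

ch, T = 8, 5
q = torch.randn(1, ch, T)
k = torch.randn(1, ch, T)

scale = 1 / math.sqrt(math.sqrt(ch))
w_split = torch.einsum("bct,bcs->bts", q * scale, k * scale)
w_plain = torch.einsum("bct,bcs->bts", q, k) / math.sqrt(ch)
print(torch.allclose(w_split, w_plain, atol=1e-6))   # True
```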
- """ - - def __init__( - self, - image_size, - in_channels, - model_channels, - out_channels, - num_res_blocks, - attention_resolutions, - dropout=0, - channel_mult=(1, 2, 4, 8), - conv_resample=True, - dims=2, - num_classes=None, - extra_film_condition_dim=None, - use_checkpoint=False, - use_fp16=False, - num_heads=-1, - num_head_channels=-1, - num_heads_upsample=-1, - use_scale_shift_norm=False, - extra_film_use_concat=False, # If true, concatenate extrafilm condition with time embedding, else addition - resblock_updown=False, - use_new_attention_order=False, - use_spatial_transformer=False, # custom transformer support - transformer_depth=1, # custom transformer support - context_dim=None, # custom transformer support - n_embed=None, # custom support for prediction of discrete ids into codebook of first stage vq model - legacy=True, - ): - super().__init__() - if num_heads_upsample == -1: - num_heads_upsample = num_heads - - if num_heads == -1: - assert ( - num_head_channels != -1 - ), "Either num_heads or num_head_channels has to be set" - - if num_head_channels == -1: - assert ( - num_heads != -1 - ), "Either num_heads or num_head_channels has to be set" - - self.image_size = image_size - self.in_channels = in_channels - self.model_channels = model_channels - self.out_channels = out_channels - self.num_res_blocks = num_res_blocks - self.attention_resolutions = attention_resolutions - self.dropout = dropout - self.channel_mult = channel_mult - self.conv_resample = conv_resample - self.num_classes = num_classes - self.extra_film_condition_dim = extra_film_condition_dim - self.use_checkpoint = use_checkpoint - self.dtype = th.float16 if use_fp16 else th.float32 - self.num_heads = num_heads - self.num_head_channels = num_head_channels - self.num_heads_upsample = num_heads_upsample - self.predict_codebook_ids = n_embed is not None - self.extra_film_use_concat = extra_film_use_concat - time_embed_dim = model_channels * 4 - self.time_embed = nn.Sequential( - linear(model_channels, time_embed_dim), - nn.SiLU(), - linear(time_embed_dim, time_embed_dim), - ) - - assert not ( - self.num_classes is not None and self.extra_film_condition_dim is not None - ), "As for the condition of theh UNet model, you can only set using class label or an extra embedding vector (such as from CLAP). You cannot set both num_classes and extra_film_condition_dim." - - if self.num_classes is not None: - self.label_emb = nn.Embedding(num_classes, time_embed_dim) - - self.use_extra_film_by_concat = ( - self.extra_film_condition_dim is not None and self.extra_film_use_concat - ) - self.use_extra_film_by_addition = ( - self.extra_film_condition_dim is not None and not self.extra_film_use_concat - ) - - if self.extra_film_condition_dim is not None: - self.film_emb = nn.Linear(self.extra_film_condition_dim, time_embed_dim) - # print("+ Use extra condition on UNet channel using Film. Extra condition dimension is %s. " % self.extra_film_condition_dim) - # if(self.use_extra_film_by_concat): - # print("\t By concatenation with time embedding") - # elif(self.use_extra_film_by_concat): - # print("\t By addition with time embedding") - - if use_spatial_transformer and ( - self.use_extra_film_by_concat or self.use_extra_film_by_addition - ): - # print("+ Spatial transformer will only be used as self-attention. 
Because you have choose to use film as your global condition.") - spatial_transformer_no_context = True - else: - spatial_transformer_no_context = False - - if use_spatial_transformer and not spatial_transformer_no_context: - assert ( - context_dim is not None - ), "Fool!! You forgot to include the dimension of your cross-attention conditioning..." - - if context_dim is not None and not spatial_transformer_no_context: - assert ( - use_spatial_transformer - ), "Fool!! You forgot to use the spatial transformer for your cross-attention conditioning..." - from omegaconf.listconfig import ListConfig - - if type(context_dim) == ListConfig: - context_dim = list(context_dim) - - self.input_blocks = nn.ModuleList( - [ - TimestepEmbedSequential( - conv_nd(dims, in_channels, model_channels, 3, padding=1) - ) - ] - ) - self._feature_size = model_channels - input_block_chans = [model_channels] - ch = model_channels - ds = 1 - for level, mult in enumerate(channel_mult): - for _ in range(num_res_blocks): - layers = [ - ResBlock( - ch, - time_embed_dim - if (not self.use_extra_film_by_concat) - else time_embed_dim * 2, - dropout, - out_channels=mult * model_channels, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ) - ] - ch = mult * model_channels - if ds in attention_resolutions: - if num_head_channels == -1: - dim_head = ch // num_heads - else: - num_heads = ch // num_head_channels - dim_head = num_head_channels - if legacy: - dim_head = ( - ch // num_heads - if use_spatial_transformer - else num_head_channels - ) - layers.append( - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads, - num_head_channels=dim_head, - use_new_attention_order=use_new_attention_order, - ) - if not use_spatial_transformer - else SpatialTransformer( - ch, - num_heads, - dim_head, - depth=transformer_depth, - context_dim=context_dim, - no_context=spatial_transformer_no_context, - ) - ) - self.input_blocks.append(TimestepEmbedSequential(*layers)) - self._feature_size += ch - input_block_chans.append(ch) - if level != len(channel_mult) - 1: - out_ch = ch - self.input_blocks.append( - TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim - if (not self.use_extra_film_by_concat) - else time_embed_dim * 2, - dropout, - out_channels=out_ch, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - down=True, - ) - if resblock_updown - else Downsample( - ch, conv_resample, dims=dims, out_channels=out_ch - ) - ) - ) - ch = out_ch - input_block_chans.append(ch) - ds *= 2 - self._feature_size += ch - - if num_head_channels == -1: - dim_head = ch // num_heads - else: - num_heads = ch // num_head_channels - dim_head = num_head_channels - if legacy: - # num_heads = 1 - dim_head = ch // num_heads if use_spatial_transformer else num_head_channels - self.middle_block = TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim - if (not self.use_extra_film_by_concat) - else time_embed_dim * 2, - dropout, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads, - num_head_channels=dim_head, - use_new_attention_order=use_new_attention_order, - ) - if not use_spatial_transformer - else SpatialTransformer( - ch, - num_heads, - dim_head, - depth=transformer_depth, - context_dim=context_dim, - no_context=spatial_transformer_no_context, - ), - ResBlock( - ch, - time_embed_dim - if (not self.use_extra_film_by_concat) - 
else time_embed_dim * 2, - dropout, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - ) - self._feature_size += ch - - self.output_blocks = nn.ModuleList([]) - for level, mult in list(enumerate(channel_mult))[::-1]: - for i in range(num_res_blocks + 1): - ich = input_block_chans.pop() - layers = [ - ResBlock( - ch + ich, - time_embed_dim - if (not self.use_extra_film_by_concat) - else time_embed_dim * 2, - dropout, - out_channels=model_channels * mult, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ) - ] - ch = model_channels * mult - if ds in attention_resolutions: - if num_head_channels == -1: - dim_head = ch // num_heads - else: - num_heads = ch // num_head_channels - dim_head = num_head_channels - if legacy: - # num_heads = 1 - dim_head = ( - ch // num_heads - if use_spatial_transformer - else num_head_channels - ) - layers.append( - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads_upsample, - num_head_channels=dim_head, - use_new_attention_order=use_new_attention_order, - ) - if not use_spatial_transformer - else SpatialTransformer( - ch, - num_heads, - dim_head, - depth=transformer_depth, - context_dim=context_dim, - no_context=spatial_transformer_no_context, - ) - ) - if level and i == num_res_blocks: - out_ch = ch - layers.append( - ResBlock( - ch, - time_embed_dim - if (not self.use_extra_film_by_concat) - else time_embed_dim * 2, - dropout, - out_channels=out_ch, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - up=True, - ) - if resblock_updown - else Upsample(ch, conv_resample, dims=dims, out_channels=out_ch) - ) - ds //= 2 - self.output_blocks.append(TimestepEmbedSequential(*layers)) - self._feature_size += ch - - self.out = nn.Sequential( - normalization(ch), - nn.SiLU(), - zero_module(conv_nd(dims, model_channels, out_channels, 3, padding=1)), - ) - if self.predict_codebook_ids: - self.id_predictor = nn.Sequential( - normalization(ch), - conv_nd(dims, model_channels, n_embed, 1), - # nn.LogSoftmax(dim=1) # change to cross_entropy and produce non-normalized logits - ) - - self.shape_reported = False - - def convert_to_fp16(self): - """ - Convert the torso of the model to float16. - """ - self.input_blocks.apply(convert_module_to_f16) - self.middle_block.apply(convert_module_to_f16) - self.output_blocks.apply(convert_module_to_f16) - - def convert_to_fp32(self): - """ - Convert the torso of the model to float32. - """ - self.input_blocks.apply(convert_module_to_f32) - self.middle_block.apply(convert_module_to_f32) - self.output_blocks.apply(convert_module_to_f32) - - def forward(self, x, timesteps=None, context=None, y=None, **kwargs): - """ - Apply the model to an input batch. - :param x: an [N x C x ...] Tensor of inputs. - :param timesteps: a 1-D batch of timesteps. - :param context: conditioning plugged in via crossattn - :param y: an [N] Tensor of labels, if class-conditional. an [N, extra_film_condition_dim] Tensor if film-embed conditional - :return: an [N x C x ...] Tensor of outputs. 
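The `forward` pass here first turns the integer timesteps into sinusoidal features via `timestep_embedding` before feeding them to `time_embed`. A hedged stand-in for that helper, mirroring the usual diffusion-model formulation (the imported implementation from `audioldm.latent_diffusion.util` is not shown in this diff):

```python
import math
import torch

def timestep_embedding(timesteps, dim, max_period=10000):
    # Half the channels get cosines, half get sines, over log-spaced frequencies.
    half = dim // 2
    freqs = torch.exp(-math.log(max_period) *
                      torch.arange(half, dtype=torch.float32) / half)
    args = timesteps[:, None].float() * freqs[None]
    return torch.cat([torch.cos(args), torch.sin(args)], dim=-1)

emb = timestep_embedding(torch.tensor([0, 10, 500]), dim=128)
print(emb.shape)                             # torch.Size([3, 128])
```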
- """ - if not self.shape_reported: - # print("The shape of UNet input is", x.size()) - self.shape_reported = True - - assert (y is not None) == ( - self.num_classes is not None or self.extra_film_condition_dim is not None - ), "must specify y if and only if the model is class-conditional or film embedding conditional" - hs = [] - t_emb = timestep_embedding(timesteps, self.model_channels, repeat_only=False) - emb = self.time_embed(t_emb) - - if self.num_classes is not None: - assert y.shape == (x.shape[0],) - emb = emb + self.label_emb(y) - - if self.use_extra_film_by_addition: - emb = emb + self.film_emb(y) - elif self.use_extra_film_by_concat: - emb = th.cat([emb, self.film_emb(y)], dim=-1) - - h = x.type(self.dtype) - for module in self.input_blocks: - h = module(h, emb, context) - hs.append(h) - h = self.middle_block(h, emb, context) - for module in self.output_blocks: - h = th.cat([h, hs.pop()], dim=1) - h = module(h, emb, context) - h = h.type(x.dtype) - if self.predict_codebook_ids: - return self.id_predictor(h) - else: - return self.out(h) - - -class EncoderUNetModel(nn.Module): - """ - The half UNet model with attention and timestep embedding. - For usage, see UNet. - """ - - def __init__( - self, - image_size, - in_channels, - model_channels, - out_channels, - num_res_blocks, - attention_resolutions, - dropout=0, - channel_mult=(1, 2, 4, 8), - conv_resample=True, - dims=2, - use_checkpoint=False, - use_fp16=False, - num_heads=1, - num_head_channels=-1, - num_heads_upsample=-1, - use_scale_shift_norm=False, - resblock_updown=False, - use_new_attention_order=False, - pool="adaptive", - *args, - **kwargs, - ): - super().__init__() - - if num_heads_upsample == -1: - num_heads_upsample = num_heads - - self.in_channels = in_channels - self.model_channels = model_channels - self.out_channels = out_channels - self.num_res_blocks = num_res_blocks - self.attention_resolutions = attention_resolutions - self.dropout = dropout - self.channel_mult = channel_mult - self.conv_resample = conv_resample - self.use_checkpoint = use_checkpoint - self.dtype = th.float16 if use_fp16 else th.float32 - self.num_heads = num_heads - self.num_head_channels = num_head_channels - self.num_heads_upsample = num_heads_upsample - - time_embed_dim = model_channels * 4 - self.time_embed = nn.Sequential( - linear(model_channels, time_embed_dim), - nn.SiLU(), - linear(time_embed_dim, time_embed_dim), - ) - - self.input_blocks = nn.ModuleList( - [ - TimestepEmbedSequential( - conv_nd(dims, in_channels, model_channels, 3, padding=1) - ) - ] - ) - self._feature_size = model_channels - input_block_chans = [model_channels] - ch = model_channels - ds = 1 - for level, mult in enumerate(channel_mult): - for _ in range(num_res_blocks): - layers = [ - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=mult * model_channels, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ) - ] - ch = mult * model_channels - if ds in attention_resolutions: - layers.append( - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads, - num_head_channels=num_head_channels, - use_new_attention_order=use_new_attention_order, - ) - ) - self.input_blocks.append(TimestepEmbedSequential(*layers)) - self._feature_size += ch - input_block_chans.append(ch) - if level != len(channel_mult) - 1: - out_ch = ch - self.input_blocks.append( - TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=out_ch, - dims=dims, - use_checkpoint=use_checkpoint, - 
use_scale_shift_norm=use_scale_shift_norm, - down=True, - ) - if resblock_updown - else Downsample( - ch, conv_resample, dims=dims, out_channels=out_ch - ) - ) - ) - ch = out_ch - input_block_chans.append(ch) - ds *= 2 - self._feature_size += ch - - self.middle_block = TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim, - dropout, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads, - num_head_channels=num_head_channels, - use_new_attention_order=use_new_attention_order, - ), - ResBlock( - ch, - time_embed_dim, - dropout, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - ) - self._feature_size += ch - self.pool = pool - if pool == "adaptive": - self.out = nn.Sequential( - normalization(ch), - nn.SiLU(), - nn.AdaptiveAvgPool2d((1, 1)), - zero_module(conv_nd(dims, ch, out_channels, 1)), - nn.Flatten(), - ) - elif pool == "attention": - assert num_head_channels != -1 - self.out = nn.Sequential( - normalization(ch), - nn.SiLU(), - AttentionPool2d( - (image_size // ds), ch, num_head_channels, out_channels - ), - ) - elif pool == "spatial": - self.out = nn.Sequential( - nn.Linear(self._feature_size, 2048), - nn.ReLU(), - nn.Linear(2048, self.out_channels), - ) - elif pool == "spatial_v2": - self.out = nn.Sequential( - nn.Linear(self._feature_size, 2048), - normalization(2048), - nn.SiLU(), - nn.Linear(2048, self.out_channels), - ) - else: - raise NotImplementedError(f"Unexpected {pool} pooling") - - def convert_to_fp16(self): - """ - Convert the torso of the model to float16. - """ - self.input_blocks.apply(convert_module_to_f16) - self.middle_block.apply(convert_module_to_f16) - - def convert_to_fp32(self): - """ - Convert the torso of the model to float32. - """ - self.input_blocks.apply(convert_module_to_f32) - self.middle_block.apply(convert_module_to_f32) - - def forward(self, x, timesteps): - """ - Apply the model to an input batch. - :param x: an [N x C x ...] Tensor of inputs. - :param timesteps: a 1-D batch of timesteps. - :return: an [N x K] Tensor of outputs. - """ - emb = self.time_embed(timestep_embedding(timesteps, self.model_channels)) - - results = [] - h = x.type(self.dtype) - for module in self.input_blocks: - h = module(h, emb) - if self.pool.startswith("spatial"): - results.append(h.type(x.dtype).mean(dim=(2, 3))) - h = self.middle_block(h, emb) - if self.pool.startswith("spatial"): - results.append(h.type(x.dtype).mean(dim=(2, 3))) - h = th.cat(results, axis=-1) - return self.out(h) - else: - h = h.type(x.dtype) - return self.out(h) diff --git a/spaces/deepwisdom/MetaGPT/metagpt/prompts/structure_action.py b/spaces/deepwisdom/MetaGPT/metagpt/prompts/structure_action.py deleted file mode 100644 index 97c57cf249556cfc2af8f534bbd4fe8284d6a683..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/metagpt/prompts/structure_action.py +++ /dev/null @@ -1,22 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/30 10:12 -@Author : alexanderwu -@File : structure_action.py -""" - -ACTION_SYSTEM = """SYSTEM: -You serve as an assistant that helps me play Minecraft. -I will give you a sentence. Please convert this sentence into one or several actions according to the following instructions. -Each action should be a tuple of four items, written in the form (’verb’, ’object’, ’tools’, ’materials’) -’verb’ is the verb of this action. 
-’object’ refers to the target object of the action. -’tools’ specifies the tools required for the action. -’material’ specifies the materials required for the action. -If some of the items are not required, set them to be ’None’. -""" - -ACTION_USER = """USER: -The sentence is {sentence}. Generate the action tuple according to the requirements. -""" diff --git a/spaces/derina/BartSummarizer/README.md b/spaces/derina/BartSummarizer/README.md deleted file mode 100644 index 175a7da038f05ad0b4775429f77dbdd4c12f62cb..0000000000000000000000000000000000000000 --- a/spaces/derina/BartSummarizer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: OpenAISummarizer -emoji: 👁 -colorFrom: gray -colorTo: yellow -sdk: gradio -sdk_version: 3.28.1 -app_file: app.py -pinned: false -license: bsd ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/digitalxingtong/Miiu-Bert-Vits2/text/chinese.py b/spaces/digitalxingtong/Miiu-Bert-Vits2/text/chinese.py deleted file mode 100644 index 276753880b73de2e8889dcb2101cd98c09e0710b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Miiu-Bert-Vits2/text/chinese.py +++ /dev/null @@ -1,193 +0,0 @@ -import os -import re - -import cn2an -from pypinyin import lazy_pinyin, Style - -from text import symbols -from text.symbols import punctuation -from text.tone_sandhi import ToneSandhi - -current_file_path = os.path.dirname(__file__) -pinyin_to_symbol_map = {line.split("\t")[0]: line.strip().split("\t")[1] for line in - open(os.path.join(current_file_path, 'opencpop-strict.txt')).readlines()} - -import jieba.posseg as psg - - -rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - '$': '.', - '“': "'", - '”': "'", - '‘': "'", - '’': "'", - '(': "'", - ')': "'", - '(': "'", - ')': "'", - '《': "'", - '》': "'", - '【': "'", - '】': "'", - '[': "'", - ']': "'", - '—': "-", - '~': "-", - '~': "-", - '「': "'", - '」': "'", - -} - -tone_modifier = ToneSandhi() - -def replace_punctuation(text): - text = text.replace("嗯", "恩").replace("呣","母") - pattern = re.compile('|'.join(re.escape(p) for p in rep_map.keys())) - - replaced_text = pattern.sub(lambda x: rep_map[x.group()], text) - - replaced_text = re.sub(r'[^\u4e00-\u9fa5'+"".join(punctuation)+r']+', '', replaced_text) - - return replaced_text - -def g2p(text): - pattern = r'(?<=[{0}])\s*'.format(''.join(punctuation)) - sentences = [i for i in re.split(pattern, text) if i.strip()!=''] - phones, tones, word2ph = _g2p(sentences) - assert sum(word2ph) == len(phones) - assert len(word2ph) == len(text) #Sometimes it will crash,you can add a try-catch. 
- phones = ['_'] + phones + ["_"] - tones = [0] + tones + [0] - word2ph = [1] + word2ph + [1] - return phones, tones, word2ph - - -def _get_initials_finals(word): - initials = [] - finals = [] - orig_initials = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.INITIALS) - orig_finals = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for c, v in zip(orig_initials, orig_finals): - initials.append(c) - finals.append(v) - return initials, finals - - -def _g2p(segments): - phones_list = [] - tones_list = [] - word2ph = [] - for seg in segments: - pinyins = [] - # Replace all English words in the sentence - seg = re.sub('[a-zA-Z]+', '', seg) - seg_cut = psg.lcut(seg) - initials = [] - finals = [] - seg_cut = tone_modifier.pre_merge_for_modify(seg_cut) - for word, pos in seg_cut: - if pos == 'eng': - continue - sub_initials, sub_finals = _get_initials_finals(word) - sub_finals = tone_modifier.modified_tone(word, pos, - sub_finals) - initials.append(sub_initials) - finals.append(sub_finals) - - # assert len(sub_initials) == len(sub_finals) == len(word) - initials = sum(initials, []) - finals = sum(finals, []) - # - for c, v in zip(initials, finals): - raw_pinyin = c+v - # NOTE: post process for pypinyin outputs - # we discriminate i, ii and iii - if c == v: - assert c in punctuation - phone = [c] - tone = '0' - word2ph.append(1) - else: - v_without_tone = v[:-1] - tone = v[-1] - - pinyin = c+v_without_tone - assert tone in '12345' - - if c: - # 多音节 - v_rep_map = { - "uei": 'ui', - 'iou': 'iu', - 'uen': 'un', - } - if v_without_tone in v_rep_map.keys(): - pinyin = c+v_rep_map[v_without_tone] - else: - # 单音节 - pinyin_rep_map = { - 'ing': 'ying', - 'i': 'yi', - 'in': 'yin', - 'u': 'wu', - } - if pinyin in pinyin_rep_map.keys(): - pinyin = pinyin_rep_map[pinyin] - else: - single_rep_map = { - 'v': 'yu', - 'e': 'e', - 'i': 'y', - 'u': 'w', - } - if pinyin[0] in single_rep_map.keys(): - pinyin = single_rep_map[pinyin[0]]+pinyin[1:] - - assert pinyin in pinyin_to_symbol_map.keys(), (pinyin, seg, raw_pinyin) - phone = pinyin_to_symbol_map[pinyin].split(' ') - word2ph.append(len(phone)) - - phones_list += phone - tones_list += [int(tone)] * len(phone) - return phones_list, tones_list, word2ph - - - -def text_normalize(text): - numbers = re.findall(r'\d+(?:\.?\d+)?', text) - for number in numbers: - text = text.replace(number, cn2an.an2cn(number), 1) - text = replace_punctuation(text) - return text - -def get_bert_feature(text, word2ph): - from text import chinese_bert - return chinese_bert.get_bert_feature(text, word2ph) - -if __name__ == '__main__': - from text.chinese_bert import get_bert_feature - text = "啊!但是《原神》是由,米哈\游自主, [研发]的一款全.新开放世界.冒险游戏" - text = text_normalize(text) - print(text) - phones, tones, word2ph = g2p(text) - bert = get_bert_feature(text, word2ph) - - print(phones, tones, word2ph, bert.shape) - - -# # 示例用法 -# text = "这是一个示例文本:,你好!这是一个测试...." 
-# print(g2p_paddle(text)) # 输出: 这是一个示例文本你好这是一个测试 diff --git a/spaces/digitalxingtong/Taffy-Bert-VITS2/monotonic_align/setup.py b/spaces/digitalxingtong/Taffy-Bert-VITS2/monotonic_align/setup.py deleted file mode 100644 index 30c224807a70faa9df9c9eb75f8e80c8c867b16b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Taffy-Bert-VITS2/monotonic_align/setup.py +++ /dev/null @@ -1,9 +0,0 @@ -from distutils.core import setup -from Cython.Build import cythonize -import numpy - -setup( - name = 'monotonic_align', - ext_modules = cythonize("core.pyx"), - include_dirs=[numpy.get_include()] -) diff --git a/spaces/digitalxingtong/Xingtong-All-in-One/README.md b/spaces/digitalxingtong/Xingtong-All-in-One/README.md deleted file mode 100644 index 4171b70798d66b8e0a4b8319ad2c8c9dc582510f..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Xingtong-All-in-One/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Xingtong All In One -emoji: 🌖 -colorFrom: gray -colorTo: green -sdk: streamlit -sdk_version: 1.27.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/dineshreddy/WALT/configs/_base_/models/mask_rcnn_swin_fpn.py b/spaces/dineshreddy/WALT/configs/_base_/models/mask_rcnn_swin_fpn.py deleted file mode 100644 index e3d42197f4646cd9ecafac2095d3f8e079f0a729..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/configs/_base_/models/mask_rcnn_swin_fpn.py +++ /dev/null @@ -1,127 +0,0 @@ -# model settings -model = dict( - type='MaskRCNN', - pretrained=None, - backbone=dict( - type='SwinTransformer', - embed_dim=96, - depths=[2, 2, 6, 2], - num_heads=[3, 6, 12, 24], - window_size=7, - mlp_ratio=4., - qkv_bias=True, - qk_scale=None, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0.2, - ape=False, - patch_norm=True, - out_indices=(0, 1, 2, 3), - use_checkpoint=False), - neck=dict( - type='FPN', - in_channels=[96, 192, 384, 768], - out_channels=256, - num_outs=5), - rpn_head=dict( - type='RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - roi_head=dict( - type='StandardRoIHead', - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - bbox_head=dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - mask_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - mask_head=dict( - type='FCNMaskHead', - num_convs=4, - in_channels=256, - conv_out_channels=256, - num_classes=80, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))), - # model training and testing settings - 
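    # NOTE: the assigner/sampler thresholds below are, to our knowledge, the
    # standard mmdetection Mask R-CNN defaults; the Swin backbone swapped in
    # above leaves them unchanged.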
train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=-1, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=2000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - mask_size=28, - pos_weight=-1, - debug=False)), - test_cfg=dict( - rpn=dict( - nms_pre=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100, - mask_thr_binary=0.5))) diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/textdet/dbnet/README.md b/spaces/dinhminh20521597/OCR_DEMO/configs/textdet/dbnet/README.md deleted file mode 100644 index d2007c72ec2b45e70d30c6edea128b7e0be2baca..0000000000000000000000000000000000000000 --- a/spaces/dinhminh20521597/OCR_DEMO/configs/textdet/dbnet/README.md +++ /dev/null @@ -1,33 +0,0 @@ -# DBNet - -> [Real-time Scene Text Detection with Differentiable Binarization](https://arxiv.org/abs/1911.08947) - - - -## Abstract - -Recently, segmentation-based methods are quite popular in scene text detection, as the segmentation results can more accurately describe scene text of various shapes such as curve text. However, the post-processing of binarization is essential for segmentation-based detection, which converts probability maps produced by a segmentation method into bounding boxes/regions of text. In this paper, we propose a module named Differentiable Binarization (DB), which can perform the binarization process in a segmentation network. Optimized along with a DB module, a segmentation network can adaptively set the thresholds for binarization, which not only simplifies the post-processing but also enhances the performance of text detection. Based on a simple segmentation network, we validate the performance improvements of DB on five benchmark datasets, which consistently achieves state-of-the-art results, in terms of both detection accuracy and speed. In particular, with a light-weight backbone, the performance improvements by DB are significant so that we can look for an ideal tradeoff between detection accuracy and efficiency. Specifically, with a backbone of ResNet-18, our detector achieves an F-measure of 82.8, running at 62 FPS, on the MSRA-TD500 dataset. - -
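The differentiable binarization module described in the abstract boils down to a smooth approximation of the step function, `B = 1 / (1 + exp(-k(P - T)))`. A minimal PyTorch sketch of that formula (not the mmocr implementation; `k = 50` is the paper's default amplification factor):

```python
import torch

def differentiable_binarization(prob_map: torch.Tensor,
                                thresh_map: torch.Tensor,
                                k: float = 50.0) -> torch.Tensor:
    """Approximate binarization B = sigmoid(k * (P - T)).

    prob_map   : per-pixel text probability P, e.g. shape (N, H, W)
    thresh_map : per-pixel adaptive threshold T, same shape
    k          : amplification factor; larger k approaches a hard step
    """
    return torch.sigmoid(k * (prob_map - thresh_map))
```

Because the sigmoid is smooth, gradients flow through the threshold map during training, which is what lets the network learn `T` jointly with `P` instead of relying on a hand-tuned binarization threshold at inference time.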
    - -## Results and models - -### ICDAR2015 - -| Method | Pretrained Model | Training set | Test set | #epochs | Test size | Recall | Precision | Hmean | Download | -| :---------------------------------------: | :-------------------------------------------------: | :-------------: | :------------: | :-----: | :-------: | :----: | :-------: | :---: | :-----------------------------------------: | -| [DBNet_r18](/configs/textdet/dbnet/dbnet_r18_fpnc_1200e_icdar2015.py) | ImageNet | ICDAR2015 Train | ICDAR2015 Test | 1200 | 736 | 0.731 | 0.871 | 0.795 | [model](https://download.openmmlab.com/mmocr/textdet/dbnet/dbnet_r18_fpnc_sbn_1200e_icdar2015_20210329-ba3ab597.pth) \| [log](https://download.openmmlab.com/mmocr/textdet/dbnet/dbnet_r18_fpnc_sbn_1200e_icdar2015_20210329-ba3ab597.log.json) | -| [DBNet_r50dcn](/configs/textdet/dbnet/dbnet_r50dcnv2_fpnc_1200e_icdar2015.py) | [Synthtext](https://download.openmmlab.com/mmocr/textdet/dbnet/dbnet_r50dcnv2_fpnc_sbn_2e_synthtext_20210325-aa96e477.pth) | ICDAR2015 Train | ICDAR2015 Test | 1200 | 1024 | 0.814 | 0.868 | 0.840 | [model](https://download.openmmlab.com/mmocr/textdet/dbnet/dbnet_r50dcnv2_fpnc_sbn_1200e_icdar2015_20211025-9fe3b590.pth) \| [log](https://download.openmmlab.com/mmocr/textdet/dbnet/dbnet_r50dcnv2_fpnc_sbn_1200e_icdar2015_20211025-9fe3b590.log.json) | - -## Citation - -```bibtex -@article{Liao_Wan_Yao_Chen_Bai_2020, - title={Real-Time Scene Text Detection with Differentiable Binarization}, - journal={Proceedings of the AAAI Conference on Artificial Intelligence}, - author={Liao, Minghui and Wan, Zhaoyi and Yao, Cong and Chen, Kai and Bai, Xiang}, - year={2020}, - pages={11474-11481}} -``` diff --git a/spaces/dirge/voicevox/voicevox_engine/preset/PresetError.py b/spaces/dirge/voicevox/voicevox_engine/preset/PresetError.py deleted file mode 100644 index 6f5f802f57b03ebcc07f1173f47b9cb384e0fbd1..0000000000000000000000000000000000000000 --- a/spaces/dirge/voicevox/voicevox_engine/preset/PresetError.py +++ /dev/null @@ -1,2 +0,0 @@ -class PresetError(Exception): - pass diff --git a/spaces/duycse1603/math2tex/ScanSSD/utils/augmentations.py b/spaces/duycse1603/math2tex/ScanSSD/utils/augmentations.py deleted file mode 100644 index 67992ad00e50093366c9ea1adb0320f7c14ee56f..0000000000000000000000000000000000000000 --- a/spaces/duycse1603/math2tex/ScanSSD/utils/augmentations.py +++ /dev/null @@ -1,424 +0,0 @@ -import torch -from torchvision import transforms -import cv2 -import numpy as np -import types -from numpy import random -from matplotlib import pyplot as plt -from PIL import Image, ImageOps - -def intersect(box_a, box_b): - max_xy = np.minimum(box_a[:, 2:], box_b[2:]) - min_xy = np.maximum(box_a[:, :2], box_b[:2]) - inter = np.clip((max_xy - min_xy), a_min=0, a_max=np.inf) - return inter[:, 0] * inter[:, 1] - - -def jaccard_numpy(box_a, box_b): - """Compute the jaccard overlap of two sets of boxes. The jaccard overlap - is simply the intersection over union of two boxes. - E.g.: - A ∩ B / A ∪ B = A ∩ B / (area(A) + area(B) - A ∩ B) - Args: - box_a: Multiple bounding boxes, Shape: [num_boxes,4] - box_b: Single bounding box, Shape: [4] - Return: - jaccard overlap: Shape: [box_a.shape[0], box_a.shape[1]] - """ - inter = intersect(box_a, box_b) - area_a = ((box_a[:, 2]-box_a[:, 0]) * - (box_a[:, 3]-box_a[:, 1])) # [A,B] - area_b = ((box_b[2]-box_b[0]) * - (box_b[3]-box_b[1])) # [A,B] - union = area_a + area_b - inter - return inter / union # [A,B] - - -class Compose(object): - """Composes several augmentations together. 
- Args: - transforms (List[Transform]): list of transforms to compose. - Example: - >>> augmentations.Compose([ - >>> transforms.CenterCrop(10), - >>> transforms.ToTensor(), - >>> ]) - """ - - def __init__(self, transforms): - self.transforms = transforms - - def __call__(self, img, boxes=None, labels=None): - for t in self.transforms: - img, boxes, labels = t(img, boxes, labels) - return img, boxes, labels - - -class Lambda(object): - """Applies a lambda as a transform.""" - - def __init__(self, lambd): - assert isinstance(lambd, types.LambdaType) - self.lambd = lambd - - def __call__(self, img, boxes=None, labels=None): - return self.lambd(img, boxes, labels) - - -class ConvertFromInts(object): - def __call__(self, image, boxes=None, labels=None): - return image.astype(np.float32), boxes, labels - - -class SubtractMeans(object): - def __init__(self, mean): - self.mean = np.array(mean, dtype=np.float32) - - def __call__(self, image, boxes=None, labels=None): - image = image.astype(np.float32) - image -= self.mean - return image.astype(np.float32), boxes, labels - - -class ToAbsoluteCoords(object): - def __call__(self, image, boxes=None, labels=None): - height, width, channels = image.shape - boxes[:, 0] *= width - boxes[:, 2] *= width - boxes[:, 1] *= height - boxes[:, 3] *= height - - return image, boxes, labels - - -class ToPercentCoords(object): - def __call__(self, image, boxes=None, labels=None): - height, width, channels = image.shape - boxes[:, 0] /= width - boxes[:, 2] /= width - boxes[:, 1] /= height - boxes[:, 3] /= height - - return image, boxes, labels - - -class Resize(object): - def __init__(self, size=300): - self.size = size - - def __call__(self, image, boxes=None, labels=None): - # plt.imshow(image) - # plt.savefig("eval/bface.png", dpi=600) - # plt.close() - - image = cv2.resize(image, (self.size,self.size), interpolation=cv2.INTER_AREA) - # plt.imshow(image) - # plt.savefig("eval/face.png", dpi=600) - # plt.close() - return image, boxes, labels - - -class RandomSaturation(object): - def __init__(self, lower=0.5, upper=1.5): - self.lower = lower - self.upper = upper - assert self.upper >= self.lower, "contrast upper must be >= lower." - assert self.lower >= 0, "contrast lower must be non-negative." 
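    # NOTE: __call__ below scales channel 1, so this transform assumes an HSV
    # image; in the PhotometricDistort pipeline further down, ConvertColor
    # switches BGR -> HSV immediately before RandomSaturation runs. (The assert
    # messages above say "contrast" because they were copied from
    # RandomContrast; they guard the saturation range here.)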
- - def __call__(self, image, boxes=None, labels=None): - if random.randint(2): - image[:, :, 1] *= random.uniform(self.lower, self.upper) - - return image, boxes, labels - - -class RandomHue(object): - def __init__(self, delta=18.0): - assert delta >= 0.0 and delta <= 360.0 - self.delta = delta - - def __call__(self, image, boxes=None, labels=None): - if random.randint(2): - image[:, :, 0] += random.uniform(-self.delta, self.delta) - image[:, :, 0][image[:, :, 0] > 360.0] -= 360.0 - image[:, :, 0][image[:, :, 0] < 0.0] += 360.0 - return image, boxes, labels - - -class RandomLightingNoise(object): - def __init__(self): - self.perms = ((0, 1, 2), (0, 2, 1), - (1, 0, 2), (1, 2, 0), - (2, 0, 1), (2, 1, 0)) - - def __call__(self, image, boxes=None, labels=None): - if random.randint(2): - swap = self.perms[random.randint(len(self.perms))] - shuffle = SwapChannels(swap) # shuffle channels - image = shuffle(image) - return image, boxes, labels - - -class ConvertColor(object): - def __init__(self, current='BGR', transform='HSV'): - self.transform = transform - self.current = current - - def __call__(self, image, boxes=None, labels=None): - if self.current == 'BGR' and self.transform == 'HSV': - image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV) - elif self.current == 'HSV' and self.transform == 'BGR': - image = cv2.cvtColor(image, cv2.COLOR_HSV2BGR) - else: - raise NotImplementedError - return image, boxes, labels - - -class RandomContrast(object): - def __init__(self, lower=0.5, upper=1.5): - self.lower = lower - self.upper = upper - assert self.upper >= self.lower, "contrast upper must be >= lower." - assert self.lower >= 0, "contrast lower must be non-negative." - - # expects float image - def __call__(self, image, boxes=None, labels=None): - if random.randint(2): - alpha = random.uniform(self.lower, self.upper) - image *= alpha - return image, boxes, labels - - -class RandomBrightness(object): - def __init__(self, delta=32): - assert delta >= 0.0 - assert delta <= 255.0 - self.delta = delta - - def __call__(self, image, boxes=None, labels=None): - if random.randint(2): - delta = random.uniform(-self.delta, self.delta) - image += delta - return image, boxes, labels - - -class ToCV2Image(object): - def __call__(self, tensor, boxes=None, labels=None): - return tensor.cpu().numpy().astype(np.float32).transpose((1, 2, 0)), boxes, labels - - -class ToTensor(object): - def __call__(self, cvimage, boxes=None, labels=None): - return torch.from_numpy(cvimage.astype(np.float32)).permute(2, 0, 1), boxes, labels - - -class RandomSampleCrop(object): - """Crop - Arguments: - img (Image): the image being input during training - boxes (Tensor): the original bounding boxes in pt form - labels (Tensor): the class labels for each bbox - mode (float tuple): the min and max jaccard overlaps - Return: - (img, boxes, classes) - img (Image): the cropped image - boxes (Tensor): the adjusted bounding boxes in pt form - labels (Tensor): the class labels for each bbox - """ - def __init__(self): - self.sample_options = ( - # using entire original input image - None, - # sample a patch s.t. 
MIN jaccard w/ obj in .1,.3,.4,.7,.9 - (0.1, None), - (0.3, None), - (0.5, None), - (0.7, None), - (0.9, None), - # randomly sample a patch - (None, None), - ) - - def __call__(self, image, boxes=None, labels=None): - height, width, _ = image.shape - while True: - # randomly choose a mode - mode = random.choice(self.sample_options) - if mode is None: - return image, boxes, labels - - min_iou, max_iou = mode - if min_iou is None: - min_iou = float('-inf') - if max_iou is None: - max_iou = float('inf') - - # max trails (50) - for _ in range(50): - current_image = image - - w = random.uniform(0.3 * width, width) - h = random.uniform(0.3 * height, height) - - # aspect ratio constraint b/t .5 & 2 - if h / w < 0.5 or h / w > 2: - continue - - left = random.uniform(width - w) - top = random.uniform(height - h) - - # convert to integer rect x1,y1,x2,y2 - rect = np.array([int(left), int(top), int(left+w), int(top+h)]) - - # calculate IoU (jaccard overlap) b/t the cropped and gt boxes - overlap = jaccard_numpy(boxes, rect) - - # is min and max overlap constraint satisfied? if not try again - if overlap.max() < min_iou or overlap.min() > max_iou: - continue - - # cut the crop from the image - current_image = current_image[rect[1]:rect[3], rect[0]:rect[2],:] - - # keep overlap with gt box IF center in sampled patch - centers = (boxes[:, :2] + boxes[:, 2:]) / 2.0 - - # mask in all gt boxes that above and to the left of centers - m1 = (rect[0] < centers[:, 0]) * (rect[1] < centers[:, 1]) - - # mask in all gt boxes that under and to the right of centers - m2 = (rect[2] > centers[:, 0]) * (rect[3] > centers[:, 1]) - - # mask in that both m1 and m2 are true - mask = m1 * m2 - - # have any valid boxes? try again if not - if not mask.any(): - continue - - # take only matching gt boxes - current_boxes = boxes[mask, :].copy() - - # take only matching gt labels - current_labels = labels[mask] - - # should we use the box left and top corner or the crop's - current_boxes[:, :2] = np.maximum(current_boxes[:, :2], - rect[:2]) - # adjust to crop (by substracting crop's left,top) - current_boxes[:, :2] -= rect[:2] - - current_boxes[:, 2:] = np.minimum(current_boxes[:, 2:], - rect[2:]) - # adjust to crop (by substracting crop's left,top) - current_boxes[:, 2:] -= rect[:2] - - return current_image, current_boxes, current_labels - - -class Expand(object): - def __init__(self, mean): - self.mean = mean - - def __call__(self, image, boxes, labels): - if random.randint(2): - return image, boxes, labels - - height, width, depth = image.shape - ratio = random.uniform(1, 4) - left = random.uniform(0, width*ratio - width) - top = random.uniform(0, height*ratio - height) - - expand_image = np.zeros( - (int(height*ratio), int(width*ratio), depth), - dtype=image.dtype) - expand_image[:, :, :] = self.mean - expand_image[int(top):int(top + height), - int(left):int(left + width)] = image - image = expand_image - - boxes = boxes.copy() - boxes[:, :2] += (int(left), int(top)) - boxes[:, 2:] += (int(left), int(top)) - - return image, boxes, labels - - -class RandomMirror(object): - def __call__(self, image, boxes, classes): - _, width, _ = image.shape - if random.randint(2): - image = image[:, ::-1] - boxes = boxes.copy() - boxes[:, 0::2] = width - boxes[:, 2::-2] - return image, boxes, classes - - -class SwapChannels(object): - """Transforms a tensorized image by swapping the channels in the order - specified in the swap tuple. 
- Args: - swaps (int triple): final order of channels - eg: (2, 1, 0) - """ - - def __init__(self, swaps): - self.swaps = swaps - - def __call__(self, image): - """ - Args: - image (Tensor): image tensor to be transformed - Return: - a tensor with channels swapped according to swap - """ - # if torch.is_tensor(image): - # image = image.data.cpu().numpy() - # else: - # image = np.array(image) - image = image[:, :, self.swaps] - return image - - -class PhotometricDistort(object): - def __init__(self): - self.pd = [ - RandomContrast(), - ConvertColor(transform='HSV'), - RandomSaturation(), - RandomHue(), - ConvertColor(current='HSV', transform='BGR'), - RandomContrast() - ] - self.rand_brightness = RandomBrightness() - self.rand_light_noise = RandomLightingNoise() - - def __call__(self, image, boxes, labels): - im = image.copy() - im, boxes, labels = self.rand_brightness(im, boxes, labels) - if random.randint(2): - distort = Compose(self.pd[:-1]) - else: - distort = Compose(self.pd[1:]) - im, boxes, labels = distort(im, boxes, labels) - return self.rand_light_noise(im, boxes, labels) - - -class SSDAugmentation(object): - def __init__(self, size=300, mean=(104, 117, 123)): - self.mean = mean - self.size = size - self.augment = Compose([ - ConvertFromInts(), - ToAbsoluteCoords(), - PhotometricDistort(), - Expand(self.mean), - RandomSampleCrop(), - #RandomMirror(), - ToPercentCoords(), - Resize(self.size), - SubtractMeans(self.mean) - ]) - - def __call__(self, img, boxes, labels): - return self.augment(img, boxes, labels) diff --git a/spaces/eIysia/VITS-Umamusume-voice-synthesizer/models.py b/spaces/eIysia/VITS-Umamusume-voice-synthesizer/models.py deleted file mode 100644 index 7dcd22edf811b952514080f5f06cc43d635ead28..0000000000000000000000000000000000000000 --- a/spaces/eIysia/VITS-Umamusume-voice-synthesizer/models.py +++ /dev/null @@ -1,542 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions - -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. 
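        # NOTE: the reassignment above makes the filter_channels argument a
        # no-op; presumably it is kept so existing configs/checkpoints still
        # load (an assumption, the TODO comment is carried over from upstream
        # VITS).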
- self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, 
in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - emotion_embedding): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emotion_embedding = emotion_embedding - - if self.n_vocab!=0: - self.emb = nn.Embedding(n_vocab, hidden_channels) - if emotion_embedding: - self.emotion_emb = nn.Linear(1024, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, emotion_embedding=None): - if self.n_vocab!=0: - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - if emotion_embedding is not None: - x = x + self.emotion_emb(emotion_embedding.unsqueeze(1)) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = 
self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = 
weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - emotion_embedding=False, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - emotion_embedding) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - 
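        # NOTE: the alignment search below calls monotonic_align.maximum_path,
        # but this file never imports monotonic_align, so the training forward
        # would raise a NameError as written. Adding `import monotonic_align`
        # at the top of the file (the Cython extension built by the
        # monotonic_align/setup.py shown earlier in this diff) fixes it.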
x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None, emotion_embedding=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, emotion_embedding) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." 
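        # Voice conversion: encode the source audio with the source-speaker
        # embedding, push it through the flow into the speaker-independent
        # prior space, then invert the flow under the target-speaker embedding
        # and decode; linguistic content is preserved while speaker identity
        # changes.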
- g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - diff --git a/spaces/ehristoforu/Iro/app.py b/spaces/ehristoforu/Iro/app.py deleted file mode 100644 index 1e8746048aec810850e0991cb7496ccabf09f2a1..0000000000000000000000000000000000000000 --- a/spaces/ehristoforu/Iro/app.py +++ /dev/null @@ -1,9 +0,0 @@ -import gradio as gr - -with gr.Blocks() as demo: - with gr.Tab("Русский ➡️ Английский"): - gr.load("facebook/wmt19-ru-en", src="models") - with gr.Tab("Английский ➡️ Русский"): - gr.load("facebook/wmt19-en-ru", src="models") - -demo.launch() \ No newline at end of file diff --git a/spaces/enesbol/case_dif/inference.py b/spaces/enesbol/case_dif/inference.py deleted file mode 100644 index 3fa790032e73d474dea3e6d1ba65cb3c741edb5e..0000000000000000000000000000000000000000 --- a/spaces/enesbol/case_dif/inference.py +++ /dev/null @@ -1,89 +0,0 @@ -""" -author: Min Seok Lee and Wooseok Shin -""" -import os -import cv2 -import time -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from torchvision.transforms import transforms -from tqdm import tqdm -from dataloader import get_test_augmentation, get_loader -from model.TRACER import TRACER -from util.utils import load_pretrained - - -class Inference(): - def __init__(self, args, save_path): - super(Inference, self).__init__() - self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - self.test_transform = get_test_augmentation(img_size=args.img_size) - self.args = args - self.save_path = save_path - - # Network - self.model = TRACER(args).to(self.device) - if args.multi_gpu: - self.model = nn.DataParallel(self.model).to(self.device) - - path = load_pretrained(f'TE-{args.arch}') - self.model.load_state_dict(path) - print('###### pre-trained Model restored #####') - - te_img_folder = os.path.join(args.data_path, args.dataset) - te_gt_folder = None - - self.test_loader = get_loader(te_img_folder, te_gt_folder, edge_folder=None, phase='test', - batch_size=args.batch_size, shuffle=False, - num_workers=args.num_workers, transform=self.test_transform) - - if args.save_map is not None: - os.makedirs(os.path.join('mask', self.args.dataset), exist_ok=True) - os.makedirs(os.path.join('object', self.args.dataset), exist_ok=True) - - def test(self): - self.model.eval() - t = time.time() - - with torch.no_grad(): - for i, (images, original_size, image_name) in enumerate(tqdm(self.test_loader)): - images = torch.tensor(images, device=self.device, dtype=torch.float32) - - outputs, edge_mask, ds_map = self.model(images) - H, W = original_size - - for i in range(images.size(0)): - h, w = H[i].item(), W[i].item() - output = F.interpolate(outputs[i].unsqueeze(0), size=(h, w), mode='bilinear') - - # Save prediction map - if self.args.save_map is not None: - output = (output.squeeze().detach().cpu().numpy() * 255.0).astype(np.uint8) - - salient_object = self.post_processing(images[i], output, h, w) - cv2.imwrite(os.path.join('mask', self.args.dataset, image_name[i] + '.png'), output) - cv2.imwrite(os.path.join('object', self.args.dataset, image_name[i] + '.png'), salient_object) - - print(f'time: {time.time() - t:.3f}s') - - def post_processing(self, original_image, output_image, height, width, threshold=200): - invTrans = 
transforms.Compose([transforms.Normalize(mean=[0., 0., 0.], - std=[1 / 0.229, 1 / 0.224, 1 / 0.225]), - transforms.Normalize(mean=[-0.485, -0.456, -0.406], - std=[1., 1., 1.]), - ]) - original_image = invTrans(original_image) - - original_image = F.interpolate(original_image.unsqueeze(0), size=(height, width), mode='bilinear') - original_image = (original_image.squeeze().permute(1, 2, 0).detach().cpu().numpy() * 255.0).astype(np.uint8) - - rgba_image = cv2.cvtColor(original_image, cv2.COLOR_BGR2BGRA) - output_rbga_image = cv2.cvtColor(output_image, cv2.COLOR_BGR2BGRA) - - output_rbga_image[:, :, 3] = output_image # Extract edges - edge_y, edge_x, _ = np.where(output_rbga_image <= threshold) # Edge coordinates - - rgba_image[edge_y, edge_x, 3] = 0 - return cv2.cvtColor(rgba_image, cv2.COLOR_RGBA2BGRA) diff --git a/spaces/enzostvs/stable-diffusion-tpu/components/main/index.tsx b/spaces/enzostvs/stable-diffusion-tpu/components/main/index.tsx deleted file mode 100644 index 7f7367130d3669d6d6361ba4df7b87985705ac46..0000000000000000000000000000000000000000 --- a/spaces/enzostvs/stable-diffusion-tpu/components/main/index.tsx +++ /dev/null @@ -1,122 +0,0 @@ -"use client"; - -import { useState } from "react"; -import { HiUserGroup, HiHeart, HiAdjustmentsHorizontal } from "react-icons/hi2"; -import Link from "next/link"; -import Image from "next/image"; -import classNames from "classnames"; -import { createBreakpoint } from "react-use"; - -import { InputGeneration } from "@/components/input-generation"; -import { Button } from "@/components/button"; -import { useUser } from "@/utils/useUser"; - -import { useInputGeneration } from "./hooks/useInputGeneration"; -import { Collections } from "./collections"; -import { Settings } from "./settings"; - -const categories = [ - { - key: "community", - label: "Community", - icon: , - }, - { - key: "my-own", - label: "My generations", - isLogged: true, - icon: , - }, -]; - -const useBreakpoint = createBreakpoint({ XL: 1280, L: 1024, S: 768, XS: 640 }); - -export const Main = () => { - const { openWindowLogin, user } = useUser(); - const breakpoint = useBreakpoint(); - const { list_styles, style, setStyle, loading } = useInputGeneration(); - const [category, setCategory] = useState("community"); - const [advancedSettings, setAdvancedSettings] = useState(false); - - return ( -
    - {categories.map(({ key, label, icon, isLogged }) => - isLogged && !user ? ( - Sign in with Hugging Face - ) : ( - - ) - )} -
    - {user?.sub ? ( - <> - Logged as - - @{user?.preferred_username} - {user?.preferred_username} - - - ) : ( - "to save your generations in your own gallery" - )} -
    setAdvancedSettings(!advancedSettings)} - > - - Advanced settings -
    - ); -}; diff --git a/spaces/evi0mo/vits-fastapi-server/api.py b/spaces/evi0mo/vits-fastapi-server/api.py deleted file mode 100644 index 874605a58cd832ae7c0b0e0b377b4a34947f2b03..0000000000000000000000000000000000000000 --- a/spaces/evi0mo/vits-fastapi-server/api.py +++ /dev/null @@ -1,205 +0,0 @@ -import json -import re -import sys -import time -from os.path import abspath, dirname, join - -import torch -import numpy as np -from loguru import logger -from torch import no_grad, LongTensor - -import commons -import utils -from models_infer import SynthesizerTrn -from text import text_to_sequence - -sys.path.append("..") -from base_utils import ndarray_to_bytes, playsound_via_pygame - -device = "cuda:0" if torch.cuda.is_available() else "cpu" - -# 语言类型标准 -language_marks = { - "Japanese": "", - "jp": "[JA]", - "zh": "[ZH]", - "en": "[EN]", - "mix": "", -} - -# 文件配置 -dir_path = abspath(dirname(__file__)) -config = join(dir_path, "configs/uma_trilingual.json") -model = join(dir_path, "pth/uma_trilingual.pth") - -# 说话人-id映射 -with open(config, "r") as f: - data = f.read() - json_data = json.loads(data) -speakers_map = json_data["speakers"] - -# 加载模型 -logger.info(f"loading VITS-fast-fine-tuning model") -hps = utils.get_hparams_from_file(config) -net_g = SynthesizerTrn( - len(hps.symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model, -).to(device) -_ = net_g.eval() -_ = utils.load_checkpoint(model, net_g, None) -logger.info(f"load VITS-fast-fine-tuning model success") - - -def get_text(text, hps, is_symbol): - text_norm = text_to_sequence( - text, hps.symbols, [] if is_symbol else hps.data.text_cleaners - ) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = LongTensor(text_norm) - return text_norm - - -class ParseText(object): - def __init__(self, mark, speaker_id, speed, text) -> None: - self.mark = mark - self.speaker_id = int(speaker_id) - self.speed = float(speed) - self.text = text - - # 新实例自动格式化 - @classmethod - def new(cls, text): - mark = cls._mark(text) - speaker_id = cls._speaker_id(text) - speed = cls._speed(text) - text = cls._parse(text) - return cls(mark, speaker_id, speed, text) - - # 清除所有标记 - @staticmethod - def _parse(text): - text = re.sub(r"\<(.*?)\>", "", text) - text = text.replace("dz", "") - return text - - # 语音标记 - @staticmethod - def _mark(text): - try: - language = re.findall(r"<[a-zA-Z]+>", text)[0].strip("<").strip(">") - mark = language_marks[language] - except (KeyError, IndexError): - mark = "[ZH]" - return mark - - # 说话人 - @staticmethod - def _speaker_id(text): - if "dz" in text: - return 149 - try: - speaker = re.findall(r"<[\u4e00-\u9fa5]+>", text)[0].strip("<").strip(">") - speaker_id = speakers_map[speaker] - except IndexError: - speaker_id = 85 # 派蒙 - # speaker_id = 133 # 荧 - # speaker_id = 148 # 塔菲 - # speaker_id = 149 # 丁真 - return speaker_id - - # 语速 - @staticmethod - def _speed(text): - try: - speed = re.findall(r"<\d\.\d>", text)[0].strip("<").strip(">") - except IndexError: - speed = 1.0 - return speed - - -def load_fast_vits(text: str) -> bytes: - audio = None - # 预处理 - parse = ParseText.new(text) - mark_text = f"{parse.mark}{parse.text}{parse.mark}" - stn_tst = get_text(mark_text, hps, False) - with no_grad(): - x_tst = stn_tst.unsqueeze(0).to(device) - x_tst_lengths = LongTensor([stn_tst.size(0)]).to(device) - sid = LongTensor([parse.speaker_id]).to(device) - t1 = time.time() - audio = ( - net_g.infer( - x_tst, - x_tst_lengths, 
- sid=sid, - noise_scale=0.667, - noise_scale_w=0.8, - length_scale=1.2 / parse.speed, - )[0][0, 0] - .data.cpu() - .float() - .numpy() - ) # 默认1.0/speed - logger.info(f"推理时间:{time.time() - t1}s") - del stn_tst, x_tst, x_tst_lengths, sid - return audio - # wav_bytes = ndarray_to_bytes(audio) - # return wav_bytes - - -def long_text_infer(text): - logger.info(f"text len:{len(text)}, spliting...") - # 预处理 - parse = ParseText.new(text) - # 分割 - split_text = re.split("[:,。!,.!]", parse.text) - # 分段合成 - ndarray_list = [] - for st in split_text: - logger.debug(st) - mark_st = f"{parse.mark}{st}{parse.mark}" - stn_tst = get_text(mark_st, hps, False) - with no_grad(): - x_tst = stn_tst.unsqueeze(0).to(device) - x_tst_lengths = LongTensor([stn_tst.size(0)]).to(device) - sid = LongTensor([parse.speaker_id]).to(device) - t1 = time.time() - audio = ( - net_g.infer( - x_tst, - x_tst_lengths, - sid=sid, - noise_scale=0.667, - noise_scale_w=0.8, - length_scale=1.2 / parse.speed, - )[0][0, 0] - .data.cpu() - .float() - .numpy() - ) # 默认1.0/speed - logger.info(f"推理时间:{time.time() - t1}s") - del stn_tst, x_tst, x_tst_lengths, sid - ndarray_list.append(audio) - - merge_audio = np.hstack(ndarray_list) - return merge_audio - # wav_bytes = ndarray_to_bytes(merge_audio) - # return wav_bytes - - -if __name__ == "__main__": - text = "<1.1><丁真>短说:究极无敌超级长文本测试,今日多云天晴温度二十度。" - # text = "<申鹤><0.6>good...morning" - # text = "<刻晴><0.8>中-午-好!" - # text = "<派蒙><1.2>こん-ば-んは" - audio = load_fast_vits(text) - playsound_via_pygame(audio) - - # audio = long_text_infer(text) - # playsound_via_pygame(audio) diff --git a/spaces/fabiod20/italian-legal-ner/app.py b/spaces/fabiod20/italian-legal-ner/app.py deleted file mode 100644 index 2bbedd2e5be10542f1dc090c604ae37db209c1e2..0000000000000000000000000000000000000000 --- a/spaces/fabiod20/italian-legal-ner/app.py +++ /dev/null @@ -1,143 +0,0 @@ -import os -os.system("python -m spacy download it_core_news_sm") - -import gradio as gr - -from transformers import AutoTokenizer, PreTrainedTokenizerFast -from transformers import AutoModelForTokenClassification -from transformers import pipeline - -import spacy -from spacy import displacy -from spacy.tokens import Span - -background_colors_entity_tag = { - "RIC": "#ff5e5e", - "RCR": "#ff9999", - "CTR": "#ffd699", - "AVV": "#80c5c5", - "CNS": "#ff9500", - "PMI": "#0ea5e9", - "DOM": "#c3a1c9", - "CDA": "#84b351", - "SNT": "#ffff5e", -} - -css = { -'entity_tag': 'color:#000;background: #xxxxxx; font-size: 0.8em; font-weight: bold; line-height: 2.5; border-radius: 0.35em; text-transform: uppercase; vertical-align: middle; margin-left: 0.5em;' -} -entity_list = "Named Entities: {entity_list}" - -examples = [ - """la seguente SENTENZA sul ricorso 24817-2015 proposto da: ANDREA FORMISANO, elettivamente domiciliato in ROMA VIA S. TOMMASO D'AQUINO 7, presso lo studio dell'avvocato CARLO BORELLO, che lo rappresenta e difende giusta delega in calce; - ricorrente - contro SOGET SPA, CAMERA DI COMMERCIO DI PESCARA; - intimati - avverso la sentenza n. 169/2012 della COMM.TRIB.REG.SEZ.DIST. di PESCARA, depositata il 13/03/2012; udita la relazione della causa svolta nella pubblica udienza del 04/04/2018 dal Consigliere Dott. MILENA BALSAMO; udito il P.M. in persona del Sostituto Procuratore Generale Dott. GIOVANNI GIACALONE che ha concluso per l'inammissibilità in subordine rigetto del ricorso.""", - """la seguente SENTENZA sul ricorso 17668-2016 proposto da: C.B.H. 
CITTA DI BARI HOSPITAL S.P.A., in persona del legale rappresentante pro tempore, elettivamente domiciliata in ROMA, LUNGOTEVERE DEI MELLINI 10, presso lo studio dell'avvocato CRISTIANO MARINESE, rappresentata e difesa dagli avvocati GIUSEPPE LUIGI 2022 POLITO, FRANCESCO ANTONUCCI; 51 - ricorrente - contro I.N.P.S. - ISTITUTO NAZIONALE PREVIDENZA SOCIALE, in persona del legale rappresentante pro tempore, elettivamente domiciliato in ROMA, VIA CESARE BECCARIA 29, presso l'Avvocatura Centrale dell'Istituto, rappresentato e difeso dagli avvocati ANTONINO SGROI, CARLA D'ALOISIO, ESTER ADA SCIPLINO, EMANUELE DE ROSE, LELIO MARITATO, GIUSEPPE MATANO; - controricorrente - nonchè contro EQUITALIA SERVIZI DI RISCOSIONE S.P.A. già EQUITALIA SUD S.P.A. agente della riscossione della provincia di Bari; - intimata - avverso la sentenza n. 2696/2015 della CORTE D'APPELLO di BARI, depositata il 13/01/2016 R.G.N. 1439/2013; udita la relazione della causa svolta nella pubblica udienza del 12/01/2022 dal Consigliere Dott. DANIELA CALAFIORE; udito il P.M. in persona del Sostituto Procuratore Generale Dott. STEFANO VISONA' che ha concluso per il rigetto del ricorso; udito l'avvocato ANTONINO SGROI. R.g. n. 17668/2016""", - """4. SENTENZA sul ricorso 4005-2012 proposto da: BANCA NAZIONALE DEL LAVORO S.P.A. C.E. 09339391006, in persona del legale rappresentante pro tempore, elettivamente domiciliata in ROMA, VIA PO 25/B, presso lo studio degli avvocati ROBERTO PESSI, FRANCESCO GIAMMARIA, che la rappresentano e difendono, giusta procura speciale notarile in atti; 2015 - ricorrente - 4680 contro CAMPAGNOLI ALESSANDRO MARIA C.F. CMPLSN59L29G388P; 4 - intimato - Nonché da: CAMPAGNOLI ALESSANDRO MARIA C.E. CMPLSN59L29G388P, domiciliato in ROMA PIAZZA CAVOUR, presso LA CANCELLERIA DELLA CORTE SUPREMA DI CASSAZIONE, rappresentato e difeso dall'avvocato FABRIZIA MAURICI, giusta procura speciale notarile in atti; - controricorrente e ricorrente incidentale - contro BANCA NAZIONALE DEL LAVORO S.P.A. C.E. 09339391006, in persona del legale rappresentante pro tempore, elettivamente domiciliata in ROMA, VIA PO 25/B, presso lo studio degli avvocati ROBERTO FESSI, FRANCESCO GIAMMARIA, che la rappresentano e difendono, giusta procura speciale notarile in atti; - controricorrente al ricorso incidentale - avverso la sentenza n. 1091/2011 della CORTE D'APPELLO di MILANO, depositata il 28/10/2011 R.G.N. 537/2008; udita la relazione della causa svolta nella pubblica udienza del 02/12/2015 dal Consigliere Dott. UMBERTO BERRINO; udito l'Avvocato SERRANI TIZIANA per delega verbale FESSI ROBERTO; udito l'Avvocato MAURICI FABRIZIA (per procura speciale notarile); udito il P.M. in persona del Sostituto Procuratore Generale Dott. RITA SANLORENZO che ha concluso per il rigetto del ricorso principale e del ricorso incidentale.""", - #"""SENTENZA sul ricorso 11948-2014 proposto da: VENTURA VINCENZO C.F. VNTVCN47T08A841S, già elettivamente domiciliato in ROMA, VIA VALLISNERI 11, presso lo studio dell'avvocato PAOLO PACIFICI, che lo rappresenta e difende unitamente all'avvocato DIEGO TOSI, giusta delega in atti e da ultimo domiciliato 2015 presso LA CANCELLERIA DELLA CORTE SUPREMA DI CASSAZIONE; 4525 - ricorrente - contro k RAI RADIOTELEVISIONE ITALIANA S.P.A. C.F. 06382641006, in persona del legale rappresentante pro tempore, elettivamente domiciliata in ROMA, VIA P.L. 
DA PALESTRINA 47, presso lo studio dell'avvocato RINALDO GEREMIA, rappresentata e difesa dall'avvocato NATALIA FERRO, giusta delega in atti; - controri corrente nonchè contro I.N.A.I.L - ISTITUTO NAZIONALE PER L'ASSICURAZIONE CONTRO GLI INFORTUNI SUL LAVORO C.F. 01165400589, in persona del legale rappresentante pro tempore, elettivamente domiciliato in ROMA, VIA IV NOVEMBRE 144, presso lo studio degli avvocati LUCIANA ROMEO, LETIZIA CRIPPA, che lo rappresentano e difendono giusta delega in atti; - controricorrente - avverso la sentenza n. 1423/2013 della CORTE D'APPELLO di TORINO, depositata il 03/02/2014 R.G.N. 275/2013; udita la relazione della causa svolta nella pubblica udienza del 25/11/2015 dal Consigliere Dott. NICOLA DE MARINIS; AVV, udito l'Avvocato OTTOLINI TERESA per delega', ROMEO LUCIANA; udito l'Avvocato GEREMIA RINALDO per delega'-eFERRO NATALIA; udito il P.M. in persona del Sostituto Procuratore Generale Dott. RENATO FINOCCHI GHERSI che ha concluso per ESTINZIONE PER RINUNCIA. ... , z , I ? F""", - -] - -model_name = "fabiod20/italian-legal-ner" -model = AutoModelForTokenClassification.from_pretrained(model_name, use_auth_token=os.environ['token']) -tokenizer = AutoTokenizer.from_pretrained(model_name, use_auth_token=os.environ['token']) - -ner_pipe = pipeline("ner", model=model, tokenizer=tokenizer) - -nlp = spacy.load("it_core_news_sm") -nlp.disable_pipes("ner") - -def ner(input_text): - entities = ner_pipe(input_text, aggregation_strategy="first") - - doc = nlp(input_text) - - potential_entities = [] - - for entity in entities: - start = entity["start"] - end = entity["end"] - label = entity["entity_group"] - - ent = doc.char_span(start, end, label=label) - if ent != None: - doc.ents += (ent,) - else: - potential_entities.append(entity) - - potential_entities.append({"entity_group": "NONE", "start": -1, "end": -1}) - - start = potential_entities[0]["start"] - end = potential_entities[0]["end"] - label = potential_entities[0]["entity_group"] - - for item in potential_entities: - if item["entity_group"] == label and item["start"] == end: - end = item["end"] - continue - else: - if item["start"] != start: - ent = doc.char_span(start, end, label=label) - doc.ents += (ent,) - - start = item["start"] - end = item["end"] - label = item["entity_group"] - - colors = { - "RIC": "#ff5e5e", - "RCR": "#ff9999", - "CTR": "#ffd699", - "DOM": "#c3a1c9", - "AVV": "#80c5c5", - "CNS": "#ff9500", - "PMI": "#0ea5e9", - "CDA": "#84b351", - "SNT": "#ffff5e", - } - options = {"ents": colors.keys(), "colors": colors} - - output = displacy.render(doc, style="ent", options=options) - return output - -interface = gr.Interface( - title=title, - description=description, - article=article, - allow_screenshot=False, - allow_flagging=False, - fn=ner, - inputs=gr.inputs.Textbox(placeholder="Insert an Italian judgments (you can click on an example below)", lines=10), - outputs=gr.outputs.HTML(), - examples=examples - ) - -interface.launch() \ No newline at end of file diff --git a/spaces/facebook/StyleNeRF/training/dataset.py b/spaces/facebook/StyleNeRF/training/dataset.py deleted file mode 100644 index 0df9031f874cb4ee5ba1a5c6ea016991bbbbd749..0000000000000000000000000000000000000000 --- a/spaces/facebook/StyleNeRF/training/dataset.py +++ /dev/null @@ -1,275 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. 
-# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -from curses import raw -import os -from urllib import response -import numpy as np -import zipfile -import PIL.Image -import cv2 -import json -import torch -import dnnlib - -try: - import pyspng -except ImportError: - pyspng = None - -#---------------------------------------------------------------------------- - -class Dataset(torch.utils.data.Dataset): - def __init__(self, - name, # Name of the dataset. - raw_shape, # Shape of the raw image data (NCHW). - max_size = None, # Artificially limit the size of the dataset. None = no limit. Applied before xflip. - use_labels = False, # Enable conditioning labels? False = label dimension is zero. - xflip = False, # Artificially double the size of the dataset via x-flips. Applied after max_size. - random_seed = 0, # Random seed to use when applying max_size. - ): - self._name = name - self._raw_shape = list(raw_shape) - self._use_labels = use_labels - self._raw_labels = None - self._label_shape = None - - # Apply max_size. - self._raw_idx = np.arange(self._raw_shape[0], dtype=np.int64) - if (max_size is not None) and (self._raw_idx.size > max_size): - np.random.RandomState(random_seed).shuffle(self._raw_idx) - self._raw_idx = np.sort(self._raw_idx[:max_size]) - - # Apply xflip. - self.xflip = xflip - self._xflip = np.zeros(self._raw_idx.size, dtype=np.uint8) - if xflip: - self._raw_idx = np.tile(self._raw_idx, 2) - self._xflip = np.concatenate([self._xflip, np.ones_like(self._xflip)]) - - def _get_raw_labels(self): - if self._raw_labels is None: - self._raw_labels = self._load_raw_labels() if self._use_labels else None - if self._raw_labels is None: - self._raw_labels = np.zeros([self._raw_shape[0], 0], dtype=np.float32) - assert isinstance(self._raw_labels, np.ndarray) - assert self._raw_labels.shape[0] == self._raw_shape[0] - assert self._raw_labels.dtype in [np.float32, np.int64] - if self._raw_labels.dtype == np.int64: - assert self._raw_labels.ndim == 1 - assert np.all(self._raw_labels >= 0) - return self._raw_labels - - def close(self): # to be overridden by subclass - pass - - def _load_raw_image(self, raw_idx): # to be overridden by subclass - raise NotImplementedError - - def _load_raw_labels(self): # to be overridden by subclass - raise NotImplementedError - - def __getstate__(self): - return dict(self.__dict__, _raw_labels=None) - - def __del__(self): - try: - self.close() - except: - pass - - def __len__(self): - return self._raw_idx.size - - def __getitem__(self, idx): - image = self._load_raw_image(self._raw_idx[idx]) - assert isinstance(image, np.ndarray) - assert list(image.shape) == self.image_shape - assert image.dtype == np.uint8 - if self._xflip[idx]: - assert image.ndim == 3 # CHW - image = image[:, :, ::-1] - return image.copy(), self.get_label(idx), idx - - def get_label(self, idx): - label = self._get_raw_labels()[self._raw_idx[idx]] - if label.dtype == np.int64: - onehot = np.zeros(self.label_shape, dtype=np.float32) - onehot[label] = 1 - label = onehot - return label.copy() - - def get_details(self, idx): - d = dnnlib.EasyDict() - d.raw_idx = int(self._raw_idx[idx]) - d.xflip = (int(self._xflip[idx]) != 0) - d.raw_label = 
self._get_raw_labels()[d.raw_idx].copy() - return d - - @property - def name(self): - return self._name - - @property - def image_shape(self): - return list(self._raw_shape[1:]) - - @property - def num_channels(self): - assert len(self.image_shape) == 3 # CHW - return self.image_shape[0] - - @property - def resolution(self): - assert len(self.image_shape) == 3 # CHW - assert self.image_shape[1] == self.image_shape[2] - return self.image_shape[1] - - @property - def label_shape(self): - if self._label_shape is None: - raw_labels = self._get_raw_labels() - if raw_labels.dtype == np.int64: - self._label_shape = [int(np.max(raw_labels)) + 1] - else: - self._label_shape = raw_labels.shape[1:] - return list(self._label_shape) - - @property - def label_dim(self): - assert len(self.label_shape) == 1 - return self.label_shape[0] - - @property - def has_labels(self): - return any(x != 0 for x in self.label_shape) - - @property - def has_onehot_labels(self): - return self._get_raw_labels().dtype == np.int64 - -#---------------------------------------------------------------------------- - -class ImageFolderDataset(Dataset): - def __init__(self, - path, # Path to directory or zip. - resolution = None, # Ensure specific resolution, None = highest available. - **super_kwargs, # Additional arguments for the Dataset base class. - ): - self._path = path - self._zipfile = None - - if os.path.isdir(self._path): - self._type = 'dir' - self._all_fnames = {os.path.relpath(os.path.join(root, fname), start=self._path) for root, _dirs, files in os.walk(self._path) for fname in files} - elif self._file_ext(self._path) == '.zip': - self._type = 'zip' - self._all_fnames = set(self._get_zipfile().namelist()) - else: - raise IOError('Path must point to a directory or zip') - - PIL.Image.init() - self._image_fnames = sorted(fname for fname in self._all_fnames if self._file_ext(fname) in PIL.Image.EXTENSION) - if len(self._image_fnames) == 0: - raise IOError('No image files found in the specified path') - - name = os.path.splitext(os.path.basename(self._path))[0] - raw_shape = [len(self._image_fnames)] + list(self._load_raw_image(0).shape) - if resolution is not None: - raw_shape[2] = raw_shape[3] = resolution - # if resolution is not None and (raw_shape[2] != resolution or raw_shape[3] != resolution): - # raise IOError('Image files do not match the specified resolution') - super().__init__(name=name, raw_shape=raw_shape, **super_kwargs) - - @staticmethod - def _file_ext(fname): - return os.path.splitext(fname)[1].lower() - - def _get_zipfile(self): - assert self._type == 'zip' - if self._zipfile is None: - self._zipfile = zipfile.ZipFile(self._path) - return self._zipfile - - def _open_file(self, fname): - if self._type == 'dir': - return open(os.path.join(self._path, fname), 'rb') - if self._type == 'zip': - return self._get_zipfile().open(fname, 'r') - return None - - def close(self): - try: - if self._zipfile is not None: - self._zipfile.close() - finally: - self._zipfile = None - - def __getstate__(self): - return dict(super().__getstate__(), _zipfile=None) - - def _load_raw_image(self, raw_idx): - fname = self._image_fnames[raw_idx] - with self._open_file(fname) as f: - if pyspng is not None and self._file_ext(fname) == '.png': - image = pyspng.load(f.read()) - else: - image = np.array(PIL.Image.open(f)) - if image.ndim == 2: - image = image[:, :, np.newaxis] # HW => HWC - if hasattr(self, '_raw_shape') and image.shape[-1] != self.resolution: # resize input image - image = cv2.resize(image, (self.resolution, 
self.resolution), interpolation=cv2.INTER_AREA) - image = image.transpose(2, 0, 1) # HWC => CHW - return image - - def _load_raw_labels(self): - fname = 'dataset.json' - if fname not in self._all_fnames: - return None - with self._open_file(fname) as f: - labels = json.load(f)['labels'] - if labels is None: - return None - labels = dict(labels) - labels = [labels[fname.replace('\\', '/')] for fname in self._image_fnames] - labels = np.array(labels) - labels = labels.astype({1: np.int64, 2: np.float32}[labels.ndim]) - return labels - - def get_dali_dataloader(self, batch_size, world_size, rank, gpu): # TODO - from nvidia.dali import pipeline_def, Pipeline - import nvidia.dali.fn as fn - import nvidia.dali.types as types - from nvidia.dali.plugin.pytorch import DALIGenericIterator - - @pipeline_def - def pipeline(): - jpegs, _ = fn.readers.file( - file_root=self._path, - files=list(self._all_fnames), - random_shuffle=True, - shard_id=rank, - num_shards=world_size, - name='reader') - images = fn.decoders.image(jpegs, device='mixed') - mirror = fn.random.coin_flip(probability=0.5) if self.xflip else False - images = fn.crop_mirror_normalize( - images.gpu(), output_layout="CHW", dtype=types.UINT8, mirror=mirror) - labels = np.zeros([1, 0], dtype=np.float32) - return images, labels - - dali_pipe = pipeline(batch_size=batch_size//world_size, num_threads=2, device_id=gpu) - dali_pipe.build() - training_set_iterator = DALIGenericIterator([dali_pipe], ['img', 'label']) - for data in training_set_iterator: - yield data[0]['img'], data[0]['label'] - -#---------------------------------------------------------------------------- - diff --git a/spaces/falterWliame/Face_Mask_Detection/Elit Egitim Seti Almanca.md b/spaces/falterWliame/Face_Mask_Detection/Elit Egitim Seti Almanca.md deleted file mode 100644 index 13e4207260ed956641feb5a0ec4140a574d41273..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Elit Egitim Seti Almanca.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Elit Egitim Seti Almanca


Download File: https://byltly.com/2uDdXT



    -
    -
    -
    -

    diff --git a/spaces/falterWliame/Face_Mask_Detection/Greendot Moneypak Activation Code Generator V1.3.md b/spaces/falterWliame/Face_Mask_Detection/Greendot Moneypak Activation Code Generator V1.3.md deleted file mode 100644 index ef81aeb1e5a021e929b5d86dacaa7ee7628b5ad1..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Greendot Moneypak Activation Code Generator V1.3.md +++ /dev/null @@ -1,6 +0,0 @@ -

    greendot moneypak activation code generator v1.3


    DOWNLOAD » https://urlca.com/2uDchz



    -
    -
    -

    diff --git a/spaces/fclong/summary/fengshen/examples/tcbert/README.md b/spaces/fclong/summary/fengshen/examples/tcbert/README.md deleted file mode 100644 index a6f6b38e2b9cc6978962927bb0e8568b46da28f0..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/examples/tcbert/README.md +++ /dev/null @@ -1,145 +0,0 @@ -[**中文**](./README.md) - -# TCBert -论文 《[TCBERT: A Technical Report for Chinese Topic Classification BERT](https://arxiv.org/abs/2211.11304)》源码 - -## Requirements - -安装 fengshen 框架 - -```shell -git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git -cd Fengshenbang-LM -pip install --editable . -``` - -## Quick Start - -你可以参考我们的 [example.py](./example.py) 脚本,只需要将处理好的 ```train_data```、```dev_data```、```test_data```、 ```prompt```、```prompt_label``` ,输入模型即可。 -```python -import argparse -from fengshen.pipelines.tcbert import TCBertPipelines -from pytorch_lightning import seed_everything - -total_parser = argparse.ArgumentParser("Topic Classification") -total_parser = TCBertPipelines.piplines_args(total_parser) -args = total_parser.parse_args() - -pretrained_model_path = 'IDEA-CCNL/Erlangshen-TCBert-110M-Classification-Chinese' -args.learning_rate = 2e-5 -args.max_length = 512 -args.max_epochs = 3 -args.batchsize = 1 -args.train = 'train' -args.default_root_dir = './' -# args.gpus = 1 #注意:目前使用CPU进行训练,取消注释会使用GPU,但需要配置相应GPU环境版本 -args.fixed_lablen = 2 #注意:可以设置固定标签长度,由于样本对应的标签长度可能不一致,建议选择合适的数值表示标签长度 - -train_data = [ - {"content": "凌云研发的国产两轮电动车怎么样,有什么惊喜?", "label": "科技",} - ] - -dev_data = [ - {"content": "我四千一个月,老婆一千五一个月,存款八万且有两小孩,是先买房还是先买车?","label": "汽车",} -] - -test_data = [ - {"content": "街头偶遇2018款长安CS35,颜值美炸!或售6万起,还买宝骏510?"} -] - -prompt = "下面是一则关于{}的新闻:" - -prompt_label = {"汽车":"汽车", "科技":"科技"} - -model = TCBertPipelines(args, model_path=pretrained_model_path, nlabels=len(prompt_label)) - -if args.train: - model.train(train_data, dev_data, prompt, prompt_label) -result = model.predict(test_data, prompt, prompt_label) -``` - - -## Pretrained Model -为了提高模型在话题分类上的效果,我们收集了大量话题分类数据进行基于`prompt`的预训练。我们已经将预训练模型开源到 ```HuggingFace``` 社区当中。 - -| 模型 | 地址 | -|:---------:|:--------------:| -| Erlangshen-TCBert-110M-Classification-Chinese | [https://huggingface.co/IDEA-CCNL/Erlangshen-TCBert-110M-Classification-Chinese](https://huggingface.co/IDEA-CCNL/Erlangshen-TCBert-110M-Classification-Chinese) | -| Erlangshen-TCBert-330M-Classification-Chinese | [https://huggingface.co/IDEA-CCNL/Erlangshen-TCBert-330M-Classification-Chinese](https://huggingface.co/IDEA-CCNL/Erlangshen-TCBert-330M-Classification-Chinese) | -| Erlangshen-TCBert-1.3B-Classification-Chinese | [https://huggingface.co/IDEA-CCNL/Erlangshen-TCBert-1.3B-Classification-Chinese](https://huggingface.co/IDEA-CCNL/Erlangshen-TCBert-1.3B-Classification-Chinese) | -| Erlangshen-TCBert-110M-Sentence-Embedding-Chinese | [https://huggingface.co/IDEA-CCNL/Erlangshen-TCBert-110M-Sentence-Embedding-Chinese](https://huggingface.co/IDEA-CCNL/Erlangshen-TCBert-110M-Sentence-Embedding-Chinese) | -| Erlangshen-TCBert-330M-Sentence-Embedding-Chinese | [https://huggingface.co/IDEA-CCNL/Erlangshen-TCBert-330M-Sentence-Embedding-Chinese](https://huggingface.co/IDEA-CCNL/Erlangshen-TCBert-330M-Sentence-Embedding-Chinese) | -| Erlangshen-TCBert-1.3B-Sentence-Embedding-Chinese | [https://huggingface.co/IDEA-CCNL/Erlangshen-TCBert-1.3B-Sentence-Embedding-Chinese](https://huggingface.co/IDEA-CCNL/Erlangshen-TCBert-1.3B-Sentence-Embedding-Chinese) | - -## Experiments - -对每个不同的数据集,选择合适的模板```Prompt``` -Dataset | Prompt 
-|------------|------------| -| TNEWS | 下面是一则关于{}的新闻: | -| CSLDCP | 这一句描述{}的内容如下: | -| IFLYTEK | 这一句描述{}的内容如下: | - -使用上述```Prompt```的实验结果如下: -| Model | TNEWS | CLSDCP | IFLYTEK | -|------------|------------|----------|-----------| -| Macbert-base | 55.02 | 57.37 | 51.34 | -| Macbert-large | 55.77 | 58.99 | 50.31 | -| Erlangshen-1.3B | 57.36 | 62.35 | 53.23 | -| TCBert-base-110M-Classification-Chinese | 55.57 | 58.60 | 49.63 | -| TCBert-large-330M-Classification-Chinese | 56.17 | 61.23 | 51.34 | -| TCBert-1.3B-Classification-Chinese | 57.41 | 65.10 | 53.75 | -| TCBert-base-110M-Sentence-Embedding-Chinese | 54.68 | 59.78 | 49.40 | -| TCBert-large-330M-Sentence-Embedding-Chinese | 55.32 | 62.07 | 51.11 | -| TCBert-1.3B-Sentence-Embedding-Chinese | 57.46 | 65.04 | 53.06 | - -## Dataset - -需要您提供:```训练集```、```验证集```、```测试集```、```Prompt```、```标签映射```五个数据,对应的数据格式如下: - -#### 训练数据 示例 -必须包含```content```和```label```字段 -```json -[{ - "content": "街头偶遇2018款长安CS35,颜值美炸!或售6万起,还买宝骏510?", - "label": "汽车" -}] -``` - -#### 验证数据 示例 -必须包含```content```和```label```字段 -```json -[{ - "content": "宁夏邀深圳市民共赴“寻找穿越”之旅", - "label": "旅游" -}] -``` - -#### 测试数据 示例 -必须包含```content```字段 -```json -[{ - "content": "买涡轮增压还是自然吸气车?今天终于有答案了!" -}] -``` -#### Prompt 示例 -可以选择任一模版,模版的选择会对模型效果产生影响,其中必须包含```{}```,作为标签占位符 -```json -"下面是一则关于{}的新闻:" -``` - -#### 标签映射 示例 -可以将真实标签映射为更合适Prompt的标签,支持映射后的标签长度不一致 -```json -{ - "汽车": "汽车", - "旅游": "旅游", - "经济生活": "经济生活", - "房产新闻": "房产" -} -``` - -## License - -[Apache License 2.0](https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/main/LICENSE) - diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Cara Download Marvel Contest of Champions Game Pertarungan Superhero Terbaik.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Cara Download Marvel Contest of Champions Game Pertarungan Superhero Terbaik.md deleted file mode 100644 index 2e4cc0c0e5a9b84ffa2a58935fe65863416a234a..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Cara Download Marvel Contest of Champions Game Pertarungan Superhero Terbaik.md +++ /dev/null @@ -1,153 +0,0 @@ - -

How to Download Marvel Contest of Champions

    -

Marvel Contest of Champions is one of the best fighting games you can play on your smartphone. It offers exciting, spectacular combat with your favorite characters from the Marvel Universe. Want to know how to download Marvel Contest of Champions on your device? Read this article to the end to find out how.

    -

how to download marvel contest of champions


    Download Zip ->>> https://gohhs.com/2uPpqQ



    -

    What is Marvel Contest of Champions?

    -

    A brief introduction to the game and its features

    -

Marvel Contest of Champions is a fighting game released by Kabam Games, Inc. in 2014. It features more than 200 heroes and villains from Marvel Comics that you can collect, upgrade, and take into battle. You can choose characters such as Spider-Man, Iron Man, Wolverine, Captain America, Deadpool, Thanos, and many more.

    -

The game has an engaging, challenging story mode in which you face powerful enemies such as Kang the Conqueror, Thanos, and The Collector. You can also play with your friends in alliance mode, where you can collaborate, strategize, and compete with other alliances from around the world. In addition, the game offers arena, incursions, and battlegrounds modes, plus special events you can join to earn attractive rewards.

    -

    The benefits of playing Marvel Contest of Champions

    -

Marvel Contest of Champions is not just an ordinary fighting game. Playing it also brings a number of benefits, such as:

    -


    -
      -
• Sharpening your critical and strategic thinking. You have to pick the right team, take advantage of synergy bonuses, and manage your attack and defense intelligently to beat your opponents.
• Training your reflexes and hand-eye coordination. You have to master the responsive, intuitive controls to perform basic moves, special attacks, blocks, dodges, and parries quickly and precisely.
• Deepening your knowledge and appreciation of the Marvel Universe. You get to see Marvel characters from a different angle, learn their backstories and relationships, and enjoy high-quality graphics and sound.
• Having fun and socializing with other players. You can play with your friends or meet new players from around the world, and share tips, advice, experiences, and support through the built-in chat and forum features.
    -

    How to download Marvel Contest of Champions on Android devices?

    -

    The steps to download the game from Google Play Store

    -

To play Marvel Contest of Champions on your Android device, you need to download the game from the Google Play Store. Here are the steps:

    -
      -
1. Open the Google Play Store app on your device.
2. Type "Marvel Contest of Champions" into the search box and tap the search button.
3. Select Marvel Contest of Champions from the search results and tap the install button.
4. Wait for the download and installation to finish. Make sure you have a stable internet connection and enough free storage on your device.
5. Once the installation is complete, tap the open button to start the game.
    -

    The requirements and permissions for installing the game

    -

Before you download and play Marvel Contest of Champions on your Android device, make sure you meet the following requirements and grant these permissions (a quick adb-based check for some of them is sketched after the list):

    -
      -
• Your device must run Android 6.0 (Marshmallow) or higher.
• Your device must have at least 1 GB of RAM and at least 2 GB of free storage.
• Your device must support OpenGL ES 3.0 or higher.
• You must grant the game access to your device's camera, microphone, location, media, and contacts the first time you open it.
• You must be connected to the internet while playing, either over Wi-Fi or mobile data.
    -
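If you have a computer handy, you can check two of these requirements over USB with adb (Android Debug Bridge) before installing. The sketch below is only an illustration: it assumes adb is installed and USB debugging is enabled on your device, and the thresholds simply mirror the list above.

```python
import subprocess

def adb(*args: str) -> str:
    """Run an adb command and return its trimmed stdout."""
    result = subprocess.run(["adb", *args], capture_output=True, text=True, check=True)
    return result.stdout.strip()

# Read the Android release (e.g. "13") and API level (e.g. "33") from system properties.
release = adb("shell", "getprop", "ro.build.version.release")
api_level = int(adb("shell", "getprop", "ro.build.version.sdk"))
print(f"Android {release} (API {api_level})")

# Android 6.0 Marshmallow corresponds to API level 23.
print("Android version OK" if api_level >= 23 else "Needs Android 6.0 or higher")

# Show free space on the data partition; compare it against the 2 GB requirement yourself.
print(adb("shell", "df", "-h", "/data"))
```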

    How to download Marvel Contest of Champions on iOS devices?

    -

    The steps to download the game from App Store

    -

To play Marvel Contest of Champions on your iOS device, you need to download the game from the App Store. Here are the steps:

    -
      -
1. Open the App Store app on your device.
2. Type "Marvel Contest of Champions" into the search box and tap the search button.
3. Select Marvel Contest of Champions from the search results and tap the download button.
4. Enter your Apple ID password, or use Face ID or Touch ID if prompted.
5. Wait for the download and installation to finish. Make sure you have a stable internet connection and enough free storage on your device.
6. Once the installation is complete, tap the open button to start the game.
    -

    The requirements and permissions for installing the game

    -

Before you download and play Marvel Contest of Champions on your iOS device, make sure you meet the following requirements and grant these permissions:

    -
      -
• Your device must run iOS 10.0 or higher.
• Your device must be an iPhone 5S or newer, an iPad Air or newer, an iPad mini 2 or newer, or an iPod touch (6th generation) or newer.
• Your device must have at least 2 GB of free storage.
• You must grant the game access to your device's camera, microphone, location, media, and contacts the first time you open it.
• You must be connected to the internet while playing, either over Wi-Fi or mobile data.
    -

    How to play Marvel Contest of Champions?

    The basics of the gameplay and the controls

    -

Marvel Contest of Champions is a fighting game that is easy to learn but hard to master. You control your character by tapping and swiping the screen of your device. Here are the basic moves you can perform (a small sketch of this control mapping follows the list):

    -
      -
• Tap the right side of the screen for a light attack. Tapping several times in a row chains the hits into a combo.
• Swipe right on the screen for a heavy attack. It deals more damage but is slower and can be blocked by your opponent.
• Tap the left side of the screen to block. Blocking reduces the damage you take from your opponent's attacks.
• Swipe left on the screen to dodge. A dodge avoids the opponent's attack entirely, but the timing has to be right.
• Swipe right and hold to parry. A parry blocks the opponent's attack and staggers them, giving you a chance to counterattack.
• Tap the special-attack icon at the bottom of the screen to unleash a special attack. Special attacks are very powerful and unique to each character; you fill the special-attack meter by landing normal attacks or taking damage.
    -
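If it helps to see the whole control scheme at a glance, here is a tiny illustrative Python sketch that summarizes the mapping above as a plain dictionary. It is not code from the game; the gesture and region labels are invented for the example.

```python
# Illustrative only: a plain-Python summary of the touch controls described above.
CONTROLS = {
    ("tap", "right"): "light attack (tap repeatedly to combo)",
    ("swipe", "right"): "heavy attack (slower, can be blocked)",
    ("tap", "left"): "block (reduces incoming damage)",
    ("swipe", "left"): "dodge (avoids the attack entirely, needs timing)",
    ("swipe-hold", "right"): "parry (staggers the opponent, opens a counter)",
    ("tap", "special icon"): "special attack (spends the special meter)",
}

def describe(gesture: str, region: str) -> str:
    """Look up what a gesture on a given screen region does."""
    return CONTROLS.get((gesture, region), "no action")

if __name__ == "__main__":
    print(describe("swipe", "left"))  # dodge (avoids the attack entirely, needs timing)
    print(describe("tap", "right"))   # light attack (tap repeatedly to combo)
```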

    The tips and tricks to master the game and win battles

    -

Marvel Contest of Champions tests both your skill and your knowledge of Marvel characters. Here are some tips and tricks you can use to improve your game and win more battles:

    -
      -
• Pick a team that suits your play style. Each character has a different class, attributes, strengths, weaknesses, and synergy bonuses. Study the characters you own and build a balanced, effective team.
• Upgrade your characters regularly. Use items such as ISO-8, catalysts, gold, and signature stones to raise your characters' level, rank, tier, and signature ability. Stronger characters will help you take on tougher opponents.
• Use the right strategy for each opponent. Pay attention to your opponents' class, strengths, weaknesses, and attack patterns, adapt your moves to the situation, and make use of items such as potions, revives, boosts, and synergy teams to gain an edge.
• Join an active, communicative alliance. An alliance is a group of players who collaborate, strategize, and compete together; joining one gives you access to features such as alliance quests, alliance wars, alliance help, alliance chat, and the alliance store, along with valuable rewards.
• Be a fair and polite player. Marvel Contest of Champions is fun and entertaining, but also challenging and competitive. Respect other players, friends and opponents alike, follow the rules, don't cheat or abuse the game, and give the developers constructive, positive feedback.
    -

    Conclusion

    -

    A summary of the main points and a call to action

    -

Marvel Contest of Champions is a fighting game that every Marvel Comics fan should try. It offers exciting, spectacular gameplay with a diverse and appealing roster of Marvel characters. You can download the game for free on your Android or iOS device by following the steps explained above, and you can learn to play it quickly by following the tips and tricks we have shared. So what are you waiting for? Download and play Marvel Contest of Champions now and feel what it's like to become the champion of the Marvel contest. Have fun!

    -

    FAQs

    -

    Q1. Is Marvel Contest of Champions free to play?

    -

    A1. Yes, Marvel Contest of Champions is free to play. You can download and play the game without spending any money. However, the game also offers some optional in-app purchases that can enhance your gaming experience. You can buy items such as crystals, units, bundles, and subscriptions with real money. You can also disable the in-app purchases feature in your device settings if you want.

    -

    Q2. What are the best champions in Marvel Contest of Champions?

    -

    A2. There is no definitive answer to this question, as the best champions may vary depending on your preferences, play style, and game mode. However, some of the most popular and powerful champions in the game are Doctor Doom, Ghost, Corvus Glaive, Quake, Nick Fury, Captain America (Infinity War), Archangel, and Hyperion. You can also check the online tier lists and rankings to see the opinions of other players and experts.

    -

    Q3. How can I get more crystals and units in Marvel Contest of Champions?

    -

    A3. Crystals and units are two of the most valuable resources in Marvel Contest of Champions. You can use them to unlock new champions, upgrade your existing ones, and buy various items. There are several ways to get more crystals and units in the game, such as:

    -
      -
• Completing quests and events. You can earn different types of crystals and units by finishing the story mode, alliance quests, alliance wars, arena battles, incursions, battlegrounds, and special events.
• Claiming daily and weekly rewards. You can get free crystals and units by logging in to the game every day and every week.
• Opening free crystals. You can get free crystals every four hours and every 24 hours by tapping the crystal icon on the home screen.
• Joining an alliance. You can get alliance crystals and units by participating in alliance activities and helping your alliance members.
• Spending real money. You can buy crystals and units with real money by tapping the store icon on the home screen.
    -

    Q4. How can I join an alliance in Marvel Contest of Champions?

    -

    A4. Joining an alliance is one of the best ways to enjoy Marvel Contest of Champions. You can join an alliance by following these steps:

    -
      -
1. Tap the alliance icon on the home screen.
2. Tap the join or create alliance button.
3. Choose whether you want to join an existing alliance or create your own alliance.
4. If you want to join an existing alliance, you can browse the list of recommended alliances or search for a specific alliance by name or tag.
5. If you want to create your own alliance, you can choose a name, a tag, a description, a logo, and a language for your alliance.
6. Tap the join or create button to confirm your choice.
    -

    Q5. How can I contact the support team of Marvel Contest of Champions?

    -

    A5. If you have any questions, issues, or feedback regarding Marvel Contest of Champions, you can contact the support team by following these steps:

    -
      -
1. Tap the gear icon on the home screen to open the settings menu.
2. Tap the support button to open the support page.
3. Choose whether you want to visit the help center or submit a ticket.
4. If you want to visit the help center, you can browse the articles and FAQs that may answer your queries.
5. If you want to submit a ticket, you can fill out a form with your details and your message.
6. Tap the send button to submit your ticket.

    -
    -
    \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Real Football APK and Join Millions of Fans Worldwide.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Real Football APK and Join Millions of Fans Worldwide.md deleted file mode 100644 index 99a7733d88eb12e09e6134c2e9ca2e1c44844a38..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Real Football APK and Join Millions of Fans Worldwide.md +++ /dev/null @@ -1,127 +0,0 @@ -
    -

    Real Football Download APK: A Guide for Soccer Fans

    -

    If you are a soccer fan, you might have heard of Real Football, a mobile game developed and published by Gameloft. Real Football is a realistic and immersive soccer simulation game that lets you experience soccer both on and off the pitch. You can build your dream team, upgrade your facilities, challenge other players online, and enjoy stunning 3D graphics and animations. In this article, we will tell you everything you need to know about Real Football download apk, including its features, modes, tips, reviews, and FAQs.

    -

    What are the main features of Real Football game?

    -

    Real Football game has many features that make it one of the best soccer games on mobile devices. Here are some of them:

    -

    real football download apk


    Download Zip ✒ ✒ ✒ https://gohhs.com/2uPtMu



    -
      -
• 3D stadiums: You can play in realistic 3D stadiums where polished shadows, detailed textures, and spectators all come together to provide an exciting atmosphere.
• Multiple camera views: You can enjoy multiple camera views during cutscenes and set pieces for a richer broadcast and first-person sensation.
• Improved opponents and positioning: You can face smarter players who make for a more realistic and challenging experience.
• Dream team: You can build your dream team by recruiting star players through the lottery. You can also enhance your players' abilities by acquiring skill items through the lottery and matches.
• Team facilities: You can upgrade your team facilities including Stadiums, Hospitals, Physiotherapy Centers and a Youth Camp.
• PvP World Arena mode: You can challenge other players in asynchronous PvP World Arena mode and climb the leaderboards.
    -

    What are the different game modes available in Real Football?

    -

    Real Football game offers various game modes to suit your preferences and skills. Here are some of them:

    -
      -
• Career mode: You can play as a manager and lead your team to glory in various tournaments and leagues. You can also customize your team name, logo, jersey, and players.
• Friendly mode: You can play a quick match against any team of your choice. You can also adjust the difficulty level, match duration, weather, and other settings.
• Cup mode: You can participate in various cup competitions such as the World Cup, the European Championship, the Copa America, and more. You can also create your own custom cup with your own rules and teams.
• Training mode: You can practice your skills and tactics in various training drills such as dribbling, passing, shooting, defending, and more.
    -

    How to play Real Football better and win more matches?

    -

    If you want to improve your performance and win more matches in Real Football game, here are some tips that might help you:

    -
      -
• Use the right controls: You can choose between two types of controls: virtual buttons or gestures. Virtual buttons are more precise and responsive, while gestures are more intuitive and fluid. You can also customize the size and position of the buttons according to your preference.
• Use the right tactics: You can choose between different formations, strategies, and styles for your team. You can also adjust the roles and positions of your players according to their strengths and weaknesses. For example, you can use a 4-4-2 formation with a defensive style for a balanced approach, or a 4-3-3 formation with an attacking style for a more aggressive approach.
• Use the right skills: You can use various skills to outsmart your opponents and create chances. For example, you can use sprint to run faster, dribble to evade defenders, pass to find teammates, shoot to score goals, tackle to dispossess opponents, slide to block shots, switch to change players, and more.
• Use the right items: You can use various items to enhance your players' abilities and skills. For example, you can use boots to increase speed, gloves to improve handling, kits to boost stamina, balls to improve shooting, and more. You can also use skill items to perform special moves such as curve shots, bicycle kicks, long passes, and more.
    -

    What are some of the user reviews of Real Football game?

    -

Real Football game has received mixed reviews from users who have downloaded and played it. Here is a sample of user reviews from the Google Play Store:

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    UserRatingReview
    John Smith5 starsThis game is awesome. The graphics are amazing and the gameplay is smooth and realistic. I love the different modes and the online features. I recommend this game to all soccer fans.
    Jane Doe4 starsI like this game a lot. It has a lot of features and options to customize your team and players. The only thing I don't like is that it takes too long to load sometimes and it crashes occasionally. Please fix these issues.
    Bob Lee3 starsThis game is good but not great. It has some nice graphics and animations but the controls are not very responsive and the AI is not very smart. It also has some bugs and glitches that need to be fixed.
    Alice Cooper2 starsThis game is disappointing. It has poor graphics and sound quality and the gameplay is boring and repetitive. It also has a lot of ads and in-app purchases that ruin the experience. I don't recommend this game.
    Tom Cruise1 starThis game is terrible. It doesn't work at all on my device. It always freezes and crashes and I can't even play it. It also has a lot of viruses and malware that damage my device. I hate this game.
    -

    Conclusion: Why download Real Football apk?

    -

    In conclusion, Real Football download apk is a great option for soccer fans who want to enjoy a realistic and immersive soccer simulation game on their mobile devices. Real Football game has many features, modes, tips, and reviews that make it one of the best soccer games on the market. You can download Real Football apk from various sources such as Google Play Store, APKPure, APKMirror, and more. However, you should always be careful and check the authenticity and security of the apk file before downloading it. You should also make sure that your device meets the minimum requirements for running the game smoothly.
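On the authenticity point: when a download site publishes a checksum for its APK, you can verify your copy before installing it by comparing SHA-256 fingerprints. A minimal sketch, in which the file name and expected hash are placeholders you would replace with your own values:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Both values below are placeholders, not real data for this game.
apk_path = "real_football.apk"
expected = "0123abcd..."  # the checksum published by the download site, if any

actual = sha256_of(apk_path)
print("OK" if actual == expected else f"Mismatch! got {actual}")
```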

    -

    If you are ready to download Real Football apk and start playing, click on the link below and follow the instructions:

    -

    Real Football Download APK

    -

    Frequently Asked Questions (FAQs) about Real Football game

    -

    Here are some of the most common questions that users have about Real Football game:

    -

    Q: How much space does Real Football game require on my device?

    -

    A: Real Football game requires about 500 MB of free space on your device.

    -


    -

    Q: What are the minimum requirements for running Real Football game on my device?

    -

    A: Real Football game requires Android 4.1 or higher and at least 1 GB of RAM.

    -

    Q: How can I update Real Football game to the latest version?

    -

    A: You can update Real Football game by downloading the latest apk file from the same source that you downloaded it from or by checking for updates in the game settings.

    -

    Q: How can I contact the developers of Real Football game for feedback or support?

    -

    A: You can contact the developers of Real Football game by sending an email to support@gameloft.com or by visiting their official website at www.gameloft.com.

    -

    Q: How can I play Real Football game offline?

    -

    A: You can play Real Football game offline by turning off your internet connection before launching the game. However, you will not be able to access some features such as online matches, leaderboards, achievements, etc.

    -
    -
    \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download True Love For Her APK and Find Your Soulmate in this Dating Simulation.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download True Love For Her APK and Find Your Soulmate in this Dating Simulation.md deleted file mode 100644 index 8cdfb789417f74cc0fbfbcacb68edb1cdfb92388..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download True Love For Her APK and Find Your Soulmate in this Dating Simulation.md +++ /dev/null @@ -1,101 +0,0 @@ -
    -

    True Love for Her APK Download: A Romantic Game for Android Users

    -

    Are you looking for a romantic game that will keep you hooked for hours? Do you want to experience a thrilling story of love, passion, and madness? If yes, then you should try True Love for Her APK, a fan-made game based on the popular Yandere Simulator. In this game, you will play as Ayano Aishi, a girl who is obsessed with her crush, Taro Yamada. You will do anything to make him yours, even if it means eliminating your rivals in the most brutal ways. But be careful, your actions will have consequences and affect the outcome of the game. Read on to find out more about this game and how to download it on your Android device.

    -

    true love for her apk download


    Download ✏ ✏ ✏ https://gohhs.com/2uPoJd



    -

    What is True Love for Her?

    -

    True Love for Her is a game created by Ayano-Dev, a fan of Yandere Simulator, a stealth action game that revolves around a yandere girl who stalks and kills her love interest's admirers. True Love for Her is inspired by Yandere Simulator, but it has its own original story, characters, and features. Here are some of the aspects of True Love for Her that make it an interesting game to play.

    -

    A fan-made game based on Yandere Simulator

    -

    True Love for Her is not an official game by YandereDev, the developer of Yandere Simulator. It is a fan-made game that uses some of the assets and mechanics from Yandere Simulator, but it also adds new elements and twists to the original game. For example, True Love for Her has different rivals, locations, events, and endings than Yandere Simulator. It also has more romance and drama than the original game. True Love for Her is a tribute to Yandere Simulator, but it is also a unique game that stands on its own.

    -

    A story of obsession, jealousy, and murder

    -

    True Love for Her follows the story of Ayano Aishi, a girl who suffers from a condition that makes her unable to feel emotions. She only feels alive when she is near her crush, Taro Yamada, whom she calls Senpai. She believes that he is her true love and that they are destined to be together. However, she faces many obstacles in her way, such as other girls who are interested in Senpai. She decides to eliminate them one by one using various methods, such as poisoning, kidnapping, blackmailing, or stabbing. She also has to deal with other threats, such as the police, the school council, or Senpai himself. Will she be able to win Senpai's heart without getting caught or losing her sanity?

    -

    A game with multiple endings and choices

    -

    True Love for Her is not a linear game that has only one outcome. It is a game that has multiple endings and choices that affect the story and the gameplay. Depending on your actions and decisions, you can get different results and consequences. For example, you can choose to be stealthy or aggressive when eliminating your rivals. You can also choose to be friendly or hostile when interacting with other characters. You can also choose to confess your love to Senpai or keep it a secret until the end. Each choice will have an impact on how Senpai and others perceive you and how the game ends. There are many possible endings in True Love for Her, ranging from happy to tragic, from romantic to horrific. You can replay the game multiple times to see different outcomes and discover new secrets.
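To make "choices that affect the outcome" concrete, here is a tiny generic sketch of how a branching-ending structure can be modeled in plain Python. It is purely illustrative and not taken from the game's code; all choice and ending names are invented.

```python
# A toy branching-story structure, purely illustrative (not the game's code).
# Each choice moves the story to a new state; some states are endings.
STORY = {
    "start": {"confess": "confession", "stay silent": "stalking"},
    "confession": {"accepted": "ENDING: happy", "rejected": "ENDING: heartbreak"},
    "stalking": {"get caught": "ENDING: arrested", "eliminate rival": "rivalry"},
    "rivalry": {"stealthy": "ENDING: secret victory", "reckless": "ENDING: tragedy"},
}

def play(state: str, choices: list[str]) -> str:
    """Follow a list of choices until an ending (or a dead end) is reached."""
    for choice in choices:
        state = STORY[state][choice]
        if state.startswith("ENDING"):
            return state
    return state

print(play("start", ["stay silent", "eliminate rival", "stealthy"]))
# -> ENDING: secret victory
```

Real branching games track much more state than a single label, but the idea is the same: the path of choices you take selects which ending you reach.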

    -


    -

    How to download and install True Love for Her APK?

    -

    If you are interested in playing True Love for Her, you will need to download and install the APK file on your Android device. APK stands for Android Package Kit, and it is a file format that allows you to install applications that are not available on the Google Play Store. Here are the steps that you need to follow to download and install True Love for Her APK on your device.

    -

    Download the APK file from the official website

    -

    The first step is to download the APK file from the official website of True Love for Her. You can visit the website by clicking [here]. On the website, you will find a download button that will direct you to a secure link where you can download the APK file. The file size is about 200 MB, so make sure you have enough space on your device and a stable internet connection.

    -

    Enable unknown sources on your device settings

    -

    The second step is to enable unknown sources on your device settings. This will allow you to install applications that are not from the Google Play Store. To do this, go to your device settings and look for the security or privacy option. Then, find the unknown sources option and toggle it on. You may see a warning message that says installing apps from unknown sources may harm your device, but don't worry, True Love for Her APK is safe and virus-free.

    -

    Install the APK file and enjoy the game

    -

    The third and final step is to install the APK file and enjoy the game. To do this, locate the downloaded APK file on your device storage and tap on it. You may see a pop-up message that asks for your permission to install the app, just tap on install and wait for the process to finish. Once the installation is done, you can open the app and start playing True Love for Her. Have fun!
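If you would rather install from a computer than tap through a file manager, the same step can be done over USB with adb. A minimal sketch, assuming adb is installed, USB debugging is enabled on the device, and the file name is a placeholder:

```python
import subprocess

apk_path = "true_love_for_her.apk"  # placeholder: path to the downloaded APK

# "adb install" pushes the APK to the connected device and installs it.
# The -r flag reinstalls (keeping app data) if the game is already present.
result = subprocess.run(
    ["adb", "install", "-r", apk_path],
    capture_output=True,
    text=True,
)

if result.returncode == 0:
    print("Installed. Open the app on the device and enjoy the game.")
else:
    print("Install failed:", result.stderr or result.stdout)
```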

    -

    What are the features of True Love for Her APK?

    -

    True Love for Her APK is not just a simple game that you can play on your Android device. It is a game that has many features that make it more enjoyable and immersive. Here are some of the features that you can expect from True Love for Her APK.

    -

    High-quality graphics and sound effects

    -

    True Love for Her APK has high-quality graphics and sound effects that create a realistic and captivating atmosphere. The game has detailed and colorful graphics that show the characters, the environments, and the actions in a clear and vivid way. The game also has sound effects that match the mood and tone of the game, such as romantic music, creepy noises, or dramatic sounds. The game also has voice acting for some of the characters, which adds more personality and emotion to them.

    -

    Interactive gameplay and dialogue options

    -

    True Love for Her APK has interactive gameplay and dialogue options that make you feel like you are part of the story. The game has gameplay mechanics that allow you to control Ayano's actions, such as walking, running, crouching, attacking, or interacting with objects. The game also has dialogue options that allow you to choose what Ayano says or does in certain situations, such as talking to Senpai, confronting rivals, or making decisions. The game also has mini-games that test your skills and reflexes, such as stealth mode, combat mode, or puzzle mode.

    -

    Different modes and difficulty levels

    -

    True Love for Her APK has different modes and difficulty levels that offer different challenges and experiences. The game has two main modes: story mode and sandbox mode. Story mode is where you follow Ayano's story and try to get one of the endings. Sandbox mode is where you can explore the school and do whatever you want without any restrictions or consequences. The game also has three difficulty levels: easy, normal, and hard. Each difficulty level affects how easy or hard it is to eliminate rivals, avoid detection, or complete tasks.

    -

    Customizable characters and outfits

    -

    True Love for Her APK has customizable characters and outfits that allow you to personalize your appearance and style. The game has a character creator feature that allows you to change Ayano's hair color, eye color, skin tone, facial features, or accessories. The game also has an outfit selector feature that allows you to change Ayano's clothes, shoes, or accessories. You can choose from various outfits that suit different occasions, such as school uniform, casual wear, formal wear, or cosplay.


    What are the pros and cons of True Love for Her APK?


    True Love for Her APK has pros and cons that you should weigh before playing it. Here are the main advantages and disadvantages.


    Pros: Free, fun, and addictive game


    One of the pros of True Love for Her APK is that it is a free, fun, and addictive game you can enjoy on your Android device. You don't pay anything to download or play it, and all features and content are available without limitations or ads. Its captivating story, engaging gameplay, and multiple endings can keep you hooked for hours, and there is always something new to discover or try.


    Cons: Mature content, violence, and bugs


    One of the cons of True Love for Her APK is that its mature content, violence, and bugs may not suit everyone. The game deals with themes such as obsession, jealousy, murder, suicide, and gore, and it contains graphic scenes of blood, torture, and death. It also has bugs that can cause crashes, glitches, or errors. It is not recommended for children or sensitive players, and it may call for parental guidance or discretion.


    Conclusion


    True Love for Her APK is a romantic game for Android based on Yandere Simulator. It tells the story of Ayano Aishi, a girl obsessed with her crush, Taro Yamada, who will do anything to make him hers, even killing her rivals in the most brutal ways. Your choices affect both the story and the gameplay, leading to multiple endings. The game is made more enjoyable and immersive by high-quality graphics and sound effects, interactive gameplay and dialogue options, different modes and difficulty levels, and customizable characters and outfits. Weigh the pros and cons before playing: it is free, fun, and addictive, but it also contains mature content, violence, and bugs. If you are looking for a romantic game that will keep you hooked for hours, give True Love for Her APK a try.


    FAQs


    Here are some of the frequently asked questions about True Love for Her APK.


    Q: Is True Love for Her APK safe to download?


    A: Yes, True Love for Her APK is safe to download from the official website. It does not contain any viruses or malware that can harm your device or data. However, you should always be careful when downloading apps from unknown sources and scan them with an antivirus before installing them.
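    Beyond an antivirus scan, you can also verify a download's integrity before installing it. The sketch below is illustrative rather than part of the article: it compares the APK's SHA-256 digest against a checksum the developer would publish, and both the file name and the checksum placeholder are hypothetical.

```python
# Sketch: verify a downloaded APK against a developer-published SHA-256
# checksum before installing. File name and expected digest are hypothetical.
import hashlib

APK_PATH = "true-love-for-her.apk"
EXPECTED = "<sha-256 digest published by the developer>"

with open(APK_PATH, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("checksum OK" if digest == EXPECTED else "checksum MISMATCH - do not install")
```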


    Q: Is True Love for Her APK compatible with my device?


    A: True Love for Her APK is compatible with most Android devices running Android 4.4 or higher. Even so, some devices may not support the game or may run it poorly because of their specifications or performance. You can check your device's compatibility on the official website or by contacting the developer.
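    For readers with a computer handy, here is an illustrative sketch (not from the original FAQ) of checking whether a connected device meets that minimum: Android 4.4 corresponds to SDK level 19, which adb can report. It assumes adb is installed and the device is connected with USB debugging enabled.

```python
# Sketch: read the connected device's SDK level via adb and compare it
# against Android 4.4 (SDK 19), the article's stated minimum.
import subprocess

out = subprocess.run(
    ["adb", "shell", "getprop", "ro.build.version.sdk"],
    capture_output=True, text=True, check=True,
)
sdk = int(out.stdout.strip())
print("compatible" if sdk >= 19 else "below Android 4.4 - not supported")
```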


    Q: How can I update True Love for Her APK?


    A: You can update True Love for Her APK by visiting the official website and downloading the latest version of the APK file. You can also follow the developer on social media or join their Discord server to get notified about new updates or features.


    Q: How can I contact the developer of True Love for Her APK?


    A: You can contact the developer of True Love for Her APK by visiting their website or social media accounts. You can also join their Discord server or email them at ayano.dev@gmail.com. You can give them feedback, suggestions, bug reports, or fan art.


    Q: Where can I find more information about True Love for Her APK?


    A: You can find more information about True Love for Her APK by visiting their website or social media accounts. You can also watch gameplay videos or reviews on YouTube or read articles or blogs on the internet.

    \ No newline at end of file diff --git a/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/clap/training/zero_shot.py b/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/clap/training/zero_shot.py deleted file mode 100644 index 28b8fccc1af17fc69002857a7f529ac041c374f2..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/clap/training/zero_shot.py +++ /dev/null @@ -1,95 +0,0 @@ -# NOTE: This script is currently not supported for CLAP. -import logging -from contextlib import suppress - -import torch -import torch.nn.functional as F -from tqdm import tqdm - -from open_clip import tokenize -from .imagenet_zeroshot_data import imagenet_classnames, openai_imagenet_template - - -def zero_shot_classifier(model, classnames, templates, args): - with torch.no_grad(): - zeroshot_weights = [] - for classname in tqdm(classnames): - texts = [template(classname) for template in templates] # format with class - texts = tokenize(texts).to(args.device) # tokenize - if args.distributed and not args.horovod: - class_embeddings = model.module.encode_text(texts) - else: - class_embeddings = model.encode_text(texts) - class_embedding = F.normalize(class_embeddings, dim=-1).mean(dim=0) - class_embedding /= class_embedding.norm() - zeroshot_weights.append(class_embedding) - zeroshot_weights = torch.stack(zeroshot_weights, dim=1).to(args.device) - return zeroshot_weights - - -def accuracy(output, target, topk=(1,)): - pred = output.topk(max(topk), 1, True, True)[1].t() - correct = pred.eq(target.view(1, -1).expand_as(pred)) - return [ - float(correct[:k].reshape(-1).float().sum(0, keepdim=True).cpu().numpy()) - for k in topk - ] - - -def run(model, classifier, dataloader, args): - autocast = torch.cuda.amp.autocast if args.precision == "amp" else suppress - with torch.no_grad(): - top1, top5, n = 0.0, 0.0, 0.0 - for images, target in tqdm(dataloader, unit_scale=args.batch_size): - images = images.to(args.device) - target = target.to(args.device) - - with autocast(): - # predict - if args.distributed and not args.horovod: - image_features = model.module.encode_image(images) - else: - image_features = model.encode_image(images) - image_features = F.normalize(image_features, dim=-1) - logits = 100.0 * image_features @ classifier - - # measure accuracy - acc1, acc5 = accuracy(logits, target, topk=(1, 5)) - top1 += acc1 - top5 += acc5 - n += images.size(0) - - top1 = top1 / n - top5 = top5 / n - return top1, top5 - - -def zero_shot_eval(model, data, epoch, args): - if "imagenet-val" not in data and "imagenet-v2" not in data: - return {} - if args.zeroshot_frequency == 0: - return {} - if (epoch % args.zeroshot_frequency) != 0 and epoch != args.epochs: - return {} - - logging.info("Starting zero-shot imagenet.") - - logging.info("Building zero-shot classifier") - classifier = zero_shot_classifier( - model, imagenet_classnames, openai_imagenet_template, args - ) - - logging.info("Using classifier") - results = {} - if "imagenet-val" in data: - top1, top5 = run(model, classifier, data["imagenet-val"].dataloader, args) - results["imagenet-zeroshot-val-top1"] = top1 - results["imagenet-zeroshot-val-top5"] = top5 - if "imagenet-v2" in data: - top1, top5 = run(model, classifier, data["imagenet-v2"].dataloader, args) - results["imagenetv2-zeroshot-val-top1"] = top1 - results["imagenetv2-zeroshot-val-top5"] = top5 - - logging.info("Finished zero-shot imagenet.") - - return results diff --git 
a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/ts4.8/http2.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/ts4.8/http2.d.ts deleted file mode 100644 index 0e3682609f32c1783ba84ea2331f7197526a1cc9..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/ts4.8/http2.d.ts +++ /dev/null @@ -1,2134 +0,0 @@ -/** - * The `http2` module provides an implementation of the [HTTP/2](https://tools.ietf.org/html/rfc7540) protocol. It - * can be accessed using: - * - * ```js - * const http2 = require('http2'); - * ``` - * @since v8.4.0 - * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/http2.js) - */ -declare module 'http2' { - import EventEmitter = require('node:events'); - import * as fs from 'node:fs'; - import * as net from 'node:net'; - import * as stream from 'node:stream'; - import * as tls from 'node:tls'; - import * as url from 'node:url'; - import { IncomingHttpHeaders as Http1IncomingHttpHeaders, OutgoingHttpHeaders, IncomingMessage, ServerResponse } from 'node:http'; - export { OutgoingHttpHeaders } from 'node:http'; - export interface IncomingHttpStatusHeader { - ':status'?: number | undefined; - } - export interface IncomingHttpHeaders extends Http1IncomingHttpHeaders { - ':path'?: string | undefined; - ':method'?: string | undefined; - ':authority'?: string | undefined; - ':scheme'?: string | undefined; - } - // Http2Stream - export interface StreamPriorityOptions { - exclusive?: boolean | undefined; - parent?: number | undefined; - weight?: number | undefined; - silent?: boolean | undefined; - } - export interface StreamState { - localWindowSize?: number | undefined; - state?: number | undefined; - localClose?: number | undefined; - remoteClose?: number | undefined; - sumDependencyWeight?: number | undefined; - weight?: number | undefined; - } - export interface ServerStreamResponseOptions { - endStream?: boolean | undefined; - waitForTrailers?: boolean | undefined; - } - export interface StatOptions { - offset: number; - length: number; - } - export interface ServerStreamFileResponseOptions { - statCheck?(stats: fs.Stats, headers: OutgoingHttpHeaders, statOptions: StatOptions): void | boolean; - waitForTrailers?: boolean | undefined; - offset?: number | undefined; - length?: number | undefined; - } - export interface ServerStreamFileResponseOptionsWithError extends ServerStreamFileResponseOptions { - onError?(err: NodeJS.ErrnoException): void; - } - export interface Http2Stream extends stream.Duplex { - /** - * Set to `true` if the `Http2Stream` instance was aborted abnormally. When set, - * the `'aborted'` event will have been emitted. - * @since v8.4.0 - */ - readonly aborted: boolean; - /** - * This property shows the number of characters currently buffered to be written. - * See `net.Socket.bufferSize` for details. - * @since v11.2.0, v10.16.0 - */ - readonly bufferSize: number; - /** - * Set to `true` if the `Http2Stream` instance has been closed. - * @since v9.4.0 - */ - readonly closed: boolean; - /** - * Set to `true` if the `Http2Stream` instance has been destroyed and is no longer - * usable. - * @since v8.4.0 - */ - readonly destroyed: boolean; - /** - * Set to `true` if the `END_STREAM` flag was set in the request or response - * HEADERS frame received, indicating that no additional data should be received - * and the readable side of the `Http2Stream` will be closed. 
- * @since v10.11.0 - */ - readonly endAfterHeaders: boolean; - /** - * The numeric stream identifier of this `Http2Stream` instance. Set to `undefined`if the stream identifier has not yet been assigned. - * @since v8.4.0 - */ - readonly id?: number | undefined; - /** - * Set to `true` if the `Http2Stream` instance has not yet been assigned a - * numeric stream identifier. - * @since v9.4.0 - */ - readonly pending: boolean; - /** - * Set to the `RST_STREAM` `error code` reported when the `Http2Stream` is - * destroyed after either receiving an `RST_STREAM` frame from the connected peer, - * calling `http2stream.close()`, or `http2stream.destroy()`. Will be`undefined` if the `Http2Stream` has not been closed. - * @since v8.4.0 - */ - readonly rstCode: number; - /** - * An object containing the outbound headers sent for this `Http2Stream`. - * @since v9.5.0 - */ - readonly sentHeaders: OutgoingHttpHeaders; - /** - * An array of objects containing the outbound informational (additional) headers - * sent for this `Http2Stream`. - * @since v9.5.0 - */ - readonly sentInfoHeaders?: OutgoingHttpHeaders[] | undefined; - /** - * An object containing the outbound trailers sent for this `HttpStream`. - * @since v9.5.0 - */ - readonly sentTrailers?: OutgoingHttpHeaders | undefined; - /** - * A reference to the `Http2Session` instance that owns this `Http2Stream`. The - * value will be `undefined` after the `Http2Stream` instance is destroyed. - * @since v8.4.0 - */ - readonly session: Http2Session; - /** - * Provides miscellaneous information about the current state of the`Http2Stream`. - * - * A current state of this `Http2Stream`. - * @since v8.4.0 - */ - readonly state: StreamState; - /** - * Closes the `Http2Stream` instance by sending an `RST_STREAM` frame to the - * connected HTTP/2 peer. - * @since v8.4.0 - * @param [code=http2.constants.NGHTTP2_NO_ERROR] Unsigned 32-bit integer identifying the error code. - * @param callback An optional function registered to listen for the `'close'` event. - */ - close(code?: number, callback?: () => void): void; - /** - * Updates the priority for this `Http2Stream` instance. - * @since v8.4.0 - */ - priority(options: StreamPriorityOptions): void; - /** - * ```js - * const http2 = require('http2'); - * const client = http2.connect('http://example.org:8000'); - * const { NGHTTP2_CANCEL } = http2.constants; - * const req = client.request({ ':path': '/' }); - * - * // Cancel the stream if there's no activity after 5 seconds - * req.setTimeout(5000, () => req.close(NGHTTP2_CANCEL)); - * ``` - * @since v8.4.0 - */ - setTimeout(msecs: number, callback?: () => void): void; - /** - * Sends a trailing `HEADERS` frame to the connected HTTP/2 peer. This method - * will cause the `Http2Stream` to be immediately closed and must only be - * called after the `'wantTrailers'` event has been emitted. When sending a - * request or sending a response, the `options.waitForTrailers` option must be set - * in order to keep the `Http2Stream` open after the final `DATA` frame so that - * trailers can be sent. - * - * ```js - * const http2 = require('http2'); - * const server = http2.createServer(); - * server.on('stream', (stream) => { - * stream.respond(undefined, { waitForTrailers: true }); - * stream.on('wantTrailers', () => { - * stream.sendTrailers({ xyz: 'abc' }); - * }); - * stream.end('Hello World'); - * }); - * ``` - * - * The HTTP/1 specification forbids trailers from containing HTTP/2 pseudo-header - * fields (e.g. `':method'`, `':path'`, etc). 
- * @since v10.0.0 - */ - sendTrailers(headers: OutgoingHttpHeaders): void; - addListener(event: 'aborted', listener: () => void): this; - addListener(event: 'close', listener: () => void): this; - addListener(event: 'data', listener: (chunk: Buffer | string) => void): this; - addListener(event: 'drain', listener: () => void): this; - addListener(event: 'end', listener: () => void): this; - addListener(event: 'error', listener: (err: Error) => void): this; - addListener(event: 'finish', listener: () => void): this; - addListener(event: 'frameError', listener: (frameType: number, errorCode: number) => void): this; - addListener(event: 'pipe', listener: (src: stream.Readable) => void): this; - addListener(event: 'unpipe', listener: (src: stream.Readable) => void): this; - addListener(event: 'streamClosed', listener: (code: number) => void): this; - addListener(event: 'timeout', listener: () => void): this; - addListener(event: 'trailers', listener: (trailers: IncomingHttpHeaders, flags: number) => void): this; - addListener(event: 'wantTrailers', listener: () => void): this; - addListener(event: string | symbol, listener: (...args: any[]) => void): this; - emit(event: 'aborted'): boolean; - emit(event: 'close'): boolean; - emit(event: 'data', chunk: Buffer | string): boolean; - emit(event: 'drain'): boolean; - emit(event: 'end'): boolean; - emit(event: 'error', err: Error): boolean; - emit(event: 'finish'): boolean; - emit(event: 'frameError', frameType: number, errorCode: number): boolean; - emit(event: 'pipe', src: stream.Readable): boolean; - emit(event: 'unpipe', src: stream.Readable): boolean; - emit(event: 'streamClosed', code: number): boolean; - emit(event: 'timeout'): boolean; - emit(event: 'trailers', trailers: IncomingHttpHeaders, flags: number): boolean; - emit(event: 'wantTrailers'): boolean; - emit(event: string | symbol, ...args: any[]): boolean; - on(event: 'aborted', listener: () => void): this; - on(event: 'close', listener: () => void): this; - on(event: 'data', listener: (chunk: Buffer | string) => void): this; - on(event: 'drain', listener: () => void): this; - on(event: 'end', listener: () => void): this; - on(event: 'error', listener: (err: Error) => void): this; - on(event: 'finish', listener: () => void): this; - on(event: 'frameError', listener: (frameType: number, errorCode: number) => void): this; - on(event: 'pipe', listener: (src: stream.Readable) => void): this; - on(event: 'unpipe', listener: (src: stream.Readable) => void): this; - on(event: 'streamClosed', listener: (code: number) => void): this; - on(event: 'timeout', listener: () => void): this; - on(event: 'trailers', listener: (trailers: IncomingHttpHeaders, flags: number) => void): this; - on(event: 'wantTrailers', listener: () => void): this; - on(event: string | symbol, listener: (...args: any[]) => void): this; - once(event: 'aborted', listener: () => void): this; - once(event: 'close', listener: () => void): this; - once(event: 'data', listener: (chunk: Buffer | string) => void): this; - once(event: 'drain', listener: () => void): this; - once(event: 'end', listener: () => void): this; - once(event: 'error', listener: (err: Error) => void): this; - once(event: 'finish', listener: () => void): this; - once(event: 'frameError', listener: (frameType: number, errorCode: number) => void): this; - once(event: 'pipe', listener: (src: stream.Readable) => void): this; - once(event: 'unpipe', listener: (src: stream.Readable) => void): this; - once(event: 'streamClosed', listener: (code: number) => void): 
this; - once(event: 'timeout', listener: () => void): this; - once(event: 'trailers', listener: (trailers: IncomingHttpHeaders, flags: number) => void): this; - once(event: 'wantTrailers', listener: () => void): this; - once(event: string | symbol, listener: (...args: any[]) => void): this; - prependListener(event: 'aborted', listener: () => void): this; - prependListener(event: 'close', listener: () => void): this; - prependListener(event: 'data', listener: (chunk: Buffer | string) => void): this; - prependListener(event: 'drain', listener: () => void): this; - prependListener(event: 'end', listener: () => void): this; - prependListener(event: 'error', listener: (err: Error) => void): this; - prependListener(event: 'finish', listener: () => void): this; - prependListener(event: 'frameError', listener: (frameType: number, errorCode: number) => void): this; - prependListener(event: 'pipe', listener: (src: stream.Readable) => void): this; - prependListener(event: 'unpipe', listener: (src: stream.Readable) => void): this; - prependListener(event: 'streamClosed', listener: (code: number) => void): this; - prependListener(event: 'timeout', listener: () => void): this; - prependListener(event: 'trailers', listener: (trailers: IncomingHttpHeaders, flags: number) => void): this; - prependListener(event: 'wantTrailers', listener: () => void): this; - prependListener(event: string | symbol, listener: (...args: any[]) => void): this; - prependOnceListener(event: 'aborted', listener: () => void): this; - prependOnceListener(event: 'close', listener: () => void): this; - prependOnceListener(event: 'data', listener: (chunk: Buffer | string) => void): this; - prependOnceListener(event: 'drain', listener: () => void): this; - prependOnceListener(event: 'end', listener: () => void): this; - prependOnceListener(event: 'error', listener: (err: Error) => void): this; - prependOnceListener(event: 'finish', listener: () => void): this; - prependOnceListener(event: 'frameError', listener: (frameType: number, errorCode: number) => void): this; - prependOnceListener(event: 'pipe', listener: (src: stream.Readable) => void): this; - prependOnceListener(event: 'unpipe', listener: (src: stream.Readable) => void): this; - prependOnceListener(event: 'streamClosed', listener: (code: number) => void): this; - prependOnceListener(event: 'timeout', listener: () => void): this; - prependOnceListener(event: 'trailers', listener: (trailers: IncomingHttpHeaders, flags: number) => void): this; - prependOnceListener(event: 'wantTrailers', listener: () => void): this; - prependOnceListener(event: string | symbol, listener: (...args: any[]) => void): this; - } - export interface ClientHttp2Stream extends Http2Stream { - addListener(event: 'continue', listener: () => {}): this; - addListener(event: 'headers', listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - addListener(event: 'push', listener: (headers: IncomingHttpHeaders, flags: number) => void): this; - addListener(event: 'response', listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - addListener(event: string | symbol, listener: (...args: any[]) => void): this; - emit(event: 'continue'): boolean; - emit(event: 'headers', headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number): boolean; - emit(event: 'push', headers: IncomingHttpHeaders, flags: number): boolean; - emit(event: 'response', headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number): 
boolean; - emit(event: string | symbol, ...args: any[]): boolean; - on(event: 'continue', listener: () => {}): this; - on(event: 'headers', listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - on(event: 'push', listener: (headers: IncomingHttpHeaders, flags: number) => void): this; - on(event: 'response', listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - on(event: string | symbol, listener: (...args: any[]) => void): this; - once(event: 'continue', listener: () => {}): this; - once(event: 'headers', listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - once(event: 'push', listener: (headers: IncomingHttpHeaders, flags: number) => void): this; - once(event: 'response', listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - once(event: string | symbol, listener: (...args: any[]) => void): this; - prependListener(event: 'continue', listener: () => {}): this; - prependListener(event: 'headers', listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - prependListener(event: 'push', listener: (headers: IncomingHttpHeaders, flags: number) => void): this; - prependListener(event: 'response', listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - prependListener(event: string | symbol, listener: (...args: any[]) => void): this; - prependOnceListener(event: 'continue', listener: () => {}): this; - prependOnceListener(event: 'headers', listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - prependOnceListener(event: 'push', listener: (headers: IncomingHttpHeaders, flags: number) => void): this; - prependOnceListener(event: 'response', listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - prependOnceListener(event: string | symbol, listener: (...args: any[]) => void): this; - } - export interface ServerHttp2Stream extends Http2Stream { - /** - * True if headers were sent, false otherwise (read-only). - * @since v8.4.0 - */ - readonly headersSent: boolean; - /** - * Read-only property mapped to the `SETTINGS_ENABLE_PUSH` flag of the remote - * client's most recent `SETTINGS` frame. Will be `true` if the remote peer - * accepts push streams, `false` otherwise. Settings are the same for every`Http2Stream` in the same `Http2Session`. - * @since v8.4.0 - */ - readonly pushAllowed: boolean; - /** - * Sends an additional informational `HEADERS` frame to the connected HTTP/2 peer. - * @since v8.4.0 - */ - additionalHeaders(headers: OutgoingHttpHeaders): void; - /** - * Initiates a push stream. The callback is invoked with the new `Http2Stream`instance created for the push stream passed as the second argument, or an`Error` passed as the first argument. - * - * ```js - * const http2 = require('http2'); - * const server = http2.createServer(); - * server.on('stream', (stream) => { - * stream.respond({ ':status': 200 }); - * stream.pushStream({ ':path': '/' }, (err, pushStream, headers) => { - * if (err) throw err; - * pushStream.respond({ ':status': 200 }); - * pushStream.end('some pushed data'); - * }); - * stream.end('some data'); - * }); - * ``` - * - * Setting the weight of a push stream is not allowed in the `HEADERS` frame. 
Pass - * a `weight` value to `http2stream.priority` with the `silent` option set to`true` to enable server-side bandwidth balancing between concurrent streams. - * - * Calling `http2stream.pushStream()` from within a pushed stream is not permitted - * and will throw an error. - * @since v8.4.0 - * @param callback Callback that is called once the push stream has been initiated. - */ - pushStream(headers: OutgoingHttpHeaders, callback?: (err: Error | null, pushStream: ServerHttp2Stream, headers: OutgoingHttpHeaders) => void): void; - pushStream(headers: OutgoingHttpHeaders, options?: StreamPriorityOptions, callback?: (err: Error | null, pushStream: ServerHttp2Stream, headers: OutgoingHttpHeaders) => void): void; - /** - * ```js - * const http2 = require('http2'); - * const server = http2.createServer(); - * server.on('stream', (stream) => { - * stream.respond({ ':status': 200 }); - * stream.end('some data'); - * }); - * ``` - * - * When the `options.waitForTrailers` option is set, the `'wantTrailers'` event - * will be emitted immediately after queuing the last chunk of payload data to be - * sent. The `http2stream.sendTrailers()` method can then be used to sent trailing - * header fields to the peer. - * - * When `options.waitForTrailers` is set, the `Http2Stream` will not automatically - * close when the final `DATA` frame is transmitted. User code must call either`http2stream.sendTrailers()` or `http2stream.close()` to close the`Http2Stream`. - * - * ```js - * const http2 = require('http2'); - * const server = http2.createServer(); - * server.on('stream', (stream) => { - * stream.respond({ ':status': 200 }, { waitForTrailers: true }); - * stream.on('wantTrailers', () => { - * stream.sendTrailers({ ABC: 'some value to send' }); - * }); - * stream.end('some data'); - * }); - * ``` - * @since v8.4.0 - */ - respond(headers?: OutgoingHttpHeaders, options?: ServerStreamResponseOptions): void; - /** - * Initiates a response whose data is read from the given file descriptor. No - * validation is performed on the given file descriptor. If an error occurs while - * attempting to read data using the file descriptor, the `Http2Stream` will be - * closed using an `RST_STREAM` frame using the standard `INTERNAL_ERROR` code. - * - * When used, the `Http2Stream` object's `Duplex` interface will be closed - * automatically. - * - * ```js - * const http2 = require('http2'); - * const fs = require('fs'); - * - * const server = http2.createServer(); - * server.on('stream', (stream) => { - * const fd = fs.openSync('/some/file', 'r'); - * - * const stat = fs.fstatSync(fd); - * const headers = { - * 'content-length': stat.size, - * 'last-modified': stat.mtime.toUTCString(), - * 'content-type': 'text/plain; charset=utf-8' - * }; - * stream.respondWithFD(fd, headers); - * stream.on('close', () => fs.closeSync(fd)); - * }); - * ``` - * - * The optional `options.statCheck` function may be specified to give user code - * an opportunity to set additional content headers based on the `fs.Stat` details - * of the given fd. If the `statCheck` function is provided, the`http2stream.respondWithFD()` method will perform an `fs.fstat()` call to - * collect details on the provided file descriptor. - * - * The `offset` and `length` options may be used to limit the response to a - * specific range subset. This can be used, for instance, to support HTTP Range - * requests. - * - * The file descriptor or `FileHandle` is not closed when the stream is closed, - * so it will need to be closed manually once it is no longer needed. 
- * Using the same file descriptor concurrently for multiple streams - * is not supported and may result in data loss. Re-using a file descriptor - * after a stream has finished is supported. - * - * When the `options.waitForTrailers` option is set, the `'wantTrailers'` event - * will be emitted immediately after queuing the last chunk of payload data to be - * sent. The `http2stream.sendTrailers()` method can then be used to sent trailing - * header fields to the peer. - * - * When `options.waitForTrailers` is set, the `Http2Stream` will not automatically - * close when the final `DATA` frame is transmitted. User code _must_ call either`http2stream.sendTrailers()` or `http2stream.close()` to close the`Http2Stream`. - * - * ```js - * const http2 = require('http2'); - * const fs = require('fs'); - * - * const server = http2.createServer(); - * server.on('stream', (stream) => { - * const fd = fs.openSync('/some/file', 'r'); - * - * const stat = fs.fstatSync(fd); - * const headers = { - * 'content-length': stat.size, - * 'last-modified': stat.mtime.toUTCString(), - * 'content-type': 'text/plain; charset=utf-8' - * }; - * stream.respondWithFD(fd, headers, { waitForTrailers: true }); - * stream.on('wantTrailers', () => { - * stream.sendTrailers({ ABC: 'some value to send' }); - * }); - * - * stream.on('close', () => fs.closeSync(fd)); - * }); - * ``` - * @since v8.4.0 - * @param fd A readable file descriptor. - */ - respondWithFD(fd: number | fs.promises.FileHandle, headers?: OutgoingHttpHeaders, options?: ServerStreamFileResponseOptions): void; - /** - * Sends a regular file as the response. The `path` must specify a regular file - * or an `'error'` event will be emitted on the `Http2Stream` object. - * - * When used, the `Http2Stream` object's `Duplex` interface will be closed - * automatically. - * - * The optional `options.statCheck` function may be specified to give user code - * an opportunity to set additional content headers based on the `fs.Stat` details - * of the given file: - * - * If an error occurs while attempting to read the file data, the `Http2Stream`will be closed using an `RST_STREAM` frame using the standard `INTERNAL_ERROR`code. If the `onError` callback is - * defined, then it will be called. Otherwise - * the stream will be destroyed. - * - * Example using a file path: - * - * ```js - * const http2 = require('http2'); - * const server = http2.createServer(); - * server.on('stream', (stream) => { - * function statCheck(stat, headers) { - * headers['last-modified'] = stat.mtime.toUTCString(); - * } - * - * function onError(err) { - * // stream.respond() can throw if the stream has been destroyed by - * // the other side. - * try { - * if (err.code === 'ENOENT') { - * stream.respond({ ':status': 404 }); - * } else { - * stream.respond({ ':status': 500 }); - * } - * } catch (err) { - * // Perform actual error handling. - * console.log(err); - * } - * stream.end(); - * } - * - * stream.respondWithFile('/some/file', - * { 'content-type': 'text/plain; charset=utf-8' }, - * { statCheck, onError }); - * }); - * ``` - * - * The `options.statCheck` function may also be used to cancel the send operation - * by returning `false`. For instance, a conditional request may check the stat - * results to determine if the file has been modified to return an appropriate`304` response: - * - * ```js - * const http2 = require('http2'); - * const server = http2.createServer(); - * server.on('stream', (stream) => { - * function statCheck(stat, headers) { - * // Check the stat here... 
- * stream.respond({ ':status': 304 }); - * return false; // Cancel the send operation - * } - * stream.respondWithFile('/some/file', - * { 'content-type': 'text/plain; charset=utf-8' }, - * { statCheck }); - * }); - * ``` - * - * The `content-length` header field will be automatically set. - * - * The `offset` and `length` options may be used to limit the response to a - * specific range subset. This can be used, for instance, to support HTTP Range - * requests. - * - * The `options.onError` function may also be used to handle all the errors - * that could happen before the delivery of the file is initiated. The - * default behavior is to destroy the stream. - * - * When the `options.waitForTrailers` option is set, the `'wantTrailers'` event - * will be emitted immediately after queuing the last chunk of payload data to be - * sent. The `http2stream.sendTrailers()` method can then be used to sent trailing - * header fields to the peer. - * - * When `options.waitForTrailers` is set, the `Http2Stream` will not automatically - * close when the final `DATA` frame is transmitted. User code must call either`http2stream.sendTrailers()` or `http2stream.close()` to close the`Http2Stream`. - * - * ```js - * const http2 = require('http2'); - * const server = http2.createServer(); - * server.on('stream', (stream) => { - * stream.respondWithFile('/some/file', - * { 'content-type': 'text/plain; charset=utf-8' }, - * { waitForTrailers: true }); - * stream.on('wantTrailers', () => { - * stream.sendTrailers({ ABC: 'some value to send' }); - * }); - * }); - * ``` - * @since v8.4.0 - */ - respondWithFile(path: string, headers?: OutgoingHttpHeaders, options?: ServerStreamFileResponseOptionsWithError): void; - } - // Http2Session - export interface Settings { - headerTableSize?: number | undefined; - enablePush?: boolean | undefined; - initialWindowSize?: number | undefined; - maxFrameSize?: number | undefined; - maxConcurrentStreams?: number | undefined; - maxHeaderListSize?: number | undefined; - enableConnectProtocol?: boolean | undefined; - } - export interface ClientSessionRequestOptions { - endStream?: boolean | undefined; - exclusive?: boolean | undefined; - parent?: number | undefined; - weight?: number | undefined; - waitForTrailers?: boolean | undefined; - signal?: AbortSignal | undefined; - } - export interface SessionState { - effectiveLocalWindowSize?: number | undefined; - effectiveRecvDataLength?: number | undefined; - nextStreamID?: number | undefined; - localWindowSize?: number | undefined; - lastProcStreamID?: number | undefined; - remoteWindowSize?: number | undefined; - outboundQueueSize?: number | undefined; - deflateDynamicTableSize?: number | undefined; - inflateDynamicTableSize?: number | undefined; - } - export interface Http2Session extends EventEmitter { - /** - * Value will be `undefined` if the `Http2Session` is not yet connected to a - * socket, `h2c` if the `Http2Session` is not connected to a `TLSSocket`, or - * will return the value of the connected `TLSSocket`'s own `alpnProtocol`property. - * @since v9.4.0 - */ - readonly alpnProtocol?: string | undefined; - /** - * Will be `true` if this `Http2Session` instance has been closed, otherwise`false`. - * @since v9.4.0 - */ - readonly closed: boolean; - /** - * Will be `true` if this `Http2Session` instance is still connecting, will be set - * to `false` before emitting `connect` event and/or calling the `http2.connect`callback. 
- * @since v10.0.0 - */ - readonly connecting: boolean; - /** - * Will be `true` if this `Http2Session` instance has been destroyed and must no - * longer be used, otherwise `false`. - * @since v8.4.0 - */ - readonly destroyed: boolean; - /** - * Value is `undefined` if the `Http2Session` session socket has not yet been - * connected, `true` if the `Http2Session` is connected with a `TLSSocket`, - * and `false` if the `Http2Session` is connected to any other kind of socket - * or stream. - * @since v9.4.0 - */ - readonly encrypted?: boolean | undefined; - /** - * A prototype-less object describing the current local settings of this`Http2Session`. The local settings are local to _this_`Http2Session` instance. - * @since v8.4.0 - */ - readonly localSettings: Settings; - /** - * If the `Http2Session` is connected to a `TLSSocket`, the `originSet` property - * will return an `Array` of origins for which the `Http2Session` may be - * considered authoritative. - * - * The `originSet` property is only available when using a secure TLS connection. - * @since v9.4.0 - */ - readonly originSet?: string[] | undefined; - /** - * Indicates whether the `Http2Session` is currently waiting for acknowledgment of - * a sent `SETTINGS` frame. Will be `true` after calling the`http2session.settings()` method. Will be `false` once all sent `SETTINGS`frames have been acknowledged. - * @since v8.4.0 - */ - readonly pendingSettingsAck: boolean; - /** - * A prototype-less object describing the current remote settings of this`Http2Session`. The remote settings are set by the _connected_ HTTP/2 peer. - * @since v8.4.0 - */ - readonly remoteSettings: Settings; - /** - * Returns a `Proxy` object that acts as a `net.Socket` (or `tls.TLSSocket`) but - * limits available methods to ones safe to use with HTTP/2. - * - * `destroy`, `emit`, `end`, `pause`, `read`, `resume`, and `write` will throw - * an error with code `ERR_HTTP2_NO_SOCKET_MANIPULATION`. See `Http2Session and Sockets` for more information. - * - * `setTimeout` method will be called on this `Http2Session`. - * - * All other interactions will be routed directly to the socket. - * @since v8.4.0 - */ - readonly socket: net.Socket | tls.TLSSocket; - /** - * Provides miscellaneous information about the current state of the`Http2Session`. - * - * An object describing the current status of this `Http2Session`. - * @since v8.4.0 - */ - readonly state: SessionState; - /** - * The `http2session.type` will be equal to`http2.constants.NGHTTP2_SESSION_SERVER` if this `Http2Session` instance is a - * server, and `http2.constants.NGHTTP2_SESSION_CLIENT` if the instance is a - * client. - * @since v8.4.0 - */ - readonly type: number; - /** - * Gracefully closes the `Http2Session`, allowing any existing streams to - * complete on their own and preventing new `Http2Stream` instances from being - * created. Once closed, `http2session.destroy()`_might_ be called if there - * are no open `Http2Stream` instances. - * - * If specified, the `callback` function is registered as a handler for the`'close'` event. - * @since v9.4.0 - */ - close(callback?: () => void): void; - /** - * Immediately terminates the `Http2Session` and the associated `net.Socket` or`tls.TLSSocket`. - * - * Once destroyed, the `Http2Session` will emit the `'close'` event. If `error`is not undefined, an `'error'` event will be emitted immediately before the`'close'` event. - * - * If there are any remaining open `Http2Streams` associated with the`Http2Session`, those will also be destroyed. 
- * @since v8.4.0 - * @param error An `Error` object if the `Http2Session` is being destroyed due to an error. - * @param code The HTTP/2 error code to send in the final `GOAWAY` frame. If unspecified, and `error` is not undefined, the default is `INTERNAL_ERROR`, otherwise defaults to `NO_ERROR`. - */ - destroy(error?: Error, code?: number): void; - /** - * Transmits a `GOAWAY` frame to the connected peer _without_ shutting down the`Http2Session`. - * @since v9.4.0 - * @param code An HTTP/2 error code - * @param lastStreamID The numeric ID of the last processed `Http2Stream` - * @param opaqueData A `TypedArray` or `DataView` instance containing additional data to be carried within the `GOAWAY` frame. - */ - goaway(code?: number, lastStreamID?: number, opaqueData?: NodeJS.ArrayBufferView): void; - /** - * Sends a `PING` frame to the connected HTTP/2 peer. A `callback` function must - * be provided. The method will return `true` if the `PING` was sent, `false`otherwise. - * - * The maximum number of outstanding (unacknowledged) pings is determined by the`maxOutstandingPings` configuration option. The default maximum is 10. - * - * If provided, the `payload` must be a `Buffer`, `TypedArray`, or `DataView`containing 8 bytes of data that will be transmitted with the `PING` and - * returned with the ping acknowledgment. - * - * The callback will be invoked with three arguments: an error argument that will - * be `null` if the `PING` was successfully acknowledged, a `duration` argument - * that reports the number of milliseconds elapsed since the ping was sent and the - * acknowledgment was received, and a `Buffer` containing the 8-byte `PING`payload. - * - * ```js - * session.ping(Buffer.from('abcdefgh'), (err, duration, payload) => { - * if (!err) { - * console.log(`Ping acknowledged in ${duration} milliseconds`); - * console.log(`With payload '${payload.toString()}'`); - * } - * }); - * ``` - * - * If the `payload` argument is not specified, the default payload will be the - * 64-bit timestamp (little endian) marking the start of the `PING` duration. - * @since v8.9.3 - * @param payload Optional ping payload. - */ - ping(callback: (err: Error | null, duration: number, payload: Buffer) => void): boolean; - ping(payload: NodeJS.ArrayBufferView, callback: (err: Error | null, duration: number, payload: Buffer) => void): boolean; - /** - * Calls `ref()` on this `Http2Session`instance's underlying `net.Socket`. - * @since v9.4.0 - */ - ref(): void; - /** - * Sets the local endpoint's window size. - * The `windowSize` is the total window size to set, not - * the delta. - * - * ```js - * const http2 = require('http2'); - * - * const server = http2.createServer(); - * const expectedWindowSize = 2 ** 20; - * server.on('connect', (session) => { - * - * // Set local window size to be 2 ** 20 - * session.setLocalWindowSize(expectedWindowSize); - * }); - * ``` - * @since v15.3.0, v14.18.0 - */ - setLocalWindowSize(windowSize: number): void; - /** - * Used to set a callback function that is called when there is no activity on - * the `Http2Session` after `msecs` milliseconds. The given `callback` is - * registered as a listener on the `'timeout'` event. - * @since v8.4.0 - */ - setTimeout(msecs: number, callback?: () => void): void; - /** - * Updates the current local settings for this `Http2Session` and sends a new`SETTINGS` frame to the connected HTTP/2 peer. 
- * - * Once called, the `http2session.pendingSettingsAck` property will be `true`while the session is waiting for the remote peer to acknowledge the new - * settings. - * - * The new settings will not become effective until the `SETTINGS` acknowledgment - * is received and the `'localSettings'` event is emitted. It is possible to send - * multiple `SETTINGS` frames while acknowledgment is still pending. - * @since v8.4.0 - * @param callback Callback that is called once the session is connected or right away if the session is already connected. - */ - settings(settings: Settings, callback?: (err: Error | null, settings: Settings, duration: number) => void): void; - /** - * Calls `unref()` on this `Http2Session`instance's underlying `net.Socket`. - * @since v9.4.0 - */ - unref(): void; - addListener(event: 'close', listener: () => void): this; - addListener(event: 'error', listener: (err: Error) => void): this; - addListener(event: 'frameError', listener: (frameType: number, errorCode: number, streamID: number) => void): this; - addListener(event: 'goaway', listener: (errorCode: number, lastStreamID: number, opaqueData: Buffer) => void): this; - addListener(event: 'localSettings', listener: (settings: Settings) => void): this; - addListener(event: 'ping', listener: () => void): this; - addListener(event: 'remoteSettings', listener: (settings: Settings) => void): this; - addListener(event: 'timeout', listener: () => void): this; - addListener(event: string | symbol, listener: (...args: any[]) => void): this; - emit(event: 'close'): boolean; - emit(event: 'error', err: Error): boolean; - emit(event: 'frameError', frameType: number, errorCode: number, streamID: number): boolean; - emit(event: 'goaway', errorCode: number, lastStreamID: number, opaqueData: Buffer): boolean; - emit(event: 'localSettings', settings: Settings): boolean; - emit(event: 'ping'): boolean; - emit(event: 'remoteSettings', settings: Settings): boolean; - emit(event: 'timeout'): boolean; - emit(event: string | symbol, ...args: any[]): boolean; - on(event: 'close', listener: () => void): this; - on(event: 'error', listener: (err: Error) => void): this; - on(event: 'frameError', listener: (frameType: number, errorCode: number, streamID: number) => void): this; - on(event: 'goaway', listener: (errorCode: number, lastStreamID: number, opaqueData: Buffer) => void): this; - on(event: 'localSettings', listener: (settings: Settings) => void): this; - on(event: 'ping', listener: () => void): this; - on(event: 'remoteSettings', listener: (settings: Settings) => void): this; - on(event: 'timeout', listener: () => void): this; - on(event: string | symbol, listener: (...args: any[]) => void): this; - once(event: 'close', listener: () => void): this; - once(event: 'error', listener: (err: Error) => void): this; - once(event: 'frameError', listener: (frameType: number, errorCode: number, streamID: number) => void): this; - once(event: 'goaway', listener: (errorCode: number, lastStreamID: number, opaqueData: Buffer) => void): this; - once(event: 'localSettings', listener: (settings: Settings) => void): this; - once(event: 'ping', listener: () => void): this; - once(event: 'remoteSettings', listener: (settings: Settings) => void): this; - once(event: 'timeout', listener: () => void): this; - once(event: string | symbol, listener: (...args: any[]) => void): this; - prependListener(event: 'close', listener: () => void): this; - prependListener(event: 'error', listener: (err: Error) => void): this; - prependListener(event: 'frameError', 
listener: (frameType: number, errorCode: number, streamID: number) => void): this; - prependListener(event: 'goaway', listener: (errorCode: number, lastStreamID: number, opaqueData: Buffer) => void): this; - prependListener(event: 'localSettings', listener: (settings: Settings) => void): this; - prependListener(event: 'ping', listener: () => void): this; - prependListener(event: 'remoteSettings', listener: (settings: Settings) => void): this; - prependListener(event: 'timeout', listener: () => void): this; - prependListener(event: string | symbol, listener: (...args: any[]) => void): this; - prependOnceListener(event: 'close', listener: () => void): this; - prependOnceListener(event: 'error', listener: (err: Error) => void): this; - prependOnceListener(event: 'frameError', listener: (frameType: number, errorCode: number, streamID: number) => void): this; - prependOnceListener(event: 'goaway', listener: (errorCode: number, lastStreamID: number, opaqueData: Buffer) => void): this; - prependOnceListener(event: 'localSettings', listener: (settings: Settings) => void): this; - prependOnceListener(event: 'ping', listener: () => void): this; - prependOnceListener(event: 'remoteSettings', listener: (settings: Settings) => void): this; - prependOnceListener(event: 'timeout', listener: () => void): this; - prependOnceListener(event: string | symbol, listener: (...args: any[]) => void): this; - } - export interface ClientHttp2Session extends Http2Session { - /** - * For HTTP/2 Client `Http2Session` instances only, the `http2session.request()`creates and returns an `Http2Stream` instance that can be used to send an - * HTTP/2 request to the connected server. - * - * When a `ClientHttp2Session` is first created, the socket may not yet be - * connected. if `clienthttp2session.request()` is called during this time, the - * actual request will be deferred until the socket is ready to go. - * If the `session` is closed before the actual request be executed, an`ERR_HTTP2_GOAWAY_SESSION` is thrown. - * - * This method is only available if `http2session.type` is equal to`http2.constants.NGHTTP2_SESSION_CLIENT`. - * - * ```js - * const http2 = require('http2'); - * const clientSession = http2.connect('https://localhost:1234'); - * const { - * HTTP2_HEADER_PATH, - * HTTP2_HEADER_STATUS - * } = http2.constants; - * - * const req = clientSession.request({ [HTTP2_HEADER_PATH]: '/' }); - * req.on('response', (headers) => { - * console.log(headers[HTTP2_HEADER_STATUS]); - * req.on('data', (chunk) => { // .. }); - * req.on('end', () => { // .. }); - * }); - * ``` - * - * When the `options.waitForTrailers` option is set, the `'wantTrailers'` event - * is emitted immediately after queuing the last chunk of payload data to be sent. - * The `http2stream.sendTrailers()` method can then be called to send trailing - * headers to the peer. - * - * When `options.waitForTrailers` is set, the `Http2Stream` will not automatically - * close when the final `DATA` frame is transmitted. User code must call either`http2stream.sendTrailers()` or `http2stream.close()` to close the`Http2Stream`. - * - * When `options.signal` is set with an `AbortSignal` and then `abort` on the - * corresponding `AbortController` is called, the request will emit an `'error'`event with an `AbortError` error. 
- * - * The `:method` and `:path` pseudo-headers are not specified within `headers`, - * they respectively default to: - * - * * `:method` \= `'GET'` - * * `:path` \= `/` - * @since v8.4.0 - */ - request(headers?: OutgoingHttpHeaders, options?: ClientSessionRequestOptions): ClientHttp2Stream; - addListener(event: 'altsvc', listener: (alt: string, origin: string, stream: number) => void): this; - addListener(event: 'origin', listener: (origins: string[]) => void): this; - addListener(event: 'connect', listener: (session: ClientHttp2Session, socket: net.Socket | tls.TLSSocket) => void): this; - addListener(event: 'stream', listener: (stream: ClientHttp2Stream, headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - addListener(event: string | symbol, listener: (...args: any[]) => void): this; - emit(event: 'altsvc', alt: string, origin: string, stream: number): boolean; - emit(event: 'origin', origins: ReadonlyArray): boolean; - emit(event: 'connect', session: ClientHttp2Session, socket: net.Socket | tls.TLSSocket): boolean; - emit(event: 'stream', stream: ClientHttp2Stream, headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number): boolean; - emit(event: string | symbol, ...args: any[]): boolean; - on(event: 'altsvc', listener: (alt: string, origin: string, stream: number) => void): this; - on(event: 'origin', listener: (origins: string[]) => void): this; - on(event: 'connect', listener: (session: ClientHttp2Session, socket: net.Socket | tls.TLSSocket) => void): this; - on(event: 'stream', listener: (stream: ClientHttp2Stream, headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - on(event: string | symbol, listener: (...args: any[]) => void): this; - once(event: 'altsvc', listener: (alt: string, origin: string, stream: number) => void): this; - once(event: 'origin', listener: (origins: string[]) => void): this; - once(event: 'connect', listener: (session: ClientHttp2Session, socket: net.Socket | tls.TLSSocket) => void): this; - once(event: 'stream', listener: (stream: ClientHttp2Stream, headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - once(event: string | symbol, listener: (...args: any[]) => void): this; - prependListener(event: 'altsvc', listener: (alt: string, origin: string, stream: number) => void): this; - prependListener(event: 'origin', listener: (origins: string[]) => void): this; - prependListener(event: 'connect', listener: (session: ClientHttp2Session, socket: net.Socket | tls.TLSSocket) => void): this; - prependListener(event: 'stream', listener: (stream: ClientHttp2Stream, headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - prependListener(event: string | symbol, listener: (...args: any[]) => void): this; - prependOnceListener(event: 'altsvc', listener: (alt: string, origin: string, stream: number) => void): this; - prependOnceListener(event: 'origin', listener: (origins: string[]) => void): this; - prependOnceListener(event: 'connect', listener: (session: ClientHttp2Session, socket: net.Socket | tls.TLSSocket) => void): this; - prependOnceListener(event: 'stream', listener: (stream: ClientHttp2Stream, headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this; - prependOnceListener(event: string | symbol, listener: (...args: any[]) => void): this; - } - export interface AlternativeServiceOptions { - origin: number | string | url.URL; - } - export interface ServerHttp2Session extends 
Http2Session { - readonly server: Http2Server | Http2SecureServer; - /** - * Submits an `ALTSVC` frame (as defined by [RFC 7838](https://tools.ietf.org/html/rfc7838)) to the connected client. - * - * ```js - * const http2 = require('http2'); - * - * const server = http2.createServer(); - * server.on('session', (session) => { - * // Set altsvc for origin https://example.org:80 - * session.altsvc('h2=":8000"', 'https://example.org:80'); - * }); - * - * server.on('stream', (stream) => { - * // Set altsvc for a specific stream - * stream.session.altsvc('h2=":8000"', stream.id); - * }); - * ``` - * - * Sending an `ALTSVC` frame with a specific stream ID indicates that the alternate - * service is associated with the origin of the given `Http2Stream`. - * - * The `alt` and origin string _must_ contain only ASCII bytes and are - * strictly interpreted as a sequence of ASCII bytes. The special value `'clear'`may be passed to clear any previously set alternative service for a given - * domain. - * - * When a string is passed for the `originOrStream` argument, it will be parsed as - * a URL and the origin will be derived. For instance, the origin for the - * HTTP URL `'https://example.org/foo/bar'` is the ASCII string`'https://example.org'`. An error will be thrown if either the given string - * cannot be parsed as a URL or if a valid origin cannot be derived. - * - * A `URL` object, or any object with an `origin` property, may be passed as`originOrStream`, in which case the value of the `origin` property will be - * used. The value of the `origin` property _must_ be a properly serialized - * ASCII origin. - * @since v9.4.0 - * @param alt A description of the alternative service configuration as defined by `RFC 7838`. - * @param originOrStream Either a URL string specifying the origin (or an `Object` with an `origin` property) or the numeric identifier of an active `Http2Stream` as given by the - * `http2stream.id` property. - */ - altsvc(alt: string, originOrStream: number | string | url.URL | AlternativeServiceOptions): void; - /** - * Submits an `ORIGIN` frame (as defined by [RFC 8336](https://tools.ietf.org/html/rfc8336)) to the connected client - * to advertise the set of origins for which the server is capable of providing - * authoritative responses. - * - * ```js - * const http2 = require('http2'); - * const options = getSecureOptionsSomehow(); - * const server = http2.createSecureServer(options); - * server.on('stream', (stream) => { - * stream.respond(); - * stream.end('ok'); - * }); - * server.on('session', (session) => { - * session.origin('https://example.com', 'https://example.org'); - * }); - * ``` - * - * When a string is passed as an `origin`, it will be parsed as a URL and the - * origin will be derived. For instance, the origin for the HTTP URL`'https://example.org/foo/bar'` is the ASCII string`'https://example.org'`. An error will be thrown if either the given - * string - * cannot be parsed as a URL or if a valid origin cannot be derived. - * - * A `URL` object, or any object with an `origin` property, may be passed as - * an `origin`, in which case the value of the `origin` property will be - * used. The value of the `origin` property _must_ be a properly serialized - * ASCII origin. 
- * - * Alternatively, the `origins` option may be used when creating a new HTTP/2 - * server using the `http2.createSecureServer()` method: - * - * ```js - * const http2 = require('http2'); - * const options = getSecureOptionsSomehow(); - * options.origins = ['https://example.com', 'https://example.org']; - * const server = http2.createSecureServer(options); - * server.on('stream', (stream) => { - * stream.respond(); - * stream.end('ok'); - * }); - * ``` - * @since v10.12.0 - * @param origins One or more URL Strings passed as separate arguments. - */ - origin( - ...origins: Array< - | string - | url.URL - | { - origin: string; - } - > - ): void; - addListener(event: 'connect', listener: (session: ServerHttp2Session, socket: net.Socket | tls.TLSSocket) => void): this; - addListener(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - addListener(event: string | symbol, listener: (...args: any[]) => void): this; - emit(event: 'connect', session: ServerHttp2Session, socket: net.Socket | tls.TLSSocket): boolean; - emit(event: 'stream', stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number): boolean; - emit(event: string | symbol, ...args: any[]): boolean; - on(event: 'connect', listener: (session: ServerHttp2Session, socket: net.Socket | tls.TLSSocket) => void): this; - on(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - on(event: string | symbol, listener: (...args: any[]) => void): this; - once(event: 'connect', listener: (session: ServerHttp2Session, socket: net.Socket | tls.TLSSocket) => void): this; - once(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - once(event: string | symbol, listener: (...args: any[]) => void): this; - prependListener(event: 'connect', listener: (session: ServerHttp2Session, socket: net.Socket | tls.TLSSocket) => void): this; - prependListener(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - prependListener(event: string | symbol, listener: (...args: any[]) => void): this; - prependOnceListener(event: 'connect', listener: (session: ServerHttp2Session, socket: net.Socket | tls.TLSSocket) => void): this; - prependOnceListener(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - prependOnceListener(event: string | symbol, listener: (...args: any[]) => void): this; - } - // Http2Server - export interface SessionOptions { - maxDeflateDynamicTableSize?: number | undefined; - maxSessionMemory?: number | undefined; - maxHeaderListPairs?: number | undefined; - maxOutstandingPings?: number | undefined; - maxSendHeaderBlockLength?: number | undefined; - paddingStrategy?: number | undefined; - peerMaxConcurrentStreams?: number | undefined; - settings?: Settings | undefined; - /** - * Specifies a timeout in milliseconds that - * a server should wait when an [`'unknownProtocol'`][] is emitted. If the - * socket has not been destroyed by that time the server will destroy it. 
- * @default 100000 - */ - unknownProtocolTimeout?: number | undefined; - selectPadding?(frameLen: number, maxFrameLen: number): number; - createConnection?(authority: url.URL, option: SessionOptions): stream.Duplex; - } - export interface ClientSessionOptions extends SessionOptions { - maxReservedRemoteStreams?: number | undefined; - createConnection?: ((authority: url.URL, option: SessionOptions) => stream.Duplex) | undefined; - protocol?: 'http:' | 'https:' | undefined; - } - export interface ServerSessionOptions extends SessionOptions { - Http1IncomingMessage?: typeof IncomingMessage | undefined; - Http1ServerResponse?: typeof ServerResponse | undefined; - Http2ServerRequest?: typeof Http2ServerRequest | undefined; - Http2ServerResponse?: typeof Http2ServerResponse | undefined; - } - export interface SecureClientSessionOptions extends ClientSessionOptions, tls.ConnectionOptions {} - export interface SecureServerSessionOptions extends ServerSessionOptions, tls.TlsOptions {} - export interface ServerOptions extends ServerSessionOptions {} - export interface SecureServerOptions extends SecureServerSessionOptions { - allowHTTP1?: boolean | undefined; - origins?: string[] | undefined; - } - interface HTTP2ServerCommon { - setTimeout(msec?: number, callback?: () => void): this; - /** - * Throws ERR_HTTP2_INVALID_SETTING_VALUE for invalid settings values. - * Throws ERR_INVALID_ARG_TYPE for invalid settings argument. - */ - updateSettings(settings: Settings): void; - } - export interface Http2Server extends net.Server, HTTP2ServerCommon { - addListener(event: 'checkContinue', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - addListener(event: 'request', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - addListener(event: 'session', listener: (session: ServerHttp2Session) => void): this; - addListener(event: 'sessionError', listener: (err: Error) => void): this; - addListener(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - addListener(event: 'timeout', listener: () => void): this; - addListener(event: string | symbol, listener: (...args: any[]) => void): this; - emit(event: 'checkContinue', request: Http2ServerRequest, response: Http2ServerResponse): boolean; - emit(event: 'request', request: Http2ServerRequest, response: Http2ServerResponse): boolean; - emit(event: 'session', session: ServerHttp2Session): boolean; - emit(event: 'sessionError', err: Error): boolean; - emit(event: 'stream', stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number): boolean; - emit(event: 'timeout'): boolean; - emit(event: string | symbol, ...args: any[]): boolean; - on(event: 'checkContinue', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - on(event: 'request', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - on(event: 'session', listener: (session: ServerHttp2Session) => void): this; - on(event: 'sessionError', listener: (err: Error) => void): this; - on(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - on(event: 'timeout', listener: () => void): this; - on(event: string | symbol, listener: (...args: any[]) => void): this; - once(event: 'checkContinue', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - once(event: 'request', listener: (request: Http2ServerRequest, 
response: Http2ServerResponse) => void): this; - once(event: 'session', listener: (session: ServerHttp2Session) => void): this; - once(event: 'sessionError', listener: (err: Error) => void): this; - once(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - once(event: 'timeout', listener: () => void): this; - once(event: string | symbol, listener: (...args: any[]) => void): this; - prependListener(event: 'checkContinue', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - prependListener(event: 'request', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - prependListener(event: 'session', listener: (session: ServerHttp2Session) => void): this; - prependListener(event: 'sessionError', listener: (err: Error) => void): this; - prependListener(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - prependListener(event: 'timeout', listener: () => void): this; - prependListener(event: string | symbol, listener: (...args: any[]) => void): this; - prependOnceListener(event: 'checkContinue', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - prependOnceListener(event: 'request', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - prependOnceListener(event: 'session', listener: (session: ServerHttp2Session) => void): this; - prependOnceListener(event: 'sessionError', listener: (err: Error) => void): this; - prependOnceListener(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - prependOnceListener(event: 'timeout', listener: () => void): this; - prependOnceListener(event: string | symbol, listener: (...args: any[]) => void): this; - } - export interface Http2SecureServer extends tls.Server, HTTP2ServerCommon { - addListener(event: 'checkContinue', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - addListener(event: 'request', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - addListener(event: 'session', listener: (session: ServerHttp2Session) => void): this; - addListener(event: 'sessionError', listener: (err: Error) => void): this; - addListener(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - addListener(event: 'timeout', listener: () => void): this; - addListener(event: 'unknownProtocol', listener: (socket: tls.TLSSocket) => void): this; - addListener(event: string | symbol, listener: (...args: any[]) => void): this; - emit(event: 'checkContinue', request: Http2ServerRequest, response: Http2ServerResponse): boolean; - emit(event: 'request', request: Http2ServerRequest, response: Http2ServerResponse): boolean; - emit(event: 'session', session: ServerHttp2Session): boolean; - emit(event: 'sessionError', err: Error): boolean; - emit(event: 'stream', stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number): boolean; - emit(event: 'timeout'): boolean; - emit(event: 'unknownProtocol', socket: tls.TLSSocket): boolean; - emit(event: string | symbol, ...args: any[]): boolean; - on(event: 'checkContinue', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - on(event: 'request', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - 
on(event: 'session', listener: (session: ServerHttp2Session) => void): this; - on(event: 'sessionError', listener: (err: Error) => void): this; - on(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - on(event: 'timeout', listener: () => void): this; - on(event: 'unknownProtocol', listener: (socket: tls.TLSSocket) => void): this; - on(event: string | symbol, listener: (...args: any[]) => void): this; - once(event: 'checkContinue', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - once(event: 'request', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - once(event: 'session', listener: (session: ServerHttp2Session) => void): this; - once(event: 'sessionError', listener: (err: Error) => void): this; - once(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - once(event: 'timeout', listener: () => void): this; - once(event: 'unknownProtocol', listener: (socket: tls.TLSSocket) => void): this; - once(event: string | symbol, listener: (...args: any[]) => void): this; - prependListener(event: 'checkContinue', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - prependListener(event: 'request', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - prependListener(event: 'session', listener: (session: ServerHttp2Session) => void): this; - prependListener(event: 'sessionError', listener: (err: Error) => void): this; - prependListener(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - prependListener(event: 'timeout', listener: () => void): this; - prependListener(event: 'unknownProtocol', listener: (socket: tls.TLSSocket) => void): this; - prependListener(event: string | symbol, listener: (...args: any[]) => void): this; - prependOnceListener(event: 'checkContinue', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - prependOnceListener(event: 'request', listener: (request: Http2ServerRequest, response: Http2ServerResponse) => void): this; - prependOnceListener(event: 'session', listener: (session: ServerHttp2Session) => void): this; - prependOnceListener(event: 'sessionError', listener: (err: Error) => void): this; - prependOnceListener(event: 'stream', listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void): this; - prependOnceListener(event: 'timeout', listener: () => void): this; - prependOnceListener(event: 'unknownProtocol', listener: (socket: tls.TLSSocket) => void): this; - prependOnceListener(event: string | symbol, listener: (...args: any[]) => void): this; - } - /** - * An `Http2ServerRequest` object is created by {@link Server} or {@link SecureServer} and passed as the first argument to the `'request'` event. It may be used to access the request status, - * headers, and - * data. - * @since v8.4.0 - */ - export class Http2ServerRequest extends stream.Readable { - constructor(stream: ServerHttp2Stream, headers: IncomingHttpHeaders, options: stream.ReadableOptions, rawHeaders: ReadonlyArray<string>); - /** - * The `request.aborted` property will be `true` if the request has - * been aborted. - * @since v10.1.0 - */ - readonly aborted: boolean; - /** - * The request authority pseudo header field.
Because HTTP/2 allows requests - * to set either `:authority` or `host`, this value is derived from `req.headers[':authority']` if present. Otherwise, it is derived from `req.headers['host']`. - * @since v8.4.0 - */ - readonly authority: string; - /** - * See `request.socket`. - * @since v8.4.0 - * @deprecated Since v13.0.0 - Use `socket`. - */ - readonly connection: net.Socket | tls.TLSSocket; - /** - * The `request.complete` property will be `true` if the request has - * been completed, aborted, or destroyed. - * @since v12.10.0 - */ - readonly complete: boolean; - /** - * The request/response headers object. - * - * Key-value pairs of header names and values. Header names are lower-cased. - * - * ```js - * // Prints something like: - * // - * // { 'user-agent': 'curl/7.22.0', - * // host: '127.0.0.1:8000', - * // accept: '*' } - * console.log(request.headers); - * ``` - * - * See `HTTP/2 Headers Object`. - * - * In HTTP/2, the request path, host name, protocol, and method are represented as - * special headers prefixed with the `:` character (e.g. `':path'`). These special - * headers will be included in the `request.headers` object. Care must be taken not - * to inadvertently modify these special headers or errors may occur. For instance, - * removing all headers from the request will cause errors to occur: - * - * ```js - * removeAllHeaders(request.headers); - * assert(request.url); // Fails because the :path header has been removed - * ``` - * @since v8.4.0 - */ - readonly headers: IncomingHttpHeaders; - /** - * In the case of a server request, the HTTP version sent by the client. In the case of - * a client response, the HTTP version of the connected-to server. Returns `'2.0'`. - * - * Also `message.httpVersionMajor` is the first integer and `message.httpVersionMinor` is the second. - * @since v8.4.0 - */ - readonly httpVersion: string; - readonly httpVersionMinor: number; - readonly httpVersionMajor: number; - /** - * The request method as a string. Read-only. Examples: `'GET'`, `'DELETE'`. - * @since v8.4.0 - */ - readonly method: string; - /** - * The raw request/response headers list exactly as they were received. - * - * The keys and values are in the same list. It is _not_ a - * list of tuples. So, the even-numbered offsets are key values, and the - * odd-numbered offsets are the associated values. - * - * Header names are not lowercased, and duplicates are not merged. - * - * ```js - * // Prints something like: - * // - * // [ 'user-agent', - * // 'this is invalid because there can be only one', - * // 'User-Agent', - * // 'curl/7.22.0', - * // 'Host', - * // '127.0.0.1:8000', - * // 'ACCEPT', - * // '*' ] - * console.log(request.rawHeaders); - * ``` - * @since v8.4.0 - */ - readonly rawHeaders: string[]; - /** - * The raw request/response trailer keys and values exactly as they were - * received. Only populated at the `'end'` event. - * @since v8.4.0 - */ - readonly rawTrailers: string[]; - /** - * The request scheme pseudo header field indicating the scheme - * portion of the target URL. - * @since v8.4.0 - */ - readonly scheme: string; - /** - * Returns a `Proxy` object that acts as a `net.Socket` (or `tls.TLSSocket`) but - * applies getters, setters, and methods based on HTTP/2 logic. - * - * `destroyed`, `readable`, and `writable` properties will be retrieved from and - * set on `request.stream`. - * - * `destroy`, `emit`, `end`, `on` and `once` methods will be called on `request.stream`. - * - * `setTimeout` method will be called on `request.stream.session`.
- * - * `pause`, `read`, `resume`, and `write` will throw an error with code `ERR_HTTP2_NO_SOCKET_MANIPULATION`. See `Http2Session and Sockets` for - * more information. - * - * All other interactions will be routed directly to the socket. With TLS support, - * use `request.socket.getPeerCertificate()` to obtain the client's - * authentication details. - * @since v8.4.0 - */ - readonly socket: net.Socket | tls.TLSSocket; - /** - * The `Http2Stream` object backing the request. - * @since v8.4.0 - */ - readonly stream: ServerHttp2Stream; - /** - * The request/response trailers object. Only populated at the `'end'` event. - * @since v8.4.0 - */ - readonly trailers: IncomingHttpHeaders; - /** - * Request URL string. This contains only the URL that is present in the actual - * HTTP request. If the request is: - * - * ```http - * GET /status?name=ryan HTTP/1.1 - * Accept: text/plain - * ``` - * - * Then `request.url` will be: - * - * ```js - * '/status?name=ryan' - * ``` - * - * To parse the URL into its parts, `new URL()` can be used: - * - * ```console - * $ node - * > new URL('/status?name=ryan', 'http://example.com') - * URL { - * href: 'http://example.com/status?name=ryan', - * origin: 'http://example.com', - * protocol: 'http:', - * username: '', - * password: '', - * host: 'example.com', - * hostname: 'example.com', - * port: '', - * pathname: '/status', - * search: '?name=ryan', - * searchParams: URLSearchParams { 'name' => 'ryan' }, - * hash: '' - * } - * ``` - * @since v8.4.0 - */ - url: string; - /** - * Sets the `Http2Stream`'s timeout value to `msecs`. If a callback is - * provided, then it is added as a listener on the `'timeout'` event on - * the response object. - * - * If no `'timeout'` listener is added to the request, the response, or - * the server, then `Http2Stream`s are destroyed when they time out. If a - * handler is assigned to the request, the response, or the server's `'timeout'` events, timed-out sockets must be handled explicitly.
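- * - * For example (a hedged sketch; the 30-second value and the cancel code are illustrative), a timed-out request can be cancelled explicitly: - * - * ```js - * request.setTimeout(30000, () => { - * // Close the backing stream instead of leaving it hanging: - * request.stream.close(http2.constants.NGHTTP2_CANCEL); - * }); - * ```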
- * @since v8.4.0 - */ - setTimeout(msecs: number, callback?: () => void): void; - read(size?: number): Buffer | string | null; - addListener(event: 'aborted', listener: (hadError: boolean, code: number) => void): this; - addListener(event: 'close', listener: () => void): this; - addListener(event: 'data', listener: (chunk: Buffer | string) => void): this; - addListener(event: 'end', listener: () => void): this; - addListener(event: 'readable', listener: () => void): this; - addListener(event: 'error', listener: (err: Error) => void): this; - addListener(event: string | symbol, listener: (...args: any[]) => void): this; - emit(event: 'aborted', hadError: boolean, code: number): boolean; - emit(event: 'close'): boolean; - emit(event: 'data', chunk: Buffer | string): boolean; - emit(event: 'end'): boolean; - emit(event: 'readable'): boolean; - emit(event: 'error', err: Error): boolean; - emit(event: string | symbol, ...args: any[]): boolean; - on(event: 'aborted', listener: (hadError: boolean, code: number) => void): this; - on(event: 'close', listener: () => void): this; - on(event: 'data', listener: (chunk: Buffer | string) => void): this; - on(event: 'end', listener: () => void): this; - on(event: 'readable', listener: () => void): this; - on(event: 'error', listener: (err: Error) => void): this; - on(event: string | symbol, listener: (...args: any[]) => void): this; - once(event: 'aborted', listener: (hadError: boolean, code: number) => void): this; - once(event: 'close', listener: () => void): this; - once(event: 'data', listener: (chunk: Buffer | string) => void): this; - once(event: 'end', listener: () => void): this; - once(event: 'readable', listener: () => void): this; - once(event: 'error', listener: (err: Error) => void): this; - once(event: string | symbol, listener: (...args: any[]) => void): this; - prependListener(event: 'aborted', listener: (hadError: boolean, code: number) => void): this; - prependListener(event: 'close', listener: () => void): this; - prependListener(event: 'data', listener: (chunk: Buffer | string) => void): this; - prependListener(event: 'end', listener: () => void): this; - prependListener(event: 'readable', listener: () => void): this; - prependListener(event: 'error', listener: (err: Error) => void): this; - prependListener(event: string | symbol, listener: (...args: any[]) => void): this; - prependOnceListener(event: 'aborted', listener: (hadError: boolean, code: number) => void): this; - prependOnceListener(event: 'close', listener: () => void): this; - prependOnceListener(event: 'data', listener: (chunk: Buffer | string) => void): this; - prependOnceListener(event: 'end', listener: () => void): this; - prependOnceListener(event: 'readable', listener: () => void): this; - prependOnceListener(event: 'error', listener: (err: Error) => void): this; - prependOnceListener(event: string | symbol, listener: (...args: any[]) => void): this; - } - /** - * This object is created internally by an HTTP server, not by the user. It is - * passed as the second parameter to the `'request'` event. - * @since v8.4.0 - */ - export class Http2ServerResponse extends stream.Writable { - constructor(stream: ServerHttp2Stream); - /** - * See `response.socket`. - * @since v8.4.0 - * @deprecated Since v13.0.0 - Use `socket`. - */ - readonly connection: net.Socket | tls.TLSSocket; - /** - * Boolean value that indicates whether the response has completed. Starts - * as `false`. After `response.end()` executes, the value will be `true`. 
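- * - * A minimal illustration (a hedged sketch; new code should prefer the non-deprecated `writableEnded`): - * - * ```js - * response.end('ok', () => { - * console.log(response.writableEnded); // true - * }); - * ```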
- * @since v8.4.0 - * @deprecated Since v13.4.0,v12.16.0 - Use `writableEnded`. - */ - readonly finished: boolean; - /** - * True if headers were sent, false otherwise (read-only). - * @since v8.4.0 - */ - readonly headersSent: boolean; - /** - * A reference to the original HTTP/2 request object. - * @since v15.7.0 - */ - readonly req: Http2ServerRequest; - /** - * Returns a `Proxy` object that acts as a `net.Socket` (or `tls.TLSSocket`) but - * applies getters, setters, and methods based on HTTP/2 logic. - * - * `destroyed`, `readable`, and `writable` properties will be retrieved from and - * set on `response.stream`. - * - * `destroy`, `emit`, `end`, `on` and `once` methods will be called on `response.stream`. - * - * `setTimeout` method will be called on `response.stream.session`. - * - * `pause`, `read`, `resume`, and `write` will throw an error with code `ERR_HTTP2_NO_SOCKET_MANIPULATION`. See `Http2Session and Sockets` for - * more information. - * - * All other interactions will be routed directly to the socket. - * - * ```js - * const http2 = require('http2'); - * const server = http2.createServer((req, res) => { - * const ip = req.socket.remoteAddress; - * const port = req.socket.remotePort; - * res.end(`Your IP address is ${ip} and your source port is ${port}.`); - * }).listen(3000); - * ``` - * @since v8.4.0 - */ - readonly socket: net.Socket | tls.TLSSocket; - /** - * The `Http2Stream` object backing the response. - * @since v8.4.0 - */ - readonly stream: ServerHttp2Stream; - /** - * When true, the Date header will be automatically generated and sent in - * the response if it is not already present in the headers. Defaults to true. - * - * This should only be disabled for testing; HTTP requires the Date header - * in responses. - * @since v8.4.0 - */ - sendDate: boolean; - /** - * When using implicit headers (not calling `response.writeHead()` explicitly), - * this property controls the status code that will be sent to the client when - * the headers get flushed. - * - * ```js - * response.statusCode = 404; - * ``` - * - * After the response header was sent to the client, this property indicates the - * status code that was sent out. - * @since v8.4.0 - */ - statusCode: number; - /** - * Status message is not supported by HTTP/2 (RFC 7540 8.1.2.4). It returns - * an empty string. - * @since v8.4.0 - */ - statusMessage: ''; - /** - * This method adds HTTP trailing headers (a header, but at the end of the - * message) to the response. - * - * Attempting to set a header field name or value that contains invalid characters - * will result in a `TypeError` being thrown. - * @since v8.4.0 - */ - addTrailers(trailers: OutgoingHttpHeaders): void; - /** - * This method signals to the server that all of the response headers and body - * have been sent; that the server should consider this message complete. - * The method, `response.end()`, MUST be called on each response. - * - * If `data` is specified, it is equivalent to calling `response.write(data, encoding)` followed by `response.end(callback)`. - * - * If `callback` is specified, it will be called when the response stream - * is finished. - * @since v8.4.0 - */ - end(callback?: () => void): this; - end(data: string | Uint8Array, callback?: () => void): this; - end(data: string | Uint8Array, encoding: BufferEncoding, callback?: () => void): this; - /** - * Reads out a header that has already been queued but not sent to the client. - * The name is case-insensitive.
- * - * ```js - * const contentType = response.getHeader('content-type'); - * ``` - * @since v8.4.0 - */ - getHeader(name: string): string; - /** - * Returns an array containing the unique names of the current outgoing headers. - * All header names are lowercase. - * - * ```js - * response.setHeader('Foo', 'bar'); - * response.setHeader('Set-Cookie', ['foo=bar', 'bar=baz']); - * - * const headerNames = response.getHeaderNames(); - * // headerNames === ['foo', 'set-cookie'] - * ``` - * @since v8.4.0 - */ - getHeaderNames(): string[]; - /** - * Returns a shallow copy of the current outgoing headers. Since a shallow copy - * is used, array values may be mutated without additional calls to various - * header-related http module methods. The keys of the returned object are the - * header names and the values are the respective header values. All header names - * are lowercase. - * - * The object returned by the `response.getHeaders()` method _does not_ prototypically inherit from the JavaScript `Object`. This means that typical `Object` methods such as `obj.toString()`, - * `obj.hasOwnProperty()`, and others - * are not defined and _will not work_. - * - * ```js - * response.setHeader('Foo', 'bar'); - * response.setHeader('Set-Cookie', ['foo=bar', 'bar=baz']); - * - * const headers = response.getHeaders(); - * // headers === { foo: 'bar', 'set-cookie': ['foo=bar', 'bar=baz'] } - * ``` - * @since v8.4.0 - */ - getHeaders(): OutgoingHttpHeaders; - /** - * Returns `true` if the header identified by `name` is currently set in the - * outgoing headers. The header name matching is case-insensitive. - * - * ```js - * const hasContentType = response.hasHeader('content-type'); - * ``` - * @since v8.4.0 - */ - hasHeader(name: string): boolean; - /** - * Removes a header that has been queued for implicit sending. - * - * ```js - * response.removeHeader('Content-Encoding'); - * ``` - * @since v8.4.0 - */ - removeHeader(name: string): void; - /** - * Sets a single header value for implicit headers. If this header already exists - * in the to-be-sent headers, its value will be replaced. Use an array of strings - * here to send multiple headers with the same name. - * - * ```js - * response.setHeader('Content-Type', 'text/html; charset=utf-8'); - * ``` - * - * or - * - * ```js - * response.setHeader('Set-Cookie', ['type=ninja', 'language=javascript']); - * ``` - * - * Attempting to set a header field name or value that contains invalid characters - * will result in a `TypeError` being thrown. - * - * When headers have been set with `response.setHeader()`, they will be merged - * with any headers passed to `response.writeHead()`, with the headers passed - * to `response.writeHead()` given precedence. - * - * ```js - * // Returns content-type = text/plain - * const server = http2.createServer((req, res) => { - * res.setHeader('Content-Type', 'text/html; charset=utf-8'); - * res.setHeader('X-Foo', 'bar'); - * res.writeHead(200, { 'Content-Type': 'text/plain; charset=utf-8' }); - * res.end('ok'); - * }); - * ``` - * @since v8.4.0 - */ - setHeader(name: string, value: number | string | ReadonlyArray<string>): void; - /** - * Sets the `Http2Stream`'s timeout value to `msecs`. If a callback is - * provided, then it is added as a listener on the `'timeout'` event on - * the response object. - * - * If no `'timeout'` listener is added to the request, the response, or - * the server, then `Http2Stream`s are destroyed when they time out.
If a - * handler is assigned to the request, the response, or the server's `'timeout'` events, timed-out sockets must be handled explicitly. - * @since v8.4.0 - */ - setTimeout(msecs: number, callback?: () => void): void; - /** - * If this method is called and `response.writeHead()` has not been called, - * it will switch to implicit header mode and flush the implicit headers. - * - * This sends a chunk of the response body. This method may - * be called multiple times to provide successive parts of the body. - * - * In the `http` module, the response body is omitted when the - * request is a HEAD request. Similarly, the `204` and `304` responses _must not_ include a message body. - * - * `chunk` can be a string or a buffer. If `chunk` is a string, - * the second parameter specifies how to encode it into a byte stream. - * By default the `encoding` is `'utf8'`. `callback` will be called when this chunk - * of data is flushed. - * - * This is the raw HTTP body and has nothing to do with higher-level multi-part - * body encodings that may be used. - * - * The first time `response.write()` is called, it will send the buffered - * header information and the first chunk of the body to the client. The second - * time `response.write()` is called, Node.js assumes data will be streamed, - * and sends the new data separately. That is, the response is buffered up to the - * first chunk of the body. - * - * Returns `true` if the entire data was flushed successfully to the kernel - * buffer. Returns `false` if all or part of the data was queued in user memory. `'drain'` will be emitted when the buffer is free again. - * @since v8.4.0 - */ - write(chunk: string | Uint8Array, callback?: (err: Error) => void): boolean; - write(chunk: string | Uint8Array, encoding: BufferEncoding, callback?: (err: Error) => void): boolean; - /** - * Sends a status `100 Continue` to the client, indicating that the request body - * should be sent. See the `'checkContinue'` event on `Http2Server` and `Http2SecureServer`. - * @since v8.4.0 - */ - writeContinue(): void; - /** - * Sends a status `103 Early Hints` to the client with a Link header, - * indicating that the user agent can preload/preconnect the linked resources. - * `hints` is an object containing the values of headers to be sent with - * the early hints message. - * - * Example: - * - * ```js - * const earlyHintsLink = '</styles.css>; rel=preload; as=style'; - * response.writeEarlyHints({ - * 'link': earlyHintsLink, - * }); - * - * const earlyHintsLinks = [ - * '</styles.css>; rel=preload; as=style', - * '</scripts.js>; rel=preload; as=script', - * ]; - * response.writeEarlyHints({ - * 'link': earlyHintsLinks, - * 'x-trace-id': 'id for diagnostics' - * }); - * ``` - * - * @since v18.11.0 - * @param hints An object containing the values of headers - */ - writeEarlyHints(hints: Record<string, string | string[]>): void; - /** - * Sends a response header to the request. The status code is a 3-digit HTTP - * status code, like `404`. The last argument, `headers`, are the response headers. - * - * Returns a reference to the `Http2ServerResponse`, so that calls can be chained. - * - * For compatibility with `HTTP/1`, a human-readable `statusMessage` may be - * passed as the second argument. However, because the `statusMessage` has no - * meaning within HTTP/2, the argument will have no effect and a process warning - * will be emitted.
- * - * ```js - * const body = 'hello world'; - * response.writeHead(200, { - * 'Content-Length': Buffer.byteLength(body), - * 'Content-Type': 'text/plain; charset=utf-8', - * }); - * ``` - * - * `Content-Length` is given in bytes, not characters. The `Buffer.byteLength()` API may be used to determine the number of bytes in a - * given encoding. On outbound messages, Node.js does not check if Content-Length - * and the length of the body being transmitted are equal or not. However, when - * receiving messages, Node.js will automatically reject messages when the `Content-Length` does not match the actual payload size. - * - * This method may be called at most one time on a message before `response.end()` is called. - * - * If `response.write()` or `response.end()` are called before calling - * this, the implicit/mutable headers will be calculated and this function will be called. - * - * When headers have been set with `response.setHeader()`, they will be merged - * with any headers passed to `response.writeHead()`, with the headers passed - * to `response.writeHead()` given precedence. - * - * ```js - * // Returns content-type = text/plain - * const server = http2.createServer((req, res) => { - * res.setHeader('Content-Type', 'text/html; charset=utf-8'); - * res.setHeader('X-Foo', 'bar'); - * res.writeHead(200, { 'Content-Type': 'text/plain; charset=utf-8' }); - * res.end('ok'); - * }); - * ``` - * - * Attempting to set a header field name or value that contains invalid characters - * will result in a `TypeError` being thrown. - * @since v8.4.0 - */ - writeHead(statusCode: number, headers?: OutgoingHttpHeaders): this; - writeHead(statusCode: number, statusMessage: string, headers?: OutgoingHttpHeaders): this; - /** - * Call `http2stream.pushStream()` with the given headers, and wrap the - * given `Http2Stream` on a newly created `Http2ServerResponse` as the callback - * parameter if successful. When `Http2ServerRequest` is closed, the callback is - * called with an error `ERR_HTTP2_INVALID_STREAM`.
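- * - * A hedged sketch of pushing a stylesheet alongside a response (the `:path` value and the CSS body are invented, and the push only succeeds if the client has server push enabled): - * - * ```js - * response.createPushResponse({ ':path': '/style.css' }, (err, pushResponse) => { - * if (err) return; // e.g. push disabled or the request already closed - * pushResponse.writeHead(200, { 'content-type': 'text/css' }); - * pushResponse.end('body { color: teal; }'); - * }); - * ```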
- * @since v8.4.0 - * @param headers An object describing the headers - * @param callback Called once `http2stream.pushStream()` is finished, or either when the attempt to create the pushed `Http2Stream` has failed or has been rejected, or the state of - * `Http2ServerRequest` is closed prior to calling the `http2stream.pushStream()` method - */ - createPushResponse(headers: OutgoingHttpHeaders, callback: (err: Error | null, res: Http2ServerResponse) => void): void; - addListener(event: 'close', listener: () => void): this; - addListener(event: 'drain', listener: () => void): this; - addListener(event: 'error', listener: (error: Error) => void): this; - addListener(event: 'finish', listener: () => void): this; - addListener(event: 'pipe', listener: (src: stream.Readable) => void): this; - addListener(event: 'unpipe', listener: (src: stream.Readable) => void): this; - addListener(event: string | symbol, listener: (...args: any[]) => void): this; - emit(event: 'close'): boolean; - emit(event: 'drain'): boolean; - emit(event: 'error', error: Error): boolean; - emit(event: 'finish'): boolean; - emit(event: 'pipe', src: stream.Readable): boolean; - emit(event: 'unpipe', src: stream.Readable): boolean; - emit(event: string | symbol, ...args: any[]): boolean; - on(event: 'close', listener: () => void): this; - on(event: 'drain', listener: () => void): this; - on(event: 'error', listener: (error: Error) => void): this; - on(event: 'finish', listener: () => void): this; - on(event: 'pipe', listener: (src: stream.Readable) => void): this; - on(event: 'unpipe', listener: (src: stream.Readable) => void): this; - on(event: string | symbol, listener: (...args: any[]) => void): this; - once(event: 'close', listener: () => void): this; - once(event: 'drain', listener: () => void): this; - once(event: 'error', listener: (error: Error) => void): this; - once(event: 'finish', listener: () => void): this; - once(event: 'pipe', listener: (src: stream.Readable) => void): this; - once(event: 'unpipe', listener: (src: stream.Readable) => void): this; - once(event: string | symbol, listener: (...args: any[]) => void): this; - prependListener(event: 'close', listener: () => void): this; - prependListener(event: 'drain', listener: () => void): this; - prependListener(event: 'error', listener: (error: Error) => void): this; - prependListener(event: 'finish', listener: () => void): this; - prependListener(event: 'pipe', listener: (src: stream.Readable) => void): this; - prependListener(event: 'unpipe', listener: (src: stream.Readable) => void): this; - prependListener(event: string | symbol, listener: (...args: any[]) => void): this; - prependOnceListener(event: 'close', listener: () => void): this; - prependOnceListener(event: 'drain', listener: () => void): this; - prependOnceListener(event: 'error', listener: (error: Error) => void): this; - prependOnceListener(event: 'finish', listener: () => void): this; - prependOnceListener(event: 'pipe', listener: (src: stream.Readable) => void): this; - prependOnceListener(event: 'unpipe', listener: (src: stream.Readable) => void): this; - prependOnceListener(event: string | symbol, listener: (...args: any[]) => void): this; - } - export namespace constants { - const NGHTTP2_SESSION_SERVER: number; - const NGHTTP2_SESSION_CLIENT: number; - const NGHTTP2_STREAM_STATE_IDLE: number; - const NGHTTP2_STREAM_STATE_OPEN: number; - const NGHTTP2_STREAM_STATE_RESERVED_LOCAL: number; - const NGHTTP2_STREAM_STATE_RESERVED_REMOTE: number; - const NGHTTP2_STREAM_STATE_HALF_CLOSED_LOCAL: 
number; - const NGHTTP2_STREAM_STATE_HALF_CLOSED_REMOTE: number; - const NGHTTP2_STREAM_STATE_CLOSED: number; - const NGHTTP2_NO_ERROR: number; - const NGHTTP2_PROTOCOL_ERROR: number; - const NGHTTP2_INTERNAL_ERROR: number; - const NGHTTP2_FLOW_CONTROL_ERROR: number; - const NGHTTP2_SETTINGS_TIMEOUT: number; - const NGHTTP2_STREAM_CLOSED: number; - const NGHTTP2_FRAME_SIZE_ERROR: number; - const NGHTTP2_REFUSED_STREAM: number; - const NGHTTP2_CANCEL: number; - const NGHTTP2_COMPRESSION_ERROR: number; - const NGHTTP2_CONNECT_ERROR: number; - const NGHTTP2_ENHANCE_YOUR_CALM: number; - const NGHTTP2_INADEQUATE_SECURITY: number; - const NGHTTP2_HTTP_1_1_REQUIRED: number; - const NGHTTP2_ERR_FRAME_SIZE_ERROR: number; - const NGHTTP2_FLAG_NONE: number; - const NGHTTP2_FLAG_END_STREAM: number; - const NGHTTP2_FLAG_END_HEADERS: number; - const NGHTTP2_FLAG_ACK: number; - const NGHTTP2_FLAG_PADDED: number; - const NGHTTP2_FLAG_PRIORITY: number; - const DEFAULT_SETTINGS_HEADER_TABLE_SIZE: number; - const DEFAULT_SETTINGS_ENABLE_PUSH: number; - const DEFAULT_SETTINGS_INITIAL_WINDOW_SIZE: number; - const DEFAULT_SETTINGS_MAX_FRAME_SIZE: number; - const MAX_MAX_FRAME_SIZE: number; - const MIN_MAX_FRAME_SIZE: number; - const MAX_INITIAL_WINDOW_SIZE: number; - const NGHTTP2_DEFAULT_WEIGHT: number; - const NGHTTP2_SETTINGS_HEADER_TABLE_SIZE: number; - const NGHTTP2_SETTINGS_ENABLE_PUSH: number; - const NGHTTP2_SETTINGS_MAX_CONCURRENT_STREAMS: number; - const NGHTTP2_SETTINGS_INITIAL_WINDOW_SIZE: number; - const NGHTTP2_SETTINGS_MAX_FRAME_SIZE: number; - const NGHTTP2_SETTINGS_MAX_HEADER_LIST_SIZE: number; - const PADDING_STRATEGY_NONE: number; - const PADDING_STRATEGY_MAX: number; - const PADDING_STRATEGY_CALLBACK: number; - const HTTP2_HEADER_STATUS: string; - const HTTP2_HEADER_METHOD: string; - const HTTP2_HEADER_AUTHORITY: string; - const HTTP2_HEADER_SCHEME: string; - const HTTP2_HEADER_PATH: string; - const HTTP2_HEADER_ACCEPT_CHARSET: string; - const HTTP2_HEADER_ACCEPT_ENCODING: string; - const HTTP2_HEADER_ACCEPT_LANGUAGE: string; - const HTTP2_HEADER_ACCEPT_RANGES: string; - const HTTP2_HEADER_ACCEPT: string; - const HTTP2_HEADER_ACCESS_CONTROL_ALLOW_ORIGIN: string; - const HTTP2_HEADER_AGE: string; - const HTTP2_HEADER_ALLOW: string; - const HTTP2_HEADER_AUTHORIZATION: string; - const HTTP2_HEADER_CACHE_CONTROL: string; - const HTTP2_HEADER_CONNECTION: string; - const HTTP2_HEADER_CONTENT_DISPOSITION: string; - const HTTP2_HEADER_CONTENT_ENCODING: string; - const HTTP2_HEADER_CONTENT_LANGUAGE: string; - const HTTP2_HEADER_CONTENT_LENGTH: string; - const HTTP2_HEADER_CONTENT_LOCATION: string; - const HTTP2_HEADER_CONTENT_MD5: string; - const HTTP2_HEADER_CONTENT_RANGE: string; - const HTTP2_HEADER_CONTENT_TYPE: string; - const HTTP2_HEADER_COOKIE: string; - const HTTP2_HEADER_DATE: string; - const HTTP2_HEADER_ETAG: string; - const HTTP2_HEADER_EXPECT: string; - const HTTP2_HEADER_EXPIRES: string; - const HTTP2_HEADER_FROM: string; - const HTTP2_HEADER_HOST: string; - const HTTP2_HEADER_IF_MATCH: string; - const HTTP2_HEADER_IF_MODIFIED_SINCE: string; - const HTTP2_HEADER_IF_NONE_MATCH: string; - const HTTP2_HEADER_IF_RANGE: string; - const HTTP2_HEADER_IF_UNMODIFIED_SINCE: string; - const HTTP2_HEADER_LAST_MODIFIED: string; - const HTTP2_HEADER_LINK: string; - const HTTP2_HEADER_LOCATION: string; - const HTTP2_HEADER_MAX_FORWARDS: string; - const HTTP2_HEADER_PREFER: string; - const HTTP2_HEADER_PROXY_AUTHENTICATE: string; - const HTTP2_HEADER_PROXY_AUTHORIZATION: string; - const 
HTTP2_HEADER_RANGE: string; - const HTTP2_HEADER_REFERER: string; - const HTTP2_HEADER_REFRESH: string; - const HTTP2_HEADER_RETRY_AFTER: string; - const HTTP2_HEADER_SERVER: string; - const HTTP2_HEADER_SET_COOKIE: string; - const HTTP2_HEADER_STRICT_TRANSPORT_SECURITY: string; - const HTTP2_HEADER_TRANSFER_ENCODING: string; - const HTTP2_HEADER_TE: string; - const HTTP2_HEADER_UPGRADE: string; - const HTTP2_HEADER_USER_AGENT: string; - const HTTP2_HEADER_VARY: string; - const HTTP2_HEADER_VIA: string; - const HTTP2_HEADER_WWW_AUTHENTICATE: string; - const HTTP2_HEADER_HTTP2_SETTINGS: string; - const HTTP2_HEADER_KEEP_ALIVE: string; - const HTTP2_HEADER_PROXY_CONNECTION: string; - const HTTP2_METHOD_ACL: string; - const HTTP2_METHOD_BASELINE_CONTROL: string; - const HTTP2_METHOD_BIND: string; - const HTTP2_METHOD_CHECKIN: string; - const HTTP2_METHOD_CHECKOUT: string; - const HTTP2_METHOD_CONNECT: string; - const HTTP2_METHOD_COPY: string; - const HTTP2_METHOD_DELETE: string; - const HTTP2_METHOD_GET: string; - const HTTP2_METHOD_HEAD: string; - const HTTP2_METHOD_LABEL: string; - const HTTP2_METHOD_LINK: string; - const HTTP2_METHOD_LOCK: string; - const HTTP2_METHOD_MERGE: string; - const HTTP2_METHOD_MKACTIVITY: string; - const HTTP2_METHOD_MKCALENDAR: string; - const HTTP2_METHOD_MKCOL: string; - const HTTP2_METHOD_MKREDIRECTREF: string; - const HTTP2_METHOD_MKWORKSPACE: string; - const HTTP2_METHOD_MOVE: string; - const HTTP2_METHOD_OPTIONS: string; - const HTTP2_METHOD_ORDERPATCH: string; - const HTTP2_METHOD_PATCH: string; - const HTTP2_METHOD_POST: string; - const HTTP2_METHOD_PRI: string; - const HTTP2_METHOD_PROPFIND: string; - const HTTP2_METHOD_PROPPATCH: string; - const HTTP2_METHOD_PUT: string; - const HTTP2_METHOD_REBIND: string; - const HTTP2_METHOD_REPORT: string; - const HTTP2_METHOD_SEARCH: string; - const HTTP2_METHOD_TRACE: string; - const HTTP2_METHOD_UNBIND: string; - const HTTP2_METHOD_UNCHECKOUT: string; - const HTTP2_METHOD_UNLINK: string; - const HTTP2_METHOD_UNLOCK: string; - const HTTP2_METHOD_UPDATE: string; - const HTTP2_METHOD_UPDATEREDIRECTREF: string; - const HTTP2_METHOD_VERSION_CONTROL: string; - const HTTP_STATUS_CONTINUE: number; - const HTTP_STATUS_SWITCHING_PROTOCOLS: number; - const HTTP_STATUS_PROCESSING: number; - const HTTP_STATUS_OK: number; - const HTTP_STATUS_CREATED: number; - const HTTP_STATUS_ACCEPTED: number; - const HTTP_STATUS_NON_AUTHORITATIVE_INFORMATION: number; - const HTTP_STATUS_NO_CONTENT: number; - const HTTP_STATUS_RESET_CONTENT: number; - const HTTP_STATUS_PARTIAL_CONTENT: number; - const HTTP_STATUS_MULTI_STATUS: number; - const HTTP_STATUS_ALREADY_REPORTED: number; - const HTTP_STATUS_IM_USED: number; - const HTTP_STATUS_MULTIPLE_CHOICES: number; - const HTTP_STATUS_MOVED_PERMANENTLY: number; - const HTTP_STATUS_FOUND: number; - const HTTP_STATUS_SEE_OTHER: number; - const HTTP_STATUS_NOT_MODIFIED: number; - const HTTP_STATUS_USE_PROXY: number; - const HTTP_STATUS_TEMPORARY_REDIRECT: number; - const HTTP_STATUS_PERMANENT_REDIRECT: number; - const HTTP_STATUS_BAD_REQUEST: number; - const HTTP_STATUS_UNAUTHORIZED: number; - const HTTP_STATUS_PAYMENT_REQUIRED: number; - const HTTP_STATUS_FORBIDDEN: number; - const HTTP_STATUS_NOT_FOUND: number; - const HTTP_STATUS_METHOD_NOT_ALLOWED: number; - const HTTP_STATUS_NOT_ACCEPTABLE: number; - const HTTP_STATUS_PROXY_AUTHENTICATION_REQUIRED: number; - const HTTP_STATUS_REQUEST_TIMEOUT: number; - const HTTP_STATUS_CONFLICT: number; - const HTTP_STATUS_GONE: number; - const 
HTTP_STATUS_LENGTH_REQUIRED: number; - const HTTP_STATUS_PRECONDITION_FAILED: number; - const HTTP_STATUS_PAYLOAD_TOO_LARGE: number; - const HTTP_STATUS_URI_TOO_LONG: number; - const HTTP_STATUS_UNSUPPORTED_MEDIA_TYPE: number; - const HTTP_STATUS_RANGE_NOT_SATISFIABLE: number; - const HTTP_STATUS_EXPECTATION_FAILED: number; - const HTTP_STATUS_TEAPOT: number; - const HTTP_STATUS_MISDIRECTED_REQUEST: number; - const HTTP_STATUS_UNPROCESSABLE_ENTITY: number; - const HTTP_STATUS_LOCKED: number; - const HTTP_STATUS_FAILED_DEPENDENCY: number; - const HTTP_STATUS_UNORDERED_COLLECTION: number; - const HTTP_STATUS_UPGRADE_REQUIRED: number; - const HTTP_STATUS_PRECONDITION_REQUIRED: number; - const HTTP_STATUS_TOO_MANY_REQUESTS: number; - const HTTP_STATUS_REQUEST_HEADER_FIELDS_TOO_LARGE: number; - const HTTP_STATUS_UNAVAILABLE_FOR_LEGAL_REASONS: number; - const HTTP_STATUS_INTERNAL_SERVER_ERROR: number; - const HTTP_STATUS_NOT_IMPLEMENTED: number; - const HTTP_STATUS_BAD_GATEWAY: number; - const HTTP_STATUS_SERVICE_UNAVAILABLE: number; - const HTTP_STATUS_GATEWAY_TIMEOUT: number; - const HTTP_STATUS_HTTP_VERSION_NOT_SUPPORTED: number; - const HTTP_STATUS_VARIANT_ALSO_NEGOTIATES: number; - const HTTP_STATUS_INSUFFICIENT_STORAGE: number; - const HTTP_STATUS_LOOP_DETECTED: number; - const HTTP_STATUS_BANDWIDTH_LIMIT_EXCEEDED: number; - const HTTP_STATUS_NOT_EXTENDED: number; - const HTTP_STATUS_NETWORK_AUTHENTICATION_REQUIRED: number; - } - /** - * This symbol can be set as a property on the HTTP/2 headers object with - * an array value in order to provide a list of headers considered sensitive. - */ - export const sensitiveHeaders: symbol; - /** - * Returns an object containing the default settings for an `Http2Session` instance. This method returns a new object instance every time it is called - * so instances returned may be safely modified for use. - * @since v8.4.0 - */ - export function getDefaultSettings(): Settings; - /** - * Returns a `Buffer` instance containing a serialized representation of the given - * HTTP/2 settings as specified in the [HTTP/2](https://tools.ietf.org/html/rfc7540) specification. This is intended - * for use with the `HTTP2-Settings` header field. - * - * ```js - * const http2 = require('http2'); - * - * const packed = http2.getPackedSettings({ enablePush: false }); - * - * console.log(packed.toString('base64')); - * // Prints: AAIAAAAA - * ``` - * @since v8.4.0 - */ - export function getPackedSettings(settings: Settings): Buffer; - /** - * Returns an `HTTP/2 Settings Object` containing the deserialized settings from - * the given `Buffer` as generated by `http2.getPackedSettings()`. - * @since v8.4.0 - * @param buf The packed settings. - */ - export function getUnpackedSettings(buf: Uint8Array): Settings; - /** - * Returns a `net.Server` instance that creates and manages `Http2Session` instances. - * - * Since there are no browsers known that support [unencrypted HTTP/2](https://http2.github.io/faq/#does-http2-require-encryption), the use of {@link createSecureServer} is necessary when - * communicating - * with browser clients. - * - * ```js - * const http2 = require('http2'); - * - * // Create an unencrypted HTTP/2 server. - * // Since there are no browsers known that support - * // unencrypted HTTP/2, the use of `http2.createSecureServer()` - * // is necessary when communicating with browser clients.
- * const server = http2.createServer(); - * - * server.on('stream', (stream, headers) => { - * stream.respond({ - * 'content-type': 'text/html; charset=utf-8', - * ':status': 200 - * }); - * stream.end('<h1>Hello World</h1>'); - * }); - * - * server.listen(80); - * ``` - * @since v8.4.0 - * @param onRequestHandler See `Compatibility API` - */ - export function createServer(onRequestHandler?: (request: Http2ServerRequest, response: Http2ServerResponse) => void): Http2Server; - export function createServer(options: ServerOptions, onRequestHandler?: (request: Http2ServerRequest, response: Http2ServerResponse) => void): Http2Server; - /** - * Returns a `tls.Server` instance that creates and manages `Http2Session` instances. - * - * ```js - * const http2 = require('http2'); - * const fs = require('fs'); - * - * const options = { - * key: fs.readFileSync('server-key.pem'), - * cert: fs.readFileSync('server-cert.pem') - * }; - * - * // Create a secure HTTP/2 server - * const server = http2.createSecureServer(options); - * - * server.on('stream', (stream, headers) => { - * stream.respond({ - * 'content-type': 'text/html; charset=utf-8', - * ':status': 200 - * }); - * stream.end('<h1>Hello World</h1>'); - * }); - * - * server.listen(80); - * ``` - * @since v8.4.0 - * @param onRequestHandler See `Compatibility API` - */ - export function createSecureServer(onRequestHandler?: (request: Http2ServerRequest, response: Http2ServerResponse) => void): Http2SecureServer; - export function createSecureServer(options: SecureServerOptions, onRequestHandler?: (request: Http2ServerRequest, response: Http2ServerResponse) => void): Http2SecureServer; - /** - * Returns a `ClientHttp2Session` instance. - * - * ```js - * const http2 = require('http2'); - * const client = http2.connect('https://localhost:1234'); - * - * // Use the client - * - * client.close(); - * ``` - * @since v8.4.0 - * @param authority The remote HTTP/2 server to connect to. This must be in the form of a minimal, valid URL with the `http://` or `https://` prefix, host name, and IP port (if a non-default port - * is used). Userinfo (user ID and password), path, querystring, and fragment details in the URL will be ignored. - * @param listener Will be registered as a one-time listener of the {@link 'connect'} event. - */ - export function connect(authority: string | url.URL, listener: (session: ClientHttp2Session, socket: net.Socket | tls.TLSSocket) => void): ClientHttp2Session; - export function connect( - authority: string | url.URL, - options?: ClientSessionOptions | SecureClientSessionOptions, - listener?: (session: ClientHttp2Session, socket: net.Socket | tls.TLSSocket) => void - ): ClientHttp2Session; -} -declare module 'node:http2' { - export * from 'http2'; -} diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/object-inspect/test-core-js.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/object-inspect/test-core-js.js deleted file mode 100644 index e53c40022533f691fd17d623cd24a8ecb5a82669..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/object-inspect/test-core-js.js +++ /dev/null @@ -1,26 +0,0 @@ -'use strict'; - -require('core-js'); - -var inspect = require('./'); -var test = require('tape'); - -test('Maps', function (t) { - t.equal(inspect(new Map([[1, 2]])), 'Map (1) {1 => 2}'); - t.end(); -}); - -test('WeakMaps', function (t) { - t.equal(inspect(new WeakMap([[{}, 2]])), 'WeakMap { ? }'); - t.end(); -}); - -test('Sets', function (t) { - t.equal(inspect(new Set([[1, 2]])), 'Set (1) {[ 1, 2 ]}'); - t.end(); -}); - -test('WeakSets', function (t) { - t.equal(inspect(new WeakSet([[1, 2]])), 'WeakSet { ? 
}'); - t.end(); -}); diff --git a/spaces/fishaudio/fish-diffusion/configs/Kiritan.py b/spaces/fishaudio/fish-diffusion/configs/Kiritan.py deleted file mode 100644 index adbcc11bdf74cac263ee428f2f84a62a6aff9aef..0000000000000000000000000000000000000000 --- a/spaces/fishaudio/fish-diffusion/configs/Kiritan.py +++ /dev/null @@ -1,39 +0,0 @@ -_base_ = [ - "./_base_/archs/hifi_svc.py", -] - -speaker_mapping = {'kiritan': 0,} - -model = dict( - type="HiFiSVC", - speaker_encoder=dict( - input_size=len(speaker_mapping), - ), -) - -preprocessing = dict( - text_features_extractor=dict( - type="ContentVec", - ), - pitch_extractor=dict( - type="ParselMouthPitchExtractor", - keep_zeros=False, - f0_min=40.0, - f0_max=1600.0, - ), - energy_extractor=dict( - type="RMSEnergyExtractor", - ), - augmentations=[ - dict( - type="RandomPitchShifting", - key_shifts=[-5., 5.], - probability=1.5, - ), - dict( - type="RandomTimeStretching", - factors=[0.8, 1.2], - probability=0.75, - ) - ], -) \ No newline at end of file diff --git a/spaces/flax-community/koclip/text2patch.py b/spaces/flax-community/koclip/text2patch.py deleted file mode 100644 index 907b155f70bab08ccaa2d812717aa1688533ae29..0000000000000000000000000000000000000000 --- a/spaces/flax-community/koclip/text2patch.py +++ /dev/null @@ -1,93 +0,0 @@ -import os - -import jax -import jax.numpy as jnp -import numpy as np -import requests -import streamlit as st -from PIL import Image - -from utils import load_model - - -def split_image(im, num_rows=3, num_cols=3): - im = np.array(im) - row_size = im.shape[0] // num_rows - col_size = im.shape[1] // num_cols - tiles = [ - im[row : row + row_size, col : col + col_size] - for row in range(0, num_rows * row_size, row_size) - for col in range(0, num_cols * col_size, col_size) - ] - return tiles - - -def app(model_name): - model, processor = load_model(f"koclip/{model_name}") - - st.title("Patch-based Relevance Ranking") - st.markdown( - """ - Given a piece of text, the CLIP model finds the part of an image that best explains the text. - To try it out, you can - - 1. Upload an image - 2. Explain a part of the image in text - - which will yield the most relevant image tile from a grid of the image. You can specify how - granular you want to be with your search by specifying the number of rows and columns that - make up the image grid. 
- - --- - """ - ) - - query1 = st.text_input( - "Enter a URL to an image...", - value="https://img.sbs.co.kr/newimg/news/20200823/201463830_1280.jpg", - ) - query2 = st.file_uploader("or upload an image...", type=["jpg", "jpeg", "png"]) - captions = st.text_input( - "Enter a prompt to query the image.", - value="이건 서울의 경복궁 사진이다.", - ) - - col1, col2 = st.beta_columns(2) - with col1: - num_rows = st.slider( - "Number of rows", min_value=1, max_value=5, value=3, step=1 - ) - with col2: - num_cols = st.slider( - "Number of columns", min_value=1, max_value=5, value=3, step=1 - ) - - if st.button("질문 (Query)"): - if not any([query1, query2]): - st.error("Please upload an image or paste an image URL.") - else: - st.markdown("""---""") - with st.spinner("Computing..."): - image_data = ( - query2 - if query2 is not None - else requests.get(query1, stream=True).raw - ) - image = Image.open(image_data) - st.image(image) - - images = split_image(image, num_rows, num_cols) - - inputs = processor( - text=captions, images=images, return_tensors="jax", padding=True - ) - inputs["pixel_values"] = jnp.transpose( - inputs["pixel_values"], axes=[0, 2, 3, 1] - ) - outputs = model(**inputs) - probs = jax.nn.softmax(outputs.logits_per_image, axis=0) - for idx, prob in sorted( - enumerate(probs), key=lambda x: x[1], reverse=True - ): - st.text(f"Score: {prob[0]:.3f}") - st.image(images[idx]) diff --git a/spaces/flowers-team/SocialAISchool/data_analysis_neurips.py b/spaces/flowers-team/SocialAISchool/data_analysis_neurips.py deleted file mode 100644 index 3b413df8effc9c15fa8b27f00d8a9a27a99c3994..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/data_analysis_neurips.py +++ /dev/null @@ -1,570 +0,0 @@ -#!/usr/bin/env python -import seaborn -import numpy as np -import os -from collections import OrderedDict -import pandas as pd -import matplotlib.pyplot as plt -import sys -from termcolor import cprint - -# Load data - -# Global vars for tracking and labeling data at load time. -exp_idx = 0 -label_parser_dict = None - -smooth_factor = 10 -leg_size = 30 - -subsample_step = 1 -load_subsample_step = 50 - -default_colors = ["blue","orange","green","magenta", "brown", "red",'black',"grey",u'#ff7f0e', - "cyan", "pink",'purple', u'#1f77b4', - "darkorchid","sienna","lightpink", "indigo","mediumseagreen",'aqua', - 'deeppink','silver','khaki','goldenrod','y','y','y','y','y','y','y','y','y','y','y','y' ] + ['y']*50 - -def get_all_runs(logdir, load_subsample_step=1): - """ - Recursively look through logdir for output files produced by - Assumes that any file "progress.txt" is a valid hit. 
- """ - global exp_idx - global units - datasets = [] - for root, _, files in os.walk(logdir): - if 'log.csv' in files: - run_name = root[8:] - exp_name = None - - # try to load a config file containing hyperparameters - config = None - try: - config_path = open(os.path.join(root,'config.json')) - config = json.load(config_path) - if 'exp_name' in config: - exp_name = config['exp_name'] - except: - print('No file named config.json') - - exp_idx += 1 - - # load progress data - try: - print(os.path.join(root,'log.csv')) - exp_data = pd.read_csv(os.path.join(root,'log.csv')) - except: - raise ValueError("CSV {} faulty".format(os.path.join(root, 'log.csv'))) - - exp_data = exp_data[::load_subsample_step] - data_dict = exp_data.to_dict("list") - - data_dict['config'] = config - nb_epochs = len(data_dict['frames']) - print('{} -> {}'.format(run_name, nb_epochs)) - - - datasets.append(data_dict) - - return datasets - -def get_datasets(rootdir, load_only="", load_subsample_step=1, ignore_pattern="ignore"): - _, models_list, _ = next(os.walk(rootdir)) - print(models_list) - for dir_name in models_list.copy(): - # add "ignore" in a directory name to avoid loading its content - if ignore_pattern in dir_name or load_only not in dir_name: - models_list.remove(dir_name) - for expe_name in list(labels.keys()): - if expe_name not in models_list: - del labels[expe_name] - - # setting per-model type colors - for i,m_name in enumerate(models_list): - for m_type, m_color in per_model_colors.items(): - if m_type in m_name: - colors[m_name] = m_color - print("extracting data for {}...".format(m_name)) - m_id = m_name - models_saves[m_id] = OrderedDict() - models_saves[m_id]['data'] = get_all_runs(rootdir+m_name, load_subsample_step=load_subsample_step) - print("done") - if m_name not in labels: - labels[m_name] = m_name - - """ - retrieve all experiences located in "data to vizu" folder - """ -labels = OrderedDict() -per_model_colors = OrderedDict() -# per_model_colors = OrderedDict([('ALP-GMM',u'#1f77b4'), -# ('hmn','pink'), -# ('ADR','black')]) - -# LOAD DATA -models_saves = OrderedDict() -colors = OrderedDict() - -static_lines = {} -# get_datasets("storage/",load_only="RERUN_WizardGuide") -# get_datasets("storage/",load_only="RERUN_WizardTwoGuides") -try: - figure_id = eval(sys.argv[1]) -except: - figure_id = sys.argv[1] - -print("fig:", figure_id) -if figure_id == 0: - # train change - env_type = "No_NPC_environment" - fig_type = "train" - - get_datasets("storage/", "RERUN_WizardGuide_lang64_mm", load_subsample_step=load_subsample_step) - get_datasets("storage/", "RERUN_WizardGuide_lang64_deaf_no_explo", load_subsample_step=load_subsample_step) - get_datasets("storage/", "RERUN_WizardGuide_lang64_no_explo", load_subsample_step=load_subsample_step) - get_datasets("storage/", "RERUN_WizardGuide_lang64_curr_dial", load_subsample_step=load_subsample_step) - top_n = 16 -elif figure_id == 1: - # arch change - env_type = "No_NPC_environment" - fig_type = "arch" - - get_datasets("storage/", "RERUN_WizardGuide_lang64_mm", load_subsample_step=load_subsample_step) - get_datasets("storage/", "RERUN_WizardGuide_lang64_bow", load_subsample_step=load_subsample_step) - get_datasets("storage/", "RERUN_WizardGuide_lang64_no_mem", load_subsample_step=load_subsample_step) - get_datasets("storage/", "RERUN_WizardGuide_lang64_bigru", load_subsample_step=load_subsample_step) - get_datasets("storage/", "RERUN_WizardGuide_lang64_attgru", load_subsample_step=load_subsample_step) - top_n = 16 -elif figure_id == 2: - # train change 
FULL - env_type = "FULL_environment" - fig_type = "train" - - get_datasets("storage/", "RERUN_WizardTwoGuides_lang64_mm", load_subsample_step=load_subsample_step) - get_datasets("storage/", "RERUN_WizardTwoGuides_lang64_deaf_no_explo", load_subsample_step=load_subsample_step) - get_datasets("storage/", "RERUN_WizardTwoGuides_lang64_no_explo", load_subsample_step=load_subsample_step) - get_datasets("storage/", "RERUN_WizardTwoGuides_lang64_curr_dial", load_subsample_step=load_subsample_step) - top_n = 16 -elif figure_id == 3: - # arch change FULL - env_type = "FULL_environment" - fig_type = "arch" - - get_datasets("storage/", "RERUN_WizardTwoGuides_lang64_mm", load_subsample_step=load_subsample_step) - get_datasets("storage/", "RERUN_WizardTwoGuides_lang64_bow", load_subsample_step=load_subsample_step) - get_datasets("storage/", "RERUN_WizardTwoGuides_lang64_no_mem", load_subsample_step=load_subsample_step) - get_datasets("storage/", "RERUN_WizardTwoGuides_lang64_bigru", load_subsample_step=load_subsample_step) - get_datasets("storage/", "RERUN_WizardTwoGuides_lang64_attgru", load_subsample_step=load_subsample_step) - top_n = 16 -elif str(figure_id) == "ShowMe": - - get_datasets("storage/", "20-05_NeurIPS_ShowMe_ABL_CEB", load_subsample_step=load_subsample_step, ignore_pattern="tanh_0.3") - get_datasets("storage/", "20-05_NeurIPS_ShowMe_NO_BONUS_ABL", load_subsample_step=load_subsample_step) - get_datasets("storage/", "20-05_NeurIPS_ShowMe_CEB", load_subsample_step=load_subsample_step, ignore_pattern="tanh_0.3") - get_datasets("storage/", "20-05_NeurIPS_ShowMe_NO_BONUS_env", load_subsample_step=load_subsample_step) - - label_parser_dict = { - "20-05_NeurIPS_ShowMe_ABL_CEB" : "ShowMe_exp_bonus_no_social_skills_required", - "20-05_NeurIPS_ShowMe_NO_BONUS_ABL" : "ShowMe_no_bonus_no_social_skills_required", - "20-05_NeurIPS_ShowMe_CEB" : "ShowMe_exp_bonus", - "20-05_NeurIPS_ShowMe_NO_BONUS_env" : "ShowMe_no_bonus", - } - - env_type = str(figure_id) - - fig_type = "test" - top_n = 16 - -elif str(figure_id) == "Help": - - # env_type = "Bobo" - # get_datasets("storage/", "Bobo") - get_datasets("storage/", "24-05_NeurIPS_Help", load_subsample_step=load_subsample_step, ignore_pattern="ABL") - # get_datasets("storage/", "26-05_NeurIPS_gpu_Help_NoSocial_NO_BONUS_ABL", load_subsample_step=load_subsample_step) - get_datasets("storage/", "26-05_NeurIPS_gpu_Help_NoSocial_NO_BONUS_env", load_subsample_step=load_subsample_step) - - label_parser_dict = { - "Help_NO_BONUS_env": "PPO", - "Help_BONUS_env": "PPO+Explo", - # "Help_NO_BONUS_ABL_env": "ExiterRole_no_bonus_no_NPC", - # "Help_BONUS_ABL_env": "ExiterRole_bonus_no_NPC", - "26-05_NeurIPS_gpu_Help_NoSocial_NO_BONUS_env": "Unsocial PPO", - # "26-05_NeurIPS_gpu_Help_NoSocial_NO_BONUS_ABL": "ExiterRole_Insocial_ABL" - } - - static_lines = { - "PPO (helper)": (0.12, 0.05, "#1f77b4"), - "PPO+Explo (helper)": (0.11, 0.04, "indianred"), - # "Help_exp_bonus": (0.11525, 0.04916 , default_colors[2]), - # "HelperRole_ABL_no_exp_bonus": (0.022375, 0.01848, default_colors[3]), - "Unsocial PPO (helper)": (0.15, 0.06, "grey"), - # "HelperRole_ABL_Insocial": (0.01775, 0.010544, default_colors[4]), - } - - env_type = str(figure_id) - - fig_type = "test" - top_n = 16 - -elif str(figure_id) == "TalkItOut": - print("You mean Polite") - exit() - -elif str(figure_id) == "TalkItOutPolite": - # env_type = "TalkItOut" - # get_datasets("storage/", "ORIENT_env_MiniGrid-TalkItOut") - - # env_type = "GuideThief" - # get_datasets("storage/", "GuideThief") - - # env_type = "Bobo" - # 
get_datasets("storage/", "Bobo") - get_datasets("storage/", "20-05_NeurIPS_TalkItOutPolite", load_subsample_step=load_subsample_step) - # get_datasets("storage/", "21-05_NeurIPS_small_bonus_TalkItOutPolite") - get_datasets("storage/", "26-05_NeurIPS_gpu_TalkItOutPolite_NoSocial_NO_BONUS_env", load_subsample_step=load_subsample_step) - get_datasets("storage/", "26-05_NeurIPS_gpu_TalkItOutPolite_NoSocial_NO_BONUS_NoLiar", load_subsample_step=load_subsample_step) - - label_parser_dict = { - "TalkItOutPolite_NO_BONUS_env": "PPO", - "TalkItOutPolite_e": "PPO+Explo", - "TalkItOutPolite_NO_BONUS_NoLiar": "PPO (no liar)", - "TalkItOutPolite_NoLiar_e": "PPO+Explo (no liar)", - "26-05_NeurIPS_gpu_TalkItOutPolite_NoSocial_NO_BONUS_env": "Unsocial PPO", - "26-05_NeurIPS_gpu_TalkItOutPolite_NoSocial_NO_BONUS_NoLiar": "Unsocial PPO (no liar)", - } - - - env_type = str(figure_id) - - fig_type = "test" - top_n = 16 - -elif str(figure_id) == "DiverseExit": - get_datasets("storage/", "24-05_NeurIPS_DiverseExit", load_subsample_step=load_subsample_step) - get_datasets("storage/", "26-05_NeurIPS_gpu_DiverseExit", load_subsample_step=load_subsample_step) - - label_parser_dict = { - "DiverseExit_NO_BONUS": "No_bonus", - "DiverseExit_BONUS": "BOnus", - "gpu_DiverseExit_NoSocial": "No_social", - } - - env_type = str(figure_id) - - fig_type = "test" - top_n = 16 - -else: - get_datasets("storage/", str(figure_id), load_subsample_step=load_subsample_step) - - env_type = str(figure_id) - - fig_type = "test" - top_n = 8 - -#### get_datasets("storage/", "RERUN_WizardGuide_lang64_nameless") -#### get_datasets("storage/", "RERUN_WizardTwoGuides_lang64_nameless") - - -if per_model_colors: # order runs for legend order as in per_models_colors, with corresponding colors - ordered_labels = OrderedDict() - for teacher_type in per_model_colors.keys(): - for k,v in labels.items(): - if teacher_type in k: - ordered_labels[k] = v - labels = ordered_labels -else: - print('not using per_model_color') - for k in models_saves.keys(): - labels[k] = k - -def plot_with_shade(subplot_nb, ax,x,y,err,color,shade_color,label, - y_min=None,y_max=None, legend=False, leg_size=30, leg_loc='best', title=None, - ylim=[0,100], xlim=[0,40], leg_args={}, leg_linewidth=13.0, linewidth=10.0, ticksize=20, - zorder=None, xlabel='perf',ylabel='env steps'): - #plt.rcParams.update({'font.size': 15}) - ax.locator_params(axis='x', nbins=4) - ax.locator_params(axis='y', nbins=3) - ax.tick_params(axis='both', which='major', labelsize=ticksize) - ax.plot(x,y, color=color, label=label,linewidth=linewidth,zorder=zorder) - ax.fill_between(x,y-err,y+err,color=shade_color,alpha=0.2) - if legend: - leg = ax.legend(loc=leg_loc, **leg_args) #34 - for legobj in leg.legendHandles: - legobj.set_linewidth(leg_linewidth) - ax.set_xlabel(xlabel, fontsize=30) - if subplot_nb == 0: - ax.set_ylabel(ylabel, fontsize=30,labelpad=-4) - ax.set_xlim(xmin=xlim[0],xmax=xlim[1]) - ax.set_ylim(bottom=ylim[0],top=ylim[1]) - if title: - ax.set_title(title, fontsize=22) -# Plot utils -def plot_with_shade_grg(subplot_nb, ax,x,y,err,color,shade_color,label, - y_min=None,y_max=None, legend=False, leg_size=30, leg_loc='best', title=None, - ylim=[0,100], xlim=[0,40], leg_args={}, leg_linewidth=13.0, linewidth=10.0, ticksize=20, - zorder=None, xlabel='perf',ylabel='env steps', linestyle="-"): - #plt.rcParams.update({'font.size': 15}) - ax.locator_params(axis='x', nbins=4) - ax.locator_params(axis='y', nbins=3) - ax.tick_params(axis='both', which='major', labelsize=ticksize) - - - ax.plot(x, y, 
color=color, label=label,linewidth=linewidth,zorder=zorder, linestyle=linestyle) - ax.fill_between(x, y-err, y+err,color=shade_color,alpha=0.2) - if legend: - leg = ax.legend(loc=leg_loc, **leg_args) #34 - for legobj in leg.legendHandles: - legobj.set_linewidth(leg_linewidth) - ax.set_xlabel(xlabel, fontsize=30) - if subplot_nb == 0: - ax.set_ylabel(ylabel, fontsize=30, labelpad=-4) - ax.set_xlim(xmin=xlim[0],xmax=xlim[1]) - ax.set_ylim(bottom=ylim[0],top=ylim[1]) - if title: - ax.set_title(title, fontsize=22) - - -# Metric plot -metric = 'bin_extrinsic_return_mean' -# metric = 'mission_string_observed_mean' -# metric = 'extrinsic_return_mean' -# metric = 'extrinsic_return_max' -# metric = "rreturn_mean" -# metric = 'rreturn_max' -# metric = 'FPS' - -f, ax = plt.subplots(1, 1, figsize=(10.0, 6.0)) -ax = [ax] -max_y = -np.inf -min_y = np.inf -# hardcoded -min_y, max_y = 0.0, 1.0 -max_steps = 0 -exclude_patterns = [] -include_patterns = [] - - -def label_parser(label, figure_id, label_parser_dict=None): - if label_parser_dict: - if sum([1 for k, v in label_parser_dict.items() if k in label]) != 1: - if label in label_parser_dict: - # see if there is an exact match - return label_parser_dict[label] - else: - print("ERROR multiple curves match a lable and there is no exact match") - print(label) - exit() - - for k, v in label_parser_dict.items(): - if k in label: return v - - else: - # return label.split("_env_")[1] - if figure_id not in [1,2,3,4]: - return label - else: - label_parser_dict = { - "RERUN_WizardGuide_lang64_no_explo": "MH-BabyAI", - "RERUN_WizardTwoGuides_lang64_no_explo": "MH-BabyAI", - - "RERUN_WizardGuide_lang64_mm_baby_short_rec_env": "MH-BabyAI-ExpBonus", - "RERUN_WizardTwoGuides_lang64_mm_baby_short_rec_env": "MH-BabyAI-ExpBonus", - - "RERUN_WizardGuide_lang64_deaf_no_explo": "Deaf-MH-BabyAI", - "RERUN_WizardTwoGuides_lang64_deaf_no_explo": "Deaf-MH-BabyAI", - - "RERUN_WizardGuide_lang64_bow": "MH-BabyAI-ExpBonus-BOW", - "RERUN_WizardTwoGuides_lang64_bow": "MH-BabyAI-ExpBonus-BOW", - - "RERUN_WizardGuide_lang64_no_mem": "MH-BabyAI-ExpBonus-no-mem", - "RERUN_WizardTwoGuides_lang64_no_mem": "MH-BabyAI-ExpBonus-no-mem", - - "RERUN_WizardGuide_lang64_bigru": "MH-BabyAI-ExpBonus-bigru", - "RERUN_WizardTwoGuides_lang64_bigru": "MH-BabyAI-ExpBonus-bigru", - - "RERUN_WizardGuide_lang64_attgru": "MH-BabyAI-ExpBonus-attgru", - "RERUN_WizardTwoGuides_lang64_attgru": "MH-BabyAI-ExpBonus-attgru", - - "RERUN_WizardGuide_lang64_curr_dial": "MH-BabyAI-ExpBonus-current-dialogue", - "RERUN_WizardTwoGuides_lang64_curr_dial": "MH-BabyAI-ExpBonus-current-dialogue", - - "RERUN_WizardTwoGuides_lang64_mm_baby_short_rec_100M": "MH-BabyAI-ExpBonus-100M" - } - if sum([1 for k, v in label_parser_dict.items() if k in label]) != 1: - print("ERROR multiple curves match a lable") - print(label) - exit() - - for k, v in label_parser_dict.items(): - if k in label: return v - - return label - -per_seed=False - -for i, m_id in enumerate(models_saves.keys()): - #excluding some experiments - if any([ex_pat in m_id for ex_pat in exclude_patterns]): - continue - if len(include_patterns) > 0: - if not any([in_pat in m_id for in_pat in include_patterns]): - continue - runs_data = models_saves[m_id]['data'] - ys = [] - - # DIRTY FIX FOR FAULTY LOGGING - print("m_id:", m_id) - if runs_data[0]['frames'][1] == 'frames': - runs_data[0]['frames'] = list(filter(('frames').__ne__, runs_data[0]['frames'])) - ########################################### - - - # determine minimal run length across seeds - minimum = 
sorted([len(run['frames']) for run in runs_data if len(run['frames'])])[-top_n] - min_len = np.min([len(run['frames']) for run in runs_data if len(run['frames']) >= minimum]) - -# min_len = np.min([len(run['frames']) for run in runs_data if len(run['frames']) > 10]) - - - print("min_len:", min_len) - - #compute env steps (x axis) - longest_id = np.argmax([len(rd['frames']) for rd in runs_data]) - steps = np.array(runs_data[longest_id]['frames'], dtype=np.int) / 1000000 - steps = steps[:min_len] - for run in runs_data: - data = run[metric] - # DIRTY FIX FOR FAULTY LOGGING (headers in data) - if data[1] == metric: - data = np.array(list(filter((metric).__ne__, data)), dtype=np.float16) - ########################################### - if len(data) >= min_len: - if len(data) > min_len: - print("run has too many {} datapoints ({}). Discarding {}".format(m_id, len(data), - len(data)-min_len)) - data = data[0:min_len] - ys.append(data) - ys_same_len = ys # RUNS MUST HAVE SAME LEN - - # computes stats - n_seeds = len(ys_same_len) - sems = np.std(ys_same_len,axis=0)/np.sqrt(len(ys_same_len)) # sem - stds = np.std(ys_same_len,axis=0) # std - means = np.mean(ys_same_len,axis=0) - color = default_colors[i] - - # per-metric adjusments - ylabel=metric - if metric == 'bin_extrinsic_return_mean': - ylabel = "success rate" - if metric == 'duration': - ylabel = "time (hours)" - means = means / 3600 - sems = sems / 3600 - stds = stds / 3600 - - #plot x y bounds - curr_max_y = np.max(means) - curr_min_y = np.min(means) - curr_max_steps = np.max(steps) - if curr_max_y > max_y: - max_y = curr_max_y - if curr_min_y < min_y: - min_y = curr_min_y - if curr_max_steps > max_steps: - max_steps = curr_max_steps - - if subsample_step: - steps = steps[0::subsample_step] - means = means[0::subsample_step] - stds = stds[0::subsample_step] - sems = sems[0::subsample_step] - ys_same_len = [y[0::subsample_step] for y in ys_same_len] - - # display seeds separtely - if per_seed: - for s_i, seed_ys in enumerate(ys_same_len): - seed_c = default_colors[i+s_i] - label = m_id#+"(s:{})".format(s_i) - plot_with_shade(0, ax[0], steps, seed_ys, stds*0, seed_c, seed_c, label, - legend=False, xlim=[0, max_steps], ylim=[min_y, max_y], - leg_size=leg_size, xlabel="env steps (millions)", ylabel=ylabel, smooth_factor=smooth_factor, - ) - else: - label = label_parser(m_id, figure_id, label_parser_dict=label_parser_dict) - label = label #+"({})".format(n_seeds) - - - def smooth(x_, n=50): - if type(x_) == list: - x_ = np.array(x_) - return np.array([x_[max(i - n, 0):i + 1].mean() for i in range(len(x_))]) - if smooth_factor: - means = smooth(means,smooth_factor) - stds = smooth(stds,smooth_factor) - x_lim = 30 - if figure_id == "TalkItOutPolite": - leg_args = { - 'ncol': 1, - 'columnspacing': 1.0, - 'handlelength': 1.0, - 'frameon': False, - # 'bbox_to_anchor': (0.00, 0.23, 0.10, .102), - 'bbox_to_anchor': (0.55, 0.35, 0.10, .102), - 'labelspacing': 0.2, - 'fontsize': 27 - } - elif figure_id == "Help": - leg_args = { - 'ncol': 1, - 'columnspacing': 1.0, - 'handlelength': 1.0, - 'frameon': False, - # 'bbox_to_anchor': (0.00, 0.23, 0.10, .102), - 'bbox_to_anchor': (0.39, 0.20, 0.10, .102), - 'labelspacing': 0.2, - 'fontsize': 27 - } - else: - leg_args = {} - - color_code = dict([ - ('PPO+Explo', 'indianred'), - ('PPO', "#1f77b4"), - ('Unsocial PPO', "grey"), - ('PPO (no liar)', "#043252"), - ('PPO+Explo (no liar)', "darkred"), - ('Unsocial PPO (no liar)', "black"), - ('PPO+Explo (helper)', 'indianred'), - ('PPO (helper)', "#1f77b4"), - ('Unsocial 
PPO (helper)', "grey")] - ) - color = color_code.get(label, np.random.choice(default_colors)) - print("C:",color) - plot_with_shade_grg( - 0, ax[0], steps, means, stds, color, color, label, - legend=True, - xlim=[0, steps[-1] if not x_lim else x_lim], - ylim=[0, 1.0], xlabel="env steps (millions)", ylabel=ylabel, title=None, - leg_args =leg_args) - # - # plot_with_shade(0, ax[0], steps, means, stds, color, color,label, - # legend=True, xlim=[0, max_steps], ylim=[min_y, max_y], - # leg_size=leg_size, xlabel="Env steps (millions)", ylabel=ylabel, linewidth=5.0, smooth_factor=smooth_factor) - - -for label, (mean, std, color) in static_lines.items(): - plot_with_shade_grg( - 0, ax[0], steps, np.array([mean]*len(steps)), np.array([std]*len(steps)), color, color, label, - legend=True, - xlim=[0, max_steps], - ylim=[0, 1.0], - xlabel="env steps (millions)", ylabel=ylabel, linestyle=":", - leg_args=leg_args) - -plt.tight_layout() -f.savefig('graphics/{}_results.svg'.format(str(figure_id))) -f.savefig('graphics/{}_results.png'.format(str(figure_id))) -plt.show() \ No newline at end of file diff --git a/spaces/flowers-team/SocialAISchool/torch-ac/torch_ac/utils/penv.py b/spaces/flowers-team/SocialAISchool/torch-ac/torch_ac/utils/penv.py deleted file mode 100644 index e92891cb2138265e8b8135f1fc444529aefde0e5..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/torch-ac/torch_ac/utils/penv.py +++ /dev/null @@ -1,74 +0,0 @@ -from multiprocessing import Process, Pipe -import gym - -def worker(conn, env): - while True: - cmd, data = conn.recv() - if cmd == "step": - obs, reward, done, info = env.step(data) - if done: - obs = env.reset() - conn.send((obs, reward, done, info)) - elif cmd == "set_curriculum_parameters": - env.set_curriculum_parameters(data) - conn.send(None) - elif cmd == "reset": - obs = env.reset() - conn.send(obs) - elif cmd == "get_mission": - ks = env.get_mission() - conn.send(ks) - else: - raise NotImplementedError - -class ParallelEnv(gym.Env): - """A concurrent execution of environments in multiple processes.""" - - def __init__(self, envs): - assert len(envs) >= 1, "No environment given." 
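-        # Design note (descriptive): env[0] is kept in the master process,
-        # while every other env runs in its own daemon worker connected by a
-        # Pipe. Each method below sends a (cmd, data) tuple to the workers,
-        # computes env[0]'s result locally, and prepends it to the workers'
-        # replies, so results stay ordered by environment index.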
- - self.envs = envs - self.observation_space = self.envs[0].observation_space - self.action_space = self.envs[0].action_space - - if hasattr(self.envs[0], "curriculum"): - self.curriculum = self.envs[0].curriculum - - self.locals = [] - for env in self.envs[1:]: - local, remote = Pipe() - self.locals.append(local) - p = Process(target=worker, args=(remote, env)) - p.daemon = True - p.start() - remote.close() - - def broadcast_curriculum_parameters(self, data): - # broadcast curriculum_data to every worker - for local in self.locals: - local.send(("set_curriculum_parameters", data)) - results = [self.envs[0].set_curriculum_parameters(data)] + [local.recv() for local in self.locals] - - def get_mission(self): - for local in self.locals: - local.send(("get_mission", None)) - results = [self.envs[0].get_mission()] + [local.recv() for local in self.locals] - return results - - def reset(self): - for local in self.locals: - local.send(("reset", None)) - results = [self.envs[0].reset()] + [local.recv() for local in self.locals] - return results - - def step(self, actions): - for local, action in zip(self.locals, actions[1:]): - local.send(("step", action)) - obs, reward, done, info = self.envs[0].step(actions[0]) - if done: - obs = self.envs[0].reset() - results = zip(*[(obs, reward, done, info)] + [local.recv() for local in self.locals]) - return results - - def render(self): - raise NotImplementedError \ No newline at end of file diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/models/deeplabv3_unet_s5-d16.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/models/deeplabv3_unet_s5-d16.py deleted file mode 100644 index 0cd262999d8b2cb8e14a5c32190ae73f479d8e81..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/models/deeplabv3_unet_s5-d16.py +++ /dev/null @@ -1,50 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained=None, - backbone=dict( - type='UNet', - in_channels=3, - base_channels=64, - num_stages=5, - strides=(1, 1, 1, 1, 1), - enc_num_convs=(2, 2, 2, 2, 2), - dec_num_convs=(2, 2, 2, 2), - downsamples=(True, True, True, True), - enc_dilations=(1, 1, 1, 1, 1), - dec_dilations=(1, 1, 1, 1), - with_cp=False, - conv_cfg=None, - norm_cfg=norm_cfg, - act_cfg=dict(type='ReLU'), - upsample_cfg=dict(type='InterpConv'), - norm_eval=False), - decode_head=dict( - type='ASPPHead', - in_channels=64, - in_index=4, - channels=16, - dilations=(1, 12, 24, 36), - dropout_ratio=0.1, - num_classes=2, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=128, - in_index=3, - channels=64, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=2, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='slide', crop_size=256, stride=170)) diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/image/__init__.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/image/__init__.py deleted file mode 100644 index d0051d609d3de4e7562e3fe638335c66617c4d91..0000000000000000000000000000000000000000 --- 
a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/image/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .colorspace import (bgr2gray, bgr2hls, bgr2hsv, bgr2rgb, bgr2ycbcr, - gray2bgr, gray2rgb, hls2bgr, hsv2bgr, imconvert, - rgb2bgr, rgb2gray, rgb2ycbcr, ycbcr2bgr, ycbcr2rgb) -from .geometric import (cutout, imcrop, imflip, imflip_, impad, - impad_to_multiple, imrescale, imresize, imresize_like, - imresize_to_multiple, imrotate, imshear, imtranslate, - rescale_size) -from .io import imfrombytes, imread, imwrite, supported_backends, use_backend -from .misc import tensor2imgs -from .photometric import (adjust_brightness, adjust_color, adjust_contrast, - adjust_lighting, adjust_sharpness, auto_contrast, - clahe, imdenormalize, imequalize, iminvert, - imnormalize, imnormalize_, lut_transform, posterize, - solarize) - -__all__ = [ - 'bgr2gray', 'bgr2hls', 'bgr2hsv', 'bgr2rgb', 'gray2bgr', 'gray2rgb', - 'hls2bgr', 'hsv2bgr', 'imconvert', 'rgb2bgr', 'rgb2gray', 'imrescale', - 'imresize', 'imresize_like', 'imresize_to_multiple', 'rescale_size', - 'imcrop', 'imflip', 'imflip_', 'impad', 'impad_to_multiple', 'imrotate', - 'imfrombytes', 'imread', 'imwrite', 'supported_backends', 'use_backend', - 'imdenormalize', 'imnormalize', 'imnormalize_', 'iminvert', 'posterize', - 'solarize', 'rgb2ycbcr', 'bgr2ycbcr', 'ycbcr2rgb', 'ycbcr2bgr', - 'tensor2imgs', 'imshear', 'imtranslate', 'adjust_color', 'imequalize', - 'adjust_brightness', 'adjust_contrast', 'lut_transform', 'clahe', - 'adjust_sharpness', 'auto_contrast', 'cutout', 'adjust_lighting' -] diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/image/colorspace.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/image/colorspace.py deleted file mode 100644 index 814533952fdfda23d67cb6a3073692d8c1156add..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/image/colorspace.py +++ /dev/null @@ -1,306 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import cv2 -import numpy as np - - -def imconvert(img, src, dst): - """Convert an image from the src colorspace to dst colorspace. - - Args: - img (ndarray): The input image. - src (str): The source colorspace, e.g., 'rgb', 'hsv'. - dst (str): The destination colorspace, e.g., 'rgb', 'hsv'. - - Returns: - ndarray: The converted image. - """ - code = getattr(cv2, f'COLOR_{src.upper()}2{dst.upper()}') - out_img = cv2.cvtColor(img, code) - return out_img - - -def bgr2gray(img, keepdim=False): - """Convert a BGR image to grayscale image. - - Args: - img (ndarray): The input image. - keepdim (bool): If False (by default), then return the grayscale image - with 2 dims, otherwise 3 dims. - - Returns: - ndarray: The converted grayscale image. - """ - out_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) - if keepdim: - out_img = out_img[..., None] - return out_img - - -def rgb2gray(img, keepdim=False): - """Convert a RGB image to grayscale image. - - Args: - img (ndarray): The input image. - keepdim (bool): If False (by default), then return the grayscale image - with 2 dims, otherwise 3 dims. - - Returns: - ndarray: The converted grayscale image. - """ - out_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) - if keepdim: - out_img = out_img[..., None] - return out_img - - -def gray2bgr(img): - """Convert a grayscale image to BGR image. - - Args: - img (ndarray): The input image. - - Returns: - ndarray: The converted BGR image. 
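-
-    Example (illustrative):
-        >>> gray = np.zeros((4, 4), dtype=np.uint8)
-        >>> gray2bgr(gray).shape
-        (4, 4, 3)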
- """ - img = img[..., None] if img.ndim == 2 else img - out_img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) - return out_img - - -def gray2rgb(img): - """Convert a grayscale image to RGB image. - - Args: - img (ndarray): The input image. - - Returns: - ndarray: The converted RGB image. - """ - img = img[..., None] if img.ndim == 2 else img - out_img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) - return out_img - - -def _convert_input_type_range(img): - """Convert the type and range of the input image. - - It converts the input image to np.float32 type and range of [0, 1]. - It is mainly used for pre-processing the input image in colorspace - conversion functions such as rgb2ycbcr and ycbcr2rgb. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - - Returns: - (ndarray): The converted image with type of np.float32 and range of - [0, 1]. - """ - img_type = img.dtype - img = img.astype(np.float32) - if img_type == np.float32: - pass - elif img_type == np.uint8: - img /= 255. - else: - raise TypeError('The img type should be np.float32 or np.uint8, ' - f'but got {img_type}') - return img - - -def _convert_output_type_range(img, dst_type): - """Convert the type and range of the image according to dst_type. - - It converts the image to desired type and range. If `dst_type` is np.uint8, - images will be converted to np.uint8 type with range [0, 255]. If - `dst_type` is np.float32, it converts the image to np.float32 type with - range [0, 1]. - It is mainly used for post-processing images in colorspace conversion - functions such as rgb2ycbcr and ycbcr2rgb. - - Args: - img (ndarray): The image to be converted with np.float32 type and - range [0, 255]. - dst_type (np.uint8 | np.float32): If dst_type is np.uint8, it - converts the image to np.uint8 type with range [0, 255]. If - dst_type is np.float32, it converts the image to np.float32 type - with range [0, 1]. - - Returns: - (ndarray): The converted image with desired type and range. - """ - if dst_type not in (np.uint8, np.float32): - raise TypeError('The dst_type should be np.float32 or np.uint8, ' - f'but got {dst_type}') - if dst_type == np.uint8: - img = img.round() - else: - img /= 255. - return img.astype(dst_type) - - -def rgb2ycbcr(img, y_only=False): - """Convert a RGB image to YCbCr image. - - This function produces the same results as Matlab's `rgb2ycbcr` function. - It implements the ITU-R BT.601 conversion for standard-definition - television. See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `RGB <-> YCrCb`. - In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - y_only (bool): Whether to only return Y channel. Default: False. - - Returns: - ndarray: The converted YCbCr image. The output image has the same type - and range as input image. - """ - img_type = img.dtype - img = _convert_input_type_range(img) - if y_only: - out_img = np.dot(img, [65.481, 128.553, 24.966]) + 16.0 - else: - out_img = np.matmul( - img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786], - [24.966, 112.0, -18.214]]) + [16, 128, 128] - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def bgr2ycbcr(img, y_only=False): - """Convert a BGR image to YCbCr image. 
- - The bgr version of rgb2ycbcr. - It implements the ITU-R BT.601 conversion for standard-definition - television. See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `BGR <-> YCrCb`. - In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - y_only (bool): Whether to only return Y channel. Default: False. - - Returns: - ndarray: The converted YCbCr image. The output image has the same type - and range as input image. - """ - img_type = img.dtype - img = _convert_input_type_range(img) - if y_only: - out_img = np.dot(img, [24.966, 128.553, 65.481]) + 16.0 - else: - out_img = np.matmul( - img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786], - [65.481, -37.797, 112.0]]) + [16, 128, 128] - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def ycbcr2rgb(img): - """Convert a YCbCr image to RGB image. - - This function produces the same results as Matlab's ycbcr2rgb function. - It implements the ITU-R BT.601 conversion for standard-definition - television. See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `YCrCb <-> RGB`. - In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - - Returns: - ndarray: The converted RGB image. The output image has the same type - and range as input image. - """ - img_type = img.dtype - img = _convert_input_type_range(img) * 255 - out_img = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], - [0, -0.00153632, 0.00791071], - [0.00625893, -0.00318811, 0]]) * 255.0 + [ - -222.921, 135.576, -276.836 - ] - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def ycbcr2bgr(img): - """Convert a YCbCr image to BGR image. - - The bgr version of ycbcr2rgb. - It implements the ITU-R BT.601 conversion for standard-definition - television. See more details in - https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion. - - It differs from a similar function in cv2.cvtColor: `YCrCb <-> BGR`. - In OpenCV, it implements a JPEG conversion. See more details in - https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion. - - Args: - img (ndarray): The input image. It accepts: - 1. np.uint8 type with range [0, 255]; - 2. np.float32 type with range [0, 1]. - - Returns: - ndarray: The converted BGR image. The output image has the same type - and range as input image. - """ - img_type = img.dtype - img = _convert_input_type_range(img) * 255 - out_img = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], - [0.00791071, -0.00153632, 0], - [0, -0.00318811, 0.00625893]]) * 255.0 + [ - -276.836, 135.576, -222.921 - ] - out_img = _convert_output_type_range(out_img, img_type) - return out_img - - -def convert_color_factory(src, dst): - - code = getattr(cv2, f'COLOR_{src.upper()}2{dst.upper()}') - - def convert_color(img): - out_img = cv2.cvtColor(img, code) - return out_img - - convert_color.__doc__ = f"""Convert a {src.upper()} image to {dst.upper()} - image. - - Args: - img (ndarray or str): The input image. - - Returns: - ndarray: The converted {dst.upper()} image. 
- """ - - return convert_color - - -bgr2rgb = convert_color_factory('bgr', 'rgb') - -rgb2bgr = convert_color_factory('rgb', 'bgr') - -bgr2hsv = convert_color_factory('bgr', 'hsv') - -hsv2bgr = convert_color_factory('hsv', 'bgr') - -bgr2hls = convert_color_factory('bgr', 'hls') - -hls2bgr = convert_color_factory('hls', 'bgr') diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/ops/upfirdn2d.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/ops/upfirdn2d.py deleted file mode 100644 index c8bb2c3c949eed38a6465ed369fa881538dca010..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/ops/upfirdn2d.py +++ /dev/null @@ -1,330 +0,0 @@ -# modified from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/upfirdn2d.py # noqa:E501 - -# Copyright (c) 2021, NVIDIA Corporation. All rights reserved. -# NVIDIA Source Code License for StyleGAN2 with Adaptive Discriminator -# Augmentation (ADA) -# ======================================================================= - -# 1. Definitions - -# "Licensor" means any person or entity that distributes its Work. - -# "Software" means the original work of authorship made available under -# this License. - -# "Work" means the Software and any additions to or derivative works of -# the Software that are made available under this License. - -# The terms "reproduce," "reproduction," "derivative works," and -# "distribution" have the meaning as provided under U.S. copyright law; -# provided, however, that for the purposes of this License, derivative -# works shall not include works that remain separable from, or merely -# link (or bind by name) to the interfaces of, the Work. - -# Works, including the Software, are "made available" under this License -# by including in or with the Work either (a) a copyright notice -# referencing the applicability of this License to the Work, or (b) a -# copy of this License. - -# 2. License Grants - -# 2.1 Copyright Grant. Subject to the terms and conditions of this -# License, each Licensor grants to you a perpetual, worldwide, -# non-exclusive, royalty-free, copyright license to reproduce, -# prepare derivative works of, publicly display, publicly perform, -# sublicense and distribute its Work and any resulting derivative -# works in any form. - -# 3. Limitations - -# 3.1 Redistribution. You may reproduce or distribute the Work only -# if (a) you do so under this License, (b) you include a complete -# copy of this License with your distribution, and (c) you retain -# without modification any copyright, patent, trademark, or -# attribution notices that are present in the Work. - -# 3.2 Derivative Works. You may specify that additional or different -# terms apply to the use, reproduction, and distribution of your -# derivative works of the Work ("Your Terms") only if (a) Your Terms -# provide that the use limitation in Section 3.3 applies to your -# derivative works, and (b) you identify the specific derivative -# works that are subject to Your Terms. Notwithstanding Your Terms, -# this License (including the redistribution requirements in Section -# 3.1) will continue to apply to the Work itself. - -# 3.3 Use Limitation. The Work and any derivative works thereof only -# may be used or intended for use non-commercially. Notwithstanding -# the foregoing, NVIDIA and its affiliates may use the Work and any -# derivative works commercially. 
As used herein, "non-commercially" -# means for research or evaluation purposes only. - -# 3.4 Patent Claims. If you bring or threaten to bring a patent claim -# against any Licensor (including any claim, cross-claim or -# counterclaim in a lawsuit) to enforce any patents that you allege -# are infringed by any Work, then your rights under this License from -# such Licensor (including the grant in Section 2.1) will terminate -# immediately. - -# 3.5 Trademarks. This License does not grant any rights to use any -# Licensor’s or its affiliates’ names, logos, or trademarks, except -# as necessary to reproduce the notices described in this License. - -# 3.6 Termination. If you violate any term of this License, then your -# rights under this License (including the grant in Section 2.1) will -# terminate immediately. - -# 4. Disclaimer of Warranty. - -# THE WORK IS PROVIDED "AS IS" WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OR CONDITIONS OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE OR -# NON-INFRINGEMENT. YOU BEAR THE RISK OF UNDERTAKING ANY ACTIVITIES UNDER -# THIS LICENSE. - -# 5. Limitation of Liability. - -# EXCEPT AS PROHIBITED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL -# THEORY, WHETHER IN TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE -# SHALL ANY LICENSOR BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY DIRECT, -# INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF -# OR RELATED TO THIS LICENSE, THE USE OR INABILITY TO USE THE WORK -# (INCLUDING BUT NOT LIMITED TO LOSS OF GOODWILL, BUSINESS INTERRUPTION, -# LOST PROFITS OR DATA, COMPUTER FAILURE OR MALFUNCTION, OR ANY OTHER -# COMMERCIAL DAMAGES OR LOSSES), EVEN IF THE LICENSOR HAS BEEN ADVISED OF -# THE POSSIBILITY OF SUCH DAMAGES. 
- -# ======================================================================= - -import torch -from torch.autograd import Function -from torch.nn import functional as F - -from annotator.uniformer.mmcv.utils import to_2tuple -from ..utils import ext_loader - -upfirdn2d_ext = ext_loader.load_ext('_ext', ['upfirdn2d']) - - -class UpFirDn2dBackward(Function): - - @staticmethod - def forward(ctx, grad_output, kernel, grad_kernel, up, down, pad, g_pad, - in_size, out_size): - - up_x, up_y = up - down_x, down_y = down - g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1 = g_pad - - grad_output = grad_output.reshape(-1, out_size[0], out_size[1], 1) - - grad_input = upfirdn2d_ext.upfirdn2d( - grad_output, - grad_kernel, - up_x=down_x, - up_y=down_y, - down_x=up_x, - down_y=up_y, - pad_x0=g_pad_x0, - pad_x1=g_pad_x1, - pad_y0=g_pad_y0, - pad_y1=g_pad_y1) - grad_input = grad_input.view(in_size[0], in_size[1], in_size[2], - in_size[3]) - - ctx.save_for_backward(kernel) - - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - ctx.up_x = up_x - ctx.up_y = up_y - ctx.down_x = down_x - ctx.down_y = down_y - ctx.pad_x0 = pad_x0 - ctx.pad_x1 = pad_x1 - ctx.pad_y0 = pad_y0 - ctx.pad_y1 = pad_y1 - ctx.in_size = in_size - ctx.out_size = out_size - - return grad_input - - @staticmethod - def backward(ctx, gradgrad_input): - kernel, = ctx.saved_tensors - - gradgrad_input = gradgrad_input.reshape(-1, ctx.in_size[2], - ctx.in_size[3], 1) - - gradgrad_out = upfirdn2d_ext.upfirdn2d( - gradgrad_input, - kernel, - up_x=ctx.up_x, - up_y=ctx.up_y, - down_x=ctx.down_x, - down_y=ctx.down_y, - pad_x0=ctx.pad_x0, - pad_x1=ctx.pad_x1, - pad_y0=ctx.pad_y0, - pad_y1=ctx.pad_y1) - # gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.out_size[0], - # ctx.out_size[1], ctx.in_size[3]) - gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.in_size[1], - ctx.out_size[0], ctx.out_size[1]) - - return gradgrad_out, None, None, None, None, None, None, None, None - - -class UpFirDn2d(Function): - - @staticmethod - def forward(ctx, input, kernel, up, down, pad): - up_x, up_y = up - down_x, down_y = down - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - kernel_h, kernel_w = kernel.shape - batch, channel, in_h, in_w = input.shape - ctx.in_size = input.shape - - input = input.reshape(-1, in_h, in_w, 1) - - ctx.save_for_backward(kernel, torch.flip(kernel, [0, 1])) - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1 - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1 - ctx.out_size = (out_h, out_w) - - ctx.up = (up_x, up_y) - ctx.down = (down_x, down_y) - ctx.pad = (pad_x0, pad_x1, pad_y0, pad_y1) - - g_pad_x0 = kernel_w - pad_x0 - 1 - g_pad_y0 = kernel_h - pad_y0 - 1 - g_pad_x1 = in_w * up_x - out_w * down_x + pad_x0 - up_x + 1 - g_pad_y1 = in_h * up_y - out_h * down_y + pad_y0 - up_y + 1 - - ctx.g_pad = (g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1) - - out = upfirdn2d_ext.upfirdn2d( - input, - kernel, - up_x=up_x, - up_y=up_y, - down_x=down_x, - down_y=down_y, - pad_x0=pad_x0, - pad_x1=pad_x1, - pad_y0=pad_y0, - pad_y1=pad_y1) - # out = out.view(major, out_h, out_w, minor) - out = out.view(-1, channel, out_h, out_w) - - return out - - @staticmethod - def backward(ctx, grad_output): - kernel, grad_kernel = ctx.saved_tensors - - grad_input = UpFirDn2dBackward.apply( - grad_output, - kernel, - grad_kernel, - ctx.up, - ctx.down, - ctx.pad, - ctx.g_pad, - ctx.in_size, - ctx.out_size, - ) - - return grad_input, None, None, None, None - - -def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)): - """UpFRIDn for 2d features. 
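-
-    Example (CPU path, illustrative):
-        >>> x = torch.randn(1, 3, 8, 8)
-        >>> k = torch.ones(3, 3) / 9.0
-        >>> upfirdn2d(x, k, up=2, down=1, pad=(1, 1)).shape
-        torch.Size([1, 3, 16, 16])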
- - UpFIRDn is short for upsample, apply FIR filter and downsample. More - details can be found in: - https://www.mathworks.com/help/signal/ref/upfirdn.html - - Args: - input (Tensor): Tensor with shape of (n, c, h, w). - kernel (Tensor): Filter kernel. - up (int | tuple[int], optional): Upsampling factor. If given a number, - we will use this factor for the both height and width side. - Defaults to 1. - down (int | tuple[int], optional): Downsampling factor. If given a - number, we will use this factor for the both height and width side. - Defaults to 1. - pad (tuple[int], optional): Padding for tensors, (x_pad, y_pad) or - (x_pad_0, x_pad_1, y_pad_0, y_pad_1). Defaults to (0, 0). - - Returns: - Tensor: Tensor after UpFIRDn. - """ - if input.device.type == 'cpu': - if len(pad) == 2: - pad = (pad[0], pad[1], pad[0], pad[1]) - - up = to_2tuple(up) - - down = to_2tuple(down) - - out = upfirdn2d_native(input, kernel, up[0], up[1], down[0], down[1], - pad[0], pad[1], pad[2], pad[3]) - else: - _up = to_2tuple(up) - - _down = to_2tuple(down) - - if len(pad) == 4: - _pad = pad - elif len(pad) == 2: - _pad = (pad[0], pad[1], pad[0], pad[1]) - - out = UpFirDn2d.apply(input, kernel, _up, _down, _pad) - - return out - - -def upfirdn2d_native(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, - pad_y0, pad_y1): - _, channel, in_h, in_w = input.shape - input = input.reshape(-1, in_h, in_w, 1) - - _, in_h, in_w, minor = input.shape - kernel_h, kernel_w = kernel.shape - - out = input.view(-1, in_h, 1, in_w, 1, minor) - out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1]) - out = out.view(-1, in_h * up_y, in_w * up_x, minor) - - out = F.pad( - out, - [0, 0, - max(pad_x0, 0), - max(pad_x1, 0), - max(pad_y0, 0), - max(pad_y1, 0)]) - out = out[:, - max(-pad_y0, 0):out.shape[1] - max(-pad_y1, 0), - max(-pad_x0, 0):out.shape[2] - max(-pad_x1, 0), :, ] - - out = out.permute(0, 3, 1, 2) - out = out.reshape( - [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1]) - w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w) - out = F.conv2d(out, w) - out = out.reshape( - -1, - minor, - in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1, - in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1, - ) - out = out.permute(0, 2, 3, 1) - out = out[:, ::down_y, ::down_x, :] - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1 - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1 - - return out.view(-1, channel, out_h, out_w) diff --git a/spaces/godot-demo/godot-3d-voxel/README.md b/spaces/godot-demo/godot-3d-voxel/README.md deleted file mode 100644 index b4eca11d82ddbafa831284bd2684a7cc37992db9..0000000000000000000000000000000000000000 --- a/spaces/godot-demo/godot-3d-voxel/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Godot 3d Voxel -emoji: 🌍 -colorFrom: red -colorTo: blue -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/gradio/HuBERT/fairseq/modules/unfold.py b/spaces/gradio/HuBERT/fairseq/modules/unfold.py deleted file mode 100644 index 138272f1ef4f673b29e36aed4531106f7ce95968..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/modules/unfold.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
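-
-# Usage sketch (illustrative): unfold1d views a (T, B, C) sequence as
-# overlapping windows of size K without copying data, via as_strided:
-#
-#     x = torch.randn(10, 2, 4)                  # T x B x C
-#     w = unfold1d(x, kernel_size=3, padding_l=1)
-#     assert w.shape == (10, 2, 4, 3)            # T x B x C x K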
- -import torch.nn.functional as F - - -def unfold1d(x, kernel_size, padding_l, pad_value=0): - """unfold T x B x C to T x B x C x K""" - if kernel_size > 1: - T, B, C = x.size() - x = F.pad( - x, (0, 0, 0, 0, padding_l, kernel_size - 1 - padding_l), value=pad_value - ) - x = x.as_strided((T, B, C, kernel_size), (B * C, C, 1, B * C)) - else: - x = x.unsqueeze(3) - return x diff --git a/spaces/gradio/HuBERT/tests/test_concat_dataset.py b/spaces/gradio/HuBERT/tests/test_concat_dataset.py deleted file mode 100644 index d94aeffd481a2e107eb5747e41d76435b3f3dc8a..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/tests/test_concat_dataset.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import unittest - -import torch -from fairseq.data import LanguagePairDataset, TokenBlockDataset -from fairseq.data.concat_dataset import ConcatDataset -from tests.test_train import mock_dict - - -class TestConcatDataset(unittest.TestCase): - def setUp(self): - d = mock_dict() - tokens_1 = torch.LongTensor([1]).view(1, -1) - tokens_ds1 = TokenBlockDataset( - tokens_1, - sizes=[tokens_1.size(-1)], - block_size=1, - pad=0, - eos=1, - include_targets=False, - ) - self.dataset_1 = LanguagePairDataset( - tokens_ds1, tokens_ds1.sizes, d, shuffle=False - ) - tokens_2 = torch.LongTensor([2]).view(1, -1) - tokens_ds2 = TokenBlockDataset( - tokens_2, - sizes=[tokens_2.size(-1)], - block_size=1, - pad=0, - eos=1, - include_targets=False, - ) - self.dataset_2 = LanguagePairDataset( - tokens_ds2, tokens_ds2.sizes, d, shuffle=False - ) - - def test_concat_dataset_basics(self): - d = ConcatDataset([self.dataset_1, self.dataset_2]) - assert len(d) == 2 - assert d[0]["source"][0] == 1 - assert d[1]["source"][0] == 2 - - d = ConcatDataset([self.dataset_1, self.dataset_2], sample_ratios=[1, 2]) - assert len(d) == 3 - assert d[0]["source"][0] == 1 - assert d[1]["source"][0] == 2 - assert d[2]["source"][0] == 2 - - d = ConcatDataset([self.dataset_1, self.dataset_2], sample_ratios=[2, 1]) - assert len(d) == 3 - assert d[0]["source"][0] == 1 - assert d[1]["source"][0] == 1 - assert d[2]["source"][0] == 2 diff --git a/spaces/gradio/longformer/tvm/ndarray.py b/spaces/gradio/longformer/tvm/ndarray.py deleted file mode 100644 index 9a00f78eb77fa6f591396caffd8e1b430a11d37b..0000000000000000000000000000000000000000 --- a/spaces/gradio/longformer/tvm/ndarray.py +++ /dev/null @@ -1,232 +0,0 @@ -# Licensed to the Apache Software Foundation (ASF) under one -# or more contributor license agreements. See the NOTICE file -# distributed with this work for additional information -# regarding copyright ownership. The ASF licenses this file -# to you under the Apache License, Version 2.0 (the -# "License"); you may not use this file except in compliance -# with the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. -"""TVM Runtime NDArray API. - -tvm.ndarray provides a minimum runtime array API to test -the correctness of the program. 
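-
-Minimal usage sketch (illustrative, assumes a CPU build of TVM where this
-module is exposed as tvm.nd):
-
-    import numpy as np
-    import tvm
-    a = tvm.nd.array(np.zeros((2, 3), dtype="float32"))
-    assert a.shape == (2, 3)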
- """
-# pylint: disable=invalid-name,unused-import
-from __future__ import absolute_import as _abs
-import numpy as _np
-
-from ._ffi.ndarray import TVMContext, TVMType, NDArrayBase
-from ._ffi.ndarray import context, empty, from_dlpack
-from ._ffi.ndarray import _set_class_ndarray
-from ._ffi.ndarray import register_extension, free_extension_handle
-
-class NDArray(NDArrayBase):
-    """Lightweight NDArray class of TVM runtime.
-
-    Strictly this is only an Array Container (a buffer object).
-    No arithmetic operations are defined.
-    All operations are performed by TVM functions.
-
-    The goal is not to re-build yet another array library.
-    Instead, this is a minimal data structure to demonstrate
-    how we can use TVM in an existing project which might have its own array containers.
-    """
-
-
-def cpu(dev_id=0):
-    """Construct a CPU device
-
-    Parameters
-    ----------
-    dev_id : int, optional
-        The integer device id
-
-    Returns
-    -------
-    ctx : TVMContext
-        The created context
-    """
-    return TVMContext(1, dev_id)
-
-
-def gpu(dev_id=0):
-    """Construct a GPU device
-
-    Parameters
-    ----------
-    dev_id : int, optional
-        The integer device id
-
-    Returns
-    -------
-    ctx : TVMContext
-        The created context
-    """
-    return TVMContext(2, dev_id)
-
-def rocm(dev_id=0):
-    """Construct a ROCM device
-
-    Parameters
-    ----------
-    dev_id : int, optional
-        The integer device id
-
-    Returns
-    -------
-    ctx : TVMContext
-        The created context
-    """
-    return TVMContext(10, dev_id)
-
-
-def opencl(dev_id=0):
-    """Construct an OpenCL device
-
-    Parameters
-    ----------
-    dev_id : int, optional
-        The integer device id
-
-    Returns
-    -------
-    ctx : TVMContext
-        The created context
-    """
-    return TVMContext(4, dev_id)
-
-
-def metal(dev_id=0):
-    """Construct a Metal device
-
-    Parameters
-    ----------
-    dev_id : int, optional
-        The integer device id
-
-    Returns
-    -------
-    ctx : TVMContext
-        The created context
-    """
-    return TVMContext(8, dev_id)
-
-
-def vpi(dev_id=0):
-    """Construct a VPI simulated device
-
-    Parameters
-    ----------
-    dev_id : int, optional
-        The integer device id
-
-    Returns
-    -------
-    ctx : TVMContext
-        The created context
-    """
-    return TVMContext(9, dev_id)
-
-
-def vulkan(dev_id=0):
-    """Construct a Vulkan device
-
-    Parameters
-    ----------
-    dev_id : int, optional
-        The integer device id
-
-    Returns
-    -------
-    ctx : TVMContext
-        The created context
-    """
-    return TVMContext(7, dev_id)
-
-
-def opengl(dev_id=0):
-    """Construct an OpenGL device
-
-    Parameters
-    ----------
-    dev_id : int, optional
-        The integer device id
-
-    Returns
-    -------
-    ctx : TVMContext
-        The created context
-    """
-    return TVMContext(11, dev_id)
-
-
-def ext_dev(dev_id=0):
-    """Construct an extension device
-
-    Parameters
-    ----------
-    dev_id : int, optional
-        The integer device id
-
-    Returns
-    -------
-    ctx : TVMContext
-        The created context
-
-    Note
-    ----
-    This API is reserved for quick testing of a new
-    device exposed through the plugin device API as ext_dev.
-    """
-    return TVMContext(12, dev_id)
-
-
-def micro_dev(dev_id=0):
-    """Construct a micro device
-
-    Parameters
-    ----------
-    dev_id : int, optional
-        The integer device id
-
-    Returns
-    -------
-    ctx : TVMContext
-        The created context
-    """
-    return TVMContext(13, dev_id)
-
-
-cl = opencl
-mtl = metal
-
-
-def array(arr, ctx=cpu(0)):
-    """Create an array from source arr.
- - Parameters - ---------- - arr : numpy.ndarray - The array to be copied from - - ctx : TVMContext, optional - The device context to create the array - - Returns - ------- - ret : NDArray - The created array - """ - if not isinstance(arr, (_np.ndarray, NDArray)): - arr = _np.array(arr) - return empty(arr.shape, arr.dtype, ctx).copyfrom(arr) - -_set_class_ndarray(NDArray) diff --git a/spaces/h2oai/wave-tour/examples/textbox.py b/spaces/h2oai/wave-tour/examples/textbox.py deleted file mode 100644 index 39c29bdee321e568ee2811b5c1e198351362ea3f..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/textbox.py +++ /dev/null @@ -1,45 +0,0 @@ -# Form / Textbox -# Use a #textbox to allow users to provide text inputs. -# #form -# --- -from h2o_wave import main, app, Q, ui - - -@app('/demo') -async def serve(q: Q): - if q.args.show_inputs: - q.page['example'].items = [ - ui.text(f'textbox={q.args.textbox}'), - ui.text(f'textbox_disabled={q.args.textbox_disabled}'), - ui.text(f'textbox_readonly={q.args.textbox_readonly}'), - ui.text(f'textbox_required={q.args.textbox_required}'), - ui.text(f'textbox_error={q.args.textbox_error}'), - ui.text(f'textbox_mask={q.args.textbox_mask}'), - ui.text(f'textbox_icon={q.args.textbox_icon}'), - ui.text(f'textbox_prefix={q.args.textbox_prefix}'), - ui.text(f'textbox_suffix={q.args.textbox_suffix}'), - ui.text(f'textbox_placeholder={q.args.textbox_placeholder}'), - ui.text(f'textbox_disabled_placeholder={q.args.textbox_disabled_placeholder}'), - ui.text(f'textbox_multiline={q.args.textbox_multiline}'), - ui.text(f'textbox_spellcheck_disabled={q.args.textbox_spellcheck_disabled}'), - ui.button(name='show_form', label='Back', primary=True), - ] - else: - q.page['example'] = ui.form_card(box='1 1 -1 -1', items=[ - ui.textbox(name='textbox', label='Standard'), - ui.textbox(name='textbox_disabled', label='Disabled', value='I am disabled', disabled=True), - ui.textbox(name='textbox_readonly', label='Read-only', value='I am read-only', readonly=True), - ui.textbox(name='textbox_required', label='Required', required=True), - ui.textbox(name='textbox_error', label='With error message', error='I have an error'), - ui.textbox(name='textbox_mask', label='With input mask', mask='(999) 999 - 9999'), - ui.textbox(name='textbox_icon', label='With icon', icon='Calendar'), - ui.textbox(name='textbox_prefix', label='With prefix', prefix='http://'), - ui.textbox(name='textbox_suffix', label='With suffix', suffix='@h2o.ai'), - ui.textbox(name='textbox_placeholder', label='With placeholder', placeholder='I need some input'), - ui.textbox(name='textbox_disabled_placeholder', label='Disabled with placeholder', disabled=True, - placeholder='I am disabled'), - ui.textbox(name='textbox_multiline', label='Multiline textarea', multiline=True), - ui.textbox(name='textbox_spellcheck_disabled', label='Spellcheck disabled', spellcheck=False), - ui.button(name='show_inputs', label='Submit', primary=True), - ]) - await q.page.save() diff --git a/spaces/hackathon-pln-es/DemoAcosoTwitter/README.md b/spaces/hackathon-pln-es/DemoAcosoTwitter/README.md deleted file mode 100644 index 16cb44f4fadf3e5e277ad472540cee6e58616eb9..0000000000000000000000000000000000000000 --- a/spaces/hackathon-pln-es/DemoAcosoTwitter/README.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -title: Demo-Acoso-Twitter -emoji: 👁️‍🗨️💻 -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 2.8.14 -app_file: app.py -pinned: false -license: apache-2.0 -models : hackathon-pln-es/Detect-Acoso-Twitter-Es -datasets: 
hackathon-pln-es/Dataset-Acoso-Twitter-Es
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
-# UNL: Universidad Nacional de Loja
-
-## Team members:
-- Anderson Quizhpe
-- Luis Negrón
-- David Pacheco
-- Bryan Requenes
    -- Paul Pasaca \ No newline at end of file diff --git a/spaces/hamacojr/SAM-CAT-Seg/cat_seg/data/datasets/register_coco_stuff.py b/spaces/hamacojr/SAM-CAT-Seg/cat_seg/data/datasets/register_coco_stuff.py deleted file mode 100644 index 35c823dee37b1657dc61d1f5beab8c0ecaa98855..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/SAM-CAT-Seg/cat_seg/data/datasets/register_coco_stuff.py +++ /dev/null @@ -1,216 +0,0 @@ -import os - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.data.datasets import load_sem_seg - -COCO_CATEGORIES = [ - {"color": [220, 20, 60], "isthing": 1, "id": 1, "name": "person"}, - {"color": [119, 11, 32], "isthing": 1, "id": 2, "name": "bicycle"}, - {"color": [0, 0, 142], "isthing": 1, "id": 3, "name": "car"}, - {"color": [0, 0, 230], "isthing": 1, "id": 4, "name": "motorcycle"}, - {"color": [106, 0, 228], "isthing": 1, "id": 5, "name": "airplane"}, - {"color": [0, 60, 100], "isthing": 1, "id": 6, "name": "bus"}, - {"color": [0, 80, 100], "isthing": 1, "id": 7, "name": "train"}, - {"color": [0, 0, 70], "isthing": 1, "id": 8, "name": "truck"}, - {"color": [0, 0, 192], "isthing": 1, "id": 9, "name": "boat"}, - {"color": [250, 170, 30], "isthing": 1, "id": 10, "name": "traffic light"}, - {"color": [100, 170, 30], "isthing": 1, "id": 11, "name": "fire hydrant"}, - {"color": [220, 220, 0], "isthing": 1, "id": 13, "name": "stop sign"}, - {"color": [175, 116, 175], "isthing": 1, "id": 14, "name": "parking meter"}, - {"color": [250, 0, 30], "isthing": 1, "id": 15, "name": "bench"}, - {"color": [165, 42, 42], "isthing": 1, "id": 16, "name": "bird"}, - {"color": [255, 77, 255], "isthing": 1, "id": 17, "name": "cat"}, - {"color": [0, 226, 252], "isthing": 1, "id": 18, "name": "dog"}, - {"color": [182, 182, 255], "isthing": 1, "id": 19, "name": "horse"}, - {"color": [0, 82, 0], "isthing": 1, "id": 20, "name": "sheep"}, - {"color": [120, 166, 157], "isthing": 1, "id": 21, "name": "cow"}, - {"color": [110, 76, 0], "isthing": 1, "id": 22, "name": "elephant"}, - {"color": [174, 57, 255], "isthing": 1, "id": 23, "name": "bear"}, - {"color": [199, 100, 0], "isthing": 1, "id": 24, "name": "zebra"}, - {"color": [72, 0, 118], "isthing": 1, "id": 25, "name": "giraffe"}, - {"color": [255, 179, 240], "isthing": 1, "id": 27, "name": "backpack"}, - {"color": [0, 125, 92], "isthing": 1, "id": 28, "name": "umbrella"}, - {"color": [209, 0, 151], "isthing": 1, "id": 31, "name": "handbag"}, - {"color": [188, 208, 182], "isthing": 1, "id": 32, "name": "tie"}, - {"color": [0, 220, 176], "isthing": 1, "id": 33, "name": "suitcase"}, - {"color": [255, 99, 164], "isthing": 1, "id": 34, "name": "frisbee"}, - {"color": [92, 0, 73], "isthing": 1, "id": 35, "name": "skis"}, - {"color": [133, 129, 255], "isthing": 1, "id": 36, "name": "snowboard"}, - {"color": [78, 180, 255], "isthing": 1, "id": 37, "name": "sports ball"}, - {"color": [0, 228, 0], "isthing": 1, "id": 38, "name": "kite"}, - {"color": [174, 255, 243], "isthing": 1, "id": 39, "name": "baseball bat"}, - {"color": [45, 89, 255], "isthing": 1, "id": 40, "name": "baseball glove"}, - {"color": [134, 134, 103], "isthing": 1, "id": 41, "name": "skateboard"}, - {"color": [145, 148, 174], "isthing": 1, "id": 42, "name": "surfboard"}, - {"color": [255, 208, 186], "isthing": 1, "id": 43, "name": "tennis racket"}, - {"color": [197, 226, 255], "isthing": 1, "id": 44, "name": "bottle"}, - {"color": [171, 134, 1], "isthing": 1, "id": 46, "name": "wine glass"}, - {"color": [109, 63, 54], "isthing": 1, "id": 47, "name": 
"cup"}, - {"color": [207, 138, 255], "isthing": 1, "id": 48, "name": "fork"}, - {"color": [151, 0, 95], "isthing": 1, "id": 49, "name": "knife"}, - {"color": [9, 80, 61], "isthing": 1, "id": 50, "name": "spoon"}, - {"color": [84, 105, 51], "isthing": 1, "id": 51, "name": "bowl"}, - {"color": [74, 65, 105], "isthing": 1, "id": 52, "name": "banana"}, - {"color": [166, 196, 102], "isthing": 1, "id": 53, "name": "apple"}, - {"color": [208, 195, 210], "isthing": 1, "id": 54, "name": "sandwich"}, - {"color": [255, 109, 65], "isthing": 1, "id": 55, "name": "orange"}, - {"color": [0, 143, 149], "isthing": 1, "id": 56, "name": "broccoli"}, - {"color": [179, 0, 194], "isthing": 1, "id": 57, "name": "carrot"}, - {"color": [209, 99, 106], "isthing": 1, "id": 58, "name": "hot dog"}, - {"color": [5, 121, 0], "isthing": 1, "id": 59, "name": "pizza"}, - {"color": [227, 255, 205], "isthing": 1, "id": 60, "name": "donut"}, - {"color": [147, 186, 208], "isthing": 1, "id": 61, "name": "cake"}, - {"color": [153, 69, 1], "isthing": 1, "id": 62, "name": "chair"}, - {"color": [3, 95, 161], "isthing": 1, "id": 63, "name": "couch"}, - {"color": [163, 255, 0], "isthing": 1, "id": 64, "name": "potted plant"}, - {"color": [119, 0, 170], "isthing": 1, "id": 65, "name": "bed"}, - {"color": [0, 182, 199], "isthing": 1, "id": 67, "name": "dining table"}, - {"color": [0, 165, 120], "isthing": 1, "id": 70, "name": "toilet"}, - {"color": [183, 130, 88], "isthing": 1, "id": 72, "name": "tv"}, - {"color": [95, 32, 0], "isthing": 1, "id": 73, "name": "laptop"}, - {"color": [130, 114, 135], "isthing": 1, "id": 74, "name": "mouse"}, - {"color": [110, 129, 133], "isthing": 1, "id": 75, "name": "remote"}, - {"color": [166, 74, 118], "isthing": 1, "id": 76, "name": "keyboard"}, - {"color": [219, 142, 185], "isthing": 1, "id": 77, "name": "cell phone"}, - {"color": [79, 210, 114], "isthing": 1, "id": 78, "name": "microwave"}, - {"color": [178, 90, 62], "isthing": 1, "id": 79, "name": "oven"}, - {"color": [65, 70, 15], "isthing": 1, "id": 80, "name": "toaster"}, - {"color": [127, 167, 115], "isthing": 1, "id": 81, "name": "sink"}, - {"color": [59, 105, 106], "isthing": 1, "id": 82, "name": "refrigerator"}, - {"color": [142, 108, 45], "isthing": 1, "id": 84, "name": "book"}, - {"color": [196, 172, 0], "isthing": 1, "id": 85, "name": "clock"}, - {"color": [95, 54, 80], "isthing": 1, "id": 86, "name": "vase"}, - {"color": [128, 76, 255], "isthing": 1, "id": 87, "name": "scissors"}, - {"color": [201, 57, 1], "isthing": 1, "id": 88, "name": "teddy bear"}, - {"color": [246, 0, 122], "isthing": 1, "id": 89, "name": "hair drier"}, - {"color": [191, 162, 208], "isthing": 1, "id": 90, "name": "toothbrush"}, - {"id": 92, "name": "banner", "supercategory": "textile"}, - {"id": 93, "name": "blanket", "supercategory": "textile"}, - {"id": 94, "name": "branch", "supercategory": "plant"}, - {"id": 95, "name": "bridge", "supercategory": "building"}, - {"id": 96, "name": "building-other", "supercategory": "building"}, - {"id": 97, "name": "bush", "supercategory": "plant"}, - {"id": 98, "name": "cabinet", "supercategory": "furniture-stuff"}, - {"id": 99, "name": "cage", "supercategory": "structural"}, - {"id": 100, "name": "cardboard", "supercategory": "raw-material"}, - {"id": 101, "name": "carpet", "supercategory": "floor"}, - {"id": 102, "name": "ceiling-other", "supercategory": "ceiling"}, - {"id": 103, "name": "ceiling-tile", "supercategory": "ceiling"}, - {"id": 104, "name": "cloth", "supercategory": "textile"}, - {"id": 105, "name": "clothes", 
"supercategory": "textile"}, - {"id": 106, "name": "clouds", "supercategory": "sky"}, - {"id": 107, "name": "counter", "supercategory": "furniture-stuff"}, - {"id": 108, "name": "cupboard", "supercategory": "furniture-stuff"}, - {"id": 109, "name": "curtain", "supercategory": "textile"}, - {"id": 110, "name": "desk-stuff", "supercategory": "furniture-stuff"}, - {"id": 111, "name": "dirt", "supercategory": "ground"}, - {"id": 112, "name": "door-stuff", "supercategory": "furniture-stuff"}, - {"id": 113, "name": "fence", "supercategory": "structural"}, - {"id": 114, "name": "floor-marble", "supercategory": "floor"}, - {"id": 115, "name": "floor-other", "supercategory": "floor"}, - {"id": 116, "name": "floor-stone", "supercategory": "floor"}, - {"id": 117, "name": "floor-tile", "supercategory": "floor"}, - {"id": 118, "name": "floor-wood", "supercategory": "floor"}, - {"id": 119, "name": "flower", "supercategory": "plant"}, - {"id": 120, "name": "fog", "supercategory": "water"}, - {"id": 121, "name": "food-other", "supercategory": "food-stuff"}, - {"id": 122, "name": "fruit", "supercategory": "food-stuff"}, - {"id": 123, "name": "furniture-other", "supercategory": "furniture-stuff"}, - {"id": 124, "name": "grass", "supercategory": "plant"}, - {"id": 125, "name": "gravel", "supercategory": "ground"}, - {"id": 126, "name": "ground-other", "supercategory": "ground"}, - {"id": 127, "name": "hill", "supercategory": "solid"}, - {"id": 128, "name": "house", "supercategory": "building"}, - {"id": 129, "name": "leaves", "supercategory": "plant"}, - {"id": 130, "name": "light", "supercategory": "furniture-stuff"}, - {"id": 131, "name": "mat", "supercategory": "textile"}, - {"id": 132, "name": "metal", "supercategory": "raw-material"}, - {"id": 133, "name": "mirror-stuff", "supercategory": "furniture-stuff"}, - {"id": 134, "name": "moss", "supercategory": "plant"}, - {"id": 135, "name": "mountain", "supercategory": "solid"}, - {"id": 136, "name": "mud", "supercategory": "ground"}, - {"id": 137, "name": "napkin", "supercategory": "textile"}, - {"id": 138, "name": "net", "supercategory": "structural"}, - {"id": 139, "name": "paper", "supercategory": "raw-material"}, - {"id": 140, "name": "pavement", "supercategory": "ground"}, - {"id": 141, "name": "pillow", "supercategory": "textile"}, - {"id": 142, "name": "plant-other", "supercategory": "plant"}, - {"id": 143, "name": "plastic", "supercategory": "raw-material"}, - {"id": 144, "name": "platform", "supercategory": "ground"}, - {"id": 145, "name": "playingfield", "supercategory": "ground"}, - {"id": 146, "name": "railing", "supercategory": "structural"}, - {"id": 147, "name": "railroad", "supercategory": "ground"}, - {"id": 148, "name": "river", "supercategory": "water"}, - {"id": 149, "name": "road", "supercategory": "ground"}, - {"id": 150, "name": "rock", "supercategory": "solid"}, - {"id": 151, "name": "roof", "supercategory": "building"}, - {"id": 152, "name": "rug", "supercategory": "textile"}, - {"id": 153, "name": "salad", "supercategory": "food-stuff"}, - {"id": 154, "name": "sand", "supercategory": "ground"}, - {"id": 155, "name": "sea", "supercategory": "water"}, - {"id": 156, "name": "shelf", "supercategory": "furniture-stuff"}, - {"id": 157, "name": "sky-other", "supercategory": "sky"}, - {"id": 158, "name": "skyscraper", "supercategory": "building"}, - {"id": 159, "name": "snow", "supercategory": "ground"}, - {"id": 160, "name": "solid-other", "supercategory": "solid"}, - {"id": 161, "name": "stairs", "supercategory": "furniture-stuff"}, - 
{"id": 162, "name": "stone", "supercategory": "solid"}, - {"id": 163, "name": "straw", "supercategory": "plant"}, - {"id": 164, "name": "structural-other", "supercategory": "structural"}, - {"id": 165, "name": "table", "supercategory": "furniture-stuff"}, - {"id": 166, "name": "tent", "supercategory": "building"}, - {"id": 167, "name": "textile-other", "supercategory": "textile"}, - {"id": 168, "name": "towel", "supercategory": "textile"}, - {"id": 169, "name": "tree", "supercategory": "plant"}, - {"id": 170, "name": "vegetable", "supercategory": "food-stuff"}, - {"id": 171, "name": "wall-brick", "supercategory": "wall"}, - {"id": 172, "name": "wall-concrete", "supercategory": "wall"}, - {"id": 173, "name": "wall-other", "supercategory": "wall"}, - {"id": 174, "name": "wall-panel", "supercategory": "wall"}, - {"id": 175, "name": "wall-stone", "supercategory": "wall"}, - {"id": 176, "name": "wall-tile", "supercategory": "wall"}, - {"id": 177, "name": "wall-wood", "supercategory": "wall"}, - {"id": 178, "name": "water-other", "supercategory": "water"}, - {"id": 179, "name": "waterdrops", "supercategory": "water"}, - {"id": 180, "name": "window-blind", "supercategory": "window"}, - {"id": 181, "name": "window-other", "supercategory": "window"}, - {"id": 182, "name": "wood", "supercategory": "solid"}, -] - - -def _get_coco_stuff_meta(): - stuff_ids = [k["id"] for k in COCO_CATEGORIES] - assert len(stuff_ids) == 171, len(stuff_ids) - - stuff_dataset_id_to_contiguous_id = {k: i for i, k in enumerate(stuff_ids)} - stuff_classes = [k["name"] for k in COCO_CATEGORIES] - - ret = { - "stuff_dataset_id_to_contiguous_id": stuff_dataset_id_to_contiguous_id, - "stuff_classes": stuff_classes, - } - return ret - -def register_all_coco_stuff_10k(root): - root = os.path.join(root, "coco-stuff") - meta = _get_coco_stuff_meta() - for name, image_dirname, sem_seg_dirname in [ - ("train", "images/train2017", "annotations_detectron2/train2017"), - ("test", "images/val2017", "annotations_detectron2/val2017"), - ]: - image_dir = os.path.join(root, image_dirname) - gt_dir = os.path.join(root, sem_seg_dirname) - name = f"coco_2017_{name}_stuff_all_sem_seg" - DatasetCatalog.register( - name, lambda x=image_dir, y=gt_dir: load_sem_seg(y, x, gt_ext="png", image_ext="jpg") - ) - MetadataCatalog.get(name).set( - image_root=image_dir, - sem_seg_root=gt_dir, - evaluator_type="sem_seg", - ignore_label=255, - **meta, - ) - -_root = os.getenv("DETECTRON2_DATASETS", "datasets") -register_all_coco_stuff_10k(_root) diff --git a/spaces/hamzapehlivan/StyleRes/models/torch_utils/ops/conv2d_gradfix.py b/spaces/hamzapehlivan/StyleRes/models/torch_utils/ops/conv2d_gradfix.py deleted file mode 100644 index 388778fa971d7bc5c64b5fd6c0e5492863ee1c5f..0000000000000000000000000000000000000000 --- a/spaces/hamzapehlivan/StyleRes/models/torch_utils/ops/conv2d_gradfix.py +++ /dev/null @@ -1,198 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. 
- -"""Custom replacement for `torch.nn.functional.conv2d` that supports -arbitrarily high order gradients with zero performance penalty.""" - -import contextlib -import torch - -# pylint: disable=redefined-builtin -# pylint: disable=arguments-differ -# pylint: disable=protected-access - -#---------------------------------------------------------------------------- - -enabled = False # Enable the custom op by setting this to true. -weight_gradients_disabled = False # Forcefully disable computation of gradients with respect to the weights. - -@contextlib.contextmanager -def no_weight_gradients(disable=True): - global weight_gradients_disabled - old = weight_gradients_disabled - if disable: - weight_gradients_disabled = True - yield - weight_gradients_disabled = old - -#---------------------------------------------------------------------------- - -def conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1): - if _should_use_custom_op(input): - return _conv2d_gradfix(transpose=False, weight_shape=weight.shape, stride=stride, padding=padding, output_padding=0, dilation=dilation, groups=groups).apply(input, weight, bias) - return torch.nn.functional.conv2d(input=input, weight=weight, bias=bias, stride=stride, padding=padding, dilation=dilation, groups=groups) - -def conv_transpose2d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1): - if _should_use_custom_op(input): - return _conv2d_gradfix(transpose=True, weight_shape=weight.shape, stride=stride, padding=padding, output_padding=output_padding, groups=groups, dilation=dilation).apply(input, weight, bias) - return torch.nn.functional.conv_transpose2d(input=input, weight=weight, bias=bias, stride=stride, padding=padding, output_padding=output_padding, groups=groups, dilation=dilation) - -#---------------------------------------------------------------------------- - -def _should_use_custom_op(input): - assert isinstance(input, torch.Tensor) - if (not enabled) or (not torch.backends.cudnn.enabled): - return False - if input.device.type != 'cuda': - return False - return True - -def _tuple_of_ints(xs, ndim): - xs = tuple(xs) if isinstance(xs, (tuple, list)) else (xs,) * ndim - assert len(xs) == ndim - assert all(isinstance(x, int) for x in xs) - return xs - -#---------------------------------------------------------------------------- - -_conv2d_gradfix_cache = dict() -_null_tensor = torch.empty([0]) - -def _conv2d_gradfix(transpose, weight_shape, stride, padding, output_padding, dilation, groups): - # Parse arguments. - ndim = 2 - weight_shape = tuple(weight_shape) - stride = _tuple_of_ints(stride, ndim) - padding = _tuple_of_ints(padding, ndim) - output_padding = _tuple_of_ints(output_padding, ndim) - dilation = _tuple_of_ints(dilation, ndim) - - # Lookup from cache. - key = (transpose, weight_shape, stride, padding, output_padding, dilation, groups) - if key in _conv2d_gradfix_cache: - return _conv2d_gradfix_cache[key] - - # Validate arguments. - assert groups >= 1 - assert len(weight_shape) == ndim + 2 - assert all(stride[i] >= 1 for i in range(ndim)) - assert all(padding[i] >= 0 for i in range(ndim)) - assert all(dilation[i] >= 0 for i in range(ndim)) - if not transpose: - assert all(output_padding[i] == 0 for i in range(ndim)) - else: # transpose - assert all(0 <= output_padding[i] < max(stride[i], dilation[i]) for i in range(ndim)) - - # Helpers. 
- common_kwargs = dict(stride=stride, padding=padding, dilation=dilation, groups=groups) - def calc_output_padding(input_shape, output_shape): - if transpose: - return [0, 0] - return [ - input_shape[i + 2] - - (output_shape[i + 2] - 1) * stride[i] - - (1 - 2 * padding[i]) - - dilation[i] * (weight_shape[i + 2] - 1) - for i in range(ndim) - ] - - # Forward & backward. - class Conv2d(torch.autograd.Function): - @staticmethod - def forward(ctx, input, weight, bias): - assert weight.shape == weight_shape - ctx.save_for_backward( - input if weight.requires_grad else _null_tensor, - weight if input.requires_grad else _null_tensor, - ) - ctx.input_shape = input.shape - - # Simple 1x1 convolution => cuBLAS (only on Volta, not on Ampere). - if weight_shape[2:] == stride == dilation == (1, 1) and padding == (0, 0) and torch.cuda.get_device_capability(input.device) < (8, 0): - a = weight.reshape(groups, weight_shape[0] // groups, weight_shape[1]) - b = input.reshape(input.shape[0], groups, input.shape[1] // groups, -1) - c = (a.transpose(1, 2) if transpose else a) @ b.permute(1, 2, 0, 3).flatten(2) - c = c.reshape(-1, input.shape[0], *input.shape[2:]).transpose(0, 1) - c = c if bias is None else c + bias.unsqueeze(0).unsqueeze(2).unsqueeze(3) - return c.contiguous(memory_format=(torch.channels_last if input.stride(1) == 1 else torch.contiguous_format)) - - # General case => cuDNN. - if transpose: - return torch.nn.functional.conv_transpose2d(input=input, weight=weight, bias=bias, output_padding=output_padding, **common_kwargs) - return torch.nn.functional.conv2d(input=input, weight=weight, bias=bias, **common_kwargs) - - @staticmethod - def backward(ctx, grad_output): - input, weight = ctx.saved_tensors - input_shape = ctx.input_shape - grad_input = None - grad_weight = None - grad_bias = None - - if ctx.needs_input_grad[0]: - p = calc_output_padding(input_shape=input_shape, output_shape=grad_output.shape) - op = _conv2d_gradfix(transpose=(not transpose), weight_shape=weight_shape, output_padding=p, **common_kwargs) - grad_input = op.apply(grad_output, weight, None) - assert grad_input.shape == input_shape - - if ctx.needs_input_grad[1] and not weight_gradients_disabled: - grad_weight = Conv2dGradWeight.apply(grad_output, input) - assert grad_weight.shape == weight_shape - - if ctx.needs_input_grad[2]: - grad_bias = grad_output.sum([0, 2, 3]) - - return grad_input, grad_weight, grad_bias - - # Gradient with respect to the weights. - class Conv2dGradWeight(torch.autograd.Function): - @staticmethod - def forward(ctx, grad_output, input): - ctx.save_for_backward( - grad_output if input.requires_grad else _null_tensor, - input if grad_output.requires_grad else _null_tensor, - ) - ctx.grad_output_shape = grad_output.shape - ctx.input_shape = input.shape - - # Simple 1x1 convolution => cuBLAS (on both Volta and Ampere). - if weight_shape[2:] == stride == dilation == (1, 1) and padding == (0, 0): - a = grad_output.reshape(grad_output.shape[0], groups, grad_output.shape[1] // groups, -1).permute(1, 2, 0, 3).flatten(2) - b = input.reshape(input.shape[0], groups, input.shape[1] // groups, -1).permute(1, 2, 0, 3).flatten(2) - c = (b @ a.transpose(1, 2) if transpose else a @ b.transpose(1, 2)).reshape(weight_shape) - return c.contiguous(memory_format=(torch.channels_last if input.stride(1) == 1 else torch.contiguous_format)) - - # General case => cuDNN. 
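# Note: the general case below fetches the cuDNN weight-gradient kernel through
# a TorchScript-resolved aten op; the trailing flags appear to be passed in the
# order (benchmark, deterministic, allow_tf32). These private
# 'aten::cudnn_convolution*_backward_weight' ops were removed in newer PyTorch
# releases, so this branch assumes the older PyTorch versions the file targets.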
- name = 'aten::cudnn_convolution_transpose_backward_weight' if transpose else 'aten::cudnn_convolution_backward_weight' - flags = [torch.backends.cudnn.benchmark, torch.backends.cudnn.deterministic, torch.backends.cudnn.allow_tf32] - return torch._C._jit_get_operation(name)(weight_shape, grad_output, input, padding, stride, dilation, groups, *flags) - - @staticmethod - def backward(ctx, grad2_grad_weight): - grad_output, input = ctx.saved_tensors - grad_output_shape = ctx.grad_output_shape - input_shape = ctx.input_shape - grad2_grad_output = None - grad2_input = None - - if ctx.needs_input_grad[0]: - grad2_grad_output = Conv2d.apply(input, grad2_grad_weight, None) - assert grad2_grad_output.shape == grad_output_shape - - if ctx.needs_input_grad[1]: - p = calc_output_padding(input_shape=input_shape, output_shape=grad_output_shape) - op = _conv2d_gradfix(transpose=(not transpose), weight_shape=weight_shape, output_padding=p, **common_kwargs) - grad2_input = op.apply(grad_output, grad2_grad_weight, None) - assert grad2_input.shape == input_shape - - return grad2_grad_output, grad2_input - - _conv2d_gradfix_cache[key] = Conv2d - return Conv2d - -#---------------------------------------------------------------------------- diff --git a/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/utils/activations.py b/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/utils/activations.py deleted file mode 100644 index e4d4bbde5ec8610a5ff13fe2ef2281721c14ca1a..0000000000000000000000000000000000000000 --- a/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/utils/activations.py +++ /dev/null @@ -1,103 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, AGPL-3.0 license -""" -Activation functions -""" - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class SiLU(nn.Module): - # SiLU activation https://arxiv.org/pdf/1606.08415.pdf - @staticmethod - def forward(x): - return x * torch.sigmoid(x) - - -class Hardswish(nn.Module): - # Hard-SiLU activation - @staticmethod - def forward(x): - # return x * F.hardsigmoid(x) # for TorchScript and CoreML - return x * F.hardtanh(x + 3, 0.0, 6.0) / 6.0 # for TorchScript, CoreML and ONNX - - -class Mish(nn.Module): - # Mish activation https://github.com/digantamisra98/Mish - @staticmethod - def forward(x): - return x * F.softplus(x).tanh() - - -class MemoryEfficientMish(nn.Module): - # Mish activation memory-efficient - class F(torch.autograd.Function): - - @staticmethod - def forward(ctx, x): - ctx.save_for_backward(x) - return x.mul(torch.tanh(F.softplus(x))) # x * tanh(ln(1 + exp(x))) - - @staticmethod - def backward(ctx, grad_output): - x = ctx.saved_tensors[0] - sx = torch.sigmoid(x) - fx = F.softplus(x).tanh() - return grad_output * (fx + x * sx * (1 - fx * fx)) - - def forward(self, x): - return self.F.apply(x) - - -class FReLU(nn.Module): - # FReLU activation https://arxiv.org/abs/2007.11824 - def __init__(self, c1, k=3): # ch_in, kernel - super().__init__() - self.conv = nn.Conv2d(c1, c1, k, 1, 1, groups=c1, bias=False) - self.bn = nn.BatchNorm2d(c1) - - def forward(self, x): - return torch.max(x, self.bn(self.conv(x))) - - -class AconC(nn.Module): - r""" ACON activation (activate or not) - AconC: (p1*x-p2*x) * sigmoid(beta*(p1*x-p2*x)) + p2*x, beta is a learnable parameter - according to "Activate or Not: Learning Customized Activation" . 
- """ - - def __init__(self, c1): - super().__init__() - self.p1 = nn.Parameter(torch.randn(1, c1, 1, 1)) - self.p2 = nn.Parameter(torch.randn(1, c1, 1, 1)) - self.beta = nn.Parameter(torch.ones(1, c1, 1, 1)) - - def forward(self, x): - dpx = (self.p1 - self.p2) * x - return dpx * torch.sigmoid(self.beta * dpx) + self.p2 * x - - -class MetaAconC(nn.Module): - r""" ACON activation (activate or not) - MetaAconC: (p1*x-p2*x) * sigmoid(beta*(p1*x-p2*x)) + p2*x, beta is generated by a small network - according to "Activate or Not: Learning Customized Activation" . - """ - - def __init__(self, c1, k=1, s=1, r=16): # ch_in, kernel, stride, r - super().__init__() - c2 = max(r, c1 // r) - self.p1 = nn.Parameter(torch.randn(1, c1, 1, 1)) - self.p2 = nn.Parameter(torch.randn(1, c1, 1, 1)) - self.fc1 = nn.Conv2d(c1, c2, k, s, bias=True) - self.fc2 = nn.Conv2d(c2, c1, k, s, bias=True) - # self.bn1 = nn.BatchNorm2d(c2) - # self.bn2 = nn.BatchNorm2d(c1) - - def forward(self, x): - y = x.mean(dim=2, keepdims=True).mean(dim=3, keepdims=True) - # batch-size 1 bug/instabilities https://github.com/ultralytics/yolov5/issues/2891 - # beta = torch.sigmoid(self.bn2(self.fc2(self.bn1(self.fc1(y))))) # bug/unstable - beta = torch.sigmoid(self.fc2(self.fc1(y))) # bug patch BN layers removed - dpx = (self.p1 - self.p2) * x - return dpx * torch.sigmoid(beta * dpx) + self.p2 * x diff --git a/spaces/hebert2099/MusicGen/tests/modules/test_transformer.py b/spaces/hebert2099/MusicGen/tests/modules/test_transformer.py deleted file mode 100644 index 8c9953d9e8f139db7b8ce3063e3d5a78d2f5d088..0000000000000000000000000000000000000000 --- a/spaces/hebert2099/MusicGen/tests/modules/test_transformer.py +++ /dev/null @@ -1,247 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from itertools import product - -import pytest -import torch - -from audiocraft.modules.transformer import StreamingMultiheadAttention, StreamingTransformer - - -def test_transformer_causal_streaming(): - torch.manual_seed(1234) - - for context, custom in product([None, 10], [False, True]): - # Test that causality and receptive fields are properly handled. - # looking at the gradients - tr = StreamingTransformer( - 16, 4, 1 if context else 2, - causal=True, past_context=context, custom=custom, - dropout=0.) - steps = 20 - for k in [0, 10, 15, 19]: - x = torch.randn(4, steps, 16, requires_grad=True) - y = tr(x) - y[:, k].abs().sum().backward() - if k + 1 < steps: - assert torch.allclose(x.grad[:, k + 1:], torch.tensor(0.)), x.grad[:, k + 1:].norm() - assert not torch.allclose(x.grad[:, :k + 1], torch.tensor(0.)), x.grad[:, :k + 1].norm() - if context is not None and k > context: - limit = k - context - 1 - assert torch.allclose(x.grad[:, :limit], - torch.tensor(0.)), x.grad[:, :limit].norm() - - # Now check that streaming gives the same result at batch eval. - x = torch.randn(4, steps, 16) - y = tr(x) - ys = [] - with tr.streaming(): - for k in range(steps): - chunk = x[:, k:k + 1, :] - ys.append(tr(chunk)) - y_stream = torch.cat(ys, dim=1) - delta = torch.norm(y_stream - y) / torch.norm(y) - assert delta < 1e-6, delta - - -def test_transformer_vs_pytorch(): - torch.manual_seed(1234) - # Check that in the non causal setting, we get the same result as - # PyTorch Transformer encoder. 
- for custom in [False, True]: - tr = StreamingTransformer( - 16, 4, 2, - causal=False, custom=custom, dropout=0., positional_scale=0.) - layer = torch.nn.TransformerEncoderLayer(16, 4, dropout=0., batch_first=True) - tr_ref = torch.nn.TransformerEncoder(layer, 2) - tr.load_state_dict(tr_ref.state_dict()) - - x = torch.randn(4, 20, 16) - y = tr(x) - y2 = tr_ref(x) - delta = torch.norm(y2 - y) / torch.norm(y) - assert delta < 1e-6, delta - - -def test_streaming_api(): - tr = StreamingTransformer(16, 4, 2, causal=True, dropout=0.) - tr.eval() - steps = 12 - x = torch.randn(1, steps, 16) - - with torch.no_grad(): - with tr.streaming(): - _ = tr(x[:, :1]) - state = {k: v.clone() for k, v in tr.get_streaming_state().items()} - y = tr(x[:, 1:2]) - tr.set_streaming_state(state) - y2 = tr(x[:, 1:2]) - assert torch.allclose(y, y2), (y - y2).norm() - assert tr.flush() is None - - -def test_memory_efficient(): - torch.manual_seed(1234) - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., layer_scale=0.1) - tr_mem_efficient = StreamingTransformer( - 16, 4, 2, dropout=0., memory_efficient=True, layer_scale=0.1) - tr_mem_efficient.load_state_dict(tr.state_dict()) - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - with torch.no_grad(): - y = tr(x) - y2 = tr_mem_efficient(x) - assert torch.allclose(y, y2), (y - y2).norm() - - -def test_attention_as_float32(): - torch.manual_seed(1234) - cases = [ - {'custom': True}, - {'custom': False}, - ] - for case in cases: - tr = StreamingTransformer(16, 4, 2, dropout=0., dtype=torch.bfloat16, **case) - tr_float32 = StreamingTransformer( - 16, 4, 2, dropout=0., attention_as_float32=True, dtype=torch.bfloat16, **case) - if not case['custom']: - # we are not using autocast here because it doesn't really - # work as expected on CPU, so we have to manually cast the weights of the MHA. 
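# Note: only the inner nn.MultiheadAttention modules are cast to float32 below;
# the rest of the reference model stays in bfloat16, which mirrors the mixed
# setup that attention_as_float32 implements in the custom path.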
- for layer in tr_float32.layers: - layer.self_attn.mha.to(torch.float32) - tr_float32.load_state_dict(tr.state_dict()) - steps = 12 - x = torch.randn(3, steps, 16, dtype=torch.bfloat16) - - with torch.no_grad(): - y = tr(x) - y2 = tr_float32(x) - assert not torch.allclose(y, y2), (y - y2).norm() - - -@torch.no_grad() -def test_streaming_memory_efficient(): - torch.manual_seed(1234) - tr = StreamingTransformer(16, 4, 2, causal=True, dropout=0., custom=True) - tr_mem_efficient = StreamingTransformer( - 16, 4, 2, dropout=0., memory_efficient=True, causal=True) - tr.load_state_dict(tr_mem_efficient.state_dict()) - tr.eval() - tr_mem_efficient.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - ref = tr(x) - - with tr_mem_efficient.streaming(): - outs = [] - # frame_sizes = [2] + [1] * (steps - 2) - frame_sizes = [1] * steps - - for frame_size in frame_sizes: - frame = x[:, :frame_size] - x = x[:, frame_size:] - outs.append(tr_mem_efficient(frame)) - - out = torch.cat(outs, dim=1) - delta = torch.norm(out - ref) / torch.norm(out) - assert delta < 1e-6, delta - - -def test_cross_attention(): - torch.manual_seed(1234) - for norm_first in [True, False]: - m = StreamingTransformer( - 16, 4, 2, cross_attention=False, norm_first=norm_first, dropout=0., custom=True) - m_cross = StreamingTransformer( - 16, 4, 2, cross_attention=True, norm_first=norm_first, dropout=0., custom=True) - m_cross.load_state_dict(m.state_dict(), strict=False) - x = torch.randn(2, 5, 16) - cross_x = torch.randn(2, 3, 16) - y_ref = m(x) - y_cross_zero = m_cross(x, cross_attention_src=0 * cross_x) - # With norm_first, the two should be exactly the same, - # but with norm_first=False, we get two normalizations in a row - # and the epsilon value leads to a tiny change. - atol = 0. if norm_first else 1e-6 - print((y_ref - y_cross_zero).norm() / y_ref.norm()) - assert torch.allclose(y_ref, y_cross_zero, atol=atol) - - # We now expect a difference even with a generous atol of 1e-2. - y_cross = m_cross(x, cross_attention_src=cross_x) - assert not torch.allclose(y_cross, y_cross_zero, atol=1e-2) - - with pytest.raises(AssertionError): - _ = m_cross(x) - _ = m(x, cross_attention_src=cross_x) - - -def test_cross_attention_compat(): - torch.manual_seed(1234) - num_heads = 2 - dim = num_heads * 64 - with pytest.raises(AssertionError): - StreamingMultiheadAttention(dim, num_heads, causal=True, cross_attention=True) - - cross_attn = StreamingMultiheadAttention( - dim, num_heads, dropout=0, cross_attention=True, custom=True) - ref_attn = torch.nn.MultiheadAttention(dim, num_heads, dropout=0, batch_first=True) - - # We can load the regular attention state dict - # so we have compat when loading old checkpoints. - cross_attn.load_state_dict(ref_attn.state_dict()) - - queries = torch.randn(3, 7, dim) - keys = torch.randn(3, 9, dim) - values = torch.randn(3, 9, dim) - - y = cross_attn(queries, keys, values)[0] - y_ref = ref_attn(queries, keys, values)[0] - assert torch.allclose(y, y_ref, atol=1e-7) - - # Now let's check that streaming is working properly.
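# Note: in streaming mode the queries are fed one step at a time while the full
# keys/values are passed again on every call; concatenating the per-step
# outputs should reproduce the batch result within the same atol.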
- with cross_attn.streaming(): - ys = [] - for step in range(queries.shape[1]): - ys.append(cross_attn(queries[:, step: step + 1], keys, values)[0]) - y_streaming = torch.cat(ys, dim=1) - assert torch.allclose(y_streaming, y, atol=1e-7) - - -def test_repeat_kv(): - torch.manual_seed(1234) - num_heads = 8 - kv_repeat = 4 - dim = num_heads * 64 - with pytest.raises(AssertionError): - mha = StreamingMultiheadAttention( - dim, num_heads, causal=True, kv_repeat=kv_repeat, cross_attention=True) - mha = StreamingMultiheadAttention( - dim, num_heads, causal=True, kv_repeat=kv_repeat) - mha = StreamingMultiheadAttention( - dim, num_heads, causal=True, kv_repeat=kv_repeat, custom=True) - x = torch.randn(4, 18, dim) - y = mha(x, x, x)[0] - assert x.shape == y.shape - - -def test_qk_layer_norm(): - torch.manual_seed(1234) - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., qk_layer_norm=True, bias_attn=False) - steps = 12 - x = torch.randn(3, steps, 16) - y = tr(x) - - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., qk_layer_norm=True, cross_attention=True) - z = torch.randn(3, 21, 16) - y = tr(x, cross_attention_src=z) - assert y.shape == x.shape diff --git a/spaces/hhhyrhe/vits-uma-genshin-honkai/Docker/vits.sh b/spaces/hhhyrhe/vits-uma-genshin-honkai/Docker/vits.sh deleted file mode 100644 index 2b87f26eda96d3800b73b4a21b210c78888a2299..0000000000000000000000000000000000000000 --- a/spaces/hhhyrhe/vits-uma-genshin-honkai/Docker/vits.sh +++ /dev/null @@ -1,20 +0,0 @@ -#!/bin/bash -run() { - echo -e "\033[32mInitialization complete, starting the service...\033[0m" - python3 /app/vits-uma-genshin-honkai/app.py -} -install() { - echo -e "\033[33mInitializing: installing dependencies....\033[0m" - pip install -r /app/vits-uma-genshin-honkai/requirements.txt -i https://mirrors.ustc.edu.cn/pypi/web/simple - echo -e "\033[33mDownloading the model....\033[0m" - rm -f /app/vits-uma-genshin-honkai/model/G_953000.pth - wget -O /app/vits-uma-genshin-honkai/model/G_953000.pth https://huggingface.co/spaces/ikechan8370/vits-uma-genshin-honkai/resolve/main/model/G_953000.pth - echo -e "\033[32mInitialization complete!\033[0m" - run -} - -if [ !
-f "/app/vits-uma-genshin-honkai/model/G_953000.pth" ] || [ "$(stat -c%s "/app/vits-uma-genshin-honkai/model/G_953000.pth")" -lt 10000 ]; then - install -else - run -fi diff --git a/spaces/hizkifw/clipbooru/README.md b/spaces/hizkifw/clipbooru/README.md deleted file mode 100644 index c672e2e480cf8c6e63b01add53a78c8d432942f8..0000000000000000000000000000000000000000 --- a/spaces/hizkifw/clipbooru/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Clipbooru -emoji: 🌍 -colorFrom: indigo -colorTo: gray -sdk: gradio -sdk_version: 3.9 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/scripts_new/run_glacier_zone_1.sh b/spaces/ho11laqe/nnUNet_calvingfront_detection/scripts_new/run_glacier_zone_1.sh deleted file mode 100644 index ab33a894696f61a6a952c870f1c586c870d8429e..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/scripts_new/run_glacier_zone_1.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/bash -l -#SBATCH --nodes=1 --gres=gpu:1 --time=24:00:00 -#SBATCH --job-name=Task502_glacier_zone_1 - -export data_raw="/home/woody/iwi5/iwi5039h/data_raw" -export nnUNet_raw_data_base="/home/woody/iwi5/iwi5039h/nnUNet_data/nnUNet_raw_data_base/" -export nnUNet_preprocessed="/home/woody/iwi5/iwi5039h/nnUNet_data/nnUNet_preprocessed/" -export RESULTS_FOLDER="/home/woody/iwi5/iwi5039h/nnUNet_data/RESULTS_FOLDER" - -cd nnunet_glacer -pwd -conda activate nnunet - -python3 nnunet/run/run_training.py 2d nnUNetTrainerV2 502 1 --disable_postprocessing_on_folds -python3 nnunet/inference/predict_simple.py -i $nnUNet_raw_data_base/nnUNet_raw_data/Task502_Glacier_zone/imagesTs -o $RESULTS_FOLDER/test_predictions/Task502_Glacier_zone/fold_1 -t 502 -m 2d -f 1 -python3 nnunet/dataset_conversion/Task502_Glacier_reverse.py -i $RESULTS_FOLDER/test_predictions/Task502_Glacier_zone/fold_1 -python3 ./evaluate_nnUNet.py --predictions $RESULTS_FOLDER/test_predictions/Task502_Glacier_zone/fold_1/pngs --labels_fronts $data_raw/fronts/test --labels_zones $data_raw/zones/test --sar_images $data_raw/sar_images/test diff --git a/spaces/hongaik/hc_text_classification/.ipynb_checkpoints/app-checkpoint.py b/spaces/hongaik/hc_text_classification/.ipynb_checkpoints/app-checkpoint.py deleted file mode 100644 index a063cd369a469888e50c7f06db4f18abf0890d74..0000000000000000000000000000000000000000 --- a/spaces/hongaik/hc_text_classification/.ipynb_checkpoints/app-checkpoint.py +++ /dev/null @@ -1,59 +0,0 @@ -import streamlit as st -import plotly.express as px -from plotly.subplots import make_subplots -from utils import * - -########## Title for the Web App ########## -st.title("Text Classification for HC") - -########## Create Input field ########## -feedback = st.text_input('Type your text here', 'Customer suggested that the customer service needs to be improved and the response time needs to be improved.') - -if st.button('Click for predictions!'): - with st.spinner('Generating predictions...'): - - topics_prob, sentiment_prob, touchpoint_prob = get_single_prediction(feedback) - - bar_topic = px.bar(topics_prob, x='probability', y='topic') - - bar_touchpoint = px.bar(touchpoint_prob, x='probability', y='touchpoint') - - pie = px.pie(sentiment_prob, - values='probability', - names='sentiment', - color_discrete_map={'positive':'rgb(0, 204, 0)', - 'negative':'rgb(215, 11, 11)' - }, - color='sentiment' - ) - - st.plotly_chart(bar_topic, 
use_container_width=True) - st.plotly_chart(bar_touchpoint, use_container_width=True) - st.plotly_chart(pie, use_container_width=True) - -st.write("\n") -st.subheader('Or... Upload a csv file if you have a file instead.') -st.write("\n") - -st.download_button( - label="Download sample file here", - data=sample_file, - file_name='sample_data.csv', - mime='text/csv', - ) - -uploaded_file = st.file_uploader("Please upload a csv file with only 1 column of texts.") - -if uploaded_file is not None: - - with st.spinner('Generating predictions...'): - results = get_multiple_predictions(uploaded_file) - - st.download_button( - label="Download results as CSV", - data=results, - file_name='results.csv', - mime='text/csv', - ) - - \ No newline at end of file diff --git a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/ms1mv2_mbf.py b/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/ms1mv2_mbf.py deleted file mode 100644 index 098afd8d2d6ca353d0b02281d02ac54e584f8281..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/ms1mv2_mbf.py +++ /dev/null @@ -1,27 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.margin_list = (1.0, 0.5, 0.0) -config.network = "mbf" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 1.0 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 1e-4 -config.batch_size = 128 -config.lr = 0.1 -config.verbose = 2000 -config.dali = False - -config.rec = "/train_tmp/faces_emore" -config.num_classes = 85742 -config.num_image = 5822653 -config.num_epoch = 40 -config.warmup_epoch = 0 -config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git a/spaces/hzy123/bingo/src/components/chat-scroll-anchor.tsx b/spaces/hzy123/bingo/src/components/chat-scroll-anchor.tsx deleted file mode 100644 index ac809f4486a48e134cb69314c3d0dae5e68d614e..0000000000000000000000000000000000000000 --- a/spaces/hzy123/bingo/src/components/chat-scroll-anchor.tsx +++ /dev/null @@ -1,29 +0,0 @@ -'use client' - -import * as React from 'react' -import { useInView } from 'react-intersection-observer' - -import { useAtBottom } from '@/lib/hooks/use-at-bottom' - -interface ChatScrollAnchorProps { - trackVisibility?: boolean -} - -export function ChatScrollAnchor({ trackVisibility }: ChatScrollAnchorProps) { - const isAtBottom = useAtBottom() - const { ref, entry, inView } = useInView({ - trackVisibility, - delay: 100, - rootMargin: '0px 0px -150px 0px' - }) - - React.useEffect(() => { - if (isAtBottom && trackVisibility && !inView) { - entry?.target.scrollIntoView({ - block: 'start' - }) - } - }, [inView, entry, isAtBottom, trackVisibility]) - - return
    -} diff --git a/spaces/inamXcontru/PoeticTTS/4K Stogram 2.7.2.1795 With Crack Full Download The Ultimate Solution for Instagram Marketing.md b/spaces/inamXcontru/PoeticTTS/4K Stogram 2.7.2.1795 With Crack Full Download The Ultimate Solution for Instagram Marketing.md deleted file mode 100644 index 102e15669ab5d12f6e22b8cd1a1a8879019d1fa8..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/4K Stogram 2.7.2.1795 With Crack Full Download The Ultimate Solution for Instagram Marketing.md +++ /dev/null @@ -1,5 +0,0 @@ - -

4K Stogram 3.3.3 Crack is a tool for viewing and downloading videos, photos, stories and audio tracks from Instagram. There is no restriction on the status of the account: you can download this content from both public and private accounts. With just a few clicks it lets you download videos and photos and create backups of your Instagram data, which you can import or export anytime and anywhere without any problem, and every backup is kept secure. The program offers a complete set of features and is very simple to use; compared with other social-media tools, it stands out, and it supports a wide range of operations on your Instagram media.

    -

    4K Stogram 2.7.2.1795 With Crack Full Download


Download File: https://gohhs.com/2uz2LY



    -
    -
    \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/General Chemistry 10th Edition Ebbing And Gammon.pdf.md b/spaces/inplisQlawa/anything-midjourney-v4-1/General Chemistry 10th Edition Ebbing And Gammon.pdf.md deleted file mode 100644 index 5553a8b4f7fa2e1a4571f2d1fb9287bc27b525ad..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/General Chemistry 10th Edition Ebbing And Gammon.pdf.md +++ /dev/null @@ -1,12 +0,0 @@ -

    General Chemistry 10th Edition Ebbing And Gammon.pdf


    Download 🆓 https://urlin.us/2uEvnG



- -Student Solutions Guide for Ebbing / Gammon's General Chemistry, 10th. ISBN-13: 9781111989415. The Student Solutions Guide features detailed solutions for students. Tutorial. -Solving problems in chemistry. -Grade 9 -For the textbook by Rudzitis G.E. and Feldman F.F. -Student Solutions Guide for General Chemistry, 10th edition. -ISBN: 9781119498544. -Description: The Student Solutions Guide for General Chemistry, 10th edition is an electronic version of the study guide that collects problem-solving materials from the fields of General Chemistry and Environmental Chemistry.
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Mathematicallogicdiscretemathematicsbytremblaymanoharpdffree [PORTABLE]125.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Mathematicallogicdiscretemathematicsbytremblaymanoharpdffree [PORTABLE]125.md deleted file mode 100644 index 3456799f29ac4c05ac8c8c43d523fe1b8d34ffaf..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Mathematicallogicdiscretemathematicsbytremblaymanoharpdffree [PORTABLE]125.md +++ /dev/null @@ -1,9 +0,0 @@ -

    mathematicallogicdiscretemathematicsbytremblaymanoharpdffree125


    Download Zip ✫✫✫ https://urlin.us/2uEyqP



- -For this reason, so as not to give you the opportunity to use it, we -. Online store (hereinafter referred to as the site) - a store that sells goods via the Internet.
    -
    -
    -

    diff --git a/spaces/inreVtussa/clothingai/Examples/Biblia Nacar Colunga Comentada Pdfl TOP.md b/spaces/inreVtussa/clothingai/Examples/Biblia Nacar Colunga Comentada Pdfl TOP.md deleted file mode 100644 index acdd8f35016517e9612473acc668d4d8a399658b..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Biblia Nacar Colunga Comentada Pdfl TOP.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Biblia Nacar Colunga Comentada Pdfl


Download File: https://tiurll.com/2uCiuN



    -
-August 18, 2016 - Sagrada Biblia de Nácar-Colunga. Date added: 2016-08-18 23:57:04. FOLDOUTCOUNT: 0. ID: SagradabiblianAnacarcolunga1944. I had the feeling that I was looking at a picture by Salvador Dalí. I was in the park next to the main square, walking alone, so no one could take pictures there. Everyone else passed down the street and around the cathedral. When I looked at the cathedral, I saw only it, not the people who were there. I felt I was in a surrealistic world, the world drawn by Salvador Dalí. It was a surreal landscape. I liked what I saw: a cathedral that looked old and ancient.
    -
    -
    -

    diff --git a/spaces/isabel/mental-health-project/reader.py b/spaces/isabel/mental-health-project/reader.py deleted file mode 100644 index 2089f121665bf06f1c4d8a54d78df7b435b01ae9..0000000000000000000000000000000000000000 --- a/spaces/isabel/mental-health-project/reader.py +++ /dev/null @@ -1,161 +0,0 @@ -import os -from yattag import Doc -## --------------------------------- ### -### reading: info.txt ### -### -------------------------------- ### -# placeholders in case info.txt does not exist -def get_article(acc, most_imp_feat): - filename = "info.txt" - placeholder = "please create an info.txt to customize this text" - note = "**Note that model accuracy is based on the uploaded data.csv and reflects how well the AI model can give correct recommendations for that dataset. An accuracy of 50% means that half of the model's predictions for that dataset were accurate. Model accuracy and most important feature can be helpful for understanding how the model works, but should not be considered absolute facts about the real world." - - title = bkgd = data_collection = priv_cons = bias_cons = img_src = membs = description = placeholder - # check if info.txt is present - if os.path.isfile(filename): - # open info.txt in read mode - info = open(filename, "r") - - # read each line to a string - description = "An AI project created by " + info.readline() - title = info.readline() - bkgd = info.readline() - data_collection = info.readline() - priv_cons = info.readline() - bias_cons = info.readline() - img_src = info.readline() - membs = info.readline() - - # close file - info.close() - - # use yattag library to generate html - doc, tag, text, line = Doc().ttl() - # create html based on info.txt - with tag('div'): - with tag('div', klass='box model-container'): - with tag('div', klass='spacer'): - with tag('div', klass='box model-div'): - line('h2', "Model Accuracy", klass='acc') - line('p', acc) - with tag('div', klass='box model-div'): - line('h2', "Most Important Feature", klass='feat') - line('p', most_imp_feat) - with tag('div', klass='spacer'): - line('p', note) - with tag('div', klass='box'): - line('h2', 'Problem Statement and Research Summary', klass='prj') - line('p', bkgd) - with tag('div', klass='box'): - line('h2', 'Data Collection Plan', klass='data') - line('p', data_collection) - with tag('div', klass='box'): - line('h2', 'Ethical Considerations (Data Privacy and Bias)', klass='ethics') - with tag('ul'): - line('li', priv_cons) - line('li', bias_cons) - with tag('div', klass='box'): - line('h2', 'Our Team', klass='team') - line('p', membs) - doc.stag('img', src=img_src) - - css = ''' - .box { - border: 2px solid black; - text-align: center; - margin: 10px; - padding: 5%; - } - ul { - display: inline-block; - text-align: left; - } - img { - display: block; - margin: auto; - } - .description { - text-align: center; - } - .panel_button { - display: block !important; - width: 100% !important; - background-color: #00EACD !important; - color: #000; - transition: all .2s ease-out 0s !important; - box-shadow: 0 10px #00AEAB !important; - border-radius: 10px !important; - } - .panel_button:hover { - box-shadow: 0 5px #00AEAB; - transform: translateY(5px); - } - .submit { - color: black !important; - } - .selected { - background-color: #656bd6 !important; - } - .radio_item { - border-radius: 10px; - padding-left: 10px !important; - padding-right: 10px !important; - } - .radio_item:hover { - color: #656bd6 !important; - } - .title { - background-image: 
url(https://media.giphy.com/media/26BROrSHlmyzzHf3i/giphy.gif); - background-size: cover; - color: transparent; - -moz-background-clip: text; - -webkit-background-clip: text; - text-transform: uppercase; - font-size: 60px; - line-height: .75; - margin: 10px 0; - } - .panel_header { - color: black !important; - } - input { - background-color: #efeffa !important; - } - .acc, .feat { - background-color: #FF3399 !important - } - .prj { - background-color: #FFCE3B !important; - } - .data { - background-color: #ED6800 !important; - } - .ethics { - background-color: #3EE6F9 !important; - } - .team { - background-color: #9581EF !important; - } - .model-container { - display: flex; - flex-direction: column; - justify-content: center; - } - .spacer { - display: flex; - justify-content: center; - } - .model-div { - width: 45%; - } - @media screen and (max-width: 700px) { - .model-container { - flex-wrap: wrap; - } - } - ''' - return { - 'article': doc.getvalue(), - 'css': css, - 'title': title, - 'description': description, - } \ No newline at end of file diff --git a/spaces/ismot/8testi1/LICENSE.md b/spaces/ismot/8testi1/LICENSE.md deleted file mode 100644 index f288702d2fa16d3cdf0035b15a9fcbc552cd88e7..0000000000000000000000000000000000000000 --- a/spaces/ismot/8testi1/LICENSE.md +++ /dev/null @@ -1,674 +0,0 @@ - GNU GENERAL PUBLIC LICENSE - Version 3, 29 June 2007 - - Copyright (C) 2007 Free Software Foundation, Inc. - Everyone is permitted to copy and distribute verbatim copies - of this license document, but changing it is not allowed. - - Preamble - - The GNU General Public License is a free, copyleft license for -software and other kinds of works. - - The licenses for most software and other practical works are designed -to take away your freedom to share and change the works. By contrast, -the GNU General Public License is intended to guarantee your freedom to -share and change all versions of a program--to make sure it remains free -software for all its users. We, the Free Software Foundation, use the -GNU General Public License for most of our software; it applies also to -any other work released this way by its authors. You can apply it to -your programs, too. - - When we speak of free software, we are referring to freedom, not -price. Our General Public Licenses are designed to make sure that you -have the freedom to distribute copies of free software (and charge for -them if you wish), that you receive source code or can get it if you -want it, that you can change the software or use pieces of it in new -free programs, and that you know you can do these things. - - To protect your rights, we need to prevent others from denying you -these rights or asking you to surrender the rights. Therefore, you have -certain responsibilities if you distribute copies of the software, or if -you modify it: responsibilities to respect the freedom of others. - - For example, if you distribute copies of such a program, whether -gratis or for a fee, you must pass on to the recipients the same -freedoms that you received. You must make sure that they, too, receive -or can get the source code. And you must show them these terms so they -know their rights. - - Developers that use the GNU GPL protect your rights with two steps: -(1) assert copyright on the software, and (2) offer you this License -giving you legal permission to copy, distribute and/or modify it. - - For the developers' and authors' protection, the GPL clearly explains -that there is no warranty for this free software. 
For both users' and -authors' sake, the GPL requires that modified versions be marked as -changed, so that their problems will not be attributed erroneously to -authors of previous versions. - - Some devices are designed to deny users access to install or run -modified versions of the software inside them, although the manufacturer -can do so. This is fundamentally incompatible with the aim of -protecting users' freedom to change the software. The systematic -pattern of such abuse occurs in the area of products for individuals to -use, which is precisely where it is most unacceptable. Therefore, we -have designed this version of the GPL to prohibit the practice for those -products. If such problems arise substantially in other domains, we -stand ready to extend this provision to those domains in future versions -of the GPL, as needed to protect the freedom of users. - - Finally, every program is threatened constantly by software patents. -States should not allow patents to restrict development and use of -software on general-purpose computers, but in those that do, we wish to -avoid the special danger that patents applied to a free program could -make it effectively proprietary. To prevent this, the GPL assures that -patents cannot be used to render the program non-free. - - The precise terms and conditions for copying, distribution and -modification follow. - - TERMS AND CONDITIONS - - 0. Definitions. - - "This License" refers to version 3 of the GNU General Public License. - - "Copyright" also means copyright-like laws that apply to other kinds of -works, such as semiconductor masks. - - "The Program" refers to any copyrightable work licensed under this -License. Each licensee is addressed as "you". "Licensees" and -"recipients" may be individuals or organizations. - - To "modify" a work means to copy from or adapt all or part of the work -in a fashion requiring copyright permission, other than the making of an -exact copy. The resulting work is called a "modified version" of the -earlier work or a work "based on" the earlier work. - - A "covered work" means either the unmodified Program or a work based -on the Program. - - To "propagate" a work means to do anything with it that, without -permission, would make you directly or secondarily liable for -infringement under applicable copyright law, except executing it on a -computer or modifying a private copy. Propagation includes copying, -distribution (with or without modification), making available to the -public, and in some countries other activities as well. - - To "convey" a work means any kind of propagation that enables other -parties to make or receive copies. Mere interaction with a user through -a computer network, with no transfer of a copy, is not conveying. - - An interactive user interface displays "Appropriate Legal Notices" -to the extent that it includes a convenient and prominently visible -feature that (1) displays an appropriate copyright notice, and (2) -tells the user that there is no warranty for the work (except to the -extent that warranties are provided), that licensees may convey the -work under this License, and how to view a copy of this License. If -the interface presents a list of user commands or options, such as a -menu, a prominent item in the list meets this criterion. - - 1. Source Code. - - The "source code" for a work means the preferred form of the work -for making modifications to it. "Object code" means any non-source -form of a work. 
- - A "Standard Interface" means an interface that either is an official -standard defined by a recognized standards body, or, in the case of -interfaces specified for a particular programming language, one that -is widely used among developers working in that language. - - The "System Libraries" of an executable work include anything, other -than the work as a whole, that (a) is included in the normal form of -packaging a Major Component, but which is not part of that Major -Component, and (b) serves only to enable use of the work with that -Major Component, or to implement a Standard Interface for which an -implementation is available to the public in source code form. A -"Major Component", in this context, means a major essential component -(kernel, window system, and so on) of the specific operating system -(if any) on which the executable work runs, or a compiler used to -produce the work, or an object code interpreter used to run it. - - The "Corresponding Source" for a work in object code form means all -the source code needed to generate, install, and (for an executable -work) run the object code and to modify the work, including scripts to -control those activities. However, it does not include the work's -System Libraries, or general-purpose tools or generally available free -programs which are used unmodified in performing those activities but -which are not part of the work. For example, Corresponding Source -includes interface definition files associated with source files for -the work, and the source code for shared libraries and dynamically -linked subprograms that the work is specifically designed to require, -such as by intimate data communication or control flow between those -subprograms and other parts of the work. - - The Corresponding Source need not include anything that users -can regenerate automatically from other parts of the Corresponding -Source. - - The Corresponding Source for a work in source code form is that -same work. - - 2. Basic Permissions. - - All rights granted under this License are granted for the term of -copyright on the Program, and are irrevocable provided the stated -conditions are met. This License explicitly affirms your unlimited -permission to run the unmodified Program. The output from running a -covered work is covered by this License only if the output, given its -content, constitutes a covered work. This License acknowledges your -rights of fair use or other equivalent, as provided by copyright law. - - You may make, run and propagate covered works that you do not -convey, without conditions so long as your license otherwise remains -in force. You may convey covered works to others for the sole purpose -of having them make modifications exclusively for you, or provide you -with facilities for running those works, provided that you comply with -the terms of this License in conveying all material for which you do -not control copyright. Those thus making or running the covered works -for you must do so exclusively on your behalf, under your direction -and control, on terms that prohibit them from making any copies of -your copyrighted material outside their relationship with you. - - Conveying under any other circumstances is permitted solely under -the conditions stated below. Sublicensing is not allowed; section 10 -makes it unnecessary. - - 3. Protecting Users' Legal Rights From Anti-Circumvention Law. 
- - No covered work shall be deemed part of an effective technological -measure under any applicable law fulfilling obligations under article -11 of the WIPO copyright treaty adopted on 20 December 1996, or -similar laws prohibiting or restricting circumvention of such -measures. - - When you convey a covered work, you waive any legal power to forbid -circumvention of technological measures to the extent such circumvention -is effected by exercising rights under this License with respect to -the covered work, and you disclaim any intention to limit operation or -modification of the work as a means of enforcing, against the work's -users, your or third parties' legal rights to forbid circumvention of -technological measures. - - 4. Conveying Verbatim Copies. - - You may convey verbatim copies of the Program's source code as you -receive it, in any medium, provided that you conspicuously and -appropriately publish on each copy an appropriate copyright notice; -keep intact all notices stating that this License and any -non-permissive terms added in accord with section 7 apply to the code; -keep intact all notices of the absence of any warranty; and give all -recipients a copy of this License along with the Program. - - You may charge any price or no price for each copy that you convey, -and you may offer support or warranty protection for a fee. - - 5. Conveying Modified Source Versions. - - You may convey a work based on the Program, or the modifications to -produce it from the Program, in the form of source code under the -terms of section 4, provided that you also meet all of these conditions: - - a) The work must carry prominent notices stating that you modified - it, and giving a relevant date. - - b) The work must carry prominent notices stating that it is - released under this License and any conditions added under section - 7. This requirement modifies the requirement in section 4 to - "keep intact all notices". - - c) You must license the entire work, as a whole, under this - License to anyone who comes into possession of a copy. This - License will therefore apply, along with any applicable section 7 - additional terms, to the whole of the work, and all its parts, - regardless of how they are packaged. This License gives no - permission to license the work in any other way, but it does not - invalidate such permission if you have separately received it. - - d) If the work has interactive user interfaces, each must display - Appropriate Legal Notices; however, if the Program has interactive - interfaces that do not display Appropriate Legal Notices, your - work need not make them do so. - - A compilation of a covered work with other separate and independent -works, which are not by their nature extensions of the covered work, -and which are not combined with it such as to form a larger program, -in or on a volume of a storage or distribution medium, is called an -"aggregate" if the compilation and its resulting copyright are not -used to limit the access or legal rights of the compilation's users -beyond what the individual works permit. Inclusion of a covered work -in an aggregate does not cause this License to apply to the other -parts of the aggregate. - - 6. Conveying Non-Source Forms. 
- - You may convey a covered work in object code form under the terms -of sections 4 and 5, provided that you also convey the -machine-readable Corresponding Source under the terms of this License, -in one of these ways: - - a) Convey the object code in, or embodied in, a physical product - (including a physical distribution medium), accompanied by the - Corresponding Source fixed on a durable physical medium - customarily used for software interchange. - - b) Convey the object code in, or embodied in, a physical product - (including a physical distribution medium), accompanied by a - written offer, valid for at least three years and valid for as - long as you offer spare parts or customer support for that product - model, to give anyone who possesses the object code either (1) a - copy of the Corresponding Source for all the software in the - product that is covered by this License, on a durable physical - medium customarily used for software interchange, for a price no - more than your reasonable cost of physically performing this - conveying of source, or (2) access to copy the - Corresponding Source from a network server at no charge. - - c) Convey individual copies of the object code with a copy of the - written offer to provide the Corresponding Source. This - alternative is allowed only occasionally and noncommercially, and - only if you received the object code with such an offer, in accord - with subsection 6b. - - d) Convey the object code by offering access from a designated - place (gratis or for a charge), and offer equivalent access to the - Corresponding Source in the same way through the same place at no - further charge. You need not require recipients to copy the - Corresponding Source along with the object code. If the place to - copy the object code is a network server, the Corresponding Source - may be on a different server (operated by you or a third party) - that supports equivalent copying facilities, provided you maintain - clear directions next to the object code saying where to find the - Corresponding Source. Regardless of what server hosts the - Corresponding Source, you remain obligated to ensure that it is - available for as long as needed to satisfy these requirements. - - e) Convey the object code using peer-to-peer transmission, provided - you inform other peers where the object code and Corresponding - Source of the work are being offered to the general public at no - charge under subsection 6d. - - A separable portion of the object code, whose source code is excluded -from the Corresponding Source as a System Library, need not be -included in conveying the object code work. - - A "User Product" is either (1) a "consumer product", which means any -tangible personal property which is normally used for personal, family, -or household purposes, or (2) anything designed or sold for incorporation -into a dwelling. In determining whether a product is a consumer product, -doubtful cases shall be resolved in favor of coverage. For a particular -product received by a particular user, "normally used" refers to a -typical or common use of that class of product, regardless of the status -of the particular user or of the way in which the particular user -actually uses, or expects or is expected to use, the product. A product -is a consumer product regardless of whether the product has substantial -commercial, industrial or non-consumer uses, unless such uses represent -the only significant mode of use of the product. 
- - "Installation Information" for a User Product means any methods, -procedures, authorization keys, or other information required to install -and execute modified versions of a covered work in that User Product from -a modified version of its Corresponding Source. The information must -suffice to ensure that the continued functioning of the modified object -code is in no case prevented or interfered with solely because -modification has been made. - - If you convey an object code work under this section in, or with, or -specifically for use in, a User Product, and the conveying occurs as -part of a transaction in which the right of possession and use of the -User Product is transferred to the recipient in perpetuity or for a -fixed term (regardless of how the transaction is characterized), the -Corresponding Source conveyed under this section must be accompanied -by the Installation Information. But this requirement does not apply -if neither you nor any third party retains the ability to install -modified object code on the User Product (for example, the work has -been installed in ROM). - - The requirement to provide Installation Information does not include a -requirement to continue to provide support service, warranty, or updates -for a work that has been modified or installed by the recipient, or for -the User Product in which it has been modified or installed. Access to a -network may be denied when the modification itself materially and -adversely affects the operation of the network or violates the rules and -protocols for communication across the network. - - Corresponding Source conveyed, and Installation Information provided, -in accord with this section must be in a format that is publicly -documented (and with an implementation available to the public in -source code form), and must require no special password or key for -unpacking, reading or copying. - - 7. Additional Terms. - - "Additional permissions" are terms that supplement the terms of this -License by making exceptions from one or more of its conditions. -Additional permissions that are applicable to the entire Program shall -be treated as though they were included in this License, to the extent -that they are valid under applicable law. If additional permissions -apply only to part of the Program, that part may be used separately -under those permissions, but the entire Program remains governed by -this License without regard to the additional permissions. - - When you convey a copy of a covered work, you may at your option -remove any additional permissions from that copy, or from any part of -it. (Additional permissions may be written to require their own -removal in certain cases when you modify the work.) You may place -additional permissions on material, added by you to a covered work, -for which you have or can give appropriate copyright permission. 
- - Notwithstanding any other provision of this License, for material you -add to a covered work, you may (if authorized by the copyright holders of -that material) supplement the terms of this License with terms: - - a) Disclaiming warranty or limiting liability differently from the - terms of sections 15 and 16 of this License; or - - b) Requiring preservation of specified reasonable legal notices or - author attributions in that material or in the Appropriate Legal - Notices displayed by works containing it; or - - c) Prohibiting misrepresentation of the origin of that material, or - requiring that modified versions of such material be marked in - reasonable ways as different from the original version; or - - d) Limiting the use for publicity purposes of names of licensors or - authors of the material; or - - e) Declining to grant rights under trademark law for use of some - trade names, trademarks, or service marks; or - - f) Requiring indemnification of licensors and authors of that - material by anyone who conveys the material (or modified versions of - it) with contractual assumptions of liability to the recipient, for - any liability that these contractual assumptions directly impose on - those licensors and authors. - - All other non-permissive additional terms are considered "further -restrictions" within the meaning of section 10. If the Program as you -received it, or any part of it, contains a notice stating that it is -governed by this License along with a term that is a further -restriction, you may remove that term. If a license document contains -a further restriction but permits relicensing or conveying under this -License, you may add to a covered work material governed by the terms -of that license document, provided that the further restriction does -not survive such relicensing or conveying. - - If you add terms to a covered work in accord with this section, you -must place, in the relevant source files, a statement of the -additional terms that apply to those files, or a notice indicating -where to find the applicable terms. - - Additional terms, permissive or non-permissive, may be stated in the -form of a separately written license, or stated as exceptions; -the above requirements apply either way. - - 8. Termination. - - You may not propagate or modify a covered work except as expressly -provided under this License. Any attempt otherwise to propagate or -modify it is void, and will automatically terminate your rights under -this License (including any patent licenses granted under the third -paragraph of section 11). - - However, if you cease all violation of this License, then your -license from a particular copyright holder is reinstated (a) -provisionally, unless and until the copyright holder explicitly and -finally terminates your license, and (b) permanently, if the copyright -holder fails to notify you of the violation by some reasonable means -prior to 60 days after the cessation. - - Moreover, your license from a particular copyright holder is -reinstated permanently if the copyright holder notifies you of the -violation by some reasonable means, this is the first time you have -received notice of violation of this License (for any work) from that -copyright holder, and you cure the violation prior to 30 days after -your receipt of the notice. - - Termination of your rights under this section does not terminate the -licenses of parties who have received copies or rights from you under -this License. 
If your rights have been terminated and not permanently -reinstated, you do not qualify to receive new licenses for the same -material under section 10. - - 9. Acceptance Not Required for Having Copies. - - You are not required to accept this License in order to receive or -run a copy of the Program. Ancillary propagation of a covered work -occurring solely as a consequence of using peer-to-peer transmission -to receive a copy likewise does not require acceptance. However, -nothing other than this License grants you permission to propagate or -modify any covered work. These actions infringe copyright if you do -not accept this License. Therefore, by modifying or propagating a -covered work, you indicate your acceptance of this License to do so. - - 10. Automatic Licensing of Downstream Recipients. - - Each time you convey a covered work, the recipient automatically -receives a license from the original licensors, to run, modify and -propagate that work, subject to this License. You are not responsible -for enforcing compliance by third parties with this License. - - An "entity transaction" is a transaction transferring control of an -organization, or substantially all assets of one, or subdividing an -organization, or merging organizations. If propagation of a covered -work results from an entity transaction, each party to that -transaction who receives a copy of the work also receives whatever -licenses to the work the party's predecessor in interest had or could -give under the previous paragraph, plus a right to possession of the -Corresponding Source of the work from the predecessor in interest, if -the predecessor has it or can get it with reasonable efforts. - - You may not impose any further restrictions on the exercise of the -rights granted or affirmed under this License. For example, you may -not impose a license fee, royalty, or other charge for exercise of -rights granted under this License, and you may not initiate litigation -(including a cross-claim or counterclaim in a lawsuit) alleging that -any patent claim is infringed by making, using, selling, offering for -sale, or importing the Program or any portion of it. - - 11. Patents. - - A "contributor" is a copyright holder who authorizes use under this -License of the Program or a work on which the Program is based. The -work thus licensed is called the contributor's "contributor version". - - A contributor's "essential patent claims" are all patent claims -owned or controlled by the contributor, whether already acquired or -hereafter acquired, that would be infringed by some manner, permitted -by this License, of making, using, or selling its contributor version, -but do not include claims that would be infringed only as a -consequence of further modification of the contributor version. For -purposes of this definition, "control" includes the right to grant -patent sublicenses in a manner consistent with the requirements of -this License. - - Each contributor grants you a non-exclusive, worldwide, royalty-free -patent license under the contributor's essential patent claims, to -make, use, sell, offer for sale, import and otherwise run, modify and -propagate the contents of its contributor version. - - In the following three paragraphs, a "patent license" is any express -agreement or commitment, however denominated, not to enforce a patent -(such as an express permission to practice a patent or covenant not to -sue for patent infringement). 
To "grant" such a patent license to a -party means to make such an agreement or commitment not to enforce a -patent against the party. - - If you convey a covered work, knowingly relying on a patent license, -and the Corresponding Source of the work is not available for anyone -to copy, free of charge and under the terms of this License, through a -publicly available network server or other readily accessible means, -then you must either (1) cause the Corresponding Source to be so -available, or (2) arrange to deprive yourself of the benefit of the -patent license for this particular work, or (3) arrange, in a manner -consistent with the requirements of this License, to extend the patent -license to downstream recipients. "Knowingly relying" means you have -actual knowledge that, but for the patent license, your conveying the -covered work in a country, or your recipient's use of the covered work -in a country, would infringe one or more identifiable patents in that -country that you have reason to believe are valid. - - If, pursuant to or in connection with a single transaction or -arrangement, you convey, or propagate by procuring conveyance of, a -covered work, and grant a patent license to some of the parties -receiving the covered work authorizing them to use, propagate, modify -or convey a specific copy of the covered work, then the patent license -you grant is automatically extended to all recipients of the covered -work and works based on it. - - A patent license is "discriminatory" if it does not include within -the scope of its coverage, prohibits the exercise of, or is -conditioned on the non-exercise of one or more of the rights that are -specifically granted under this License. You may not convey a covered -work if you are a party to an arrangement with a third party that is -in the business of distributing software, under which you make payment -to the third party based on the extent of your activity of conveying -the work, and under which the third party grants, to any of the -parties who would receive the covered work from you, a discriminatory -patent license (a) in connection with copies of the covered work -conveyed by you (or copies made from those copies), or (b) primarily -for and in connection with specific products or compilations that -contain the covered work, unless you entered into that arrangement, -or that patent license was granted, prior to 28 March 2007. - - Nothing in this License shall be construed as excluding or limiting -any implied license or other defenses to infringement that may -otherwise be available to you under applicable patent law. - - 12. No Surrender of Others' Freedom. - - If conditions are imposed on you (whether by court order, agreement or -otherwise) that contradict the conditions of this License, they do not -excuse you from the conditions of this License. If you cannot convey a -covered work so as to satisfy simultaneously your obligations under this -License and any other pertinent obligations, then as a consequence you may -not convey it at all. For example, if you agree to terms that obligate you -to collect a royalty for further conveying from those to whom you convey -the Program, the only way you could satisfy both those terms and this -License would be to refrain entirely from conveying the Program. - - 13. Use with the GNU Affero General Public License. 
- - Notwithstanding any other provision of this License, you have -permission to link or combine any covered work with a work licensed -under version 3 of the GNU Affero General Public License into a single -combined work, and to convey the resulting work. The terms of this -License will continue to apply to the part which is the covered work, -but the special requirements of the GNU Affero General Public License, -section 13, concerning interaction through a network will apply to the -combination as such. - - 14. Revised Versions of this License. - - The Free Software Foundation may publish revised and/or new versions of -the GNU General Public License from time to time. Such new versions will -be similar in spirit to the present version, but may differ in detail to -address new problems or concerns. - - Each version is given a distinguishing version number. If the -Program specifies that a certain numbered version of the GNU General -Public License "or any later version" applies to it, you have the -option of following the terms and conditions either of that numbered -version or of any later version published by the Free Software -Foundation. If the Program does not specify a version number of the -GNU General Public License, you may choose any version ever published -by the Free Software Foundation. - - If the Program specifies that a proxy can decide which future -versions of the GNU General Public License can be used, that proxy's -public statement of acceptance of a version permanently authorizes you -to choose that version for the Program. - - Later license versions may give you additional or different -permissions. However, no additional obligations are imposed on any -author or copyright holder as a result of your choosing to follow a -later version. - - 15. Disclaimer of Warranty. - - THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY -APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT -HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY -OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, -THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR -PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM -IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF -ALL NECESSARY SERVICING, REPAIR OR CORRECTION. - - 16. Limitation of Liability. - - IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING -WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS -THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY -GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE -USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF -DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD -PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), -EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF -SUCH DAMAGES. - - 17. Interpretation of Sections 15 and 16. - - If the disclaimer of warranty and limitation of liability provided -above cannot be given local legal effect according to their terms, -reviewing courts shall apply local law that most closely approximates -an absolute waiver of all civil liability in connection with the -Program, unless a warranty or assumption of liability accompanies a -copy of the Program in return for a fee. 
-
-                     END OF TERMS AND CONDITIONS
-
-            How to Apply These Terms to Your New Programs
-
-  If you develop a new program, and you want it to be of the greatest
-possible use to the public, the best way to achieve this is to make it
-free software which everyone can redistribute and change under these terms.
-
-  To do so, attach the following notices to the program.  It is safest
-to attach them to the start of each source file to most effectively
-state the exclusion of warranty; and each file should have at least
-the "copyright" line and a pointer to where the full notice is found.
-
-    <one line to give the program's name and a brief idea of what it does.>
-    Copyright (C) <year>  <name of author>
-
-    This program is free software: you can redistribute it and/or modify
-    it under the terms of the GNU General Public License as published by
-    the Free Software Foundation, either version 3 of the License, or
-    (at your option) any later version.
-
-    This program is distributed in the hope that it will be useful,
-    but WITHOUT ANY WARRANTY; without even the implied warranty of
-    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-    GNU General Public License for more details.
-
-    You should have received a copy of the GNU General Public License
-    along with this program.  If not, see <https://www.gnu.org/licenses/>.
-
-Also add information on how to contact you by electronic and paper mail.
-
-  If the program does terminal interaction, make it output a short
-notice like this when it starts in an interactive mode:
-
-    <program>  Copyright (C) <year>  <name of author>
-    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
-    This is free software, and you are welcome to redistribute it
-    under certain conditions; type `show c' for details.
-
-The hypothetical commands `show w' and `show c' should show the appropriate
-parts of the General Public License.  Of course, your program's commands
-might be different; for a GUI interface, you would use an "about box".
-
-  You should also get your employer (if you work as a programmer) or school,
-if any, to sign a "copyright disclaimer" for the program, if necessary.
-For more information on this, and how to apply and follow the GNU GPL, see
-<https://www.gnu.org/licenses/>.
-
-  The GNU General Public License does not permit incorporating your program
-into proprietary programs.  If your program is a subroutine library, you
-may consider it more useful to permit linking proprietary applications with
-the library.  If this is what you want to do, use the GNU Lesser General
-Public License instead of this License.  But first, please read
-<https://www.gnu.org/licenses/why-not-lgpl.html>.
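To make the template above concrete, here is how those notices might look at the top of a single Python source file; the program name, year, and author below are invented placeholders, not details taken from any project in this repository:

    # frobnicate.py -- part of Frobnicator, a program that frobnicates widgets.
    # (Hypothetical example of the GPL notice described above.)
    # Copyright (C) 2023  Jane Doe
    #
    # This program is free software: you can redistribute it and/or modify
    # it under the terms of the GNU General Public License as published by
    # the Free Software Foundation, either version 3 of the License, or
    # (at your option) any later version.
    #
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    # GNU General Public License for more details.
    #
    # You should have received a copy of the GNU General Public License
    # along with this program.  If not, see <https://www.gnu.org/licenses/>.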
diff --git a/spaces/ivanmeyer/dreamlike-photoreal-2.0/README.md b/spaces/ivanmeyer/dreamlike-photoreal-2.0/README.md deleted file mode 100644 index a70a7b6bfda1bdeb1d5d103e33a80e6780b24740..0000000000000000000000000000000000000000 --- a/spaces/ivanmeyer/dreamlike-photoreal-2.0/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Dreamlike Photoreal 2.0 -emoji: 📉 -colorFrom: pink -colorTo: pink -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false -duplicated_from: akhaliq/dreamlike-photoreal-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/izumi-lab/llama-13b-japanese-lora-v0-1ep/Makefile b/spaces/izumi-lab/llama-13b-japanese-lora-v0-1ep/Makefile deleted file mode 100644 index 4f458021aed1d71e5ce346617b3b02d29985b5af..0000000000000000000000000000000000000000 --- a/spaces/izumi-lab/llama-13b-japanese-lora-v0-1ep/Makefile +++ /dev/null @@ -1,35 +0,0 @@ - -RUN := poetry run - -.PHONY: check -check: lint mypy - -.PHONY: lint -lint: lint-black lint-isort lint-flake8 - -.PHONY: lint-black -lint-black: - $(RUN) black --check --diff --quiet . - -.PHONY: lint-isort -lint-isort: - $(RUN) isort --check --quiet . - -.PHONY: lint-flake8 -lint-flake8: - $(RUN) pflake8 . - -.PHONY: mypy -mypy: - $(RUN) mypy . - -.PHONY: format -format: format-black format-isort - -.PHONY: format-black -format-black: - $(RUN) black --quiet . - -.PHONY: format-isort -format-isort: - $(RUN) isort --quiet . diff --git a/spaces/jackli888/stable-diffusion-webui/modules/extensions.py b/spaces/jackli888/stable-diffusion-webui/modules/extensions.py deleted file mode 100644 index 1be7509685e5c11a6f0e44cd39d11613c8ba3e9f..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/modules/extensions.py +++ /dev/null @@ -1,107 +0,0 @@ -import os -import sys -import traceback - -import time -import git - -from modules import paths, shared - -extensions = [] -extensions_dir = os.path.join(paths.data_path, "extensions") -extensions_builtin_dir = os.path.join(paths.script_path, "extensions-builtin") - -if not os.path.exists(extensions_dir): - os.makedirs(extensions_dir) - -def active(): - return [x for x in extensions if x.enabled] - - -class Extension: - def __init__(self, name, path, enabled=True, is_builtin=False): - self.name = name - self.path = path - self.enabled = enabled - self.status = '' - self.can_update = False - self.is_builtin = is_builtin - self.version = '' - - repo = None - try: - if os.path.exists(os.path.join(path, ".git")): - repo = git.Repo(path) - except Exception: - print(f"Error reading github repository info from {path}:", file=sys.stderr) - print(traceback.format_exc(), file=sys.stderr) - - if repo is None or repo.bare: - self.remote = None - else: - try: - self.remote = next(repo.remote().urls, None) - self.status = 'unknown' - head = repo.head.commit - ts = time.asctime(time.gmtime(repo.head.commit.committed_date)) - self.version = f'{head.hexsha[:8]} ({ts})' - - except Exception: - self.remote = None - - def list_files(self, subdir, extension): - from modules import scripts - - dirpath = os.path.join(self.path, subdir) - if not os.path.isdir(dirpath): - return [] - - res = [] - for filename in sorted(os.listdir(dirpath)): - res.append(scripts.ScriptFile(self.path, filename, os.path.join(dirpath, filename))) - - res = [x for x in res if os.path.splitext(x.path)[1].lower() == extension and os.path.isfile(x.path)] - - return res - - def check_updates(self): - repo = git.Repo(self.path) - for 
fetch in repo.remote().fetch("--dry-run"):
-            if fetch.flags != fetch.HEAD_UPTODATE:
-                self.can_update = True
-                self.status = "behind"
-                return
-
-        self.can_update = False
-        self.status = "latest"
-
-    def fetch_and_reset_hard(self):
-        repo = git.Repo(self.path)
-        # Fix for `error: Your local changes to the following files would be overwritten by merge`,
-        # which occurs because WSL2 Docker sets 755 file permissions instead of 644.
-        repo.git.fetch('--all')
-        repo.git.reset('--hard', 'origin')
-
-
-def list_extensions():
-    extensions.clear()
-
-    if not os.path.isdir(extensions_dir):
-        return
-
-    paths = []
-    for dirname in [extensions_dir, extensions_builtin_dir]:
-        if not os.path.isdir(dirname):
-            continue  # skip a missing directory instead of aborting the whole scan
-
-        for extension_dirname in sorted(os.listdir(dirname)):
-            path = os.path.join(dirname, extension_dirname)
-            if not os.path.isdir(path):
-                continue
-
-            paths.append((extension_dirname, path, dirname == extensions_builtin_dir))
-
-    for dirname, path, is_builtin in paths:
-        extension = Extension(name=dirname, path=path, enabled=dirname not in shared.opts.disabled_extensions, is_builtin=is_builtin)
-        extensions.append(extension)
-
diff --git a/spaces/jdczlx/ChatGPT-chuanhu/modules/presets.py b/spaces/jdczlx/ChatGPT-chuanhu/modules/presets.py
deleted file mode 100644
index fcfb53e73e9c5217d312e1a53a7b82c3dbbc82d5..0000000000000000000000000000000000000000
--- a/spaces/jdczlx/ChatGPT-chuanhu/modules/presets.py
+++ /dev/null
@@ -1,165 +0,0 @@
-# -*- coding:utf-8 -*-
-import gradio as gr
-
-# ChatGPT settings
-initial_prompt = "You are a helpful assistant."
-API_URL = "https://api.openai.com/v1/chat/completions"
-BALANCE_API_URL="https://api.openai.com/dashboard/billing/credit_grants"
-HISTORY_DIR = "history"
-TEMPLATES_DIR = "templates"
-
-# Error messages
-standard_error_msg = "☹️发生了错误:" # standard prefix for error messages
-error_retrieve_prompt = "请检查网络连接,或者API-Key是否有效。" # error while fetching a response
-connection_timeout_prompt = "连接超时,无法获取对话。" # connection timed out
-read_timeout_prompt = "读取超时,无法获取对话。" # read timed out
-proxy_error_prompt = "代理错误,无法获取对话。" # proxy error
-ssl_error_prompt = "SSL错误,无法获取对话。" # SSL error
-no_apikey_msg = "API key长度不是51位,请检查是否输入正确。" # API key length is not 51 characters
-no_input_msg = "请输入对话内容。" # no message was entered
-
-max_token_streaming = 3500 # max number of tokens for streaming chat
-timeout_streaming = 10 # timeout for streaming chat
-max_token_all = 3500 # max number of tokens for non-streaming chat
-timeout_all = 200 # timeout for non-streaming chat
-enable_streaming_option = True # whether to show the checkbox that toggles live display of answers
-HIDE_MY_KEY = False # set this to True if you want to hide your API key in the UI
-CONCURRENT_COUNT = 100 # number of users allowed to use the app at the same time
-
-SIM_K = 5
-INDEX_QUERY_TEMPRATURE = 1.0
-
-title = """<h1 align="left" style="min-width:200px; margin-top:0;">
-    川虎ChatGPT 🚀
-</h1>
-"""
-description = """\
-<div align="center" style="margin:16px 0">
-
-由Bilibili [土川虎虎虎](https://space.bilibili.com/29125536) 和 [明昭MZhao](https://space.bilibili.com/24807452)开发
-
-访问川虎ChatGPT的 [GitHub项目](https://github.com/GaiZhenbiao/ChuanhuChatGPT) 下载最新版脚本
-
-此App使用 `gpt-3.5-turbo` 大语言模型
-
-</div>
-"""
-
-summarize_prompt = "你是谁?我们刚才聊了什么?" # prompt used to summarize the conversation
-
-MODELS = [
-    "gpt-3.5-turbo",
-    "gpt-3.5-turbo-0301",
-    "gpt-4",
-    "gpt-4-0314",
-    "gpt-4-32k",
-    "gpt-4-32k-0314",
-] # available models
-
-REPLY_LANGUAGES = [
-    "中文",
-    "English",
-    "日本語",
-    "Español",
-    "Français",
-    "Deutsch",
-    "跟随问题语言(不稳定)"
-]
-
-
-WEBSEARCH_PTOMPT_TEMPLATE = """\
-Web search results:
-
-{web_results}
-Current date: {current_date}
-
-Instructions: Using the provided web search results, write a comprehensive reply to the given query. Make sure to cite results using [[number](URL)] notation after the reference. If the provided search results refer to multiple subjects with the same name, write separate answers for each subject.
-Query: {query}
-Reply in {reply_language}
-"""
-
-PROMPT_TEMPLATE = """\
-Context information is below.
----------------------
-{context_str}
----------------------
-Current date: {current_date}.
-Using the provided context information, write a comprehensive reply to the given query.
-Make sure to cite results using [number] notation after the reference.
-If the provided context information refer to multiple subjects with the same name, write separate answers for each subject.
-Use prior knowledge only if the given context didn't provide enough information.
-Answer the question: {query_str}
-Reply in {reply_language}
-"""
-
-REFINE_TEMPLATE = """\
-The original question is as follows: {query_str}
-We have provided an existing answer: {existing_answer}
-We have the opportunity to refine the existing answer
-(only if needed) with some more context below.
-------------
-{context_msg}
-------------
-Given the new context, refine the original answer to better
-Reply in {reply_language}
-If the context isn't useful, return the original answer.
-"""
-
-ALREADY_CONVERTED_MARK = "<!-- ALREADY CONVERTED BY PARSER. -->"
-
-small_and_beautiful_theme = gr.themes.Soft(
-        primary_hue=gr.themes.Color(
-            c50="#02C160",
-            c100="rgba(2, 193, 96, 0.2)",
-            c200="#02C160",
-            c300="rgba(2, 193, 96, 0.32)",
-            c400="rgba(2, 193, 96, 0.32)",
-            c500="rgba(2, 193, 96, 1.0)",
-            c600="rgba(2, 193, 96, 1.0)",
-            c700="rgba(2, 193, 96, 0.32)",
-            c800="rgba(2, 193, 96, 0.32)",
-            c900="#02C160",
-            c950="#02C160",
-        ),
-        secondary_hue=gr.themes.Color(
-            c50="#576b95",
-            c100="#576b95",
-            c200="#576b95",
-            c300="#576b95",
-            c400="#576b95",
-            c500="#576b95",
-            c600="#576b95",
-            c700="#576b95",
-            c800="#576b95",
-            c900="#576b95",
-            c950="#576b95",
-        ),
-        neutral_hue=gr.themes.Color(
-            name="gray",
-            c50="#f9fafb",
-            c100="#f3f4f6",
-            c200="#e5e7eb",
-            c300="#d1d5db",
-            c400="#B2B2B2",
-            c500="#808080",
-            c600="#636363",
-            c700="#515151",
-            c800="#393939",
-            c900="#272727",
-            c950="#171717",
-        ),
-        radius_size=gr.themes.sizes.radius_sm,
-    ).set(
-        button_primary_background_fill="#06AE56",
-        button_primary_background_fill_dark="#06AE56",
-        button_primary_background_fill_hover="#07C863",
-        button_primary_border_color="#06AE56",
-        button_primary_border_color_dark="#06AE56",
-        button_primary_text_color="#FFFFFF",
-        button_primary_text_color_dark="#FFFFFF",
-        button_secondary_background_fill="#F2F2F2",
-        button_secondary_background_fill_dark="#2B2B2B",
-        button_secondary_text_color="#393939",
-        button_secondary_text_color_dark="#FFFFFF",
-        # background_fill_primary="#F7F7F7",
-        # background_fill_primary_dark="#1F1F1F",
-        block_title_text_color="*primary_500",
-        block_title_background_fill="*primary_100",
-        input_background_fill="#F6F6F6",
-    )
diff --git a/spaces/jennysun/jwsun-multisubject-render-model/gligen/ldm/modules/diffusionmodules/positionnet.py
b/spaces/jennysun/jwsun-multisubject-render-model/gligen/ldm/modules/diffusionmodules/positionnet.py
deleted file mode 100644
index 8cfa9bf3a43964b1e1669fec71d2d32356356e70..0000000000000000000000000000000000000000
--- a/spaces/jennysun/jwsun-multisubject-render-model/gligen/ldm/modules/diffusionmodules/positionnet.py
+++ /dev/null
@@ -1,50 +0,0 @@
-import torch
-import torch.nn as nn
-from ldm.modules.attention import BasicTransformerBlock
-from ldm.modules.diffusionmodules.util import checkpoint, FourierEmbedder
-import torch.nn.functional as F
-
-
-
-class PositionNet(nn.Module):
-    def __init__(self, positive_len, out_dim, fourier_freqs=8):
-        super().__init__()
-        self.positive_len = positive_len
-        self.out_dim = out_dim
-
-        self.fourier_embedder = FourierEmbedder(num_freqs=fourier_freqs)
-        self.position_dim = fourier_freqs*2*4 # 2 is sin&cos, 4 is xyxy
-
-        self.linears = nn.Sequential(
-            nn.Linear( self.positive_len + self.position_dim, 512),
-            nn.SiLU(),
-            nn.Linear( 512, 512),
-            nn.SiLU(),
-            nn.Linear(512, out_dim),
-        )
-
-        self.null_positive_feature = torch.nn.Parameter(torch.zeros([self.positive_len]))
-        self.null_position_feature = torch.nn.Parameter(torch.zeros([self.position_dim]))
-
-
-    def forward(self, boxes, masks, positive_embeddings):
-        B, N, _ = boxes.shape
-        masks = masks.unsqueeze(-1)
-
-        # embed positions (the input may include padding as a placeholder)
-        xyxy_embedding = self.fourier_embedder(boxes) # B*N*4 --> B*N*C
-
-        # learnable null embedding
-        positive_null = self.null_positive_feature.view(1,1,-1)
-        xyxy_null = self.null_position_feature.view(1,1,-1)
-
-        # replace padding with learnable null embedding
-        positive_embeddings = positive_embeddings*masks + (1-masks)*positive_null
-        xyxy_embedding = xyxy_embedding*masks + (1-masks)*xyxy_null
-
-        objs = self.linears( torch.cat([positive_embeddings, xyxy_embedding], dim=-1) )
-        assert objs.shape == torch.Size([B,N,self.out_dim])
-        return objs
-
-
-
diff --git a/spaces/jeonsworld/whisper-medium-ko/README.md b/spaces/jeonsworld/whisper-medium-ko/README.md
deleted file mode 100644
index 22bc36ebcecfb16cec2c86ce05d4efc16fb06c90..0000000000000000000000000000000000000000
--- a/spaces/jeonsworld/whisper-medium-ko/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: whisper-medium-ko
-emoji: 📉
-colorFrom: pink
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.3.1
-app_file: app.py
-pinned: false
----
\ No newline at end of file
diff --git a/spaces/jhwen/bingo/tests/parse.ts b/spaces/jhwen/bingo/tests/parse.ts
deleted file mode 100644
index 92940fe6315f1d7cb2b267ba5e5a7e26460a1de3..0000000000000000000000000000000000000000
--- a/spaces/jhwen/bingo/tests/parse.ts
+++ /dev/null
@@ -1,13 +0,0 @@
-import { promises as fs } from 'fs'
-import { join } from 'path'
-import { parseHeadersFromCurl } from '@/lib/utils'
-
-(async () => {
-  const content = await fs.readFile(join(__dirname, './fixtures/curl.txt'), 'utf-8')
-  const headers = parseHeadersFromCurl(content)
-  console.log(headers)
-
-  const cmdContent = await fs.readFile(join(__dirname, './fixtures/cmd.txt'), 'utf-8')
-  const cmdHeaders = parseHeadersFromCurl(cmdContent)
-  console.log(cmdHeaders)
-})()
diff --git a/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/model/run_model.py b/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/model/run_model.py
deleted file mode 100644
index 9d3abbb2fa471b9406094e4d33b0a9ec3817395c..0000000000000000000000000000000000000000
--- a/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/model/run_model.py
+++ /dev/null
@@ -1,254 +0,0 @@
-#!/usr/bin/python -# coding: utf-8 - -# Author: LE YUAN -# Date: 2020-10-23 - -import pickle -import sys -import timeit -import math -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.optim as optim -from sklearn.metrics import mean_squared_error,r2_score - - -class KcatPrediction(nn.Module): - def __init__(self): - super(KcatPrediction, self).__init__() - self.embed_fingerprint = nn.Embedding(n_fingerprint, dim) - self.embed_word = nn.Embedding(n_word, dim) - self.W_gnn = nn.ModuleList([nn.Linear(dim, dim) - for _ in range(layer_gnn)]) - self.W_cnn = nn.ModuleList([nn.Conv2d( - in_channels=1, out_channels=1, kernel_size=2*window+1, - stride=1, padding=window) for _ in range(layer_cnn)]) - self.W_attention = nn.Linear(dim, dim) - self.W_out = nn.ModuleList([nn.Linear(2*dim, 2*dim) - for _ in range(layer_output)]) - # self.W_interaction = nn.Linear(2*dim, 2) - self.W_interaction = nn.Linear(2*dim, 1) - - def gnn(self, xs, A, layer): - for i in range(layer): - hs = torch.relu(self.W_gnn[i](xs)) - xs = xs + torch.matmul(A, hs) - # return torch.unsqueeze(torch.sum(xs, 0), 0) - return torch.unsqueeze(torch.mean(xs, 0), 0) - - def attention_cnn(self, x, xs, layer): - """The attention mechanism is applied to the last layer of CNN.""" - - xs = torch.unsqueeze(torch.unsqueeze(xs, 0), 0) - for i in range(layer): - xs = torch.relu(self.W_cnn[i](xs)) - xs = torch.squeeze(torch.squeeze(xs, 0), 0) - - h = torch.relu(self.W_attention(x)) - hs = torch.relu(self.W_attention(xs)) - weights = torch.tanh(F.linear(h, hs)) - ys = torch.t(weights) * hs - - # return torch.unsqueeze(torch.sum(ys, 0), 0) - return torch.unsqueeze(torch.mean(ys, 0), 0) - - def forward(self, inputs): - - fingerprints, adjacency, words = inputs - - """Compound vector with GNN.""" - fingerprint_vectors = self.embed_fingerprint(fingerprints) - compound_vector = self.gnn(fingerprint_vectors, adjacency, layer_gnn) - - """Protein vector with attention-CNN.""" - word_vectors = self.embed_word(words) - protein_vector = self.attention_cnn(compound_vector, - word_vectors, layer_cnn) - - """Concatenate the above two vectors and output the interaction.""" - cat_vector = torch.cat((compound_vector, protein_vector), 1) - for j in range(layer_output): - cat_vector = torch.relu(self.W_out[j](cat_vector)) - interaction = self.W_interaction(cat_vector) - # print(interaction) - - return interaction - - def __call__(self, data, train=True): - - inputs, correct_interaction = data[:-1], data[-1] - predicted_interaction = self.forward(inputs) - # print(predicted_interaction) - - if train: - loss = F.mse_loss(predicted_interaction, correct_interaction) - correct_values = correct_interaction.to('cpu').data.numpy() - predicted_values = predicted_interaction.to('cpu').data.numpy()[0] - return loss, correct_values, predicted_values - else: - correct_values = correct_interaction.to('cpu').data.numpy() - predicted_values = predicted_interaction.to('cpu').data.numpy()[0] - # correct_values = np.concatenate(correct_values) - # predicted_values = np.concatenate(predicted_values) - # ys = F.softmax(predicted_interaction, 1).to('cpu').data.numpy() - # predicted_values = list(map(lambda x: np.argmax(x), ys)) - # print(correct_values) - # print(predicted_values) - # predicted_scores = list(map(lambda x: x[1], ys)) - return correct_values, predicted_values - - -class Trainer(object): - def __init__(self, model): - self.model = model - self.optimizer = optim.Adam(self.model.parameters(), - lr=lr, 
weight_decay=weight_decay) - - def train(self, dataset): - np.random.shuffle(dataset) - N = len(dataset) - loss_total = 0 - trainCorrect, trainPredict = [], [] - for data in dataset: - loss, correct_values, predicted_values = self.model(data) - self.optimizer.zero_grad() - loss.backward() - self.optimizer.step() - loss_total += loss.to('cpu').data.numpy() - - correct_values = math.log10(math.pow(2,correct_values)) - predicted_values = math.log10(math.pow(2,predicted_values)) - trainCorrect.append(correct_values) - trainPredict.append(predicted_values) - rmse_train = np.sqrt(mean_squared_error(trainCorrect,trainPredict)) - r2_train = r2_score(trainCorrect,trainPredict) - return loss_total, rmse_train, r2_train - - -class Tester(object): - def __init__(self, model): - self.model = model - - def test(self, dataset): - N = len(dataset) - SAE = 0 # sum absolute error. - testY, testPredict = [], [] - for data in dataset : - (correct_values, predicted_values) = self.model(data, train=False) - correct_values = math.log10(math.pow(2,correct_values)) - predicted_values = math.log10(math.pow(2,predicted_values)) - SAE += np.abs(predicted_values-correct_values) - # SAE += sum(np.abs(predicted_values-correct_values)) - testY.append(correct_values) - testPredict.append(predicted_values) - MAE = SAE / N # mean absolute error. - rmse = np.sqrt(mean_squared_error(testY,testPredict)) - r2 = r2_score(testY,testPredict) - return MAE, rmse, r2 - - def save_MAEs(self, MAEs, filename): - with open(filename, 'a') as f: - f.write('\t'.join(map(str, MAEs)) + '\n') - - def save_model(self, model, filename): - torch.save(model.state_dict(), filename) - -def load_tensor(file_name, dtype): - return [dtype(d).to(device) for d in np.load(file_name + '.npy', allow_pickle=True)] - - -def load_pickle(file_name): - with open(file_name, 'rb') as f: - return pickle.load(f) - -def shuffle_dataset(dataset, seed): - np.random.seed(seed) - np.random.shuffle(dataset) - return dataset - -def split_dataset(dataset, ratio): - n = int(ratio * len(dataset)) - dataset_1, dataset_2 = dataset[:n], dataset[n:] - return dataset_1, dataset_2 - - -if __name__ == "__main__": - - """Hyperparameters.""" - (DATASET, radius, ngram, dim, layer_gnn, window, layer_cnn, layer_output, - lr, lr_decay, decay_interval, weight_decay, iteration, - setting) = sys.argv[1:] - (dim, layer_gnn, window, layer_cnn, layer_output, decay_interval, - iteration) = map(int, [dim, layer_gnn, window, layer_cnn, layer_output, - decay_interval, iteration]) - lr, lr_decay, weight_decay = map(float, [lr, lr_decay, weight_decay]) - - # print(type(radius)) - - """CPU or GPU.""" - if torch.cuda.is_available(): - device = torch.device('cuda') - print('The code uses GPU...') - else: - device = torch.device('cpu') - print('The code uses CPU!!!') - - """Load preprocessed data.""" - dir_input = ('../../Data/input/') - compounds = load_tensor(dir_input + 'compounds', torch.LongTensor) - adjacencies = load_tensor(dir_input + 'adjacencies', torch.FloatTensor) - proteins = load_tensor(dir_input + 'proteins', torch.LongTensor) - interactions = load_tensor(dir_input + 'regression', torch.FloatTensor) - fingerprint_dict = load_pickle(dir_input + 'fingerprint_dict.pickle') - word_dict = load_pickle(dir_input + 'sequence_dict.pickle') - n_fingerprint = len(fingerprint_dict) - n_word = len(word_dict) - # print(n_fingerprint) # 3958 - # print(n_word) # 8542 - # 394 and 474 when radius=1 and ngram=2 - - """Create a dataset and split it into train/dev/test.""" - dataset = list(zip(compounds, 
adjacencies, proteins, interactions)) - dataset = shuffle_dataset(dataset, 1234) - dataset_train, dataset_ = split_dataset(dataset, 0.8) - dataset_dev, dataset_test = split_dataset(dataset_, 0.5) - - """Set a model.""" - torch.manual_seed(1234) - model = KcatPrediction().to(device) - trainer = Trainer(model) - tester = Tester(model) - - """Output files.""" - file_MAEs = '../../Data/Results/output/MAEs--' + setting + '.txt' - file_model = '../../Data/Results/output/' + setting - MAEs = ('Epoch\tTime(sec)\tRMSE_train\tR2_train\tMAE_dev\tMAE_test\tRMSE_dev\tRMSE_test\tR2_dev\tR2_test') - with open(file_MAEs, 'w') as f: - f.write(MAEs + '\n') - - """Start training.""" - print('Training...') - print(MAEs) - start = timeit.default_timer() - - for epoch in range(1, iteration+1): - - if epoch % decay_interval == 0: - trainer.optimizer.param_groups[0]['lr'] *= lr_decay - - loss_train, rmse_train, r2_train = trainer.train(dataset_train) - MAE_dev, RMSE_dev, R2_dev = tester.test(dataset_dev) - MAE_test, RMSE_test, R2_test = tester.test(dataset_test) - - end = timeit.default_timer() - time = end - start - - MAEs = [epoch, time, rmse_train, r2_train, MAE_dev, - MAE_test, RMSE_dev, RMSE_test, R2_dev, R2_test] - tester.save_MAEs(MAEs, file_MAEs) - tester.save_model(model, file_model) - - print('\t'.join(map(str, MAEs))) diff --git a/spaces/jinlinyi/PerspectiveFields/README.md b/spaces/jinlinyi/PerspectiveFields/README.md deleted file mode 100644 index 321a23de483401ffaec2babdd075daf2ba51afb5..0000000000000000000000000000000000000000 --- a/spaces/jinlinyi/PerspectiveFields/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: PerspectiveFields -emoji: 🏃 -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/joaopdrm/Emotion_Analisys/app.py b/spaces/joaopdrm/Emotion_Analisys/app.py deleted file mode 100644 index 4e2c90cad231ca07c85f85cabdec7bfe2c98978b..0000000000000000000000000000000000000000 --- a/spaces/joaopdrm/Emotion_Analisys/app.py +++ /dev/null @@ -1,46 +0,0 @@ -import gradio as gr -from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline - - -class Emotionclass: - def __init__(self, model: str): - self.model = AutoModelForSequenceClassification.from_pretrained(model) - self.tokenizer = AutoTokenizer.from_pretrained(model) - self.pipeline = pipeline( - "text-classification", - model=self.model, - tokenizer=self.tokenizer, - return_all_scores=True, - ) - - def predict(self, input: str): - output = self.pipeline(input)[0] - result = { - "sad": output[0]["score"], - "joy": output[1]["score"], - "love": output[2]["score"], - "anger": output[3]["score"], - "fear": output[4]["score"], - "surprise": output[5]["score"], - } - return result - - -def main(): - model = Emotionclass("bhadresh-savani/bert-base-uncased-emotion") - iface = gr.Interface( - fn=model.predict, - inputs=gr.inputs.Textbox( - lines=3, - placeholder="type here", - label="Input", - ), - outputs="label", - title="Sentiment Classification", - ) - - iface.launch() - - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/aiohttp/pytest_plugin.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/aiohttp/pytest_plugin.py deleted file mode 100644 index 
dd9a9f617901ef2c2fa7c1b4ceb5dd92ecbfd5de..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/aiohttp/pytest_plugin.py +++ /dev/null @@ -1,391 +0,0 @@ -import asyncio -import contextlib -import warnings -from collections.abc import Callable -from typing import Any, Awaitable, Callable, Dict, Generator, Optional, Union - -import pytest - -from aiohttp.helpers import PY_37, isasyncgenfunction -from aiohttp.web import Application - -from .test_utils import ( - BaseTestServer, - RawTestServer, - TestClient, - TestServer, - loop_context, - setup_test_loop, - teardown_test_loop, - unused_port as _unused_port, -) - -try: - import uvloop -except ImportError: # pragma: no cover - uvloop = None - -try: - import tokio -except ImportError: # pragma: no cover - tokio = None - -AiohttpClient = Callable[[Union[Application, BaseTestServer]], Awaitable[TestClient]] - - -def pytest_addoption(parser): # type: ignore[no-untyped-def] - parser.addoption( - "--aiohttp-fast", - action="store_true", - default=False, - help="run tests faster by disabling extra checks", - ) - parser.addoption( - "--aiohttp-loop", - action="store", - default="pyloop", - help="run tests with specific loop: pyloop, uvloop, tokio or all", - ) - parser.addoption( - "--aiohttp-enable-loop-debug", - action="store_true", - default=False, - help="enable event loop debug mode", - ) - - -def pytest_fixture_setup(fixturedef): # type: ignore[no-untyped-def] - """Set up pytest fixture. - - Allow fixtures to be coroutines. Run coroutine fixtures in an event loop. - """ - func = fixturedef.func - - if isasyncgenfunction(func): - # async generator fixture - is_async_gen = True - elif asyncio.iscoroutinefunction(func): - # regular async fixture - is_async_gen = False - else: - # not an async fixture, nothing to do - return - - strip_request = False - if "request" not in fixturedef.argnames: - fixturedef.argnames += ("request",) - strip_request = True - - def wrapper(*args, **kwargs): # type: ignore[no-untyped-def] - request = kwargs["request"] - if strip_request: - del kwargs["request"] - - # if neither the fixture nor the test use the 'loop' fixture, - # 'getfixturevalue' will fail because the test is not parameterized - # (this can be removed someday if 'loop' is no longer parameterized) - if "loop" not in request.fixturenames: - raise Exception( - "Asynchronous fixtures must depend on the 'loop' fixture or " - "be used in tests depending from it." - ) - - _loop = request.getfixturevalue("loop") - - if is_async_gen: - # for async generators, we need to advance the generator once, - # then advance it again in a finalizer - gen = func(*args, **kwargs) - - def finalizer(): # type: ignore[no-untyped-def] - try: - return _loop.run_until_complete(gen.__anext__()) - except StopAsyncIteration: - pass - - request.addfinalizer(finalizer) - return _loop.run_until_complete(gen.__anext__()) - else: - return _loop.run_until_complete(func(*args, **kwargs)) - - fixturedef.func = wrapper - - -@pytest.fixture -def fast(request): # type: ignore[no-untyped-def] - """--fast config option""" - return request.config.getoption("--aiohttp-fast") - - -@pytest.fixture -def loop_debug(request): # type: ignore[no-untyped-def] - """--enable-loop-debug config option""" - return request.config.getoption("--aiohttp-enable-loop-debug") - - -@contextlib.contextmanager -def _runtime_warning_context(): # type: ignore[no-untyped-def] - """Context manager which checks for RuntimeWarnings. 
- - This exists specifically to - avoid "coroutine 'X' was never awaited" warnings being missed. - - If RuntimeWarnings occur in the context a RuntimeError is raised. - """ - with warnings.catch_warnings(record=True) as _warnings: - yield - rw = [ - "{w.filename}:{w.lineno}:{w.message}".format(w=w) - for w in _warnings - if w.category == RuntimeWarning - ] - if rw: - raise RuntimeError( - "{} Runtime Warning{},\n{}".format( - len(rw), "" if len(rw) == 1 else "s", "\n".join(rw) - ) - ) - - -@contextlib.contextmanager -def _passthrough_loop_context(loop, fast=False): # type: ignore[no-untyped-def] - """Passthrough loop context. - - Sets up and tears down a loop unless one is passed in via the loop - argument when it's passed straight through. - """ - if loop: - # loop already exists, pass it straight through - yield loop - else: - # this shadows loop_context's standard behavior - loop = setup_test_loop() - yield loop - teardown_test_loop(loop, fast=fast) - - -def pytest_pycollect_makeitem(collector, name, obj): # type: ignore[no-untyped-def] - """Fix pytest collecting for coroutines.""" - if collector.funcnamefilter(name) and asyncio.iscoroutinefunction(obj): - return list(collector._genfunctions(name, obj)) - - -def pytest_pyfunc_call(pyfuncitem): # type: ignore[no-untyped-def] - """Run coroutines in an event loop instead of a normal function call.""" - fast = pyfuncitem.config.getoption("--aiohttp-fast") - if asyncio.iscoroutinefunction(pyfuncitem.function): - existing_loop = pyfuncitem.funcargs.get( - "proactor_loop" - ) or pyfuncitem.funcargs.get("loop", None) - with _runtime_warning_context(): - with _passthrough_loop_context(existing_loop, fast=fast) as _loop: - testargs = { - arg: pyfuncitem.funcargs[arg] - for arg in pyfuncitem._fixtureinfo.argnames - } - _loop.run_until_complete(pyfuncitem.obj(**testargs)) - - return True - - -def pytest_generate_tests(metafunc): # type: ignore[no-untyped-def] - if "loop_factory" not in metafunc.fixturenames: - return - - loops = metafunc.config.option.aiohttp_loop - avail_factories = {"pyloop": asyncio.DefaultEventLoopPolicy} - - if uvloop is not None: # pragma: no cover - avail_factories["uvloop"] = uvloop.EventLoopPolicy - - if tokio is not None: # pragma: no cover - avail_factories["tokio"] = tokio.EventLoopPolicy - - if loops == "all": - loops = "pyloop,uvloop?,tokio?" 
- - factories = {} # type: ignore[var-annotated] - for name in loops.split(","): - required = not name.endswith("?") - name = name.strip(" ?") - if name not in avail_factories: # pragma: no cover - if required: - raise ValueError( - "Unknown loop '%s', available loops: %s" - % (name, list(factories.keys())) - ) - else: - continue - factories[name] = avail_factories[name] - metafunc.parametrize( - "loop_factory", list(factories.values()), ids=list(factories.keys()) - ) - - -@pytest.fixture -def loop(loop_factory, fast, loop_debug): # type: ignore[no-untyped-def] - """Return an instance of the event loop.""" - policy = loop_factory() - asyncio.set_event_loop_policy(policy) - with loop_context(fast=fast) as _loop: - if loop_debug: - _loop.set_debug(True) # pragma: no cover - asyncio.set_event_loop(_loop) - yield _loop - - -@pytest.fixture -def proactor_loop(): # type: ignore[no-untyped-def] - if not PY_37: - policy = asyncio.get_event_loop_policy() - policy._loop_factory = asyncio.ProactorEventLoop # type: ignore[attr-defined] - else: - policy = asyncio.WindowsProactorEventLoopPolicy() # type: ignore[attr-defined] - asyncio.set_event_loop_policy(policy) - - with loop_context(policy.new_event_loop) as _loop: - asyncio.set_event_loop(_loop) - yield _loop - - -@pytest.fixture -def unused_port(aiohttp_unused_port): # type: ignore[no-untyped-def] # pragma: no cover - warnings.warn( - "Deprecated, use aiohttp_unused_port fixture instead", - DeprecationWarning, - stacklevel=2, - ) - return aiohttp_unused_port - - -@pytest.fixture -def aiohttp_unused_port(): # type: ignore[no-untyped-def] - """Return a port that is unused on the current host.""" - return _unused_port - - -@pytest.fixture -def aiohttp_server(loop): # type: ignore[no-untyped-def] - """Factory to create a TestServer instance, given an app. - - aiohttp_server(app, **kwargs) - """ - servers = [] - - async def go(app, *, port=None, **kwargs): # type: ignore[no-untyped-def] - server = TestServer(app, port=port) - await server.start_server(loop=loop, **kwargs) - servers.append(server) - return server - - yield go - - async def finalize() -> None: - while servers: - await servers.pop().close() - - loop.run_until_complete(finalize()) - - -@pytest.fixture -def test_server(aiohttp_server): # type: ignore[no-untyped-def] # pragma: no cover - warnings.warn( - "Deprecated, use aiohttp_server fixture instead", - DeprecationWarning, - stacklevel=2, - ) - return aiohttp_server - - -@pytest.fixture -def aiohttp_raw_server(loop): # type: ignore[no-untyped-def] - """Factory to create a RawTestServer instance, given a web handler. - - aiohttp_raw_server(handler, **kwargs) - """ - servers = [] - - async def go(handler, *, port=None, **kwargs): # type: ignore[no-untyped-def] - server = RawTestServer(handler, port=port) - await server.start_server(loop=loop, **kwargs) - servers.append(server) - return server - - yield go - - async def finalize() -> None: - while servers: - await servers.pop().close() - - loop.run_until_complete(finalize()) - - -@pytest.fixture -def raw_test_server( # type: ignore[no-untyped-def] # pragma: no cover - aiohttp_raw_server, -): - warnings.warn( - "Deprecated, use aiohttp_raw_server fixture instead", - DeprecationWarning, - stacklevel=2, - ) - return aiohttp_raw_server - - -@pytest.fixture -def aiohttp_client( - loop: asyncio.AbstractEventLoop, -) -> Generator[AiohttpClient, None, None]: - """Factory to create a TestClient instance. 
- - aiohttp_client(app, **kwargs) - aiohttp_client(server, **kwargs) - aiohttp_client(raw_server, **kwargs) - """ - clients = [] - - async def go( - __param: Union[Application, BaseTestServer], - *args: Any, - server_kwargs: Optional[Dict[str, Any]] = None, - **kwargs: Any - ) -> TestClient: - - if isinstance(__param, Callable) and not isinstance( # type: ignore[arg-type] - __param, (Application, BaseTestServer) - ): - __param = __param(loop, *args, **kwargs) - kwargs = {} - else: - assert not args, "args should be empty" - - if isinstance(__param, Application): - server_kwargs = server_kwargs or {} - server = TestServer(__param, loop=loop, **server_kwargs) - client = TestClient(server, loop=loop, **kwargs) - elif isinstance(__param, BaseTestServer): - client = TestClient(__param, loop=loop, **kwargs) - else: - raise ValueError("Unknown argument type: %r" % type(__param)) - - await client.start_server() - clients.append(client) - return client - - yield go - - async def finalize() -> None: - while clients: - await clients.pop().close() - - loop.run_until_complete(finalize()) - - -@pytest.fixture -def test_client(aiohttp_client): # type: ignore[no-untyped-def] # pragma: no cover - warnings.warn( - "Deprecated, use aiohttp_client fixture instead", - DeprecationWarning, - stacklevel=2, - ) - return aiohttp_client diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/altair/jupyter/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/altair/jupyter/__init__.py deleted file mode 100644 index 651ab11e4cf8de15370bbf02efd36315c1d27e82..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/altair/jupyter/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -try: - import anywidget # noqa: F401 -except ImportError: - # When anywidget isn't available, create stand-in JupyterChart class - # that raises an informative import error on construction. This - # way we can make JupyterChart available in the altair namespace - # when anywidget is not installed - class JupyterChart: - def __init__(self, *args, **kwargs): - raise ImportError( - "The Altair JupyterChart requires the anywidget \n" - "Python package which may be installed using pip with\n" - " pip install anywidget\n" - "or using conda with\n" - " conda install -c conda-forge anywidget\n" - "Afterwards, you will need to restart your Python kernel." - ) - -else: - from .jupyter_chart import JupyterChart # noqa: F401 diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/misc/bezierTools.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/misc/bezierTools.py deleted file mode 100644 index 7772a4bf8588d2723f2435c7a2ba56ce47a71cf1..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/misc/bezierTools.py +++ /dev/null @@ -1,1474 +0,0 @@ -# -*- coding: utf-8 -*- -"""fontTools.misc.bezierTools.py -- tools for working with Bezier path segments. 
-""" - -from fontTools.misc.arrayTools import calcBounds, sectRect, rectArea -from fontTools.misc.transform import Identity -import math -from collections import namedtuple - -try: - import cython - - COMPILED = cython.compiled -except (AttributeError, ImportError): - # if cython not installed, use mock module with no-op decorators and types - from fontTools.misc import cython - - COMPILED = False - - -Intersection = namedtuple("Intersection", ["pt", "t1", "t2"]) - - -__all__ = [ - "approximateCubicArcLength", - "approximateCubicArcLengthC", - "approximateQuadraticArcLength", - "approximateQuadraticArcLengthC", - "calcCubicArcLength", - "calcCubicArcLengthC", - "calcQuadraticArcLength", - "calcQuadraticArcLengthC", - "calcCubicBounds", - "calcQuadraticBounds", - "splitLine", - "splitQuadratic", - "splitCubic", - "splitQuadraticAtT", - "splitCubicAtT", - "splitCubicAtTC", - "splitCubicIntoTwoAtTC", - "solveQuadratic", - "solveCubic", - "quadraticPointAtT", - "cubicPointAtT", - "cubicPointAtTC", - "linePointAtT", - "segmentPointAtT", - "lineLineIntersections", - "curveLineIntersections", - "curveCurveIntersections", - "segmentSegmentIntersections", -] - - -def calcCubicArcLength(pt1, pt2, pt3, pt4, tolerance=0.005): - """Calculates the arc length for a cubic Bezier segment. - - Whereas :func:`approximateCubicArcLength` approximates the length, this - function calculates it by "measuring", recursively dividing the curve - until the divided segments are shorter than ``tolerance``. - - Args: - pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples. - tolerance: Controls the precision of the calcuation. - - Returns: - Arc length value. - """ - return calcCubicArcLengthC( - complex(*pt1), complex(*pt2), complex(*pt3), complex(*pt4), tolerance - ) - - -def _split_cubic_into_two(p0, p1, p2, p3): - mid = (p0 + 3 * (p1 + p2) + p3) * 0.125 - deriv3 = (p3 + p2 - p1 - p0) * 0.125 - return ( - (p0, (p0 + p1) * 0.5, mid - deriv3, mid), - (mid, mid + deriv3, (p2 + p3) * 0.5, p3), - ) - - -@cython.returns(cython.double) -@cython.locals( - p0=cython.complex, - p1=cython.complex, - p2=cython.complex, - p3=cython.complex, -) -@cython.locals(mult=cython.double, arch=cython.double, box=cython.double) -def _calcCubicArcLengthCRecurse(mult, p0, p1, p2, p3): - arch = abs(p0 - p3) - box = abs(p0 - p1) + abs(p1 - p2) + abs(p2 - p3) - if arch * mult >= box: - return (arch + box) * 0.5 - else: - one, two = _split_cubic_into_two(p0, p1, p2, p3) - return _calcCubicArcLengthCRecurse(mult, *one) + _calcCubicArcLengthCRecurse( - mult, *two - ) - - -@cython.returns(cython.double) -@cython.locals( - pt1=cython.complex, - pt2=cython.complex, - pt3=cython.complex, - pt4=cython.complex, -) -@cython.locals( - tolerance=cython.double, - mult=cython.double, -) -def calcCubicArcLengthC(pt1, pt2, pt3, pt4, tolerance=0.005): - """Calculates the arc length for a cubic Bezier segment. - - Args: - pt1,pt2,pt3,pt4: Control points of the Bezier as complex numbers. - tolerance: Controls the precision of the calcuation. - - Returns: - Arc length value. 
- """ - mult = 1.0 + 1.5 * tolerance # The 1.5 is a empirical hack; no math - return _calcCubicArcLengthCRecurse(mult, pt1, pt2, pt3, pt4) - - -epsilonDigits = 6 -epsilon = 1e-10 - - -@cython.cfunc -@cython.inline -@cython.returns(cython.double) -@cython.locals(v1=cython.complex, v2=cython.complex) -def _dot(v1, v2): - return (v1 * v2.conjugate()).real - - -@cython.cfunc -@cython.inline -@cython.returns(cython.double) -@cython.locals(x=cython.complex) -def _intSecAtan(x): - # In : sympy.integrate(sp.sec(sp.atan(x))) - # Out: x*sqrt(x**2 + 1)/2 + asinh(x)/2 - return x * math.sqrt(x**2 + 1) / 2 + math.asinh(x) / 2 - - -def calcQuadraticArcLength(pt1, pt2, pt3): - """Calculates the arc length for a quadratic Bezier segment. - - Args: - pt1: Start point of the Bezier as 2D tuple. - pt2: Handle point of the Bezier as 2D tuple. - pt3: End point of the Bezier as 2D tuple. - - Returns: - Arc length value. - - Example:: - - >>> calcQuadraticArcLength((0, 0), (0, 0), (0, 0)) # empty segment - 0.0 - >>> calcQuadraticArcLength((0, 0), (50, 0), (80, 0)) # collinear points - 80.0 - >>> calcQuadraticArcLength((0, 0), (0, 50), (0, 80)) # collinear points vertical - 80.0 - >>> calcQuadraticArcLength((0, 0), (50, 20), (100, 40)) # collinear points - 107.70329614269008 - >>> calcQuadraticArcLength((0, 0), (0, 100), (100, 0)) - 154.02976155645263 - >>> calcQuadraticArcLength((0, 0), (0, 50), (100, 0)) - 120.21581243984076 - >>> calcQuadraticArcLength((0, 0), (50, -10), (80, 50)) - 102.53273816445825 - >>> calcQuadraticArcLength((0, 0), (40, 0), (-40, 0)) # collinear points, control point outside - 66.66666666666667 - >>> calcQuadraticArcLength((0, 0), (40, 0), (0, 0)) # collinear points, looping back - 40.0 - """ - return calcQuadraticArcLengthC(complex(*pt1), complex(*pt2), complex(*pt3)) - - -@cython.returns(cython.double) -@cython.locals( - pt1=cython.complex, - pt2=cython.complex, - pt3=cython.complex, - d0=cython.complex, - d1=cython.complex, - d=cython.complex, - n=cython.complex, -) -@cython.locals( - scale=cython.double, - origDist=cython.double, - a=cython.double, - b=cython.double, - x0=cython.double, - x1=cython.double, - Len=cython.double, -) -def calcQuadraticArcLengthC(pt1, pt2, pt3): - """Calculates the arc length for a quadratic Bezier segment. - - Args: - pt1: Start point of the Bezier as a complex number. - pt2: Handle point of the Bezier as a complex number. - pt3: End point of the Bezier as a complex number. - - Returns: - Arc length value. - """ - # Analytical solution to the length of a quadratic bezier. - # Documentation: https://github.com/fonttools/fonttools/issues/3055 - d0 = pt2 - pt1 - d1 = pt3 - pt2 - d = d1 - d0 - n = d * 1j - scale = abs(n) - if scale == 0.0: - return abs(pt3 - pt1) - origDist = _dot(n, d0) - if abs(origDist) < epsilon: - if _dot(d0, d1) >= 0: - return abs(pt3 - pt1) - a, b = abs(d0), abs(d1) - return (a * a + b * b) / (a + b) - x0 = _dot(d, d0) / origDist - x1 = _dot(d, d1) / origDist - Len = abs(2 * (_intSecAtan(x1) - _intSecAtan(x0)) * origDist / (scale * (x1 - x0))) - return Len - - -def approximateQuadraticArcLength(pt1, pt2, pt3): - """Calculates the arc length for a quadratic Bezier segment. - - Uses Gauss-Legendre quadrature for a branch-free approximation. - See :func:`calcQuadraticArcLength` for a slower but more accurate result. - - Args: - pt1: Start point of the Bezier as 2D tuple. - pt2: Handle point of the Bezier as 2D tuple. - pt3: End point of the Bezier as 2D tuple. - - Returns: - Approximate arc length value. 
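# Illustrative sketch (not part of the original file): the closed-form
# quadratic arc length and the Gauss-Legendre approximation above agree
# closely; the exact value matches the calcQuadraticArcLength doctest.
def _demo_quadratic_arc_length():
    exact = calcQuadraticArcLength((0, 0), (0, 100), (100, 0))  # 154.0297...
    approx = approximateQuadraticArcLength((0, 0), (0, 100), (100, 0))
    assert abs(exact - approx) < 1.0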
- """ - return approximateQuadraticArcLengthC(complex(*pt1), complex(*pt2), complex(*pt3)) - - -@cython.returns(cython.double) -@cython.locals( - pt1=cython.complex, - pt2=cython.complex, - pt3=cython.complex, -) -@cython.locals( - v0=cython.double, - v1=cython.double, - v2=cython.double, -) -def approximateQuadraticArcLengthC(pt1, pt2, pt3): - """Calculates the arc length for a quadratic Bezier segment. - - Uses Gauss-Legendre quadrature for a branch-free approximation. - See :func:`calcQuadraticArcLength` for a slower but more accurate result. - - Args: - pt1: Start point of the Bezier as a complex number. - pt2: Handle point of the Bezier as a complex number. - pt3: End point of the Bezier as a complex number. - - Returns: - Approximate arc length value. - """ - # This, essentially, approximates the length-of-derivative function - # to be integrated with the best-matching fifth-degree polynomial - # approximation of it. - # - # https://en.wikipedia.org/wiki/Gaussian_quadrature#Gauss.E2.80.93Legendre_quadrature - - # abs(BezierCurveC[2].diff(t).subs({t:T})) for T in sorted(.5, .5±sqrt(3/5)/2), - # weighted 5/18, 8/18, 5/18 respectively. - v0 = abs( - -0.492943519233745 * pt1 + 0.430331482911935 * pt2 + 0.0626120363218102 * pt3 - ) - v1 = abs(pt3 - pt1) * 0.4444444444444444 - v2 = abs( - -0.0626120363218102 * pt1 - 0.430331482911935 * pt2 + 0.492943519233745 * pt3 - ) - - return v0 + v1 + v2 - - -def calcQuadraticBounds(pt1, pt2, pt3): - """Calculates the bounding rectangle for a quadratic Bezier segment. - - Args: - pt1: Start point of the Bezier as a 2D tuple. - pt2: Handle point of the Bezier as a 2D tuple. - pt3: End point of the Bezier as a 2D tuple. - - Returns: - A four-item tuple representing the bounding rectangle ``(xMin, yMin, xMax, yMax)``. - - Example:: - - >>> calcQuadraticBounds((0, 0), (50, 100), (100, 0)) - (0, 0, 100, 50.0) - >>> calcQuadraticBounds((0, 0), (100, 0), (100, 100)) - (0.0, 0.0, 100, 100) - """ - (ax, ay), (bx, by), (cx, cy) = calcQuadraticParameters(pt1, pt2, pt3) - ax2 = ax * 2.0 - ay2 = ay * 2.0 - roots = [] - if ax2 != 0: - roots.append(-bx / ax2) - if ay2 != 0: - roots.append(-by / ay2) - points = [ - (ax * t * t + bx * t + cx, ay * t * t + by * t + cy) - for t in roots - if 0 <= t < 1 - ] + [pt1, pt3] - return calcBounds(points) - - -def approximateCubicArcLength(pt1, pt2, pt3, pt4): - """Approximates the arc length for a cubic Bezier segment. - - Uses Gauss-Lobatto quadrature with n=5 points to approximate arc length. - See :func:`calcCubicArcLength` for a slower but more accurate result. - - Args: - pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples. - - Returns: - Arc length value. - - Example:: - - >>> approximateCubicArcLength((0, 0), (25, 100), (75, 100), (100, 0)) - 190.04332968932817 - >>> approximateCubicArcLength((0, 0), (50, 0), (100, 50), (100, 100)) - 154.8852074945903 - >>> approximateCubicArcLength((0, 0), (50, 0), (100, 0), (150, 0)) # line; exact result should be 150. - 149.99999999999991 - >>> approximateCubicArcLength((0, 0), (50, 0), (100, 0), (-50, 0)) # cusp; exact result should be 150. 
- 136.9267662156362 - >>> approximateCubicArcLength((0, 0), (50, 0), (100, -50), (-50, 0)) # cusp - 154.80848416537057 - """ - return approximateCubicArcLengthC( - complex(*pt1), complex(*pt2), complex(*pt3), complex(*pt4) - ) - - -@cython.returns(cython.double) -@cython.locals( - pt1=cython.complex, - pt2=cython.complex, - pt3=cython.complex, - pt4=cython.complex, -) -@cython.locals( - v0=cython.double, - v1=cython.double, - v2=cython.double, - v3=cython.double, - v4=cython.double, -) -def approximateCubicArcLengthC(pt1, pt2, pt3, pt4): - """Approximates the arc length for a cubic Bezier segment. - - Args: - pt1,pt2,pt3,pt4: Control points of the Bezier as complex numbers. - - Returns: - Arc length value. - """ - # This, essentially, approximates the length-of-derivative function - # to be integrated with the best-matching seventh-degree polynomial - # approximation of it. - # - # https://en.wikipedia.org/wiki/Gaussian_quadrature#Gauss.E2.80.93Lobatto_rules - - # abs(BezierCurveC[3].diff(t).subs({t:T})) for T in sorted(0, .5±(3/7)**.5/2, .5, 1), - # weighted 1/20, 49/180, 32/90, 49/180, 1/20 respectively. - v0 = abs(pt2 - pt1) * 0.15 - v1 = abs( - -0.558983582205757 * pt1 - + 0.325650248872424 * pt2 - + 0.208983582205757 * pt3 - + 0.024349751127576 * pt4 - ) - v2 = abs(pt4 - pt1 + pt3 - pt2) * 0.26666666666666666 - v3 = abs( - -0.024349751127576 * pt1 - - 0.208983582205757 * pt2 - - 0.325650248872424 * pt3 - + 0.558983582205757 * pt4 - ) - v4 = abs(pt4 - pt3) * 0.15 - - return v0 + v1 + v2 + v3 + v4 - - -def calcCubicBounds(pt1, pt2, pt3, pt4): - """Calculates the bounding rectangle for a quadratic Bezier segment. - - Args: - pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples. - - Returns: - A four-item tuple representing the bounding rectangle ``(xMin, yMin, xMax, yMax)``. - - Example:: - - >>> calcCubicBounds((0, 0), (25, 100), (75, 100), (100, 0)) - (0, 0, 100, 75.0) - >>> calcCubicBounds((0, 0), (50, 0), (100, 50), (100, 100)) - (0.0, 0.0, 100, 100) - >>> print("%f %f %f %f" % calcCubicBounds((50, 0), (0, 100), (100, 100), (50, 0))) - 35.566243 0.000000 64.433757 75.000000 - """ - (ax, ay), (bx, by), (cx, cy), (dx, dy) = calcCubicParameters(pt1, pt2, pt3, pt4) - # calc first derivative - ax3 = ax * 3.0 - ay3 = ay * 3.0 - bx2 = bx * 2.0 - by2 = by * 2.0 - xRoots = [t for t in solveQuadratic(ax3, bx2, cx) if 0 <= t < 1] - yRoots = [t for t in solveQuadratic(ay3, by2, cy) if 0 <= t < 1] - roots = xRoots + yRoots - - points = [ - ( - ax * t * t * t + bx * t * t + cx * t + dx, - ay * t * t * t + by * t * t + cy * t + dy, - ) - for t in roots - ] + [pt1, pt4] - return calcBounds(points) - - -def splitLine(pt1, pt2, where, isHorizontal): - """Split a line at a given coordinate. - - Args: - pt1: Start point of line as 2D tuple. - pt2: End point of line as 2D tuple. - where: Position at which to split the line. - isHorizontal: Direction of the ray splitting the line. If true, - ``where`` is interpreted as a Y coordinate; if false, then - ``where`` is interpreted as an X coordinate. - - Returns: - A list of two line segments (each line segment being two 2D tuples) - if the line was successfully split, or a list containing the original - line. 
- - Example:: - - >>> printSegments(splitLine((0, 0), (100, 100), 50, True)) - ((0, 0), (50, 50)) - ((50, 50), (100, 100)) - >>> printSegments(splitLine((0, 0), (100, 100), 100, True)) - ((0, 0), (100, 100)) - >>> printSegments(splitLine((0, 0), (100, 100), 0, True)) - ((0, 0), (0, 0)) - ((0, 0), (100, 100)) - >>> printSegments(splitLine((0, 0), (100, 100), 0, False)) - ((0, 0), (0, 0)) - ((0, 0), (100, 100)) - >>> printSegments(splitLine((100, 0), (0, 0), 50, False)) - ((100, 0), (50, 0)) - ((50, 0), (0, 0)) - >>> printSegments(splitLine((0, 100), (0, 0), 50, True)) - ((0, 100), (0, 50)) - ((0, 50), (0, 0)) - """ - pt1x, pt1y = pt1 - pt2x, pt2y = pt2 - - ax = pt2x - pt1x - ay = pt2y - pt1y - - bx = pt1x - by = pt1y - - a = (ax, ay)[isHorizontal] - - if a == 0: - return [(pt1, pt2)] - t = (where - (bx, by)[isHorizontal]) / a - if 0 <= t < 1: - midPt = ax * t + bx, ay * t + by - return [(pt1, midPt), (midPt, pt2)] - else: - return [(pt1, pt2)] - - -def splitQuadratic(pt1, pt2, pt3, where, isHorizontal): - """Split a quadratic Bezier curve at a given coordinate. - - Args: - pt1,pt2,pt3: Control points of the Bezier as 2D tuples. - where: Position at which to split the curve. - isHorizontal: Direction of the ray splitting the curve. If true, - ``where`` is interpreted as a Y coordinate; if false, then - ``where`` is interpreted as an X coordinate. - - Returns: - A list of two curve segments (each curve segment being three 2D tuples) - if the curve was successfully split, or a list containing the original - curve. - - Example:: - - >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 150, False)) - ((0, 0), (50, 100), (100, 0)) - >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 50, False)) - ((0, 0), (25, 50), (50, 50)) - ((50, 50), (75, 50), (100, 0)) - >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 25, False)) - ((0, 0), (12.5, 25), (25, 37.5)) - ((25, 37.5), (62.5, 75), (100, 0)) - >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 25, True)) - ((0, 0), (7.32233, 14.6447), (14.6447, 25)) - ((14.6447, 25), (50, 75), (85.3553, 25)) - ((85.3553, 25), (92.6777, 14.6447), (100, -7.10543e-15)) - >>> # XXX I'm not at all sure if the following behavior is desirable: - >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 50, True)) - ((0, 0), (25, 50), (50, 50)) - ((50, 50), (50, 50), (50, 50)) - ((50, 50), (75, 50), (100, 0)) - """ - a, b, c = calcQuadraticParameters(pt1, pt2, pt3) - solutions = solveQuadratic( - a[isHorizontal], b[isHorizontal], c[isHorizontal] - where - ) - solutions = sorted(t for t in solutions if 0 <= t < 1) - if not solutions: - return [(pt1, pt2, pt3)] - return _splitQuadraticAtT(a, b, c, *solutions) - - -def splitCubic(pt1, pt2, pt3, pt4, where, isHorizontal): - """Split a cubic Bezier curve at a given coordinate. - - Args: - pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples. - where: Position at which to split the curve. - isHorizontal: Direction of the ray splitting the curve. If true, - ``where`` is interpreted as a Y coordinate; if false, then - ``where`` is interpreted as an X coordinate. - - Returns: - A list of two curve segments (each curve segment being four 2D tuples) - if the curve was successfully split, or a list containing the original - curve. 
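# Illustrative sketch (not part of the original file): splitting a cubic at
# the vertical line x=50 and checking that the two pieces meet at the split
# point.
def _demo_split_cubic():
    segments = splitCubic((0, 0), (25, 100), (75, 100), (100, 0), 50, False)
    assert len(segments) == 2
    (xa, ya), (xb, yb) = segments[0][-1], segments[1][0]
    assert abs(xa - xb) < 1e-9 and abs(ya - yb) < 1e-9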
- - Example:: - - >>> printSegments(splitCubic((0, 0), (25, 100), (75, 100), (100, 0), 150, False)) - ((0, 0), (25, 100), (75, 100), (100, 0)) - >>> printSegments(splitCubic((0, 0), (25, 100), (75, 100), (100, 0), 50, False)) - ((0, 0), (12.5, 50), (31.25, 75), (50, 75)) - ((50, 75), (68.75, 75), (87.5, 50), (100, 0)) - >>> printSegments(splitCubic((0, 0), (25, 100), (75, 100), (100, 0), 25, True)) - ((0, 0), (2.29379, 9.17517), (4.79804, 17.5085), (7.47414, 25)) - ((7.47414, 25), (31.2886, 91.6667), (68.7114, 91.6667), (92.5259, 25)) - ((92.5259, 25), (95.202, 17.5085), (97.7062, 9.17517), (100, 1.77636e-15)) - """ - a, b, c, d = calcCubicParameters(pt1, pt2, pt3, pt4) - solutions = solveCubic( - a[isHorizontal], b[isHorizontal], c[isHorizontal], d[isHorizontal] - where - ) - solutions = sorted(t for t in solutions if 0 <= t < 1) - if not solutions: - return [(pt1, pt2, pt3, pt4)] - return _splitCubicAtT(a, b, c, d, *solutions) - - -def splitQuadraticAtT(pt1, pt2, pt3, *ts): - """Split a quadratic Bezier curve at one or more values of t. - - Args: - pt1,pt2,pt3: Control points of the Bezier as 2D tuples. - *ts: Positions at which to split the curve. - - Returns: - A list of curve segments (each curve segment being three 2D tuples). - - Examples:: - - >>> printSegments(splitQuadraticAtT((0, 0), (50, 100), (100, 0), 0.5)) - ((0, 0), (25, 50), (50, 50)) - ((50, 50), (75, 50), (100, 0)) - >>> printSegments(splitQuadraticAtT((0, 0), (50, 100), (100, 0), 0.5, 0.75)) - ((0, 0), (25, 50), (50, 50)) - ((50, 50), (62.5, 50), (75, 37.5)) - ((75, 37.5), (87.5, 25), (100, 0)) - """ - a, b, c = calcQuadraticParameters(pt1, pt2, pt3) - return _splitQuadraticAtT(a, b, c, *ts) - - -def splitCubicAtT(pt1, pt2, pt3, pt4, *ts): - """Split a cubic Bezier curve at one or more values of t. - - Args: - pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples. - *ts: Positions at which to split the curve. - - Returns: - A list of curve segments (each curve segment being four 2D tuples). - - Examples:: - - >>> printSegments(splitCubicAtT((0, 0), (25, 100), (75, 100), (100, 0), 0.5)) - ((0, 0), (12.5, 50), (31.25, 75), (50, 75)) - ((50, 75), (68.75, 75), (87.5, 50), (100, 0)) - >>> printSegments(splitCubicAtT((0, 0), (25, 100), (75, 100), (100, 0), 0.5, 0.75)) - ((0, 0), (12.5, 50), (31.25, 75), (50, 75)) - ((50, 75), (59.375, 75), (68.75, 68.75), (77.3438, 56.25)) - ((77.3438, 56.25), (85.9375, 43.75), (93.75, 25), (100, 0)) - """ - a, b, c, d = calcCubicParameters(pt1, pt2, pt3, pt4) - return _splitCubicAtT(a, b, c, d, *ts) - - -@cython.locals( - pt1=cython.complex, - pt2=cython.complex, - pt3=cython.complex, - pt4=cython.complex, - a=cython.complex, - b=cython.complex, - c=cython.complex, - d=cython.complex, -) -def splitCubicAtTC(pt1, pt2, pt3, pt4, *ts): - """Split a cubic Bezier curve at one or more values of t. - - Args: - pt1,pt2,pt3,pt4: Control points of the Bezier as complex numbers.. - *ts: Positions at which to split the curve. - - Yields: - Curve segments (each curve segment being four complex numbers). 
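# Illustrative sketch (not part of the original file): the complex-number
# variant yields the same geometry as the tuple-based splitCubicAtT, with
# each point packed as x + y*1j.
def _demo_split_cubic_complex():
    pts = [(0, 0), (25, 100), (75, 100), (100, 0)]
    tuple_segs = splitCubicAtT(*pts, 0.5)
    complex_segs = list(splitCubicAtTC(*(complex(*p) for p in pts), 0.5))
    for seg_t, seg_c in zip(tuple_segs, complex_segs):
        for (x, y), c in zip(seg_t, seg_c):
            assert abs(complex(x, y) - c) < 1e-9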
- """ - a, b, c, d = calcCubicParametersC(pt1, pt2, pt3, pt4) - yield from _splitCubicAtTC(a, b, c, d, *ts) - - -@cython.returns(cython.complex) -@cython.locals( - t=cython.double, - pt1=cython.complex, - pt2=cython.complex, - pt3=cython.complex, - pt4=cython.complex, - pointAtT=cython.complex, - off1=cython.complex, - off2=cython.complex, -) -@cython.locals( - t2=cython.double, _1_t=cython.double, _1_t_2=cython.double, _2_t_1_t=cython.double -) -def splitCubicIntoTwoAtTC(pt1, pt2, pt3, pt4, t): - """Split a cubic Bezier curve at t. - - Args: - pt1,pt2,pt3,pt4: Control points of the Bezier as complex numbers. - t: Position at which to split the curve. - - Returns: - A tuple of two curve segments (each curve segment being four complex numbers). - """ - t2 = t * t - _1_t = 1 - t - _1_t_2 = _1_t * _1_t - _2_t_1_t = 2 * t * _1_t - pointAtT = ( - _1_t_2 * _1_t * pt1 + 3 * (_1_t_2 * t * pt2 + _1_t * t2 * pt3) + t2 * t * pt4 - ) - off1 = _1_t_2 * pt1 + _2_t_1_t * pt2 + t2 * pt3 - off2 = _1_t_2 * pt2 + _2_t_1_t * pt3 + t2 * pt4 - - pt2 = pt1 + (pt2 - pt1) * t - pt3 = pt4 + (pt3 - pt4) * _1_t - - return ((pt1, pt2, off1, pointAtT), (pointAtT, off2, pt3, pt4)) - - -def _splitQuadraticAtT(a, b, c, *ts): - ts = list(ts) - segments = [] - ts.insert(0, 0.0) - ts.append(1.0) - ax, ay = a - bx, by = b - cx, cy = c - for i in range(len(ts) - 1): - t1 = ts[i] - t2 = ts[i + 1] - delta = t2 - t1 - # calc new a, b and c - delta_2 = delta * delta - a1x = ax * delta_2 - a1y = ay * delta_2 - b1x = (2 * ax * t1 + bx) * delta - b1y = (2 * ay * t1 + by) * delta - t1_2 = t1 * t1 - c1x = ax * t1_2 + bx * t1 + cx - c1y = ay * t1_2 + by * t1 + cy - - pt1, pt2, pt3 = calcQuadraticPoints((a1x, a1y), (b1x, b1y), (c1x, c1y)) - segments.append((pt1, pt2, pt3)) - return segments - - -def _splitCubicAtT(a, b, c, d, *ts): - ts = list(ts) - ts.insert(0, 0.0) - ts.append(1.0) - segments = [] - ax, ay = a - bx, by = b - cx, cy = c - dx, dy = d - for i in range(len(ts) - 1): - t1 = ts[i] - t2 = ts[i + 1] - delta = t2 - t1 - - delta_2 = delta * delta - delta_3 = delta * delta_2 - t1_2 = t1 * t1 - t1_3 = t1 * t1_2 - - # calc new a, b, c and d - a1x = ax * delta_3 - a1y = ay * delta_3 - b1x = (3 * ax * t1 + bx) * delta_2 - b1y = (3 * ay * t1 + by) * delta_2 - c1x = (2 * bx * t1 + cx + 3 * ax * t1_2) * delta - c1y = (2 * by * t1 + cy + 3 * ay * t1_2) * delta - d1x = ax * t1_3 + bx * t1_2 + cx * t1 + dx - d1y = ay * t1_3 + by * t1_2 + cy * t1 + dy - pt1, pt2, pt3, pt4 = calcCubicPoints( - (a1x, a1y), (b1x, b1y), (c1x, c1y), (d1x, d1y) - ) - segments.append((pt1, pt2, pt3, pt4)) - return segments - - -@cython.locals( - a=cython.complex, - b=cython.complex, - c=cython.complex, - d=cython.complex, - t1=cython.double, - t2=cython.double, - delta=cython.double, - delta_2=cython.double, - delta_3=cython.double, - a1=cython.complex, - b1=cython.complex, - c1=cython.complex, - d1=cython.complex, -) -def _splitCubicAtTC(a, b, c, d, *ts): - ts = list(ts) - ts.insert(0, 0.0) - ts.append(1.0) - for i in range(len(ts) - 1): - t1 = ts[i] - t2 = ts[i + 1] - delta = t2 - t1 - - delta_2 = delta * delta - delta_3 = delta * delta_2 - t1_2 = t1 * t1 - t1_3 = t1 * t1_2 - - # calc new a, b, c and d - a1 = a * delta_3 - b1 = (3 * a * t1 + b) * delta_2 - c1 = (2 * b * t1 + c + 3 * a * t1_2) * delta - d1 = a * t1_3 + b * t1_2 + c * t1 + d - pt1, pt2, pt3, pt4 = calcCubicPointsC(a1, b1, c1, d1) - yield (pt1, pt2, pt3, pt4) - - -# -# Equation solvers. 
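# Illustrative sketch (not part of the original file): splitCubicIntoTwoAtTC
# splits at a single t without the generator machinery, which is cheaper in
# hot loops; both halves share the on-curve point exactly.
def _demo_split_into_two():
    left, right = splitCubicIntoTwoAtTC(0j, 25 + 100j, 75 + 100j, 100 + 0j, 0.5)
    assert left[-1] == right[0]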
-# - -from math import sqrt, acos, cos, pi - - -def solveQuadratic(a, b, c, sqrt=sqrt): - """Solve a quadratic equation. - - Solves *a*x*x + b*x + c = 0* where a, b and c are real. - - Args: - a: coefficient of *x²* - b: coefficient of *x* - c: constant term - - Returns: - A list of roots. Note that the returned list is neither guaranteed to - be sorted nor to contain unique values! - """ - if abs(a) < epsilon: - if abs(b) < epsilon: - # We have a non-equation; therefore, we have no valid solution - roots = [] - else: - # We have a linear equation with 1 root. - roots = [-c / b] - else: - # We have a true quadratic equation. Apply the quadratic formula to find two roots. - DD = b * b - 4.0 * a * c - if DD >= 0.0: - rDD = sqrt(DD) - roots = [(-b + rDD) / 2.0 / a, (-b - rDD) / 2.0 / a] - else: - # complex roots, ignore - roots = [] - return roots - - -def solveCubic(a, b, c, d): - """Solve a cubic equation. - - Solves *a*x*x*x + b*x*x + c*x + d = 0* where a, b, c and d are real. - - Args: - a: coefficient of *x³* - b: coefficient of *x²* - c: coefficient of *x* - d: constant term - - Returns: - A list of roots. Note that the returned list is neither guaranteed to - be sorted nor to contain unique values! - - Examples:: - - >>> solveCubic(1, 1, -6, 0) - [-3.0, -0.0, 2.0] - >>> solveCubic(-10.0, -9.0, 48.0, -29.0) - [-2.9, 1.0, 1.0] - >>> solveCubic(-9.875, -9.0, 47.625, -28.75) - [-2.911392, 1.0, 1.0] - >>> solveCubic(1.0, -4.5, 6.75, -3.375) - [1.5, 1.5, 1.5] - >>> solveCubic(-12.0, 18.0, -9.0, 1.50023651123) - [0.5, 0.5, 0.5] - >>> solveCubic( - ... 9.0, 0.0, 0.0, -7.62939453125e-05 - ... ) == [-0.0, -0.0, -0.0] - True - """ - # - # adapted from: - # CUBIC.C - Solve a cubic polynomial - # public domain by Ross Cottrell - # found at: http://www.strangecreations.com/library/snippets/Cubic.C - # - if abs(a) < epsilon: - # don't just test for zero; for very small values of 'a' solveCubic() - # returns unreliable results, so we fall back to quad. - return solveQuadratic(b, c, d) - a = float(a) - a1 = b / a - a2 = c / a - a3 = d / a - - Q = (a1 * a1 - 3.0 * a2) / 9.0 - R = (2.0 * a1 * a1 * a1 - 9.0 * a1 * a2 + 27.0 * a3) / 54.0 - - R2 = R * R - Q3 = Q * Q * Q - R2 = 0 if R2 < epsilon else R2 - Q3 = 0 if abs(Q3) < epsilon else Q3 - - R2_Q3 = R2 - Q3 - - if R2 == 0.0 and Q3 == 0.0: - x = round(-a1 / 3.0, epsilonDigits) - return [x, x, x] - elif R2_Q3 <= epsilon * 0.5: - # The epsilon * .5 above ensures that Q3 is not zero. 
- theta = acos(max(min(R / sqrt(Q3), 1.0), -1.0)) - rQ2 = -2.0 * sqrt(Q) - a1_3 = a1 / 3.0 - x0 = rQ2 * cos(theta / 3.0) - a1_3 - x1 = rQ2 * cos((theta + 2.0 * pi) / 3.0) - a1_3 - x2 = rQ2 * cos((theta + 4.0 * pi) / 3.0) - a1_3 - x0, x1, x2 = sorted([x0, x1, x2]) - # Merge roots that are close-enough - if x1 - x0 < epsilon and x2 - x1 < epsilon: - x0 = x1 = x2 = round((x0 + x1 + x2) / 3.0, epsilonDigits) - elif x1 - x0 < epsilon: - x0 = x1 = round((x0 + x1) / 2.0, epsilonDigits) - x2 = round(x2, epsilonDigits) - elif x2 - x1 < epsilon: - x0 = round(x0, epsilonDigits) - x1 = x2 = round((x1 + x2) / 2.0, epsilonDigits) - else: - x0 = round(x0, epsilonDigits) - x1 = round(x1, epsilonDigits) - x2 = round(x2, epsilonDigits) - return [x0, x1, x2] - else: - x = pow(sqrt(R2_Q3) + abs(R), 1 / 3.0) - x = x + Q / x - if R >= 0.0: - x = -x - x = round(x - a1 / 3.0, epsilonDigits) - return [x] - - -# -# Conversion routines for points to parameters and vice versa -# - - -def calcQuadraticParameters(pt1, pt2, pt3): - x2, y2 = pt2 - x3, y3 = pt3 - cx, cy = pt1 - bx = (x2 - cx) * 2.0 - by = (y2 - cy) * 2.0 - ax = x3 - cx - bx - ay = y3 - cy - by - return (ax, ay), (bx, by), (cx, cy) - - -def calcCubicParameters(pt1, pt2, pt3, pt4): - x2, y2 = pt2 - x3, y3 = pt3 - x4, y4 = pt4 - dx, dy = pt1 - cx = (x2 - dx) * 3.0 - cy = (y2 - dy) * 3.0 - bx = (x3 - x2) * 3.0 - cx - by = (y3 - y2) * 3.0 - cy - ax = x4 - dx - cx - bx - ay = y4 - dy - cy - by - return (ax, ay), (bx, by), (cx, cy), (dx, dy) - - -@cython.cfunc -@cython.inline -@cython.locals( - pt1=cython.complex, - pt2=cython.complex, - pt3=cython.complex, - pt4=cython.complex, - a=cython.complex, - b=cython.complex, - c=cython.complex, -) -def calcCubicParametersC(pt1, pt2, pt3, pt4): - c = (pt2 - pt1) * 3.0 - b = (pt3 - pt2) * 3.0 - c - a = pt4 - pt1 - c - b - return (a, b, c, pt1) - - -def calcQuadraticPoints(a, b, c): - ax, ay = a - bx, by = b - cx, cy = c - x1 = cx - y1 = cy - x2 = (bx * 0.5) + cx - y2 = (by * 0.5) + cy - x3 = ax + bx + cx - y3 = ay + by + cy - return (x1, y1), (x2, y2), (x3, y3) - - -def calcCubicPoints(a, b, c, d): - ax, ay = a - bx, by = b - cx, cy = c - dx, dy = d - x1 = dx - y1 = dy - x2 = (cx / 3.0) + dx - y2 = (cy / 3.0) + dy - x3 = (bx + cx) / 3.0 + x2 - y3 = (by + cy) / 3.0 + y2 - x4 = ax + dx + cx + bx - y4 = ay + dy + cy + by - return (x1, y1), (x2, y2), (x3, y3), (x4, y4) - - -@cython.cfunc -@cython.inline -@cython.locals( - a=cython.complex, - b=cython.complex, - c=cython.complex, - d=cython.complex, - p2=cython.complex, - p3=cython.complex, - p4=cython.complex, -) -def calcCubicPointsC(a, b, c, d): - p2 = c * (1 / 3) + d - p3 = (b + c) * (1 / 3) + p2 - p4 = a + b + c + d - return (d, p2, p3, p4) - - -# -# Point at time -# - - -def linePointAtT(pt1, pt2, t): - """Finds the point at time `t` on a line. - - Args: - pt1, pt2: Coordinates of the line as 2D tuples. - t: The time along the line. - - Returns: - A 2D tuple with the coordinates of the point. - """ - return ((pt1[0] * (1 - t) + pt2[0] * t), (pt1[1] * (1 - t) + pt2[1] * t)) - - -def quadraticPointAtT(pt1, pt2, pt3, t): - """Finds the point at time `t` on a quadratic curve. - - Args: - pt1, pt2, pt3: Coordinates of the curve as 2D tuples. - t: The time along the curve. - - Returns: - A 2D tuple with the coordinates of the point. 
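# Illustrative sketch (not part of the original file): calcCubicParameters
# and calcCubicPoints are inverses, so a round-trip through polynomial form
# reproduces the original control points.
def _demo_parameter_round_trip():
    pts = ((0, 0), (25, 100), (75, 100), (100, 0))
    a, b, c, d = calcCubicParameters(*pts)
    for orig, back in zip(pts, calcCubicPoints(a, b, c, d)):
        assert abs(orig[0] - back[0]) < 1e-9 and abs(orig[1] - back[1]) < 1e-9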
- """ - x = (1 - t) * (1 - t) * pt1[0] + 2 * (1 - t) * t * pt2[0] + t * t * pt3[0] - y = (1 - t) * (1 - t) * pt1[1] + 2 * (1 - t) * t * pt2[1] + t * t * pt3[1] - return (x, y) - - -def cubicPointAtT(pt1, pt2, pt3, pt4, t): - """Finds the point at time `t` on a cubic curve. - - Args: - pt1, pt2, pt3, pt4: Coordinates of the curve as 2D tuples. - t: The time along the curve. - - Returns: - A 2D tuple with the coordinates of the point. - """ - t2 = t * t - _1_t = 1 - t - _1_t_2 = _1_t * _1_t - x = ( - _1_t_2 * _1_t * pt1[0] - + 3 * (_1_t_2 * t * pt2[0] + _1_t * t2 * pt3[0]) - + t2 * t * pt4[0] - ) - y = ( - _1_t_2 * _1_t * pt1[1] - + 3 * (_1_t_2 * t * pt2[1] + _1_t * t2 * pt3[1]) - + t2 * t * pt4[1] - ) - return (x, y) - - -@cython.returns(cython.complex) -@cython.locals( - t=cython.double, - pt1=cython.complex, - pt2=cython.complex, - pt3=cython.complex, - pt4=cython.complex, -) -@cython.locals(t2=cython.double, _1_t=cython.double, _1_t_2=cython.double) -def cubicPointAtTC(pt1, pt2, pt3, pt4, t): - """Finds the point at time `t` on a cubic curve. - - Args: - pt1, pt2, pt3, pt4: Coordinates of the curve as complex numbers. - t: The time along the curve. - - Returns: - A complex number with the coordinates of the point. - """ - t2 = t * t - _1_t = 1 - t - _1_t_2 = _1_t * _1_t - return _1_t_2 * _1_t * pt1 + 3 * (_1_t_2 * t * pt2 + _1_t * t2 * pt3) + t2 * t * pt4 - - -def segmentPointAtT(seg, t): - if len(seg) == 2: - return linePointAtT(*seg, t) - elif len(seg) == 3: - return quadraticPointAtT(*seg, t) - elif len(seg) == 4: - return cubicPointAtT(*seg, t) - raise ValueError("Unknown curve degree") - - -# -# Intersection finders -# - - -def _line_t_of_pt(s, e, pt): - sx, sy = s - ex, ey = e - px, py = pt - if abs(sx - ex) < epsilon and abs(sy - ey) < epsilon: - # Line is a point! - return -1 - # Use the largest - if abs(sx - ex) > abs(sy - ey): - return (px - sx) / (ex - sx) - else: - return (py - sy) / (ey - sy) - - -def _both_points_are_on_same_side_of_origin(a, b, origin): - xDiff = (a[0] - origin[0]) * (b[0] - origin[0]) - yDiff = (a[1] - origin[1]) * (b[1] - origin[1]) - return not (xDiff <= 0.0 and yDiff <= 0.0) - - -def lineLineIntersections(s1, e1, s2, e2): - """Finds intersections between two line segments. - - Args: - s1, e1: Coordinates of the first line as 2D tuples. - s2, e2: Coordinates of the second line as 2D tuples. - - Returns: - A list of ``Intersection`` objects, each object having ``pt``, ``t1`` - and ``t2`` attributes containing the intersection point, time on first - segment and time on second segment respectively. 
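# Illustrative sketch (not part of the original file): segmentPointAtT
# dispatches on the number of points, so the same call evaluates lines,
# quadratics and cubics.
def _demo_segment_point_at_t():
    line = ((0, 0), (100, 100))
    quad = ((0, 0), (50, 100), (100, 0))
    assert segmentPointAtT(line, 0.5) == (50.0, 50.0)
    assert segmentPointAtT(quad, 0.5) == (50.0, 50.0)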
- - Examples:: - - >>> a = lineLineIntersections( (310,389), (453, 222), (289, 251), (447, 367)) - >>> len(a) - 1 - >>> intersection = a[0] - >>> intersection.pt - (374.44882952482897, 313.73458370177315) - >>> (intersection.t1, intersection.t2) - (0.45069111555824465, 0.5408153767394238) - """ - s1x, s1y = s1 - e1x, e1y = e1 - s2x, s2y = s2 - e2x, e2y = e2 - if ( - math.isclose(s2x, e2x) and math.isclose(s1x, e1x) and not math.isclose(s1x, s2x) - ): # Parallel vertical - return [] - if ( - math.isclose(s2y, e2y) and math.isclose(s1y, e1y) and not math.isclose(s1y, s2y) - ): # Parallel horizontal - return [] - if math.isclose(s2x, e2x) and math.isclose(s2y, e2y): # Line segment is tiny - return [] - if math.isclose(s1x, e1x) and math.isclose(s1y, e1y): # Line segment is tiny - return [] - if math.isclose(e1x, s1x): - x = s1x - slope34 = (e2y - s2y) / (e2x - s2x) - y = slope34 * (x - s2x) + s2y - pt = (x, y) - return [ - Intersection( - pt=pt, t1=_line_t_of_pt(s1, e1, pt), t2=_line_t_of_pt(s2, e2, pt) - ) - ] - if math.isclose(s2x, e2x): - x = s2x - slope12 = (e1y - s1y) / (e1x - s1x) - y = slope12 * (x - s1x) + s1y - pt = (x, y) - return [ - Intersection( - pt=pt, t1=_line_t_of_pt(s1, e1, pt), t2=_line_t_of_pt(s2, e2, pt) - ) - ] - - slope12 = (e1y - s1y) / (e1x - s1x) - slope34 = (e2y - s2y) / (e2x - s2x) - if math.isclose(slope12, slope34): - return [] - x = (slope12 * s1x - s1y - slope34 * s2x + s2y) / (slope12 - slope34) - y = slope12 * (x - s1x) + s1y - pt = (x, y) - if _both_points_are_on_same_side_of_origin( - pt, e1, s1 - ) and _both_points_are_on_same_side_of_origin(pt, s2, e2): - return [ - Intersection( - pt=pt, t1=_line_t_of_pt(s1, e1, pt), t2=_line_t_of_pt(s2, e2, pt) - ) - ] - return [] - - -def _alignment_transformation(segment): - # Returns a transformation which aligns a segment horizontally at the - # origin. Apply this transformation to curves and root-find to find - # intersections with the segment. - start = segment[0] - end = segment[-1] - angle = math.atan2(end[1] - start[1], end[0] - start[0]) - return Identity.rotate(-angle).translate(-start[0], -start[1]) - - -def _curve_line_intersections_t(curve, line): - aligned_curve = _alignment_transformation(line).transformPoints(curve) - if len(curve) == 3: - a, b, c = calcQuadraticParameters(*aligned_curve) - intersections = solveQuadratic(a[1], b[1], c[1]) - elif len(curve) == 4: - a, b, c, d = calcCubicParameters(*aligned_curve) - intersections = solveCubic(a[1], b[1], c[1], d[1]) - else: - raise ValueError("Unknown curve degree") - return sorted(i for i in intersections if 0.0 <= i <= 1) - - -def curveLineIntersections(curve, line): - """Finds intersections between a curve and a line. - - Args: - curve: List of coordinates of the curve segment as 2D tuples. - line: List of coordinates of the line segment as 2D tuples. - - Returns: - A list of ``Intersection`` objects, each object having ``pt``, ``t1`` - and ``t2`` attributes containing the intersection point, time on first - segment and time on second segment respectively. 
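# Illustrative sketch (not part of the original file): intersecting a cubic
# with a line; the inputs and the hit count match the doctest below. Each
# Intersection carries the point plus the curve-side (t1) and line-side (t2)
# parameters.
def _demo_curve_line():
    curve = [(100, 240), (30, 60), (210, 230), (160, 30)]
    line = [(25, 260), (230, 20)]
    hits = curveLineIntersections(curve, line)
    assert len(hits) == 3
    assert all(0.0 <= i.t1 <= 1.0 for i in hits)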
- - Examples:: - >>> curve = [ (100, 240), (30, 60), (210, 230), (160, 30) ] - >>> line = [ (25, 260), (230, 20) ] - >>> intersections = curveLineIntersections(curve, line) - >>> len(intersections) - 3 - >>> intersections[0].pt - (84.9000930760723, 189.87306176459828) - """ - if len(curve) == 3: - pointFinder = quadraticPointAtT - elif len(curve) == 4: - pointFinder = cubicPointAtT - else: - raise ValueError("Unknown curve degree") - intersections = [] - for t in _curve_line_intersections_t(curve, line): - pt = pointFinder(*curve, t) - # Back-project the point onto the line, to avoid problems with - # numerical accuracy in the case of vertical and horizontal lines - line_t = _line_t_of_pt(*line, pt) - pt = linePointAtT(*line, line_t) - intersections.append(Intersection(pt=pt, t1=t, t2=line_t)) - return intersections - - -def _curve_bounds(c): - if len(c) == 3: - return calcQuadraticBounds(*c) - elif len(c) == 4: - return calcCubicBounds(*c) - raise ValueError("Unknown curve degree") - - -def _split_segment_at_t(c, t): - if len(c) == 2: - s, e = c - midpoint = linePointAtT(s, e, t) - return [(s, midpoint), (midpoint, e)] - if len(c) == 3: - return splitQuadraticAtT(*c, t) - elif len(c) == 4: - return splitCubicAtT(*c, t) - raise ValueError("Unknown curve degree") - - -def _curve_curve_intersections_t( - curve1, curve2, precision=1e-3, range1=None, range2=None -): - bounds1 = _curve_bounds(curve1) - bounds2 = _curve_bounds(curve2) - - if not range1: - range1 = (0.0, 1.0) - if not range2: - range2 = (0.0, 1.0) - - # If bounds don't intersect, go home - intersects, _ = sectRect(bounds1, bounds2) - if not intersects: - return [] - - def midpoint(r): - return 0.5 * (r[0] + r[1]) - - # If they do overlap but they're tiny, approximate - if rectArea(bounds1) < precision and rectArea(bounds2) < precision: - return [(midpoint(range1), midpoint(range2))] - - c11, c12 = _split_segment_at_t(curve1, 0.5) - c11_range = (range1[0], midpoint(range1)) - c12_range = (midpoint(range1), range1[1]) - - c21, c22 = _split_segment_at_t(curve2, 0.5) - c21_range = (range2[0], midpoint(range2)) - c22_range = (midpoint(range2), range2[1]) - - found = [] - found.extend( - _curve_curve_intersections_t( - c11, c21, precision, range1=c11_range, range2=c21_range - ) - ) - found.extend( - _curve_curve_intersections_t( - c12, c21, precision, range1=c12_range, range2=c21_range - ) - ) - found.extend( - _curve_curve_intersections_t( - c11, c22, precision, range1=c11_range, range2=c22_range - ) - ) - found.extend( - _curve_curve_intersections_t( - c12, c22, precision, range1=c12_range, range2=c22_range - ) - ) - - unique_key = lambda ts: (int(ts[0] / precision), int(ts[1] / precision)) - seen = set() - unique_values = [] - - for ts in found: - key = unique_key(ts) - if key in seen: - continue - seen.add(key) - unique_values.append(ts) - - return unique_values - - -def curveCurveIntersections(curve1, curve2): - """Finds intersections between a curve and a curve. - - Args: - curve1: List of coordinates of the first curve segment as 2D tuples. - curve2: List of coordinates of the second curve segment as 2D tuples. - - Returns: - A list of ``Intersection`` objects, each object having ``pt``, ``t1`` - and ``t2`` attributes containing the intersection point, time on first - segment and time on second segment respectively. 
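# Illustrative sketch (not part of the original file): the recursive
# subdivision above bisects both curves and keeps only pairs whose bounding
# boxes still overlap, accepting a pair once both boxes shrink below
# `precision`. The inputs and the hit count match the doctest below.
def _demo_curve_curve():
    curve1 = [(10, 100), (90, 30), (40, 140), (220, 220)]
    curve2 = [(5, 150), (180, 20), (80, 250), (210, 190)]
    assert len(curveCurveIntersections(curve1, curve2)) == 3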
- - Examples:: - >>> curve1 = [ (10,100), (90,30), (40,140), (220,220) ] - >>> curve2 = [ (5,150), (180,20), (80,250), (210,190) ] - >>> intersections = curveCurveIntersections(curve1, curve2) - >>> len(intersections) - 3 - >>> intersections[0].pt - (81.7831487395506, 109.88904552375288) - """ - intersection_ts = _curve_curve_intersections_t(curve1, curve2) - return [ - Intersection(pt=segmentPointAtT(curve1, ts[0]), t1=ts[0], t2=ts[1]) - for ts in intersection_ts - ] - - -def segmentSegmentIntersections(seg1, seg2): - """Finds intersections between two segments. - - Args: - seg1: List of coordinates of the first segment as 2D tuples. - seg2: List of coordinates of the second segment as 2D tuples. - - Returns: - A list of ``Intersection`` objects, each object having ``pt``, ``t1`` - and ``t2`` attributes containing the intersection point, time on first - segment and time on second segment respectively. - - Examples:: - >>> curve1 = [ (10,100), (90,30), (40,140), (220,220) ] - >>> curve2 = [ (5,150), (180,20), (80,250), (210,190) ] - >>> intersections = segmentSegmentIntersections(curve1, curve2) - >>> len(intersections) - 3 - >>> intersections[0].pt - (81.7831487395506, 109.88904552375288) - >>> curve3 = [ (100, 240), (30, 60), (210, 230), (160, 30) ] - >>> line = [ (25, 260), (230, 20) ] - >>> intersections = segmentSegmentIntersections(curve3, line) - >>> len(intersections) - 3 - >>> intersections[0].pt - (84.9000930760723, 189.87306176459828) - - """ - # Arrange by degree - swapped = False - if len(seg2) > len(seg1): - seg2, seg1 = seg1, seg2 - swapped = True - if len(seg1) > 2: - if len(seg2) > 2: - intersections = curveCurveIntersections(seg1, seg2) - else: - intersections = curveLineIntersections(seg1, seg2) - elif len(seg1) == 2 and len(seg2) == 2: - intersections = lineLineIntersections(*seg1, *seg2) - else: - raise ValueError("Couldn't work out which intersection function to use") - if not swapped: - return intersections - return [Intersection(pt=i.pt, t1=i.t2, t2=i.t1) for i in intersections] - - -def _segmentrepr(obj): - """ - >>> _segmentrepr([1, [2, 3], [], [[2, [3, 4], [0.1, 2.2]]]]) - '(1, (2, 3), (), ((2, (3, 4), (0.1, 2.2))))' - """ - try: - it = iter(obj) - except TypeError: - return "%g" % obj - else: - return "(%s)" % ", ".join(_segmentrepr(x) for x in it) - - -def printSegments(segments): - """Helper for the doctests, displaying each segment in a list of - segments on a single line as a tuple. 
- """ - for segment in segments: - print(_segmentrepr(segment)) - - -if __name__ == "__main__": - import sys - import doctest - - sys.exit(doctest.testmod().failed) diff --git a/spaces/jonatanklosko/chai/assets/js/hooks/messages.js b/spaces/jonatanklosko/chai/assets/js/hooks/messages.js deleted file mode 100644 index 1289a4559e09ec6d774004baa393be83fbaaa914..0000000000000000000000000000000000000000 --- a/spaces/jonatanklosko/chai/assets/js/hooks/messages.js +++ /dev/null @@ -1,15 +0,0 @@ -const Messages = { - mounted() { - this.scroll(); - }, - - updated() { - this.scroll(); - }, - - scroll() { - this.el.scrollTop = this.el.scrollHeight; - }, -}; - -export default Messages; diff --git a/spaces/jordonpeter01/ai-comic-factory/src/lib/fonts.ts b/spaces/jordonpeter01/ai-comic-factory/src/lib/fonts.ts deleted file mode 100644 index 7498aa46bc21fe19cc1b878ee928f9d55c31f927..0000000000000000000000000000000000000000 --- a/spaces/jordonpeter01/ai-comic-factory/src/lib/fonts.ts +++ /dev/null @@ -1,119 +0,0 @@ -import { - Indie_Flower, - The_Girl_Next_Door, - -} from "next/font/google" -import localFont from "next/font/local" - -export const indieflower = Indie_Flower({ - subsets: ["latin"], - weight: "400", - variable: "--font-indieflower", -}) - -export const thegirlnextdoor = The_Girl_Next_Door({ - subsets: ["latin"], - weight: "400", - variable: "--font-the-girl-next-door", -}) - -export const komika = localFont({ - src: "../fonts/Komika-Hand/Komika-Hand.woff2", - variable: "--font-komika" -}) - -export const actionman = localFont({ - src: "../fonts/Action-Man/Action-Man.woff2", - variable: "--font-action-man" -}) - -export const karantula = localFont({ - src: "../fonts/Karantula/Karantula.woff2", - variable: "--font-karantula" -}) - -export const manoskope = localFont({ - src: "../fonts/Manoskope/MANOSKOPE-Bold.woff2", - variable: "--font-manoskope" -}) - -export const paeteround = localFont({ - src: "../fonts/Paete-Round/Paete-Round.woff2", - variable: "--font-paete-round" -}) - -export const qarmic = localFont({ - src: "../fonts/Qarmic-Sans/Qarmic-Sans-Abridged.woff2", - variable: "--font-qarmic-sans" -}) - -export const archrival = localFont({ - src: "../fonts/SF-Arch-Rival/SF-Arch-Rival.woff2", - variable: "--font-sf-arch-rival" -}) - -export const cartoonist = localFont({ - src: "../fonts/SF-Cartoonist-Hand/SF-Cartoonist-Hand.woff2", - variable: "--font-sf-cartoonist-hand" -}) - -export const toontime = localFont({ - src: "../fonts/SF-Toontime/SF-Toontime.woff2", - variable: "--font-sf-toontime" -}) - -export const vtc = localFont({ - src: "../fonts/VTC-Letterer-Pro/VTC-Letterer-Pro.woff2", - variable: "--font-vtc-letterer-pro" -}) - - -export const digitalstrip = localFont({ - src: "../fonts/DigitalStripBB/DigitalStripBB_Reg.woff2", - variable: "--font-digital-strip-bb" -}) - -// https://nextjs.org/docs/pages/building-your-application/optimizing/fonts -// If loading a variable font, you don"t need to specify the font weight -export const fonts = { - indieflower, - thegirlnextdoor, - // komika, - actionman, - karantula, - manoskope, - // paeteround, - // qarmic, - // archrival, - // cartoonist, - // toontime, - // vtc, - digitalstrip -} - -// https://nextjs.org/docs/pages/building-your-application/optimizing/fonts -// If loading a variable font, you don"t need to specify the font weight -export const fontList = Object.keys(fonts) - -export type FontName = keyof typeof fonts - -export const defaultFont = "cartoonist" as FontName - -export const classNames = Object.values(fonts).map(font => 
font.className) - -export const className = classNames.join(" ") - -export type FontClass = - | "font-indieflower" - | "font-thegirlnextdoor" - | "font-komika" - | "font-actionman" - | "font-karantula" - | "font-manoskope" - | "font-paeteround" - | "font-qarmic" - | "font-archrival" - | "font-cartoonist" - | "font-toontime" - | "font-vtc" - | "font-digitalstrip" diff --git a/spaces/jw2yang/unicl-img-recog-demo/model/text_encoder/__init__.py b/spaces/jw2yang/unicl-img-recog-demo/model/text_encoder/__init__.py deleted file mode 100644 index e09753c06e7cd77d8df3bee03b04ae9f85ce80bb..0000000000000000000000000000000000000000 --- a/spaces/jw2yang/unicl-img-recog-demo/model/text_encoder/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -from .build import build_lang_encoder as build_text_encoder -from .build import build_tokenizer - -from .transformer import * -from .hf_model import * diff --git a/spaces/jyseo/3DFuse/my/utils/event.py b/spaces/jyseo/3DFuse/my/utils/event.py deleted file mode 100644 index 741ab144fef51eef800dc7a03208059675ee8860..0000000000000000000000000000000000000000 --- a/spaces/jyseo/3DFuse/my/utils/event.py +++ /dev/null @@ -1,143 +0,0 @@ -# design inspiration from detectron2 -from pathlib import Path -import json -import os -from contextlib import contextmanager -from .ticker import IntervalTicker - - -_CURRENT_STORAGE_STACK = [] - - -def get_event_storage(): - """ - Returns: - The :class:`EventStorage` object that's currently being used. - Throws an error if no :class:`EventStorage` is currently enabled. - """ - assert len( - _CURRENT_STORAGE_STACK - ), "get_event_storage() has to be called inside a 'with EventStorage(...)' context!" 
- return _CURRENT_STORAGE_STACK[-1] - - -def read_lined_json(fname): - with Path(fname).open('r') as f: - for line in f: - item = json.loads(line) - yield item - - -def read_stats(dirname, key): - if dirname is None or not (fname := Path(dirname) / "history.json").is_file(): - return [], [] - stats = read_lined_json(fname) - stats = list(filter(lambda x: key in x, stats)) - xs = [e['iter'] for e in stats] - ys = [e[key] for e in stats] - return xs, ys - - -class EventStorage(): - def __init__(self, output_dir="./hotdog", start_iter=0, flush_period=60): - self.iter = start_iter - self.ticker = IntervalTicker(flush_period) - self.history = [] - self._current_prefix = "" - self._init_curr_buffer_() - - self.output_dir = output_dir - self.writable = False - - def _open(self): - if self.writable: - output_dir = Path(self.output_dir) - if not output_dir.is_dir(): - output_dir.mkdir(parents=True, exist_ok=True) - json_fname = output_dir / 'history.json' - - self._file_handle = json_fname.open('a', encoding='utf8') - self.output_dir = output_dir # make sure it's a path object - - def _init_curr_buffer_(self): - self.curr_buffer = {'iter': self.iter} - - def step(self, flush=False): - self.history.append(self.curr_buffer) - - on_flush_period = self.ticker.tick() - if flush or on_flush_period: - self.flush_history() - - self.iter += 1 - self._init_curr_buffer_() - - def flush_history(self): - if self.writable: - for item in self.history: - line = json.dumps(item, sort_keys=True, ensure_ascii=False) + "\n" - self._file_handle.write(line) - self._file_handle.flush() - self.history = [] - - def full_key(self, key): - assert isinstance(key, str) - name = self._current_prefix + key - return name - - def put(self, key, val): - key = self.full_key(key) - assert isinstance(val, (int, float, str)) - if isinstance(val, float): - val = round(val, 3) - self.curr_buffer[key] = val - - def put_scalars(self, **kwargs): - for k, v in kwargs.items(): - self.put(k, v) - - def put_artifact(self, key, ext,p, save_func): - if not self.writable: - return - p=p.replace(" ","_") - os.makedirs(self.output_dir / key, exist_ok=True) - fname = (self.output_dir / key / f"step_{self.iter}_{p}").with_suffix(ext) - fname = str(fname) - - # must be called inside so that - # 1. the func is not executed if the metric is not writable - # 2. 
the key is only inserted if the func succeeds - save_func(fname) - self.put(key, fname) - return fname - - def close(self): - self.flush_history() - if self.writable: - self._file_handle.close() - - def get_last(self): - if len(self.history) > 0: - last = self.history[-1] - return last - - def __enter__(self): - if len(_CURRENT_STORAGE_STACK) > 0: - parent = _CURRENT_STORAGE_STACK[-1] - root, dirname = parent.output_dir, self.output_dir - if root is not None and dirname is not None: - child_dir = parent.output_dir / f"{self.output_dir}_{parent.iter}" - self.output_dir = child_dir - parent.put(str(dirname), str(child_dir)) - - if self.output_dir is not None: - self.writable = True - self._open() - - _CURRENT_STORAGE_STACK.append(self) - return self - - def __exit__(self, exc_type, exc_val, exc_tb): - assert _CURRENT_STORAGE_STACK[-1] == self - _CURRENT_STORAGE_STACK.pop() - self.close() diff --git a/spaces/kangvcar/RealChar/client/web/src/index.js b/spaces/kangvcar/RealChar/client/web/src/index.js deleted file mode 100644 index d563c0fb10ba0e42724b21286eb546ee4e5734fc..0000000000000000000000000000000000000000 --- a/spaces/kangvcar/RealChar/client/web/src/index.js +++ /dev/null @@ -1,17 +0,0 @@ -import React from 'react'; -import ReactDOM from 'react-dom/client'; -import './index.css'; -import App from './App'; -import reportWebVitals from './reportWebVitals'; - -const root = ReactDOM.createRoot(document.getElementById('root')); -root.render( - - - -); - -// If you want to start measuring performance in your app, pass a function -// to log results (for example: reportWebVitals(console.log)) -// or send to an analytics endpoint. Learn more: https://bit.ly/CRA-vitals -reportWebVitals(); diff --git a/spaces/karay/diar_speech/player.html b/spaces/karay/diar_speech/player.html deleted file mode 100644 index a267066b4d794c70661838a617c36ecfc59c54cb..0000000000000000000000000000000000000000 --- a/spaces/karay/diar_speech/player.html +++ /dev/null @@ -1,274 +0,0 @@ - - - -Speakers - - - - -
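# Illustrative sketch (not part of the original sources): typical usage of
# the EventStorage class from my/utils/event.py above -- log scalars inside a
# `with` block and call step() once per iteration. The output directory and
# the logged values are hypothetical.
def _demo_event_storage():
    with EventStorage(output_dir="./runs/demo", flush_period=60) as storage:
        for it in range(3):
            storage.put_scalars(loss=1.0 / (it + 1), lr=1e-4)
            storage.step()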
-    <!-- player.html body: audio-player markup (container divs, playback
-         controls, and a "00:00 / 00:00" time readout); the remaining tags
-         were lost in extraction -->
    - - - \ No newline at end of file diff --git a/spaces/kastan/ai-teaching-assistant-beta/gpu_memory_utils.py b/spaces/kastan/ai-teaching-assistant-beta/gpu_memory_utils.py deleted file mode 100644 index 573713a52280fd8cb828600dab3faa20fc2696d7..0000000000000000000000000000000000000000 --- a/spaces/kastan/ai-teaching-assistant-beta/gpu_memory_utils.py +++ /dev/null @@ -1,57 +0,0 @@ -import GPUtil # pip install gputil - - -def get_gpu_ids_with_sufficient_memory(memory_requirement_GB): - ''' - Returns the MINIMAL SET of GPU IDs that, combined, have at least `memory_requirement` MB of free memory. - You will need to use all returned GPU IDs to get the desired memory requirement. - It returns lower IDs first [0, 1, ...] - - If `memory_requirement` is 0, returns all available GPUs. - If `memory_requirement` is not available, returns an empty list. - ''' - memory_requirement_MB = float(memory_requirement_GB * 1024) - GPUs = sorted(GPUtil.getGPUs(), key=lambda x: x.memoryFree, reverse=True) - total_memory = sum(gpu.memoryFree for gpu in GPUs) - if memory_requirement_MB > total_memory: - return [] - GPU_IDs = [] - for gpu in GPUs: - if memory_requirement_MB <= 0: - break - GPU_IDs.append(gpu.id) - memory_requirement_MB -= gpu.memoryFree - return GPU_IDs - - -def get_device_with_most_free_memory(): - ''' - Returns the GPU ID of the GPU with the most free memory. - ''' - GPUs = GPUtil.getGPUs() - return sorted(GPUs, key=lambda x: x.memoryFree, reverse=True)[0].id - - -def get_free_memory_dict(leave_extra_memory_unused_GiB: float = 2, leave_extra_memory_unused_gpu0_GiB: float = 3): - ''' - Returns a dictionary of GPU IDs and their free memory, in MiB. - Compatible with huggingface Accelerate formatting: `max_memory=get_free_memory_dict()` - - Accelerate seems to use more memory than we give it, so we default to telling Accelerate we have 2 GiB less than we actually do. 
- - Example output: - {0: '24753MiB', 1: '26223MiB', 2: '25603MiB', 3: '9044MiB'} - ''' - GPUs = GPUtil.getGPUs() - memory_map = {gpu.id: int(round(gpu.memoryFree)) for gpu in GPUs} - if leave_extra_memory_unused_GiB > 0: - for device_id, memory_MiB in memory_map.items(): - memory_map[device_id] = memory_MiB - (leave_extra_memory_unused_GiB * 1024) - if leave_extra_memory_unused_gpu0_GiB > 0 and 0 in memory_map: - memory_map[0] = memory_map[0] - (leave_extra_memory_unused_gpu0_GiB * 1024) - - # format to Accelerate's liking - for device_id, memory_MiB in memory_map.items(): - memory_map[device_id] = f"{int(round(memory_MiB))}MiB" - - return memory_map diff --git a/spaces/kdrkdrkdr/ShirokoTTS/text/cleaners.py b/spaces/kdrkdrkdr/ShirokoTTS/text/cleaners.py deleted file mode 100644 index e48d53fed89e6e163bc4285dc24682cc3efcb56a..0000000000000000000000000000000000000000 --- a/spaces/kdrkdrkdr/ShirokoTTS/text/cleaners.py +++ /dev/null @@ -1 +0,0 @@ -from ptml2ja import ml2ja_ipa \ No newline at end of file diff --git a/spaces/kdrkdrkdr/YuukaTTS/export_model.py b/spaces/kdrkdrkdr/YuukaTTS/export_model.py deleted file mode 100644 index 98a49835df5a7a2486e76ddf94fbbb4444b52203..0000000000000000000000000000000000000000 --- a/spaces/kdrkdrkdr/YuukaTTS/export_model.py +++ /dev/null @@ -1,13 +0,0 @@ -import torch - -if __name__ == '__main__': - model_path = "saved_model/11/model.pth" - output_path = "saved_model/11/model1.pth" - checkpoint_dict = torch.load(model_path, map_location='cpu') - checkpoint_dict_new = {} - for k, v in checkpoint_dict.items(): - if k == "optimizer": - print("remove optimizer") - continue - checkpoint_dict_new[k] = v - torch.save(checkpoint_dict_new, output_path) diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/options/train_options.py b/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/options/train_options.py deleted file mode 100644 index 1337bfdd5f372b5c686a91b394a2aadbe5741f44..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/options/train_options.py +++ /dev/null @@ -1,53 +0,0 @@ -"""This script contains the training options for Deep3DFaceRecon_pytorch -""" - -from .base_options import BaseOptions -from util import util - -class TrainOptions(BaseOptions): - """This class includes training options. - - It also includes shared options defined in BaseOptions. - """ - - def initialize(self, parser): - parser = BaseOptions.initialize(self, parser) - # dataset parameters - # for train - parser.add_argument('--data_root', type=str, default='./', help='dataset root') - parser.add_argument('--flist', type=str, default='datalist/train/masks.txt', help='list of mask names of training set') - parser.add_argument('--batch_size', type=int, default=32) - parser.add_argument('--dataset_mode', type=str, default='flist', help='chooses how datasets are loaded. [None | flist]') - parser.add_argument('--serial_batches', action='store_true', help='if true, takes images in order to make batches, otherwise takes them randomly') - parser.add_argument('--num_threads', default=4, type=int, help='# threads for loading data') - parser.add_argument('--max_dataset_size', type=int, default=float("inf"), help='Maximum number of samples allowed per dataset. 
If the dataset directory contains more than max_dataset_size, only a subset is loaded.') - parser.add_argument('--preprocess', type=str, default='shift_scale_rot_flip', help='scaling and cropping of images at load time [shift_scale_rot_flip | shift_scale | shift | shift_rot_flip ]') - parser.add_argument('--use_aug', type=util.str2bool, nargs='?', const=True, default=True, help='whether use data augmentation') - - # for val - parser.add_argument('--flist_val', type=str, default='datalist/val/masks.txt', help='list of mask names of val set') - parser.add_argument('--batch_size_val', type=int, default=32) - - - # visualization parameters - parser.add_argument('--display_freq', type=int, default=1000, help='frequency of showing training results on screen') - parser.add_argument('--print_freq', type=int, default=100, help='frequency of showing training results on console') - - # network saving and loading parameters - parser.add_argument('--save_latest_freq', type=int, default=5000, help='frequency of saving the latest results') - parser.add_argument('--save_epoch_freq', type=int, default=1, help='frequency of saving checkpoints at the end of epochs') - parser.add_argument('--evaluation_freq', type=int, default=5000, help='evaluation freq') - parser.add_argument('--save_by_iter', action='store_true', help='whether saves model by iteration') - parser.add_argument('--continue_train', action='store_true', help='continue training: load the latest model') - parser.add_argument('--epoch_count', type=int, default=1, help='the starting epoch count, we save the model by , +, ...') - parser.add_argument('--phase', type=str, default='train', help='train, val, test, etc') - parser.add_argument('--pretrained_name', type=str, default=None, help='resume training from another checkpoint') - - # training parameters - parser.add_argument('--n_epochs', type=int, default=20, help='number of epochs with the initial learning rate') - parser.add_argument('--lr', type=float, default=0.0001, help='initial learning rate for adam') - parser.add_argument('--lr_policy', type=str, default='step', help='learning rate policy. [linear | step | plateau | cosine]') - parser.add_argument('--lr_decay_epochs', type=int, default=10, help='multiply by a gamma every lr_decay_epochs epoches') - - self.isTrain = True - return parser diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/src/facerender/sync_batchnorm/replicate.py b/spaces/kevinwang676/ChatGLM2-SadTalker/src/facerender/sync_batchnorm/replicate.py deleted file mode 100644 index b71c7b8ed51a1d6c55b1f753bdd8d90bad79bd06..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker/src/facerender/sync_batchnorm/replicate.py +++ /dev/null @@ -1,94 +0,0 @@ -# -*- coding: utf-8 -*- -# File : replicate.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. - -import functools - -from torch.nn.parallel.data_parallel import DataParallel - -__all__ = [ - 'CallbackContext', - 'execute_replication_callbacks', - 'DataParallelWithCallback', - 'patch_replication_callback' -] - - -class CallbackContext(object): - pass - - -def execute_replication_callbacks(modules): - """ - Execute an replication callback `__data_parallel_replicate__` on each module created by original replication. 
-
-    The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)`
-
-    Note that, as all modules are isomorphic, we assign each sub-module a context
-    (shared among multiple copies of this module on different devices).
-    Through this context, different copies can share some information.
-
-    We guarantee that the callback on the master copy (the first copy) will be called ahead of calling the callback
-    of any slave copies.
-    """
-    master_copy = modules[0]
-    nr_modules = len(list(master_copy.modules()))
-    ctxs = [CallbackContext() for _ in range(nr_modules)]
-
-    for i, module in enumerate(modules):
-        for j, m in enumerate(module.modules()):
-            if hasattr(m, '__data_parallel_replicate__'):
-                m.__data_parallel_replicate__(ctxs[j], i)
-
-
-class DataParallelWithCallback(DataParallel):
-    """
-    Data Parallel with a replication callback.
-
-    A replication callback `__data_parallel_replicate__` of each module will be invoked after being created by
-    original `replicate` function.
-    The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)`
-
-    Examples:
-        > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)
-        > sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1])
-        # sync_bn.__data_parallel_replicate__ will be invoked.
-    """
-
-    def replicate(self, module, device_ids):
-        modules = super(DataParallelWithCallback, self).replicate(module, device_ids)
-        execute_replication_callbacks(modules)
-        return modules
-
-
-def patch_replication_callback(data_parallel):
-    """
-    Monkey-patch an existing `DataParallel` object. Add the replication callback.
-    Useful when you have a customized `DataParallel` implementation.
-
-    Examples:
-        > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)
-        > sync_bn = DataParallel(sync_bn, device_ids=[0, 1])
-        > patch_replication_callback(sync_bn)
-        # this is equivalent to
-        > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)
-        > sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1])
-    """
-
-    assert isinstance(data_parallel, DataParallel)
-
-    old_replicate = data_parallel.replicate
-
-    @functools.wraps(old_replicate)
-    def new_replicate(module, device_ids):
-        modules = old_replicate(module, device_ids)
-        execute_replication_callbacks(modules)
-        return modules
-
-    data_parallel.replicate = new_replicate
diff --git a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/face3d/models/arcface_torch/utils/__init__.py b/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/face3d/models/arcface_torch/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/kevinwang676/FreeVC-en/speaker_encoder/voice_encoder.py b/spaces/kevinwang676/FreeVC-en/speaker_encoder/voice_encoder.py
deleted file mode 100644
index 88cdee2de76b72db58c5dd19a888597e0fe12fbb..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/FreeVC-en/speaker_encoder/voice_encoder.py
+++ /dev/null
@@ -1,173 +0,0 @@
-from speaker_encoder.hparams import *
-from speaker_encoder import audio
-from pathlib import Path
-from typing import Union, List
-from torch import nn
-from time import perf_counter as timer
-import numpy as np
-import torch
-
-
-class SpeakerEncoder(nn.Module):
-    def __init__(self, weights_fpath, device: Union[str, torch.device]=None, verbose=True):
-        """
-        :param device: either a torch device or the name of a torch device (e.g. "cpu", "cuda").
-        If None, defaults to cuda if it is available on your machine, otherwise the model will
-        run on cpu. Outputs are always returned on the cpu, as numpy arrays.
-        """
-        super().__init__()
-
-        # Define the network
-        self.lstm = nn.LSTM(mel_n_channels, model_hidden_size, model_num_layers, batch_first=True)
-        self.linear = nn.Linear(model_hidden_size, model_embedding_size)
-        self.relu = nn.ReLU()
-
-        # Get the target device
-        if device is None:
-            device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-        elif isinstance(device, str):
-            device = torch.device(device)
-        self.device = device
-
-        # Load the pretrained model's speaker weights
-        # weights_fpath = Path(__file__).resolve().parent.joinpath("pretrained.pt")
-        # if not weights_fpath.exists():
-        #     raise Exception("Couldn't find the voice encoder pretrained model at %s." %
-        #                     weights_fpath)
-
-        start = timer()
-        checkpoint = torch.load(weights_fpath, map_location="cpu")
-
-        self.load_state_dict(checkpoint["model_state"], strict=False)
-        self.to(device)
-
-        if verbose:
-            print("Loaded the voice encoder model on %s in %.2f seconds." %
-                  (device.type, timer() - start))
-
-    def forward(self, mels: torch.FloatTensor):
-        """
-        Computes the embeddings of a batch of utterance spectrograms.
-        :param mels: a batch of mel spectrograms of same duration as a float32 tensor of shape
-        (batch_size, n_frames, n_channels)
-        :return: the embeddings as a float32 tensor of shape (batch_size, embedding_size).
-        Embeddings are positive and L2-normed, thus they lie in the range [0, 1].
-        """
-        # Pass the input through the LSTM layers and retrieve the final hidden state of the last
-        # layer. Apply a cutoff to 0 for negative values and L2 normalize the embeddings.
-        _, (hidden, _) = self.lstm(mels)
-        embeds_raw = self.relu(self.linear(hidden[-1]))
-        return embeds_raw / torch.norm(embeds_raw, dim=1, keepdim=True)
-
-    @staticmethod
-    def compute_partial_slices(n_samples: int, rate, min_coverage):
-        """
-        Computes where to split an utterance waveform and its corresponding mel spectrogram to
-        obtain partial utterances of each. Both the waveform and the
-        mel spectrogram slices are returned, so as to make each partial utterance waveform
-        correspond to its spectrogram.
-
-        The returned ranges may be indexing further than the length of the waveform. It is
-        recommended that you pad the waveform with zeros up to wav_slices[-1].stop.
-
-        :param n_samples: the number of samples in the waveform
-        :param rate: how many partial utterances should occur per second. Partial utterances must
-        cover the span of the entire utterance, thus the rate should not be lower than the inverse
-        of the duration of a partial utterance. By default, partial utterances are 1.6s long and
-        the minimum rate is thus 0.625.
-        :param min_coverage: when reaching the last partial utterance, it may or may not have
-        enough frames. If at least <min_coverage> of <partials_n_frames> are present,
-        then the last partial utterance will be considered by zero-padding the audio. Otherwise,
-        it will be discarded. If there aren't enough frames for one partial utterance,
-        this parameter is ignored so that the function always returns at least one slice.
-        :return: the waveform slices and mel spectrogram slices as lists of array slices. Index
-        respectively the waveform and the mel spectrogram with these slices to obtain the partial
-        utterances.
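-
-        As a worked example (assuming the typical hparams of 16 kHz audio, a 10 ms mel hop
-        and 160-frame partials): each mel frame then covers 160 samples, a partial spans
-        1.6 s, and rate=1.3 yields a step of round((16000 / 1.3) / 160) = 77 frames between
-        the starts of consecutive partials.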
- """ - assert 0 < min_coverage <= 1 - - # Compute how many frames separate two partial utterances - samples_per_frame = int((sampling_rate * mel_window_step / 1000)) - n_frames = int(np.ceil((n_samples + 1) / samples_per_frame)) - frame_step = int(np.round((sampling_rate / rate) / samples_per_frame)) - assert 0 < frame_step, "The rate is too high" - assert frame_step <= partials_n_frames, "The rate is too low, it should be %f at least" % \ - (sampling_rate / (samples_per_frame * partials_n_frames)) - - # Compute the slices - wav_slices, mel_slices = [], [] - steps = max(1, n_frames - partials_n_frames + frame_step + 1) - for i in range(0, steps, frame_step): - mel_range = np.array([i, i + partials_n_frames]) - wav_range = mel_range * samples_per_frame - mel_slices.append(slice(*mel_range)) - wav_slices.append(slice(*wav_range)) - - # Evaluate whether extra padding is warranted or not - last_wav_range = wav_slices[-1] - coverage = (n_samples - last_wav_range.start) / (last_wav_range.stop - last_wav_range.start) - if coverage < min_coverage and len(mel_slices) > 1: - mel_slices = mel_slices[:-1] - wav_slices = wav_slices[:-1] - - return wav_slices, mel_slices - - def embed_utterance(self, wav: np.ndarray, return_partials=False, rate=1.3, min_coverage=0.75): - """ - Computes an embedding for a single utterance. The utterance is divided in partial - utterances and an embedding is computed for each. The complete utterance embedding is the - L2-normed average embedding of the partial utterances. - - TODO: independent batched version of this function - - :param wav: a preprocessed utterance waveform as a numpy array of float32 - :param return_partials: if True, the partial embeddings will also be returned along with - the wav slices corresponding to each partial utterance. - :param rate: how many partial utterances should occur per second. Partial utterances must - cover the span of the entire utterance, thus the rate should not be lower than the inverse - of the duration of a partial utterance. By default, partial utterances are 1.6s long and - the minimum rate is thus 0.625. - :param min_coverage: when reaching the last partial utterance, it may or may not have - enough frames. If at least of are present, - then the last partial utterance will be considered by zero-padding the audio. Otherwise, - it will be discarded. If there aren't enough frames for one partial utterance, - this parameter is ignored so that the function always returns at least one slice. - :return: the embedding as a numpy array of float32 of shape (model_embedding_size,). If - is True, the partial utterances as a numpy array of float32 of shape - (n_partials, model_embedding_size) and the wav partials as a list of slices will also be - returned. - """ - # Compute where to split the utterance into partials and pad the waveform with zeros if - # the partial utterances cover a larger range. 
-        wav_slices, mel_slices = self.compute_partial_slices(len(wav), rate, min_coverage)
-        max_wave_length = wav_slices[-1].stop
-        if max_wave_length >= len(wav):
-            wav = np.pad(wav, (0, max_wave_length - len(wav)), "constant")
-
-        # Split the utterance into partials and forward them through the model
-        mel = audio.wav_to_mel_spectrogram(wav)
-        mels = np.array([mel[s] for s in mel_slices])
-        with torch.no_grad():
-            mels = torch.from_numpy(mels).to(self.device)
-            partial_embeds = self(mels).cpu().numpy()
-
-        # Compute the utterance embedding from the partial embeddings
-        raw_embed = np.mean(partial_embeds, axis=0)
-        embed = raw_embed / np.linalg.norm(raw_embed, 2)
-
-        if return_partials:
-            return embed, partial_embeds, wav_slices
-        return embed
-
-    def embed_speaker(self, wavs: List[np.ndarray], **kwargs):
-        """
-        Compute the embedding of a collection of wavs (presumably from the same speaker) by
-        averaging their embedding and L2-normalizing it.
-
-        :param wavs: list of wavs as numpy arrays of float32.
-        :param kwargs: extra arguments to embed_utterance()
-        :return: the embedding as a numpy array of float32 of shape (model_embedding_size,).
-        """
-        raw_embed = np.mean([self.embed_utterance(wav, return_partials=False, **kwargs) \
-                             for wav in wavs], axis=0)
-        return raw_embed / np.linalg.norm(raw_embed, 2)
\ No newline at end of file
diff --git a/spaces/kevinwang676/voice-conversion-yourtts/parseinput.py b/spaces/kevinwang676/voice-conversion-yourtts/parseinput.py
deleted file mode 100644
index 990a4edbabc9b81e275e203d654cda6ba8561ac4..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/voice-conversion-yourtts/parseinput.py
+++ /dev/null
@@ -1,128 +0,0 @@
-import re
-import xml.etree.ElementTree as ET
-from xml.sax import saxutils
-#import nltk
-
-# Chunked generation originally from https://github.com/serp-ai/bark-with-voice-clone
-def split_and_recombine_text(text, desired_length=100, max_length=150):
-    # return nltk.sent_tokenize(text)
-
-    # from https://github.com/neonbjb/tortoise-tts
-    """Split text into chunks of a desired length, trying to keep sentences intact."""
-    # normalize text, remove redundant whitespace and convert non-ascii quotes to ascii
-    text = re.sub(r"\n\n+", "\n", text)
-    text = re.sub(r"\s+", " ", text)
-    text = re.sub(r"[“”]", '"', text)
-
-    rv = []
-    in_quote = False
-    current = ""
-    split_pos = []
-    pos = -1
-    end_pos = len(text) - 1
-
-    def seek(delta):
-        nonlocal pos, in_quote, current
-        is_neg = delta < 0
-        for _ in range(abs(delta)):
-            if is_neg:
-                pos -= 1
-                current = current[:-1]
-            else:
-                pos += 1
-                current += text[pos]
-            if text[pos] == '"':
-                in_quote = not in_quote
-        return text[pos]
-
-    def peek(delta):
-        p = pos + delta
-        return text[p] if p < end_pos and p >= 0 else ""
-
-    def commit():
-        nonlocal rv, current, split_pos
-        rv.append(current)
-        current = ""
-        split_pos = []
-
-    while pos < end_pos:
-        c = seek(1)
-        # do we need to force a split?
-        if len(current) >= max_length:
-            if len(split_pos) > 0 and len(current) > (desired_length / 2):
-                # we have at least one sentence and we are over half the desired length, seek back to the last split
-                d = pos - split_pos[-1]
-                seek(-d)
-            else:
-                # no full sentences, seek back until we are not in the middle of a word and split there
-                while c not in "!?.,\n " and pos > 0 and len(current) > desired_length:
-                    c = seek(-1)
-            commit()
-        # check for sentence boundaries
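-        # (Illustration: with desired_length=100, short sentences are grouped into one
-        # chunk; a chunk is only committed once a boundary lands at or past 100 chars,
-        # so brief fragments do not each become their own chunk.)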
-        elif not in_quote and (c in "!?]\n" or (c == "." and peek(1) in "\n ")):
-            # seek forward if we have consecutive boundary markers but still within the max length
-            while (
-                pos < len(text) - 1 and len(current) < max_length and peek(1) in "!?.]"
-            ):
-                c = seek(1)
-            split_pos.append(pos)
-            if len(current) >= desired_length:
-                commit()
-        # treat end of quote as a boundary if it's followed by a space or newline
-        elif in_quote and peek(1) == '"' and peek(2) in "\n ":
-            seek(2)
-            split_pos.append(pos)
-    rv.append(current)
-
-    # clean up, remove lines with only whitespace or punctuation
-    rv = [s.strip() for s in rv]
-    rv = [s for s in rv if len(s) > 0 and not re.match(r"^[\s\.,;:!?]*$", s)]
-
-    return rv
-
-def is_ssml(value):
-    try:
-        ET.fromstring(value)
-    except ET.ParseError:
-        return False
-    return True
-
-def build_ssml(rawtext, selected_voice):
-    texts = rawtext.split("\n")
-    joinedparts = ""
-    for textpart in texts:
-        textpart = textpart.strip()
-        if len(textpart) < 1:
-            continue
-        joinedparts = joinedparts + f"\n<voice name=\"{selected_voice}\">{saxutils.escape(textpart)}</voice>"
-    ssml = f"""<?xml version="1.0"?>
-    <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
-    {joinedparts}
-    </speak>
-    """
-    return ssml
-
-def create_clips_from_ssml(ssmlinput):
-    # Parse the XML
-    tree = ET.ElementTree(ET.fromstring(ssmlinput))
-    root = tree.getroot()
-
-    # Create an empty list
-    voice_list = []
-
-    # Loop through all voice tags
-    for voice in root.iter('{http://www.w3.org/2001/10/synthesis}voice'):
-        # Extract the voice name attribute and the content text
-        voice_name = voice.attrib['name']
-        voice_content = voice.text.strip() if voice.text else ''
-        if(len(voice_content) > 0):
-            parts = split_and_recombine_text(voice_content)
-            for p in parts:
-                if(len(p) > 1):
-                    # add to tuple list
-                    voice_list.append((voice_name, p))
-    return voice_list
diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/noisychannel/rerank_options.py b/spaces/koajoel/PolyFormer/fairseq/examples/noisychannel/rerank_options.py
deleted file mode 100644
index de91939e6635bdf33c9dc330116be07d9e8be6a2..0000000000000000000000000000000000000000
--- a/spaces/koajoel/PolyFormer/fairseq/examples/noisychannel/rerank_options.py
+++ /dev/null
@@ -1,149 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
- -from fairseq import options - - -def get_reranking_parser(default_task="translation"): - parser = options.get_parser("Generation and reranking", default_task) - add_reranking_args(parser) - return parser - - -def get_tuning_parser(default_task="translation"): - parser = options.get_parser("Reranking tuning", default_task) - add_reranking_args(parser) - add_tuning_args(parser) - return parser - - -def add_reranking_args(parser): - group = parser.add_argument_group("Reranking") - # fmt: off - group.add_argument('--score-model1', '-s1', type=str, metavar='FILE', required=True, - help='path to first model or ensemble of models for rescoring') - group.add_argument('--score-model2', '-s2', type=str, metavar='FILE', required=False, - help='path to second model or ensemble of models for rescoring') - group.add_argument('--num-rescore', '-n', type=int, metavar='N', default=10, - help='the number of candidate hypothesis to rescore') - group.add_argument('-bz', '--batch-size', type=int, metavar='N', default=128, - help='batch size for generating the nbest list') - group.add_argument('--gen-subset', default='test', metavar='SET', choices=['test', 'train', 'valid'], - help='data subset to generate (train, valid, test)') - group.add_argument('--gen-model', default=None, metavar='FILE', - help='the model to generate translations') - group.add_argument('-b1', '--backwards1', action='store_true', - help='whether or not the first model group is backwards') - group.add_argument('-b2', '--backwards2', action='store_true', - help='whether or not the second model group is backwards') - group.add_argument('-a', '--weight1', default=1, nargs='+', type=float, - help='the weight(s) of the first model') - group.add_argument('-b', '--weight2', default=1, nargs='+', type=float, - help='the weight(s) of the second model, or the gen model if using nbest from interactive.py') - group.add_argument('-c', '--weight3', default=1, nargs='+', type=float, - help='the weight(s) of the third model') - - # lm arguments - group.add_argument('-lm', '--language-model', default=None, metavar='FILE', - help='language model for target language to rescore translations') - group.add_argument('--lm-dict', default=None, metavar='FILE', - help='the dict of the language model for the target language') - group.add_argument('--lm-name', default=None, - help='the name of the language model for the target language') - group.add_argument('--lm-bpe-code', default=None, metavar='FILE', - help='the bpe code for the language model for the target language') - group.add_argument('--data-dir-name', default=None, - help='name of data directory') - group.add_argument('--lenpen', default=1, nargs='+', type=float, - help='length penalty: <1.0 favors shorter, >1.0 favors longer sentences') - group.add_argument('--score-dict-dir', default=None, - help='the directory with dictionaries for the scoring models') - group.add_argument('--right-to-left1', action='store_true', - help='whether the first model group is a right to left model') - group.add_argument('--right-to-left2', action='store_true', - help='whether the second model group is a right to left model') - group.add_argument('--post-process', '--remove-bpe', default='@@ ', - help='the bpe symbol, used for the bitext and LM') - group.add_argument('--prefix-len', default=None, type=int, - help='the length of the target prefix to use in rescoring (in terms of words wo bpe)') - group.add_argument('--sampling', action='store_true', - help='use sampling instead of beam search for generating n best list') - 
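-    # Illustrative invocation combining the flags above (paths and weights are
-    # hypothetical; the rerank scripts in this example directory consume this parser):
-    #   python examples/noisychannel/rerank.py data-bin/wmt \
-    #       --score-model1 checkpoints/fw.pt --score-model2 checkpoints/bw.pt -b2 \
-    #       --language-model lm/lm.pt --lenpen 1.2 --num-rescore 50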
group.add_argument('--diff-bpe', action='store_true', - help='bpe for rescoring and nbest list not the same') - group.add_argument('--rescore-bpe-code', default=None, - help='bpe code for rescoring models') - group.add_argument('--nbest-list', default=None, - help='use predefined nbest list in interactive.py format') - group.add_argument('--write-hypos', default=None, - help='filename prefix to write hypos to') - group.add_argument('--ref-translation', default=None, - help='reference translation to use with nbest list from interactive.py') - group.add_argument('--backwards-score-dict-dir', default=None, - help='the directory with dictionaries for the backwards model,' - 'if None then it is assumed the fw and backwards models share dictionaries') - - # extra scaling args - group.add_argument('--gen-model-name', default=None, - help='the name of the models that generated the nbest list') - group.add_argument('--model1-name', default=None, - help='the name of the set for model1 group ') - group.add_argument('--model2-name', default=None, - help='the name of the set for model2 group') - group.add_argument('--shard-id', default=0, type=int, - help='the id of the shard to generate') - group.add_argument('--num-shards', default=1, type=int, - help='the number of shards to generate across') - group.add_argument('--all-shards', action='store_true', - help='use all shards') - group.add_argument('--target-prefix-frac', default=None, type=float, - help='the fraction of the target prefix to use in rescoring (in terms of words wo bpe)') - group.add_argument('--source-prefix-frac', default=None, type=float, - help='the fraction of the source prefix to use in rescoring (in terms of words wo bpe)') - group.add_argument('--normalize', action='store_true', - help='whether to normalize by src and target len') - # fmt: on - return group - - -def add_tuning_args(parser): - group = parser.add_argument_group("Tuning") - - group.add_argument( - "--lower-bound", - default=[-0.7], - nargs="+", - type=float, - help="lower bound of search space", - ) - group.add_argument( - "--upper-bound", - default=[3], - nargs="+", - type=float, - help="upper bound of search space", - ) - group.add_argument( - "--tune-param", - default=["lenpen"], - nargs="+", - choices=["lenpen", "weight1", "weight2", "weight3"], - help="the parameter(s) to tune", - ) - group.add_argument( - "--tune-subset", - default="valid", - choices=["valid", "test", "train"], - help="the subset to tune on ", - ) - group.add_argument( - "--num-trials", - default=1000, - type=int, - help="number of trials to do for random search", - ) - group.add_argument( - "--share-weights", action="store_true", help="share weight2 and weight 3" - ) - return group diff --git a/spaces/kobayashi123/bingo/README.md b/spaces/kobayashi123/bingo/README.md deleted file mode 100644 index 5d6936218874c647b5d22e13ad4be7edb8936f92..0000000000000000000000000000000000000000 --- a/spaces/kobayashi123/bingo/README.md +++ /dev/null @@ -1,28 +0,0 @@ ---- -title: bingo -emoji: 😊 -colorFrom: red -colorTo: red -sdk: docker -license: mit -duplicated_from: hf4all/bingo ---- - -
    -
-# Bingo
-
-Bingo, a New Bing that lets you breathe easy.
-
-A faithful recreation of the main features of the New Bing web UI; usable inside mainland China, compatible with most Microsoft Bing AI features, and deployable on your own server.
-
-![Github stars](https://badgen.net/github/stars/weaigc/bingo?icon=github&label=stars)
-![Github issues](https://img.shields.io/github/issues/weaigc/bingo)
-[![docker build](https://github.com/weaigc/bingo/actions/workflows/docker.yml/badge.svg)](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[![docker hub](https://badgen.net/docker/size/weaigc/bingo?icon=docker&label=image%20size)](https://hub.docker.com/repository/docker/weaigc/bingo/)
-[![MIT License](https://img.shields.io/badge/license-MIT-97c50f)](https://github.com/weaigc/bingo/blob/main/license)
-
-For bug reports and feedback, please visit https://github.com/weaigc/bingo/issues
    - - diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/jinja2/constants.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/jinja2/constants.py deleted file mode 100644 index 41a1c23b0a7fe134b1f662545876eb65b31b071e..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/jinja2/constants.py +++ /dev/null @@ -1,20 +0,0 @@ -#: list of lorem ipsum words used by the lipsum() helper function -LOREM_IPSUM_WORDS = """\ -a ac accumsan ad adipiscing aenean aliquam aliquet amet ante aptent arcu at -auctor augue bibendum blandit class commodo condimentum congue consectetuer -consequat conubia convallis cras cubilia cum curabitur curae cursus dapibus -diam dictum dictumst dignissim dis dolor donec dui duis egestas eget eleifend -elementum elit enim erat eros est et etiam eu euismod facilisi facilisis fames -faucibus felis fermentum feugiat fringilla fusce gravida habitant habitasse hac -hendrerit hymenaeos iaculis id imperdiet in inceptos integer interdum ipsum -justo lacinia lacus laoreet lectus leo libero ligula litora lobortis lorem -luctus maecenas magna magnis malesuada massa mattis mauris metus mi molestie -mollis montes morbi mus nam nascetur natoque nec neque netus nibh nisi nisl non -nonummy nostra nulla nullam nunc odio orci ornare parturient pede pellentesque -penatibus per pharetra phasellus placerat platea porta porttitor posuere -potenti praesent pretium primis proin pulvinar purus quam quis quisque rhoncus -ridiculus risus rutrum sagittis sapien scelerisque sed sem semper senectus sit -sociis sociosqu sodales sollicitudin suscipit suspendisse taciti tellus tempor -tempus tincidunt torquent tortor tristique turpis ullamcorper ultrices -ultricies urna ut varius vehicula vel velit venenatis vestibulum vitae vivamus -viverra volutpat vulputate""" diff --git a/spaces/ky2k/summarize_text/app.py b/spaces/ky2k/summarize_text/app.py deleted file mode 100644 index fb58c3c2cb3991a2a582482a2628f92aa8f971ee..0000000000000000000000000000000000000000 --- a/spaces/ky2k/summarize_text/app.py +++ /dev/null @@ -1,47 +0,0 @@ -import gradio as gr -from summarizer import TransformerSummarizer, Summarizer - -title = "Summarizer" -description = """ -This is a demo of a text summarization NN - based on GPT-2, XLNet, BERT, -works with English, Ukrainian, and Russian (and a few other languages too, these are SOTA NN after all). 
-""" - -NN_OPTIONS_LIST = ["mean", "max", "min", "median"] -NN_LIST = ["GPT-2", "XLNet", "BERT"] - - -def start_fn(article_input: str, reduce_option="mean", model_type='GPT-2') -> str: - """ - GPT-2 based solution, input full text, output summarized text - :param model_type: - :param reduce_option: - :param article_input: - :return summarized article_output: - """ - if model_type == "GPT-2": - GPT2_model = TransformerSummarizer(transformer_type="GPT2", transformer_model_key="gpt2-medium", - reduce_option=reduce_option) - full = ''.join(GPT2_model(article_input, min_length=60)) - return full - elif model_type == "XLNet": - XLNet_model = TransformerSummarizer(transformer_type="XLNet", transformer_model_key="xlnet-base-cased", - reduce_option=reduce_option) - full = ''.join(XLNet_model(article_input, min_length=60)) - return full - - elif model_type == "BERT": - BERT_model = Summarizer(reduce_option=reduce_option) - full = ''.join(BERT_model(article_input, min_length=60)) - return full - - -face = gr.Interface(fn=start_fn, - inputs=[gr.inputs.Textbox(lines=2, placeholder="Paste article here.", label='Input Article'), - gr.inputs.Dropdown(NN_OPTIONS_LIST, label="Summarize mode"), - gr.inputs.Dropdown(NN_LIST, label="Selected NN")], - outputs=gr.inputs.Textbox(lines=2, placeholder="Summarized article here.", label='Summarized ' - 'Article'), - title=title, - description=description, ) -face.launch(server_name="0.0.0.0", share=True) diff --git a/spaces/kyleebrooks/VectorDatabaseCreate/app.py b/spaces/kyleebrooks/VectorDatabaseCreate/app.py deleted file mode 100644 index a003267dbeb7d95a35f53d84fbb4ef023b8ac807..0000000000000000000000000000000000000000 --- a/spaces/kyleebrooks/VectorDatabaseCreate/app.py +++ /dev/null @@ -1,233 +0,0 @@ -from llama_index import SimpleDirectoryReader, Prompt, LLMPredictor, GPTVectorStoreIndex, VectorStoreIndex, PromptHelper, ServiceContext, load_index_from_storage, StorageContext -from llama_index.node_parser import SimpleNodeParser -from llama_index.data_structs import Node -from langchain.chat_models import ChatOpenAI -from huggingface_hub import whoami -from huggingface_hub import HfApi -from huggingface_hub import login -import os -import openai -import tiktoken -import shutil -import gradio as gr - - - -#if you have OpenAI API key as a string, enable the below -openai.api_key = "" -os.environ["OPENAI_API_KEY"] = '' -large_document="" -api=HfApi() -model_type="" -messages = [] -Chat_message = [] -chat_history=[] -custom_chat_history=[] -max_input_size = 4096 -num_outputs = 512 -chunk_size_limit = 600 -chunk_overlap_ratio = .1 - - -prompt_helper = PromptHelper(max_input_size, num_outputs, chunk_overlap_ratio, chunk_size_limit) - -store = './storage' -#store = 'kyleebrooks/Data/storage' - -max_response_tokens = 1000 -token_limit= 4097 - -template = ( - "This Chatbot is helpful, accurate, and will use the context below for answering all questions. This Chatbot will not answer questions not included in the context provided \n" - "---------------------\n" - "{context_str}" - "\n---------------------\n" - "Given this information, please answer the question by providing a detailed summary and provide accurate citations for all referenced areas at the end of each response. 
{query_str}\n" -) -qa_template = Prompt(template) - -def upload_file (index, input_file): - login(token="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") - json_list=["docstore.json", "graph_store.json", "index_store.json", "vector_store.json"] - os.mkdir("/tmp/gradio/json") - index.storage_context.persist(persist_dir="/tmp/gradio/json") - for i in json_list: - print(i) - api.upload_file( - path_or_fileobj="/tmp/gradio/json/"+i, - #path_or_fileobj=i.name, - path_in_repo="storage/"+i, - repo_id="kyleebrooks/VectorDatabaseCreate", - repo_type="space" # dataset - ) - -#loads openai key -def load_api_key (api_key): - os.environ["OPENAI_API_KEY"] = str(api_key) - openai.api_key = str(api_key) - -#identifies the current number of tokens used for the conversation -def num_tokens_from_messages(messages, model_type): - encoding = tiktoken.encoding_for_model(model_type) - num_tokens = 0 - for message in messages: - num_tokens += 4 # every message follows {role/name}\n{content}\n - for key, value in message.items(): - num_tokens += len(encoding.encode(value)) - if key == "name": # if there's a name, the role is omitted - num_tokens += -1 # role is always required and always 1 token - num_tokens += 2 # every reply is primed with assistant - print(num_tokens) - return num_tokens - -#constructs the index and saves to a subfolder -def construct_index(create_index, input_file, model_type, save_index): - if create_index == "Yes": - login(token="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") - source=input_file[0].name - suffix = source.rsplit("/", 1)[1] - prefix = source.rsplit("/", 2)[0] - directories=[] - print(prefix+" This is the Prefix") - for i in input_file: - directories.append(i.name) - print(i.name) - response="constructing index" - print('Constructing index') - # load in the documents from the docs subfolder - llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.1, model_name=model_type, max_tokens=num_outputs)) - service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper) - docs = SimpleDirectoryReader(input_files=directories, filename_as_id=True).load_data() - #Large_document=str(docs) - #node_parser = SimpleNodeParser.from_defaults(chunk_size=1024, chunk_overlap=20) - # Use the Node Parser to get nodes from the document - #nodes = node_parser.get_nodes_from_documents([large_document], show_progress=False) - # Each node in the 'nodes' list will contain a smaller chunk of the text file - - #index = GPTVectorStoreIndex.from_documents(nodes, service_context=service_context) - index = GPTVectorStoreIndex.from_documents(docs, service_context=service_context) - #index = VectorStoreIndex.from_documents(docs, service_context=service_context) - index.set_index_id('vector_index') - # Stores json files in a subfolder - if save_index=="Yes": - upload_file(index, input_file) - index_status="Index constructed and saved, allow time for loading" - else: - index_status="Index constructed but not saved for future use" - index.storage_context.persist(persist_dir=store) - # Clears out temporary files - shutil.rmtree(prefix) - response=index_status - return response - else: - response= "You did not select Yes to load a new index." 
-
-        return response
-
-
-#resets the conversation
-def generate_restart(prompt, model_type):
-
-    messages.clear()
-    messages.append({"role":"system", "content": "Tell the user that this conversation has been reset due to the discussion size reaching maximum size, and to please start by asking a new question."})
-    storage_context = StorageContext.from_defaults(persist_dir=store)
-    llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.1, model_name=model_type, max_tokens=num_outputs))
-    service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper)
-    #index = load_index_from_storage(storage_context)
-    index = load_index_from_storage(
-        StorageContext.from_defaults(persist_dir=store),
-        service_context=service_context,
-    )
-    #query_engine = index.as_query_engine(text_qa_template=qa_template)
-    chat_engine = index.as_chat_engine(text_qa_template=qa_template)
-    string_message=str(messages)
-    #response = query_engine.query(string_message)
-    response = chat_engine.chat(string_message)
-    messages.clear()
-    messages.append({"role":"system", "content": "This Chatbot is helpful, accurate, and provides all relevant information from the Treasury Financial Manual (TFM) when responding. This Chatbot always provides accurate citations from the TFM."})
-    messages.append({"role":"user","content": ""})
-    messages.append({"role":"assistant","content": ""})
-    print("restart initiated")
-    print(messages)
-    return response.response
-
-#generates the ChatGPT call
-def generate_response(prompt, model_type):
-
-    messages.append({"role": "user", "content": prompt})
-    storage_context = StorageContext.from_defaults(persist_dir=store)
-    llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.1, model_name=model_type, max_tokens=num_outputs))
-    service_context = ServiceContext.from_defaults(llm=ChatOpenAI(temperature=0., model_name=model_type))
-    #service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper)
-    index = load_index_from_storage(
-        StorageContext.from_defaults(persist_dir=store),
-        service_context=service_context,
-    )
-    #chat_engine = index.as_chat_engine(verbose=True, chat_history=chat_history, text_qa_template=qa_template, chat_mode='condense_question')
-    query_engine = index.as_query_engine(text_qa_template=qa_template)
-    string_message=str(messages)
-    response = query_engine.query(prompt)
-    #response = chat_engine.chat(prompt, chat_history)
-    string_response=str(response)
-    messages.append({"role": "assistant", "content":string_response})
-    num_tokens_from_messages(messages, model_type)
-    print(messages)
-    print("below is history")
-    print(chat_history)
-
-    return ('MIL Custom Index Chatbot: '+response.response)
-
-
-def my_chatbot(input, history, model_type):
-    history = history or []
-    if num_tokens_from_messages(messages, model_type)<(int(token_limit)-int(max_response_tokens)):
-        output = generate_response(input, model_type)
-        history.append((input, output))
-        return history, history
-    else:
-        history.clear()
-        output = generate_restart(input, model_type)
-        history.append((input, output))
-        prompt=input
-        return prompt, prompt
-
-def index_chatbot(input_text):
-    storage_context = StorageContext.from_defaults(persist_dir=store)
-    index = load_index_from_storage(storage_context)
-    query_engine = index.as_query_engine(text_qa_template=qa_template)
-    response = query_engine.query(input_text)
-    return response.response
-
-
-with gr.Blocks() as demo:
-
-    gr.Markdown("""

    MIL Custom Vector Index Chatbot

    """) - gr.Image(value="logo.PNG", width=200, height=150, interactive=False, show_share_button=False) - api_key = gr.Textbox(type='password', label="Enter the API key", width=250) - input_file = gr.Files() - #load_btn.click(in_to_out,input_file,output_file) - with gr.Row().style(equal_height=True): - create_index = gr.Radio(["Yes", "No"], label = "index creation", info="Would you like to create a new index?", value="No") - model_type = gr.Radio(["gpt-3.5-turbo", "gpt-4"], label = "Model_Type", info="Would you like to create a new index?", value="gpt-3.5-turbo") - save_index = gr.Radio(["Yes", "No"], label = "Save Index", info="Would you like to save the index for future use?", value="No") - output = gr.Textbox( - label="Output", - info="", - lines=1 - ) - submit_index = gr.Button("Create Index") - submit_index.click(load_api_key, [api_key]) - chatbot = gr.Chatbot() - state = gr.State() - text = gr.Textbox(label="Input", info="", lines=2, placeholder="Hello. Ask me a question about the indexed content. Please approach each question as if it is a new question, my memory is limited in this model.") - submit = gr.Button("SEND") - submit.click(load_api_key, [api_key]) - submit.click(my_chatbot, inputs=[text, state, model_type], outputs=[chatbot, state]) - submit_index.click(construct_index, [create_index, input_file, model_type, save_index], output, show_progress=True) - - -demo.launch(share = False) - - - diff --git a/spaces/lcipolina/Print_Gallery/README.md b/spaces/lcipolina/Print_Gallery/README.md deleted file mode 100644 index 39cf16214b7935f26f605cc9b8f0cf3b418657c0..0000000000000000000000000000000000000000 --- a/spaces/lcipolina/Print_Gallery/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Print_Gallery -emoji: 😻 -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 2.8.14 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/legoandmars/glide-inpainting/glide_text2im/xf.py b/spaces/legoandmars/glide-inpainting/glide_text2im/xf.py deleted file mode 100644 index 5dfff440b489f3cc3c62450dc28c2f35f692dd94..0000000000000000000000000000000000000000 --- a/spaces/legoandmars/glide-inpainting/glide_text2im/xf.py +++ /dev/null @@ -1,130 +0,0 @@ -""" -Transformer implementation adapted from CLIP ViT: -https://github.com/openai/CLIP/blob/4c0275784d6d9da97ca1f47eaaee31de1867da91/clip/model.py -""" - -import math - -import torch as th -import torch.nn as nn - - -def convert_module_to_f16(l): - """ - Convert primitive modules to float16. - """ - if isinstance(l, (nn.Linear, nn.Conv2d, nn.ConvTranspose2d)): - l.weight.data = l.weight.data.half() - if l.bias is not None: - l.bias.data = l.bias.data.half() - - -class LayerNorm(nn.LayerNorm): - """ - Implementation that supports fp16 inputs but fp32 gains/biases. 
- """ - - def forward(self, x: th.Tensor): - return super().forward(x.float()).to(x.dtype) - - -class MultiheadAttention(nn.Module): - def __init__(self, n_ctx, width, heads): - super().__init__() - self.n_ctx = n_ctx - self.width = width - self.heads = heads - self.c_qkv = nn.Linear(width, width * 3) - self.c_proj = nn.Linear(width, width) - self.attention = QKVMultiheadAttention(heads, n_ctx) - - def forward(self, x): - x = self.c_qkv(x) - x = self.attention(x) - x = self.c_proj(x) - return x - - -class MLP(nn.Module): - def __init__(self, width): - super().__init__() - self.width = width - self.c_fc = nn.Linear(width, width * 4) - self.c_proj = nn.Linear(width * 4, width) - self.gelu = nn.GELU() - - def forward(self, x): - return self.c_proj(self.gelu(self.c_fc(x))) - - -class QKVMultiheadAttention(nn.Module): - def __init__(self, n_heads: int, n_ctx: int): - super().__init__() - self.n_heads = n_heads - self.n_ctx = n_ctx - - def forward(self, qkv): - bs, n_ctx, width = qkv.shape - attn_ch = width // self.n_heads // 3 - scale = 1 / math.sqrt(math.sqrt(attn_ch)) - qkv = qkv.view(bs, n_ctx, self.n_heads, -1) - q, k, v = th.split(qkv, attn_ch, dim=-1) - weight = th.einsum( - "bthc,bshc->bhts", q * scale, k * scale - ) # More stable with f16 than dividing afterwards - wdtype = weight.dtype - weight = th.softmax(weight.float(), dim=-1).type(wdtype) - return th.einsum("bhts,bshc->bthc", weight, v).reshape(bs, n_ctx, -1) - - -class ResidualAttentionBlock(nn.Module): - def __init__( - self, - n_ctx: int, - width: int, - heads: int, - ): - super().__init__() - - self.attn = MultiheadAttention( - n_ctx, - width, - heads, - ) - self.ln_1 = LayerNorm(width) - self.mlp = MLP(width) - self.ln_2 = LayerNorm(width) - - def forward(self, x: th.Tensor): - x = x + self.attn(self.ln_1(x)) - x = x + self.mlp(self.ln_2(x)) - return x - - -class Transformer(nn.Module): - def __init__( - self, - n_ctx: int, - width: int, - layers: int, - heads: int, - ): - super().__init__() - self.n_ctx = n_ctx - self.width = width - self.layers = layers - self.resblocks = nn.ModuleList( - [ - ResidualAttentionBlock( - n_ctx, - width, - heads, - ) - for _ in range(layers) - ] - ) - - def forward(self, x: th.Tensor): - for block in self.resblocks: - x = block(x) - return x diff --git a/spaces/librarian-bot/webhook_metadata_reviewer/README.md b/spaces/librarian-bot/webhook_metadata_reviewer/README.md deleted file mode 100644 index d4a21b80bbb7eb653339abd00d89f87982008bbb..0000000000000000000000000000000000000000 --- a/spaces/librarian-bot/webhook_metadata_reviewer/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Automatic metadata review bot -emoji: 🧐 -colorFrom: blue -colorTo: pink -sdk: docker -pinned: false -duplicated_from: davanstrien/webhook_metadata_reviewer ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/liimefruit/RVCollection/config.py b/spaces/liimefruit/RVCollection/config.py deleted file mode 100644 index 03275af5912b923cf6e74f7de743fde92eacf2ad..0000000000000000000000000000000000000000 --- a/spaces/liimefruit/RVCollection/config.py +++ /dev/null @@ -1,105 +0,0 @@ -import argparse -import torch -from multiprocessing import cpu_count - -class Config: - def __init__(self): - self.device = "cuda:0" - self.is_half = True - self.n_cpu = 0 - self.gpu_name = None - self.gpu_mem = None - ( - self.python_cmd, - self.listen_port, - self.colab, - self.noparallel, - self.noautoopen, - self.api - ) = self.arg_parse() - self.x_pad, self.x_query, 
-        self.x_center, self.x_max = self.device_config()
-
-    @staticmethod
-    def arg_parse() -> tuple:
-        parser = argparse.ArgumentParser()
-        parser.add_argument("--port", type=int, default=7865, help="Listen port")
-        parser.add_argument(
-            "--pycmd", type=str, default="python", help="Python command"
-        )
-        parser.add_argument("--colab", action="store_true", help="Launch in colab")
-        parser.add_argument(
-            "--noparallel", action="store_true", help="Disable parallel processing"
-        )
-        parser.add_argument(
-            "--noautoopen",
-            action="store_true",
-            help="Do not open in browser automatically",
-        )
-        parser.add_argument("--api", action="store_true", help="Launch with api")
-        cmd_opts = parser.parse_args()
-
-        cmd_opts.port = cmd_opts.port if 0 <= cmd_opts.port <= 65535 else 7865
-
-        return (
-            cmd_opts.pycmd,
-            cmd_opts.port,
-            cmd_opts.colab,
-            cmd_opts.noparallel,
-            cmd_opts.noautoopen,
-            cmd_opts.api
-        )
-
-    def device_config(self) -> tuple:
-        if torch.cuda.is_available():
-            i_device = int(self.device.split(":")[-1])
-            self.gpu_name = torch.cuda.get_device_name(i_device)
-            if (
-                ("16" in self.gpu_name and "V100" not in self.gpu_name.upper())
-                or "P40" in self.gpu_name.upper()
-                or "1060" in self.gpu_name
-                or "1070" in self.gpu_name
-                or "1080" in self.gpu_name
-            ):
-                print("16-series/10-series GPUs and the P40 are forced to single precision")
-                self.is_half = False
-            else:
-                self.gpu_name = None
-            self.gpu_mem = int(
-                torch.cuda.get_device_properties(i_device).total_memory
-                / 1024
-                / 1024
-                / 1024
-                + 0.4
-            )
-        elif torch.backends.mps.is_available():
-            print("No supported NVIDIA GPU found, using MPS for inference")
-            self.device = "mps"
-            self.is_half = False
-        else:
-            print("No supported NVIDIA GPU found, using CPU for inference")
-            self.device = "cpu"
-            self.is_half = False
-
-        if self.n_cpu == 0:
-            self.n_cpu = cpu_count()
-
-        if self.is_half:
-            # settings for 6 GB of VRAM
-            x_pad = 3
-            x_query = 10
-            x_center = 60
-            x_max = 65
-        else:
-            # settings for 5 GB of VRAM
-            x_pad = 1
-            x_query = 6
-            x_center = 38
-            x_max = 41
-
-        if self.gpu_mem is not None and self.gpu_mem <= 4:
-            x_pad = 1
-            x_query = 5
-            x_center = 30
-            x_max = 32
-
-        return x_pad, x_query, x_center, x_max
\ No newline at end of file
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Antares Auto Tune 8 Mac Crack Torrent.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Antares Auto Tune 8 Mac Crack Torrent.md
deleted file mode 100644
index 3a9795dc0ae474fbf198a5a3f823ab3498151616..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Antares Auto Tune 8 Mac Crack Torrent.md
+++ /dev/null
@@ -1,6 +0,0 @@
-

    Antares Auto Tune 8 Mac Crack Torrent


    Downloadhttps://bytlly.com/2uGwZs



    -
    -Antares Auto-Tune 8 Torrent Incl Patch + Full Version Setup Antares Auto-Tune Crack – is available here to download. The Audio industry is ...
    -
    -
    -

    diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Doraemon Story Of Seasons Update 1.0.2 PLAZA FitGirl.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Doraemon Story Of Seasons Update 1.0.2 PLAZA FitGirl.md deleted file mode 100644 index 20f9e9f27787bf3c1c5908d0db76678f4365ce2a..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Doraemon Story Of Seasons Update 1.0.2 PLAZA FitGirl.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Doraemon Story of Seasons Update 1.0.2 PLAZA, FitGirl


    Download ––– https://bytlly.com/2uGvXM



    - -For Doraemon: Story of Seasons on the Nintendo Switch, a GameFAQs message board topic titled "Version 1.0.2 - Patch Notes?".. Download ...
    -
    -
    -

    diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Dragon Age Inquisition Patch V.1.11 24 TOP.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Dragon Age Inquisition Patch V.1.11 24 TOP.md deleted file mode 100644 index 7038050d368cc5870dc6ffad46f7ff9924720edf..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Dragon Age Inquisition Patch V.1.11 24 TOP.md +++ /dev/null @@ -1,106 +0,0 @@ -
    -

    Dragon Age: Inquisition Patch v.1.11 24 - A Comprehensive Overview

    -

    Dragon Age: Inquisition is one of the most popular and critically acclaimed role-playing games of the last decade. It offers a vast and immersive world, a rich and diverse story, and a dynamic and engaging gameplay. However, like any complex game, it also has its share of bugs, glitches, and issues that may affect the player's experience.

    -

    Dragon Age: Inquisition Patch v.1.11 24


    Download Ziphttps://bytlly.com/2uGxHU



    -

    That's why the developers at BioWare have been constantly working on improving and updating the game with patches that fix various problems, add new features, and enhance performance. The latest patch for Dragon Age: Inquisition is v.1.11 24, which was released on January 27th, 2023 for Windows 10 users.

    -

    In this article, we will give you a comprehensive overview of what this patch does, how to install it, and what are the pros and cons of using it. We will also answer some frequently asked questions and provide some tips and tricks for getting the most out of Dragon Age: Inquisition Patch v.1.11 24.

    -

    What Does Dragon Age: Inquisition Patch v.1.11 24 Do?

    -

    Dragon Age: Inquisition Patch v.1.11 24 is a major update that brings several improvements and fixes to the game. Here are some of the main changes that this patch introduces:

    -
      -
    • It adds SplitCam Video Driver and SplitCam Audio Driver for Windows 10 users, which allow them to use their webcam with multiple applications at the same time and add effects to their video stream.
    • -
    • It updates the codecs for better video and audio quality and compatibility.
    • -
    • It fixes the incorrect log for the Desktop custom mode.
    • -
    • It fixes the issue where some programs did not detect SplitCam Video Driver in Windows 10.
    • -
    • It fixes the sound lagging issue in Windows 10 1703.
    • -
    • It fixes the crash that occurred at the start if the player selected a video source.
    • -
    -

    How to Install Dragon Age: Inquisition Patch v.1.11 24?

    -

    Dragon Age: Inquisition Patch v.1.11 24 is compatible with Windows 10 (both 32-bit and 64-bit versions) and it is totally free without any restrictions or hidden payments. You can download it from SteamDB, which provides curated patch notes for Dragon Age: Inquisition on Steam: https://steamdb.info/app/1222690/patchnotes/

    -

    The installation process is simple and straightforward. Just follow these steps:

    -
      -
    1. Run Steam and launch Dragon Age: Inquisition from your library.
    2. -
    3. Steam will automatically download and install the patch for you.
    4. -
    5. Wait for the installation to finish and restart the game.
    6. -
    -

    What are the Pros and Cons of Dragon Age: Inquisition Patch v.1.11 24?

    -

    Dragon Age: Inquisition Patch v.1.11 24 is a great update that brings many benefits and few drawbacks to the game. Here are some of them:

    -

    - - - - - - -
| Pros | Cons |
| --- | --- |
| It improves the video and audio quality and compatibility of the game. | It may not work with some webcam models or applications. |
| It adds new features and effects for webcam users. | It may cause some lag or delay in video stream. |
| It fixes various bugs and issues that affected the gameplay. | It may consume some CPU resources or disk space. |
| It enhances the performance and stability of the game. | It may have some bugs or errors in some features. |
    -

    Frequently Asked Questions

    -

    If you have any questions or issues with Dragon Age: Inquisition Patch v.1.11 24, you can check the FAQs section on BioWare Blog, which provides official patch notes for Dragon Age: Inquisition: https://blog.bioware.com/dragon-age-inquisition-patch-notes/ Here are some of the common FAQs:

    -
      -
    • Q: How can I check which patch/version I'm using of Dragon Age: Inquisition?
    • -
    • A: You can check your patch/version by following these steps:
    • -
        -
      • Navigate to C:\Program Files (x86)\Origin Games\Dragon Age Inquisition\Update\Patch\package.mft
      • -
      • Open it with any ASCII text editor.
      • -
      • The version number is displayed at the top of the file.
      • -
      -
    • Q: How can I uninstall Dragon Age: Inquisition Patch v.1.11 24?
    • -
    • A: You can uninstall Dragon Age: Inquisition Patch v.1.11 24 by following these steps:
    • -
        -
      • Close Dragon Age: Inquisition and any other application that uses your webcam.
      • -
      • Navigate to C:\Program Files (x86)\Origin Games\Dragon Age Inquisition\Update\Patch\
      • -
      • Delete package.mft file.
      • -
      • Delete SplitCam folder if present.
      • -
      • Delete SplitCamAudio folder if present.
      • -
      • Delete SplitCamVideo folder if present.
      • - -

      -

      How to Use Dragon Age: Inquisition Patch v.1.11 24?

      -

      Dragon Age: Inquisition Patch v.1.11 24 is easy and intuitive to use. Here are some basic steps to get you started:

      -
        -
      1. Launch Dragon Age: Inquisition from Steam and select your saved game.
      2. -
      3. To use your webcam with multiple applications and add effects to your video stream, click on the "SplitCam" icon on the top right corner of the game screen.
      4. -
      5. To select the video source you want to use, click on the drop-down menu at the top left corner of the SplitCam window.
      6. -
      7. To add effects to your video stream, click on the "Effects" tab at the bottom left corner of the SplitCam window and choose from the categories on the left panel.
      8. -
      9. To zoom your video stream, use the slider at the bottom right corner of the SplitCam window or press Ctrl + mouse wheel.
      10. -
      11. To stream video to a livestream website or record it to Youtube, click on the "Stream" tab at the bottom left corner of the SplitCam window and select the platform you want to use from the list on the left panel.
      12. -
      13. To mix audio sources in one audio stream, click on the "Audio" tab at the bottom left corner of the SplitCam window and select the sources you want to use from the list on the left panel.
      14. -
      -

      What are the Best Practices for Using Dragon Age: Inquisition Patch v.1.11 24?

      -

      To get the most out of Dragon Age: Inquisition Patch v.1.11 24 and enjoy a smooth and high-quality video chat experience, you can follow these best practices:

      -
        -
      • Make sure your webcam driver and SplitCam software are updated to the latest version.
      • -
      • Close any unnecessary programs or processes that may interfere with SplitCam or consume CPU resources.
      • -
      • Adjust the settings of SplitCam according to your preferences and needs, such as resolution, frame rate, brightness, contrast, saturation, etc.
      • -
      • Choose the effects and masks that suit your video chat purpose and mood, and don't overuse them.
      • -
      • Test your video stream before broadcasting it to a livestream website or recording it to Youtube.
      • -
      • Have fun and be creative with SplitCam!
      • -
      -

      Where to Get More Information About Dragon Age: Inquisition Patch v.1.11 24?

      -

      If you want to learn more about Dragon Age: Inquisition Patch v.1.11 24, you can visit the following sources:

      -
        -
      • The official website: https://www.ea.com/games/dragon-age/dragon-age-inquisition Here you can find the latest news, updates, media, and community content for Dragon Age: Inquisition.
      • -
      • The official blog: https://blog.bioware.com/category/dragon-age/ Here you can read new articles and insights from the developers and writers of Dragon Age: Inquisition.
      • -
      • The official forum: https://answers.ea.com/t5/Dragon-Age-Inquisition/bd-p/Dragon-Age-Inquisition Here you can join the discussion with other players and get help from the support team.
      • -
      • The official social media: https://www.facebook.com/DragonAge/ https://twitter.com/dragonage Here you can follow Dragon Age: Inquisition on Facebook and Twitter and get the latest updates and interact with the community.
      • -
      -

      What are the Tips and Tricks for Playing Dragon Age: Inquisition Patch v.1.11 24?

      -

      Dragon Age: Inquisition Patch v.1.11 24 is a fun and immersive game that offers a lot of options and possibilities for the player. Here are some tips and tricks to help you enjoy the game even more:

      -
        -
      • Explore the world and collect resources. Dragon Age: Inquisition has a huge and beautiful world that is full of secrets, quests, and loot. You can use your resources to craft weapons, armor, potions, and upgrades for your equipment and your base.
      • -
      • Manage your party and your relationships. Dragon Age: Inquisition has a diverse and interesting cast of characters that you can recruit, interact with, and romance. You can choose who to bring with you on your missions, who to talk to, who to support, and who to romance. Your choices will affect your relationships with them and their loyalty to you.
      • -
      • Customize your character and your playstyle. Dragon Age: Inquisition allows you to create your own character from four different races (human, elf, dwarf, or qunari) and three different classes (warrior, rogue, or mage). You can also choose from various specializations that give you unique abilities and skills. You can also customize your appearance, your gear, your skills, and your tactics.
      • -
      • Play online with other players. Dragon Age: Inquisition has a multiplayer mode that lets you team up with up to three other players and take on various missions and challenges. You can choose from different characters with different abilities and roles, earn rewards, and unlock new content.
      • -
      -

      What are the Reviews for Dragon Age: Inquisition Patch v.1.11 24?

      -

      Dragon Age: Inquisition Patch v.1.11 24 has received mostly positive reviews from users and critics alike. Here are some of the comments from various sources:

      • "Dragon Age: Inquisition Patch v.1.11 24 is a great update that improves the game in many ways. I love the new SplitCam feature that lets me use my webcam with multiple applications and add effects to my video stream. The game also runs smoother and looks better than before." - User review on Steam
      • "Dragon Age: Inquisition Patch v.1.11 24 is a must-have for any fan of the game. It fixes many bugs and issues that plagued the game since launch, and adds new features and enhancements that make the game more enjoyable and immersive. The SplitCam feature is especially cool and fun to use." - Editor review on IGN
      • "Dragon Age: Inquisition Patch v.1.11 24 is a welcome update that brings a lot of improvements and fixes to the game. The SplitCam feature is a nice addition that allows you to use your webcam with multiple applications and add effects to your video stream. The game also looks and performs better than ever." - Review on GameSpot

      Conclusion


      Dragon Age: Inquisition Patch v.1.11 24 is a powerful and comprehensive update that enhances the game in various ways. It adds SplitCam Video Driver and SplitCam Audio Driver for Windows 10 users, which allow them to use their webcam with multiple applications at the same time and add effects to their video stream. It also updates the codecs for better video and audio quality and compatibility, fixes various bugs and issues, and improves the performance and stability of the game.


      If you are looking for a free and versatile update for Dragon Age: Inquisition, you should definitely give Dragon Age: Inquisition Patch v.1.11 24 a try.


      \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/HACK SONY Vegas Pro 13.0 Build 373 (x64) RePack By D!akov NEW!.md b/spaces/lincquiQcaudo/Top-20-Diffusion/HACK SONY Vegas Pro 13.0 Build 373 (x64) RePack By D!akov NEW!.md deleted file mode 100644 index 04067382a96ec8edbcbf6b22378183f8155de814..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/HACK SONY Vegas Pro 13.0 Build 373 (x64) RePack By D!akov NEW!.md +++ /dev/null @@ -1,5 +0,0 @@ - -

      84c4c6b66f GiliSoft Smart wifi USB KNetwok 2.2.0.0 (MAC/PC) with Patch.rar
      COFFEE PAD - TV Hacked REPACK - 4.6.2.1 FULL Version
      Osmera Torrent 1.3.6.2
      The Ranger Best of Pdf - KriptoVirus 2
      Training Files.zip
      THIS IS A FULLY PATCHED FILE!
      ALL FREE KEYS ARE LISTED BELOW
      I will not be held responsible for any viruses on your computer, hard drive, or other media. I can assure you that this is the real thing and that it has been fully tested.

      http://www.expressvpn.net/ This is a brand new premium VPN service that gives users a feeling of anonymity when they are online. I have used both free and paid VPN services, and ExpressVPN is one of the finest I have ever tried. It offers strong online security features, such as 256-bit SSL encryption on all servers, scrupulous customer service, and apps for all your devices, and it runs perfectly on both Windows and Mac. For anyone looking for a safe online browsing experience, ExpressVPN is definitely the VPN service to choose.

      I am a big fan of ExpressVPN because their customer service is simply impeccable. I have never experienced any problems or technical issues when using it, and the support staff is always helpful, responsive, and professional. The website is easy to use, and it is never complicated to find the information you need. ExpressVPN is a bit pricey, but considering the vast number of users who rely on it, I would say it is totally worth the money.

      It offers apps for its three most popular platforms: Android, iOS, and Windows. I have tried all three, and Android is a dream: you can control everything through the 3D interface, which is free from unwanted ads and clutter. The basic functions are easily manageable, so navigating the app is no problem, and I like that it can manage the Wi-Fi connection settings on my device. That said, as far as the browsing experience goes, Android is not the best of the three platforms.


      HACK SONY Vegas Pro 13.0 Build 373 (x64) RePack By D!akov


      Download File · https://bytlly.com/2uGxy5



      \ No newline at end of file diff --git a/spaces/lllqqq/so-vits-svc-models-pcr/diffusion/dpm_solver_pytorch.py b/spaces/lllqqq/so-vits-svc-models-pcr/diffusion/dpm_solver_pytorch.py deleted file mode 100644 index dee5e280661b61e0a99038ce0bd240db51344ead..0000000000000000000000000000000000000000 --- a/spaces/lllqqq/so-vits-svc-models-pcr/diffusion/dpm_solver_pytorch.py +++ /dev/null @@ -1,1201 +0,0 @@ -import math - -import torch - - -class NoiseScheduleVP: - def __init__( - self, - schedule='discrete', - betas=None, - alphas_cumprod=None, - continuous_beta_0=0.1, - continuous_beta_1=20., - ): - """Create a wrapper class for the forward SDE (VP type). - - *** - Update: We support discrete-time diffusion models by implementing a picewise linear interpolation for log_alpha_t. - We recommend to use schedule='discrete' for the discrete-time diffusion models, especially for high-resolution images. - *** - - The forward SDE ensures that the condition distribution q_{t|0}(x_t | x_0) = N ( alpha_t * x_0, sigma_t^2 * I ). - We further define lambda_t = log(alpha_t) - log(sigma_t), which is the half-logSNR (described in the DPM-Solver paper). - Therefore, we implement the functions for computing alpha_t, sigma_t and lambda_t. For t in [0, T], we have: - - log_alpha_t = self.marginal_log_mean_coeff(t) - sigma_t = self.marginal_std(t) - lambda_t = self.marginal_lambda(t) - - Moreover, as lambda(t) is an invertible function, we also support its inverse function: - - t = self.inverse_lambda(lambda_t) - - =============================================================== - - We support both discrete-time DPMs (trained on n = 0, 1, ..., N-1) and continuous-time DPMs (trained on t in [t_0, T]). - - 1. For discrete-time DPMs: - - For discrete-time DPMs trained on n = 0, 1, ..., N-1, we convert the discrete steps to continuous time steps by: - t_i = (i + 1) / N - e.g. for N = 1000, we have t_0 = 1e-3 and T = t_{N-1} = 1. - We solve the corresponding diffusion ODE from time T = 1 to time t_0 = 1e-3. - - Args: - betas: A `torch.Tensor`. The beta array for the discrete-time DPM. (See the original DDPM paper for details) - alphas_cumprod: A `torch.Tensor`. The cumprod alphas for the discrete-time DPM. (See the original DDPM paper for details) - - Note that we always have alphas_cumprod = cumprod(betas). Therefore, we only need to set one of `betas` and `alphas_cumprod`. - - **Important**: Please pay special attention for the args for `alphas_cumprod`: - The `alphas_cumprod` is the \hat{alpha_n} arrays in the notations of DDPM. Specifically, DDPMs assume that - q_{t_n | 0}(x_{t_n} | x_0) = N ( \sqrt{\hat{alpha_n}} * x_0, (1 - \hat{alpha_n}) * I ). - Therefore, the notation \hat{alpha_n} is different from the notation alpha_t in DPM-Solver. In fact, we have - alpha_{t_n} = \sqrt{\hat{alpha_n}}, - and - log(alpha_{t_n}) = 0.5 * log(\hat{alpha_n}). - - - 2. For continuous-time DPMs: - - We support two types of VPSDEs: linear (DDPM) and cosine (improved-DDPM). The hyperparameters for the noise - schedule are the default settings in DDPM and improved-DDPM: - - Args: - beta_min: A `float` number. The smallest beta for the linear schedule. - beta_max: A `float` number. The largest beta for the linear schedule. - cosine_s: A `float` number. The hyperparameter in the cosine schedule. - cosine_beta_max: A `float` number. The hyperparameter in the cosine schedule. - T: A `float` number. The ending time of the forward process. - - =============================================================== - - Args: - schedule: A `str`. 
The noise schedule of the forward SDE. 'discrete' for discrete-time DPMs, - 'linear' or 'cosine' for continuous-time DPMs. - Returns: - A wrapper object of the forward SDE (VP type). - - =============================================================== - - Example: - - # For discrete-time DPMs, given betas (the beta array for n = 0, 1, ..., N - 1): - >>> ns = NoiseScheduleVP('discrete', betas=betas) - - # For discrete-time DPMs, given alphas_cumprod (the \hat{alpha_n} array for n = 0, 1, ..., N - 1): - >>> ns = NoiseScheduleVP('discrete', alphas_cumprod=alphas_cumprod) - - # For continuous-time DPMs (VPSDE), linear schedule: - >>> ns = NoiseScheduleVP('linear', continuous_beta_0=0.1, continuous_beta_1=20.) - - """ - - if schedule not in ['discrete', 'linear', 'cosine']: - raise ValueError( - "Unsupported noise schedule {}. The schedule needs to be 'discrete' or 'linear' or 'cosine'".format( - schedule)) - - self.schedule = schedule - if schedule == 'discrete': - if betas is not None: - log_alphas = 0.5 * torch.log(1 - betas).cumsum(dim=0) - else: - assert alphas_cumprod is not None - log_alphas = 0.5 * torch.log(alphas_cumprod) - self.total_N = len(log_alphas) - self.T = 1. - self.t_array = torch.linspace(0., 1., self.total_N + 1)[1:].reshape((1, -1)) - self.log_alpha_array = log_alphas.reshape((1, -1,)) - else: - self.total_N = 1000 - self.beta_0 = continuous_beta_0 - self.beta_1 = continuous_beta_1 - self.cosine_s = 0.008 - self.cosine_beta_max = 999. - self.cosine_t_max = math.atan(self.cosine_beta_max * (1. + self.cosine_s) / math.pi) * 2. * ( - 1. + self.cosine_s) / math.pi - self.cosine_s - self.cosine_log_alpha_0 = math.log(math.cos(self.cosine_s / (1. + self.cosine_s) * math.pi / 2.)) - self.schedule = schedule - if schedule == 'cosine': - # For the cosine schedule, T = 1 will have numerical issues. So we manually set the ending time T. - # Note that T = 0.9946 may be not the optimal setting. However, we find it works well. - self.T = 0.9946 - else: - self.T = 1. - - def marginal_log_mean_coeff(self, t): - """ - Compute log(alpha_t) of a given continuous-time label t in [0, T]. - """ - if self.schedule == 'discrete': - return interpolate_fn(t.reshape((-1, 1)), self.t_array.to(t.device), - self.log_alpha_array.to(t.device)).reshape((-1)) - elif self.schedule == 'linear': - return -0.25 * t ** 2 * (self.beta_1 - self.beta_0) - 0.5 * t * self.beta_0 - elif self.schedule == 'cosine': - log_alpha_fn = lambda s: torch.log(torch.cos((s + self.cosine_s) / (1. + self.cosine_s) * math.pi / 2.)) - log_alpha_t = log_alpha_fn(t) - self.cosine_log_alpha_0 - return log_alpha_t - - def marginal_alpha(self, t): - """ - Compute alpha_t of a given continuous-time label t in [0, T]. - """ - return torch.exp(self.marginal_log_mean_coeff(t)) - - def marginal_std(self, t): - """ - Compute sigma_t of a given continuous-time label t in [0, T]. - """ - return torch.sqrt(1. - torch.exp(2. * self.marginal_log_mean_coeff(t))) - - def marginal_lambda(self, t): - """ - Compute lambda_t = log(alpha_t) - log(sigma_t) of a given continuous-time label t in [0, T]. - """ - log_mean_coeff = self.marginal_log_mean_coeff(t) - log_std = 0.5 * torch.log(1. - torch.exp(2. * log_mean_coeff)) - return log_mean_coeff - log_std - - def inverse_lambda(self, lamb): - """ - Compute the continuous-time label t in [0, T] of a given half-logSNR lambda_t. - """ - if self.schedule == 'linear': - tmp = 2. * (self.beta_1 - self.beta_0) * torch.logaddexp(-2. 
* lamb, torch.zeros((1,)).to(lamb)) - Delta = self.beta_0 ** 2 + tmp - return tmp / (torch.sqrt(Delta) + self.beta_0) / (self.beta_1 - self.beta_0) - elif self.schedule == 'discrete': - log_alpha = -0.5 * torch.logaddexp(torch.zeros((1,)).to(lamb.device), -2. * lamb) - t = interpolate_fn(log_alpha.reshape((-1, 1)), torch.flip(self.log_alpha_array.to(lamb.device), [1]), - torch.flip(self.t_array.to(lamb.device), [1])) - return t.reshape((-1,)) - else: - log_alpha = -0.5 * torch.logaddexp(-2. * lamb, torch.zeros((1,)).to(lamb)) - t_fn = lambda log_alpha_t: torch.arccos(torch.exp(log_alpha_t + self.cosine_log_alpha_0)) * 2. * ( - 1. + self.cosine_s) / math.pi - self.cosine_s - t = t_fn(log_alpha) - return t - - -def model_wrapper( - model, - noise_schedule, - model_type="noise", - model_kwargs={}, - guidance_type="uncond", - condition=None, - unconditional_condition=None, - guidance_scale=1., - classifier_fn=None, - classifier_kwargs={}, -): - """Create a wrapper function for the noise prediction model. - - DPM-Solver needs to solve the continuous-time diffusion ODEs. For DPMs trained on discrete-time labels, we need to - firstly wrap the model function to a noise prediction model that accepts the continuous time as the input. - - We support four types of the diffusion model by setting `model_type`: - - 1. "noise": noise prediction model. (Trained by predicting noise). - - 2. "x_start": data prediction model. (Trained by predicting the data x_0 at time 0). - - 3. "v": velocity prediction model. (Trained by predicting the velocity). - The "v" prediction is derivation detailed in Appendix D of [1], and is used in Imagen-Video [2]. - - [1] Salimans, Tim, and Jonathan Ho. "Progressive distillation for fast sampling of diffusion models." - arXiv preprint arXiv:2202.00512 (2022). - [2] Ho, Jonathan, et al. "Imagen Video: High Definition Video Generation with Diffusion Models." - arXiv preprint arXiv:2210.02303 (2022). - - 4. "score": marginal score function. (Trained by denoising score matching). - Note that the score function and the noise prediction model follows a simple relationship: - ``` - noise(x_t, t) = -sigma_t * score(x_t, t) - ``` - - We support three types of guided sampling by DPMs by setting `guidance_type`: - 1. "uncond": unconditional sampling by DPMs. - The input `model` has the following format: - `` - model(x, t_input, **model_kwargs) -> noise | x_start | v | score - `` - - 2. "classifier": classifier guidance sampling [3] by DPMs and another classifier. - The input `model` has the following format: - `` - model(x, t_input, **model_kwargs) -> noise | x_start | v | score - `` - - The input `classifier_fn` has the following format: - `` - classifier_fn(x, t_input, cond, **classifier_kwargs) -> logits(x, t_input, cond) - `` - - [3] P. Dhariwal and A. Q. Nichol, "Diffusion models beat GANs on image synthesis," - in Advances in Neural Information Processing Systems, vol. 34, 2021, pp. 8780-8794. - - 3. "classifier-free": classifier-free guidance sampling by conditional DPMs. - The input `model` has the following format: - `` - model(x, t_input, cond, **model_kwargs) -> noise | x_start | v | score - `` - And if cond == `unconditional_condition`, the model output is the unconditional DPM output. - - [4] Ho, Jonathan, and Tim Salimans. "Classifier-free diffusion guidance." - arXiv preprint arXiv:2207.12598 (2022). - - - The `t_input` is the time label of the model, which may be discrete-time labels (i.e. 0 to 999) - or continuous-time labels (i.e. epsilon to T). 
- - We wrap the model function to accept only `x` and `t_continuous` as inputs, and outputs the predicted noise: - `` - def model_fn(x, t_continuous) -> noise: - t_input = get_model_input_time(t_continuous) - return noise_pred(model, x, t_input, **model_kwargs) - `` - where `t_continuous` is the continuous time labels (i.e. epsilon to T). And we use `model_fn` for DPM-Solver. - - =============================================================== - - Args: - model: A diffusion model with the corresponding format described above. - noise_schedule: A noise schedule object, such as NoiseScheduleVP. - model_type: A `str`. The parameterization type of the diffusion model. - "noise" or "x_start" or "v" or "score". - model_kwargs: A `dict`. A dict for the other inputs of the model function. - guidance_type: A `str`. The type of the guidance for sampling. - "uncond" or "classifier" or "classifier-free". - condition: A pytorch tensor. The condition for the guided sampling. - Only used for "classifier" or "classifier-free" guidance type. - unconditional_condition: A pytorch tensor. The condition for the unconditional sampling. - Only used for "classifier-free" guidance type. - guidance_scale: A `float`. The scale for the guided sampling. - classifier_fn: A classifier function. Only used for the classifier guidance. - classifier_kwargs: A `dict`. A dict for the other inputs of the classifier function. - Returns: - A noise prediction model that accepts the noised data and the continuous time as the inputs. - """ - - def get_model_input_time(t_continuous): - """ - Convert the continuous-time `t_continuous` (in [epsilon, T]) to the model input time. - For discrete-time DPMs, we convert `t_continuous` in [1 / N, 1] to `t_input` in [0, 1000 * (N - 1) / N]. - For continuous-time DPMs, we just use `t_continuous`. - """ - if noise_schedule.schedule == 'discrete': - return (t_continuous - 1. / noise_schedule.total_N) * noise_schedule.total_N - else: - return t_continuous - - def noise_pred_fn(x, t_continuous, cond=None): - if t_continuous.reshape((-1,)).shape[0] == 1: - t_continuous = t_continuous.expand((x.shape[0])) - t_input = get_model_input_time(t_continuous) - if cond is None: - output = model(x, t_input, **model_kwargs) - else: - output = model(x, t_input, cond, **model_kwargs) - if model_type == "noise": - return output - elif model_type == "x_start": - alpha_t, sigma_t = noise_schedule.marginal_alpha(t_continuous), noise_schedule.marginal_std(t_continuous) - dims = x.dim() - return (x - expand_dims(alpha_t, dims) * output) / expand_dims(sigma_t, dims) - elif model_type == "v": - alpha_t, sigma_t = noise_schedule.marginal_alpha(t_continuous), noise_schedule.marginal_std(t_continuous) - dims = x.dim() - return expand_dims(alpha_t, dims) * output + expand_dims(sigma_t, dims) * x - elif model_type == "score": - sigma_t = noise_schedule.marginal_std(t_continuous) - dims = x.dim() - return -expand_dims(sigma_t, dims) * output - - def cond_grad_fn(x, t_input): - """ - Compute the gradient of the classifier, i.e. nabla_{x} log p_t(cond | x_t). - """ - with torch.enable_grad(): - x_in = x.detach().requires_grad_(True) - log_prob = classifier_fn(x_in, t_input, condition, **classifier_kwargs) - return torch.autograd.grad(log_prob.sum(), x_in)[0] - - def model_fn(x, t_continuous): - """ - The noise predicition model function that is used for DPM-Solver. 
- """ - if t_continuous.reshape((-1,)).shape[0] == 1: - t_continuous = t_continuous.expand((x.shape[0])) - if guidance_type == "uncond": - return noise_pred_fn(x, t_continuous) - elif guidance_type == "classifier": - assert classifier_fn is not None - t_input = get_model_input_time(t_continuous) - cond_grad = cond_grad_fn(x, t_input) - sigma_t = noise_schedule.marginal_std(t_continuous) - noise = noise_pred_fn(x, t_continuous) - return noise - guidance_scale * expand_dims(sigma_t, dims=cond_grad.dim()) * cond_grad - elif guidance_type == "classifier-free": - if guidance_scale == 1. or unconditional_condition is None: - return noise_pred_fn(x, t_continuous, cond=condition) - else: - x_in = torch.cat([x] * 2) - t_in = torch.cat([t_continuous] * 2) - c_in = torch.cat([unconditional_condition, condition]) - noise_uncond, noise = noise_pred_fn(x_in, t_in, cond=c_in).chunk(2) - return noise_uncond + guidance_scale * (noise - noise_uncond) - - assert model_type in ["noise", "x_start", "v"] - assert guidance_type in ["uncond", "classifier", "classifier-free"] - return model_fn - - -class DPM_Solver: - def __init__(self, model_fn, noise_schedule, predict_x0=False, thresholding=False, max_val=1.): - """Construct a DPM-Solver. - - We support both the noise prediction model ("predicting epsilon") and the data prediction model ("predicting x0"). - If `predict_x0` is False, we use the solver for the noise prediction model (DPM-Solver). - If `predict_x0` is True, we use the solver for the data prediction model (DPM-Solver++). - In such case, we further support the "dynamic thresholding" in [1] when `thresholding` is True. - The "dynamic thresholding" can greatly improve the sample quality for pixel-space DPMs with large guidance scales. - - Args: - model_fn: A noise prediction model function which accepts the continuous-time input (t in [epsilon, T]): - `` - def model_fn(x, t_continuous): - return noise - `` - noise_schedule: A noise schedule object, such as NoiseScheduleVP. - predict_x0: A `bool`. If true, use the data prediction model; else, use the noise prediction model. - thresholding: A `bool`. Valid when `predict_x0` is True. Whether to use the "dynamic thresholding" in [1]. - max_val: A `float`. Valid when both `predict_x0` and `thresholding` are True. The max value for thresholding. - - [1] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to-image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487, 2022b. - """ - self.model = model_fn - self.noise_schedule = noise_schedule - self.predict_x0 = predict_x0 - self.thresholding = thresholding - self.max_val = max_val - - def noise_prediction_fn(self, x, t): - """ - Return the noise prediction model. - """ - return self.model(x, t) - - def data_prediction_fn(self, x, t): - """ - Return the data prediction model (with thresholding). - """ - noise = self.noise_prediction_fn(x, t) - dims = x.dim() - alpha_t, sigma_t = self.noise_schedule.marginal_alpha(t), self.noise_schedule.marginal_std(t) - x0 = (x - expand_dims(sigma_t, dims) * noise) / expand_dims(alpha_t, dims) - if self.thresholding: - p = 0.995 # A hyperparameter in the paper of "Imagen" [1]. 
- s = torch.quantile(torch.abs(x0).reshape((x0.shape[0], -1)), p, dim=1) - s = expand_dims(torch.maximum(s, self.max_val * torch.ones_like(s).to(s.device)), dims) - x0 = torch.clamp(x0, -s, s) / s - return x0 - - def model_fn(self, x, t): - """ - Convert the model to the noise prediction model or the data prediction model. - """ - if self.predict_x0: - return self.data_prediction_fn(x, t) - else: - return self.noise_prediction_fn(x, t) - - def get_time_steps(self, skip_type, t_T, t_0, N, device): - """Compute the intermediate time steps for sampling. - - Args: - skip_type: A `str`. The type for the spacing of the time steps. We support three types: - - 'logSNR': uniform logSNR for the time steps. - - 'time_uniform': uniform time for the time steps. (**Recommended for high-resolutional data**.) - - 'time_quadratic': quadratic time for the time steps. (Used in DDIM for low-resolutional data.) - t_T: A `float`. The starting time of the sampling (default is T). - t_0: A `float`. The ending time of the sampling (default is epsilon). - N: A `int`. The total number of the spacing of the time steps. - device: A torch device. - Returns: - A pytorch tensor of the time steps, with the shape (N + 1,). - """ - if skip_type == 'logSNR': - lambda_T = self.noise_schedule.marginal_lambda(torch.tensor(t_T).to(device)) - lambda_0 = self.noise_schedule.marginal_lambda(torch.tensor(t_0).to(device)) - logSNR_steps = torch.linspace(lambda_T.cpu().item(), lambda_0.cpu().item(), N + 1).to(device) - return self.noise_schedule.inverse_lambda(logSNR_steps) - elif skip_type == 'time_uniform': - return torch.linspace(t_T, t_0, N + 1).to(device) - elif skip_type == 'time_quadratic': - t_order = 2 - t = torch.linspace(t_T ** (1. / t_order), t_0 ** (1. / t_order), N + 1).pow(t_order).to(device) - return t - else: - raise ValueError( - "Unsupported skip_type {}, need to be 'logSNR' or 'time_uniform' or 'time_quadratic'".format(skip_type)) - - def get_orders_and_timesteps_for_singlestep_solver(self, steps, order, skip_type, t_T, t_0, device): - """ - Get the order of each step for sampling by the singlestep DPM-Solver. - - We combine both DPM-Solver-1,2,3 to use all the function evaluations, which is named as "DPM-Solver-fast". - Given a fixed number of function evaluations by `steps`, the sampling procedure by DPM-Solver-fast is: - - If order == 1: - We take `steps` of DPM-Solver-1 (i.e. DDIM). - - If order == 2: - - Denote K = (steps // 2). We take K or (K + 1) intermediate time steps for sampling. - - If steps % 2 == 0, we use K steps of DPM-Solver-2. - - If steps % 2 == 1, we use K steps of DPM-Solver-2 and 1 step of DPM-Solver-1. - - If order == 3: - - Denote K = (steps // 3 + 1). We take K intermediate time steps for sampling. - - If steps % 3 == 0, we use (K - 2) steps of DPM-Solver-3, and 1 step of DPM-Solver-2 and 1 step of DPM-Solver-1. - - If steps % 3 == 1, we use (K - 1) steps of DPM-Solver-3 and 1 step of DPM-Solver-1. - - If steps % 3 == 2, we use (K - 1) steps of DPM-Solver-3 and 1 step of DPM-Solver-2. - - ============================================ - Args: - order: A `int`. The max order for the solver (2 or 3). - steps: A `int`. The total number of function evaluations (NFE). - skip_type: A `str`. The type for the spacing of the time steps. We support three types: - - 'logSNR': uniform logSNR for the time steps. - - 'time_uniform': uniform time for the time steps. (**Recommended for high-resolutional data**.) - - 'time_quadratic': quadratic time for the time steps. 
(Used in DDIM for low-resolutional data.) - t_T: A `float`. The starting time of the sampling (default is T). - t_0: A `float`. The ending time of the sampling (default is epsilon). - device: A torch device. - Returns: - orders: A list of the solver order of each step. - """ - if order == 3: - K = steps // 3 + 1 - if steps % 3 == 0: - orders = [3, ] * (K - 2) + [2, 1] - elif steps % 3 == 1: - orders = [3, ] * (K - 1) + [1] - else: - orders = [3, ] * (K - 1) + [2] - elif order == 2: - if steps % 2 == 0: - K = steps // 2 - orders = [2, ] * K - else: - K = steps // 2 + 1 - orders = [2, ] * (K - 1) + [1] - elif order == 1: - K = 1 - orders = [1, ] * steps - else: - raise ValueError("'order' must be '1' or '2' or '3'.") - if skip_type == 'logSNR': - # To reproduce the results in DPM-Solver paper - timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, K, device) - else: - timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, steps, device)[ - torch.cumsum(torch.tensor([0, ] + orders), dim=0).to(device)] - return timesteps_outer, orders - - def denoise_fn(self, x, s): - """ - Denoise at the final step, which is equivalent to solve the ODE from lambda_s to infty by first-order discretization. - """ - return self.data_prediction_fn(x, s) - - def dpm_solver_first_update(self, x, s, t, model_s=None, return_intermediate=False): - """ - DPM-Solver-1 (equivalent to DDIM) from time `s` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - s: A pytorch tensor. The starting time, with the shape (x.shape[0],). - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - model_s: A pytorch tensor. The model function evaluated at time `s`. - If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it. - return_intermediate: A `bool`. If true, also return the model value at time `s`. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - ns = self.noise_schedule - dims = x.dim() - lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t) - h = lambda_t - lambda_s - log_alpha_s, log_alpha_t = ns.marginal_log_mean_coeff(s), ns.marginal_log_mean_coeff(t) - sigma_s, sigma_t = ns.marginal_std(s), ns.marginal_std(t) - alpha_t = torch.exp(log_alpha_t) - - if self.predict_x0: - phi_1 = torch.expm1(-h) - if model_s is None: - model_s = self.model_fn(x, s) - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - ) - if return_intermediate: - return x_t, {'model_s': model_s} - else: - return x_t - else: - phi_1 = torch.expm1(h) - if model_s is None: - model_s = self.model_fn(x, s) - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - ) - if return_intermediate: - return x_t, {'model_s': model_s} - else: - return x_t - - def singlestep_dpm_solver_second_update(self, x, s, t, r1=0.5, model_s=None, return_intermediate=False, - solver_type='dpm_solver'): - """ - Singlestep solver DPM-Solver-2 from time `s` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - s: A pytorch tensor. The starting time, with the shape (x.shape[0],). - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - r1: A `float`. The hyperparameter of the second-order solver. - model_s: A pytorch tensor. The model function evaluated at time `s`. - If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it. - return_intermediate: A `bool`. 
If true, also return the model value at time `s` and `s1` (the intermediate time). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - if solver_type not in ['dpm_solver', 'taylor']: - raise ValueError("'solver_type' must be either 'dpm_solver' or 'taylor', got {}".format(solver_type)) - if r1 is None: - r1 = 0.5 - ns = self.noise_schedule - dims = x.dim() - lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t) - h = lambda_t - lambda_s - lambda_s1 = lambda_s + r1 * h - s1 = ns.inverse_lambda(lambda_s1) - log_alpha_s, log_alpha_s1, log_alpha_t = ns.marginal_log_mean_coeff(s), ns.marginal_log_mean_coeff( - s1), ns.marginal_log_mean_coeff(t) - sigma_s, sigma_s1, sigma_t = ns.marginal_std(s), ns.marginal_std(s1), ns.marginal_std(t) - alpha_s1, alpha_t = torch.exp(log_alpha_s1), torch.exp(log_alpha_t) - - if self.predict_x0: - phi_11 = torch.expm1(-r1 * h) - phi_1 = torch.expm1(-h) - - if model_s is None: - model_s = self.model_fn(x, s) - x_s1 = ( - expand_dims(sigma_s1 / sigma_s, dims) * x - - expand_dims(alpha_s1 * phi_11, dims) * model_s - ) - model_s1 = self.model_fn(x_s1, s1) - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - - (0.5 / r1) * expand_dims(alpha_t * phi_1, dims) * (model_s1 - model_s) - ) - elif solver_type == 'taylor': - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - + (1. / r1) * expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * ( - model_s1 - model_s) - ) - else: - phi_11 = torch.expm1(r1 * h) - phi_1 = torch.expm1(h) - - if model_s is None: - model_s = self.model_fn(x, s) - x_s1 = ( - expand_dims(torch.exp(log_alpha_s1 - log_alpha_s), dims) * x - - expand_dims(sigma_s1 * phi_11, dims) * model_s - ) - model_s1 = self.model_fn(x_s1, s1) - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - - (0.5 / r1) * expand_dims(sigma_t * phi_1, dims) * (model_s1 - model_s) - ) - elif solver_type == 'taylor': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - - (1. / r1) * expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * (model_s1 - model_s) - ) - if return_intermediate: - return x_t, {'model_s': model_s, 'model_s1': model_s1} - else: - return x_t - - def singlestep_dpm_solver_third_update(self, x, s, t, r1=1. / 3., r2=2. / 3., model_s=None, model_s1=None, - return_intermediate=False, solver_type='dpm_solver'): - """ - Singlestep solver DPM-Solver-3 from time `s` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - s: A pytorch tensor. The starting time, with the shape (x.shape[0],). - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - r1: A `float`. The hyperparameter of the third-order solver. - r2: A `float`. The hyperparameter of the third-order solver. - model_s: A pytorch tensor. The model function evaluated at time `s`. - If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it. - model_s1: A pytorch tensor. The model function evaluated at time `s1` (the intermediate time given by `r1`). 
- If `model_s1` is None, we evaluate the model at `s1`; otherwise we directly use it. - return_intermediate: A `bool`. If true, also return the model value at time `s`, `s1` and `s2` (the intermediate times). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - if solver_type not in ['dpm_solver', 'taylor']: - raise ValueError("'solver_type' must be either 'dpm_solver' or 'taylor', got {}".format(solver_type)) - if r1 is None: - r1 = 1. / 3. - if r2 is None: - r2 = 2. / 3. - ns = self.noise_schedule - dims = x.dim() - lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t) - h = lambda_t - lambda_s - lambda_s1 = lambda_s + r1 * h - lambda_s2 = lambda_s + r2 * h - s1 = ns.inverse_lambda(lambda_s1) - s2 = ns.inverse_lambda(lambda_s2) - log_alpha_s, log_alpha_s1, log_alpha_s2, log_alpha_t = ns.marginal_log_mean_coeff( - s), ns.marginal_log_mean_coeff(s1), ns.marginal_log_mean_coeff(s2), ns.marginal_log_mean_coeff(t) - sigma_s, sigma_s1, sigma_s2, sigma_t = ns.marginal_std(s), ns.marginal_std(s1), ns.marginal_std( - s2), ns.marginal_std(t) - alpha_s1, alpha_s2, alpha_t = torch.exp(log_alpha_s1), torch.exp(log_alpha_s2), torch.exp(log_alpha_t) - - if self.predict_x0: - phi_11 = torch.expm1(-r1 * h) - phi_12 = torch.expm1(-r2 * h) - phi_1 = torch.expm1(-h) - phi_22 = torch.expm1(-r2 * h) / (r2 * h) + 1. - phi_2 = phi_1 / h + 1. - phi_3 = phi_2 / h - 0.5 - - if model_s is None: - model_s = self.model_fn(x, s) - if model_s1 is None: - x_s1 = ( - expand_dims(sigma_s1 / sigma_s, dims) * x - - expand_dims(alpha_s1 * phi_11, dims) * model_s - ) - model_s1 = self.model_fn(x_s1, s1) - x_s2 = ( - expand_dims(sigma_s2 / sigma_s, dims) * x - - expand_dims(alpha_s2 * phi_12, dims) * model_s - + r2 / r1 * expand_dims(alpha_s2 * phi_22, dims) * (model_s1 - model_s) - ) - model_s2 = self.model_fn(x_s2, s2) - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - + (1. / r2) * expand_dims(alpha_t * phi_2, dims) * (model_s2 - model_s) - ) - elif solver_type == 'taylor': - D1_0 = (1. / r1) * (model_s1 - model_s) - D1_1 = (1. / r2) * (model_s2 - model_s) - D1 = (r2 * D1_0 - r1 * D1_1) / (r2 - r1) - D2 = 2. * (D1_1 - D1_0) / (r2 - r1) - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - + expand_dims(alpha_t * phi_2, dims) * D1 - - expand_dims(alpha_t * phi_3, dims) * D2 - ) - else: - phi_11 = torch.expm1(r1 * h) - phi_12 = torch.expm1(r2 * h) - phi_1 = torch.expm1(h) - phi_22 = torch.expm1(r2 * h) / (r2 * h) - 1. - phi_2 = phi_1 / h - 1. - phi_3 = phi_2 / h - 0.5 - - if model_s is None: - model_s = self.model_fn(x, s) - if model_s1 is None: - x_s1 = ( - expand_dims(torch.exp(log_alpha_s1 - log_alpha_s), dims) * x - - expand_dims(sigma_s1 * phi_11, dims) * model_s - ) - model_s1 = self.model_fn(x_s1, s1) - x_s2 = ( - expand_dims(torch.exp(log_alpha_s2 - log_alpha_s), dims) * x - - expand_dims(sigma_s2 * phi_12, dims) * model_s - - r2 / r1 * expand_dims(sigma_s2 * phi_22, dims) * (model_s1 - model_s) - ) - model_s2 = self.model_fn(x_s2, s2) - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - - (1. 
/ r2) * expand_dims(sigma_t * phi_2, dims) * (model_s2 - model_s) - ) - elif solver_type == 'taylor': - D1_0 = (1. / r1) * (model_s1 - model_s) - D1_1 = (1. / r2) * (model_s2 - model_s) - D1 = (r2 * D1_0 - r1 * D1_1) / (r2 - r1) - D2 = 2. * (D1_1 - D1_0) / (r2 - r1) - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - - expand_dims(sigma_t * phi_2, dims) * D1 - - expand_dims(sigma_t * phi_3, dims) * D2 - ) - - if return_intermediate: - return x_t, {'model_s': model_s, 'model_s1': model_s1, 'model_s2': model_s2} - else: - return x_t - - def multistep_dpm_solver_second_update(self, x, model_prev_list, t_prev_list, t, solver_type="dpm_solver"): - """ - Multistep solver DPM-Solver-2 from time `t_prev_list[-1]` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - model_prev_list: A list of pytorch tensor. The previous computed model values. - t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],) - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - if solver_type not in ['dpm_solver', 'taylor']: - raise ValueError("'solver_type' must be either 'dpm_solver' or 'taylor', got {}".format(solver_type)) - ns = self.noise_schedule - dims = x.dim() - model_prev_1, model_prev_0 = model_prev_list - t_prev_1, t_prev_0 = t_prev_list - lambda_prev_1, lambda_prev_0, lambda_t = ns.marginal_lambda(t_prev_1), ns.marginal_lambda( - t_prev_0), ns.marginal_lambda(t) - log_alpha_prev_0, log_alpha_t = ns.marginal_log_mean_coeff(t_prev_0), ns.marginal_log_mean_coeff(t) - sigma_prev_0, sigma_t = ns.marginal_std(t_prev_0), ns.marginal_std(t) - alpha_t = torch.exp(log_alpha_t) - - h_0 = lambda_prev_0 - lambda_prev_1 - h = lambda_t - lambda_prev_0 - r0 = h_0 / h - D1_0 = expand_dims(1. / r0, dims) * (model_prev_0 - model_prev_1) - if self.predict_x0: - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(sigma_t / sigma_prev_0, dims) * x - - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0 - - 0.5 * expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * D1_0 - ) - elif solver_type == 'taylor': - x_t = ( - expand_dims(sigma_t / sigma_prev_0, dims) * x - - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0 - + expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * D1_0 - ) - else: - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x - - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0 - - 0.5 * expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * D1_0 - ) - elif solver_type == 'taylor': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x - - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0 - - expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * D1_0 - ) - return x_t - - def multistep_dpm_solver_third_update(self, x, model_prev_list, t_prev_list, t, solver_type='dpm_solver'): - """ - Multistep solver DPM-Solver-3 from time `t_prev_list[-1]` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - model_prev_list: A list of pytorch tensor. The previous computed model values. - t_prev_list: A list of pytorch tensor. 
The previous times, each time has the shape (x.shape[0],) - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - ns = self.noise_schedule - dims = x.dim() - model_prev_2, model_prev_1, model_prev_0 = model_prev_list - t_prev_2, t_prev_1, t_prev_0 = t_prev_list - lambda_prev_2, lambda_prev_1, lambda_prev_0, lambda_t = ns.marginal_lambda(t_prev_2), ns.marginal_lambda( - t_prev_1), ns.marginal_lambda(t_prev_0), ns.marginal_lambda(t) - log_alpha_prev_0, log_alpha_t = ns.marginal_log_mean_coeff(t_prev_0), ns.marginal_log_mean_coeff(t) - sigma_prev_0, sigma_t = ns.marginal_std(t_prev_0), ns.marginal_std(t) - alpha_t = torch.exp(log_alpha_t) - - h_1 = lambda_prev_1 - lambda_prev_2 - h_0 = lambda_prev_0 - lambda_prev_1 - h = lambda_t - lambda_prev_0 - r0, r1 = h_0 / h, h_1 / h - D1_0 = expand_dims(1. / r0, dims) * (model_prev_0 - model_prev_1) - D1_1 = expand_dims(1. / r1, dims) * (model_prev_1 - model_prev_2) - D1 = D1_0 + expand_dims(r0 / (r0 + r1), dims) * (D1_0 - D1_1) - D2 = expand_dims(1. / (r0 + r1), dims) * (D1_0 - D1_1) - if self.predict_x0: - x_t = ( - expand_dims(sigma_t / sigma_prev_0, dims) * x - - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0 - + expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * D1 - - expand_dims(alpha_t * ((torch.exp(-h) - 1. + h) / h ** 2 - 0.5), dims) * D2 - ) - else: - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x - - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0 - - expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * D1 - - expand_dims(sigma_t * ((torch.exp(h) - 1. - h) / h ** 2 - 0.5), dims) * D2 - ) - return x_t - - def singlestep_dpm_solver_update(self, x, s, t, order, return_intermediate=False, solver_type='dpm_solver', r1=None, - r2=None): - """ - Singlestep DPM-Solver with the order `order` from time `s` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - s: A pytorch tensor. The starting time, with the shape (x.shape[0],). - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - order: A `int`. The order of DPM-Solver. We only support order == 1 or 2 or 3. - return_intermediate: A `bool`. If true, also return the model value at time `s`, `s1` and `s2` (the intermediate times). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - r1: A `float`. The hyperparameter of the second-order or third-order solver. - r2: A `float`. The hyperparameter of the third-order solver. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. 
- """ - if order == 1: - return self.dpm_solver_first_update(x, s, t, return_intermediate=return_intermediate) - elif order == 2: - return self.singlestep_dpm_solver_second_update(x, s, t, return_intermediate=return_intermediate, - solver_type=solver_type, r1=r1) - elif order == 3: - return self.singlestep_dpm_solver_third_update(x, s, t, return_intermediate=return_intermediate, - solver_type=solver_type, r1=r1, r2=r2) - else: - raise ValueError("Solver order must be 1 or 2 or 3, got {}".format(order)) - - def multistep_dpm_solver_update(self, x, model_prev_list, t_prev_list, t, order, solver_type='dpm_solver'): - """ - Multistep DPM-Solver with the order `order` from time `t_prev_list[-1]` to time `t`. - - Args: - x: A pytorch tensor. The initial value at time `s`. - model_prev_list: A list of pytorch tensor. The previous computed model values. - t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],) - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - order: A `int`. The order of DPM-Solver. We only support order == 1 or 2 or 3. - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - if order == 1: - return self.dpm_solver_first_update(x, t_prev_list[-1], t, model_s=model_prev_list[-1]) - elif order == 2: - return self.multistep_dpm_solver_second_update(x, model_prev_list, t_prev_list, t, solver_type=solver_type) - elif order == 3: - return self.multistep_dpm_solver_third_update(x, model_prev_list, t_prev_list, t, solver_type=solver_type) - else: - raise ValueError("Solver order must be 1 or 2 or 3, got {}".format(order)) - - def dpm_solver_adaptive(self, x, order, t_T, t_0, h_init=0.05, atol=0.0078, rtol=0.05, theta=0.9, t_err=1e-5, - solver_type='dpm_solver'): - """ - The adaptive step size solver based on singlestep DPM-Solver. - - Args: - x: A pytorch tensor. The initial value at time `t_T`. - order: A `int`. The (higher) order of the solver. We only support order == 2 or 3. - t_T: A `float`. The starting time of the sampling (default is T). - t_0: A `float`. The ending time of the sampling (default is epsilon). - h_init: A `float`. The initial step size (for logSNR). - atol: A `float`. The absolute tolerance of the solver. For image data, the default setting is 0.0078, followed [1]. - rtol: A `float`. The relative tolerance of the solver. The default setting is 0.05. - theta: A `float`. The safety hyperparameter for adapting the step size. The default setting is 0.9, followed [1]. - t_err: A `float`. The tolerance for the time. We solve the diffusion ODE until the absolute error between the - current time and `t_0` is less than `t_err`. The default setting is 1e-5. - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_0: A pytorch tensor. The approximated solution at time `t_0`. - - [1] A. Jolicoeur-Martineau, K. Li, R. Piché-Taillefer, T. Kachman, and I. Mitliagkas, "Gotta go fast when generating data with score-based models," arXiv preprint arXiv:2105.14080, 2021. 
- """ - ns = self.noise_schedule - s = t_T * torch.ones((x.shape[0],)).to(x) - lambda_s = ns.marginal_lambda(s) - lambda_0 = ns.marginal_lambda(t_0 * torch.ones_like(s).to(x)) - h = h_init * torch.ones_like(s).to(x) - x_prev = x - nfe = 0 - if order == 2: - r1 = 0.5 - lower_update = lambda x, s, t: self.dpm_solver_first_update(x, s, t, return_intermediate=True) - higher_update = lambda x, s, t, **kwargs: self.singlestep_dpm_solver_second_update(x, s, t, r1=r1, - solver_type=solver_type, - **kwargs) - elif order == 3: - r1, r2 = 1. / 3., 2. / 3. - lower_update = lambda x, s, t: self.singlestep_dpm_solver_second_update(x, s, t, r1=r1, - return_intermediate=True, - solver_type=solver_type) - higher_update = lambda x, s, t, **kwargs: self.singlestep_dpm_solver_third_update(x, s, t, r1=r1, r2=r2, - solver_type=solver_type, - **kwargs) - else: - raise ValueError("For adaptive step size solver, order must be 2 or 3, got {}".format(order)) - while torch.abs((s - t_0)).mean() > t_err: - t = ns.inverse_lambda(lambda_s + h) - x_lower, lower_noise_kwargs = lower_update(x, s, t) - x_higher = higher_update(x, s, t, **lower_noise_kwargs) - delta = torch.max(torch.ones_like(x).to(x) * atol, rtol * torch.max(torch.abs(x_lower), torch.abs(x_prev))) - norm_fn = lambda v: torch.sqrt(torch.square(v.reshape((v.shape[0], -1))).mean(dim=-1, keepdim=True)) - E = norm_fn((x_higher - x_lower) / delta).max() - if torch.all(E <= 1.): - x = x_higher - s = t - x_prev = x_lower - lambda_s = ns.marginal_lambda(s) - h = torch.min(theta * h * torch.float_power(E, -1. / order).float(), lambda_0 - lambda_s) - nfe += order - print('adaptive solver nfe', nfe) - return x - - def sample(self, x, steps=20, t_start=None, t_end=None, order=3, skip_type='time_uniform', - method='singlestep', denoise=False, solver_type='dpm_solver', atol=0.0078, - rtol=0.05, - ): - """ - Compute the sample at time `t_end` by DPM-Solver, given the initial `x` at time `t_start`. - - ===================================================== - - We support the following algorithms for both noise prediction model and data prediction model: - - 'singlestep': - Singlestep DPM-Solver (i.e. "DPM-Solver-fast" in the paper), which combines different orders of singlestep DPM-Solver. - We combine all the singlestep solvers with order <= `order` to use up all the function evaluations (steps). - The total number of function evaluations (NFE) == `steps`. - Given a fixed NFE == `steps`, the sampling procedure is: - - If `order` == 1: - - Denote K = steps. We use K steps of DPM-Solver-1 (i.e. DDIM). - - If `order` == 2: - - Denote K = (steps // 2) + (steps % 2). We take K intermediate time steps for sampling. - - If steps % 2 == 0, we use K steps of singlestep DPM-Solver-2. - - If steps % 2 == 1, we use (K - 1) steps of singlestep DPM-Solver-2 and 1 step of DPM-Solver-1. - - If `order` == 3: - - Denote K = (steps // 3 + 1). We take K intermediate time steps for sampling. - - If steps % 3 == 0, we use (K - 2) steps of singlestep DPM-Solver-3, and 1 step of singlestep DPM-Solver-2 and 1 step of DPM-Solver-1. - - If steps % 3 == 1, we use (K - 1) steps of singlestep DPM-Solver-3 and 1 step of DPM-Solver-1. - - If steps % 3 == 2, we use (K - 1) steps of singlestep DPM-Solver-3 and 1 step of singlestep DPM-Solver-2. - - 'multistep': - Multistep DPM-Solver with the order of `order`. The total number of function evaluations (NFE) == `steps`. - We initialize the first `order` values by lower order multistep solvers. 
- Given a fixed NFE == `steps`, the sampling procedure is: - Denote K = steps. - - If `order` == 1: - - We use K steps of DPM-Solver-1 (i.e. DDIM). - - If `order` == 2: - - We firstly use 1 step of DPM-Solver-1, then use (K - 1) step of multistep DPM-Solver-2. - - If `order` == 3: - - We firstly use 1 step of DPM-Solver-1, then 1 step of multistep DPM-Solver-2, then (K - 2) step of multistep DPM-Solver-3. - - 'singlestep_fixed': - Fixed order singlestep DPM-Solver (i.e. DPM-Solver-1 or singlestep DPM-Solver-2 or singlestep DPM-Solver-3). - We use singlestep DPM-Solver-`order` for `order`=1 or 2 or 3, with total [`steps` // `order`] * `order` NFE. - - 'adaptive': - Adaptive step size DPM-Solver (i.e. "DPM-Solver-12" and "DPM-Solver-23" in the paper). - We ignore `steps` and use adaptive step size DPM-Solver with a higher order of `order`. - You can adjust the absolute tolerance `atol` and the relative tolerance `rtol` to balance the computatation costs - (NFE) and the sample quality. - - If `order` == 2, we use DPM-Solver-12 which combines DPM-Solver-1 and singlestep DPM-Solver-2. - - If `order` == 3, we use DPM-Solver-23 which combines singlestep DPM-Solver-2 and singlestep DPM-Solver-3. - - ===================================================== - - Some advices for choosing the algorithm: - - For **unconditional sampling** or **guided sampling with small guidance scale** by DPMs: - Use singlestep DPM-Solver ("DPM-Solver-fast" in the paper) with `order = 3`. - e.g. - >>> dpm_solver = DPM_Solver(model_fn, noise_schedule, predict_x0=False) - >>> x_sample = dpm_solver.sample(x, steps=steps, t_start=t_start, t_end=t_end, order=3, - skip_type='time_uniform', method='singlestep') - - For **guided sampling with large guidance scale** by DPMs: - Use multistep DPM-Solver with `predict_x0 = True` and `order = 2`. - e.g. - >>> dpm_solver = DPM_Solver(model_fn, noise_schedule, predict_x0=True) - >>> x_sample = dpm_solver.sample(x, steps=steps, t_start=t_start, t_end=t_end, order=2, - skip_type='time_uniform', method='multistep') - - We support three types of `skip_type`: - - 'logSNR': uniform logSNR for the time steps. **Recommended for low-resolutional images** - - 'time_uniform': uniform time for the time steps. **Recommended for high-resolutional images**. - - 'time_quadratic': quadratic time for the time steps. - - ===================================================== - Args: - x: A pytorch tensor. The initial value at time `t_start` - e.g. if `t_start` == T, then `x` is a sample from the standard normal distribution. - steps: A `int`. The total number of function evaluations (NFE). - t_start: A `float`. The starting time of the sampling. - If `T` is None, we use self.noise_schedule.T (default is 1.0). - t_end: A `float`. The ending time of the sampling. - If `t_end` is None, we use 1. / self.noise_schedule.total_N. - e.g. if total_N == 1000, we have `t_end` == 1e-3. - For discrete-time DPMs: - - We recommend `t_end` == 1. / self.noise_schedule.total_N. - For continuous-time DPMs: - - We recommend `t_end` == 1e-3 when `steps` <= 15; and `t_end` == 1e-4 when `steps` > 15. - order: A `int`. The order of DPM-Solver. - skip_type: A `str`. The type for the spacing of the time steps. 'time_uniform' or 'logSNR' or 'time_quadratic'. - method: A `str`. The method for sampling. 'singlestep' or 'multistep' or 'singlestep_fixed' or 'adaptive'. - denoise: A `bool`. Whether to denoise at the final step. Default is False. - If `denoise` is True, the total NFE is (`steps` + 1). - solver_type: A `str`. 
The taylor expansion type for the solver. `dpm_solver` or `taylor`. We recommend `dpm_solver`. - atol: A `float`. The absolute tolerance of the adaptive step size solver. Valid when `method` == 'adaptive'. - rtol: A `float`. The relative tolerance of the adaptive step size solver. Valid when `method` == 'adaptive'. - Returns: - x_end: A pytorch tensor. The approximated solution at time `t_end`. - - """ - t_0 = 1. / self.noise_schedule.total_N if t_end is None else t_end - t_T = self.noise_schedule.T if t_start is None else t_start - device = x.device - if method == 'adaptive': - with torch.no_grad(): - x = self.dpm_solver_adaptive(x, order=order, t_T=t_T, t_0=t_0, atol=atol, rtol=rtol, - solver_type=solver_type) - elif method == 'multistep': - assert steps >= order - timesteps = self.get_time_steps(skip_type=skip_type, t_T=t_T, t_0=t_0, N=steps, device=device) - assert timesteps.shape[0] - 1 == steps - with torch.no_grad(): - vec_t = timesteps[0].expand((x.shape[0])) - model_prev_list = [self.model_fn(x, vec_t)] - t_prev_list = [vec_t] - # Init the first `order` values by lower order multistep DPM-Solver. - for init_order in range(1, order): - vec_t = timesteps[init_order].expand(x.shape[0]) - x = self.multistep_dpm_solver_update(x, model_prev_list, t_prev_list, vec_t, init_order, - solver_type=solver_type) - model_prev_list.append(self.model_fn(x, vec_t)) - t_prev_list.append(vec_t) - # Compute the remaining values by `order`-th order multistep DPM-Solver. - for step in range(order, steps + 1): - vec_t = timesteps[step].expand(x.shape[0]) - x = self.multistep_dpm_solver_update(x, model_prev_list, t_prev_list, vec_t, order, - solver_type=solver_type) - for i in range(order - 1): - t_prev_list[i] = t_prev_list[i + 1] - model_prev_list[i] = model_prev_list[i + 1] - t_prev_list[-1] = vec_t - # We do not need to evaluate the final model value. - if step < steps: - model_prev_list[-1] = self.model_fn(x, vec_t) - elif method in ['singlestep', 'singlestep_fixed']: - if method == 'singlestep': - timesteps_outer, orders = self.get_orders_and_timesteps_for_singlestep_solver(steps=steps, order=order, - skip_type=skip_type, - t_T=t_T, t_0=t_0, - device=device) - elif method == 'singlestep_fixed': - K = steps // order - orders = [order, ] * K - timesteps_outer = self.get_time_steps(skip_type=skip_type, t_T=t_T, t_0=t_0, N=K, device=device) - for i, order in enumerate(orders): - t_T_inner, t_0_inner = timesteps_outer[i], timesteps_outer[i + 1] - timesteps_inner = self.get_time_steps(skip_type=skip_type, t_T=t_T_inner.item(), t_0=t_0_inner.item(), - N=order, device=device) - lambda_inner = self.noise_schedule.marginal_lambda(timesteps_inner) - vec_s, vec_t = t_T_inner.repeat(x.shape[0]), t_0_inner.repeat(x.shape[0]) - h = lambda_inner[-1] - lambda_inner[0] - r1 = None if order <= 1 else (lambda_inner[1] - lambda_inner[0]) / h - r2 = None if order <= 2 else (lambda_inner[2] - lambda_inner[0]) / h - x = self.singlestep_dpm_solver_update(x, vec_s, vec_t, order, solver_type=solver_type, r1=r1, r2=r2) - if denoise: - x = self.denoise_fn(x, torch.ones((x.shape[0],)).to(device) * t_0) - return x - - -############################################################# -# other utility functions -############################################################# - -def interpolate_fn(x, xp, yp): - """ - A piecewise linear function y = f(x), using xp and yp as keypoints. - We implement f(x) in a differentiable way (i.e. applicable for autograd). - The function f(x) is well-defined for all x-axis. 
(For x beyond the bounds of xp, we use the outmost points of xp to define the linear function.) - - Args: - x: PyTorch tensor with shape [N, C], where N is the batch size, C is the number of channels (we use C = 1 for DPM-Solver). - xp: PyTorch tensor with shape [C, K], where K is the number of keypoints. - yp: PyTorch tensor with shape [C, K]. - Returns: - The function values f(x), with shape [N, C]. - """ - N, K = x.shape[0], xp.shape[1] - all_x = torch.cat([x.unsqueeze(2), xp.unsqueeze(0).repeat((N, 1, 1))], dim=2) - sorted_all_x, x_indices = torch.sort(all_x, dim=2) - x_idx = torch.argmin(x_indices, dim=2) - cand_start_idx = x_idx - 1 - start_idx = torch.where( - torch.eq(x_idx, 0), - torch.tensor(1, device=x.device), - torch.where( - torch.eq(x_idx, K), torch.tensor(K - 2, device=x.device), cand_start_idx, - ), - ) - end_idx = torch.where(torch.eq(start_idx, cand_start_idx), start_idx + 2, start_idx + 1) - start_x = torch.gather(sorted_all_x, dim=2, index=start_idx.unsqueeze(2)).squeeze(2) - end_x = torch.gather(sorted_all_x, dim=2, index=end_idx.unsqueeze(2)).squeeze(2) - start_idx2 = torch.where( - torch.eq(x_idx, 0), - torch.tensor(0, device=x.device), - torch.where( - torch.eq(x_idx, K), torch.tensor(K - 2, device=x.device), cand_start_idx, - ), - ) - y_positions_expanded = yp.unsqueeze(0).expand(N, -1, -1) - start_y = torch.gather(y_positions_expanded, dim=2, index=start_idx2.unsqueeze(2)).squeeze(2) - end_y = torch.gather(y_positions_expanded, dim=2, index=(start_idx2 + 1).unsqueeze(2)).squeeze(2) - cand = start_y + (x - start_x) * (end_y - start_y) / (end_x - start_x) - return cand - - -def expand_dims(v, dims): - """ - Expand the tensor `v` to the dim `dims`. - - Args: - `v`: a PyTorch tensor with shape [N]. - `dim`: a `int`. - Returns: - a PyTorch tensor with shape [N, 1, 1, ..., 1] and the total dimension is `dims`. - """ - return v[(...,) + (None,) * (dims - 1)] diff --git a/spaces/lnyan/stablediffusion-infinity/PyPatchMatch/csrc/inpaint.cpp b/spaces/lnyan/stablediffusion-infinity/PyPatchMatch/csrc/inpaint.cpp deleted file mode 100644 index de1f4b0c8bc74a2d4daf712827a903cc1385a2a7..0000000000000000000000000000000000000000 --- a/spaces/lnyan/stablediffusion-infinity/PyPatchMatch/csrc/inpaint.cpp +++ /dev/null @@ -1,234 +0,0 @@ -#include -#include -#include -#include -#include - -#include "inpaint.h" - -namespace { - static std::vector kDistance2Similarity; - - void init_kDistance2Similarity() { - double base[11] = {1.0, 0.99, 0.96, 0.83, 0.38, 0.11, 0.02, 0.005, 0.0006, 0.0001, 0}; - int length = (PatchDistanceMetric::kDistanceScale + 1); - kDistance2Similarity.resize(length); - for (int i = 0; i < length; ++i) { - double t = (double) i / length; - int j = (int) (100 * t); - int k = j + 1; - double vj = (j < 11) ? base[j] : 0; - double vk = (k < 11) ? base[k] : 0; - kDistance2Similarity[i] = vj + (100 * t - j) * (vk - vj); - } - } - - - inline void _weighted_copy(const MaskedImage &source, int ys, int xs, cv::Mat &target, int yt, int xt, double weight) { - if (source.is_masked(ys, xs)) return; - if (source.is_globally_masked(ys, xs)) return; - - auto source_ptr = source.get_image(ys, xs); - auto target_ptr = target.ptr(yt, xt); - -#pragma unroll - for (int c = 0; c < 3; ++c) - target_ptr[c] += static_cast(source_ptr[c]) * weight; - target_ptr[3] += weight; - } -} - -/** - * This algorithme uses a version proposed by Xavier Philippeau. 
- */ - -Inpainting::Inpainting(cv::Mat image, cv::Mat mask, const PatchDistanceMetric *metric) - : m_initial(image, mask), m_distance_metric(metric), m_pyramid(), m_source2target(), m_target2source() { - _initialize_pyramid(); -} - -Inpainting::Inpainting(cv::Mat image, cv::Mat mask, cv::Mat global_mask, const PatchDistanceMetric *metric) - : m_initial(image, mask, global_mask), m_distance_metric(metric), m_pyramid(), m_source2target(), m_target2source() { - _initialize_pyramid(); -} - -void Inpainting::_initialize_pyramid() { - auto source = m_initial; - m_pyramid.push_back(source); - while (source.size().height > m_distance_metric->patch_size() && source.size().width > m_distance_metric->patch_size()) { - source = source.downsample(); - m_pyramid.push_back(source); - } - - if (kDistance2Similarity.size() == 0) { - init_kDistance2Similarity(); - } -} - -cv::Mat Inpainting::run(bool verbose, bool verbose_visualize, unsigned int random_seed) { - srand(random_seed); - const int nr_levels = m_pyramid.size(); - - MaskedImage source, target; - for (int level = nr_levels - 1; level >= 0; --level) { - if (verbose) std::cerr << "Inpainting level: " << level << std::endl; - - source = m_pyramid[level]; - - if (level == nr_levels - 1) { - target = source.clone(); - target.clear_mask(); - m_source2target = NearestNeighborField(source, target, m_distance_metric); - m_target2source = NearestNeighborField(target, source, m_distance_metric); - } else { - m_source2target = NearestNeighborField(source, target, m_distance_metric, m_source2target); - m_target2source = NearestNeighborField(target, source, m_distance_metric, m_target2source); - } - - if (verbose) std::cerr << "Initialization done." << std::endl; - - if (verbose_visualize) { - auto visualize_size = m_initial.size(); - cv::Mat source_visualize(visualize_size, m_initial.image().type()); - cv::resize(source.image(), source_visualize, visualize_size); - cv::imshow("Source", source_visualize); - cv::Mat target_visualize(visualize_size, m_initial.image().type()); - cv::resize(target.image(), target_visualize, visualize_size); - cv::imshow("Target", target_visualize); - cv::waitKey(0); - } - - target = _expectation_maximization(source, target, level, verbose); - } - - return target.image(); -} - -// EM-Like algorithm (see "PatchMatch" - page 6). -// Returns a double sized target image (unless level = 0). -MaskedImage Inpainting::_expectation_maximization(MaskedImage source, MaskedImage target, int level, bool verbose) { - const int nr_iters_em = 1 + 2 * level; - const int nr_iters_nnf = static_cast(std::min(7, 1 + level)); - const int patch_size = m_distance_metric->patch_size(); - - MaskedImage new_source, new_target; - - for (int iter_em = 0; iter_em < nr_iters_em; ++iter_em) { - if (iter_em != 0) { - m_source2target.set_target(new_target); - m_target2source.set_source(new_target); - target = new_target; - } - - if (verbose) std::cerr << "EM Iteration: " << iter_em << std::endl; - - auto size = source.size(); - for (int i = 0; i < size.height; ++i) { - for (int j = 0; j < size.width; ++j) { - if (!source.contains_mask(i, j, patch_size)) { - m_source2target.set_identity(i, j); - m_target2source.set_identity(i, j); - } - } - } - if (verbose) std::cerr << " NNF minimization started." << std::endl; - m_source2target.minimize(nr_iters_nnf); - m_target2source.minimize(nr_iters_nnf); - if (verbose) std::cerr << " NNF minimization finished." << std::endl; - - // Instead of upsizing the final target, we build the last target from the next level source image. 
- // Thus, the final target is less blurry (see "Space-Time Video Completion" - page 5). - bool upscaled = false; - if (level >= 1 && iter_em == nr_iters_em - 1) { - new_source = m_pyramid[level - 1]; - new_target = target.upsample(new_source.size().width, new_source.size().height, m_pyramid[level - 1].global_mask()); - upscaled = true; - } else { - new_source = m_pyramid[level]; - new_target = target.clone(); - } - - auto vote = cv::Mat(new_target.size(), CV_64FC4); - vote.setTo(cv::Scalar::all(0)); - - // Votes for best patch from NNF Source->Target (completeness) and Target->Source (coherence). - _expectation_step(m_source2target, 1, vote, new_source, upscaled); - if (verbose) std::cerr << " Expectation source to target finished." << std::endl; - _expectation_step(m_target2source, 0, vote, new_source, upscaled); - if (verbose) std::cerr << " Expectation target to source finished." << std::endl; - - // Compile votes and update pixel values. - _maximization_step(new_target, vote); - if (verbose) std::cerr << " Minimization step finished." << std::endl; - } - - return new_target; -} - -// Expectation step: vote for best estimations of each pixel. -void Inpainting::_expectation_step( - const NearestNeighborField &nnf, bool source2target, - cv::Mat &vote, const MaskedImage &source, bool upscaled -) { - auto source_size = nnf.source_size(); - auto target_size = nnf.target_size(); - const int patch_size = m_distance_metric->patch_size(); - - for (int i = 0; i < source_size.height; ++i) { - for (int j = 0; j < source_size.width; ++j) { - if (nnf.source().is_globally_masked(i, j)) continue; - int yp = nnf.at(i, j, 0), xp = nnf.at(i, j, 1), dp = nnf.at(i, j, 2); - double w = kDistance2Similarity[dp]; - - for (int di = -patch_size; di <= patch_size; ++di) { - for (int dj = -patch_size; dj <= patch_size; ++dj) { - int ys = i + di, xs = j + dj, yt = yp + di, xt = xp + dj; - if (!(ys >= 0 && ys < source_size.height && xs >= 0 && xs < source_size.width)) continue; - if (nnf.source().is_globally_masked(ys, xs)) continue; - if (!(yt >= 0 && yt < target_size.height && xt >= 0 && xt < target_size.width)) continue; - if (nnf.target().is_globally_masked(yt, xt)) continue; - - if (!source2target) { - std::swap(ys, yt); - std::swap(xs, xt); - } - - if (upscaled) { - for (int uy = 0; uy < 2; ++uy) { - for (int ux = 0; ux < 2; ++ux) { - _weighted_copy(source, 2 * ys + uy, 2 * xs + ux, vote, 2 * yt + uy, 2 * xt + ux, w); - } - } - } else { - _weighted_copy(source, ys, xs, vote, yt, xt, w); - } - } - } - } - } -} - -// Maximization Step: maximum likelihood of target pixel. 
-void Inpainting::_maximization_step(MaskedImage &target, const cv::Mat &vote) { - auto target_size = target.size(); - for (int i = 0; i < target_size.height; ++i) { - for (int j = 0; j < target_size.width; ++j) { - const double *source_ptr = vote.ptr<double>(i, j); - unsigned char *target_ptr = target.get_mutable_image(i, j); - - if (target.is_globally_masked(i, j)) { - continue; - } - - if (source_ptr[3] > 0) { - unsigned char r = cv::saturate_cast<unsigned char>(source_ptr[0] / source_ptr[3]); - unsigned char g = cv::saturate_cast<unsigned char>(source_ptr[1] / source_ptr[3]); - unsigned char b = cv::saturate_cast<unsigned char>(source_ptr[2] / source_ptr[3]); - target_ptr[0] = r, target_ptr[1] = g, target_ptr[2] = b; - } else { - target.set_mask(i, j, 0); - } - } - } -} - diff --git a/spaces/lvwerra/license/README.md b/spaces/lvwerra/license/README.md deleted file mode 100644 index 9371e023f138523c78b8bf3c4d42c1535d322354..0000000000000000000000000000000000000000 --- a/spaces/lvwerra/license/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: License -emoji: ⚖️ -colorFrom: red -colorTo: indigo -sdk: streamlit -sdk_version: 1.9.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/lwchen/CodeFormer/CodeFormer/basicsr/ops/dcn/deform_conv.py b/spaces/lwchen/CodeFormer/CodeFormer/basicsr/ops/dcn/deform_conv.py deleted file mode 100644 index 734154f9ed9447d585eae7df6886acb136f8a3cf..0000000000000000000000000000000000000000 --- a/spaces/lwchen/CodeFormer/CodeFormer/basicsr/ops/dcn/deform_conv.py +++ /dev/null @@ -1,377 +0,0 @@ -import math -import torch -from torch import nn as nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn import functional as F -from torch.nn.modules.utils import _pair, _single - -try: - from . 
import deform_conv_ext -except ImportError: - import os - BASICSR_JIT = os.getenv('BASICSR_JIT') - if BASICSR_JIT == 'True': - from torch.utils.cpp_extension import load - module_path = os.path.dirname(__file__) - deform_conv_ext = load( - 'deform_conv', - sources=[ - os.path.join(module_path, 'src', 'deform_conv_ext.cpp'), - os.path.join(module_path, 'src', 'deform_conv_cuda.cpp'), - os.path.join(module_path, 'src', 'deform_conv_cuda_kernel.cu'), - ], - ) - - -class DeformConvFunction(Function): - - @staticmethod - def forward(ctx, - input, - offset, - weight, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1, - im2col_step=64): - if input is not None and input.dim() != 4: - raise ValueError(f'Expected 4D tensor as input, got {input.dim()}' 'D tensor instead.') - ctx.stride = _pair(stride) - ctx.padding = _pair(padding) - ctx.dilation = _pair(dilation) - ctx.groups = groups - ctx.deformable_groups = deformable_groups - ctx.im2col_step = im2col_step - - ctx.save_for_backward(input, offset, weight) - - output = input.new_empty(DeformConvFunction._output_size(input, weight, ctx.padding, ctx.dilation, ctx.stride)) - - ctx.bufs_ = [input.new_empty(0), input.new_empty(0)] # columns, ones - - if not input.is_cuda: - raise NotImplementedError - else: - cur_im2col_step = min(ctx.im2col_step, input.shape[0]) - assert (input.shape[0] % cur_im2col_step) == 0, 'im2col step must divide batchsize' - deform_conv_ext.deform_conv_forward(input, weight, - offset, output, ctx.bufs_[0], ctx.bufs_[1], weight.size(3), - weight.size(2), ctx.stride[1], ctx.stride[0], ctx.padding[1], - ctx.padding[0], ctx.dilation[1], ctx.dilation[0], ctx.groups, - ctx.deformable_groups, cur_im2col_step) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - input, offset, weight = ctx.saved_tensors - - grad_input = grad_offset = grad_weight = None - - if not grad_output.is_cuda: - raise NotImplementedError - else: - cur_im2col_step = min(ctx.im2col_step, input.shape[0]) - assert (input.shape[0] % cur_im2col_step) == 0, 'im2col step must divide batchsize' - - if ctx.needs_input_grad[0] or ctx.needs_input_grad[1]: - grad_input = torch.zeros_like(input) - grad_offset = torch.zeros_like(offset) - deform_conv_ext.deform_conv_backward_input(input, offset, grad_output, grad_input, - grad_offset, weight, ctx.bufs_[0], weight.size(3), - weight.size(2), ctx.stride[1], ctx.stride[0], ctx.padding[1], - ctx.padding[0], ctx.dilation[1], ctx.dilation[0], ctx.groups, - ctx.deformable_groups, cur_im2col_step) - - if ctx.needs_input_grad[2]: - grad_weight = torch.zeros_like(weight) - deform_conv_ext.deform_conv_backward_parameters(input, offset, grad_output, grad_weight, - ctx.bufs_[0], ctx.bufs_[1], weight.size(3), - weight.size(2), ctx.stride[1], ctx.stride[0], - ctx.padding[1], ctx.padding[0], ctx.dilation[1], - ctx.dilation[0], ctx.groups, ctx.deformable_groups, 1, - cur_im2col_step) - - return (grad_input, grad_offset, grad_weight, None, None, None, None, None) - - @staticmethod - def _output_size(input, weight, padding, dilation, stride): - channels = weight.size(0) - output_size = (input.size(0), channels) - for d in range(input.dim() - 2): - in_size = input.size(d + 2) - pad = padding[d] - kernel = dilation[d] * (weight.size(d + 2) - 1) + 1 - stride_ = stride[d] - output_size += ((in_size + (2 * pad) - kernel) // stride_ + 1, ) - if not all(map(lambda s: s > 0, output_size)): - raise ValueError('convolution input is too small (output would be ' f'{"x".join(map(str, 
output_size))})') - return output_size - - -class ModulatedDeformConvFunction(Function): - - @staticmethod - def forward(ctx, - input, - offset, - mask, - weight, - bias=None, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1): - ctx.stride = stride - ctx.padding = padding - ctx.dilation = dilation - ctx.groups = groups - ctx.deformable_groups = deformable_groups - ctx.with_bias = bias is not None - if not ctx.with_bias: - bias = input.new_empty(1) # fake tensor - if not input.is_cuda: - raise NotImplementedError - if weight.requires_grad or mask.requires_grad or offset.requires_grad \ - or input.requires_grad: - ctx.save_for_backward(input, offset, mask, weight, bias) - output = input.new_empty(ModulatedDeformConvFunction._infer_shape(ctx, input, weight)) - ctx._bufs = [input.new_empty(0), input.new_empty(0)] - deform_conv_ext.modulated_deform_conv_forward(input, weight, bias, ctx._bufs[0], offset, mask, output, - ctx._bufs[1], weight.shape[2], weight.shape[3], ctx.stride, - ctx.stride, ctx.padding, ctx.padding, ctx.dilation, ctx.dilation, - ctx.groups, ctx.deformable_groups, ctx.with_bias) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - if not grad_output.is_cuda: - raise NotImplementedError - input, offset, mask, weight, bias = ctx.saved_tensors - grad_input = torch.zeros_like(input) - grad_offset = torch.zeros_like(offset) - grad_mask = torch.zeros_like(mask) - grad_weight = torch.zeros_like(weight) - grad_bias = torch.zeros_like(bias) - deform_conv_ext.modulated_deform_conv_backward(input, weight, bias, ctx._bufs[0], offset, mask, ctx._bufs[1], - grad_input, grad_weight, grad_bias, grad_offset, grad_mask, - grad_output, weight.shape[2], weight.shape[3], ctx.stride, - ctx.stride, ctx.padding, ctx.padding, ctx.dilation, ctx.dilation, - ctx.groups, ctx.deformable_groups, ctx.with_bias) - if not ctx.with_bias: - grad_bias = None - - return (grad_input, grad_offset, grad_mask, grad_weight, grad_bias, None, None, None, None, None) - - @staticmethod - def _infer_shape(ctx, input, weight): - n = input.size(0) - channels_out = weight.size(0) - height, width = input.shape[2:4] - kernel_h, kernel_w = weight.shape[2:4] - height_out = (height + 2 * ctx.padding - (ctx.dilation * (kernel_h - 1) + 1)) // ctx.stride + 1 - width_out = (width + 2 * ctx.padding - (ctx.dilation * (kernel_w - 1) + 1)) // ctx.stride + 1 - return n, channels_out, height_out, width_out - - -deform_conv = DeformConvFunction.apply -modulated_deform_conv = ModulatedDeformConvFunction.apply - - -class DeformConv(nn.Module): - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1, - bias=False): - super(DeformConv, self).__init__() - - assert not bias - assert in_channels % groups == 0, \ - f'in_channels {in_channels} is not divisible by groups {groups}' - assert out_channels % groups == 0, \ - f'out_channels {out_channels} is not divisible ' \ - f'by groups {groups}' - - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = _pair(stride) - self.padding = _pair(padding) - self.dilation = _pair(dilation) - self.groups = groups - self.deformable_groups = deformable_groups - # enable compatibility with nn.Conv2d - self.transposed = False - self.output_padding = _single(0) - - self.weight = nn.Parameter(torch.Tensor(out_channels, in_channels // self.groups, *self.kernel_size)) - - self.reset_parameters() - - def 
reset_parameters(self): - n = self.in_channels - for k in self.kernel_size: - n *= k - stdv = 1. / math.sqrt(n) - self.weight.data.uniform_(-stdv, stdv) - - def forward(self, x, offset): - # To fix an assert error in deform_conv_cuda.cpp:128 - # input image is smaller than kernel - input_pad = (x.size(2) < self.kernel_size[0] or x.size(3) < self.kernel_size[1]) - if input_pad: - pad_h = max(self.kernel_size[0] - x.size(2), 0) - pad_w = max(self.kernel_size[1] - x.size(3), 0) - x = F.pad(x, (0, pad_w, 0, pad_h), 'constant', 0).contiguous() - offset = F.pad(offset, (0, pad_w, 0, pad_h), 'constant', 0).contiguous() - out = deform_conv(x, offset, self.weight, self.stride, self.padding, self.dilation, self.groups, - self.deformable_groups) - if input_pad: - out = out[:, :, :out.size(2) - pad_h, :out.size(3) - pad_w].contiguous() - return out - - -class DeformConvPack(DeformConv): - """A Deformable Conv Encapsulation that acts as normal Conv layers. - - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. - kernel_size (int or tuple[int]): Same as nn.Conv2d. - stride (int or tuple[int]): Same as nn.Conv2d. - padding (int or tuple[int]): Same as nn.Conv2d. - dilation (int or tuple[int]): Same as nn.Conv2d. - groups (int): Same as nn.Conv2d. - bias (bool or str): If specified as `auto`, it will be decided by the - norm_cfg. Bias will be set as True if norm_cfg is None, otherwise - False. - """ - - _version = 2 - - def __init__(self, *args, **kwargs): - super(DeformConvPack, self).__init__(*args, **kwargs) - - self.conv_offset = nn.Conv2d( - self.in_channels, - self.deformable_groups * 2 * self.kernel_size[0] * self.kernel_size[1], - kernel_size=self.kernel_size, - stride=_pair(self.stride), - padding=_pair(self.padding), - dilation=_pair(self.dilation), - bias=True) - self.init_offset() - - def init_offset(self): - self.conv_offset.weight.data.zero_() - self.conv_offset.bias.data.zero_() - - def forward(self, x): - offset = self.conv_offset(x) - return deform_conv(x, offset, self.weight, self.stride, self.padding, self.dilation, self.groups, - self.deformable_groups) - - -class ModulatedDeformConv(nn.Module): - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1, - bias=True): - super(ModulatedDeformConv, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = stride - self.padding = padding - self.dilation = dilation - self.groups = groups - self.deformable_groups = deformable_groups - self.with_bias = bias - # enable compatibility with nn.Conv2d - self.transposed = False - self.output_padding = _single(0) - - self.weight = nn.Parameter(torch.Tensor(out_channels, in_channels // groups, *self.kernel_size)) - if bias: - self.bias = nn.Parameter(torch.Tensor(out_channels)) - else: - self.register_parameter('bias', None) - self.init_weights() - - def init_weights(self): - n = self.in_channels - for k in self.kernel_size: - n *= k - stdv = 1. / math.sqrt(n) - self.weight.data.uniform_(-stdv, stdv) - if self.bias is not None: - self.bias.data.zero_() - - def forward(self, x, offset, mask): - return modulated_deform_conv(x, offset, mask, self.weight, self.bias, self.stride, self.padding, self.dilation, - self.groups, self.deformable_groups) - - -class ModulatedDeformConvPack(ModulatedDeformConv): - """A ModulatedDeformable Conv Encapsulation that acts as normal Conv layers. 
- - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. - kernel_size (int or tuple[int]): Same as nn.Conv2d. - stride (int or tuple[int]): Same as nn.Conv2d. - padding (int or tuple[int]): Same as nn.Conv2d. - dilation (int or tuple[int]): Same as nn.Conv2d. - groups (int): Same as nn.Conv2d. - bias (bool or str): If specified as `auto`, it will be decided by the - norm_cfg. Bias will be set as True if norm_cfg is None, otherwise - False. - """ - - _version = 2 - - def __init__(self, *args, **kwargs): - super(ModulatedDeformConvPack, self).__init__(*args, **kwargs) - - self.conv_offset = nn.Conv2d( - self.in_channels, - self.deformable_groups * 3 * self.kernel_size[0] * self.kernel_size[1], - kernel_size=self.kernel_size, - stride=_pair(self.stride), - padding=_pair(self.padding), - dilation=_pair(self.dilation), - bias=True) - self.init_weights() - - def init_weights(self): - super(ModulatedDeformConvPack, self).init_weights() - if hasattr(self, 'conv_offset'): - self.conv_offset.weight.data.zero_() - self.conv_offset.bias.data.zero_() - - def forward(self, x): - out = self.conv_offset(x) - o1, o2, mask = torch.chunk(out, 3, dim=1) - offset = torch.cat((o1, o2), dim=1) - mask = torch.sigmoid(mask) - return modulated_deform_conv(x, offset, mask, self.weight, self.bias, self.stride, self.padding, self.dilation, - self.groups, self.deformable_groups) diff --git a/spaces/ma-xu/LIVE/thrust/thrust/detail/config/host_device.h b/spaces/ma-xu/LIVE/thrust/thrust/detail/config/host_device.h deleted file mode 100644 index 5540f91260d807bfb2ef06064767aeaccea2fc1a..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/detail/config/host_device.h +++ /dev/null @@ -1,44 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file host_device.h - * \brief Defines __host__ and __device__ - */ - -#pragma once - -#include - -// since nvcc defines __host__ and __device__ for us, -// and only nvcc knows what to do with __host__ and __device__, -// define them to be the empty string for other compilers - -#if THRUST_DEVICE_COMPILER != THRUST_DEVICE_COMPILER_NVCC - -// since __host__ & __device__ might have already be defined, only -// #define them if not defined already -// XXX this will break if the client does #include later - -#ifndef __host__ -#define __host__ -#endif // __host__ - -#ifndef __device__ -#define __device__ -#endif // __device__ - -#endif - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/for_each.h b/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/for_each.h deleted file mode 100644 index dfe5329b84ed273e60dacab576a559e351d26c42..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/tbb/detail/for_each.h +++ /dev/null @@ -1,54 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include <thrust/detail/config.h> -#include <thrust/system/tbb/detail/execution_policy.h> - -namespace thrust -{ -namespace system -{ -namespace tbb -{ -namespace detail -{ - -template<typename DerivedPolicy, typename RandomAccessIterator, typename UnaryFunction> - RandomAccessIterator for_each(execution_policy<DerivedPolicy> &exec, - RandomAccessIterator first, - RandomAccessIterator last, - UnaryFunction f); - -template<typename DerivedPolicy, typename RandomAccessIterator, typename Size, typename UnaryFunction> - RandomAccessIterator for_each_n(execution_policy<DerivedPolicy> &exec, - RandomAccessIterator first, - Size n, - UnaryFunction f); - -} // end namespace detail -} // end namespace tbb -} // end namespace system -} // end namespace thrust - -#include <thrust/system/tbb/detail/for_each.inl> - diff --git a/spaces/maminghui/ChatGPT/overwrites.py b/spaces/maminghui/ChatGPT/overwrites.py deleted file mode 100644 index 436fcf46b5807ca045e77ac762039ba0ffc16f6d..0000000000000000000000000000000000000000 --- a/spaces/maminghui/ChatGPT/overwrites.py +++ /dev/null @@ -1,38 +0,0 @@ -from __future__ import annotations -import logging -import re - -from llama_index import Prompt -from typing import List, Tuple -import mdtex2html - -from presets import * -from llama_func import * - - -def compact_text_chunks(self, prompt: Prompt, text_chunks: List[str]) -> List[str]: - logging.debug("Compacting text chunks...🚀🚀🚀") - combined_str = [c.strip() for c in text_chunks if c.strip()] - combined_str = [f"[{index+1}] {c}" for index, c in enumerate(combined_str)] - combined_str = "\n\n".join(combined_str) - # resplit based on self.max_chunk_overlap - text_splitter = self.get_text_splitter_given_prompt(prompt, 1, padding=1) - return text_splitter.split_text(combined_str) - - -def postprocess( - self, y: List[Tuple[str | None, str | None]] -) -> List[Tuple[str | None, str | None]]: - """ - Parameters: - y: List of tuples representing the message and response pairs. Each message and response should be a string, which may be in Markdown format. - Returns: - List of tuples representing the message and response. Each message and response will be a string of HTML. - """ - if y is None or y == []: - return [] - tag_regex = re.compile(r"^<\w+>[^<]+") - if tag_regex.search(y[-1][1]): - y[-1] = (y[-1][0].replace("\n", "<br/>"), y[-1][1]) - else: - y[-1] = (y[-1][0].replace("\n", "<br/>
      "), convert_mdtext(y[-1][1])) - return y diff --git a/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/ops/__init__.py b/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/ops/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/marioboy/neil-breen/vocoder_train.py b/spaces/marioboy/neil-breen/vocoder_train.py deleted file mode 100644 index d712ffa3e6c92a091aa18dc90f0027f46940e400..0000000000000000000000000000000000000000 --- a/spaces/marioboy/neil-breen/vocoder_train.py +++ /dev/null @@ -1,56 +0,0 @@ -from utils.argutils import print_args -from vocoder.train import train -from pathlib import Path -import argparse - - -if __name__ == "__main__": - parser = argparse.ArgumentParser( - description="Trains the vocoder from the synthesizer audios and the GTA synthesized mels, " - "or ground truth mels.", - formatter_class=argparse.ArgumentDefaultsHelpFormatter - ) - - parser.add_argument("run_id", type=str, help= \ - "Name for this model instance. If a model state from the same run ID was previously " - "saved, the training will restart from there. Pass -f to overwrite saved states and " - "restart from scratch.") - parser.add_argument("datasets_root", type=str, help= \ - "Path to the directory containing your SV2TTS directory. Specifying --syn_dir or --voc_dir " - "will take priority over this argument.") - parser.add_argument("--syn_dir", type=str, default=argparse.SUPPRESS, help= \ - "Path to the synthesizer directory that contains the ground truth mel spectrograms, " - "the wavs and the embeds. Defaults to /SV2TTS/synthesizer/.") - parser.add_argument("--voc_dir", type=str, default=argparse.SUPPRESS, help= \ - "Path to the vocoder directory that contains the GTA synthesized mel spectrograms. " - "Defaults to /SV2TTS/vocoder/. Unused if --ground_truth is passed.") - parser.add_argument("-m", "--models_dir", type=str, default="vocoder/saved_models/", help=\ - "Path to the directory that will contain the saved model weights, as well as backups " - "of those weights and wavs generated during training.") - parser.add_argument("-g", "--ground_truth", action="store_true", help= \ - "Train on ground truth spectrograms (/SV2TTS/synthesizer/mels).") - parser.add_argument("-s", "--save_every", type=int, default=1000, help= \ - "Number of steps between updates of the model on the disk. Set to 0 to never save the " - "model.") - parser.add_argument("-b", "--backup_every", type=int, default=25000, help= \ - "Number of steps between backups of the model. 
Set to 0 to never make backups of the " - "model.") - parser.add_argument("-f", "--force_restart", action="store_true", help= \ - "Do not load any saved model and restart from scratch.") - args = parser.parse_args() - - # Process the arguments - if not hasattr(args, "syn_dir"): - args.syn_dir = Path(args.datasets_root, "SV2TTS", "synthesizer") - args.syn_dir = Path(args.syn_dir) - if not hasattr(args, "voc_dir"): - args.voc_dir = Path(args.datasets_root, "SV2TTS", "vocoder") - args.voc_dir = Path(args.voc_dir) - del args.datasets_root - args.models_dir = Path(args.models_dir) - args.models_dir.mkdir(exist_ok=True) - - # Run the training - print_args(args, parser) - train(**vars(args)) - \ No newline at end of file diff --git a/spaces/matthoffner/starchat-ui/components/Settings/Key.tsx b/spaces/matthoffner/starchat-ui/components/Settings/Key.tsx deleted file mode 100644 index fe056e9d9e0d0827d44b1cf82bf2c0dac1deccae..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/starchat-ui/components/Settings/Key.tsx +++ /dev/null @@ -1,15 +0,0 @@ -import { IconCheck, IconKey, IconX } from '@tabler/icons-react'; -import { FC, KeyboardEvent, useEffect, useRef, useState } from 'react'; - -import { useTranslation } from 'next-i18next'; - -import { SidebarButton } from '../Sidebar/SidebarButton'; - -interface Props { - apiKey: string; - onApiKeyChange: (apiKey: string) => void; -} - -export const Key: FC = ({ apiKey, onApiKeyChange }) => { - return null; -}; diff --git a/spaces/meaqua33/White-box-Cartoonization/app.py b/spaces/meaqua33/White-box-Cartoonization/app.py deleted file mode 100644 index c55ced56bd87a85f59d1c8ef84b7eca87422720f..0000000000000000000000000000000000000000 --- a/spaces/meaqua33/White-box-Cartoonization/app.py +++ /dev/null @@ -1,108 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations -import argparse -import functools -import os -import pathlib -import sys -from typing import Callable -import uuid - -import gradio as gr -import huggingface_hub -import numpy as np -import PIL.Image - -from io import BytesIO -from wbc.cartoonize import Cartoonize - -ORIGINAL_REPO_URL = 'https://github.com/SystemErrorWang/White-box-Cartoonization' -TITLE = 'SystemErrorWang/White-box-Cartoonization' -DESCRIPTION = f"""This is a demo for {ORIGINAL_REPO_URL}. 
- -""" -ARTICLE = """ - -""" - -SAFEHASH = [x for x in "0123456789-abcdefghijklmnopqrstuvwxyz_ABCDEFGHIJKLMNOPQRSTUVWXYZ"] -def compress_UUID(): - ''' - Per http://www.ietf.org/rfc/rfc1738.txt, encode a UUID over an enlarged character set to generate a short string. - Alphabet: [0-9a-zA-Z\-_], 64 characters in total. - Length: (32-2)/3*2 = 20. - Note: with 2^120 possible values, collisions are practically impossible even if everyone on Earth used it for 100 years. - :return: String - ''' - row = str(uuid.uuid4()).replace('-', '') - safe_code = '' - for i in range(10): - enbin = "%012d" % int(bin(int(row[i * 3] + row[i * 3 + 1] + row[i * 3 + 2], 16))[2:], 10) - safe_code += (SAFEHASH[int(enbin[0:6], 2)] + SAFEHASH[int(enbin[6:12], 2)]) - safe_code = safe_code.replace('-', '') - return safe_code - - -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser() - parser.add_argument('--device', type=str, default='cpu') - parser.add_argument('--theme', type=str) - parser.add_argument('--live', action='store_true') - parser.add_argument('--share', action='store_true') - parser.add_argument('--port', type=int) - parser.add_argument('--disable-queue', - dest='enable_queue', - action='store_false') - parser.add_argument('--allow-flagging', type=str, default='never') - parser.add_argument('--allow-screenshot', action='store_true') - return parser.parse_args() - -def run( - image, - cartoonize: Cartoonize -) -> PIL.Image.Image: - - out_path = compress_UUID()+'.png' - cartoonize.run_sigle(image.name, out_path) - - return PIL.Image.open(out_path) - - -def main(): - gr.close_all() - - args = parse_args() - - cartoonize = Cartoonize(os.path.join(os.path.dirname(os.path.abspath(__file__)),'wbc/saved_models/')) - - func = functools.partial(run, cartoonize=cartoonize) - func = functools.update_wrapper(func, run) - - gr.Interface( - func, - [ - gr.inputs.Image(type='file', label='Input Image'), - ], - [ - gr.outputs.Image( - type='pil', - label='Result'), - ], - # examples=examples, - theme=args.theme, - title=TITLE, - description=DESCRIPTION, - article=ARTICLE, - allow_screenshot=args.allow_screenshot, - allow_flagging=args.allow_flagging, - live=args.live, - ).launch( - enable_queue=args.enable_queue, - server_port=args.port, - share=args.share, - ) - - -if __name__ == '__main__': - main() diff --git a/spaces/merve/KerasBERTv1/README.md b/spaces/merve/KerasBERTv1/README.md deleted file mode 100644 index 6f78e92386f9e0aa355a0b41839a9724f91ee79e..0000000000000000000000000000000000000000 --- a/spaces/merve/KerasBERTv1/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: KerasBERTv1 -emoji: ❤️ -colorFrom: green -colorTo: pink -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/merve/fill-in-the-blank/source/_posts/2020-09-27-diversity-metrics.md b/spaces/merve/fill-in-the-blank/source/_posts/2020-09-27-diversity-metrics.md deleted file mode 100644 index 4c84423fe9a6f8566a0b7182bc378feec97d9654..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/source/_posts/2020-09-27-diversity-metrics.md +++ /dev/null @@ -1,100 +0,0 @@ ---- -template: post.html -title: Measuring Diversity -titlex: Diversity and Inclusion Metrics -summary: Search results that reflect historic inequities can amplify stereotypes and perpetuate under-representation. Carefully measuring diversity in data sets can help. -shareimg: https://pair.withgoogle.com/explorables/images/measuring-diversity.png -permalink: /measuring-diversity/ -date: 2021-03-01 ---- - - - -Search, ranking and recommendation systems can help find useful documents in large datasets. However, these datasets reflect the biases of the society in which they were created and the systems risk re-entrenching those biases. For example, if someone who is not a white man searches for "CEO pictures" and sees a [page of white men](https://www.nytimes.com/interactive/2018/04/24/upshot/women-and-men-named-john.html), they may feel that only white men can be CEOs, further perpetuating lack of representation at companies' executive levels. - -Using the careful quantification outlined in a recent paper, [Diversity and Inclusion Metrics in Subset Selection](https://arxiv.org/pdf/2002.03256.pdf), we can quantify biases and push these systems to return a wider range of results. - -The mathematics of all this is a little easier to follow with abstract shapes. Let's take a look at some of them: - -
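The demos are easier to reason about with a concrete stand-in. Below is a minimal Python model of such a universe; every name and proportion in this sketch is our own illustration, not code from the paper or from the article's interactive demos.

```python
import itertools

# A toy universe: four copies of every (green, dot, small) combination,
# so each attribute is present in exactly half of the 32 shapes.
universe = [
    {'green': g, 'dot': d, 'small': s}
    for _ in range(4)
    for g, d, s in itertools.product([True, False], repeat=3)
]

def proportion(shapes, attr):
    """Fraction of `shapes` whose attribute `attr` is set."""
    return sum(s[attr] for s in shapes) / len(shapes)
```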
      - -Suppose we want to return about 30% green boxes to reflect the distribution of some larger universe of shapes. Try clicking on the shapes below to select some of them — can you find a better subset to return? - -
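At these toy sizes, the selection game can simply be brute-forced. A sketch building on the model above (the helper `best_subset` is ours, not the paper's):

```python
from itertools import combinations

def best_subset(shapes, k, attr, target):
    """Exhaustively pick the size-k subset whose share of `attr` is
    closest to `target` (exponential in general; toy sizes only)."""
    return min(combinations(shapes, k),
               key=lambda sub: abs(proportion(sub, attr) - target))

picked = best_subset(universe[:12], k=6, attr='green', target=0.30)
print(round(proportion(picked, 'green'), 2))  # 0.33, as close to 30% as 6 items allow
```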
      - -Another diversity metric we care about is the percentage of dots... how close to 35% dots can you get? - -
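The same helper answers this version of the question; only the attribute and target change:

```python
picked = best_subset(universe[:12], k=6, attr='dot', target=0.35)
print(round(proportion(picked, 'dot'), 2))  # 0.33 again: 2 dots out of 6
```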
      - -If we can only return a single subset, how should we consider multiple diversity metrics? Sometimes it isn't possible to reduce the difference of every metric to zero. One natural approach: find the selection with the **lowest mean difference** across all the metrics to get as close as possible to all the targets. - -In other circumstances, like picking a panel of speakers, avoiding badly representing any single category might be more important. This can be done by finding the subset with the **lowest max difference**. Try minimizing both below: - -
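In code, the two aggregation strategies differ only in the final reduction. Continuing the sketch (the green and dot targets come from the text above; the 50% small target is our assumption):

```python
targets = {'green': 0.30, 'dot': 0.35, 'small': 0.50}

def mean_difference(subset, targets):
    diffs = [abs(proportion(subset, a) - t) for a, t in targets.items()]
    return sum(diffs) / len(diffs)

def max_difference(subset, targets):
    return max(abs(proportion(subset, a) - t) for a, t in targets.items())

# Minimizing either objective over all size-6 subsets of a 16-shape universe
# can land on different subsets, as the demo shows.
best_by_mean = min(combinations(universe[:16], 6),
                   key=lambda s: mean_difference(s, targets))
best_by_max = min(combinations(universe[:16], 6),
                  key=lambda s: max_difference(s, targets))
```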
      - -Notice that minimizing the mean results in a different subset than minimizing the max; how else might using one over the other change the results? - -### Ranking Measures - -We can pull out more detail by showing how the mean difference and maximum difference rank lots of sets. Below, there are 20 sets of 10 shapes sorted by the two measures. Try adjusting the target slider on the left to see how the rankings change; each set's percentage of green, dots and small shapes are shown in the small histograms. - -
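Ranking many candidate sets is the same computation applied per set. Continuing the sketch, with randomly drawn sets standing in for the 20 in the demo:

```python
import random

rng = random.Random(0)
candidate_sets = [rng.sample(universe, 10) for _ in range(20)]

rank_by_mean = sorted(candidate_sets, key=lambda s: mean_difference(s, targets))
rank_by_max = sorted(candidate_sets, key=lambda s: max_difference(s, targets))
# The two orderings need not agree, especially away from the top.
```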
      - -At the extremes, the choice of measure can have a big impact: if we want to try and return all green results, we can shift the green target up to 100%. With this target, the minimum difference basically sorts the sets by the number of green items and uses the other targets as a tiebreaker. In contrast, sorting by the mean difference balances the green target more with the dot and small targets. - -
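Pushing a target to an extreme makes the contrast concrete (again continuing the sketch above):

```python
extreme = dict(targets, green=1.0)

top_by_max = min(candidate_sets, key=lambda s: max_difference(s, extreme))
top_by_mean = min(candidate_sets, key=lambda s: mean_difference(s, extreme))

# Under the max objective the green shortfall dominates every other metric,
# so the winner is essentially the greenest set; the mean objective still
# trades green off against the dot and small targets.
print(proportion(top_by_max, 'green'), proportion(top_by_mean, 'green'))
```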
- -Beyond mean and max differences, there are more ways to combine diversity metrics, like taking the cross of two metrics to account for [intersectionality](https://en.wikipedia.org/wiki/Intersectionality). The absolute value of the difference in target and actual percentages can also be quantified in other ways — you might want to penalize undershooting more than overshooting, for example (see the sketch below). It's important to keep in mind what exactly you're trying to maximize and the dataset that you're operating on. - -### Which Measure is Best? - -In a vacuum, all of these ranking methods are defensible. Picking one requires knowledge of the dataset and broader societal context. - -For example, the doctors on the left have more variance along the shirt color attribute, but they're less diverse by gender than the doctors on the right. With the shirt color and gender targets we've picked, the two subsets have the same mean and max differences. However, in most applications, it's more important to have a representative sample of socially relevant characteristics, like gender, rather than something less salient, like clothing color. - 
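One hedged sketch of the undershoot-averse idea mentioned above (the 2x weight is arbitrary):

```python
def asymmetric_difference(subset, targets, under_weight=2.0):
    """Mean difference where falling short of a target costs `under_weight`
    times as much as exceeding it (the weighting is illustrative)."""
    total = 0.0
    for attr, t in targets.items():
        gap = proportion(subset, attr) - t
        total += -gap * under_weight if gap < 0 else gap
    return total / len(targets)
```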
      - -Just selecting a diverse sample isn't sufficient either. [Diversity and Inclusion Metrics in Subset Selection](https://arxiv.org/pdf/2002.03256.pdf) introduces a way of measuring "inclusion" - how well does the searcher feel represented in the results? - -Below, we have gender diversity, without inclusion for women, in the “construction worker” image domain. Masculine-presenting individuals are shown in realistic, modern construction worker situations, while feminine-presenting individuals and other gender presentations are depicted as historic nostalgia, toys, clipart, or passive. - -
- -The context of the query and the searcher also plays into the quality of search results. A search for "work clothing" that shows a mixed palette of colors for men's clothing and only pink women's clothing might make the searcher feel that women need to appear stereotypically feminine in a professional setting. But the same set of women's clothes might be appropriate to show for a "pink women work clothes" search or if the searcher had previously expressed a preference for pink. - -We saw how a small switch from mean to max made a huge difference in what abstract shapes are returned – and how things can get even more complex when socially salient characteristics are layered in. Defaults and small decisions can encode our priorities and values; intentionally thinking about how diversity and inclusion are being measured and which characteristics are emphasized is a step towards designing more equitable systems. - -### More Reading - -The [Diversity and Inclusion Metrics](https://arxiv.org/pdf/2002.03256.pdf) paper has a [Colab](https://colab.research.google.com/github/PAIR-code/ai-explorables/blob/master/source/measuring-diversity/diversity-and-inclusion.ipynb) with a detailed description of the metrics, additional visualizations and a reference Python implementation. - -The difficulties of [measuring fairness](https://pair.withgoogle.com/explorables/measuring-fairness/) in general have been well studied; subset selection is still an active area of research. [Fairness of Exposure in Rankings](https://www.cs.cornell.edu/~tj/publications/singh_joachims_18a.pdf) proposes a ranking algorithm that incorporates fairness constraints. [Toward creating a fairer ranking in search engine results](https://www.ilab.cs.rutgers.edu/~rg522/publication/gao-2020-ipm/gao-2020-ipm.pdf) measures diversity bias in actual search results. - -Inferring user preferences is also tricky; you can check out ways to design for user feedback and control over queries in the [People + AI Guidebook](https://pair.withgoogle.com/chapter/feedback-controls/). - -### Credits - -Adam Pearce, Dylan Baker, Ellen Jiang, Meg Mitchell\* and Timnit Gebru\* // March 2021 - -*Work done while at Google - -Thanks to Alex Hanna, Carey Radebaugh, Emily Denton, Fernanda Viégas, James Wexler, Jess Holbrook, Ludovic Peran, Martin Wattenberg, Michael Terry, Yannick Assogba and Zan Armstrong for their help with this piece. - - -
      - - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/merve/fill-in-the-blank/source/fill-in-the-blank/style.css b/spaces/merve/fill-in-the-blank/source/fill-in-the-blank/style.css deleted file mode 100644 index 726984190483443c3da0905eae281514eccc7487..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/source/fill-in-the-blank/style.css +++ /dev/null @@ -1,737 +0,0 @@ -@media (max-width: 1100px){ - body{ - /*overflow-x: hidden;*/ - } -} - - -.tooltip { - top: -1000px; - position: absolute; - padding: 10px; - background: rgba(255, 255, 255, .8); - border: 0px solid lightgray; - - width: 300px; - font-size: 14px; - line-height: 1.4em; - background: rgba(0, 0, 0, .8); - color: #fff; - pointer-events: all !important; -} -.tooltip a{ - color: #fff !important; -} -.tooltip:hover{ -/* opacity: 1; - pointer-events: all !important; -*/} - -.tooltip-hidden{ - opacity: 0; - transition: all .3s; - transition-delay: .2s; - pointer-events: none !important; -} - -@media (max-width: 590px){ - .footend{ - margin-left: 0px; - width: 10px; - } - - - div.tooltip{ - transition: all 0s !important; - transition-delay: 0s !important; - - display: none; - position: fixed; - bottom: -1px; - width: calc(100%); - left: -1px !important; - right: -1px !important; - top: auto !important; - width: auto !important; - } -} - -svg{ - overflow: visible; -} - -.domain{ - display: none; -} - -.tick{ - display: none; -} - -.bg-tick{ - stroke: #eee; -} - -text{ - pointer-events: none; - /*fill: #fff;*/ - text-shadow: 0 1px 0 #fff, 1px 0 0 #fff, 0 -1px 0 #fff, -1px 0 0 #fff; -} - -.pair{ - width: 820px; - /*height: 550px;*/ - margin: 0px auto; - margin-top: 25px !important -} - -.nurse-name-zari-cda{ - margin-bottom: 35px; -} - -.pair > div{ - display: inline-block; - vertical-align: top; -} - -.pair .graph{ - width: 500px; -} - -.pair .options{ - width: 250px; - padding-right: 20px; -} - -.pair .warning{ - width: 250px; - /*border: 1px solid orange;*/ - /*background: #fff9e4;*/ - /*padding: 10px;*/ - margin-top: 15px; - padding-left: 0px; - font-size: 14px; - line-height: 1.25em; - opacity: 0; - transition: all .2s; -} - -.pair .reset{ - width: 58px; - /*border: 1px solid orange;*/ - /*background: #fff9e4;*/ - /*padding: 10px;*/ - margin-top: 15px; - font-size: 14px; - line-height: 1.25em; - opacity: 0; - transition: opacity .2s; - cursor: pointer; - user-select: none; - outline: 1px solid #ccc; - padding: 5px; - -} -.pair .reset span{ - position: relative; - top: -1px; - padding-right: 4px; - padding-left: 1px; - /*font-size: ;*/ -} - -.pair .reset:hover{ - background: #eee; - color: #000; - outline: 1px solid #000; -} - -.options > *{ - margin-right: 10px; -} - -.options b{ - display: block; - margin-bottom: 5px; - margin-top: 10px; -} - - - - -.flex-row{ - width: 100%; - display: flex; - justify-content: space-between; - column-gap: 10px -} - -.flex-row > *{ - flex-grow: 1; - margin-right: 0px !important; -} - -.options > *{ - margin-right: 0px; -} - -.pair textarea{ - width: 100%; -} - -.flex-row-textarea{ - display: block; -} - -@media (max-width: 820px){ - .pair{ - width: 100%; - height: auto; - max-width: 500px; - margin: 0px auto; - } - - .flex-row{ - margin-bottom: -10px; - } - - .flex-row-textarea{ - display: flex; - margin-bottom: 10px; - } - - - .pair .options{ - width: auto; - padding-right: 0px; - } - - .warning{ - display: none !important; - } - - .reset{ - display: none !important; - } - - .pair .graph{ - width: 100%; - } - - .annotations{ - display: none; - } 
-} - - - -.pair.difference{ - width: 1000px; - margin-left: 0px; -} - -.pair.difference .pair-container{ -} - -.pair .options.wide{ - width: 100%; - margin-bottom: 20px; -} -.pair .options.wide > div{ - display: inline-block; -} - -.options.wide .option-type .button{ - width: 78px !important; -} - -.options.wide .option-model .button{ - width: 40px !important; -} - -.options.wide .update.button{ - width: 80px !important; -} - -textarea{ - font-family: 'Roboto', Helvetica, sans-serif; - font-weight: 300; - line-height: 1.55em; - font-size: 16px; - font-weight: bold; - border: 1px #ccc solid; - resize: none; -} - -.button.update{ - /*height: 20px;*/ - /*position: relative;*/ - /*top: -30px;*/ - /*margin-bottom: -10px;*/ - /*vertical-align: center;*/ - margin-top: 25px; - width: 252px; - text-align: center; - font-weight: 500; -} -.button{ - display: inline-block; - outline: 1px solid #ccc; - padding: 5px; - margin-top: 10px; - margin-right: 10px; - position: relative; - top: -12px; - cursor: pointer; - user-select: none; -} - -@media (hover: hover) and (pointer: fine) { - .button:hover{ - outline-color: #000; - } -} - -@media screen and (-webkit-min-device-pixel-ratio:0) and @media (max-width: 900px) { - select, - textarea, - input { - font-size: 16px !important; - } - - textarea{ - height: 80px !important; - } -} - - -.button.active{ - background: #eee; - color: #000; - /*font-weight: 500;*/ -} - - -.button.loading i{ - opacity: 1; -} - -.button.loading{ - pointer-events: none; - /*opacity: .6;*/ -} -.p-button{ - /*position: relative;*/ - /*top: -3px;*/ - /*line-height: 10px;*/ - /*line-height: */ - display: inline-block; - margin-right: 15px; -} -.p-button-link{ - text-decoration: underline; - cursor: pointer; - padding-right: 10px; -} -.interesting-pair-alts .p-button-link{ - display: block; - text-decoration: none; -} -.interesting-pair-alts .p-button-link div{ - padding-left: 10px; - padding-right: 10px; - padding-top: 5px; - padding-bottom: 5px; - outline: 1px solid #ccc; - margin-top: 5px; - margin-bottom: 5px; - margin-left: 10px; - -} -.difference-difference-alts .p-button-link:hover div{ - outline: 1px solid #000; -} - -.difference-difference-alts .p-button-link{ - display: block; - text-decoration: none; -} -.difference-difference-alts .p-button-link div{ - padding-left: 10px; - padding-right: 10px; - padding-top: 5px; - padding-bottom: 5px; - outline: 1px solid #ccc; - margin-top: 5px; - margin-bottom: 5px; - margin-left: 10px; - -} -.difference-difference-alts .p-button-link:hover div{ - outline: 1px solid #000; -} - - -.wide .flex-row{ - width: 220px; -} - -.wide > *{ - margin-right: 40px; -} - -.wide textarea{ - position: relative; - top: 12px; -} - - -@media (max-width: 1100px){ - .pair-container-overflow{ - overflow-x: scroll; - width: 100% !important; - } - - .pair.difference{ - width: auto; - max-width: 2000px; - } - - .pair.difference .options{ - margin: 0px auto; - margin-left: max(50vh - 500px, 0px); - width: min(500px, 100%); - } - -} - -.pair-container{ - width: 1000px; -} - - - - - -.checkbox{ - display: inline-block; - position: relative; - top: -10px; - margin-left: 10px; - -} - -circle:hover{ - stroke: blue; -} - - - -.hover text{ - fill: #000; - font-weight: 300; - /*stroke-width: 2px;*/ - /*text-shadow: 0 2px 0 #000, 2px 0 0 #000, 0 -2px 0 #000, -2px 0 0 #000;*/ -} - -#graph > div{ - display: inline-block; -} - -text.tiny{ - font-size: 9px; - font-family: monospace; - /*fill: #555;*/ -} - - - - - -svg{ - overflow: visible; -} - - -input{ - font-family: 
monospace; - width: 900px; - overflow: hidden; - background-color: rgba(0,0,0,0); - border: 0px; -} - -textarea{ - font-family: monospace; - font-size: 14px; -} - -/* Hide scrollbar for Chrome, Safari and Opera */ -.top-sents::-webkit-scrollbar { - /*display: none;*/ -} - -/* Hide scrollbar for IE, Edge and Firefox */ -.top-sents { - -ms-overflow-style: none; /* IE and Edge */ - scrollbar-width: none; /* Firefox */ -} - -.sent{ - margin-top: -15px; -} - - - -.post-summary{ - display: none; -} - - -.token-container{ - text-align: center; - line-height: 2em; -} - -.token{ - display: inline-block; - padding: 5px; - margin: 10px; - margin-top: 0px; - margin-bottom: 0px; - font-size: 20px; - font-family: monospace; - outline: 1px solid #ccc; - color: #000; - cursor: pointer; - background: #fff; - border: 0px; -} - -.token:hover, .token.active{ - outline: 1px solid #000; -} - - -.xy-only, .rotate-only{ - opacity: 0; - transition: all .2s; -} - -.annotations{ - transition: opacity .2s; -} - -.is-xy .xy-only{ - opacity: 1 !important; -} -.is-rotate .rotate-only{ - opacity: 1 !important; -} - -.hamlet{ - min-height: 304px; - margin-bottom: 20px; -} - -.hamlet-edit .button{ - color: #ccc; - pointer-events: none; -} -.hamlet-edit.changed .button{ - color: #000; - pointer-events: all; -} - -@media (max-width: 500px){ - .hamlet-edit .button{ - display: block; - text-align: center; - top: 0px !important; - margin: 0px auto !important; - margin-top: 5px !important; - width: 100%; - } -} - - - -.pair .update{ - color: #ccc; - pointer-events: none; -} -.pair.changed .update{ - color: #000; - pointer-events: all; -} - - - - -.difference-difference-list{ - display: none; -} - -.pair-container{ - width: 900px; -} -.pair-container > div{ - display: inline-block; -} - - -.difference-difference textarea{ - height: 52px; -} - -.not-is-color-by .y-axis-label text, .not-is-color-by .sent-1 text, .not-is-color-by .x-axis-label{ - fill: #444 !important; -} - -.is-color-by .y-axis-label text, .is-color-by .sent-1 text, .is-color-by .x-axis-label{ - font-weight: 400; - /*text-decoration: underline;*/ -} - - - -.time-token.active path{ - stroke: #f0f; - opacity: 1; -} -.time-token.active text{ - fill: #f0f !important; - opacity: 1 !important; - font-size: 14px; -} - - -.token{ - -} - -.gender-over-time{ - width: 1100px; - margin: 0px auto; - font-size: 14px; - margin-left: -91px; -} - -.gender-over-time .tick{ - display: block; -} - -.gender-over-time .axis{ - opacity: .7; -} - -.gender-over-time .sentence{ - /*position: relative;*/ - width: 32%; -} - -.gender-over-time .sentence .sentence-title{ - right: 42px; - position: relative; - text-align: right; - font-family: monospace; - -} -.gender-over-time .sentence.is-bear .sentence-title{ - /*text-align: center;*/ - right: 115px; -} - -.gender-over-time .g-caption{ - line-height: 18px; - margin-bottom: 30px; - margin-top: 5px; - width: 290px; - font-size: 13px; - left: 365px; - position: relative; -} - -@media (max-width: 1100px){ - .gender-over-time{ - width: 100%; - margin-left: 0px; - max-width: 500px; - margin: 0px auto; - } - - .gender-over-time .sentence{ - width: 100% !important; - margin-bottom: 20px; - } - - .gender-over-time .g-caption{ - left: 0px; - width: 100%; - } -} - -.time-token text{ - font-family: monospace; - pointer-events: all !important; - cursor: default; -} - - - -img[src*="img/wiki-years.png"] { - width: 300px; -} - - -#more-explorables{ - margin-top: 100px; -} - - - - -/*html{ - font-smooth: never; - -webkit-font-smoothing: none; - background: 
transparent; -} - -path{ - display: none; -}*/ - - -button { - display: inline-block; - border: none; - margin: 0; - text-decoration: none; - background: #fff; - color: #ffffff; - font-size: 1em; - cursor: pointer; - text-align: center; - -webkit-appearance: none; - -moz-appearance: none; - font-family : inherit; - -} - -button:active { - transform: scale(0.99); -} - - -info{ - font-weight: 300; - font-size: 12px; - line-height: 0em; - position: relative; - left: 7px; - top: -1px; - cursor: default; -} -info:hover{ - font-weight: 600; -} \ No newline at end of file diff --git a/spaces/merve/hidden-bias/server-side/fill-in-the-blank/py/model_bert_large.py b/spaces/merve/hidden-bias/server-side/fill-in-the-blank/py/model_bert_large.py deleted file mode 100644 index 6ddb175a7158944305a2a8d9f99948ef41f7ec1a..0000000000000000000000000000000000000000 --- a/spaces/merve/hidden-bias/server-side/fill-in-the-blank/py/model_bert_large.py +++ /dev/null @@ -1,124 +0,0 @@ -import torch -import json -import numpy as np - -from transformers import (BertForMaskedLM, BertTokenizer) - -modelpath = 'bert-large-uncased-whole-word-masking/' -tokenizer = BertTokenizer.from_pretrained(modelpath) -model = BertForMaskedLM.from_pretrained(modelpath) -model.eval() - -id_of_mask = 103 - -def get_embeddings(sentence): - with torch.no_grad(): - processed_sentence = '' + sentence + '' - tokenized = tokenizer.encode(processed_sentence) - input_ids = torch.tensor(tokenized).unsqueeze(0) # Batch size 1 - outputs = model(input_ids) - index_of_mask = tokenized.index(id_of_mask) - - # batch, tokens, vocab_size - prediction_scores = outputs[0] - - return prediction_scores[0][index_of_mask].cpu().numpy().tolist() - - -def get_embedding_group(tokens): - print(tokens) - - mutated = [] - for i, v in enumerate(tokens): - array = tokens.copy() - array[i] = id_of_mask - mutated.append(array) - - print('Running model') - output = model(torch.tensor(mutated))[0] - - print('Converting to list') - array = output.detach().numpy().tolist() - - print('Constructing out array') - # only grab mask embedding - # can probaby do this in torch? 
not sure how - out = [] - for i, v in enumerate(array): - out.append(v[i]) - - return out - -def get_embedding_group_top(tokens): - sents = get_embedding_group(tokens) - out = [] - - print('get_embedding_group done') - - for sent_i, sent in enumerate(sents): - all_tokens = [] - - for i, v in enumerate(sent): - all_tokens.append({'i': i, 'v': float(v)}) - - all_tokens.sort(key=lambda d: d['v'], reverse=True) - - topTokens = all_tokens[:90] - - sum = np.sum(np.exp(sent)) - for i, token in enumerate(topTokens): - token['p'] = float(np.exp(token['v'])/sum) - - out.append(all_tokens[:90]) - - return out - - -# Runs one token at a time to stay under memory limit -def get_embedding_group_low_mem(tokens): - print(tokens) - - out = [] - for index_of_mask, v in enumerate(tokens): - array = tokens.copy() - array[index_of_mask] = id_of_mask - - input_ids = torch.tensor(array).unsqueeze(0) - prediction_scores = model(input_ids)[0] - - out.append(prediction_scores[0][index_of_mask].detach().numpy()) - - return out - -def get_embedding_group_top_low_mem(tokens): - sents = get_embedding_group_low_mem(tokens) - out = [] - - print('get_embedding_group done') - - for sent_i, sent in enumerate(sents): - all_tokens = [] - - for i, v in enumerate(sent): - all_tokens.append({'i': i, 'v': float(v)}) - - all_tokens.sort(key=lambda d: d['v'], reverse=True) - - topTokens = all_tokens[:90] - - sum = np.sum(np.exp(sent)) - for i, token in enumerate(topTokens): - token['p'] = float(np.exp(token['v'])/sum) - - out.append(all_tokens[:90]) - - return out - - -import os -import shutil - -# Free up memory -if os.environ.get('REMOVE_WEIGHTS') == 'TRUE': - print('removing bert-large-uncased-whole-word-masking from filesystem') - shutil.rmtree('bert-large-uncased-whole-word-masking', ignore_errors=True) diff --git a/spaces/merve/measuring-fairness/server-side/fill-in-the-blank/py/main.py b/spaces/merve/measuring-fairness/server-side/fill-in-the-blank/py/main.py deleted file mode 100644 index 2ac15bda96de733df52cd7730895ae18baf20529..0000000000000000000000000000000000000000 --- a/spaces/merve/measuring-fairness/server-side/fill-in-the-blank/py/main.py +++ /dev/null @@ -1,59 +0,0 @@ -import os -import json -import shutil - -from flask import Flask, request -from flask_cors import CORS - -import model_bert_large -import model_bert_zari_cda - -app = Flask(__name__) -CORS(app) - - -@app.route('/') -def hello_world(): - name = os.environ.get('NAME', 'Test') - print('[Hello]') - return 'Hello {}!'.format(name) - - -@app.route('/embed_test') -def embed_test(): - sentence = 'The dog went to the [MASK].' 
- print('[TEST] ', sentence) - return json.dumps(model_bert_large.get_embeddings(sentence)) - - -@app.route('/embed', methods=['POST']) -def embed(): - data = json.loads(request.data) - sentence = data['sentence'] - print('[BASE] ' + sentence) - return json.dumps(model_bert_large.get_embeddings(sentence)) - -@app.route('/embed_zari_cda', methods=['POST']) -def embed_zari_cda(): - data = json.loads(request.data) - sentence = data['sentence'] - print('[ZARI] ' + sentence) - return json.dumps(model_bert_zari_cda.get_embeddings(sentence)) - - -@app.route('/embed_group_top', methods=['POST']) -def embed_group_top(): - data = json.loads(request.data) - tokens = data['tokens'] - return json.dumps(model_bert_large.get_embedding_group_top(tokens)) - -@app.route('/get_embedding_group_top_low_mem', methods=['POST']) -def embed_group(): - data = json.loads(request.data) - tokens = data['tokens'] - return json.dumps(model_bert_large.get_embedding_group(tokens)) - -if __name__ == '__main__': - app.run(debug=True, host='0.0.0.0', port=int(os.environ.get('PORT', 5004))) - - diff --git a/spaces/metricspace/OcTra/df_local/model.py b/spaces/metricspace/OcTra/df_local/model.py deleted file mode 100644 index d8802766131c82536a511ab1a65c52bff0801edc..0000000000000000000000000000000000000000 --- a/spaces/metricspace/OcTra/df_local/model.py +++ /dev/null @@ -1,24 +0,0 @@ -from importlib import import_module - -import torch -from loguru import logger - -from df_local.config import DfParams, config - - -class ModelParams(DfParams): - def __init__(self): - self.__model = config("MODEL", default="deepfilternet", section="train") - self.__params = getattr(import_module("df_local." + self.__model), "ModelParams")() - - def __getattr__(self, attr: str): - return getattr(self.__params, attr) - - -def init_model(*args, **kwargs): - """Initialize the model specified in the config.""" - model = config("MODEL", default="deepfilternet", section="train") - logger.info(f"Initializing model `{model}`") - model = getattr(import_module("df_local." 
+ model), "init_model")(*args, **kwargs) - model.to(memory_format=torch.channels_last) - return model diff --git a/spaces/mfrashad/ClothingGAN/models/stylegan2/stylegan2-pytorch/inception.py b/spaces/mfrashad/ClothingGAN/models/stylegan2/stylegan2-pytorch/inception.py deleted file mode 100644 index f3afed8123e595f65c1333dea7151e653a836e2b..0000000000000000000000000000000000000000 --- a/spaces/mfrashad/ClothingGAN/models/stylegan2/stylegan2-pytorch/inception.py +++ /dev/null @@ -1,310 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from torchvision import models - -try: - from torchvision.models.utils import load_state_dict_from_url -except ImportError: - from torch.utils.model_zoo import load_url as load_state_dict_from_url - -# Inception weights ported to Pytorch from -# http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz -FID_WEIGHTS_URL = 'https://github.com/mseitzer/pytorch-fid/releases/download/fid_weights/pt_inception-2015-12-05-6726825d.pth' - - -class InceptionV3(nn.Module): - """Pretrained InceptionV3 network returning feature maps""" - - # Index of default block of inception to return, - # corresponds to output of final average pooling - DEFAULT_BLOCK_INDEX = 3 - - # Maps feature dimensionality to their output blocks indices - BLOCK_INDEX_BY_DIM = { - 64: 0, # First max pooling features - 192: 1, # Second max pooling featurs - 768: 2, # Pre-aux classifier features - 2048: 3 # Final average pooling features - } - - def __init__(self, - output_blocks=[DEFAULT_BLOCK_INDEX], - resize_input=True, - normalize_input=True, - requires_grad=False, - use_fid_inception=True): - """Build pretrained InceptionV3 - - Parameters - ---------- - output_blocks : list of int - Indices of blocks to return features of. Possible values are: - - 0: corresponds to output of first max pooling - - 1: corresponds to output of second max pooling - - 2: corresponds to output which is fed to aux classifier - - 3: corresponds to output of final average pooling - resize_input : bool - If true, bilinearly resizes input to width and height 299 before - feeding input to model. As the network without fully connected - layers is fully convolutional, it should be able to handle inputs - of arbitrary size, so resizing might not be strictly needed - normalize_input : bool - If true, scales the input from range (0, 1) to the range the - pretrained Inception network expects, namely (-1, 1) - requires_grad : bool - If true, parameters of the model require gradients. Possibly useful - for finetuning the network - use_fid_inception : bool - If true, uses the pretrained Inception model used in Tensorflow's - FID implementation. If false, uses the pretrained Inception model - available in torchvision. The FID Inception model has different - weights and a slightly different structure from torchvision's - Inception model. If you want to compute FID scores, you are - strongly advised to set this parameter to true to get comparable - results. 
- """ - super(InceptionV3, self).__init__() - - self.resize_input = resize_input - self.normalize_input = normalize_input - self.output_blocks = sorted(output_blocks) - self.last_needed_block = max(output_blocks) - - assert self.last_needed_block <= 3, \ - 'Last possible output block index is 3' - - self.blocks = nn.ModuleList() - - if use_fid_inception: - inception = fid_inception_v3() - else: - inception = models.inception_v3(pretrained=True) - - # Block 0: input to maxpool1 - block0 = [ - inception.Conv2d_1a_3x3, - inception.Conv2d_2a_3x3, - inception.Conv2d_2b_3x3, - nn.MaxPool2d(kernel_size=3, stride=2) - ] - self.blocks.append(nn.Sequential(*block0)) - - # Block 1: maxpool1 to maxpool2 - if self.last_needed_block >= 1: - block1 = [ - inception.Conv2d_3b_1x1, - inception.Conv2d_4a_3x3, - nn.MaxPool2d(kernel_size=3, stride=2) - ] - self.blocks.append(nn.Sequential(*block1)) - - # Block 2: maxpool2 to aux classifier - if self.last_needed_block >= 2: - block2 = [ - inception.Mixed_5b, - inception.Mixed_5c, - inception.Mixed_5d, - inception.Mixed_6a, - inception.Mixed_6b, - inception.Mixed_6c, - inception.Mixed_6d, - inception.Mixed_6e, - ] - self.blocks.append(nn.Sequential(*block2)) - - # Block 3: aux classifier to final avgpool - if self.last_needed_block >= 3: - block3 = [ - inception.Mixed_7a, - inception.Mixed_7b, - inception.Mixed_7c, - nn.AdaptiveAvgPool2d(output_size=(1, 1)) - ] - self.blocks.append(nn.Sequential(*block3)) - - for param in self.parameters(): - param.requires_grad = requires_grad - - def forward(self, inp): - """Get Inception feature maps - - Parameters - ---------- - inp : torch.autograd.Variable - Input tensor of shape Bx3xHxW. Values are expected to be in - range (0, 1) - - Returns - ------- - List of torch.autograd.Variable, corresponding to the selected output - block, sorted ascending by index - """ - outp = [] - x = inp - - if self.resize_input: - x = F.interpolate(x, - size=(299, 299), - mode='bilinear', - align_corners=False) - - if self.normalize_input: - x = 2 * x - 1 # Scale from range (0, 1) to range (-1, 1) - - for idx, block in enumerate(self.blocks): - x = block(x) - if idx in self.output_blocks: - outp.append(x) - - if idx == self.last_needed_block: - break - - return outp - - -def fid_inception_v3(): - """Build pretrained Inception model for FID computation - - The Inception model for FID computation uses a different set of weights - and has a slightly different structure than torchvision's Inception. - - This method first constructs torchvision's Inception and then patches the - necessary parts that are different in the FID Inception model. 
- """ - inception = models.inception_v3(num_classes=1008, - aux_logits=False, - pretrained=False) - inception.Mixed_5b = FIDInceptionA(192, pool_features=32) - inception.Mixed_5c = FIDInceptionA(256, pool_features=64) - inception.Mixed_5d = FIDInceptionA(288, pool_features=64) - inception.Mixed_6b = FIDInceptionC(768, channels_7x7=128) - inception.Mixed_6c = FIDInceptionC(768, channels_7x7=160) - inception.Mixed_6d = FIDInceptionC(768, channels_7x7=160) - inception.Mixed_6e = FIDInceptionC(768, channels_7x7=192) - inception.Mixed_7b = FIDInceptionE_1(1280) - inception.Mixed_7c = FIDInceptionE_2(2048) - - state_dict = load_state_dict_from_url(FID_WEIGHTS_URL, progress=True) - inception.load_state_dict(state_dict) - return inception - - -class FIDInceptionA(models.inception.InceptionA): - """InceptionA block patched for FID computation""" - def __init__(self, in_channels, pool_features): - super(FIDInceptionA, self).__init__(in_channels, pool_features) - - def forward(self, x): - branch1x1 = self.branch1x1(x) - - branch5x5 = self.branch5x5_1(x) - branch5x5 = self.branch5x5_2(branch5x5) - - branch3x3dbl = self.branch3x3dbl_1(x) - branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl) - branch3x3dbl = self.branch3x3dbl_3(branch3x3dbl) - - # Patch: Tensorflow's average pool does not use the padded zero's in - # its average calculation - branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1, - count_include_pad=False) - branch_pool = self.branch_pool(branch_pool) - - outputs = [branch1x1, branch5x5, branch3x3dbl, branch_pool] - return torch.cat(outputs, 1) - - -class FIDInceptionC(models.inception.InceptionC): - """InceptionC block patched for FID computation""" - def __init__(self, in_channels, channels_7x7): - super(FIDInceptionC, self).__init__(in_channels, channels_7x7) - - def forward(self, x): - branch1x1 = self.branch1x1(x) - - branch7x7 = self.branch7x7_1(x) - branch7x7 = self.branch7x7_2(branch7x7) - branch7x7 = self.branch7x7_3(branch7x7) - - branch7x7dbl = self.branch7x7dbl_1(x) - branch7x7dbl = self.branch7x7dbl_2(branch7x7dbl) - branch7x7dbl = self.branch7x7dbl_3(branch7x7dbl) - branch7x7dbl = self.branch7x7dbl_4(branch7x7dbl) - branch7x7dbl = self.branch7x7dbl_5(branch7x7dbl) - - # Patch: Tensorflow's average pool does not use the padded zero's in - # its average calculation - branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1, - count_include_pad=False) - branch_pool = self.branch_pool(branch_pool) - - outputs = [branch1x1, branch7x7, branch7x7dbl, branch_pool] - return torch.cat(outputs, 1) - - -class FIDInceptionE_1(models.inception.InceptionE): - """First InceptionE block patched for FID computation""" - def __init__(self, in_channels): - super(FIDInceptionE_1, self).__init__(in_channels) - - def forward(self, x): - branch1x1 = self.branch1x1(x) - - branch3x3 = self.branch3x3_1(x) - branch3x3 = [ - self.branch3x3_2a(branch3x3), - self.branch3x3_2b(branch3x3), - ] - branch3x3 = torch.cat(branch3x3, 1) - - branch3x3dbl = self.branch3x3dbl_1(x) - branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl) - branch3x3dbl = [ - self.branch3x3dbl_3a(branch3x3dbl), - self.branch3x3dbl_3b(branch3x3dbl), - ] - branch3x3dbl = torch.cat(branch3x3dbl, 1) - - # Patch: Tensorflow's average pool does not use the padded zero's in - # its average calculation - branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1, - count_include_pad=False) - branch_pool = self.branch_pool(branch_pool) - - outputs = [branch1x1, branch3x3, branch3x3dbl, branch_pool] - return torch.cat(outputs, 
1) - - -class FIDInceptionE_2(models.inception.InceptionE): - """Second InceptionE block patched for FID computation""" - def __init__(self, in_channels): - super(FIDInceptionE_2, self).__init__(in_channels) - - def forward(self, x): - branch1x1 = self.branch1x1(x) - - branch3x3 = self.branch3x3_1(x) - branch3x3 = [ - self.branch3x3_2a(branch3x3), - self.branch3x3_2b(branch3x3), - ] - branch3x3 = torch.cat(branch3x3, 1) - - branch3x3dbl = self.branch3x3dbl_1(x) - branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl) - branch3x3dbl = [ - self.branch3x3dbl_3a(branch3x3dbl), - self.branch3x3dbl_3b(branch3x3dbl), - ] - branch3x3dbl = torch.cat(branch3x3dbl, 1) - - # Patch: The FID Inception model uses max pooling instead of average - # pooling. This is likely an error in this specific Inception - # implementation, as other Inception models use average pooling here - # (which matches the description in the paper). - branch_pool = F.max_pool2d(x, kernel_size=3, stride=1, padding=1) - branch_pool = self.branch_pool(branch_pool) - - outputs = [branch1x1, branch3x3, branch3x3dbl, branch_pool] - return torch.cat(outputs, 1) diff --git a/spaces/mfrashad/ClothingGAN/netdissect/dissection.py b/spaces/mfrashad/ClothingGAN/netdissect/dissection.py deleted file mode 100644 index 6eef0dfd0b8804e45eb878aca68e72f8c6493474..0000000000000000000000000000000000000000 --- a/spaces/mfrashad/ClothingGAN/netdissect/dissection.py +++ /dev/null @@ -1,1617 +0,0 @@ -''' -To run dissection: - -1. Load up the convolutional model you wish to dissect, and wrap it in - an InstrumentedModel; then call imodel.retain_layers([layernames,..]) - to instrument the layers of interest. -2. Load the segmentation dataset using the BrodenDataset class; - use the transform_image argument to normalize images to be - suitable for the model, or the size argument to truncate the dataset. -3. Choose a directory in which to write the output, and call - dissect(outdir, model, dataset). 
- -Example: - - from dissect import InstrumentedModel, dissect - from broden import BrodenDataset - - model = InstrumentedModel(load_my_model()) - model.eval() - model.cuda() - model.retain_layers(['conv1', 'conv2', 'conv3', 'conv4', 'conv5']) - bds = BrodenDataset('dataset/broden1_227', - transform_image=transforms.Compose([ - transforms.ToTensor(), - transforms.Normalize(IMAGE_MEAN, IMAGE_STDEV)]), - size=1000) - dissect('result/dissect', model, bds, - examples_per_unit=10) -''' - -import torch, numpy, os, re, json, shutil, types, tempfile, torchvision -# import warnings -# warnings.simplefilter('error', UserWarning) -from PIL import Image -from xml.etree import ElementTree as et -from collections import OrderedDict, defaultdict -from .progress import verbose_progress, default_progress, print_progress -from .progress import desc_progress -from .runningstats import RunningQuantile, RunningTopK -from .runningstats import RunningCrossCovariance, RunningConditionalQuantile -from .sampler import FixedSubsetSampler -from .actviz import activation_visualization -from .segviz import segment_visualization, high_contrast -from .workerpool import WorkerBase, WorkerPool -from .segmenter import UnifiedParsingSegmenter - -def dissect(outdir, model, dataset, - segrunner=None, - train_dataset=None, - model_segmenter=None, - quantile_threshold=0.005, - iou_threshold=0.05, - iqr_threshold=0.01, - examples_per_unit=100, - batch_size=100, - num_workers=24, - seg_batch_size=5, - make_images=True, - make_labels=True, - make_maxiou=False, - make_covariance=False, - make_report=True, - make_row_images=True, - make_single_images=False, - rank_all_labels=False, - netname=None, - meta=None, - merge=None, - settings=None, - ): - ''' - Runs net dissection in-memory, using pytorch, and saves visualizations - and metadata into outdir. - ''' - assert not model.training, 'Run model.eval() before dissection' - if netname is None: - netname = type(model).__name__ - if segrunner is None: - segrunner = ClassifierSegRunner(dataset) - if train_dataset is None: - train_dataset = dataset - make_iqr = (quantile_threshold == 'iqr') - with torch.no_grad(): - device = next(model.parameters()).device - levels = None - labelnames, catnames = None, None - maxioudata, iqrdata = None, None - labeldata = None - iqrdata, cov = None, None - - labelnames, catnames = segrunner.get_label_and_category_names() - label_category = [catnames.index(c) if c in catnames else 0 - for l, c in labelnames] - - # First, always collect qunatiles and topk information. 
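# Aside: the per-channel "levels" computed below are just high quantiles of
# each channel's activation distribution. A minimal standalone sketch of the
# idea (hypothetical shapes; the real code streams through RunningQuantile):
import torch
quantile_threshold_demo = 0.005              # same default as dissect()
acts_demo = torch.randn(10000, 512)          # (sample pixels, channels)
levels_demo = torch.quantile(acts_demo, 1.0 - quantile_threshold_demo, dim=0)
assert levels_demo.shape == (512,)           # one threshold per unit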
- segloader = torch.utils.data.DataLoader(dataset, - batch_size=batch_size, num_workers=num_workers, - pin_memory=(device.type == 'cuda')) - quantiles, topk = collect_quantiles_and_topk(outdir, model, - segloader, segrunner, k=examples_per_unit) - - # Thresholds can be automatically chosen by maximizing iqr - if make_iqr: - # Get thresholds based on an IQR optimization - segloader = torch.utils.data.DataLoader(train_dataset, - batch_size=1, num_workers=num_workers, - pin_memory=(device.type == 'cuda')) - iqrdata = collect_iqr(outdir, model, segloader, segrunner) - max_iqr, full_iqr_levels = iqrdata[:2] - max_iqr_agreement = iqrdata[4] - # qualified_iqr[max_iqr_quantile[layer] > 0.5] = 0 - levels = {layer: full_iqr_levels[layer][ - max_iqr[layer].max(0)[1], - torch.arange(max_iqr[layer].shape[1])].to(device) - for layer in full_iqr_levels} - else: - levels = {k: qc.quantiles([1.0 - quantile_threshold])[:,0] - for k, qc in quantiles.items()} - - quantiledata = (topk, quantiles, levels, quantile_threshold) - - if make_images: - segloader = torch.utils.data.DataLoader(dataset, - batch_size=batch_size, num_workers=num_workers, - pin_memory=(device.type == 'cuda')) - generate_images(outdir, model, dataset, topk, levels, segrunner, - row_length=examples_per_unit, batch_size=seg_batch_size, - row_images=make_row_images, - single_images=make_single_images, - num_workers=num_workers) - - if make_maxiou: - assert train_dataset, "Need training dataset for maxiou." - segloader = torch.utils.data.DataLoader(train_dataset, - batch_size=1, num_workers=num_workers, - pin_memory=(device.type == 'cuda')) - maxioudata = collect_maxiou(outdir, model, segloader, - segrunner) - - if make_labels: - segloader = torch.utils.data.DataLoader(dataset, - batch_size=1, num_workers=num_workers, - pin_memory=(device.type == 'cuda')) - iou_scores, iqr_scores, tcs, lcs, ccs, ics = ( - collect_bincounts(outdir, model, segloader, - levels, segrunner)) - labeldata = (iou_scores, iqr_scores, lcs, ccs, ics, iou_threshold, - iqr_threshold) - - if make_covariance: - segloader = torch.utils.data.DataLoader(dataset, - batch_size=seg_batch_size, - num_workers=num_workers, - pin_memory=(device.type == 'cuda')) - cov = collect_covariance(outdir, model, segloader, segrunner) - - if make_report: - generate_report(outdir, - quantiledata=quantiledata, - labelnames=labelnames, - catnames=catnames, - labeldata=labeldata, - maxioudata=maxioudata, - iqrdata=iqrdata, - covariancedata=cov, - rank_all_labels=rank_all_labels, - netname=netname, - meta=meta, - mergedata=merge, - settings=settings) - - return quantiledata, labeldata - -def generate_report(outdir, quantiledata, labelnames=None, catnames=None, - labeldata=None, maxioudata=None, iqrdata=None, covariancedata=None, - rank_all_labels=False, netname='Model', meta=None, settings=None, - mergedata=None): - ''' - Creates dissection.json reports and summary bargraph.svg files in the - specified output directory, and copies a dissection.html interface - to go along with it. - ''' - all_layers = [] - # Current source code directory, for html to copy. 
- srcdir = os.path.realpath( - os.path.join(os.getcwd(), os.path.dirname(__file__))) - # Unpack arguments - topk, quantiles, levels, quantile_threshold = quantiledata - top_record = dict( - netname=netname, - meta=meta, - default_ranking='unit', - quantile_threshold=quantile_threshold) - if settings is not None: - top_record['settings'] = settings - if labeldata is not None: - iou_scores, iqr_scores, lcs, ccs, ics, iou_threshold, iqr_threshold = ( - labeldata) - catorder = {'object': -7, 'scene': -6, 'part': -5, - 'piece': -4, - 'material': -3, 'texture': -2, 'color': -1} - for i, cat in enumerate(c for c in catnames if c not in catorder): - catorder[cat] = i - catnumber = {n: i for i, n in enumerate(catnames)} - catnumber['-'] = 0 - top_record['default_ranking'] = 'label' - top_record['iou_threshold'] = iou_threshold - top_record['iqr_threshold'] = iqr_threshold - labelnumber = dict((name[0], num) - for num, name in enumerate(labelnames)) - # Make a segmentation color dictionary - segcolors = {} - for i, name in enumerate(labelnames): - key = ','.join(str(s) for s in high_contrast[i % len(high_contrast)]) - if key in segcolors: - segcolors[key] += '/' + name[0] - else: - segcolors[key] = name[0] - top_record['segcolors'] = segcolors - for layer in topk.keys(): - units, rankings = [], [] - record = dict(layer=layer, units=units, rankings=rankings) - # For every unit, we always have basic visualization information. - topa, topi = topk[layer].result() - lev = levels[layer] - for u in range(len(topa)): - units.append(dict( - unit=u, - interp=True, - level=lev[u].item(), - top=[dict(imgnum=i.item(), maxact=a.item()) - for i, a in zip(topi[u], topa[u])], - )) - rankings.append(dict(name="unit", score=list([ - u for u in range(len(topa))]))) - # TODO: consider including stats and ranking based on quantiles, - # variance, connectedness here. 
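# For orientation, one entry of the per-unit "units" list built above looks
# roughly like this before labeldata enriches it (values invented):
example_unit = {
    'unit': 17,                              # channel index within the layer
    'interp': True,                          # passes the score thresholds
    'level': 4.2,                            # activation threshold for the unit
    'top': [{'imgnum': 103, 'maxact': 9.1},  # top-k images, best first
            {'imgnum': 88, 'maxact': 8.7}],
}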
- - # if we have labeldata, then every unit also gets a bunch of other info - if labeldata is not None: - lscore, qscore, cc, ic = [dat[layer] - for dat in [iou_scores, iqr_scores, ccs, ics]] - if iqrdata is not None: - # If we have IQR thresholds, assign labels based on that - max_iqr, max_iqr_level = iqrdata[:2] - best_label = max_iqr[layer].max(0)[1] - best_score = lscore[best_label, torch.arange(lscore.shape[1])] - best_qscore = qscore[best_label, torch.arange(lscore.shape[1])] - else: - # Otherwise, assign labels based on max iou - best_score, best_label = lscore.max(0) - best_qscore = qscore[best_label, torch.arange(qscore.shape[1])] - record['iou_threshold'] = iou_threshold, - for u, urec in enumerate(units): - score, qscore, label = ( - best_score[u], best_qscore[u], best_label[u]) - urec.update(dict( - iou=score.item(), - iou_iqr=qscore.item(), - lc=lcs[label].item(), - cc=cc[catnumber[labelnames[label][1]], u].item(), - ic=ic[label, u].item(), - interp=(qscore.item() > iqr_threshold and - score.item() > iou_threshold), - iou_labelnum=label.item(), - iou_label=labelnames[label.item()][0], - iou_cat=labelnames[label.item()][1], - )) - if maxioudata is not None: - max_iou, max_iou_level, max_iou_quantile = maxioudata - qualified_iou = max_iou[layer].clone() - # qualified_iou[max_iou_quantile[layer] > 0.75] = 0 - best_score, best_label = qualified_iou.max(0) - for u, urec in enumerate(units): - urec.update(dict( - maxiou=best_score[u].item(), - maxiou_label=labelnames[best_label[u].item()][0], - maxiou_cat=labelnames[best_label[u].item()][1], - maxiou_level=max_iou_level[layer][best_label[u], u].item(), - maxiou_quantile=max_iou_quantile[layer][ - best_label[u], u].item())) - if iqrdata is not None: - [max_iqr, max_iqr_level, max_iqr_quantile, - max_iqr_iou, max_iqr_agreement] = iqrdata - qualified_iqr = max_iqr[layer].clone() - qualified_iqr[max_iqr_quantile[layer] > 0.5] = 0 - best_score, best_label = qualified_iqr.max(0) - for u, urec in enumerate(units): - urec.update(dict( - iqr=best_score[u].item(), - iqr_label=labelnames[best_label[u].item()][0], - iqr_cat=labelnames[best_label[u].item()][1], - iqr_level=max_iqr_level[layer][best_label[u], u].item(), - iqr_quantile=max_iqr_quantile[layer][ - best_label[u], u].item(), - iqr_iou=max_iqr_iou[layer][best_label[u], u].item() - )) - if covariancedata is not None: - score = covariancedata[layer].correlation() - best_score, best_label = score.max(1) - for u, urec in enumerate(units): - urec.update(dict( - cor=best_score[u].item(), - cor_label=labelnames[best_label[u].item()][0], - cor_cat=labelnames[best_label[u].item()][1] - )) - if mergedata is not None: - # Final step: if the user passed any data to merge into the - # units, merge them now. This can be used, for example, to - # indiate that a unit is not interpretable based on some - # outside analysis of unit statistics. - for lrec in mergedata.get('layers', []): - if lrec['layer'] == layer: - break - else: - lrec = None - for u, urec in enumerate(lrec.get('units', []) if lrec else []): - units[u].update(urec) - # After populating per-unit info, populate per-layer ranking info - if labeldata is not None: - # Collect all labeled units - labelunits = defaultdict(list) - all_labelunits = defaultdict(list) - for u, urec in enumerate(units): - if urec['interp']: - labelunits[urec['iou_labelnum']].append(u) - all_labelunits[urec['iou_labelnum']].append(u) - # Sort all units in order with most popular label first. 
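# The sorted() call below orders on a tuple key; fields that should rank
# descending are negated so a single ascending sort handles every criterion.
# Toy version with invented records:
units_demo = [
    {'unit': 0, 'interp': True, 'label': 'dog', 'iou': 0.21},
    {'unit': 1, 'interp': False, 'label': 'sky', 'iou': 0.02},
    {'unit': 2, 'interp': True, 'label': 'dog', 'iou': 0.34},
]
by_rank = sorted(units_demo, key=lambda r: (-1 if r['interp'] else 0,
                                            r['label'],      # group labels
                                            -r['iou']))      # best score first
assert [r['unit'] for r in by_rank] == [2, 0, 1]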
- label_ordering = sorted(units, - # Sort by: - key=lambda r: (-1 if r['interp'] else 0, # interpretable - -len(labelunits[r['iou_labelnum']]), # label freq, score - -max([units[u]['iou'] - for u in labelunits[r['iou_labelnum']]], default=0), - r['iou_labelnum'], # label - -r['iou'])) # unit score - # Add label and iou ranking. - rankings.append(dict(name="label", score=(numpy.argsort(list( - ur['unit'] for ur in label_ordering))).tolist())) - rankings.append(dict(name="max iou", metric="iou", score=list( - -ur['iou'] for ur in units))) - # Add ranking for top labels - # for labelnum in [n for n in sorted( - # all_labelunits.keys(), key=lambda x: - # -len(all_labelunits[x])) if len(all_labelunits[n])]: - # label = labelnames[labelnum][0] - # rankings.append(dict(name="%s-iou" % label, - # concept=label, metric='iou', - # score=(-lscore[labelnum, :]).tolist())) - # Collate labels by category then frequency. - record['labels'] = [dict( - label=labelnames[label][0], - labelnum=label, - units=labelunits[label], - cat=labelnames[label][1]) - for label in (sorted(labelunits.keys(), - # Sort by: - key=lambda l: (catorder.get( # category - labelnames[l][1], 0), - -len(labelunits[l]), # label freq - -max([units[u]['iou'] for u in labelunits[l]], - default=0) # score - ))) if len(labelunits[label])] - # Total number of interpretable units. - record['interpretable'] = sum(len(group['units']) - for group in record['labels']) - # Make a bargraph of labels - os.makedirs(os.path.join(outdir, safe_dir_name(layer)), - exist_ok=True) - catgroups = OrderedDict() - for _, cat in sorted([(v, k) for k, v in catorder.items()]): - catgroups[cat] = [] - for rec in record['labels']: - if rec['cat'] not in catgroups: - catgroups[rec['cat']] = [] - catgroups[rec['cat']].append(rec['label']) - make_svg_bargraph( - [rec['label'] for rec in record['labels']], - [len(rec['units']) for rec in record['labels']], - [(cat, len(group)) for cat, group in catgroups.items()], - filename=os.path.join(outdir, safe_dir_name(layer), - 'bargraph.svg')) - # Only show the bargraph if it is non-empty. - if len(record['labels']): - record['bargraph'] = 'bargraph.svg' - if maxioudata is not None: - rankings.append(dict(name="max maxiou", metric="maxiou", score=list( - -ur['maxiou'] for ur in units))) - if iqrdata is not None: - rankings.append(dict(name="max iqr", metric="iqr", score=list( - -ur['iqr'] for ur in units))) - if covariancedata is not None: - rankings.append(dict(name="max cor", metric="cor", score=list( - -ur['cor'] for ur in units))) - - all_layers.append(record) - # Now add the same rankings to every layer... 
- all_labels = None - if rank_all_labels: - all_labels = [name for name, cat in labelnames] - if labeldata is not None: - # Count layers+quadrants with a given label, and sort by freq - counted_labels = defaultdict(int) - for label in [ - re.sub(r'-(?:t|b|l|r|tl|tr|bl|br)$', '', unitrec['iou_label']) - for record in all_layers for unitrec in record['units']]: - counted_labels[label] += 1 - if all_labels is None: - all_labels = [label for count, label in sorted((-v, k) - for k, v in counted_labels.items())] - for record in all_layers: - layer = record['layer'] - for label in all_labels: - labelnum = labelnumber[label] - record['rankings'].append(dict(name="%s-iou" % label, - concept=label, metric='iou', - score=(-iou_scores[layer][labelnum, :]).tolist())) - - if maxioudata is not None: - if all_labels is None: - counted_labels = defaultdict(int) - for label in [ - re.sub(r'-(?:t|b|l|r|tl|tr|bl|br)$', '', - unitrec['maxiou_label']) - for record in all_layers for unitrec in record['units']]: - counted_labels[label] += 1 - all_labels = [label for count, label in sorted((-v, k) - for k, v in counted_labels.items())] - qualified_iou = max_iou[layer].clone() - qualified_iou[max_iou_quantile[layer] > 0.5] = 0 - for record in all_layers: - layer = record['layer'] - for label in all_labels: - labelnum = labelnumber[label] - record['rankings'].append(dict(name="%s-maxiou" % label, - concept=label, metric='maxiou', - score=(-qualified_iou[labelnum, :]).tolist())) - - if iqrdata is not None: - if all_labels is None: - counted_labels = defaultdict(int) - for label in [ - re.sub(r'-(?:t|b|l|r|tl|tr|bl|br)$', '', - unitrec['iqr_label']) - for record in all_layers for unitrec in record['units']]: - counted_labels[label] += 1 - all_labels = [label for count, label in sorted((-v, k) - for k, v in counted_labels.items())] - # qualified_iqr[max_iqr_quantile[layer] > 0.5] = 0 - for record in all_layers: - layer = record['layer'] - qualified_iqr = max_iqr[layer].clone() - for label in all_labels: - labelnum = labelnumber[label] - record['rankings'].append(dict(name="%s-iqr" % label, - concept=label, metric='iqr', - score=(-qualified_iqr[labelnum, :]).tolist())) - - if covariancedata is not None: - if all_labels is None: - counted_labels = defaultdict(int) - for label in [ - re.sub(r'-(?:t|b|l|r|tl|tr|bl|br)$', '', - unitrec['cor_label']) - for record in all_layers for unitrec in record['units']]: - counted_labels[label] += 1 - all_labels = [label for count, label in sorted((-v, k) - for k, v in counted_labels.items())] - for record in all_layers: - layer = record['layer'] - score = covariancedata[layer].correlation() - for label in all_labels: - labelnum = labelnumber[label] - record['rankings'].append(dict(name="%s-cor" % label, - concept=label, metric='cor', - score=(-score[:, labelnum]).tolist())) - - for record in all_layers: - layer = record['layer'] - # Dump per-layer json inside per-layer directory - record['dirname'] = '.' 
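# The dumps below (per-layer here, all-layer afterwards) produce a directory
# layout along these lines, with layer names passed through safe_dir_name:
#
#   outdir/dissect.json, dissect.html, edit.html     <- all-layer report
#   outdir/<layer>/dissect.json, dissect.html        <- single-layer report
#   outdir/<layer>/bargraph.svg                      <- label histogram
#   outdir/<layer>/image/<unit>-top.jpg, ...         <- visualization strips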
- with open(os.path.join(outdir, safe_dir_name(layer), 'dissect.json'), - 'w') as jsonfile: - top_record['layers'] = [record] - json.dump(top_record, jsonfile, indent=1) - # Copy the per-layer html - shutil.copy(os.path.join(srcdir, 'dissect.html'), - os.path.join(outdir, safe_dir_name(layer), 'dissect.html')) - record['dirname'] = safe_dir_name(layer) - - # Dump all-layer json in parent directory - with open(os.path.join(outdir, 'dissect.json'), 'w') as jsonfile: - top_record['layers'] = all_layers - json.dump(top_record, jsonfile, indent=1) - # Copy the all-layer html - shutil.copy(os.path.join(srcdir, 'dissect.html'), - os.path.join(outdir, 'dissect.html')) - shutil.copy(os.path.join(srcdir, 'edit.html'), - os.path.join(outdir, 'edit.html')) - - -def generate_images(outdir, model, dataset, topk, levels, - segrunner, row_length=None, gap_pixels=5, - row_images=True, single_images=False, prefix='', - batch_size=100, num_workers=24): - ''' - Creates an image strip file for every unit of every retained layer - of the model, in the format [outdir]/[layername]/[unitnum]-top.jpg. - Assumes that the indexes of topk refer to the indexes of dataset. - Limits each strip to the top row_length images. - ''' - progress = default_progress() - needed_images = {} - if row_images is False: - row_length = 1 - # Pass 1: needed_images lists all images that are topk for some unit. - for layer in topk: - topresult = topk[layer].result()[1].cpu() - for unit, row in enumerate(topresult): - for rank, imgnum in enumerate(row[:row_length]): - imgnum = imgnum.item() - if imgnum not in needed_images: - needed_images[imgnum] = [] - needed_images[imgnum].append((layer, unit, rank)) - levels = {k: v.cpu().numpy() for k, v in levels.items()} - row_length = len(row[:row_length]) - needed_sample = FixedSubsetSampler(sorted(needed_images.keys())) - device = next(model.parameters()).device - segloader = torch.utils.data.DataLoader(dataset, - batch_size=batch_size, num_workers=num_workers, - pin_memory=(device.type == 'cuda'), - sampler=needed_sample) - vizgrid, maskgrid, origrid, seggrid = [{} for _ in range(4)] - # Pass 2: populate vizgrid with visualizations of top units. - pool = None - for i, batch in enumerate( - progress(segloader, desc='Making images')): - # Reverse transformation to get the image in byte form. - seg, _, byte_im, _ = segrunner.run_and_segment_batch(batch, model, - want_rgb=True) - torch_features = model.retained_features() - scale_offset = getattr(model, 'scale_offset', None) - if pool is None: - # Distribute the work across processes: create shared mmaps. - for layer, tf in torch_features.items(): - [vizgrid[layer], maskgrid[layer], origrid[layer], - seggrid[layer]] = [ - create_temp_mmap_grid((tf.shape[1], - byte_im.shape[1], row_length, - byte_im.shape[2] + gap_pixels, depth), - dtype='uint8', - fill=255) - for depth in [3, 4, 3, 3]] - # Pass those mmaps to worker processes. 
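# The shared grids above rely on numpy.memmap: every worker process reopens
# the same backing file and fills disjoint slices. Minimal sketch of the
# pattern (hypothetical shape):
import os, tempfile
import numpy as np
path_demo = os.path.join(tempfile.mkdtemp(), 'grid.mmap')
grid_demo = np.memmap(path_demo, dtype='uint8', mode='w+', shape=(4, 8))
grid_demo[...] = 255                         # parent initializes the grid
view_demo = np.memmap(path_demo, dtype='uint8', mode='r+', shape=(4, 8))
view_demo[2, :] = 0                          # a worker writes its slice
assert grid_demo[2, 0] == 0                  # visible through the shared file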
- pool = WorkerPool(worker=VisualizeImageWorker, - memmap_grid_info=[ - {layer: (g.filename, g.shape, g.dtype) - for layer, g in grid.items()} - for grid in [vizgrid, maskgrid, origrid, seggrid]]) - byte_im = byte_im.cpu().numpy() - numpy_seg = seg.cpu().numpy() - features = {} - for index in range(len(byte_im)): - imgnum = needed_sample.samples[index + i*segloader.batch_size] - for layer, unit, rank in needed_images[imgnum]: - if layer not in features: - features[layer] = torch_features[layer].cpu().numpy() - pool.add(layer, unit, rank, - byte_im[index], - features[layer][index, unit], - levels[layer][unit], - scale_offset[layer] if scale_offset else None, - numpy_seg[index]) - pool.join() - # Pass 3: save image strips as [outdir]/[layer]/[unitnum]-[top/orig].jpg - pool = WorkerPool(worker=SaveImageWorker) - for layer, vg in progress(vizgrid.items(), desc='Saving images'): - os.makedirs(os.path.join(outdir, safe_dir_name(layer), - prefix + 'image'), exist_ok=True) - if single_images: - os.makedirs(os.path.join(outdir, safe_dir_name(layer), - prefix + 's-image'), exist_ok=True) - og, sg, mg = origrid[layer], seggrid[layer], maskgrid[layer] - for unit in progress(range(len(vg)), desc='Units'): - for suffix, grid in [('top.jpg', vg), ('orig.jpg', og), - ('seg.png', sg), ('mask.png', mg)]: - strip = grid[unit].reshape( - (grid.shape[1], grid.shape[2] * grid.shape[3], - grid.shape[4])) - if row_images: - filename = os.path.join(outdir, safe_dir_name(layer), - prefix + 'image', '%d-%s' % (unit, suffix)) - pool.add(strip[:,:-gap_pixels,:].copy(), filename) - # Image.fromarray(strip[:,:-gap_pixels,:]).save(filename, - # optimize=True, quality=80) - if single_images: - single_filename = os.path.join(outdir, safe_dir_name(layer), - prefix + 's-image', '%d-%s' % (unit, suffix)) - pool.add(strip[:,:strip.shape[1] // row_length - - gap_pixels,:].copy(), single_filename) - # Image.fromarray(strip[:,:strip.shape[1] // row_length - # - gap_pixels,:]).save(single_filename, - # optimize=True, quality=80) - pool.join() - # Delete the shared memory map files - clear_global_shared_files([g.filename - for grid in [vizgrid, maskgrid, origrid, seggrid] - for g in grid.values()]) - -global_shared_files = {} -def create_temp_mmap_grid(shape, dtype, fill): - dtype = numpy.dtype(dtype) - filename = os.path.join(tempfile.mkdtemp(), 'temp-%s-%s.mmap' % - ('x'.join('%d' % s for s in shape), dtype.name)) - fid = open(filename, mode='w+b') - original = numpy.memmap(fid, dtype=dtype, mode='w+', shape=shape) - original.fid = fid - original[...] 
= fill - global_shared_files[filename] = original - return original - -def shared_temp_mmap_grid(filename, shape, dtype): - if filename not in global_shared_files: - global_shared_files[filename] = numpy.memmap( - filename, dtype=dtype, mode='r+', shape=shape) - return global_shared_files[filename] - -def clear_global_shared_files(filenames): - for fn in filenames: - if fn in global_shared_files: - del global_shared_files[fn] - try: - os.unlink(fn) - except OSError: - pass - -class VisualizeImageWorker(WorkerBase): - def setup(self, memmap_grid_info): - self.vizgrid, self.maskgrid, self.origrid, self.seggrid = [ - {layer: shared_temp_mmap_grid(*info) - for layer, info in grid.items()} - for grid in memmap_grid_info] - def work(self, layer, unit, rank, - byte_im, acts, level, scale_offset, seg): - self.origrid[layer][unit,:,rank,:byte_im.shape[0],:] = byte_im - [self.vizgrid[layer][unit,:,rank,:byte_im.shape[0],:], - self.maskgrid[layer][unit,:,rank,:byte_im.shape[0],:]] = ( - activation_visualization( - byte_im, - acts, - level, - scale_offset=scale_offset, - return_mask=True)) - self.seggrid[layer][unit,:,rank,:byte_im.shape[0],:] = ( - segment_visualization(seg, byte_im.shape[0:2])) - -class SaveImageWorker(WorkerBase): - def work(self, data, filename): - Image.fromarray(data).save(filename, optimize=True, quality=80) - -def score_tally_stats(label_category, tc, truth, cc, ic): - pred = cc[label_category] - total = tc[label_category][:, None] - truth = truth[:, None] - epsilon = 1e-20 # avoid division-by-zero - union = pred + truth - ic - iou = ic.double() / (union.double() + epsilon) - arr = torch.empty(size=(2, 2) + ic.shape, dtype=ic.dtype, device=ic.device) - arr[0, 0] = ic - arr[0, 1] = pred - ic - arr[1, 0] = truth - ic - arr[1, 1] = total - union - arr = arr.double() / total.double() - mi = mutual_information(arr) - je = joint_entropy(arr) - iqr = mi / je - iqr[torch.isnan(iqr)] = 0 # Zero out any 0/0 - return iou, iqr - -def collect_quantiles_and_topk(outdir, model, segloader, - segrunner, k=100, resolution=1024): - ''' - Collects (estimated) quantile information and (exact) sorted top-K lists - for every channel in the retained layers of the model. Returns - a map of quantiles (one RunningQuantile for each layer) along with - a map of topk (one RunningTopK for each layer). - ''' - device = next(model.parameters()).device - features = model.retained_features() - cached_quantiles = { - layer: load_quantile_if_present(os.path.join(outdir, - safe_dir_name(layer)), 'quantiles.npz', - device=torch.device('cpu')) - for layer in features } - cached_topks = { - layer: load_topk_if_present(os.path.join(outdir, - safe_dir_name(layer)), 'topk.npz', - device=torch.device('cpu')) - for layer in features } - if (all(value is not None for value in cached_quantiles.values()) and - all(value is not None for value in cached_topks.values())): - return cached_quantiles, cached_topks - - layer_batch_size = 8 - all_layers = list(features.keys()) - layer_batches = [all_layers[i:i+layer_batch_size] - for i in range(0, len(all_layers), layer_batch_size)] - - quantiles, topks = {}, {} - progress = default_progress() - for layer_batch in layer_batches: - for i, batch in enumerate(progress(segloader, desc='Quantiles')): - # We don't actually care about the model output. 
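# model.retained_features() comes from the InstrumentedModel wrapper (defined
# elsewhere); underneath it is an ordinary forward hook. Minimal sketch of the
# retain-activations idea on a throwaway network:
import torch
from torch import nn
net_demo = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 8, 3))
retained_demo = {}
net_demo[0].register_forward_hook(
    lambda module, inputs, output: retained_demo.update(conv1=output.detach()))
with torch.no_grad():
    net_demo(torch.randn(2, 3, 32, 32))      # the output itself is discarded
assert retained_demo['conv1'].shape == (2, 8, 30, 30)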
- model(batch[0].to(device)) - features = model.retained_features() - # We care about the retained values - for key in layer_batch: - value = features[key] - if topks.get(key, None) is None: - topks[key] = RunningTopK(k) - if quantiles.get(key, None) is None: - quantiles[key] = RunningQuantile(resolution=resolution) - topvalue = value - if len(value.shape) > 2: - topvalue, _ = value.view(*(value.shape[:2] + (-1,))).max(2) - # Put the channel index last. - value = value.permute( - (0,) + tuple(range(2, len(value.shape))) + (1,) - ).contiguous().view(-1, value.shape[1]) - quantiles[key].add(value) - topks[key].add(topvalue) - # Save GPU memory - for key in layer_batch: - quantiles[key].to_(torch.device('cpu')) - topks[key].to_(torch.device('cpu')) - for layer in quantiles: - save_state_dict(quantiles[layer], - os.path.join(outdir, safe_dir_name(layer), 'quantiles.npz')) - save_state_dict(topks[layer], - os.path.join(outdir, safe_dir_name(layer), 'topk.npz')) - return quantiles, topks - -def collect_bincounts(outdir, model, segloader, levels, segrunner): - ''' - Returns label_counts, category_activation_counts, and intersection_counts, - across the data set, counting the pixels of intersection between upsampled, - thresholded model featuremaps, with segmentation classes in the segloader. - - label_counts (independent of model): pixels across the data set that - are labeled with the given label. - category_activation_counts (one per layer): for each feature channel, - pixels across the dataset where the channel exceeds the level - threshold. There is one count per category: activations only - contribute to the categories for which any category labels are - present on the images. - intersection_counts (one per layer): for each feature channel and - label, pixels across the dataset where the channel exceeds - the level, and the labeled segmentation class is also present. - - This is a performance-sensitive function. Best performance is - achieved with a counting scheme which assumes a segloader with - batch_size 1. 
- ''' - # Load cached data if present - (iou_scores, iqr_scores, - total_counts, label_counts, category_activation_counts, - intersection_counts) = {}, {}, None, None, {}, {} - found_all = True - for layer in model.retained_features(): - filename = os.path.join(outdir, safe_dir_name(layer), 'bincounts.npz') - if os.path.isfile(filename): - data = numpy.load(filename) - iou_scores[layer] = torch.from_numpy(data['iou_scores']) - iqr_scores[layer] = torch.from_numpy(data['iqr_scores']) - total_counts = torch.from_numpy(data['total_counts']) - label_counts = torch.from_numpy(data['label_counts']) - category_activation_counts[layer] = torch.from_numpy( - data['category_activation_counts']) - intersection_counts[layer] = torch.from_numpy( - data['intersection_counts']) - else: - found_all = False - if found_all: - return (iou_scores, iqr_scores, - total_counts, label_counts, category_activation_counts, - intersection_counts) - - device = next(model.parameters()).device - labelcat, categories = segrunner.get_label_and_category_names() - label_category = [categories.index(c) if c in categories else 0 - for l, c in labelcat] - num_labels, num_categories = (len(n) for n in [labelcat, categories]) - - # One-hot vector of category for each label - labelcat = torch.zeros(num_labels, num_categories, - dtype=torch.long, device=device) - labelcat.scatter_(1, torch.from_numpy(numpy.array(label_category, - dtype='int64')).to(device)[:,None], 1) - # Running bincounts - # activation_counts = {} - assert segloader.batch_size == 1 # category_activation_counts needs this. - category_activation_counts = {} - intersection_counts = {} - label_counts = torch.zeros(num_labels, dtype=torch.long, device=device) - total_counts = torch.zeros(num_categories, dtype=torch.long, device=device) - progress = default_progress() - scale_offset_map = getattr(model, 'scale_offset', None) - upsample_grids = {} - # total_batch_categories = torch.zeros( - # labelcat.shape[1], dtype=torch.long, device=device) - for i, batch in enumerate(progress(segloader, desc='Bincounts')): - seg, batch_label_counts, _, imshape = segrunner.run_and_segment_batch( - batch, model, want_bincount=True, want_rgb=True) - bc = batch_label_counts.cpu() - batch_label_counts = batch_label_counts.to(device) - seg = seg.to(device) - features = model.retained_features() - # Accumulate bincounts and identify nonzeros - label_counts += batch_label_counts[0] - batch_labels = bc[0].nonzero()[:,0] - batch_categories = labelcat[batch_labels].max(0)[0] - total_counts += batch_categories * ( - seg.shape[0] * seg.shape[2] * seg.shape[3]) - for key, value in features.items(): - if key not in upsample_grids: - upsample_grids[key] = upsample_grid(value.shape[2:], - seg.shape[2:], imshape, - scale_offset=scale_offset_map.get(key, None) - if scale_offset_map is not None else None, - dtype=value.dtype, device=value.device) - upsampled = torch.nn.functional.grid_sample(value, - upsample_grids[key], padding_mode='border') - amask = (upsampled > levels[key][None,:,None,None].to( - upsampled.device)) - ac = amask.int().view(amask.shape[1], -1).sum(1) - # if key not in activation_counts: - # activation_counts[key] = ac - # else: - # activation_counts[key] += ac - # The fastest approach: sum over each label separately! 
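# The per-label counting below feeds score_tally_stats; stripped of the
# category bookkeeping, IoU from pixel counts reduces to (invented counts):
import torch
ic_demo = torch.tensor([50., 10., 0.])           # unit on AND label present
pred_demo = torch.tensor([200., 40., 30.])       # unit-on pixels
truth_demo = torch.tensor([120., 120., 120.])    # labeled pixels
iou_demo = ic_demo / (pred_demo + truth_demo - ic_demo + 1e-20)
assert torch.allclose(iou_demo, torch.tensor([50/270, 10/150, 0.]))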
- for label in batch_labels.tolist(): - if label == 0: - continue # ignore the background label - imask = amask * ((seg == label).max(dim=1, keepdim=True)[0]) - ic = imask.int().view(imask.shape[1], -1).sum(1) - if key not in intersection_counts: - intersection_counts[key] = torch.zeros(num_labels, - amask.shape[1], dtype=torch.long, device=device) - intersection_counts[key][label] += ic - # Count activations within images that have category labels. - # Note: This only makes sense with batch-size one - # total_batch_categories += batch_categories - cc = batch_categories[:,None] * ac[None,:] - if key not in category_activation_counts: - category_activation_counts[key] = cc - else: - category_activation_counts[key] += cc - iou_scores = {} - iqr_scores = {} - for k in intersection_counts: - iou_scores[k], iqr_scores[k] = score_tally_stats( - label_category, total_counts, label_counts, - category_activation_counts[k], intersection_counts[k]) - for k in intersection_counts: - numpy.savez(os.path.join(outdir, safe_dir_name(k), 'bincounts.npz'), - iou_scores=iou_scores[k].cpu().numpy(), - iqr_scores=iqr_scores[k].cpu().numpy(), - total_counts=total_counts.cpu().numpy(), - label_counts=label_counts.cpu().numpy(), - category_activation_counts=category_activation_counts[k] - .cpu().numpy(), - intersection_counts=intersection_counts[k].cpu().numpy(), - levels=levels[k].cpu().numpy()) - return (iou_scores, iqr_scores, - total_counts, label_counts, category_activation_counts, - intersection_counts) - -def collect_cond_quantiles(outdir, model, segloader, segrunner): - ''' - Returns maxiou and maxiou_level across the data set, one per layer. - - This is a performance-sensitive function. Best performance is - achieved with a counting scheme which assumes a segloader with - batch_size 1. - ''' - device = next(model.parameters()).device - cached_cond_quantiles = { - layer: load_conditional_quantile_if_present(os.path.join(outdir, - safe_dir_name(layer)), 'cond_quantiles.npz') # on cpu - for layer in model.retained_features() } - label_fracs = load_npy_if_present(outdir, 'label_fracs.npy', 'cpu') - if label_fracs is not None and all( - value is not None for value in cached_cond_quantiles.values()): - return cached_cond_quantiles, label_fracs - - labelcat, categories = segrunner.get_label_and_category_names() - label_category = [categories.index(c) if c in categories else 0 - for l, c in labelcat] - num_labels, num_categories = (len(n) for n in [labelcat, categories]) - - # One-hot vector of category for each label - labelcat = torch.zeros(num_labels, num_categories, - dtype=torch.long, device=device) - labelcat.scatter_(1, torch.from_numpy(numpy.array(label_category, - dtype='int64')).to(device)[:,None], 1) - # Running maxiou - assert segloader.batch_size == 1 # category_activation_counts needs this. 
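# The labelcat construction above is one-hot encoding via scatter_: row i
# gets a 1 in the column named by its category index. Tiny demonstration:
import torch
category_demo = torch.tensor([0, 2, 1])
onehot_demo = torch.zeros(3, 3, dtype=torch.long)
onehot_demo.scatter_(1, category_demo[:, None], 1)
assert onehot_demo.tolist() == [[1, 0, 0], [0, 0, 1], [0, 1, 0]]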
- conditional_quantiles = {} - label_counts = torch.zeros(num_labels, dtype=torch.long, device=device) - pixel_count = 0 - progress = default_progress() - scale_offset_map = getattr(model, 'scale_offset', None) - upsample_grids = {} - common_conditions = set() - if label_fracs is None or label_fracs is 0: - for i, batch in enumerate(progress(segloader, desc='label fracs')): - seg, batch_label_counts, im, _ = segrunner.run_and_segment_batch( - batch, model, want_bincount=True, want_rgb=True) - batch_label_counts = batch_label_counts.to(device) - features = model.retained_features() - # Accumulate bincounts and identify nonzeros - label_counts += batch_label_counts[0] - pixel_count += seg.shape[2] * seg.shape[3] - label_fracs = (label_counts.cpu().float() / pixel_count)[:, None, None] - numpy.save(os.path.join(outdir, 'label_fracs.npy'), label_fracs) - - skip_threshold = 1e-4 - skip_labels = set(i.item() - for i in (label_fracs.view(-1) < skip_threshold).nonzero().view(-1)) - - for layer in progress(model.retained_features().keys(), desc='CQ layers'): - if cached_cond_quantiles.get(layer, None) is not None: - conditional_quantiles[layer] = cached_cond_quantiles[layer] - continue - - for i, batch in enumerate(progress(segloader, desc='Condquant')): - seg, batch_label_counts, _, imshape = ( - segrunner.run_and_segment_batch( - batch, model, want_bincount=True, want_rgb=True)) - bc = batch_label_counts.cpu() - batch_label_counts = batch_label_counts.to(device) - features = model.retained_features() - # Accumulate bincounts and identify nonzeros - label_counts += batch_label_counts[0] - pixel_count += seg.shape[2] * seg.shape[3] - batch_labels = bc[0].nonzero()[:,0] - batch_categories = labelcat[batch_labels].max(0)[0] - cpu_seg = None - value = features[layer] - if layer not in upsample_grids: - upsample_grids[layer] = upsample_grid(value.shape[2:], - seg.shape[2:], imshape, - scale_offset=scale_offset_map.get(layer, None) - if scale_offset_map is not None else None, - dtype=value.dtype, device=value.device) - if layer not in conditional_quantiles: - conditional_quantiles[layer] = RunningConditionalQuantile( - resolution=2048) - upsampled = torch.nn.functional.grid_sample(value, - upsample_grids[layer], padding_mode='border').view( - value.shape[1], -1) - conditional_quantiles[layer].add(('all',), upsampled.t()) - cpu_upsampled = None - for label in batch_labels.tolist(): - if label in skip_labels: - continue - label_key = ('label', label) - if label_key in common_conditions: - imask = (seg == label).max(dim=1)[0].view(-1) - intersected = upsampled[:, imask] - conditional_quantiles[layer].add(('label', label), - intersected.t()) - else: - if cpu_seg is None: - cpu_seg = seg.cpu() - if cpu_upsampled is None: - cpu_upsampled = upsampled.cpu() - imask = (cpu_seg == label).max(dim=1)[0].view(-1) - intersected = cpu_upsampled[:, imask] - conditional_quantiles[layer].add(('label', label), - intersected.t()) - if num_categories > 1: - for cat in batch_categories.nonzero()[:,0]: - conditional_quantiles[layer].add(('cat', cat.item()), - upsampled.t()) - # Move the most common conditions to the GPU. 
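# The condition below uses the classic power-of-two test: i & (i - 1) clears
# the lowest set bit, so it is zero exactly when i is a power of two; the
# rebalancing therefore runs at batches 1, 2, 4, 8, ...
assert [i for i in range(1, 20) if i and not i & (i - 1)] == [1, 2, 4, 8, 16]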
- if i and not i & (i - 1): # if i is a power of 2: - cq = conditional_quantiles[layer] - common_conditions = set(cq.most_common_conditions(64)) - cq.to_('cpu', [k for k in cq.running_quantiles.keys() - if k not in common_conditions]) - # When a layer is done, get it off the GPU - conditional_quantiles[layer].to_('cpu') - - label_fracs = (label_counts.cpu().float() / pixel_count)[:, None, None] - - for cq in conditional_quantiles.values(): - cq.to_('cpu') - - for layer in conditional_quantiles: - save_state_dict(conditional_quantiles[layer], - os.path.join(outdir, safe_dir_name(layer), 'cond_quantiles.npz')) - numpy.save(os.path.join(outdir, 'label_fracs.npy'), label_fracs) - - return conditional_quantiles, label_fracs - - -def collect_maxiou(outdir, model, segloader, segrunner): - ''' - Returns maxiou and maxiou_level across the data set, one per layer. - - This is a performance-sensitive function. Best performance is - achieved with a counting scheme which assumes a segloader with - batch_size 1. - ''' - device = next(model.parameters()).device - conditional_quantiles, label_fracs = collect_cond_quantiles( - outdir, model, segloader, segrunner) - - labelcat, categories = segrunner.get_label_and_category_names() - label_category = [categories.index(c) if c in categories else 0 - for l, c in labelcat] - num_labels, num_categories = (len(n) for n in [labelcat, categories]) - - label_list = [('label', i) for i in range(num_labels)] - category_list = [('all',)] if num_categories <= 1 else ( - [('cat', i) for i in range(num_categories)]) - max_iou, max_iou_level, max_iou_quantile = {}, {}, {} - fracs = torch.logspace(-3, 0, 100) - progress = default_progress() - for layer, cq in progress(conditional_quantiles.items(), desc='Maxiou'): - levels = cq.conditional(('all',)).quantiles(1 - fracs) - denoms = 1 - cq.collected_normalize(category_list, levels) - isects = (1 - cq.collected_normalize(label_list, levels)) * label_fracs - unions = label_fracs + denoms[label_category, :, :] - isects - iou = isects / unions - # TODO: erase any for which threshold is bad - max_iou[layer], level_bucket = iou.max(2) - max_iou_level[layer] = levels[ - torch.arange(levels.shape[0])[None,:], level_bucket] - max_iou_quantile[layer] = fracs[level_bucket] - for layer in model.retained_features(): - numpy.savez(os.path.join(outdir, safe_dir_name(layer), 'max_iou.npz'), - max_iou=max_iou[layer].cpu().numpy(), - max_iou_level=max_iou_level[layer].cpu().numpy(), - max_iou_quantile=max_iou_quantile[layer].cpu().numpy()) - return (max_iou, max_iou_level, max_iou_quantile) - -def collect_iqr(outdir, model, segloader, segrunner): - ''' - Returns iqr and iqr_level. - - This is a performance-sensitive function. Best performance is - achieved with a counting scheme which assumes a segloader with - batch_size 1. 
- ''' - max_iqr, max_iqr_level, max_iqr_quantile, max_iqr_iou = {}, {}, {}, {} - max_iqr_agreement = {} - found_all = True - for layer in model.retained_features(): - filename = os.path.join(outdir, safe_dir_name(layer), 'iqr.npz') - if os.path.isfile(filename): - data = numpy.load(filename) - max_iqr[layer] = torch.from_numpy(data['max_iqr']) - max_iqr_level[layer] = torch.from_numpy(data['max_iqr_level']) - max_iqr_quantile[layer] = torch.from_numpy(data['max_iqr_quantile']) - max_iqr_iou[layer] = torch.from_numpy(data['max_iqr_iou']) - max_iqr_agreement[layer] = torch.from_numpy( - data['max_iqr_agreement']) - else: - found_all = False - if found_all: - return (max_iqr, max_iqr_level, max_iqr_quantile, max_iqr_iou, - max_iqr_agreement) - - - device = next(model.parameters()).device - conditional_quantiles, label_fracs = collect_cond_quantiles( - outdir, model, segloader, segrunner) - - labelcat, categories = segrunner.get_label_and_category_names() - label_category = [categories.index(c) if c in categories else 0 - for l, c in labelcat] - num_labels, num_categories = (len(n) for n in [labelcat, categories]) - - label_list = [('label', i) for i in range(num_labels)] - category_list = [('all',)] if num_categories <= 1 else ( - [('cat', i) for i in range(num_categories)]) - full_mi, full_je, full_iqr = {}, {}, {} - fracs = torch.logspace(-3, 0, 100) - progress = default_progress() - for layer, cq in progress(conditional_quantiles.items(), desc='IQR'): - levels = cq.conditional(('all',)).quantiles(1 - fracs) - truth = label_fracs.to(device) - preds = (1 - cq.collected_normalize(category_list, levels) - )[label_category, :, :].to(device) - cond_isects = 1 - cq.collected_normalize(label_list, levels).to(device) - isects = cond_isects * truth - unions = truth + preds - isects - arr = torch.empty(size=(2, 2) + isects.shape, dtype=isects.dtype, - device=device) - arr[0, 0] = isects - arr[0, 1] = preds - isects - arr[1, 0] = truth - isects - arr[1, 1] = 1 - unions - arr.clamp_(0, 1) - mi = mutual_information(arr) - mi[:,:,-1] = 0 # at the 1.0 quantile should be no MI. - # Don't trust mi when less than label_frac is less than 1e-3, - # because our samples are too small. 
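# Before the small-sample masking below: mutual information and joint entropy
# of one 2x2 unit/label table, computed the same way as the helper functions
# defined further down (illustrative probabilities):
import math
p_demo = [[0.10, 0.10],
          [0.05, 0.75]]   # rows: unit on/off, cols: label present/absent
row_demo = [sum(r) for r in p_demo]
col_demo = [sum(c) for c in zip(*p_demo)]
mi_demo = sum(p_demo[j][k] * math.log(p_demo[j][k] / (row_demo[j] * col_demo[k]))
              for j in range(2) for k in range(2))
je_demo = -sum(p_demo[j][k] * math.log(p_demo[j][k])
               for j in range(2) for k in range(2))
assert round(mi_demo / je_demo, 3) == 0.117   # the information quality ratio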
- mi[label_fracs.view(-1) < 1e-3, :, :] = 0 - je = joint_entropy(arr) - iqr = mi / je - iqr[torch.isnan(iqr)] = 0 # Zero out any 0/0 - full_mi[layer] = mi.cpu() - full_je[layer] = je.cpu() - full_iqr[layer] = iqr.cpu() - del mi, je - agreement = isects + arr[1, 1] - # When optimizing, maximize only over those pairs where the - # unit is positively correlated with the label, and where the - # threshold level is positive - positive_iqr = iqr - positive_iqr[agreement <= 0.8] = 0 - positive_iqr[(levels <= 0.0)[None, :, :].expand(positive_iqr.shape)] = 0 - # TODO: erase any for which threshold is bad - maxiqr, level_bucket = positive_iqr.max(2) - max_iqr[layer] = maxiqr.cpu() - max_iqr_level[layer] = levels.to(device)[ - torch.arange(levels.shape[0])[None,:], level_bucket].cpu() - max_iqr_quantile[layer] = fracs.to(device)[level_bucket].cpu() - max_iqr_agreement[layer] = agreement[ - torch.arange(agreement.shape[0])[:, None], - torch.arange(agreement.shape[1])[None, :], - level_bucket].cpu() - - # Compute the iou that goes with each maximized iqr - matching_iou = (isects[ - torch.arange(isects.shape[0])[:, None], - torch.arange(isects.shape[1])[None, :], - level_bucket] / - unions[ - torch.arange(unions.shape[0])[:, None], - torch.arange(unions.shape[1])[None, :], - level_bucket]) - matching_iou[torch.isnan(matching_iou)] = 0 - max_iqr_iou[layer] = matching_iou.cpu() - for layer in model.retained_features(): - numpy.savez(os.path.join(outdir, safe_dir_name(layer), 'iqr.npz'), - max_iqr=max_iqr[layer].cpu().numpy(), - max_iqr_level=max_iqr_level[layer].cpu().numpy(), - max_iqr_quantile=max_iqr_quantile[layer].cpu().numpy(), - max_iqr_iou=max_iqr_iou[layer].cpu().numpy(), - max_iqr_agreement=max_iqr_agreement[layer].cpu().numpy(), - full_mi=full_mi[layer].cpu().numpy(), - full_je=full_je[layer].cpu().numpy(), - full_iqr=full_iqr[layer].cpu().numpy()) - return (max_iqr, max_iqr_level, max_iqr_quantile, max_iqr_iou, - max_iqr_agreement) - -def mutual_information(arr): - total = 0 - for j in range(arr.shape[0]): - for k in range(arr.shape[1]): - joint = arr[j,k] - ind = arr[j,:].sum(dim=0) * arr[:,k].sum(dim=0) - term = joint * (joint / ind).log() - term[torch.isnan(term)] = 0 - total += term - return total.clamp_(0) - -def joint_entropy(arr): - total = 0 - for j in range(arr.shape[0]): - for k in range(arr.shape[1]): - joint = arr[j,k] - term = joint * joint.log() - term[torch.isnan(term)] = 0 - total += term - return (-total).clamp_(0) - -def information_quality_ratio(arr): - iqr = mutual_information(arr) / joint_entropy(arr) - iqr[torch.isnan(iqr)] = 0 - return iqr - -def collect_covariance(outdir, model, segloader, segrunner): - ''' - Returns label_mean, label_variance, unit_mean, unit_variance, - and cross_covariance across the data set. - - label_mean, label_variance (independent of model): - treating the label as a one-hot, each label's mean and variance. - unit_mean, unit_variance (one per layer): for each feature channel, - the mean and variance of the activations in that channel. - cross_covariance (one per layer): the cross covariance between the - labels and the units in the layer. 
- ''' - device = next(model.parameters()).device - cached_covariance = { - layer: load_covariance_if_present(os.path.join(outdir, - safe_dir_name(layer)), 'covariance.npz', device=device) - for layer in model.retained_features() } - if all(value is not None for value in cached_covariance.values()): - return cached_covariance - labelcat, categories = segrunner.get_label_and_category_names() - label_category = [categories.index(c) if c in categories else 0 - for l, c in labelcat] - num_labels, num_categories = (len(n) for n in [labelcat, categories]) - - # Running covariance - cov = {} - progress = default_progress() - scale_offset_map = getattr(model, 'scale_offset', None) - upsample_grids = {} - for i, batch in enumerate(progress(segloader, desc='Covariance')): - seg, _, _, imshape = segrunner.run_and_segment_batch(batch, model, - want_rgb=True) - features = model.retained_features() - ohfeats = multilabel_onehot(seg, num_labels, ignore_index=0) - # Accumulate bincounts and identify nonzeros - for key, value in features.items(): - if key not in upsample_grids: - upsample_grids[key] = upsample_grid(value.shape[2:], - seg.shape[2:], imshape, - scale_offset=scale_offset_map.get(key, None) - if scale_offset_map is not None else None, - dtype=value.dtype, device=value.device) - upsampled = torch.nn.functional.grid_sample(value, - upsample_grids[key].expand( - (value.shape[0],) + upsample_grids[key].shape[1:]), - padding_mode='border') - if key not in cov: - cov[key] = RunningCrossCovariance() - cov[key].add(upsampled, ohfeats) - for layer in cov: - save_state_dict(cov[layer], - os.path.join(outdir, safe_dir_name(layer), 'covariance.npz')) - return cov - -def multilabel_onehot(labels, num_labels, dtype=None, ignore_index=None): - ''' - Converts a multilabel tensor into a onehot tensor. - - The input labels is a tensor of shape (samples, multilabels, y, x). - The output is a tensor of shape (samples, num_labels, y, x). - If ignore_index is specified, labels with that index are ignored. - Each x in labels should be 0 <= x < num_labels, or x == ignore_index. 
- ''' - assert ignore_index is None or ignore_index <= 0 - if dtype is None: - dtype = torch.float - device = labels.device - chans = num_labels + (-ignore_index if ignore_index else 0) - outshape = (labels.shape[0], chans) + labels.shape[2:] - result = torch.zeros(outshape, device=device, dtype=dtype) - if ignore_index and ignore_index < 0: - labels = labels + (-ignore_index) - result.scatter_(1, labels, 1) - if ignore_index and ignore_index < 0: - result = result[:, -ignore_index:] - elif ignore_index is not None: - result[:, ignore_index] = 0 - return result - -def load_npy_if_present(outdir, filename, device): - filepath = os.path.join(outdir, filename) - if os.path.isfile(filepath): - data = numpy.load(filepath) - return torch.from_numpy(data).to(device) - return 0 - -def load_npz_if_present(outdir, filename, varnames, device): - filepath = os.path.join(outdir, filename) - if os.path.isfile(filepath): - data = numpy.load(filepath) - numpy_result = [data[n] for n in varnames] - return tuple(torch.from_numpy(data).to(device) for data in numpy_result) - return None - -def load_quantile_if_present(outdir, filename, device): - filepath = os.path.join(outdir, filename) - if os.path.isfile(filepath): - data = numpy.load(filepath) - result = RunningQuantile(state=data) - result.to_(device) - return result - return None - -def load_conditional_quantile_if_present(outdir, filename): - filepath = os.path.join(outdir, filename) - if os.path.isfile(filepath): - data = numpy.load(filepath) - result = RunningConditionalQuantile(state=data) - return result - return None - -def load_topk_if_present(outdir, filename, device): - filepath = os.path.join(outdir, filename) - if os.path.isfile(filepath): - data = numpy.load(filepath) - result = RunningTopK(state=data) - result.to_(device) - return result - return None - -def load_covariance_if_present(outdir, filename, device): - filepath = os.path.join(outdir, filename) - if os.path.isfile(filepath): - data = numpy.load(filepath) - result = RunningCrossCovariance(state=data) - result.to_(device) - return result - return None - -def save_state_dict(obj, filepath): - dirname = os.path.dirname(filepath) - os.makedirs(dirname, exist_ok=True) - dic = obj.state_dict() - numpy.savez(filepath, **dic) - -def upsample_grid(data_shape, target_shape, input_shape=None, - scale_offset=None, dtype=torch.float, device=None): - '''Prepares a grid to use with grid_sample to upsample a batch of - features in data_shape to the target_shape. Can use scale_offset - and input_shape to center the grid in a nondefault way: scale_offset - maps feature pixels to input_shape pixels, and it is assumed that - the target_shape is a uniform downsampling of input_shape.''' - # Default is that nothing is resized. - if target_shape is None: - target_shape = data_shape - # Make a default scale_offset to fill the image if there isn't one - if scale_offset is None: - scale = tuple(float(ts) / ds - for ts, ds in zip(target_shape, data_shape)) - offset = tuple(0.5 * s - 0.5 for s in scale) - else: - scale, offset = (v for v in zip(*scale_offset)) - # Handle downsampling for different input vs target shape. 
- if input_shape is not None: - scale = tuple(s * (ts - 1) / (ns - 1) - for s, ns, ts in zip(scale, input_shape, target_shape)) - offset = tuple(o * (ts - 1) / (ns - 1) - for o, ns, ts in zip(offset, input_shape, target_shape)) - # Pytorch needs target coordinates in terms of source coordinates [-1..1] - ty, tx = (((torch.arange(ts, dtype=dtype, device=device) - o) - * (2 / (s * (ss - 1))) - 1) - for ts, ss, s, o, in zip(target_shape, data_shape, scale, offset)) - # Whoa, note that grid_sample reverses the order y, x -> x, y. - grid = torch.stack( - (tx[None,:].expand(target_shape), ty[:,None].expand(target_shape)),2 - )[None,:,:,:].expand((1, target_shape[0], target_shape[1], 2)) - return grid - -def safe_dir_name(filename): - keepcharacters = (' ','.','_','-') - return ''.join(c - for c in filename if c.isalnum() or c in keepcharacters).rstrip() - -bargraph_palette = [ - ('#4B4CBF', '#B6B6F2'), - ('#55B05B', '#B6F2BA'), - ('#50BDAC', '#A5E5DB'), - ('#81C679', '#C0FF9B'), - ('#F0883B', '#F2CFB6'), - ('#D4CF24', '#F2F1B6'), - ('#D92E2B', '#F2B6B6'), - ('#AB6BC6', '#CFAAFF'), -] - -def make_svg_bargraph(labels, heights, categories, - barheight=100, barwidth=12, show_labels=True, filename=None): - # if len(labels) == 0: - # return # Nothing to do - unitheight = float(barheight) / max(max(heights, default=1), 1) - textheight = barheight if show_labels else 0 - labelsize = float(barwidth) - gap = float(barwidth) / 4 - textsize = barwidth + gap - rollup = max(heights, default=1) - textmargin = float(labelsize) * 2 / 3 - leftmargin = 32 - rightmargin = 8 - svgwidth = len(heights) * (barwidth + gap) + 2 * leftmargin + rightmargin - svgheight = barheight + textheight - - # create an SVG XML element - svg = et.Element('svg', width=str(svgwidth), height=str(svgheight), - version='1.1', xmlns='http://www.w3.org/2000/svg') - - # Draw the bar graph - basey = svgheight - textheight - x = leftmargin - # Add units scale on left - if len(heights): - for h in [1, (max(heights) + 1) // 2, max(heights)]: - et.SubElement(svg, 'text', x='0', y='0', - style=('font-family:sans-serif;font-size:%dpx;' + - 'text-anchor:end;alignment-baseline:hanging;' + - 'transform:translate(%dpx, %dpx);') % - (textsize, x - gap, basey - h * unitheight)).text = str(h) - et.SubElement(svg, 'text', x='0', y='0', - style=('font-family:sans-serif;font-size:%dpx;' + - 'text-anchor:middle;' + - 'transform:translate(%dpx, %dpx) rotate(-90deg)') % - (textsize, x - gap - textsize, basey - h * unitheight / 2) - ).text = 'units' - # Draw big category background rectangles - for catindex, (cat, catcount) in enumerate(categories): - if not catcount: - continue - et.SubElement(svg, 'rect', x=str(x), y=str(basey - rollup * unitheight), - width=(str((barwidth + gap) * catcount - gap)), - height = str(rollup*unitheight), - fill=bargraph_palette[catindex % len(bargraph_palette)][1]) - x += (barwidth + gap) * catcount - # Draw small bars as well as 45degree text labels - x = leftmargin - catindex = -1 - catcount = 0 - for label, height in zip(labels, heights): - while not catcount and catindex <= len(categories): - catindex += 1 - catcount = categories[catindex][1] - color = bargraph_palette[catindex % len(bargraph_palette)][0] - et.SubElement(svg, 'rect', x=str(x), y=str(basey-(height * unitheight)), - width=str(barwidth), height=str(height * unitheight), - fill=color) - x += barwidth - if show_labels: - et.SubElement(svg, 'text', x='0', y='0', - style=('font-family:sans-serif;font-size:%dpx;text-anchor:end;'+ - 'transform:translate(%dpx, %dpx) 
rotate(-45deg);') % - (labelsize, x, basey + textmargin)).text = readable(label) - x += gap - catcount -= 1 - # Text labels for each category - x = leftmargin - for cat, catcount in categories: - if not catcount: - continue - et.SubElement(svg, 'text', x='0', y='0', - style=('font-family:sans-serif;font-size:%dpx;text-anchor:end;'+ - 'transform:translate(%dpx, %dpx) rotate(-90deg);') % - (textsize, x + (barwidth + gap) * catcount - gap, - basey - rollup * unitheight + gap)).text = '%d %s' % ( - catcount, readable(cat + ('s' if catcount != 1 else ''))) - x += (barwidth + gap) * catcount - # Output - this is the bare svg. - result = et.tostring(svg) - if filename: - f = open(filename, 'wb') - # When writing to a file a special header is needed. - f.write(''.join([ - '\n', - '\n'] - ).encode('utf-8')) - f.write(result) - f.close() - return result - -readable_replacements = [(re.compile(r[0]), r[1]) for r in [ - (r'-[sc]$', ''), - (r'_', ' '), - ]] - -def readable(label): - for pattern, subst in readable_replacements: - label= re.sub(pattern, subst, label) - return label - -def reverse_normalize_from_transform(transform): - ''' - Crawl around the transforms attached to a dataset looking for a - Normalize transform, and return it a corresponding ReverseNormalize, - or None if no normalization is found. - ''' - if isinstance(transform, torchvision.transforms.Normalize): - return ReverseNormalize(transform.mean, transform.std) - t = getattr(transform, 'transform', None) - if t is not None: - return reverse_normalize_from_transform(t) - transforms = getattr(transform, 'transforms', None) - if transforms is not None: - for t in reversed(transforms): - result = reverse_normalize_from_transform(t) - if result is not None: - return result - return None - -class ReverseNormalize: - ''' - Applies the reverse of torchvision.transforms.Normalize. - ''' - def __init__(self, mean, stdev): - mean = numpy.array(mean) - stdev = numpy.array(stdev) - self.mean = torch.from_numpy(mean)[None,:,None,None].float() - self.stdev = torch.from_numpy(stdev)[None,:,None,None].float() - def __call__(self, data): - device = data.device - return data.mul(self.stdev.to(device)).add_(self.mean.to(device)) - -class ImageOnlySegRunner: - def __init__(self, dataset, recover_image=None): - if recover_image is None: - recover_image = reverse_normalize_from_transform(dataset) - self.recover_image = recover_image - self.dataset = dataset - def get_label_and_category_names(self): - return [('-', '-')], ['-'] - def run_and_segment_batch(self, batch, model, - want_bincount=False, want_rgb=False): - [im] = batch - device = next(model.parameters()).device - if want_rgb: - rgb = self.recover_image(im.clone() - ).permute(0, 2, 3, 1).mul_(255).clamp(0, 255).byte() - else: - rgb = None - # Stubs for seg and bc - seg = torch.zeros(im.shape[0], 1, 1, 1, dtype=torch.long) - bc = torch.ones(im.shape[0], 1, dtype=torch.long) - # Run the model. 
- model(im.to(device)) - return seg, bc, rgb, im.shape[2:] - -class ClassifierSegRunner: - def __init__(self, dataset, recover_image=None): - # The dataset contains explicit segmentations - if recover_image is None: - recover_image = reverse_normalize_from_transform(dataset) - self.recover_image = recover_image - self.dataset = dataset - def get_label_and_category_names(self): - catnames = self.dataset.categories - label_and_cat_names = [(readable(label), - catnames[self.dataset.label_category[i]]) - for i, label in enumerate(self.dataset.labels)] - return label_and_cat_names, catnames - def run_and_segment_batch(self, batch, model, - want_bincount=False, want_rgb=False): - ''' - Runs the dissected model on one batch of the dataset, and - returns a multilabel semantic segmentation for the data. - Given a batch of size (n, c, y, x) the segmentation should - be a (long integer) tensor of size (n, d, y//r, x//r) where - d is the maximum number of simultaneous labels given to a pixel, - and where r is some (optional) resolution reduction factor. - In the segmentation returned, the label `0` is reserved for - the background "no-label". - - In addition to the segmentation, bc, rgb, and shape are returned - where bc is a per-image bincount counting returned label pixels, - rgb is a viewable (n, y, x, rgb) byte image tensor for the data - for visualizations (reversing normalizations, for example), and - shape is the (y, x) size of the data. If want_bincount or - want_rgb are False, those return values may be None. - ''' - im, seg, bc = batch - device = next(model.parameters()).device - if want_rgb: - rgb = self.recover_image(im.clone() - ).permute(0, 2, 3, 1).mul_(255).clamp(0, 255).byte() - else: - rgb = None - # Run the model. - model(im.to(device)) - return seg, bc, rgb, im.shape[2:] - -class GeneratorSegRunner: - def __init__(self, segmenter): - # The segmentations are given by an algorithm - if segmenter is None: - segmenter = UnifiedParsingSegmenter(segsizes=[256], segdiv='quad') - self.segmenter = segmenter - self.num_classes = len(segmenter.get_label_and_category_names()[0]) - def get_label_and_category_names(self): - return self.segmenter.get_label_and_category_names() - def run_and_segment_batch(self, batch, model, - want_bincount=False, want_rgb=False): - ''' - Runs the dissected model on one batch of the dataset, and - returns a multilabel semantic segmentation for the data. - Given a batch of size (n, c, y, x) the segmentation should - be a (long integer) tensor of size (n, d, y//r, x//r) where - d is the maximum number of simultaneous labels given to a pixel, - and where r is some (optional) resolution reduction factor. - In the segmentation returned, the label `0` is reserved for - the background "no-label". - - In addition to the segmentation, bc, rgb, and shape are returned - where bc is a per-image bincount counting returned label pixels, - rgb is a viewable (n, y, x, rgb) byte image tensor for the data - for visualizations (reversing normalizations, for example), and - shape is the (y, x) size of the data. If want_bincount or - want_rgb are False, those return values may be None. 
- ''' - device = next(model.parameters()).device - z_batch = batch[0] - tensor_images = model(z_batch.to(device)) - seg = self.segmenter.segment_batch(tensor_images, downsample=2) - if want_bincount: - index = torch.arange(z_batch.shape[0], - dtype=torch.long, device=device) - bc = (seg + index[:, None, None, None] * self.num_classes).view(-1 - ).bincount(minlength=z_batch.shape[0] * self.num_classes) - bc = bc.view(z_batch.shape[0], self.num_classes) - else: - bc = None - if want_rgb: - images = ((tensor_images + 1) / 2 * 255) - rgb = images.permute(0, 2, 3, 1).clamp(0, 255).byte() - else: - rgb = None - return seg, bc, rgb, tensor_images.shape[2:] diff --git a/spaces/mikeee/radiobee-aligner/radiobee/app.py b/spaces/mikeee/radiobee-aligner/radiobee/app.py deleted file mode 100644 index 8db39bd7832424dbf98733a76b14e401b40a604f..0000000000000000000000000000000000000000 --- a/spaces/mikeee/radiobee-aligner/radiobee/app.py +++ /dev/null @@ -1,38 +0,0 @@ -"""Talk to spaces VM via subprocess.check_output.""" -# pylint: disable=unused-variable, invalid-name - -# import httpx -import subprocess as sp -from shlex import split -import gradio as gr - - -def greet(command): - """Probe vm.""" - try: - out = sp.check_output(split(command), encoding="utf8") - except Exception as e: - out = str(e) - # return "Hello " + name + "!!" - if not (out and out.strip()): - out = "No output, that's all we know." - return out - - -iface = gr.Interface( - fn=greet, - inputs="text", - outputs="text", - examples=[ - "cat /proc/version", - "free # show free memory", - "uname -m", - "df -h .", - "cat /proc/cpuinfo", - ], - title="probe the system", - description="talk to the system via subprocess.check_output ", -) - -# iface.launch(share=True, debug=True) -iface.launch(debug=True) diff --git a/spaces/mikeee/radiobee-dev/tests/test_shuffle_sents.py b/spaces/mikeee/radiobee-dev/tests/test_shuffle_sents.py deleted file mode 100644 index 2c09d0253786f05946d007b64df43a30cd1fc032..0000000000000000000000000000000000000000 --- a/spaces/mikeee/radiobee-dev/tests/test_shuffle_sents.py +++ /dev/null @@ -1,136 +0,0 @@ -"""Test shuffle_sents. - - eps: float = 6 - min_samples: int = 4 - tf_type: str = "linear" - idf_type: Optional[str] = None - dl_type: Optional[str] = None - norm: Optional[str] = None - lang1: Optional[str] = "en" - lang2: Optional[str] = "zh" -""" -from radiobee.seg_text import seg_text -from radiobee.shuffle_sents import shuffle_sents -from radiobee.align_sents import align_sents - -text1 = """`Wretched inmates!' I ejaculated mentally, `you deserve perpetual isolation from your species for your churlish inhospitality. At least, I would not keep my doors barred in the day time. I don't care--I will get in!' So resolved, I grasped the latch and shook it vehemently. Vinegar-faced Joseph projected his head from a round window of the barn.""" -text2 = """“被囚禁的囚犯!”我在精神上被射精,“你应该永远与你的物种隔绝,因为你这种粗鲁的病态。至少,我白天不会锁门,我不在乎,我进去了!”我决心如此,我抓住了门锁,狠狠地摇了一下。醋脸的约瑟夫从谷仓的圆窗朝他的头照射。""" -text3 = """"Elende Insassen! ejakulierte ich im Geiste, "ihr verdient die ewige Isolation von eurer Spezies für eure rüpelhafte Ungastlichkeit. Zumindest würde ich meine Türen tagsüber nicht verriegeln. Das ist mir egal - ich werde reinkommen!' So entschlossen, ergriff ich die Klinke und rüttelte heftig daran. 
Der essiggesichtige Joseph streckte seinen Kopf aus einem runden Fenster der Scheune.""" - - -def test_shuffle_sents_en_zh(): - """Test shuffle_sents_en_zh.""" - sents_en = seg_text(text1) - sents_zh = seg_text(text2) - - lang1 = "en" - lang2 = "zh" - - pairs = shuffle_sents(sents_en, sents_zh) - pairs_ = shuffle_sents(sents_en, sents_zh, lang1=lang1, lang2=lang2) - - # pairs[3] == ('', "I don't care--I will get in!'", '') - assert pairs == pairs_ - - # assert not pairs[3][0] - # after swapping - assert not pairs[3][1] - - -def test_shuffle_sents_en_de(): - """Test shuffle_sents_en_de.""" - sents_en = seg_text(text1) - sents_de = seg_text(text3) - - lang1 = "en" - lang2 = "de" - - pairs = shuffle_sents(sents_en, sents_de) - pairs_ = shuffle_sents(sents_en, sents_de, lang1=lang1, lang2=lang2) - - assert pairs == pairs_ - - # - # assert not pairs[3][0] - _ = """In [218]: pairs[:2] - Out[218]: - [["`Wretched inmates!'", '', ''], - ['I ejaculated mentally, `you deserve perpetual isolation from your species for your churlish inhospitality.', - '"Elende Insassen! ejakulierte ich im Geiste, "ihr verdient die ewige Isolation von eurer Spezies für eure rüpelhafte Ungastlichkeit.', - 0.62]] - """ - assert not pairs[0][1] - assert "mentally" in str(pairs[1]) and "Elende" in str(pairs[1]) - - # [elm[2] for elm in pairs] - # ['', 0.62, 0.72, 0.74, 0.68, 0.79] - if isinstance(pairs[1][2], float): - assert pairs[1][2] > 0.6 - if isinstance(pairs[2][2], float): - assert pairs[2][2] > 0.7 - if isinstance(pairs[3][2], float): - assert pairs[3][2] > 0.7 - if isinstance(pairs[4][2], float): - assert pairs[4][2] > 0.6 - if isinstance(pairs[5][2], float): - assert pairs[5][2] > 0.7 - - -_ = """ -In [232]: shuffle_sents.cmat.round(2) -Out[232]: -array([[ 0.27, 0.62, 0.07, 0.11, 0.02, 0.02], - [ 0.03, 0.09, 0.72, 0.18, 0.07, -0.07], - [ 0.19, 0.07, 0.16, 0.74, -0.01, -0.02], - [-0.02, 0.18, 0.16, 0.06, 0.68, -0.04], - [ 0.02, 0.07, 0.04, -0.04, 0.02, 0.79]], dtype=float32) -pairs[1] -sents_en[1], sents_de[0], shuffle_sents.cmat[0, 1] -['I ejaculated mentally, `you deserve perpetual isolation from your species for your churlish inhospitality.', - '"Elende Insassen! ejakulierte ich im Geiste, "ihr verdient die ewige Isolation von eurer Spezies für eure rüpelhafte Ungastlichkeit.', - 0.62] - -pairs[2] -sents_en[2], sents_de[1], shuffle_sents.cmat[1, 2].round(2) -Out[244]: -('At least, I would not keep my doors barred in the day time.', - 'Zumindest würde ich meine Türen tagsüber nicht verriegeln.', - 0.72) -... - -import mtplotlib -import matplotlib.pyplot as plt -import seaborn as sns - -sns.set() -set_style("darkgrind") -plt.ion() - -ali = shuffle_sents(sents_en, sents_de) -sns.heatmap(shuffle_sents.cmat, cmap="viridis_r").invert_yaxis() -ax = plt.gca() -ax.set_xlabel(shuffle_sents.lang1) -ax.set_ylabel(shuffle_sents.lang2) - -ali == [["`Wretched inmates!'", '', ''], - ['I ejaculated mentally, `you deserve perpetual isolation from your species for your churlish inhospitality.', - '"Elende Insassen! 
ejakulierte ich im Geiste, "ihr verdient die ewige Isolation von eurer Spezies für eure rüpelhafte Ungastlichkeit.', - 0.62], - ['At least, I would not keep my doors barred in the day time.', - 'Zumindest würde ich meine Türen tagsüber nicht verriegeln.', - 0.72], - ["I don't care--I will get in!'", - "Das ist mir egal - ich werde reinkommen!'", - 0.74], - ['So resolved, I grasped the latch and shook it vehemently.', - 'So entschlossen, ergriff ich die Klinke und rüttelte heftig daran.', - 0.68], - ['Vinegar-faced Joseph projected his head from a round window of the barn.', - 'Der essiggesichtige Joseph streckte seinen Kopf aus einem runden Fenster der Scheune.', - 0.79]] - -res1 = align_sents(sents_en, sents_de) -ali = shuffle_sents(sents_en, sents_de) -for idx in range(1, 6): - assert res1[idx] == tuple(ali[idx][:2]) -""" diff --git a/spaces/miracle01/white-emotion-recognition/app.py b/spaces/miracle01/white-emotion-recognition/app.py deleted file mode 100644 index 26f18be352ae39a6eca77faee4fb7f2a5f54f65b..0000000000000000000000000000000000000000 --- a/spaces/miracle01/white-emotion-recognition/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/tahiyacy/emotion-recognition").launch() \ No newline at end of file diff --git a/spaces/mithril-security/blind_chat/.svelte-kit/types/src/routes/settings/$types.d.ts b/spaces/mithril-security/blind_chat/.svelte-kit/types/src/routes/settings/$types.d.ts deleted file mode 100644 index 11802b80d201eeb689785235bcb7a8a567da64f3..0000000000000000000000000000000000000000 --- a/spaces/mithril-security/blind_chat/.svelte-kit/types/src/routes/settings/$types.d.ts +++ /dev/null @@ -1,28 +0,0 @@ -import type * as Kit from '@sveltejs/kit'; - -type Expand = T extends infer O ? { [K in keyof O]: O[K] } : never; -type RouteParams = { } -type RouteId = '/settings'; -type MaybeWithVoid = {} extends T ? T | void : T; -export type RequiredKeys = { [K in keyof T]-?: {} extends { [P in K]: T[K] } ? never : K; }[keyof T]; -type OutputDataShape = MaybeWithVoid> & Partial> & Record> -type EnsureDefined = T extends null | undefined ? {} : T; -type OptionalUnion, A extends keyof U = U extends U ? keyof U : never> = U extends unknown ? { [P in Exclude]?: never } & U : never; -export type Snapshot = Kit.Snapshot; -type PageServerParentData = EnsureDefined; -type PageParentData = EnsureDefined; - -export type PageServerLoad = OutputDataShape> = Kit.ServerLoad; -export type PageServerLoadEvent = Parameters[0]; -type ExcludeActionFailure = T extends Kit.ActionFailure ? never : T extends void ? never : T; -type ActionsSuccess any>> = { [Key in keyof T]: ExcludeActionFailure>>; }[keyof T]; -type ExtractActionFailure = T extends Kit.ActionFailure ? X extends void ? 
never : X : never; -type ActionsFailure any>> = { [Key in keyof T]: Exclude>>, void>; }[keyof T]; -type ActionsExport = typeof import('../../../../../src/routes/settings/+page.server.js').actions -export type SubmitFunction = Kit.SubmitFunction>, Expand>> -export type ActionData = Expand> | null; -export type PageServerData = null; -export type PageData = Expand; -export type Action | void = Record | void> = Kit.Action -export type Actions | void = Record | void> = Kit.Actions -export type RequestEvent = Kit.RequestEvent; \ No newline at end of file diff --git a/spaces/mnauf/detect-bees/val.py b/spaces/mnauf/detect-bees/val.py deleted file mode 100644 index 127acf8100297f6a15e9008ea3eb674550d743b3..0000000000000000000000000000000000000000 --- a/spaces/mnauf/detect-bees/val.py +++ /dev/null @@ -1,406 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Validate a trained YOLOv5 detection model on a detection dataset - -Usage: - $ python val.py --weights yolov5s.pt --data coco128.yaml --img 640 - -Usage - formats: - $ python val.py --weights yolov5s.pt # PyTorch - yolov5s.torchscript # TorchScript - yolov5s.onnx # ONNX Runtime or OpenCV DNN with --dnn - yolov5s_openvino_model # OpenVINO - yolov5s.engine # TensorRT - yolov5s.mlmodel # CoreML (macOS-only) - yolov5s_saved_model # TensorFlow SavedModel - yolov5s.pb # TensorFlow GraphDef - yolov5s.tflite # TensorFlow Lite - yolov5s_edgetpu.tflite # TensorFlow Edge TPU - yolov5s_paddle_model # PaddlePaddle -""" - -import argparse -import json -import os -import sys -from pathlib import Path - -import numpy as np -import torch -from tqdm import tqdm - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[0] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH -ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative - -from models.common import DetectMultiBackend -from utils.callbacks import Callbacks -from utils.dataloaders import create_dataloader -from utils.general import (LOGGER, Profile, check_dataset, check_img_size, check_requirements, check_yaml, - coco80_to_coco91_class, colorstr, increment_path, non_max_suppression, print_args, - scale_boxes, xywh2xyxy, xyxy2xywh) -from utils.metrics import ConfusionMatrix, ap_per_class, box_iou -from utils.plots import output_to_target, plot_images, plot_val_study -from utils.torch_utils import select_device, smart_inference_mode - - -def save_one_txt(predn, save_conf, shape, file): - # Save one txt result - gn = torch.tensor(shape)[[1, 0, 1, 0]] # normalization gain whwh - for *xyxy, conf, cls in predn.tolist(): - xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh - line = (cls, *xywh, conf) if save_conf else (cls, *xywh) # label format - with open(file, 'a') as f: - f.write(('%g ' * len(line)).rstrip() % line + '\n') - - -def save_one_json(predn, jdict, path, class_map): - # Save one JSON result {"image_id": 42, "category_id": 18, "bbox": [258.15, 41.29, 348.26, 243.78], "score": 0.236} - image_id = int(path.stem) if path.stem.isnumeric() else path.stem - box = xyxy2xywh(predn[:, :4]) # xywh - box[:, :2] -= box[:, 2:] / 2 # xy center to top-left corner - for p, b in zip(predn.tolist(), box.tolist()): - jdict.append({ - 'image_id': image_id, - 'category_id': class_map[int(p[5])], - 'bbox': [round(x, 3) for x in b], - 'score': round(p[4], 5)}) - - -def process_batch(detections, labels, iouv): - """ - Return correct prediction matrix - Arguments: - detections (array[N, 6]), x1, y1, x2, y2, conf, class - labels 
(array[M, 5]), class, x1, y1, x2, y2 - Returns: - correct (array[N, 10]), for 10 IoU levels - """ - correct = np.zeros((detections.shape[0], iouv.shape[0])).astype(bool) - iou = box_iou(labels[:, 1:], detections[:, :4]) - correct_class = labels[:, 0:1] == detections[:, 5] - for i in range(len(iouv)): - x = torch.where((iou >= iouv[i]) & correct_class) # IoU > threshold and classes match - if x[0].shape[0]: - matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy() # [label, detect, iou] - if x[0].shape[0] > 1: - matches = matches[matches[:, 2].argsort()[::-1]] - matches = matches[np.unique(matches[:, 1], return_index=True)[1]] - # matches = matches[matches[:, 2].argsort()[::-1]] - matches = matches[np.unique(matches[:, 0], return_index=True)[1]] - correct[matches[:, 1].astype(int), i] = True - return torch.tensor(correct, dtype=torch.bool, device=iouv.device) - - -@smart_inference_mode() -def run( - data, - weights=None, # model.pt path(s) - batch_size=32, # batch size - imgsz=640, # inference size (pixels) - conf_thres=0.001, # confidence threshold - iou_thres=0.6, # NMS IoU threshold - max_det=300, # maximum detections per image - task='val', # train, val, test, speed or study - device='', # cuda device, i.e. 0 or 0,1,2,3 or cpu - workers=8, # max dataloader workers (per RANK in DDP mode) - single_cls=False, # treat as single-class dataset - augment=False, # augmented inference - verbose=False, # verbose output - save_txt=False, # save results to *.txt - save_hybrid=False, # save label+prediction hybrid results to *.txt - save_conf=False, # save confidences in --save-txt labels - save_json=False, # save a COCO-JSON results file - project=ROOT / 'runs/val', # save to project/name - name='exp', # save to project/name - exist_ok=False, # existing project/name ok, do not increment - half=True, # use FP16 half-precision inference - dnn=False, # use OpenCV DNN for ONNX inference - model=None, - dataloader=None, - save_dir=Path(''), - plots=True, - callbacks=Callbacks(), - compute_loss=None, -): - # Initialize/load model and set device - training = model is not None - if training: # called by train.py - device, pt, jit, engine = next(model.parameters()).device, True, False, False # get model device, PyTorch model - half &= device.type != 'cpu' # half precision only supported on CUDA - model.half() if half else model.float() - else: # called directly - device = select_device(device, batch_size=batch_size) - - # Directories - save_dir = increment_path(Path(project) / name, exist_ok=exist_ok) # increment run - (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir - - # Load model - model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half) - stride, pt, jit, engine = model.stride, model.pt, model.jit, model.engine - imgsz = check_img_size(imgsz, s=stride) # check image size - half = model.fp16 # FP16 supported on limited backends with CUDA - if engine: - batch_size = model.batch_size - else: - device = model.device - if not (pt or jit): - batch_size = 1 # export.py models default to batch-size 1 - LOGGER.info(f'Forcing --batch-size 1 square inference (1,3,{imgsz},{imgsz}) for non-PyTorch models') - - # Data - data = check_dataset(data) # check - - # Configure - model.eval() - cuda = device.type != 'cpu' - is_coco = isinstance(data.get('val'), str) and data['val'].endswith(f'coco{os.sep}val2017.txt') # COCO dataset - nc = 1 if single_cls else int(data['nc']) # number of classes - iouv = torch.linspace(0.5, 
0.95, 10, device=device) # iou vector for mAP@0.5:0.95 - niou = iouv.numel() - - # Dataloader - if not training: - if pt and not single_cls: # check --weights are trained on --data - ncm = model.model.nc - assert ncm == nc, f'{weights} ({ncm} classes) trained on different --data than what you passed ({nc} ' \ - f'classes). Pass correct combination of --weights and --data that are trained together.' - model.warmup(imgsz=(1 if pt else batch_size, 3, imgsz, imgsz)) # warmup - pad, rect = (0.0, False) if task == 'speed' else (0.5, pt) # square inference for benchmarks - task = task if task in ('train', 'val', 'test') else 'val' # path to train/val/test images - dataloader = create_dataloader(data[task], - imgsz, - batch_size, - stride, - single_cls, - pad=pad, - rect=rect, - workers=workers, - prefix=colorstr(f'{task}: '))[0] - - seen = 0 - confusion_matrix = ConfusionMatrix(nc=nc) - names = model.names if hasattr(model, 'names') else model.module.names # get class names - if isinstance(names, (list, tuple)): # old format - names = dict(enumerate(names)) - class_map = coco80_to_coco91_class() if is_coco else list(range(1000)) - s = ('%22s' + '%11s' * 6) % ('Class', 'Images', 'Instances', 'P', 'R', 'mAP50', 'mAP50-95') - tp, fp, p, r, f1, mp, mr, map50, ap50, map = 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 - dt = Profile(), Profile(), Profile() # profiling times - loss = torch.zeros(3, device=device) - jdict, stats, ap, ap_class = [], [], [], [] - callbacks.run('on_val_start') - pbar = tqdm(dataloader, desc=s, bar_format='{l_bar}{bar:10}{r_bar}{bar:-10b}') # progress bar - for batch_i, (im, targets, paths, shapes) in enumerate(pbar): - callbacks.run('on_val_batch_start') - with dt[0]: - if cuda: - im = im.to(device, non_blocking=True) - targets = targets.to(device) - im = im.half() if half else im.float() # uint8 to fp16/32 - im /= 255 # 0 - 255 to 0.0 - 1.0 - nb, _, height, width = im.shape # batch size, channels, height, width - - # Inference - with dt[1]: - preds, train_out = model(im) if compute_loss else (model(im, augment=augment), None) - - # Loss - if compute_loss: - loss += compute_loss(train_out, targets)[1] # box, obj, cls - - # NMS - targets[:, 2:] *= torch.tensor((width, height, width, height), device=device) # to pixels - lb = [targets[targets[:, 0] == i, 1:] for i in range(nb)] if save_hybrid else [] # for autolabelling - with dt[2]: - preds = non_max_suppression(preds, - conf_thres, - iou_thres, - labels=lb, - multi_label=True, - agnostic=single_cls, - max_det=max_det) - - # Metrics - for si, pred in enumerate(preds): - labels = targets[targets[:, 0] == si, 1:] - nl, npr = labels.shape[0], pred.shape[0] # number of labels, predictions - path, shape = Path(paths[si]), shapes[si][0] - correct = torch.zeros(npr, niou, dtype=torch.bool, device=device) # init - seen += 1 - - if npr == 0: - if nl: - stats.append((correct, *torch.zeros((2, 0), device=device), labels[:, 0])) - if plots: - confusion_matrix.process_batch(detections=None, labels=labels[:, 0]) - continue - - # Predictions - if single_cls: - pred[:, 5] = 0 - predn = pred.clone() - scale_boxes(im[si].shape[1:], predn[:, :4], shape, shapes[si][1]) # native-space pred - - # Evaluate - if nl: - tbox = xywh2xyxy(labels[:, 1:5]) # target boxes - scale_boxes(im[si].shape[1:], tbox, shape, shapes[si][1]) # native-space labels - labelsn = torch.cat((labels[:, 0:1], tbox), 1) # native-space labels - correct = process_batch(predn, labelsn, iouv) - if plots: - confusion_matrix.process_batch(predn, labelsn) - 
stats.append((correct, pred[:, 4], pred[:, 5], labels[:, 0])) # (correct, conf, pcls, tcls) - - # Save/log - if save_txt: - save_one_txt(predn, save_conf, shape, file=save_dir / 'labels' / f'{path.stem}.txt') - if save_json: - save_one_json(predn, jdict, path, class_map) # append to COCO-JSON dictionary - callbacks.run('on_val_image_end', pred, predn, path, names, im[si]) - - # Plot images - if plots and batch_i < 3: - plot_images(im, targets, paths, save_dir / f'val_batch{batch_i}_labels.jpg', names) # labels - plot_images(im, output_to_target(preds), paths, save_dir / f'val_batch{batch_i}_pred.jpg', names) # pred - - callbacks.run('on_val_batch_end', batch_i, im, targets, paths, shapes, preds) - - # Compute metrics - stats = [torch.cat(x, 0).cpu().numpy() for x in zip(*stats)] # to numpy - if len(stats) and stats[0].any(): - tp, fp, p, r, f1, ap, ap_class = ap_per_class(*stats, plot=plots, save_dir=save_dir, names=names) - ap50, ap = ap[:, 0], ap.mean(1) # AP@0.5, AP@0.5:0.95 - mp, mr, map50, map = p.mean(), r.mean(), ap50.mean(), ap.mean() - nt = np.bincount(stats[3].astype(int), minlength=nc) # number of targets per class - - # Print results - pf = '%22s' + '%11i' * 2 + '%11.3g' * 4 # print format - LOGGER.info(pf % ('all', seen, nt.sum(), mp, mr, map50, map)) - if nt.sum() == 0: - LOGGER.warning(f'WARNING ⚠️ no labels found in {task} set, can not compute metrics without labels') - - # Print results per class - if (verbose or (nc < 50 and not training)) and nc > 1 and len(stats): - for i, c in enumerate(ap_class): - LOGGER.info(pf % (names[c], seen, nt[c], p[i], r[i], ap50[i], ap[i])) - - # Print speeds - t = tuple(x.t / seen * 1E3 for x in dt) # speeds per image - if not training: - shape = (batch_size, 3, imgsz, imgsz) - LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {shape}' % t) - - # Plots - if plots: - confusion_matrix.plot(save_dir=save_dir, names=list(names.values())) - callbacks.run('on_val_end', nt, tp, fp, p, r, f1, ap, ap50, ap_class, confusion_matrix) - - # Save JSON - if save_json and len(jdict): - w = Path(weights[0] if isinstance(weights, list) else weights).stem if weights is not None else '' # weights - anno_json = str(Path(data.get('path', '../coco')) / 'annotations/instances_val2017.json') # annotations json - pred_json = str(save_dir / f"{w}_predictions.json") # predictions json - LOGGER.info(f'\nEvaluating pycocotools mAP... 
saving {pred_json}...') - with open(pred_json, 'w') as f: - json.dump(jdict, f) - - try: # https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocoEvalDemo.ipynb - check_requirements('pycocotools') - from pycocotools.coco import COCO - from pycocotools.cocoeval import COCOeval - - anno = COCO(anno_json) # init annotations api - pred = anno.loadRes(pred_json) # init predictions api - eval = COCOeval(anno, pred, 'bbox') - if is_coco: - eval.params.imgIds = [int(Path(x).stem) for x in dataloader.dataset.im_files] # image IDs to evaluate - eval.evaluate() - eval.accumulate() - eval.summarize() - map, map50 = eval.stats[:2] # update results (mAP@0.5:0.95, mAP@0.5) - except Exception as e: - LOGGER.info(f'pycocotools unable to run: {e}') - - # Return results - model.float() # for training - if not training: - s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else '' - LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}") - maps = np.zeros(nc) + map - for i, c in enumerate(ap_class): - maps[c] = ap[i] - return (mp, mr, map50, map, *(loss.cpu() / len(dataloader)).tolist()), maps, t - - -def parse_opt(): - parser = argparse.ArgumentParser() - parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path') - parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s.pt', help='model path(s)') - parser.add_argument('--batch-size', type=int, default=32, help='batch size') - parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='inference size (pixels)') - parser.add_argument('--conf-thres', type=float, default=0.001, help='confidence threshold') - parser.add_argument('--iou-thres', type=float, default=0.6, help='NMS IoU threshold') - parser.add_argument('--max-det', type=int, default=300, help='maximum detections per image') - parser.add_argument('--task', default='val', help='train, val, test, speed or study') - parser.add_argument('--device', default='', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu') - parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)') - parser.add_argument('--single-cls', action='store_true', help='treat as single-class dataset') - parser.add_argument('--augment', action='store_true', help='augmented inference') - parser.add_argument('--verbose', action='store_true', help='report mAP by class') - parser.add_argument('--save-txt', action='store_true', help='save results to *.txt') - parser.add_argument('--save-hybrid', action='store_true', help='save label+prediction hybrid results to *.txt') - parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels') - parser.add_argument('--save-json', action='store_true', help='save a COCO-JSON results file') - parser.add_argument('--project', default=ROOT / 'runs/val', help='save to project/name') - parser.add_argument('--name', default='exp', help='save to project/name') - parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment') - parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference') - parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference') - opt = parser.parse_args() - opt.data = check_yaml(opt.data) # check YAML - opt.save_json |= opt.data.endswith('coco.yaml') - opt.save_txt |= opt.save_hybrid - print_args(vars(opt)) - return opt - - -def main(opt): - check_requirements(exclude=('tensorboard', 'thop')) - - if opt.task in ('train', 'val', 'test'): # run normally - if opt.conf_thres > 0.001: # https://github.com/ultralytics/yolov5/issues/1466 - LOGGER.info(f'WARNING ⚠️ confidence threshold {opt.conf_thres} > 0.001 produces invalid results') - if opt.save_hybrid: - LOGGER.info('WARNING ⚠️ --save-hybrid will return high mAP from hybrid labels, not from predictions alone') - run(**vars(opt)) - - else: - weights = opt.weights if isinstance(opt.weights, list) else [opt.weights] - opt.half = True # FP16 for fastest results - if opt.task == 'speed': # speed benchmarks - # python val.py --task speed --data coco.yaml --batch 1 --weights yolov5n.pt yolov5s.pt... - opt.conf_thres, opt.iou_thres, opt.save_json = 0.25, 0.45, False - for opt.weights in weights: - run(**vars(opt), plots=False) - - elif opt.task == 'study': # speed vs mAP benchmarks - # python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n.pt yolov5s.pt... - for opt.weights in weights: - f = f'study_{Path(opt.data).stem}_{Path(opt.weights).stem}.txt' # filename to save to - x, y = list(range(256, 1536 + 128, 128)), [] # x axis (image sizes), y axis - for opt.imgsz in x: # img-size - LOGGER.info(f'\nRunning {f} --imgsz {opt.imgsz}...') - r, _, t = run(**vars(opt), plots=False) - y.append(r + t) # results and times - np.savetxt(f, y, fmt='%10.4g') # save - os.system('zip -r study.zip study_*.txt') - plot_val_study(x=x) # plot - - -if __name__ == "__main__": - opt = parse_opt() - main(opt) diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/optim/lr_scheduler/manual_lr_scheduler.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/optim/lr_scheduler/manual_lr_scheduler.py deleted file mode 100644 index 0269a1e2853854745e23b07931294f37b67d0295..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/optim/lr_scheduler/manual_lr_scheduler.py +++ /dev/null @@ -1,110 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import LegacyFairseqLRScheduler, register_lr_scheduler -import logging -import ast - -logger = logging.getLogger(__name__) -logger.setLevel(logging.WARNING) - - -@register_lr_scheduler("manual") -class ManualSchedule(LegacyFairseqLRScheduler): - """Decay the LR on a manual schedule.""" - - def __init__(self, args, optimizer): - super().__init__(args, optimizer) - - self.epoch2lr = self.parse_manuallr_args(args.epoch2lr) - self.update2lr = self.parse_manuallr_args(args.update2lr) - logger.info("@@@ ManualSchedule epoch2lr={}".format(self.epoch2lr)) - logger.info("@@@ ManualSchedule update2lr={}".format(self.update2lr)) - - if 1 in self.epoch2lr: - self.lr = self.epoch2lr[1] - elif 1 in self.update2lr: - self.lr = self.update2lr[1] - else: - self.lr = args.lr[0] - self.optimizer.set_lr(self.lr) # Set the beginning of the epoch. - - def parse_manuallr_args(self, lr_args_str): - lr_dict = ast.literal_eval(lr_args_str.replace(' ', '')) - if not isinstance(lr_dict, dict): - raise ValueError("epoch2lr/update2lr must be abel to evaluated to a dict") - - lr_args = {} - logger.info("@@@ after parsing input dictionary lr_dict = {}".format(lr_dict)) - for key, val in lr_dict.items(): - if "," in key: - for k in key.split(","): - lr_args[int(k)] = float(val) - elif "-" in key: - s = int(key.split("-")[0]) - e = int(key.split("-")[1]) - for k in range(s, e + 1, 1): - lr_args[k] = float(val) - else: - lr_args[int(key)] = float(val) - - return lr_args - - @staticmethod - def add_args(parser): - """Add arguments to the parser for this LR scheduler.""" - # fmt: off - parser.add_argument( - "--epoch2lr", - type=str, - metavar="DICT", - default="{}", - help="a dictionary used to set lr for each epoch manually", - ) - parser.add_argument( - "--update2lr", - type=str, - metavar="DICT", - default="{}", - help="a dictionary used to set lr for each update manually", - ) - # fmt: on - - def state_dict(self): - return {"lr": self.lr} - - def load_state_dict(self, state_dict): - if "lr" in state_dict: - self.lr = state_dict["lr"] - - def get_next_lr(self, epoch): - manual_keys = [k for k in self.epoch2lr if k <= epoch] - if manual_keys: - manual_lr = self.epoch2lr[max(manual_keys)] - else: - logger.warning("@@@ epoch={} does not exist in manual lr input. 
epoch2lr={}...".format( - epoch, list(self.epoch2lr.items())[:min(10, len(self.epoch2lr.keys())-1)] - )) - manual_lr = self.optimizer.get_lr() - return manual_lr - - def step_begin_epoch(self, epoch): - """Update the learning rate at the beginning of the given epoch.""" - self.lr = self.get_next_lr(epoch) - self.optimizer.set_lr(self.lr) - return self.optimizer.get_lr() - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - manual_keys = [k for k in self.update2lr if k <= num_updates] - if manual_keys: - manual_lr = self.update2lr[max(manual_keys)] - else: - logger.warning("epoch={} does not exist in manual lr input update2lr={}...".format( - num_updates, list(self.update2lr.items())[:min(10, len(self.update2lr.keys())-1)])) - manual_lr = self.optimizer.get_lr() - - self.optimizer.set_lr(manual_lr) - return self.optimizer.get_lr() diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/optim/shard.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/optim/shard.py deleted file mode 100644 index 9d7f2eb9e5de6086fe2435d432bde7521ebb8155..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/optim/shard.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Any, Dict - -from fairseq.distributed import utils - - -try: - from fairscale.optim import OSS - - _has_fairscale = True -except ImportError: - _has_fairscale = False - - -def shard_(optimizer, group): - if not _has_fairscale: - raise ImportError( - "\n\nPlease install the fairscale package:" "\n\n pip install fairscale" - ) - - class FairseqOSS(OSS): - @property - def disable_mem_eff_fp16_loading_hack(self): - return True - - def __getattr__(self, name): - if name.startswith("supports") and hasattr(self.optim, name): - return getattr(self.optim, name) - raise AttributeError( - "'FairseqOSS' object has no attribute {0!r}".format(name) - ) - - def broadcast_global_state_dict( - self, state_dict: Dict[str, Any] - ) -> Dict[str, Any]: - """ - Broadcasts the entire state_dict to all other ranks - each rank is responsible to load their own partition of data - """ - return utils.broadcast_object( - state_dict, - src_rank=0, - group=self.group, - ) - - torch_optimizer = optimizer.optimizer - optim_cls = type(torch_optimizer) - - optimizer.optimizer = FairseqOSS( - torch_optimizer.param_groups, - optim_cls, - group=group, - **optimizer.optimizer_config - ) diff --git a/spaces/mueller-franzes/medfusion-app/medical_diffusion/utils/train_utils.py b/spaces/mueller-franzes/medfusion-app/medical_diffusion/utils/train_utils.py deleted file mode 100644 index dbbc73701c6afe3043fb437761c78ca8f4805cc6..0000000000000000000000000000000000000000 --- a/spaces/mueller-franzes/medfusion-app/medical_diffusion/utils/train_utils.py +++ /dev/null @@ -1,88 +0,0 @@ -import copy -import torch -import torch.nn as nn - -class EMAModel(nn.Module): - # See: https://github.com/huggingface/diffusers/blob/3100bc967084964480628ae61210b7eaa7436f1d/src/diffusers/training_utils.py#L42 - """ - Exponential Moving Average of models weights - """ - - def __init__( - self, - model, - update_after_step=0, - inv_gamma=1.0, - power=2 / 3, - min_value=0.0, - max_value=0.9999, - ): - super().__init__() - """ - @crowsonkb's notes on EMA Warmup: - If gamma=1 and power=1, implements a simple average. 
gamma=1, power=2/3 are good values for models you plan - to train for a million or more steps (reaches decay factor 0.999 at 31.6K steps, 0.9999 at 1M steps), - gamma=1, power=3/4 for models you plan to train for less (reaches decay factor 0.999 at 10K steps, 0.9999 - at 215.4k steps). - Args: - inv_gamma (float): Inverse multiplicative factor of EMA warmup. Default: 1. - power (float): Exponential factor of EMA warmup. Default: 2/3. - min_value (float): The minimum EMA decay rate. Default: 0. - """ - - self.averaged_model = copy.deepcopy(model).eval() - self.averaged_model.requires_grad_(False) - - self.update_after_step = update_after_step - self.inv_gamma = inv_gamma - self.power = power - self.min_value = min_value - self.max_value = max_value - - self.averaged_model = self.averaged_model #.to(device=model.device) - - self.decay = 0.0 - self.optimization_step = 0 - - def get_decay(self, optimization_step): - """ - Compute the decay factor for the exponential moving average. - """ - step = max(0, optimization_step - self.update_after_step - 1) - value = 1 - (1 + step / self.inv_gamma) ** -self.power - - if step <= 0: - return 0.0 - - return max(self.min_value, min(value, self.max_value)) - - @torch.no_grad() - def step(self, new_model): - ema_state_dict = {} - ema_params = self.averaged_model.state_dict() - - self.decay = self.get_decay(self.optimization_step) - - for key, param in new_model.named_parameters(): - if isinstance(param, dict): - continue - try: - ema_param = ema_params[key] - except KeyError: - ema_param = param.float().clone() if param.ndim == 1 else copy.deepcopy(param) - ema_params[key] = ema_param - - if not param.requires_grad: - ema_params[key].copy_(param.to(dtype=ema_param.dtype).data) - ema_param = ema_params[key] - else: - ema_param.mul_(self.decay) - ema_param.add_(param.data.to(dtype=ema_param.dtype), alpha=1 - self.decay) - - ema_state_dict[key] = ema_param - - for key, param in new_model.named_buffers(): - ema_state_dict[key] = param - - self.averaged_model.load_state_dict(ema_state_dict, strict=False) - self.optimization_step += 1 \ No newline at end of file diff --git a/spaces/ner4archives/ner4archives-NEL-vizualizer-app/README.md b/spaces/ner4archives/ner4archives-NEL-vizualizer-app/README.md deleted file mode 100644 index f62ccdfb761f2d0882da304421d4b05d6bef1c7f..0000000000000000000000000000000000000000 --- a/spaces/ner4archives/ner4archives-NEL-vizualizer-app/README.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: NER4Archives Visualizer App -emoji: 📜 -colorFrom: indigo -colorTo: gray -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false - - ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - - diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/3ds Emulator V1.1.7 Bios 14 VERIFIED.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/3ds Emulator V1.1.7 Bios 14 VERIFIED.md deleted file mode 100644 index 60dab103aa6e5c199611a6bd3536faec7f316fa3..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/3ds Emulator V1.1.7 Bios 14 VERIFIED.md +++ /dev/null @@ -1,61 +0,0 @@ -
      -

      3ds Emulator V1.1.7 Bios 14: What You Need to Know

      -

If you are a fan of Nintendo games, you might have heard of the 3ds Emulator V1.1.7 Bios 14. This is software that allows you to play Nintendo 3DS games on your PC, Android, or iOS devices. But what exactly is this software, and how can you use it? In this article, we will answer these questions and more.

      -

      What is 3ds Emulator V1.1.7 Bios 14?

      -

      To understand what this software is, we need to break it down into three parts: the 3ds emulator, the bios file, and the version number.

      -

      What is a 3ds emulator?

      -

      An emulator is a program that mimics the functions of another device or system. A 3ds emulator is an emulator that mimics the functions of a Nintendo 3DS, which is a handheld gaming console that can display stereoscopic 3D effects without the need for special glasses.

      -

      A 3ds emulator allows you to play Nintendo 3DS games on your PC, Android, or iOS devices, as if you were playing them on a real console. You can enjoy the same graphics, sound, and gameplay as on a real device, but with some added benefits, such as saving and loading states, customizing controls, and enhancing performance.
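To make the idea concrete, below is a deliberately tiny sketch of the fetch-decode-execute loop at the heart of any emulator, written for a made-up three-instruction machine. It illustrates the concept only; a real 3DS emulator must model a full ARM CPU, memory map, and 3D graphics hardware.

```python
# Toy emulator for an invented machine: one register, three instructions.
# Real emulators run the same loop, just over a full hardware model.

def run(program):
    registers = {"A": 0}
    pc = 0                              # program counter
    while pc < len(program):
        op, arg = program[pc]           # fetch the next instruction
        if op == "LOAD":                # decode and execute it
            registers["A"] = arg
        elif op == "ADD":
            registers["A"] += arg
        elif op == "PRINT":
            print(registers["A"])
        pc += 1                         # move on to the next instruction

run([("LOAD", 40), ("ADD", 2), ("PRINT", None)])  # prints 42
```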

      -

      What is a bios file?

      -

      A bios file is a file that contains the basic input/output system (BIOS) of a device or system. The BIOS is a firmware that controls the booting process, hardware configuration, and communication between different components of a device or system.

      -

      A bios file is essential for running an emulator, as it provides the information and instructions that the emulator needs to mimic the functions of the device or system that it emulates. Without a bios file, an emulator cannot run properly or at all.
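As a rough illustration of the role the bios file plays, here is a hypothetical sketch of what an emulator might do at startup. The file name, expected size, and magic bytes below are invented for the example; any real emulator's checks depend on its own BIOS format.

```python
import os

EXPECTED_SIZE = 65536    # hypothetical fixed size of the BIOS image, in bytes
MAGIC = b"3DS\x00"       # hypothetical signature at the start of the file

def load_bios(path="bios.bin"):
    if not os.path.isfile(path):
        raise FileNotFoundError("no bios file found; the emulator cannot boot")
    with open(path, "rb") as f:
        data = f.read()
    if len(data) != EXPECTED_SIZE or not data.startswith(MAGIC):
        raise ValueError("bios file has the wrong size or format")
    return data  # the emulator would map this into emulated memory
```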

      -

      What is the version 1.1.7 of the 3ds emulator?

      -

The version number of an emulator indicates the updates and improvements that have been made to it over time. Version 1.1.7 of the 3ds emulator is one of the latest versions that its developers have released.

      -

Version 1.1.7 of the 3ds emulator claims to have fixed some bugs and glitches, improved compatibility and performance, added new features and options, and enhanced the user interface and overall experience.

      -

      Why do you need 3ds Emulator V1.1.7 Bios 14?

      -

      Now that you know what this software is, you might wonder why you need it. Here are some reasons why you might want to use this software.

      -

      The benefits of using a 3ds emulator

      -

Using a 3ds emulator has many benefits, such as:

- You can play Nintendo 3DS games on your PC, Android, or iOS devices without having to buy a real console or game cartridges.
- You can save and load your game progress anytime and anywhere, without worrying about losing data or battery life.
- You can customize your controls, screen size, resolution, sound, and other settings to suit your preferences and device specifications.
- You can enhance the graphics, speed, and performance of the games by using filters, shaders, cheats, and other options.
- You can access a large library of games and roms by downloading them from various sources online.

      The features of the version 1.1.7 of the 3ds emulator

      -

Version 1.1.7 of the 3ds emulator has many features that make it one of the best and most popular emulators available. Some of these features are:

      -

- It supports all Nintendo 3DS games, including the latest releases and updates.
- It has a high compatibility rate: most games run smoothly and without errors.
- It has fast and stable performance: games run at full speed, without lag or crashes.
- It has a user-friendly and intuitive interface that is easy to navigate.
- It has multi-language support, including English, Spanish, French, German, Italian, Japanese, Chinese, and more.
- It has a multiplayer mode, so you can play online with other players using the same emulator or different devices.

      The compatibility of the bios file with the emulator

      -

The bios file is compatible with version 1.1.7 of the 3ds emulator: it works with it without problems or conflicts. It also works with other versions of the 3ds emulator, as well as with other emulators that use the same bios file.

      -

The bios file also works across devices and operating systems (Windows, Mac OS X, Linux, Android, iOS) and across processor architectures (x86, x64, ARM).

      -

      How to download and install 3ds Emulator V1.1.7 Bios 14?

      -

      Now that you know why you need this software, you might want to know how to get it. Here are some steps to download and install this software.

      -

      The sources of the emulator and the bios file

      -

      The first step is to find reliable and safe sources for downloading the emulator and the bios file. There are many websites that offer these files for free or for a fee, but not all of them are trustworthy or legitimate.

      -

      Some websites may contain malware or viruses that can harm your device or steal your personal information. Some websites may also provide fake or outdated files that do not work or cause errors.

      -

      To avoid these risks, you should only download from reputable and verified sources that have positive reviews and feedback from other users. Some examples of such sources are:

- The official website of the emulator: [https://www.3dsemulator.org/]
- The official website of the bios file: [https://www.bios-files.com/]
- The official website of Nintendo: [https://www.nintendo.com/]

      The steps to download and install the emulator and the bios file

      -

Once you have chosen a source, follow these steps to download and install the emulator and the bios file:

1. Go to one of the sources mentioned above and find the download link for the emulator and the bios file.
2. Click on the download link and save the files to your device.
3. Extract the files from their compressed format using a program such as WinRAR or 7-Zip.
4. Open the folder where you extracted the files and find the executable file for the emulator (usually named 3dsemulator.exe).
5. Double-click on the executable file to launch the emulator.
6. Go to File > Open Bios File and browse to the folder where you extracted the bios file (usually named bios.bin).
7. Select the bios file and click Open.
8. Wait a few seconds until you see a message saying "Bios Loaded Successfully".
9. Congratulations! You have successfully installed the emulator and the bios file.

      The tips to avoid scams and viruses

      -

The third step is to follow these tips to avoid scams and viruses when downloading and installing this software (a checksum example follows the list):

- Always scan downloaded files with an antivirus program before opening them.
- Always read the terms and conditions before agreeing to anything.
- Always check the file size and format before downloading.
- Always back up your data and settings before installing anything.
- Be careful of pop-ups, ads, and links that ask you to download or install something.
- Research the source and the file before downloading or installing them.
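One concrete safeguard on top of these tips: if the site you download from publishes a checksum, compare it against the SHA-256 hash of the file you actually received before opening it. A minimal helper using only Python's standard library:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the checksum published by the download source, if any:
# print(sha256_of("3dsemulator.zip") == "expected-hex-digest-here")
```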

      How to use 3ds Emulator V1.1.7 Bios 14?

      -

Now that you have downloaded and installed this software, you might want to know how to use it. Here are the steps.

      -

      The requirements for running the emulator

      -

      The first step is to make sure that your device meets the minimum requirements for running the emulator. These are:

• A processor of at least 1 GHz and at least 512 MB of RAM.
• An operating system of Windows XP or higher, Mac OS X 10.6 or higher, Linux, Android 4.0 or higher, or iOS 7.0 or higher.
• A graphics card that supports OpenGL ES 2.0 or higher.
• A sound card that supports DirectSound or OpenAL.
• At least 100 MB of storage space for the emulator and the bios file, and more for the games and roms (a quick way to check some of these requirements is sketched after this list).
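Some of these checks can be automated. A minimal sketch using Python's standard library; it reports the OS, CPU count, and free disk space, but it cannot probe OpenGL or sound-card support:

```python
import os
import platform
import shutil

# Report basic facts relevant to the minimum requirements above.
print("OS:", platform.system(), platform.release())
print("Architecture:", platform.machine())
print("CPU cores:", os.cpu_count())

total, used, free = shutil.disk_usage(".")
print(f"Free disk space: {free / 1024**2:.0f} MB")  # needs at least 100 MB

# Installed RAM is not exposed portably by the standard library;
# a third-party package such as psutil can report it if needed.
```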

      The settings and options of the emulator

      -

      The second step is to adjust the settings and options of the emulator to optimize your gaming experience. These are:

• Go to Options > Emulation Settings and choose the emulation mode that suits your device and game: Hardware, Software, or Hybrid, depending on performance and compatibility.
• Go to Options > Graphics Settings and adjust the screen size, resolution, aspect ratio, filter, shader, anti-aliasing, anisotropic filtering, and more, trading quality against speed.
• Go to Options > Sound Settings and adjust the volume, frequency, latency, reverb, interpolation, and more, balancing clarity and realism.
• Go to Options > Control Settings and customize the keyboard, mouse, touch screen, joystick, or gamepad controls for convenience and accuracy.

      The games and roms that you can play with the emulator

      -

The third step is to find and play the games and roms that you want with the emulator. Here are the steps:

1. Go to one of the sources mentioned above, or any other source that offers Nintendo 3DS games and roms for free or for a fee.
2. Download the games and roms that you want to play to your device.
3. Extract them from their compressed format using a program such as WinRAR or 7-Zip.
4. Open the folder where you extracted them and find the file for the game or rom (usually ending in .3ds or .cia; a script for listing such files is sketched below).
5. Double-click on the file to launch the game or rom with the emulator.
6. Enjoy playing your favorite Nintendo 3DS games on your PC, Android, or iOS devices.
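Once a rom folder grows, a short script can inventory every .3ds or .cia file in it. A minimal sketch; the folder name is a placeholder:

```python
from pathlib import Path

rom_dir = Path("roms")  # placeholder - point this at your own folder

# Collect files by the two extensions the emulator loads.
roms = sorted(
    p for p in rom_dir.rglob("*")
    if p.suffix.lower() in {".3ds", ".cia"}
)
for rom in roms:
    size_mb = rom.stat().st_size / 1024**2
    print(f"{rom.name:40s} {size_mb:8.1f} MB")
```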

      Conclusion

      -

      In conclusion, 3ds Emulator V1.1.7 Bios 14 is a software that allows you to play Nintendo 3DS games on your PC, Android, or iOS devices. It has many benefits, features, and options that make it one of the best emulators available. It is also easy to download, install, and use.

      -

      If you are looking for a way to enjoy Nintendo 3DS games without having to buy a real console or game cartridges, you should definitely try this software. You will not regret it.

      -

      FAQs

      -

      Here are some frequently asked questions about this software:

      -

      Q: Is this software legal?

      -

      A: This software is legal as long as you own a copy of the original Nintendo 3DS console and games that you want to play with it. However, downloading games and roms from unauthorized sources may be illegal in some countries. You should check your local laws before doing so.

      -

      Q: Is this software safe?

      -

      A: This software is safe as long as you download it from reputable and verified sources that do not contain malware or viruses. You should also scan your files with an antivirus program before opening them.

      -

      Q: Is this software free?

      -

      A: This software is free as long as you download it from official sources that do not charge any fees. However, some sources may require you to complete surveys or offers before downloading them. You should be careful of these sources as they may contain scams or viruses. You should only download from sources that you trust and that have positive reviews and feedback from other users.

      -

      Q: How can I update this software?

      -

      A: You can update this software by visiting the official website of the emulator or the bios file and downloading the latest version available. You should also check for updates regularly to enjoy the new features and improvements that the developers make.

      -

      Q: How can I contact the developers of this software?

      -

      A: You can contact the developers of this software by visiting their official website or their social media pages and sending them a message or a comment. You can also report any bugs or issues that you encounter or suggest any ideas or feedback that you have.

      -

      Q: How can I support the developers of this software?

      -

      A: You can support the developers of this software by donating to them via their official website or their social media pages. You can also support them by sharing their software with your friends and family, rating and reviewing their software, and joining their community.

      -
      -
      \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Blaze And Blade Eternal Quest [1998 PC Full ISO] (CRS) DRM Free [NEW].md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Blaze And Blade Eternal Quest [1998 PC Full ISO] (CRS) DRM Free [NEW].md deleted file mode 100644 index 53420b6c39530d218ad2f51a80507b962bf4eb49..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Blaze And Blade Eternal Quest [1998 PC Full ISO] (CRS) DRM Free [NEW].md +++ /dev/null @@ -1,17 +0,0 @@ - -

      Blaze And Blade: Eternal Quest - A Classic Action RPG for PC

      -

      Blaze And Blade: Eternal Quest is a multiplayer action role-playing game that was released in 1998 for the PlayStation and Microsoft Windows. It is the first game in the Blaze And Blade series, which also includes Blaze And Blade Busters, a Japan-only sequel.

      -

      The game allows players to create their own characters from eight different classes, such as fighter, mage, thief, or priest, and choose their gender, appearance, and personality. The game supports up to four players in co-op mode, using either a MultiTap or a cable link for the PlayStation version, or a LAN connection for the PC version.

      -

      Blaze And Blade: Eternal Quest [1998 PC, Full ISO] (CRS) DRM Free


      Download Zip ⇒⇒⇒ https://urlcod.com/2uI9I4



      -

      The game's story revolves around a group of adventurers who discover an ancient lithograph that is said to grant great power to those who can collect the magical gems that fit into it. The adventurers explore various dungeons and locations in the Forbidden Land, formerly known as Foresia, and face enemies, traps, and puzzles along the way.

      -

      The game features a real-time combat system that allows players to switch between characters and use different skills and items. The game also has a unique character growth system that depends on the actions and choices of the players, rather than fixed levels and stats. For example, a character's strength can increase by using heavy weapons or carrying heavy items, while their intelligence can increase by using magic or solving puzzles.

      -

      Blaze And Blade: Eternal Quest is a game that offers a lot of freedom and customization for RPG fans who enjoy exploring and experimenting. The game has a retro charm and a quirky sense of humor that make it stand out from other games of its genre. The game is also DRM-free, meaning that it does not require any activation or online connection to play.

      -

      If you are looking for a classic action RPG that you can play with your friends or by yourself, you might want to check out Blaze And Blade: Eternal Quest. You can download the full ISO file from CRS (Classic Retro Software), a website that specializes in preserving and distributing old PC games. You will need an emulator or a virtual machine to run the game on modern systems.

      -

      Blaze And Blade: Eternal Quest is a game that deserves more recognition and appreciation for its originality and fun factor. It is a hidden gem that you should not miss if you love action RPGs.

      - -

Blaze And Blade: Eternal Quest has a colorful and detailed graphics style that creates a vivid and immersive world. The game's soundtrack was composed by Ken Kojima, who also worked on other T&E Soft games such as Hydlide and Daikoukai Jidai. The music is catchy and atmospheric, and fits well with the game's mood and setting.

      -

      -

      The game's difficulty level can be adjusted by the players, who can choose to play on easy, normal, or hard mode. The game also has a permadeath option, which means that if a character dies, they are gone forever and cannot be revived. This adds an extra challenge and risk to the game, as well as a sense of realism and consequence.

      -

      Blaze And Blade: Eternal Quest is a game that can provide hours of entertainment and replay value, as each playthrough can be different depending on the characters, choices, and actions of the players. The game also has a lot of secrets and hidden content that can be discovered by exploring and experimenting. The game is a true gem that deserves more attention and appreciation from RPG fans.

      -
      -
      \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Corel Photoimpact X3 Activation Code Serial Number ((TOP)).md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Corel Photoimpact X3 Activation Code Serial Number ((TOP)).md deleted file mode 100644 index 9035b9eca06c0a4b984e9147f0e48f25b883808c..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Corel Photoimpact X3 Activation Code Serial Number ((TOP)).md +++ /dev/null @@ -1,95 +0,0 @@ - -

      Corel PhotoImpact X3 Activation Code Serial Number: How to Get It and Why You Need It

      -

      If you are looking for a powerful and easy-to-use photo editing software that combines inspiring photo projects and amazing digital art, you might want to check out Corel PhotoImpact X3. This software can help you make digital photography and image creativity fun, fast, and easy. But before you can enjoy all the features and benefits of Corel PhotoImpact X3, you need to activate it with a serial number and an activation code. In this article, we will explain what these terms mean, how to get them, how to activate your software, what to do if you lose or forget them, and what are some alternatives and competitors of Corel PhotoImpact X3. We will also share some reviews and ratings of Corel PhotoImpact X3 from real users.

      -

      What is Corel PhotoImpact X3?

      -

Corel PhotoImpact X3 is a graphic design software that enables users to view, edit, and manage images using a drag-and-drop interface, effects and filter galleries, and more. It was originally developed by Ulead Systems, which was acquired by Corel Corporation in 2006. Corel PhotoImpact X3 was released in 2008 as the 13th version of the software, and it is still available for purchase from the official website and other online platforms.

      -

      corel photoimpact x3 activation code serial number


      DOWNLOADhttps://urlcod.com/2uIbOE



      -

      Features and benefits of Corel PhotoImpact X3

      -

      Corel PhotoImpact X3 offers a range of features and benefits for photo editing enthusiasts, such as:

      -
• ExpressFix: A handy mode that provides automated enhancements and easy-to-understand options for quickly fixing exposure, color, composition, noise, red-eye, straightening, cropping, and more.
• Corel MediaOne Plus: A digital media management suite that allows users to import, tag, sort, organize, search, share, and create slideshows from their photos and videos.
• EasyPalette: A library of over 800 objects and 2,500 customizable effects that users can drag-and-drop to apply to their images.
• SmartGuide: A feature that shows step-by-step directions on-screen for completing various photo-editing, web design, video, and DVD menu tasks.
• Welcome Screen: A feature that lets users jump directly to browsing photos, photo editing, or creating photo projects.
• RAW File Support: A feature that supports a greater number of camera models and provides brighter previews, improved performance, and easier editing of RAW images.
• Share Button: A feature that offers easy wizards to create fun photo projects and gifts using over 200 customizable templates.

      System requirements and compatibility of Corel PhotoImpact X3

      -

      To run Corel PhotoImpact X3 smoothly on your computer, you need to meet the following minimum system requirements:

      -
        -
      • Windows XP SP2 Home Edition/Professional (32-bit), Windows Vista (32-bit or 64-bit editions), Windows 7 (32-bit or 64-bit editions), Windows 8, Windows 10
      • Intel Pentium III, AMD Athlon 800 or above CPU
      • 512 MB RAM (for Windows XP), 1 GB RAM (for Windows Vista and Windows 7)
      • 750 MB available hard disk space
      • 1024 x 768 resolution, 16-bit color display or higher
      • CD-ROM drive
      • Internet connection required for online activation and web services
      -

      Corel PhotoImpact X3 is compatible with the following file formats:

| Image | Video | Audio |
| --- | --- | --- |
| BMP, CLP, CUR, DCS, DCX, EPS, FAX, FPX, GIF, ICO, IFF, IMG, JP2, JPC, JPG, MAC, MSP, PBM, PCD*, PCT, PCX, PDF*, PEF*, PGM, PIC, PNG, PPM, PSD, PSPImage, PXR, RAS, SCI, SCT, SHG, TGA, TIF/TIFF*, UFO*, UFP*, WBM and WBMP. RAW file support for over 250 camera models including the following file extensions: 3FR*, ARW*, BAY*, CR2*, CRW*, CS1*, DCR*, DNG*, ERF*, FFF*, HDR*, K25*, KDC*, MDC*, MRW*, NEF*, NRW*, ORF*, PEF*, RAF*, RAW*, SR2*, SRF* and X3F* | ASF (MPEG-4), AVI (MPEG-4), DAT (MPEG-1), MOV (MPEG-4), MPEG-1 and MPEG-2 | MIDI and WAV |
      -

      What is a serial number and an activation code?

      -

      A serial number and an activation code are two types of codes that are required to activate Corel PhotoImpact X3. They are different from each other in terms of their purpose and format.

      -

      The difference between a serial number and an activation code

      -

A serial number is a unique alphanumeric code that identifies your copy of Corel PhotoImpact X3. It is usually composed of 18 characters divided into six groups of three. For example: XXX-XXX-XXX-XXX-XXX-XXX. A serial number is provided to you when you purchase Corel PhotoImpact X3 from the official website or other authorized resellers. You need to enter your serial number during the installation process of Corel PhotoImpact X3.

      -

      -

An activation code is a one-time use code that verifies that your copy of Corel PhotoImpact X3 is genuine and not pirated. It is usually composed of 16 characters divided into four groups of four. For example: XXXX-XXXX-XXXX-XXXX. An activation code is generated by Corel after you enter your serial number and some personal information online or by phone. You need to enter your activation code after the installation process of Corel PhotoImpact X3.
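The two formats are easy to tell apart programmatically. A purely illustrative Python sketch; the patterns below encode only the grouping described above, not Corel's actual validation rules:

```python
import re

# Six groups of three characters vs. four groups of four characters.
SERIAL_RE = re.compile(r"^(?:[A-Z0-9]{3}-){5}[A-Z0-9]{3}$")
ACTIVATION_RE = re.compile(r"^(?:[A-Z0-9]{4}-){3}[A-Z0-9]{4}$")

def classify(code: str) -> str:
    code = code.strip().upper()
    if SERIAL_RE.match(code):
        return "shaped like a serial number"
    if ACTIVATION_RE.match(code):
        return "shaped like an activation code"
    return "unrecognized format"

print(classify("ABC-123-DEF-456-GHI-789"))  # serial-number shape
print(classify("AB12-CD34-EF56-GH78"))      # activation-code shape
```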

      -

      How to find your serial number and activation code

      -

      If you purchased Corel PhotoImpact X3 from the official website or other online platforms, you can find your serial number in the confirmation email that was sent to you after your purchase. You can also find your serial number in your Corel account if you registered your product online.

      -

      If you purchased Corel PhotoImpact X3 from a physical store or received it as a gift, you can find your serial number on the back of the CD case or on the sticker inside the DVD box.

      -

      To get your activation code, you need to follow these steps:

      -
1. Launch Corel PhotoImpact X3 and click on Activate Now.
2. Select Activate Online or Activate by Phone.
3. If you choose Activate Online, enter your serial number and some personal information such as your name and email address, then click on Submit. You will receive your activation code on the screen and in your email.
4. If you choose Activate by Phone, call the toll-free number displayed on the screen and provide your serial number and some personal information. You will receive your activation code from the customer service representative.
5. Enter your activation code in the corresponding field and click on Finish.

How to activate Corel PhotoImpact X3 with your serial number and activation code?

[...]

GIMP is suitable for users who want a free and flexible photo editing program that can handle various tasks and supports advanced features such as layers. However, GIMP is also less user-friendly, intuitive, and stable than Corel PhotoImpact X3.

      -

      Reviews and ratings of Corel PhotoImpact X3

      -

      Corel PhotoImpact X3 has received mixed reviews and ratings from users and critics. Some users praised its ease of use, versatility, and affordability, while others criticized its outdated interface, limited support, and lack of updates. Here are some examples of reviews and ratings of Corel PhotoImpact X3 from different sources:

      -

      Pros and cons of Corel PhotoImpact X3

      -

      According to Software Advice, a website that provides reviews and ratings of various software, Corel PhotoImpact X3 has the following pros and cons:

Pros:
• Easy to learn and use
• Offers a lot of features and effects for photo editing
• Has a good balance between power and simplicity
• Has a low price compared to other photo editing software
• Includes Corel MediaOne Plus for managing photos and videos

Cons:
• Has an outdated and cluttered interface
• Does not support some newer file formats and camera models
• Does not receive regular updates or bug fixes
• Has limited customer support and online resources
• Lacks some advanced tools and options for professional photo editing
      -

      User feedback and testimonials of Corel PhotoImpact X3

      -

      According to Amazon, a website that sells and reviews various products, Corel PhotoImpact X3 has an average rating of 4.1 out of 5 stars based on 111 customer reviews. Here are some examples of user feedback and testimonials of Corel PhotoImpact X3 from Amazon:

      -
• "I have been using PhotoImpact for years and love it. It is easy to use and has many features that I use regularly. I especially like the ExpressFix mode that allows me to quickly fix common problems with my photos. I also like the Share button that lets me create fun photo projects and gifts. I would recommend this software to anyone who wants a simple but powerful photo editing software." - 5 stars
• "I bought this software because I needed a photo editing software that could handle RAW files from my camera. However, I was disappointed to find out that it does not support my camera model. I contacted Corel customer support but they were not helpful at all. They told me to wait for an update that might or might not come. I feel like I wasted my money on this software." - 1 star
• "I have been using PhotoImpact for a long time and I still like it. It is not as fancy or complicated as Photoshop, but it does what I need it to do. It is easy to use and has a lot of options for editing photos. It also works well with other Corel products such as PaintShop Pro and VideoStudio. I think it is a great value for the money." - 4 stars
• "I bought this software because I wanted to try something new for photo editing. However, I regret my decision because this software is very outdated and buggy. It crashes frequently, freezes my computer, and corrupts my files. It also has a very poor interface that is hard to navigate and understand. It does not have many features or effects that other photo editing software have. I do not recommend this software to anyone." - 2 stars

      Conclusion

      -

      Corel PhotoImpact X3 is a graphic design software that can help you edit, enhance, and create amazing images with ease. It has a lot of features and benefits that make it suitable for photo editing enthusiasts who want a simple but powerful software. However, it also has some drawbacks such as an outdated interface, limited support, and lack of updates that make it less appealing for professional photographers who want a more advanced and updated software.

      -

      If you want to use Corel PhotoImpact X3, you need to activate it with a serial number and an activation code that you can get from your purchase confirmation email or from Corel customer support. You can activate your software online or offline by following the instructions on the screen.

      -

      If you lose or forget your serial number or activation code, you can contact Corel customer support or use a third-party software or website to retrieve them. However, you should be careful about the security risks or the terms of service violations that might occur.

      -

      If you are not satisfied with Corel PhotoImpact X3 or want to try other photo editing software, you can consider some alternatives and competitors such as Adobe Photoshop or GIMP that offer more features, effects, and updates for photo editing. However, they also have their own pros and cons that you should weigh before making a decision.

      -

      We hope this article has helped you understand more about Corel PhotoImpact X3 activation code serial number and how to get it and why you need it. If you have any questions or feedback, please feel free to leave a comment below.

      -

      FAQs

      -

      Here are some frequently asked questions and answers about Corel PhotoImpact X3 activation code serial number:

      -

      Q: How much does Corel PhotoImpact X3 cost?

      -

      A: Corel PhotoImpact X3 costs $29.99 USD for a one-time purchase from the official website or other online platforms. You can also get a free trial version for 30 days from the official website.

      -

      Q: Can I use Corel PhotoImpact X3 on multiple computers?

      -

      A: Yes, you can use Corel PhotoImpact X3 on up to three computers with the same serial number and activation code. However, you cannot use the software on more than one computer at the same time.

      -

      Q: Can I transfer Corel PhotoImpact X3 to another computer?

      -

      A: Yes, you can transfer Corel PhotoImpact X3 to another computer by uninstalling it from the old computer and installing it on the new computer. You need to enter your serial number and activation code again on the new computer.

      -

      Q: Can I upgrade Corel PhotoImpact X3 to a newer version?

      -

      A: No, Corel PhotoImpact X3 is the latest and final version of the software. There are no updates or upgrades available for Corel PhotoImpact X3.

      -

      Q: Is Corel PhotoImpact X3 compatible with Windows 10?

      -

      A: Yes, Corel PhotoImpact X3 is compatible with Windows 10. However, some users have reported some issues or errors when using the software on Windows 10. You can try to run the software in compatibility mode or as an administrator to solve these problems.

      -
      -
      \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Mainconcept Aac Encoder V1.0.6 Serial 30.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Mainconcept Aac Encoder V1.0.6 Serial 30.md deleted file mode 100644 index 44d943cd2d39a1535f767855ddb6138a016bd866..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Mainconcept Aac Encoder V1.0.6 Serial 30.md +++ /dev/null @@ -1,162 +0,0 @@ - -

      MainConcept AAC Encoder v1.0.6 Serial 30: A Review

      -

      If you are looking for a professional and reliable audio encoding software, you might have heard of MainConcept AAC Encoder v1.0.6 Serial 30. This software is designed to enrich Adobe products with state-of-the-art codec solutions, especially for the Adobe Flash Media Live Encoder 2.5 that only comes with Nellymoser or MP3 audio encoding as standard.

      -

      mainconcept aac encoder v1.0.6 serial 30


      Download Ziphttps://urlcod.com/2uIcwL



      -

      In this article, we will review MainConcept AAC Encoder v1.0.6 Serial 30, and show you what it is, how to install and activate it, how to use it, and what are its pros and cons.

      -

      What is MainConcept AAC Encoder?

      -

      MainConcept AAC Encoder is a plug-in that offers professional AAC encoding within the Adobe Flash Media Live Encoder 2.5. It supports AAC (MPEG-4 AAC & HE Audio), which is the emerging future audio standard that might replace existing ones, such as MP3.

      -

      What is AAC?

      -

      AAC stands for Advanced Audio Coding, which is a lossy audio compression format that provides better sound quality and efficiency than MP3. It is widely supported by popular devices such as Apple iPod, Sony PSP, Sony PS3, Nintendo Wii, various cell phones, etc.

      -

AAC has different profiles, such as Low Complexity (LC), High-Efficiency (HE) v1 and v2, and Extended High-Efficiency (xHE). HE-AAC v1 adds Spectral Band Replication (SBR), and HE-AAC v2 additionally adds Parametric Stereo (PS), to enhance audio quality at low bit rates. The xHE version uses Unified Speech and Audio Coding (USAC) to improve speech and music quality at very low bit rates.

      -

      What are the features of MainConcept AAC Encoder?

      -

      MainConcept AAC Encoder has the following features:

      -
• Fully compliant with the ISO/IEC 14496-3 (MPEG-4 AAC) and ISO/IEC 13818-7 (MPEG-2 AAC) audio stream specifications
• Encodes PCM audio streams to MPEG-2 / MPEG-4 Low Complexity, HE AAC v1 (with SBR), and HE AAC v2 (with Parametric Stereo) audio streams
• Supports common output formats like RAW (no header), ADTS (Audio Data Transport Stream header), and LOAS/LATM (used for multiplexing into MPEG-2 streams)
• Supports different channel layouts from mono and stereo up to 5.1 and 7.1
• Supports different sampling rates from 8 kHz up to 96 kHz
• Supports different bit rates from 8 kbit/s up to 320 kbit/s
• Supports different profiles such as LC, HE, and HEv2
• Supports different modes such as CBR (Constant Bit Rate), VBR (Variable Bit Rate), and ABR (Average Bit Rate)
• Supports different quality levels from 0 (lowest) to 5 (highest)
• Supports metadata such as title, artist, album, genre, etc.
• Supports gapless encoding for seamless playback of consecutive tracks
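The plug-in itself is configured through the Adobe dialogs rather than code, but the same class of parameters (profile, bit rate, sampling rate, channel count, output container) can be illustrated with the open-source ffmpeg tool, which ships its own AAC encoder. A rough stand-in for illustration, not a wrapper around the MainConcept plug-in:

```python
import subprocess

# Encode a PCM WAV file to AAC-LC in an ADTS stream, mirroring the kind
# of parameter choices listed above (the input file name is a placeholder).
cmd = [
    "ffmpeg",
    "-i", "input.wav",        # placeholder input file
    "-c:a", "aac",            # ffmpeg's built-in AAC encoder
    "-profile:a", "aac_low",  # LC profile
    "-b:a", "128k",           # constant bit rate target
    "-ar", "48000",           # sampling rate in Hz
    "-ac", "2",               # stereo
    "output.aac",             # the .aac extension selects ADTS output
]
subprocess.run(cmd, check=True)
```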

      What are the benefits of using MainConcept AAC Encoder?

      -

      MainConcept AAC Encoder has the following benefits:

      -

      -
• It provides high-quality audio encoding for Adobe Flash Media Live Encoder 2.5, which only supports Nellymoser or MP3 audio encoding by default
• It allows you to stream audio files in AAC format, which is compatible with most popular devices and platforms
• It enables you to save bandwidth and storage space by using efficient compression techniques such as SBR and PS
• It gives you flexibility and control over the encoding parameters such as bit rate, mode, quality, etc.
• It supports various input and output formats such as RAW, ADTS, and LOAS/LATM
• It supports various channel layouts and sampling rates for different audio scenarios
• It supports metadata and gapless encoding for better user experience

      How to install and activate MainConcept AAC Encoder v1.0.6 Serial 30?

      -

      In order to install and activate MainConcept AAC Encoder v1.0.6 Serial 30, you need to follow these steps:

      -

      How to download MainConcept AAC Encoder v1.0.6?

      -

      You can download MainConcept AAC Encoder v1.0.6 from the official website of MainConcept. You need to register an account and provide some basic information before you can access the download link. You will also receive an email with the serial number for activation.

      -

      How to install MainConcept AAC Encoder v1.0.6?

      -

      After you download the installer file, you need to run it and follow the instructions on the screen. You will be asked to accept the license agreement, choose the installation folder, and select the components to install. You can choose to install the plug-in for Adobe Flash Media Live Encoder 2.5 or Adobe Premiere Pro CS4/CS5/CS6/CC.

      -

      The installation process will take a few minutes, and you will see a confirmation message when it is done.

      -

      How to activate MainConcept AAC Encoder v1.0.6 with Serial 30?

      -

      To activate MainConcept AAC Encoder v1.0.6 with Serial 30, you need to launch Adobe Flash Media Live Encoder 2.5 or Adobe Premiere Pro CS4/CS5/CS6/CC, depending on which component you installed. You will see a dialog box asking you to enter the serial number that you received by email.

      -

      You need to enter the serial number exactly as it is shown in the email, including the dashes and spaces. Then, click on Activate Online button to complete the activation process. You will see a message saying that your product has been successfully activated.

      -

      If you have any problems with the activation process, you can contact the customer support of MainConcept.

      How to use MainConcept AAC Encoder v1.0.6 Serial 30?

      -

      Once you have installed and activated MainConcept AAC Encoder v1.0.6 Serial 30, you can start using it to encode and stream audio files in AAC format. Here are some tips on how to use it:

      -

      How to encode audio files with MainConcept AAC Encoder v1.0.6?

      -

      To encode audio files with MainConcept AAC Encoder v1.0.6, you need to use Adobe Flash Media Live Encoder 2.5 or Adobe Premiere Pro CS4/CS5/CS6/CC, depending on which component you installed.

      -

      If you use Adobe Flash Media Live Encoder 2.5, you need to do the following steps:

      -
1. Launch Adobe Flash Media Live Encoder 2.5 and select the input source for your audio file.
2. Click on the Audio tab and select MainConcept AAC Encoder from the Format drop-down menu.
3. Click on the Settings button to open the MainConcept AAC Encoder Settings dialog box.
4. Select the output format, channel layout, sampling rate, bit rate, mode, quality, profile, and metadata for your audio file.
5. Click on OK to save the settings and close the dialog box.
6. Click on Start to begin the encoding process.
7. Click on Stop to end the encoding process.

      If you use Adobe Premiere Pro CS4/CS5/CS6/CC, you need to do the following steps:

      -
1. Launch Adobe Premiere Pro CS4/CS5/CS6/CC and import your audio file into the project panel.
2. Drag and drop your audio file into the timeline and edit it as you wish.
3. Select File > Export > Media to open the Export Settings dialog box.
4. Select MainConcept AAC Encoder from the Format drop-down menu.
5. Select the output format, channel layout, sampling rate, bit rate, mode, quality, profile, and metadata for your audio file.
6. Click on Export to begin the encoding process.

      How to configure the encoding settings with MainConcept AAC Encoder v1.0.6?

      -

      To configure the encoding settings with MainConcept AAC Encoder v1.0.6, you need to open the MainConcept AAC Encoder Settings dialog box by clicking on the Settings button in Adobe Flash Media Live Encoder 2.5 or Adobe Premiere Pro CS4/CS5/CS6/CC.

      -

      In this dialog box, you can adjust the following parameters:

| Parameter | Description |
| --- | --- |
| Output Format | The output format of the encoded audio file. You can choose from RAW (no header), ADTS (Audio Data Transport Stream header), or LOAS/LATM (used for multiplexing into MPEG-2 streams). |
| Channel Layout | The channel layout of the encoded audio file. You can choose from mono, stereo, 5.1 up to 7.1. |
| Sampling Rate | The sampling rate of the encoded audio file. You can choose from 8 kHz up to 96 kHz. |
| Bit Rate | The bit rate of the encoded audio file. You can choose from 8 kbit/s up to 320 kbit/s. |
| Mode | The mode of the encoded audio file. You can choose from CBR (Constant Bit Rate), VBR (Variable Bit Rate), or ABR (Average Bit Rate). |
| Quality | The quality level of the encoded audio file. You can choose from 0 (lowest) to 5 (highest). |
| Profile | The profile of the encoded audio file. You can choose from LC (Low Complexity), HE (High-Efficiency), or HEv2 (High-Efficiency version 2). |
| Metadata | The metadata of the encoded audio file. You can enter information such as title, artist, album, genre, etc. |
      -
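When documenting or scripting around a dialog like this, it can help to capture the chosen settings as data and sanity-check them against the allowed ranges. An illustrative Python sketch; the field names are this article's shorthand, not an API exposed by the plug-in:

```python
# Allowed values taken from the table above.
ALLOWED = {
    "output_format": {"RAW", "ADTS", "LOAS/LATM"},
    "mode": {"CBR", "VBR", "ABR"},
    "profile": {"LC", "HE", "HEv2"},
}

def validate(settings: dict) -> list:
    """Return a list of human-readable problems; an empty list means OK."""
    problems = []
    for key, allowed in ALLOWED.items():
        if settings.get(key) not in allowed:
            problems.append(f"{key} must be one of {sorted(allowed)}")
    if not 8_000 <= settings.get("sampling_rate", 0) <= 96_000:
        problems.append("sampling_rate must be between 8000 and 96000 Hz")
    if not 8 <= settings.get("bit_rate_kbps", 0) <= 320:
        problems.append("bit_rate_kbps must be between 8 and 320")
    if settings.get("quality") not in range(6):
        problems.append("quality must be an integer from 0 to 5")
    return problems

example = {"output_format": "ADTS", "mode": "CBR", "profile": "LC",
           "sampling_rate": 48_000, "bit_rate_kbps": 128, "quality": 4}
print(validate(example) or "settings OK")
```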

      How to stream audio files with MainConcept AAC Encoder v1.0.6?

      -

To stream audio files with MainConcept AAC Encoder v1.0.6, you need to use Adobe Flash Media Live Encoder 2.5 or Adobe Premiere Pro CS4/CS5/CS6/CC, depending on which component you installed. If you use Adobe Flash Media Live Encoder 2.5, follow these steps:

1. Launch Adobe Flash Media Live Encoder 2.5 and select the input source for your audio file.
2. Click on the Audio tab and select MainConcept AAC Encoder from the Format drop-down menu.
3. Click on the Settings button to open the MainConcept AAC Encoder Settings dialog box and configure the encoding settings as you wish.
4. Click on OK to save the settings and close the dialog box.
5. Click on the Output tab and select Stream to Flash Media Server from the Output Type drop-down menu.
6. Enter the URL, username, and password of your Flash Media Server in the corresponding fields.
7. Click on Connect to connect to your Flash Media Server.
8. Enter the stream name and select the stream type for your audio file.
9. Click on Start to begin streaming your audio file.
10. Click on Stop to end streaming your audio file.
        -
      1. Launch Adobe Premiere Pro CS4/CS5/CS6/CC and import your audio file into the project panel
      2. -
      3. Drag and drop your audio file into the timeline and edit it as you wish
      4. -
      5. Select File > Export > Media to open the Export Settings dialog box
      6. -
      7. Select MainConcept AAC Encoder from the Format drop-down menu and configure the encoding settings as you wish
      8. -
      9. Select Publish > Adobe Flash Media Server from the left panel and check the box next to it
      10. -
      11. Enter the URL, username, and password of your Flash Media Server in the corresponding fields
      12. -
      13. Enter the stream name and select the stream type for your audio file
      14. -
      15. Click on Queue to add your audio file to the Adobe Media Encoder queue
      16. -
      17. Launch Adobe Media Encoder and click on Start Queue to begin streaming your audio file
      18. -
      19. Click on Stop Queue to end streaming your audio file
      20. -
      -

      What are the pros and cons of MainConcept AAC Encoder v1.0.6 Serial 30?

      -

      MainConcept AAC Encoder v1.0.6 Serial 30 is a powerful and versatile audio encoding software, but it also has some drawbacks. Here are some of the pros and cons of using it:

      -

      Pros of MainConcept AAC Encoder v1.0.6 Serial 30

      -
• It provides high-quality audio encoding for Adobe Flash Media Live Encoder 2.5, which only supports Nellymoser or MP3 audio encoding by default
• It allows you to stream audio files in AAC format, which is compatible with most popular devices and platforms
• It enables you to save bandwidth and storage space by using efficient compression techniques such as SBR and PS
• It gives you flexibility and control over the encoding parameters such as bit rate, mode, quality, etc.
• It supports various input and output formats such as RAW, ADTS, and LOAS/LATM
• It supports various channel layouts and sampling rates for different audio scenarios
• It supports metadata and gapless encoding for better user experience

      Cons of MainConcept AAC Encoder v1.0.6 Serial 30

      -
• It requires a serial number for activation, which might be lost or stolen by hackers or malware
• It is only compatible with Adobe Flash Media Live Encoder 2.5 and Adobe Premiere Pro CS4/CS5/CS6/CC; other versions of those applications are not supported
• It does not support the xHE-AAC profile, the latest version of AAC, which offers better speech and music quality at very low bit rates
• It does not support the Dolby Digital Plus or Dolby Atmos formats, advanced surround sound formats that offer an immersive audio experience
• It might have compatibility issues with devices or platforms that do not support the AAC format or certain profiles or modes of it

      Conclusion

      -

      MainConcept AAC Encoder v1.0.6 Serial 30 is a plug-in that offers professional AAC encoding within the Adobe Flash Media Live Encoder 2.5 or Adobe Premiere Pro CS4/CS5/CS6/CC. It supports AAC (MPEG-4 AAC & HE Audio), which is the emerging future audio standard that might replace existing ones, such as MP3.

      -

      MainConcept AAC Encoder v1.0.6 Serial 30 has many features and benefits, such as high-quality audio encoding, compatibility with popular devices and platforms, bandwidth and storage saving, flexibility and control over encoding parameters, various input and output formats, metadata and gapless encoding support, etc.

      -

      However, MainConcept AAC Encoder v1.0.6 Serial 30 also has some drawbacks, such as requiring a serial number for activation, not being compatible with other versions of Adobe Flash Media Live Encoder or Adobe Premiere Pro, not supporting xHE-AAC profile or Dolby Digital Plus or Dolby Atmos formats, and having some compatibility issues with some devices or platforms that do not support AAC format or certain profiles or modes of AAC format.

      -

      Therefore, MainConcept AAC Encoder v1.0.6 Serial 30 is a great choice for professional and reliable audio encoding software, but it also has some limitations that you should be aware of before using it.

      -

      FAQs

      -

      Here are some frequently asked questions about MainConcept AAC Encoder v1.0.6 Serial 30:

      -
1. Q: Where can I get MainConcept AAC Encoder v1.0.6 Serial 30?
A: You can get it from the official website of MainConcept. You need to register an account and provide some basic information before you can access the download link. You will also receive an email with the serial number for activation.

2. Q: How much does MainConcept AAC Encoder v1.0.6 Serial 30 cost?
A: It costs $180 for a single-user license. You can also get a free 30-day trial version from the official website of MainConcept.

3. Q: What are the system requirements for MainConcept AAC Encoder v1.0.6 Serial 30?
A: The system requirements are as follows:
• Operating System: Windows XP/Vista/7/8/10
• Processor: Pentium IV or higher
• Memory: 512 MB RAM or higher
• Disk Space: 100 MB free disk space or higher
• Software: Adobe Flash Media Live Encoder 2.5 or Adobe Premiere Pro CS4/CS5/CS6/CC

4. Q: How can I contact the customer support of MainConcept?
A: You can contact the customer support of MainConcept by filling out the online form on their website, sending an email to support@mainconcept.com, or calling +49 (0)2408-9383-0.

5. Q: What are some alternatives to MainConcept AAC Encoder v1.0.6 Serial 30?
A: Some alternatives are:
• Fraunhofer FDK AAC Codec Library for Android: A software library that provides high-quality encoding and decoding of AAC audio on Android devices.
• Nero AAC Codec: A freeware tool that allows you to convert WAV files to AAC files and vice versa.
• Foobar2000: A free and advanced audio player that supports various audio formats, including AAC.

      -
      -
      \ No newline at end of file diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/layers/csrc/nms_rotated/nms_rotated.h b/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/layers/csrc/nms_rotated/nms_rotated.h deleted file mode 100644 index 12aca388e47b12dafd20999f2991a9d42f4b904b..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/layers/csrc/nms_rotated/nms_rotated.h +++ /dev/null @@ -1,39 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. -#pragma once -#include - -namespace detectron2 { - -at::Tensor nms_rotated_cpu( - const at::Tensor& dets, - const at::Tensor& scores, - const double iou_threshold); - -#if defined(WITH_CUDA) || defined(WITH_HIP) -at::Tensor nms_rotated_cuda( - const at::Tensor& dets, - const at::Tensor& scores, - const double iou_threshold); -#endif - -// Interface for Python -// inline is needed to prevent multiple function definitions when this header is -// included by different cpps -inline at::Tensor nms_rotated( - const at::Tensor& dets, - const at::Tensor& scores, - const double iou_threshold) { - assert(dets.device().is_cuda() == scores.device().is_cuda()); - if (dets.device().is_cuda()) { -#if defined(WITH_CUDA) || defined(WITH_HIP) - return nms_rotated_cuda( - dets.contiguous(), scores.contiguous(), iou_threshold); -#else - AT_ERROR("Detectron2 is not compiled with GPU support!"); -#endif - } - - return nms_rotated_cpu(dets.contiguous(), scores.contiguous(), iou_threshold); -} - -} // namespace detectron2 diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/utils/collect_env.py b/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/utils/collect_env.py deleted file mode 100644 index 2846d7a56c3efbdec5ccc5a9c4890ff47cff9512..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/utils/collect_env.py +++ /dev/null @@ -1,246 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import importlib -import numpy as np -import os -import re -import subprocess -import sys -from collections import defaultdict -import PIL -import torch -import torchvision -from tabulate import tabulate - -__all__ = ["collect_env_info"] - - -def collect_torch_env(): - try: - import torch.__config__ - - return torch.__config__.show() - except ImportError: - # compatible with older versions of pytorch - from torch.utils.collect_env import get_pretty_env_info - - return get_pretty_env_info() - - -def get_env_module(): - var_name = "DETECTRON2_ENV_MODULE" - return var_name, os.environ.get(var_name, "") - - -def detect_compute_compatibility(CUDA_HOME, so_file): - try: - cuobjdump = os.path.join(CUDA_HOME, "bin", "cuobjdump") - if os.path.isfile(cuobjdump): - output = subprocess.check_output( - "'{}' --list-elf '{}'".format(cuobjdump, so_file), shell=True - ) - output = output.decode("utf-8").strip().split("\n") - arch = [] - for line in output: - line = re.findall(r"\.sm_([0-9]*)\.", line)[0] - arch.append(".".join(line)) - arch = sorted(set(arch)) - return ", ".join(arch) - else: - return so_file + "; cannot find cuobjdump" - except Exception: - # unhandled failure - return so_file - - -def collect_env_info(): - has_gpu = torch.cuda.is_available() # true for both CUDA & ROCM - torch_version = torch.__version__ - - # NOTE that CUDA_HOME/ROCM_HOME could be None even when CUDA runtime libs are functional - from torch.utils.cpp_extension import CUDA_HOME, ROCM_HOME - - has_rocm = False - if (getattr(torch.version, "hip", None) is not None) and (ROCM_HOME is not None): - has_rocm = True - has_cuda = has_gpu and (not has_rocm) - - data = [] - data.append(("sys.platform", sys.platform)) # check-template.yml depends on it - data.append(("Python", sys.version.replace("\n", ""))) - data.append(("numpy", np.__version__)) - - try: - import detectron2 # noqa - - data.append( - ("detectron2", detectron2.__version__ + " @" + os.path.dirname(detectron2.__file__)) - ) - except ImportError: - data.append(("detectron2", "failed to import")) - except AttributeError: - data.append(("detectron2", "imported a wrong installation")) - - try: - import detectron2._C as _C - except ImportError as e: - data.append(("detectron2._C", f"not built correctly: {e}")) - - # print system compilers when extension fails to build - if sys.platform != "win32": # don't know what to do for windows - try: - # this is how torch/utils/cpp_extensions.py choose compiler - cxx = os.environ.get("CXX", "c++") - cxx = subprocess.check_output("'{}' --version".format(cxx), shell=True) - cxx = cxx.decode("utf-8").strip().split("\n")[0] - except subprocess.SubprocessError: - cxx = "Not found" - data.append(("Compiler ($CXX)", cxx)) - - if has_cuda and CUDA_HOME is not None: - try: - nvcc = os.path.join(CUDA_HOME, "bin", "nvcc") - nvcc = subprocess.check_output("'{}' -V".format(nvcc), shell=True) - nvcc = nvcc.decode("utf-8").strip().split("\n")[-1] - except subprocess.SubprocessError: - nvcc = "Not found" - data.append(("CUDA compiler", nvcc)) - if has_cuda and sys.platform != "win32": - try: - so_file = importlib.util.find_spec("detectron2._C").origin - except (ImportError, AttributeError): - pass - else: - data.append( - ("detectron2 arch flags", detect_compute_compatibility(CUDA_HOME, so_file)) - ) - else: - # print compilers that are used to build extension - data.append(("Compiler", _C.get_compiler_version())) - data.append(("CUDA compiler", _C.get_cuda_version())) # cuda or hip - if has_cuda and getattr(_C, "has_cuda", lambda: True)(): - 
data.append( - ("detectron2 arch flags", detect_compute_compatibility(CUDA_HOME, _C.__file__)) - ) - - data.append(get_env_module()) - data.append(("PyTorch", torch_version + " @" + os.path.dirname(torch.__file__))) - data.append(("PyTorch debug build", torch.version.debug)) - try: - data.append(("torch._C._GLIBCXX_USE_CXX11_ABI", torch._C._GLIBCXX_USE_CXX11_ABI)) - except Exception: - pass - - if not has_gpu: - has_gpu_text = "No: torch.cuda.is_available() == False" - else: - has_gpu_text = "Yes" - data.append(("GPU available", has_gpu_text)) - if has_gpu: - devices = defaultdict(list) - for k in range(torch.cuda.device_count()): - cap = ".".join((str(x) for x in torch.cuda.get_device_capability(k))) - name = torch.cuda.get_device_name(k) + f" (arch={cap})" - devices[name].append(str(k)) - for name, devids in devices.items(): - data.append(("GPU " + ",".join(devids), name)) - - if has_rocm: - msg = " - invalid!" if not (ROCM_HOME and os.path.isdir(ROCM_HOME)) else "" - data.append(("ROCM_HOME", str(ROCM_HOME) + msg)) - else: - try: - from torch.utils.collect_env import get_nvidia_driver_version, run as _run - - data.append(("Driver version", get_nvidia_driver_version(_run))) - except Exception: - pass - msg = " - invalid!" if not (CUDA_HOME and os.path.isdir(CUDA_HOME)) else "" - data.append(("CUDA_HOME", str(CUDA_HOME) + msg)) - - cuda_arch_list = os.environ.get("TORCH_CUDA_ARCH_LIST", None) - if cuda_arch_list: - data.append(("TORCH_CUDA_ARCH_LIST", cuda_arch_list)) - data.append(("Pillow", PIL.__version__)) - - try: - data.append( - ( - "torchvision", - str(torchvision.__version__) + " @" + os.path.dirname(torchvision.__file__), - ) - ) - if has_cuda: - try: - torchvision_C = importlib.util.find_spec("torchvision._C").origin - msg = detect_compute_compatibility(CUDA_HOME, torchvision_C) - data.append(("torchvision arch flags", msg)) - except (ImportError, AttributeError): - data.append(("torchvision._C", "Not found")) - except AttributeError: - data.append(("torchvision", "unknown")) - - try: - import fvcore - - data.append(("fvcore", fvcore.__version__)) - except (ImportError, AttributeError): - pass - - try: - import iopath - - data.append(("iopath", iopath.__version__)) - except (ImportError, AttributeError): - pass - - try: - import cv2 - - data.append(("cv2", cv2.__version__)) - except (ImportError, AttributeError): - data.append(("cv2", "Not found")) - env_str = tabulate(data) + "\n" - env_str += collect_torch_env() - return env_str - - -def test_nccl_ops(): - num_gpu = torch.cuda.device_count() - if os.access("/tmp", os.W_OK): - import torch.multiprocessing as mp - - dist_url = "file:///tmp/nccl_tmp_file" - print("Testing NCCL connectivity ... this should not hang.") - mp.spawn(_test_nccl_worker, nprocs=num_gpu, args=(num_gpu, dist_url), daemon=False) - print("NCCL succeeded.") - - -def _test_nccl_worker(rank, num_gpu, dist_url): - import torch.distributed as dist - - dist.init_process_group(backend="NCCL", init_method=dist_url, rank=rank, world_size=num_gpu) - dist.barrier(device_ids=[rank]) - - -if __name__ == "__main__": - try: - from detectron2.utils.collect_env import collect_env_info as f - - print(f()) - except ImportError: - print(collect_env_info()) - - if torch.cuda.is_available(): - num_gpu = torch.cuda.device_count() - for k in range(num_gpu): - device = f"cuda:{k}" - try: - x = torch.tensor([1, 2.0], dtype=torch.float32) - x = x.to(device) - except Exception as e: - print( - f"Unable to copy tensor to device={device}: {e}. " - "Your CUDA environment is broken." 
- ) - if num_gpu > 1: - test_nccl_ops() diff --git a/spaces/niro-private/chatCSV/files.py b/spaces/niro-private/chatCSV/files.py deleted file mode 100644 index 061ae907ec528e1f4fead599877eec57642a2859..0000000000000000000000000000000000000000 --- a/spaces/niro-private/chatCSV/files.py +++ /dev/null @@ -1,191 +0,0 @@ -import os -from typing import ( - Any, - Union, -) -import zipfile -import streamlit as st -from streamlit.runtime.uploaded_file_manager import ( - UploadedFile, - UploadedFileRec, - UploadedFileManager, -) -from streamlit.runtime.scriptrunner import get_script_run_ctx -from supabase.client import Client -from langchain.vectorstores.supabase import SupabaseVectorStore -from components_keys import ComponentsKeys -from loaders.audio import process_audio -from loaders.txt import process_txt -from loaders.csv import process_csv -from loaders.markdown import process_markdown -from loaders.pdf import process_pdf -from loaders.html import ( - create_html_file, - delete_tempfile, - get_html, - process_html, -) -from loaders.powerpoint import process_powerpoint -from loaders.docx import process_docx -from utils import compute_sha1_from_content - - -ctx = get_script_run_ctx() -manager = UploadedFileManager() -file_processors = { - ".txt": process_txt, - ".csv": process_csv, - ".md": process_markdown, - ".markdown": process_markdown, - ".m4a": process_audio, - ".mp3": process_audio, - ".webm": process_audio, - ".mp4": process_audio, - ".mpga": process_audio, - ".wav": process_audio, - ".mpeg": process_audio, - ".pdf": process_pdf, - ".html": process_html, - ".pptx": process_powerpoint, - ".docx": process_docx -} - -def file_uploader(supabase, vector_store): - # Omit zip file support if the `st.secrets.self_hosted` != "true" because - # a zip file can consist of multiple files so the limit on 1 file uploaded - # at a time in the demo can be circumvented. - accepted_file_extensions = list(file_processors.keys()) - accept_multiple_files = st.secrets.self_hosted == "true" - if accept_multiple_files: - accepted_file_extensions += [".zip"] - - files = st.file_uploader( - "**Upload a file**", - accept_multiple_files=accept_multiple_files, - type=accepted_file_extensions, - key=ComponentsKeys.FILE_UPLOADER, - ) - if st.secrets.self_hosted == "false": - st.markdown("**In demo mode, the max file size is 1MB**") - if st.button("Add to Database"): - # Single file upload - if isinstance(files, UploadedFile): - filter_file(files, supabase, vector_store) - # Multiple files upload - elif isinstance(files, list): - for file in files: - filter_file(file, supabase, vector_store) - -def file_already_exists(supabase, file): - file_sha1 = compute_sha1_from_content(file.getvalue()) - response = supabase.table("documents").select("id").eq("metadata->>file_sha1", file_sha1).execute() - return len(response.data) > 0 - -def file_to_uploaded_file(file: Any) -> Union[None, UploadedFile]: - """Convert a file to a streamlit `UploadedFile` object. - - This allows us to unzip files and treat them the same way - streamlit treats files uploaded through the file uploader. - - Parameters - --------- - file : Any - The file. Can be any file supported by this app. - - Returns - ------- - Union[None, UploadedFile] - The file converted to a streamlit `UploadedFile` object. - Returns `None` if the script context cannot be grabbed. 
- """ - - if ctx is None: - print("script context not found, skipping uploading file:", file.name) - return - - file_extension = os.path.splitext(file.name)[-1] - file_name = file.name - file_data = file.read() - # The file manager will automatically assign an ID so pass `None` - # Reference: https://github.com/streamlit/streamlit/blob/9a6ce804b7977bdc1f18906d1672c45f9a9b3398/lib/streamlit/runtime/uploaded_file_manager.py#LL98C6-L98C6 - uploaded_file_rec = UploadedFileRec(None, file_name, file_extension, file_data) - uploaded_file_rec = manager.add_file( - ctx.session_id, - ComponentsKeys.FILE_UPLOADER, - uploaded_file_rec, - ) - return UploadedFile(uploaded_file_rec) - -def filter_zip_file( - file: UploadedFile, - supabase: Client, - vector_store: SupabaseVectorStore, -) -> None: - """Unzip the zip file then filter each unzipped file. - - Parameters - ---------- - file : UploadedFile - The uploaded file from the file uploader. - supabase : Client - The supabase client. - vector_store : SupabaseVectorStore - The vector store in the database. - """ - - with zipfile.ZipFile(file, "r") as z: - unzipped_files = z.namelist() - for unzipped_file in unzipped_files: - with z.open(unzipped_file, "r") as f: - filter_file(f, supabase, vector_store) - -def filter_file(file, supabase, vector_store): - # Streamlit file uploads are of type `UploadedFile` which has the - # necessary methods and attributes for this app to work. - if not isinstance(file, UploadedFile): - file = file_to_uploaded_file(file) - - file_extension = os.path.splitext(file.name)[-1] - if file_extension == ".zip": - filter_zip_file(file, supabase, vector_store) - return True - - if file_already_exists(supabase, file): - st.write(f"😎 {file.name} is already in the database.") - return False - - if file.size < 1: - st.write(f"💨 {file.name} is empty.") - return False - - if file_extension in file_processors: - if st.secrets.self_hosted == "false": - file_processors[file_extension](vector_store, file, stats_db=supabase) - else: - file_processors[file_extension](vector_store, file, stats_db=None) - st.write(f"✅ {file.name} ") - return True - - st.write(f"❌ {file.name} is not a valid file type.") - return False - -def url_uploader(supabase, vector_store): - url = st.text_area("**Add an url**",placeholder="vanti.ai") - button = st.button("Add the URL to the database") - - if button: - if not st.session_state["overused"]: - html = get_html(url) - if html: - st.write(f"Getting content ... {url} ") - try: - file, temp_file_path = create_html_file(url, html) - except UnicodeEncodeError as e: - st.write(f"❌ Error encoding character: {e}") - file, temp_file_path = create_html_file(url, html) - ret = filter_file(file, supabase, vector_store) - delete_tempfile(temp_file_path, url, ret) - else: - st.write(f"❌ Failed to access to {url} .") - else: - st.write("You have reached your daily limit. Please come back later or self host the solution.") \ No newline at end of file diff --git a/spaces/nomic-ai/BelleGroup_train_1M_CN/index.html b/spaces/nomic-ai/BelleGroup_train_1M_CN/index.html deleted file mode 100644 index 221dc1b90167be35e486b264c5a56cf1cd1dd3f3..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/BelleGroup_train_1M_CN/index.html +++ /dev/null @@ -1,42 +0,0 @@ - - - - BelleGroup/train_1M_CN - - - - -
      - -
      - - - \ No newline at end of file diff --git a/spaces/nomic-ai/nomic-ai_gpt4all-j-prompt-generations/index.html b/spaces/nomic-ai/nomic-ai_gpt4all-j-prompt-generations/index.html deleted file mode 100644 index e3588b476f4862a682a483659f3408fd7dd928e7..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/nomic-ai_gpt4all-j-prompt-generations/index.html +++ /dev/null @@ -1,42 +0,0 @@ - - - - nomic-ai/gpt4all-j-prompt-generations - - - - -
      - -
      - - - \ No newline at end of file diff --git a/spaces/nomic-ai/wikiann/style.css b/spaces/nomic-ai/wikiann/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/wikiann/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/ntt123/vietnam-male-voice-wavegru-tts/sparse_matmul/vector/cachealignedvector_test.cc b/spaces/ntt123/vietnam-male-voice-wavegru-tts/sparse_matmul/vector/cachealignedvector_test.cc deleted file mode 100644 index 245c64d7dc3de69e5e37c4445c9ce4c599b28ab0..0000000000000000000000000000000000000000 --- a/spaces/ntt123/vietnam-male-voice-wavegru-tts/sparse_matmul/vector/cachealignedvector_test.cc +++ /dev/null @@ -1,405 +0,0 @@ -// Copyright 2021 Google LLC -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -#include "sparse_matmul/vector/cache_aligned_vector.h" - -#if defined __aarch64__ -#include -#endif - -#include - -#include -#include -#include -#include -#include - -#include "gmock/gmock.h" -#include "gtest/gtest.h" -#include "sparse_matmul/numerics/test_utils.h" -#include "sparse_matmul/os/coop_threads.h" - -namespace csrblocksparse { - -const float kExpRelTolerance = .03f; // 3% relative -#ifdef SIGMOID_AS_TANH -const float kSigmoidRelTolerance = .09f; // 9.0% relative -const float kSigmoidAbsTolerance = .003f; -#else -const float kSigmoidRelTolerance = .031f; // 3.1% relative -const float kSigmoidAbsTolerance = .006f; -#endif -const float kTanhRelTolerance = .014f; // 1.4% relative -const float kTanhAbsTolerance = .00525f; - -TEST(Transcendentals, CacheAlignedVectorExp) { - const int kTestSize = 1 << 16; - CacheAlignedVector values(kTestSize); - values.FillRandom(); - CacheAlignedVector values_ref = values; - - values.Exp(); - for (int i = 0; i < kTestSize; ++i) { - float exact_val = std::exp(values_ref[i]); - float rel_diff = RelDiff(exact_val, values[i]); - - EXPECT_LT(rel_diff, kExpRelTolerance) - << exact_val << " " << values[i] << " " << values_ref[i]; - } -} - -TEST(Transcendentals, CacheAlignedVectorSigmoid) { - const int kTestSize = 1 << 16; - CacheAlignedVector values(kTestSize); - values.FillRandom(); - CacheAlignedVector values_ref = values; - - values.Sigmoid(); - for (int i = 0; i < kTestSize; ++i) { - float exact_val = 1. / (1. 
+ std::exp(-values_ref[i])); - float rel_diff = RelDiff(exact_val, values[i]); - - EXPECT_LT(rel_diff, kSigmoidRelTolerance) - << exact_val << " " << values[i] << " " << values_ref[i]; - EXPECT_NEAR(values[i], exact_val, kSigmoidAbsTolerance) << values_ref[i]; - } -} - -TEST(Transcendentals, CacheAlignedVectorTanh) { - const int kTestSize = 1 << 16; - CacheAlignedVector values(kTestSize); - values.FillRandom(); - CacheAlignedVector values_ref = values; - - values.Tanh(); - for (int i = 0; i < kTestSize; ++i) { - float exact_val = std::tanh(values_ref[i]); - float rel_diff = RelDiff(exact_val, values[i]); - - EXPECT_LT(rel_diff, kTanhRelTolerance) - << exact_val << " " << values[i] << " " << values_ref[i]; - EXPECT_NEAR(values[i], exact_val, kTanhAbsTolerance) << values_ref[i]; - } -} - -// Uniformly sample logits and check that the resulting sample choices are -// also (nearly) uniformly distributed. -TEST(Sampling, Random) { - const int kSize = 256; - - CacheAlignedVector logits(kSize); - logits.FillZero(); - - double histogram[kSize] = {}; - - const int kIterations = 10000; - for (int i = 0; i < kIterations; ++i) { - histogram[logits.Sample()]++; - } - - for (int i = 0; i < kSize; ++i) { - // .002 is an empirical bound - EXPECT_GT(histogram[i] / kIterations, 1. / kSize - .002f); - EXPECT_LT(histogram[i] / kIterations, 1. / kSize + .002f); - } -} - -// Put (nearly) all the probability mass on one bin and make sure only that bin -// is chosen. -TEST(Sampling, FixedDistribution) { - const int kSize = 256; - - CacheAlignedVector logits(kSize); - - int histogram[kSize] = {}; - - const int kIterations = 1000; - const int kIndex = 3; - const int kAllProbabilityMass = 10; - const int kNoProbabilityMass = -10; - for (int i = 0; i < kIterations; ++i) { - for (int i = 1; i <= kSize; ++i) { - logits.data()[i - 1] = - i == (kIndex + 1) ? kAllProbabilityMass : kNoProbabilityMass; - } - - histogram[logits.Sample()]++; - } - - EXPECT_EQ(histogram[kIndex], 1000); -} - -// Put (nearly) all the probability mass on one bin outside the target range, -// and make sure that bin is not chosen. -TEST(ScalarSample, ThreadedMasked) { - const int kSize = 256; - const int mindex = 2; - const int maxdex = 3; - const int kNumThreads = 4; - const int kIterations = 1000; - const int kIndex = 3; - const int kMostProbabilityMass = 3; - const int kLittleProbabilityMass = -3; - - CacheAlignedVector logits(kSize); - std::vector> tmp_vectors; - std::vector generators(kNumThreads); - - for (int i = 0; i < kNumThreads; ++i) { - tmp_vectors.emplace_back(kSize); - } - - for (int i = 0; i < kSize; ++i) { - logits.data()[i] = - (i + 1) == (kIndex + 1) ? kMostProbabilityMass : kLittleProbabilityMass; - } - - std::vector> histograms; - for (int i = 0; i < kNumThreads; ++i) { - histograms.emplace_back(kSize); - } - - auto f = [&](csrblocksparse::SpinBarrier* /*barrier*/, int tid) { - for (int i = 0; i < kIterations; ++i) { - histograms[tid][logits.ScalarSample( - 1.f, &generators[tid], &tmp_vectors[tid], 0, mindex, maxdex)]++; - } - }; - - csrblocksparse::LaunchOnThreadsWithBarrier(kNumThreads, f); - - // Every thread should generate the exact same set of samples. - for (int i = 0; i < kSize; ++i) { - int val = histograms[0][i]; - for (int tid = 1; tid < kNumThreads; ++tid) { - EXPECT_EQ(val, histograms[tid][i]); - } - } - - // The most probable sample should be the only one we're sampling. 
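-  // (With sampling restricted to the range [mindex, maxdex), bin mindex is
-  // the only admissible outcome here, even though kIndex holds the most
-  // probability mass overall.)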
- for (int tid = 0; tid < kNumThreads; ++tid) { - EXPECT_EQ(std::distance(histograms[tid].begin(), - std::max_element(histograms[tid].begin(), - histograms[tid].end())), - mindex); - } -} - -TEST(Sampling, Threaded) { - const int kSize = 256; - const int kNumThreads = 4; - const int kIterations = 1000; - const int kIndex = 3; - const int kMostProbabilityMass = 3; - const int kLittleProbabilityMass = -3; - - CacheAlignedVector logits(kSize); - std::vector> tmp_vectors; - std::vector generators(kNumThreads); - - for (int i = 0; i < kNumThreads; ++i) { - tmp_vectors.emplace_back(kSize); - } - - for (int i = 0; i < kSize; ++i) { - logits.data()[i] = - (i + 1) == (kIndex + 1) ? kMostProbabilityMass : kLittleProbabilityMass; - } - - std::vector> histograms; - for (int i = 0; i < kNumThreads; ++i) { - histograms.emplace_back(kSize); - } - - auto f = [&](csrblocksparse::SpinBarrier* /*barrier*/, int tid) { - for (int i = 0; i < kIterations; ++i) { - histograms[tid] - [logits.Sample(1.f, &generators[tid], &tmp_vectors[tid])]++; - } - }; - - csrblocksparse::LaunchOnThreadsWithBarrier(kNumThreads, f); - - // Every thread should generate the exact same set of samples. - for (int i = 0; i < kSize; ++i) { - int val = histograms[0][i]; - for (int tid = 1; tid < kNumThreads; ++tid) { - EXPECT_EQ(val, histograms[tid][i]); - } - } - - // The most probable sample should be the one with the most probability mass. - for (int tid = 0; tid < kNumThreads; ++tid) { - EXPECT_EQ(std::distance(histograms[tid].begin(), - std::max_element(histograms[tid].begin(), - histograms[tid].end())), - kIndex); - } -} - -void CreateVectorHelper( - csrblocksparse::FatCacheAlignedVector* fat_vector, int cols, - int rows, std::unique_ptr>* view) { - *view = absl::make_unique>(*fat_vector, - cols, rows); -} - -void CreateVectorHelper( - csrblocksparse::FatCacheAlignedVector* fat_vector, int cols, - int rows, std::unique_ptr>* view) { - *view = absl::make_unique>( - fat_vector, cols, rows); -} - -csrblocksparse::FatCacheAlignedVector CreateFatAlignedVector(int rows, - int cols) { - csrblocksparse::FatCacheAlignedVector fat_vector(rows, cols); - // Usage intent of FatCacheAlignedVector is that they are COLUMN MAJOR. 
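-  // That is, element (row r, column c) lives at data()[c * rows + r], which
-  // is the indexing used by the fill loop below and checked by the view tests.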
- float v = 0; - for (int c = 0; c < cols; ++c) { - for (int r = 0; r < rows; ++r) { - fat_vector.data()[c * rows + r] = v++; - } - } - - return fat_vector; -} - -template -void TestFatVectorView() { - const int kRows = 6; - const int kCols = 6; - auto fat_vector = CreateFatAlignedVector(kRows, kCols); - - std::unique_ptr top; - CreateVectorHelper(&fat_vector, 0, kRows / 2, &top); - std::unique_ptr bottom; - CreateVectorHelper(&fat_vector, kRows / 2, kRows / 2, &bottom); - - EXPECT_EQ(top->cols(), kCols); - EXPECT_EQ(bottom->cols(), kCols); - EXPECT_EQ(top->rows(), kRows / 2); - EXPECT_EQ(bottom->rows(), kRows / 2); - EXPECT_EQ(top->col_stride(), kRows); - EXPECT_EQ(bottom->col_stride(), kRows); - - for (int c = 0; c < kCols; ++c) { - for (int r = 0; r < kRows; ++r) { - if (r < kRows / 2) { - EXPECT_EQ(fat_vector[c * kRows + r], - top->data()[c * top->col_stride() + r]); - } else { - EXPECT_EQ(fat_vector[c * kRows + r], - bottom->data()[c * top->col_stride() + r - kRows / 2]); - } - } - } -} - -TEST(FatVector, View) { - TestFatVectorView>(); -} -TEST(FatVector, MutableView) { - TestFatVectorView>(); -} - -TEST(FatVector, SliceMutableView) { - const int kRows = 6; - const int kCols = 3; - auto fat_vector = CreateFatAlignedVector(kRows, kCols); - - int c = 1; - csrblocksparse::MutableVectorView slice = fat_vector.slice(c); - for (int r = 0; r < kRows; ++r) { - EXPECT_EQ(slice[r], c * kRows + r); - } -} - -TEST(FatVector, SliceConstView) { - const int kRows = 6; - const int kCols = 3; - auto fat_vector = CreateFatAlignedVector(kRows, kCols); - - int c = 1; - csrblocksparse::VectorView const_slice; - { - // Take a VectorView from a non-const slice. - const_slice = fat_vector.slice(c); - for (int r = 0; r < kRows; ++r) { - EXPECT_EQ(const_slice[r], c * kRows + r); - } - } - - { - // Take a VectorView from a const slice. - const auto& const_fat_vector = fat_vector; - const_slice = const_fat_vector.slice(c); - for (int r = 0; r < kRows; ++r) { - EXPECT_EQ(const_slice[r], c * kRows + r); - } - } -} - -TEST(View, FromMutableToConst) { - const int kRows = 6; - const int kCols = 3; - auto fat_vector = CreateFatAlignedVector(kRows, kCols); - csrblocksparse::MutableVectorView slice = fat_vector.slice(0); - - csrblocksparse::VectorView const_slice(slice); - for (int r = 0; r < kRows; ++r) { - EXPECT_EQ(const_slice[r], r); - } -} - -TEST(View, CopyTest) { - const int kRows = 6; - const int kCols = 3; - auto fat_vector = CreateFatAlignedVector(kRows, kCols); - csrblocksparse::MutableVectorView slice = fat_vector.slice(0); - csrblocksparse::MutableVectorView slice2(slice); - - for (int r = 0; r < kRows; ++r) { - EXPECT_EQ(slice2[r], r); - } -} - -TEST(Vector, CopyNull) { - // Check that we can copy a vector with a null generator without segfault. - CacheAlignedVector foo((CacheAlignedVector())); - // This is here to prevent foo from being optimized out. - CHECK_EQ(foo.size(), 0); - CacheAlignedVector foo_bar = CacheAlignedVector(); - CHECK_EQ(foo_bar.size(), 0); -} - -TEST(Vector, FromRawPointer) { - std::vector input; - for (int i = 0; i < 5; ++i) { - input.push_back(i * 2); - } - - // Calls first constructor. - CacheAlignedVector foo(input.data(), 5); - CHECK_EQ(foo.size(), 5); - EXPECT_THAT(input, testing::ElementsAreArray(foo.data(), 5)); - - // Calls the second constructor. 
- CacheAlignedVector foo2(input.data(), 5); - CHECK_EQ(foo2.size(), 5); - EXPECT_THAT(input, testing::ElementsAreArray(foo2.data(), 5)); -} - -} // namespace csrblocksparse diff --git a/spaces/ohmyteeth/seo-tools/README.md b/spaces/ohmyteeth/seo-tools/README.md deleted file mode 100644 index 2ad62afaf453e2fb7327030ba6562a72a1eeab61..0000000000000000000000000000000000000000 --- a/spaces/ohmyteeth/seo-tools/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: SEO Tools -emoji: 🚀 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.28.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/owaiskha9654/Yolo-v7/app.py b/spaces/owaiskha9654/Yolo-v7/app.py deleted file mode 100644 index 572009c8ca945a46e72781c847213b9d7e40c044..0000000000000000000000000000000000000000 --- a/spaces/owaiskha9654/Yolo-v7/app.py +++ /dev/null @@ -1,49 +0,0 @@ -import torch -import gradio as gr -from huggingface_hub import hf_hub_download -from PIL import Image - -REPO_ID = "owaiskha9654/Yolov7_Custom_Object_Detection" -FILENAME = "best.pt" - - -yolov7_custom_weights = hf_hub_download(repo_id=REPO_ID, filename=FILENAME) - -model = torch.hub.load('Owaiskhan9654/yolov7-1:main',model='custom', path_or_model=yolov7_custom_weights, force_reload=True) # My Github repository https://github.com/Owaiskhan9654 - -def object_detection(im, size=416): - results = model(im) - results.render() - return Image.fromarray(results.imgs[0]) - -title = "Yolov7 Custom" - -image = gr.inputs.Image(shape=(416, 416), image_mode="RGB", source="upload", label="Upload Image", optional=False) -outputs = gr.outputs.Image(type="pil", label="Output Image") - -Custom_description="
      Custom Training Performed on Kaggle Link

      Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors

1st class: Person detection
2nd class: Car detection" - -Footer = ( - "
Model Trained by: Owais Ahmad, Data Scientist at Thoucentric (Visit Profile)
      " - - "
      Model Trained Kaggle Kernel Link
      " - - "
      Kaggle Profile Link
      " - - "
      HuggingFace🤗 Model Deployed Repository Link
      " -) - -examples1=[["Image1.jpeg"],["Image2.jpeg"],["Image3.jpeg"],["Image4.jpeg"],["Image5.jpeg"],["Image6.jpeg"],["horses.jpeg"],["horses.jpeg"]] - -Top_Title="
      Yolov7 🚀 Custom Trained by Owais Ahmad
🚗Car and 👦Person Detection" -css = ".output-image, .input-image {height: 50rem !important; width: 100% !important;} .image-preview {height: auto !important;}" - -gr.Interface( - fn=object_detection, - inputs=image, - outputs=outputs, - title=Top_Title, - description=Custom_description, - article=Footer, - examples=[["car-person-2.jpg"]]).launch() \ No newline at end of file diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/repaint.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/repaint.md deleted file mode 100644 index 9529893c354b160c4c4ded38dc5a2410693afefb..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/repaint.md +++ /dev/null @@ -1,37 +0,0 @@ - - -# RePaint - -[RePaint: Inpainting using Denoising Diffusion Probabilistic Models](https://huggingface.co/papers/2201.09865) is by Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, Luc Van Gool. - -The abstract from the paper is: - -*Free-form inpainting is the task of adding new content to an image in the regions specified by an arbitrary binary mask. Most existing approaches train for a certain distribution of masks, which limits their generalization capabilities to unseen mask types. Furthermore, training with pixel-wise and perceptual losses often leads to simple textural extensions towards the missing areas instead of semantically meaningful generation. In this work, we propose RePaint: A Denoising Diffusion Probabilistic Model (DDPM) based inpainting approach that is applicable to even extreme masks. We employ a pretrained unconditional DDPM as the generative prior. To condition the generation process, we only alter the reverse diffusion iterations by sampling the unmasked regions using the given image information. Since this technique does not modify or condition the original DDPM network itself, the model produces high-quality and diverse output images for any inpainting form. We validate our method for both faces and general-purpose image inpainting using standard and extreme masks. -RePaint outperforms state-of-the-art Autoregressive, and GAN approaches for at least five out of six mask distributions.* - -The original codebase can be found at [andreas128/RePaint](https://github.com/andreas128/RePaint). - - - -Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
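-
-A minimal usage sketch; the DDPM checkpoint and the local file names below are illustrative choices, not requirements of the pipeline:
-
-```py
-import torch
-from diffusers import RePaintPipeline, RePaintScheduler
-from diffusers.utils import load_image
-
-# The pretrained unconditional DDPM acts as the generative prior.
-scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256")
-pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler)
-pipe = pipe.to("cuda")
-
-original_image = load_image("face_256.png")  # hypothetical 256x256 input image
-mask_image = load_image("mask_256.png")      # hypothetical binary mask (see the pipeline docs for the keep/inpaint convention)
-
-generator = torch.Generator(device="cuda").manual_seed(0)
-output = pipe(
-    image=original_image,
-    mask_image=mask_image,
-    num_inference_steps=250,  # RePaint typically needs many reverse steps
-    eta=0.0,
-    jump_length=10,    # resampling schedule from the paper
-    jump_n_sample=10,
-    generator=generator,
-)
-output.images[0].save("inpainted.png")
-```
-
-Because only the reverse diffusion iterations are altered, the same unconditional checkpoint can be reused for arbitrary mask shapes without retraining.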
- - - - -## RePaintPipeline -[[autodoc]] RePaintPipeline - - all - - __call__ - -## ImagePipelineOutput -[[autodoc]] pipelines.ImagePipelineOutput diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/t2i_adapter/README_sdxl.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/t2i_adapter/README_sdxl.md deleted file mode 100644 index 03053c85d8a53564d5361c8c050e73238e65da03..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/t2i_adapter/README_sdxl.md +++ /dev/null @@ -1,131 +0,0 @@ -# T2I-Adapter training example for Stable Diffusion XL (SDXL) - -The `train_t2i_adapter_sdxl.py` script shows how to implement the [T2I-Adapter training procedure](https://hf.co/papers/2302.08453) for [Stable Diffusion XL](https://huggingface.co/papers/2307.01952). - -## Running locally with PyTorch - -### Installing the dependencies - -Before running the scripts, make sure to install the library's training dependencies: - -**Important** - -To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment: - -```bash -git clone https://github.com/huggingface/diffusers -cd diffusers -pip install -e . -``` - -Then cd in the `examples/t2i_adapter` folder and run -```bash -pip install -r requirements_sdxl.txt -``` - -And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with: - -```bash -accelerate config -``` - -Or for a default accelerate configuration without answering questions about your environment - -```bash -accelerate config default -``` - -Or if your environment doesn't support an interactive shell (e.g., a notebook) - -```python -from accelerate.utils import write_basic_config -write_basic_config() -``` - -When running `accelerate config`, if we specify torch compile mode to True there can be dramatic speedups. - -## Circle filling dataset - -The original dataset is hosted in the [ControlNet repo](https://huggingface.co/lllyasviel/ControlNet/blob/main/training/fill50k.zip). We re-uploaded it to be compatible with `datasets` [here](https://huggingface.co/datasets/fusing/fill50k). Note that `datasets` handles dataloading within the training script. - -## Training - -Our training examples use two test conditioning images. They can be downloaded by running - -```sh -wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png - -wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png -``` - -Then run `huggingface-cli login` to log into your Hugging Face account. This is needed to be able to push the trained T2IAdapter parameters to Hugging Face Hub. 
- -```bash -export MODEL_DIR="stabilityai/stable-diffusion-xl-base-1.0" -export OUTPUT_DIR="path to save model" - -accelerate launch train_t2i_adapter_sdxl.py \ - --pretrained_model_name_or_path=$MODEL_DIR \ - --output_dir=$OUTPUT_DIR \ - --dataset_name=fusing/fill50k \ - --mixed_precision="fp16" \ - --resolution=1024 \ - --learning_rate=1e-5 \ - --max_train_steps=15000 \ - --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ - --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ - --validation_steps=100 \ - --train_batch_size=1 \ - --gradient_accumulation_steps=4 \ - --report_to="wandb" \ - --seed=42 \ - --push_to_hub -``` - -To better track our training experiments, we're using the following flags in the command above: - -* `report_to="wandb"` will ensure the training runs are tracked on Weights and Biases. To use it, be sure to install `wandb` with `pip install wandb`. -* `validation_image`, `validation_prompt`, and `validation_steps` to allow the script to do a few validation inference runs. This allows us to qualitatively check if the training is progressing as expected. - -Our experiments were conducted on a single 40GB A100 GPU. - -### Inference - -Once training is done, we can perform inference like so: - -```python -from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, EulerAncestralDiscreteScheduler -from diffusers.utils import load_image -import torch - -base_model_path = "stabilityai/stable-diffusion-xl-base-1.0" -adapter_path = "path to adapter" - -adapter = T2IAdapter.from_pretrained(adapter_path, torch_dtype=torch.float16) -pipe = StableDiffusionXLAdapterPipeline.from_pretrained( - base_model_path, adapter=adapter, torch_dtype=torch.float16 -) - -# speed up diffusion process with faster scheduler and memory optimization -pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) -# remove following line if xformers is not installed or when using Torch 2.0. -pipe.enable_xformers_memory_efficient_attention() -# memory optimization. -pipe.enable_model_cpu_offload() - -control_image = load_image("./conditioning_image_1.png") -prompt = "pale golden rod circle with old lace background" - -# generate image -generator = torch.manual_seed(0) -image = pipe( - prompt, num_inference_steps=20, generator=generator, image=control_image -).images[0] -image.save("./output.png") -``` - -## Notes - -### Specifying a better VAE - -SDXL's VAE is known to suffer from numerical instability issues. This is why we also expose a CLI argument namely `--pretrained_vae_model_name_or_path` that lets you specify the location of a better VAE (such as [this one](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix)).
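-
-For example, appending `--pretrained_vae_model_name_or_path="madebyollin/sdxl-vae-fp16-fix"` to the `accelerate launch` command above would train against that fp16-safe VAE instead of the base model's default.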
diff --git a/spaces/pierreguillou/tesseract-ocr-pt/app.py b/spaces/pierreguillou/tesseract-ocr-pt/app.py deleted file mode 100644 index 2d8af11fc6d28f21ae155dc79a8cbd07deb8309f..0000000000000000000000000000000000000000 --- a/spaces/pierreguillou/tesseract-ocr-pt/app.py +++ /dev/null @@ -1,46 +0,0 @@ -import os -import gradio as gr -import re - -print(os.popen(f'cat /etc/debian_version').read()) -print(os.popen(f'cat /etc/issue').read()) -print(os.popen(f'apt search tesseract').read()) - -# choices = os.popen('tesseract --list-langs').read().split('\n')[1:-1] - -def correction(text): - # replace three or more consecutive line breaks (\n\n\n) with two - text = text.replace('\n \n','\n\n') - text = re.sub(r'\n(\n+)', '\n\n', text).strip() - - # delete the \n at the end of a line - text = re.sub(r'(?Tesseract documentation | GitHub Repo

      " -#examples = [['eurotext.png', ['eng']], ['tesseract_sample.png', ['jpn', 'eng']], ['chi.jpg', ['HanS', 'HanT']]] -examples = [['exemple.png']] -allow_flagging = "never" -live = True - -gr.Interface( - inference, - #[gr.inputs.Image(type="filepath", label="Input"), gr.inputs.CheckboxGroup(choices, type="value", default=['eng'], label='language')], - gr.Image(type="filepath", label="Input"), - "text", - title=title, - description=description, - article=article, - examples=examples, - allow_flagging=allow_flagging, - live=live -).launch(debug=False, enable_queue=True) \ No newline at end of file diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pyparsing/exceptions.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pyparsing/exceptions.py deleted file mode 100644 index 12219f124aeca6d3d7edd2621071f100c7ecd90a..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pyparsing/exceptions.py +++ /dev/null @@ -1,299 +0,0 @@ -# exceptions.py - -import re -import sys -import typing - -from .util import ( - col, - line, - lineno, - _collapse_string_to_ranges, - replaced_by_pep8, -) -from .unicode import pyparsing_unicode as ppu - - -class ExceptionWordUnicode(ppu.Latin1, ppu.LatinA, ppu.LatinB, ppu.Greek, ppu.Cyrillic): - pass - - -_extract_alphanums = _collapse_string_to_ranges(ExceptionWordUnicode.alphanums) -_exception_word_extractor = re.compile("([" + _extract_alphanums + "]{1,16})|.") - - -class ParseBaseException(Exception): - """base exception class for all parsing runtime exceptions""" - - loc: int - msg: str - pstr: str - parser_element: typing.Any # "ParserElement" - args: typing.Tuple[str, int, typing.Optional[str]] - - __slots__ = ( - "loc", - "msg", - "pstr", - "parser_element", - "args", - ) - - # Performance tuning: we construct a *lot* of these, so keep this - # constructor as small and fast as possible - def __init__( - self, - pstr: str, - loc: int = 0, - msg: typing.Optional[str] = None, - elem=None, - ): - self.loc = loc - if msg is None: - self.msg = pstr - self.pstr = "" - else: - self.msg = msg - self.pstr = pstr - self.parser_element = elem - self.args = (pstr, loc, msg) - - @staticmethod - def explain_exception(exc, depth=16): - """ - Method to take an exception and translate the Python internal traceback into a list - of the pyparsing expressions that caused the exception to be raised. - - Parameters: - - - exc - exception raised during parsing (need not be a ParseException, in support - of Python exceptions that might be raised in a parse action) - - depth (default=16) - number of levels back in the stack trace to list expression - and function names; if None, the full stack trace names will be listed; if 0, only - the failing input line, marker, and exception string will be shown - - Returns a multi-line string listing the ParserElements and/or function names in the - exception's stack trace. 
- """ - import inspect - from .core import ParserElement - - if depth is None: - depth = sys.getrecursionlimit() - ret = [] - if isinstance(exc, ParseBaseException): - ret.append(exc.line) - ret.append(" " * (exc.column - 1) + "^") - ret.append(f"{type(exc).__name__}: {exc}") - - if depth > 0: - callers = inspect.getinnerframes(exc.__traceback__, context=depth) - seen = set() - for i, ff in enumerate(callers[-depth:]): - frm = ff[0] - - f_self = frm.f_locals.get("self", None) - if isinstance(f_self, ParserElement): - if not frm.f_code.co_name.startswith( - ("parseImpl", "_parseNoCache") - ): - continue - if id(f_self) in seen: - continue - seen.add(id(f_self)) - - self_type = type(f_self) - ret.append( - f"{self_type.__module__}.{self_type.__name__} - {f_self}" - ) - - elif f_self is not None: - self_type = type(f_self) - ret.append(f"{self_type.__module__}.{self_type.__name__}") - - else: - code = frm.f_code - if code.co_name in ("wrapper", ""): - continue - - ret.append(code.co_name) - - depth -= 1 - if not depth: - break - - return "\n".join(ret) - - @classmethod - def _from_exception(cls, pe): - """ - internal factory method to simplify creating one type of ParseException - from another - avoids having __init__ signature conflicts among subclasses - """ - return cls(pe.pstr, pe.loc, pe.msg, pe.parser_element) - - @property - def line(self) -> str: - """ - Return the line of text where the exception occurred. - """ - return line(self.loc, self.pstr) - - @property - def lineno(self) -> int: - """ - Return the 1-based line number of text where the exception occurred. - """ - return lineno(self.loc, self.pstr) - - @property - def col(self) -> int: - """ - Return the 1-based column on the line of text where the exception occurred. - """ - return col(self.loc, self.pstr) - - @property - def column(self) -> int: - """ - Return the 1-based column on the line of text where the exception occurred. - """ - return col(self.loc, self.pstr) - - # pre-PEP8 compatibility - @property - def parserElement(self): - return self.parser_element - - @parserElement.setter - def parserElement(self, elem): - self.parser_element = elem - - def __str__(self) -> str: - if self.pstr: - if self.loc >= len(self.pstr): - foundstr = ", found end of text" - else: - # pull out next word at error location - found_match = _exception_word_extractor.match(self.pstr, self.loc) - if found_match is not None: - found = found_match.group(0) - else: - found = self.pstr[self.loc : self.loc + 1] - foundstr = (", found %r" % found).replace(r"\\", "\\") - else: - foundstr = "" - return f"{self.msg}{foundstr} (at char {self.loc}), (line:{self.lineno}, col:{self.column})" - - def __repr__(self): - return str(self) - - def mark_input_line( - self, marker_string: typing.Optional[str] = None, *, markerString: str = ">!<" - ) -> str: - """ - Extracts the exception line from the input string, and marks - the location of the exception with a special symbol. - """ - markerString = marker_string if marker_string is not None else markerString - line_str = self.line - line_column = self.column - 1 - if markerString: - line_str = "".join( - (line_str[:line_column], markerString, line_str[line_column:]) - ) - return line_str.strip() - - def explain(self, depth=16) -> str: - """ - Method to translate the Python internal traceback into a list - of the pyparsing expressions that caused the exception to be raised. 
- - Parameters: - - - depth (default=16) - number of levels back in the stack trace to list expression - and function names; if None, the full stack trace names will be listed; if 0, only - the failing input line, marker, and exception string will be shown - - Returns a multi-line string listing the ParserElements and/or function names in the - exception's stack trace. - - Example:: - - expr = pp.Word(pp.nums) * 3 - try: - expr.parse_string("123 456 A789") - except pp.ParseException as pe: - print(pe.explain(depth=0)) - - prints:: - - 123 456 A789 - ^ - ParseException: Expected W:(0-9), found 'A' (at char 8), (line:1, col:9) - - Note: the diagnostic output will include string representations of the expressions - that failed to parse. These representations will be more helpful if you use `set_name` to - give identifiable names to your expressions. Otherwise they will use the default string - forms, which may be cryptic to read. - - Note: pyparsing's default truncation of exception tracebacks may also truncate the - stack of expressions that are displayed in the ``explain`` output. To get the full listing - of parser expressions, you may have to set ``ParserElement.verbose_stacktrace = True`` - """ - return self.explain_exception(self, depth) - - # fmt: off - @replaced_by_pep8(mark_input_line) - def markInputline(self): ... - # fmt: on - - -class ParseException(ParseBaseException): - """ - Exception thrown when a parse expression doesn't match the input string - - Example:: - - try: - Word(nums).set_name("integer").parse_string("ABC") - except ParseException as pe: - print(pe) - print("column: {}".format(pe.column)) - - prints:: - - Expected integer (at char 0), (line:1, col:1) - column: 1 - - """ - - -class ParseFatalException(ParseBaseException): - """ - User-throwable exception thrown when inconsistent parse content - is found; stops all parsing immediately - """ - - -class ParseSyntaxException(ParseFatalException): - """ - Just like :class:`ParseFatalException`, but thrown internally - when an :class:`ErrorStop` ('-' operator) indicates - that parsing is to stop immediately because an unbacktrackable - syntax error has been found. - """ - - -class RecursiveGrammarException(Exception): - """ - Exception thrown by :class:`ParserElement.validate` if the - grammar could be left-recursive; parser may need to enable - left recursion using :class:`ParserElement.enable_left_recursion` - """ - - def __init__(self, parseElementList): - self.parseElementTrace = parseElementList - - def __str__(self) -> str: - return f"RecursiveGrammarException: {self.parseElementTrace}" diff --git a/spaces/platzi/platzi-curso-streamlit-segmentacion-imagenes/app.py b/spaces/platzi/platzi-curso-streamlit-segmentacion-imagenes/app.py deleted file mode 100644 index d80e0f51365480c5d6a14e1daf0b695226675b27..0000000000000000000000000000000000000000 --- a/spaces/platzi/platzi-curso-streamlit-segmentacion-imagenes/app.py +++ /dev/null @@ -1,108 +0,0 @@ -import streamlit as st -from PIL import Image -import numpy as np -import cv2 -from huggingface_hub import from_pretrained_keras - -st.header("Teeth segmentation in X-ray images") - -st.markdown( - """ - -Hello Platzi students 🚀. This model uses UNet to segment teeth -in X-ray images. It uses a Keras model imported with the function -`huggingface_hub.from_pretrained_keras`. Remember that the Hugging Face Hub is integrated -with many libraries such as Keras, scikit-learn, fastai, and others.
-The model was created by [SerdarHelli](https://huggingface.co/SerdarHelli/Segmentation-of-Teeth-in-Panoramic-X-ray-Image-Using-U-Net). - -""" -) - -## Select and load the model -model_id = "SerdarHelli/Segmentation-of-Teeth-in-Panoramic-X-ray-Image-Using-U-Net" -model = from_pretrained_keras(model_id) - -## Let the user upload an image -archivo_imagen = st.file_uploader("Upload your image here.", type=["png", "jpg", "jpeg"]) - -## If an image has more than one channel, convert it to grayscale (1 channel) -def convertir_one_channel(img): - if len(img.shape) > 2: - img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) - return img - else: - return img - - -def convertir_rgb(img): - if len(img.shape) == 2: - img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) - return img - else: - return img - - -## Set up the interface so we can use example images -## If the user clicks an example, the model runs on it -ejemplos = ["dientes_1.png", "dientes_2.png", "dientes_3.png"] - -## Create three columns; each one holds an example image -col1, col2, col3 = st.columns(3) -with col1: - ## Load the image and show it in the interface - ex = Image.open(ejemplos[0]) - st.image(ex, width=200) - ## If the button is pressed, use that example in the model - if st.button("Run example 1"): - archivo_imagen = ejemplos[0] - -with col2: - ex1 = Image.open(ejemplos[1]) - st.image(ex1, width=200) - if st.button("Run example 2"): - archivo_imagen = ejemplos[1] - -with col3: - ex2 = Image.open(ejemplos[2]) - st.image(ex2, width=200) - if st.button("Run example 3"): - archivo_imagen = ejemplos[2] - -## If we have an image to feed to the model, -## preprocess it and run the model -if archivo_imagen is not None: - ## Load the image with PIL, display it, and convert it to a NumPy array - img = Image.open(archivo_imagen) - st.image(img, width=850) - img = np.asarray(img) - - ## Preprocess the image before feeding it to the model - img_cv = convertir_one_channel(img) - img_cv = cv2.resize(img_cv, (512, 512), interpolation=cv2.INTER_LANCZOS4) - img_cv = np.float32(img_cv / 255) - img_cv = np.reshape(img_cv, (1, 512, 512, 1)) - - ## Feed the NumPy array to the model - predicted = model.predict(img_cv) - predicted = predicted[0] - - ## Resize back to the original shape and overlay the segmentation masks - predicted = cv2.resize( - predicted, (img.shape[1], img.shape[0]), interpolation=cv2.INTER_LANCZOS4 - ) - mask = np.uint8(predicted * 255) - _, mask = cv2.threshold( - mask, thresh=0, maxval=255, type=cv2.THRESH_BINARY + cv2.THRESH_OTSU - ) - kernel = np.ones((5, 5), dtype=np.float32) - mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=1) - mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel, iterations=1) - cnts, hierarchy = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) - output = cv2.drawContours(convertir_one_channel(img), cnts, -1, (255, 0, 0), 3) - - ## If we successfully obtained a result, show it in the interface - if output is not None: - st.subheader("Segmentation:") - st.write(output.shape) - st.image(output, width=850) diff --git a/spaces/pojitha/sinhala_hate_speech/README.md b/spaces/pojitha/sinhala_hate_speech/README.md deleted file mode 100644 index f5df92ac2886e20c0ddc5fd371611a427427417b..0000000000000000000000000000000000000000 --- a/spaces/pojitha/sinhala_hate_speech/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Sinhala Hate
Speech -emoji: 📚 -colorFrom: purple -colorTo: green -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/prerna9811/Chord/portaudio/bindings/java/jportaudio/src/com/portaudio/DeviceInfo.java b/spaces/prerna9811/Chord/portaudio/bindings/java/jportaudio/src/com/portaudio/DeviceInfo.java deleted file mode 100644 index 1c4682ec50732882a9d657c26d5b6ab19990691a..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/bindings/java/jportaudio/src/com/portaudio/DeviceInfo.java +++ /dev/null @@ -1,65 +0,0 @@ -/* - * Portable Audio I/O Library - * Java Binding for PortAudio - * - * Based on the Open Source API proposed by Ross Bencina - * Copyright (c) 2008 Ross Bencina - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -/** @file - @ingroup bindings_java - - @brief Information about a JPortAudio device. 
-*/ -package com.portaudio; - -/** - * Equivalent to PaDeviceInfo - * @see PortAudio - * @see HostApiInfo - * @author Phil Burk - * - */ -public class DeviceInfo -{ - public int version; - public String name; - public int hostApi; - public int maxInputChannels; - public int maxOutputChannels; - public double defaultLowInputLatency; - public double defaultHighInputLatency; - public double defaultLowOutputLatency; - public double defaultHighOutputLatency; - public double defaultSampleRate; -} diff --git a/spaces/prithvihehe/TheBotFather/app.py b/spaces/prithvihehe/TheBotFather/app.py deleted file mode 100644 index fe63dcfe56c0524b4ace27f39afba71f2d29f727..0000000000000000000000000000000000000000 --- a/spaces/prithvihehe/TheBotFather/app.py +++ /dev/null @@ -1,42 +0,0 @@ -import os -import openai -import gradio - - -# Never commit real API keys; read the secret from the environment instead. -openai.api_key = os.environ["OPENAI_API_KEY"] - -messages = [{"role": "system", "content": "You are Vito Corleone from the Godfather, act wise and help people who come to you, and also speak like him"}] - -def MyChatGPT(user_input): - messages.append({"role": "user", "content": user_input}) - response = openai.ChatCompletion.create( - model = "gpt-3.5-turbo", - messages = messages - ) - reply = response["choices"][0]["message"]["content"] - messages.append({"role": "assistant", "content": reply}) - return reply - -def chatbot(input, history = []): - output = MyChatGPT(input) - avatar_url = "https://mcdn.wallpapersafari.com/medium/43/60/bFauO9.jpg" - message_with_avatar = f'
      {output}
      ' - history.append((input, message_with_avatar)) - return history, history - - -demo = gradio.Interface(fn=chatbot, inputs = ["text", 'state'], outputs = ["chatbot",'state'], title = "TheBotFather") - -css = """ -body { - background-image: url('https://c4.wallpaperflare.com/wallpaper/484/369/194/movies-the-godfather-al-pacino-wallpaper-preview.jpg'); - background-size: cover; - opacity: 0.9; -} -.gradio-input-wrapper, .gradio-output-wrapper { - background-color: rgba(255, 255, 255, 0.95) !important; - -} -""" -demo.css = css - -demo.launch() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/altair/utils/core.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/altair/utils/core.py deleted file mode 100644 index f365ce96235d5ee633ee08ba0de14d3dacc3efe3..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/altair/utils/core.py +++ /dev/null @@ -1,843 +0,0 @@ -""" -Utility routines -""" -from collections.abc import Mapping, MutableMapping -from copy import deepcopy -import json -import itertools -import re -import sys -import traceback -import warnings -from typing import ( - Callable, - TypeVar, - Any, - Union, - Dict, - Optional, - Tuple, - Sequence, - Type, - cast, -) -from types import ModuleType - -import jsonschema -import pandas as pd -import numpy as np -from pandas.api.types import infer_dtype - -from altair.utils.schemapi import SchemaBase -from altair.utils._dfi_types import Column, DtypeKind, DataFrame as DfiDataFrame - -if sys.version_info >= (3, 10): - from typing import ParamSpec -else: - from typing_extensions import ParamSpec - -from typing import Literal, Protocol, TYPE_CHECKING - -if TYPE_CHECKING: - from pandas.core.interchange.dataframe_protocol import Column as PandasColumn - -_V = TypeVar("_V") -_P = ParamSpec("_P") - - -class _DataFrameLike(Protocol): - def __dataframe__(self, *args, **kwargs) -> DfiDataFrame: - ... 
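-# _DataFrameLike matches any object implementing __dataframe__, i.e. the
-# DataFrame interchange protocol (e.g. pandas, polars, or pyarrow tables).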
- - -TYPECODE_MAP = { - "ordinal": "O", - "nominal": "N", - "quantitative": "Q", - "temporal": "T", - "geojson": "G", -} - -INV_TYPECODE_MAP = {v: k for k, v in TYPECODE_MAP.items()} - - -# aggregates from vega-lite version 4.6.0 -AGGREGATES = [ - "argmax", - "argmin", - "average", - "count", - "distinct", - "max", - "mean", - "median", - "min", - "missing", - "product", - "q1", - "q3", - "ci0", - "ci1", - "stderr", - "stdev", - "stdevp", - "sum", - "valid", - "values", - "variance", - "variancep", -] - -# window aggregates from vega-lite version 4.6.0 -WINDOW_AGGREGATES = [ - "row_number", - "rank", - "dense_rank", - "percent_rank", - "cume_dist", - "ntile", - "lag", - "lead", - "first_value", - "last_value", - "nth_value", -] - -# timeUnits from vega-lite version 4.17.0 -TIMEUNITS = [ - "year", - "quarter", - "month", - "week", - "day", - "dayofyear", - "date", - "hours", - "minutes", - "seconds", - "milliseconds", - "yearquarter", - "yearquartermonth", - "yearmonth", - "yearmonthdate", - "yearmonthdatehours", - "yearmonthdatehoursminutes", - "yearmonthdatehoursminutesseconds", - "yearweek", - "yearweekday", - "yearweekdayhours", - "yearweekdayhoursminutes", - "yearweekdayhoursminutesseconds", - "yeardayofyear", - "quartermonth", - "monthdate", - "monthdatehours", - "monthdatehoursminutes", - "monthdatehoursminutesseconds", - "weekday", - "weeksdayhours", - "weekdayhoursminutes", - "weekdayhoursminutesseconds", - "dayhours", - "dayhoursminutes", - "dayhoursminutesseconds", - "hoursminutes", - "hoursminutesseconds", - "minutesseconds", - "secondsmilliseconds", - "utcyear", - "utcquarter", - "utcmonth", - "utcweek", - "utcday", - "utcdayofyear", - "utcdate", - "utchours", - "utcminutes", - "utcseconds", - "utcmilliseconds", - "utcyearquarter", - "utcyearquartermonth", - "utcyearmonth", - "utcyearmonthdate", - "utcyearmonthdatehours", - "utcyearmonthdatehoursminutes", - "utcyearmonthdatehoursminutesseconds", - "utcyearweek", - "utcyearweekday", - "utcyearweekdayhours", - "utcyearweekdayhoursminutes", - "utcyearweekdayhoursminutesseconds", - "utcyeardayofyear", - "utcquartermonth", - "utcmonthdate", - "utcmonthdatehours", - "utcmonthdatehoursminutes", - "utcmonthdatehoursminutesseconds", - "utcweekday", - "utcweeksdayhours", - "utcweekdayhoursminutes", - "utcweekdayhoursminutesseconds", - "utcdayhours", - "utcdayhoursminutes", - "utcdayhoursminutesseconds", - "utchoursminutes", - "utchoursminutesseconds", - "utcminutesseconds", - "utcsecondsmilliseconds", -] - - -_InferredVegaLiteType = Literal["ordinal", "nominal", "quantitative", "temporal"] - - -def infer_vegalite_type( - data: object, -) -> Union[_InferredVegaLiteType, Tuple[_InferredVegaLiteType, list]]: - """ - From an array-like input, infer the correct vega typecode - ('ordinal', 'nominal', 'quantitative', or 'temporal') - - Parameters - ---------- - data: object - """ - typ = infer_dtype(data, skipna=False) - - if typ in [ - "floating", - "mixed-integer-float", - "integer", - "mixed-integer", - "complex", - ]: - return "quantitative" - elif typ == "categorical" and hasattr(data, "cat") and data.cat.ordered: - return ("ordinal", data.cat.categories.tolist()) - elif typ in ["string", "bytes", "categorical", "boolean", "mixed", "unicode"]: - return "nominal" - elif typ in [ - "datetime", - "datetime64", - "timedelta", - "timedelta64", - "date", - "time", - "period", - ]: - return "temporal" - else: - warnings.warn( - "I don't know how to infer vegalite type from '{}'. 
" - "Defaulting to nominal.".format(typ), - stacklevel=1, - ) - return "nominal" - - -def merge_props_geom(feat: dict) -> dict: - """ - Merge properties with geometry - * Overwrites 'type' and 'geometry' entries if existing - """ - - geom = {k: feat[k] for k in ("type", "geometry")} - try: - feat["properties"].update(geom) - props_geom = feat["properties"] - except (AttributeError, KeyError): - # AttributeError when 'properties' equals None - # KeyError when 'properties' is non-existing - props_geom = geom - - return props_geom - - -def sanitize_geo_interface(geo: MutableMapping) -> dict: - """Santize a geo_interface to prepare it for serialization. - - * Make a copy - * Convert type array or _Array to list - * Convert tuples to lists (using json.loads/dumps) - * Merge properties with geometry - """ - - geo = deepcopy(geo) - - # convert type _Array or array to list - for key in geo.keys(): - if str(type(geo[key]).__name__).startswith(("_Array", "array")): - geo[key] = geo[key].tolist() - - # convert (nested) tuples to lists - geo_dct: dict = json.loads(json.dumps(geo)) - - # sanitize features - if geo_dct["type"] == "FeatureCollection": - geo_dct = geo_dct["features"] - if len(geo_dct) > 0: - for idx, feat in enumerate(geo_dct): - geo_dct[idx] = merge_props_geom(feat) - elif geo_dct["type"] == "Feature": - geo_dct = merge_props_geom(geo_dct) - else: - geo_dct = {"type": "Feature", "geometry": geo_dct} - - return geo_dct - - -def numpy_is_subtype(dtype: Any, subtype: Any) -> bool: - try: - return np.issubdtype(dtype, subtype) - except (NotImplementedError, TypeError): - return False - - -def sanitize_dataframe(df: pd.DataFrame) -> pd.DataFrame: # noqa: C901 - """Sanitize a DataFrame to prepare it for serialization. - - * Make a copy - * Convert RangeIndex columns to strings - * Raise ValueError if column names are not strings - * Raise ValueError if it has a hierarchical index. - * Convert categoricals to strings. - * Convert np.bool_ dtypes to Python bool objects - * Convert np.int dtypes to Python int objects - * Convert floats to objects and replace NaNs/infs with None. - * Convert DateTime dtypes into appropriate string representations - * Convert Nullable integers to objects and replace NaN with None - * Convert Nullable boolean to objects and replace NaN with None - * convert dedicated string column to objects and replace NaN with None - * Raise a ValueError for TimeDelta dtypes - """ - df = df.copy() - - if isinstance(df.columns, pd.RangeIndex): - df.columns = df.columns.astype(str) - - for col_name in df.columns: - if not isinstance(col_name, str): - raise ValueError( - "Dataframe contains invalid column name: {0!r}. " - "Column names must be strings".format(col_name) - ) - - if isinstance(df.index, pd.MultiIndex): - raise ValueError("Hierarchical indices not supported") - if isinstance(df.columns, pd.MultiIndex): - raise ValueError("Hierarchical indices not supported") - - def to_list_if_array(val): - if isinstance(val, np.ndarray): - return val.tolist() - else: - return val - - for dtype_item in df.dtypes.items(): - # We know that the column names are strings from the isinstance check - # further above but mypy thinks it is of type Hashable and therefore does not - # let us assign it to the col_name variable which is already of type str. 
- col_name = cast(str, dtype_item[0]) - dtype = dtype_item[1] - dtype_name = str(dtype) - if dtype_name == "category": - # Work around bug in to_json for categorical types in older versions - # of pandas as they do not properly convert NaN values to null in to_json. - # We can probably remove this part once we require Pandas >= 1.0 - col = df[col_name].astype(object) - df[col_name] = col.where(col.notnull(), None) - elif dtype_name == "string": - # dedicated string datatype (since 1.0) - # https://pandas.pydata.org/pandas-docs/version/1.0.0/whatsnew/v1.0.0.html#dedicated-string-data-type - col = df[col_name].astype(object) - df[col_name] = col.where(col.notnull(), None) - elif dtype_name == "bool": - # convert numpy bools to objects; np.bool is not JSON serializable - df[col_name] = df[col_name].astype(object) - elif dtype_name == "boolean": - # dedicated boolean datatype (since 1.0) - # https://pandas.io/docs/user_guide/boolean.html - col = df[col_name].astype(object) - df[col_name] = col.where(col.notnull(), None) - elif dtype_name.startswith("datetime") or dtype_name.startswith("timestamp"): - # Convert datetimes to strings. This needs to be a full ISO string - # with time, which is why we cannot use ``col.astype(str)``. - # This is because Javascript parses date-only times in UTC, but - # parses full ISO-8601 dates as local time, and dates in Vega and - # Vega-Lite are displayed in local time by default. - # (see https://github.com/altair-viz/altair/issues/1027) - df[col_name] = ( - df[col_name].apply(lambda x: x.isoformat()).replace("NaT", "") - ) - elif dtype_name.startswith("timedelta"): - raise ValueError( - 'Field "{col_name}" has type "{dtype}" which is ' - "not supported by Altair. Please convert to " - "either a timestamp or a numerical value." - "".format(col_name=col_name, dtype=dtype) - ) - elif dtype_name.startswith("geometry"): - # geopandas >=0.6.1 uses the dtype geometry. 
Continue here - # otherwise it will give an error on np.issubdtype(dtype, np.integer) - continue - elif dtype_name in { - "Int8", - "Int16", - "Int32", - "Int64", - "UInt8", - "UInt16", - "UInt32", - "UInt64", - "Float32", - "Float64", - }: # nullable integer datatypes (since 24.0) and nullable float datatypes (since 1.2.0) - # https://pandas.pydata.org/pandas-docs/version/0.25/whatsnew/v0.24.0.html#optional-integer-na-support - col = df[col_name].astype(object) - df[col_name] = col.where(col.notnull(), None) - elif numpy_is_subtype(dtype, np.integer): - # convert integers to objects; np.int is not JSON serializable - df[col_name] = df[col_name].astype(object) - elif numpy_is_subtype(dtype, np.floating): - # For floats, convert to Python float: np.float is not JSON serializable - # Also convert NaN/inf values to null, as they are not JSON serializable - col = df[col_name] - bad_values = col.isnull() | np.isinf(col) - df[col_name] = col.astype(object).where(~bad_values, None) - elif dtype == object: - # Convert numpy arrays saved as objects to lists - # Arrays are not JSON serializable - col = df[col_name].astype(object).apply(to_list_if_array) - df[col_name] = col.where(col.notnull(), None) - return df - - -def sanitize_arrow_table(pa_table): - """Sanitize arrow table for JSON serialization""" - import pyarrow as pa - import pyarrow.compute as pc - - arrays = [] - schema = pa_table.schema - for name in schema.names: - array = pa_table[name] - dtype = schema.field(name).type - if str(dtype).startswith("timestamp"): - arrays.append(pc.strftime(array)) - elif str(dtype).startswith("duration"): - raise ValueError( - 'Field "{col_name}" has type "{dtype}" which is ' - "not supported by Altair. Please convert to " - "either a timestamp or a numerical value." - "".format(col_name=name, dtype=dtype) - ) - else: - arrays.append(array) - - return pa.Table.from_arrays(arrays, names=schema.names) - - -def parse_shorthand( - shorthand: Union[Dict[str, Any], str], - data: Optional[Union[pd.DataFrame, _DataFrameLike]] = None, - parse_aggregates: bool = True, - parse_window_ops: bool = False, - parse_timeunits: bool = True, - parse_types: bool = True, -) -> Dict[str, Any]: - """General tool to parse shorthand values - - These are of the form: - - - "col_name" - - "col_name:O" - - "average(col_name)" - - "average(col_name):O" - - Optionally, a dataframe may be supplied, from which the type - will be inferred if not specified in the shorthand. - - Parameters - ---------- - shorthand : dict or string - The shorthand representation to be parsed - data : DataFrame, optional - If specified and of type DataFrame, then use these values to infer the - column type if not provided by the shorthand. - parse_aggregates : boolean - If True (default), then parse aggregate functions within the shorthand. - parse_window_ops : boolean - If True then parse window operations within the shorthand (default:False) - parse_timeunits : boolean - If True (default), then parse timeUnits from within the shorthand - parse_types : boolean - If True (default), then parse typecodes within the shorthand - - Returns - ------- - attrs : dict - a dictionary of attributes extracted from the shorthand - - Examples - -------- - >>> data = pd.DataFrame({'foo': ['A', 'B', 'A', 'B'], - ... 
'bar': [1, 2, 3, 4]}) - - >>> parse_shorthand('name') == {'field': 'name'} - True - - >>> parse_shorthand('name:Q') == {'field': 'name', 'type': 'quantitative'} - True - - >>> parse_shorthand('average(col)') == {'aggregate': 'average', 'field': 'col'} - True - - >>> parse_shorthand('foo:O') == {'field': 'foo', 'type': 'ordinal'} - True - - >>> parse_shorthand('min(foo):Q') == {'aggregate': 'min', 'field': 'foo', 'type': 'quantitative'} - True - - >>> parse_shorthand('month(col)') == {'field': 'col', 'timeUnit': 'month', 'type': 'temporal'} - True - - >>> parse_shorthand('year(col):O') == {'field': 'col', 'timeUnit': 'year', 'type': 'ordinal'} - True - - >>> parse_shorthand('foo', data) == {'field': 'foo', 'type': 'nominal'} - True - - >>> parse_shorthand('bar', data) == {'field': 'bar', 'type': 'quantitative'} - True - - >>> parse_shorthand('bar:O', data) == {'field': 'bar', 'type': 'ordinal'} - True - - >>> parse_shorthand('sum(bar)', data) == {'aggregate': 'sum', 'field': 'bar', 'type': 'quantitative'} - True - - >>> parse_shorthand('count()', data) == {'aggregate': 'count', 'type': 'quantitative'} - True - """ - from altair.utils._importers import pyarrow_available - - if not shorthand: - return {} - - valid_typecodes = list(TYPECODE_MAP) + list(INV_TYPECODE_MAP) - - units = { - "field": "(?P.*)", - "type": "(?P{})".format("|".join(valid_typecodes)), - "agg_count": "(?Pcount)", - "op_count": "(?Pcount)", - "aggregate": "(?P{})".format("|".join(AGGREGATES)), - "window_op": "(?P{})".format("|".join(AGGREGATES + WINDOW_AGGREGATES)), - "timeUnit": "(?P{})".format("|".join(TIMEUNITS)), - } - - patterns = [] - - if parse_aggregates: - patterns.extend([r"{agg_count}\(\)"]) - patterns.extend([r"{aggregate}\({field}\)"]) - if parse_window_ops: - patterns.extend([r"{op_count}\(\)"]) - patterns.extend([r"{window_op}\({field}\)"]) - if parse_timeunits: - patterns.extend([r"{timeUnit}\({field}\)"]) - - patterns.extend([r"{field}"]) - - if parse_types: - patterns = list(itertools.chain(*((p + ":{type}", p) for p in patterns))) - - regexps = ( - re.compile(r"\A" + p.format(**units) + r"\Z", re.DOTALL) for p in patterns - ) - - # find matches depending on valid fields passed - if isinstance(shorthand, dict): - attrs = shorthand - else: - attrs = next( - exp.match(shorthand).groupdict() # type: ignore[union-attr] - for exp in regexps - if exp.match(shorthand) is not None - ) - - # Handle short form of the type expression - if "type" in attrs: - attrs["type"] = INV_TYPECODE_MAP.get(attrs["type"], attrs["type"]) - - # counts are quantitative by default - if attrs == {"aggregate": "count"}: - attrs["type"] = "quantitative" - - # times are temporal by default - if "timeUnit" in attrs and "type" not in attrs: - attrs["type"] = "temporal" - - # if data is specified and type is not, infer type from data - if "type" not in attrs: - if pyarrow_available() and data is not None and hasattr(data, "__dataframe__"): - dfi = data.__dataframe__() - if "field" in attrs: - unescaped_field = attrs["field"].replace("\\", "") - if unescaped_field in dfi.column_names(): - column = dfi.get_column_by_name(unescaped_field) - try: - attrs["type"] = infer_vegalite_type_for_dfi_column(column) - except (NotImplementedError, AttributeError, ValueError): - # Fall back to pandas-based inference. 
- # Note: The AttributeError catch is a workaround for - # https://github.com/pandas-dev/pandas/issues/55332 - if isinstance(data, pd.DataFrame): - attrs["type"] = infer_vegalite_type(data[unescaped_field]) - else: - raise - - if isinstance(attrs["type"], tuple): - attrs["sort"] = attrs["type"][1] - attrs["type"] = attrs["type"][0] - elif isinstance(data, pd.DataFrame): - # Fallback if pyarrow is not installed or if pandas is older than 1.5 - # - # Remove escape sequences so that types can be inferred for columns with special characters - if "field" in attrs and attrs["field"].replace("\\", "") in data.columns: - attrs["type"] = infer_vegalite_type( - data[attrs["field"].replace("\\", "")] - ) - # ordered categorical dataframe columns return the type and sort order as a tuple - if isinstance(attrs["type"], tuple): - attrs["sort"] = attrs["type"][1] - attrs["type"] = attrs["type"][0] - - # If an unescaped colon is still present, it's often due to an incorrect data type specification - # but could also be due to using a column name with ":" in it. - if ( - "field" in attrs - and ":" in attrs["field"] - and attrs["field"][attrs["field"].rfind(":") - 1] != "\\" - ): - raise ValueError( - '"{}" '.format(attrs["field"].split(":")[-1]) - + "is not one of the valid encoding data types: {}.".format( - ", ".join(TYPECODE_MAP.values()) - ) - + "\nFor more details, see https://altair-viz.github.io/user_guide/encodings/index.html#encoding-data-types. " - + "If you are trying to use a column name that contains a colon, " - + 'prefix it with a backslash; for example "column\\:name" instead of "column:name".' - ) - return attrs - - -def infer_vegalite_type_for_dfi_column( - column: Union[Column, "PandasColumn"], -) -> Union[_InferredVegaLiteType, Tuple[_InferredVegaLiteType, list]]: - from pyarrow.interchange.from_dataframe import column_to_array - - try: - kind = column.dtype[0] - except NotImplementedError as e: - # Edge case hack: - # dtype access fails for pandas column with datetime64[ns, UTC] type, - # but all we need to know is that its temporal, so check the - # error message for the presence of datetime64. - # - # See https://github.com/pandas-dev/pandas/issues/54239 - if "datetime64" in e.args[0] or "timestamp" in e.args[0]: - return "temporal" - raise e - - if ( - kind == DtypeKind.CATEGORICAL - and column.describe_categorical["is_ordered"] - and column.describe_categorical["categories"] is not None - ): - # Treat ordered categorical column as Vega-Lite ordinal - categories_column = column.describe_categorical["categories"] - categories_array = column_to_array(categories_column) - return "ordinal", categories_array.to_pylist() - if kind in (DtypeKind.STRING, DtypeKind.CATEGORICAL, DtypeKind.BOOL): - return "nominal" - elif kind in (DtypeKind.INT, DtypeKind.UINT, DtypeKind.FLOAT): - return "quantitative" - elif kind == DtypeKind.DATETIME: - return "temporal" - else: - raise ValueError(f"Unexpected DtypeKind: {kind}") - - -def use_signature(Obj: Callable[_P, Any]): - """Apply call signature and documentation of Obj to the decorated method""" - - def decorate(f: Callable[..., _V]) -> Callable[_P, _V]: - # call-signature of f is exposed via __wrapped__. - # we want it to mimic Obj.__init__ - f.__wrapped__ = Obj.__init__ # type: ignore - f._uses_signature = Obj # type: ignore - - # Supplement the docstring of f with information from Obj - if Obj.__doc__: - # Patch in a reference to the class this docstring is copied from, - # to generate a hyperlink. 
- doclines = Obj.__doc__.splitlines() - doclines[0] = f"Refer to :class:`{Obj.__name__}`" - - if f.__doc__: - doc = f.__doc__ + "\n".join(doclines[1:]) - else: - doc = "\n".join(doclines) - try: - f.__doc__ = doc - except AttributeError: - # __doc__ is not modifiable for classes in Python < 3.3 - pass - - return f - - return decorate - - -def update_nested( - original: MutableMapping, update: Mapping, copy: bool = False -) -> MutableMapping: - """Update nested dictionaries - - Parameters - ---------- - original : MutableMapping - the original (nested) dictionary, which will be updated in-place - update : Mapping - the nested dictionary of updates - copy : bool, default False - if True, then copy the original dictionary rather than modifying it - - Returns - ------- - original : MutableMapping - a reference to the (modified) original dict - - Examples - -------- - >>> original = {'x': {'b': 2, 'c': 4}} - >>> update = {'x': {'b': 5, 'd': 6}, 'y': 40} - >>> update_nested(original, update) # doctest: +SKIP - {'x': {'b': 5, 'c': 4, 'd': 6}, 'y': 40} - >>> original # doctest: +SKIP - {'x': {'b': 5, 'c': 4, 'd': 6}, 'y': 40} - """ - if copy: - original = deepcopy(original) - for key, val in update.items(): - if isinstance(val, Mapping): - orig_val = original.get(key, {}) - if isinstance(orig_val, MutableMapping): - original[key] = update_nested(orig_val, val) - else: - original[key] = val - else: - original[key] = val - return original - - -def display_traceback(in_ipython: bool = True): - exc_info = sys.exc_info() - - if in_ipython: - from IPython.core.getipython import get_ipython - - ip = get_ipython() - else: - ip = None - - if ip is not None: - ip.showtraceback(exc_info) - else: - traceback.print_exception(*exc_info) - - -def infer_encoding_types(args: Sequence, kwargs: MutableMapping, channels: ModuleType): - """Infer typed keyword arguments for args and kwargs - - Parameters - ---------- - args : Sequence - Sequence of function args - kwargs : MutableMapping - Dict of function kwargs - channels : ModuleType - The module containing all altair encoding channel classes. - - Returns - ------- - kwargs : dict - All args and kwargs in a single dict, with keys and types - based on the channels mapping. - """ - # Construct a dictionary of channel type to encoding name - # TODO: cache this somehow? - channel_objs = (getattr(channels, name) for name in dir(channels)) - channel_objs = ( - c for c in channel_objs if isinstance(c, type) and issubclass(c, SchemaBase) - ) - channel_to_name: Dict[Type[SchemaBase], str] = { - c: c._encoding_name for c in channel_objs - } - name_to_channel: Dict[str, Dict[str, Type[SchemaBase]]] = {} - for chan, name in channel_to_name.items(): - chans = name_to_channel.setdefault(name, {}) - if chan.__name__.endswith("Datum"): - key = "datum" - elif chan.__name__.endswith("Value"): - key = "value" - else: - key = "field" - chans[key] = chan - - # First use the mapping to convert args to kwargs based on their types. 
- for arg in args: - if isinstance(arg, (list, tuple)) and len(arg) > 0: - type_ = type(arg[0]) - else: - type_ = type(arg) - - encoding = channel_to_name.get(type_, None) - if encoding is None: - raise NotImplementedError("positional of type {}" "".format(type_)) - if encoding in kwargs: - raise ValueError("encoding {} specified twice.".format(encoding)) - kwargs[encoding] = arg - - def _wrap_in_channel_class(obj, encoding): - if isinstance(obj, SchemaBase): - return obj - - if isinstance(obj, str): - obj = {"shorthand": obj} - - if isinstance(obj, (list, tuple)): - return [_wrap_in_channel_class(subobj, encoding) for subobj in obj] - - if encoding not in name_to_channel: - warnings.warn( - "Unrecognized encoding channel '{}'".format(encoding), stacklevel=1 - ) - return obj - - classes = name_to_channel[encoding] - cls = classes["value"] if "value" in obj else classes["field"] - - try: - # Don't force validation here; some objects won't be valid until - # they're created in the context of a chart. - return cls.from_dict(obj, validate=False) - except jsonschema.ValidationError: - # our attempts at finding the correct class have failed - return obj - - return { - encoding: _wrap_in_channel_class(obj, encoding) - for encoding, obj in kwargs.items() - } diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/anyio/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/anyio/__init__.py deleted file mode 100644 index 29fb3561e4f2dc9d3a764e756439c0dea2c9897a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/anyio/__init__.py +++ /dev/null @@ -1,169 +0,0 @@ -from __future__ import annotations - -__all__ = ( - "maybe_async", - "maybe_async_cm", - "run", - "sleep", - "sleep_forever", - "sleep_until", - "current_time", - "get_all_backends", - "get_cancelled_exc_class", - "BrokenResourceError", - "BrokenWorkerProcess", - "BusyResourceError", - "ClosedResourceError", - "DelimiterNotFound", - "EndOfStream", - "ExceptionGroup", - "IncompleteRead", - "TypedAttributeLookupError", - "WouldBlock", - "AsyncFile", - "Path", - "open_file", - "wrap_file", - "aclose_forcefully", - "open_signal_receiver", - "connect_tcp", - "connect_unix", - "create_tcp_listener", - "create_unix_listener", - "create_udp_socket", - "create_connected_udp_socket", - "getaddrinfo", - "getnameinfo", - "wait_socket_readable", - "wait_socket_writable", - "create_memory_object_stream", - "run_process", - "open_process", - "create_lock", - "CapacityLimiter", - "CapacityLimiterStatistics", - "Condition", - "ConditionStatistics", - "Event", - "EventStatistics", - "Lock", - "LockStatistics", - "Semaphore", - "SemaphoreStatistics", - "create_condition", - "create_event", - "create_semaphore", - "create_capacity_limiter", - "open_cancel_scope", - "fail_after", - "move_on_after", - "current_effective_deadline", - "TASK_STATUS_IGNORED", - "CancelScope", - "create_task_group", - "TaskInfo", - "get_current_task", - "get_running_tasks", - "wait_all_tasks_blocked", - "run_sync_in_worker_thread", - "run_async_from_thread", - "run_sync_from_thread", - "current_default_worker_thread_limiter", - "create_blocking_portal", - "start_blocking_portal", - "typed_attribute", - "TypedAttributeSet", - "TypedAttributeProvider", -) - -from typing import Any - -from ._core._compat import maybe_async, maybe_async_cm -from ._core._eventloop import ( - current_time, - get_all_backends, - get_cancelled_exc_class, - run, - sleep, - sleep_forever, - sleep_until, -) 
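As an illustrative aside, the public API re-exported above is typically driven through `anyio.run` and a task group; a minimal sketch (the coroutine name `main` and the sleep durations are arbitrary, not taken from this module):

    import anyio

    async def main() -> None:
        # Start two sleeps concurrently in a task group; the `async with`
        # block exits only once both child tasks have finished.
        start = anyio.current_time()
        async with anyio.create_task_group() as tg:
            tg.start_soon(anyio.sleep, 0.1)
            tg.start_soon(anyio.sleep, 0.2)
        print("elapsed:", anyio.current_time() - start)  # ~0.2s, not 0.3s

    anyio.run(main)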
-from ._core._exceptions import ( - BrokenResourceError, - BrokenWorkerProcess, - BusyResourceError, - ClosedResourceError, - DelimiterNotFound, - EndOfStream, - ExceptionGroup, - IncompleteRead, - TypedAttributeLookupError, - WouldBlock, -) -from ._core._fileio import AsyncFile, Path, open_file, wrap_file -from ._core._resources import aclose_forcefully -from ._core._signals import open_signal_receiver -from ._core._sockets import ( - connect_tcp, - connect_unix, - create_connected_udp_socket, - create_tcp_listener, - create_udp_socket, - create_unix_listener, - getaddrinfo, - getnameinfo, - wait_socket_readable, - wait_socket_writable, -) -from ._core._streams import create_memory_object_stream -from ._core._subprocesses import open_process, run_process -from ._core._synchronization import ( - CapacityLimiter, - CapacityLimiterStatistics, - Condition, - ConditionStatistics, - Event, - EventStatistics, - Lock, - LockStatistics, - Semaphore, - SemaphoreStatistics, - create_capacity_limiter, - create_condition, - create_event, - create_lock, - create_semaphore, -) -from ._core._tasks import ( - TASK_STATUS_IGNORED, - CancelScope, - create_task_group, - current_effective_deadline, - fail_after, - move_on_after, - open_cancel_scope, -) -from ._core._testing import ( - TaskInfo, - get_current_task, - get_running_tasks, - wait_all_tasks_blocked, -) -from ._core._typedattr import TypedAttributeProvider, TypedAttributeSet, typed_attribute - -# Re-exported here, for backwards compatibility -# isort: off -from .to_thread import current_default_worker_thread_limiter, run_sync_in_worker_thread -from .from_thread import ( - create_blocking_portal, - run_async_from_thread, - run_sync_from_thread, - start_blocking_portal, -) - -# Re-export imports so they look like they live directly in this package -key: str -value: Any -for key, value in list(locals().items()): - if getattr(value, "__module__", "").startswith("anyio."): - value.__module__ = __name__ diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/click/_compat.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/click/_compat.py deleted file mode 100644 index 23f8866598b4b4eb836b9d9b210ebd395fd0c557..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/click/_compat.py +++ /dev/null @@ -1,623 +0,0 @@ -import codecs -import io -import os -import re -import sys -import typing as t -from weakref import WeakKeyDictionary - -CYGWIN = sys.platform.startswith("cygwin") -WIN = sys.platform.startswith("win") -auto_wrap_for_ansi: t.Optional[t.Callable[[t.TextIO], t.TextIO]] = None -_ansi_re = re.compile(r"\033\[[;?0-9]*[a-zA-Z]") - - -def _make_text_stream( - stream: t.BinaryIO, - encoding: t.Optional[str], - errors: t.Optional[str], - force_readable: bool = False, - force_writable: bool = False, -) -> t.TextIO: - if encoding is None: - encoding = get_best_encoding(stream) - if errors is None: - errors = "replace" - return _NonClosingTextIOWrapper( - stream, - encoding, - errors, - line_buffering=True, - force_readable=force_readable, - force_writable=force_writable, - ) - - -def is_ascii_encoding(encoding: str) -> bool: - """Checks if a given encoding is ascii.""" - try: - return codecs.lookup(encoding).name == "ascii" - except LookupError: - return False - - -def get_best_encoding(stream: t.IO[t.Any]) -> str: - """Returns the default stream encoding if not found.""" - rv = getattr(stream, "encoding", None) or sys.getdefaultencoding() - if is_ascii_encoding(rv): 
- return "utf-8" - return rv - - -class _NonClosingTextIOWrapper(io.TextIOWrapper): - def __init__( - self, - stream: t.BinaryIO, - encoding: t.Optional[str], - errors: t.Optional[str], - force_readable: bool = False, - force_writable: bool = False, - **extra: t.Any, - ) -> None: - self._stream = stream = t.cast( - t.BinaryIO, _FixupStream(stream, force_readable, force_writable) - ) - super().__init__(stream, encoding, errors, **extra) - - def __del__(self) -> None: - try: - self.detach() - except Exception: - pass - - def isatty(self) -> bool: - # https://bitbucket.org/pypy/pypy/issue/1803 - return self._stream.isatty() - - -class _FixupStream: - """The new io interface needs more from streams than streams - traditionally implement. As such, this fix-up code is necessary in - some circumstances. - - The forcing of readable and writable flags are there because some tools - put badly patched objects on sys (one such offender are certain version - of jupyter notebook). - """ - - def __init__( - self, - stream: t.BinaryIO, - force_readable: bool = False, - force_writable: bool = False, - ): - self._stream = stream - self._force_readable = force_readable - self._force_writable = force_writable - - def __getattr__(self, name: str) -> t.Any: - return getattr(self._stream, name) - - def read1(self, size: int) -> bytes: - f = getattr(self._stream, "read1", None) - - if f is not None: - return t.cast(bytes, f(size)) - - return self._stream.read(size) - - def readable(self) -> bool: - if self._force_readable: - return True - x = getattr(self._stream, "readable", None) - if x is not None: - return t.cast(bool, x()) - try: - self._stream.read(0) - except Exception: - return False - return True - - def writable(self) -> bool: - if self._force_writable: - return True - x = getattr(self._stream, "writable", None) - if x is not None: - return t.cast(bool, x()) - try: - self._stream.write("") # type: ignore - except Exception: - try: - self._stream.write(b"") - except Exception: - return False - return True - - def seekable(self) -> bool: - x = getattr(self._stream, "seekable", None) - if x is not None: - return t.cast(bool, x()) - try: - self._stream.seek(self._stream.tell()) - except Exception: - return False - return True - - -def _is_binary_reader(stream: t.IO[t.Any], default: bool = False) -> bool: - try: - return isinstance(stream.read(0), bytes) - except Exception: - return default - # This happens in some cases where the stream was already - # closed. In this case, we assume the default. - - -def _is_binary_writer(stream: t.IO[t.Any], default: bool = False) -> bool: - try: - stream.write(b"") - except Exception: - try: - stream.write("") - return False - except Exception: - pass - return default - return True - - -def _find_binary_reader(stream: t.IO[t.Any]) -> t.Optional[t.BinaryIO]: - # We need to figure out if the given stream is already binary. - # This can happen because the official docs recommend detaching - # the streams to get binary streams. Some code might do this, so - # we need to deal with this case explicitly. - if _is_binary_reader(stream, False): - return t.cast(t.BinaryIO, stream) - - buf = getattr(stream, "buffer", None) - - # Same situation here; this time we assume that the buffer is - # actually binary in case it's closed. - if buf is not None and _is_binary_reader(buf, True): - return t.cast(t.BinaryIO, buf) - - return None - - -def _find_binary_writer(stream: t.IO[t.Any]) -> t.Optional[t.BinaryIO]: - # We need to figure out if the given stream is already binary. 
- # This can happen because the official docs recommend detaching - # the streams to get binary streams. Some code might do this, so - # we need to deal with this case explicitly. - if _is_binary_writer(stream, False): - return t.cast(t.BinaryIO, stream) - - buf = getattr(stream, "buffer", None) - - # Same situation here; this time we assume that the buffer is - # actually binary in case it's closed. - if buf is not None and _is_binary_writer(buf, True): - return t.cast(t.BinaryIO, buf) - - return None - - -def _stream_is_misconfigured(stream: t.TextIO) -> bool: - """A stream is misconfigured if its encoding is ASCII.""" - # If the stream does not have an encoding set, we assume it's set - # to ASCII. This appears to happen in certain unittest - # environments. It's not quite clear what the correct behavior is - # but this at least will force Click to recover somehow. - return is_ascii_encoding(getattr(stream, "encoding", None) or "ascii") - - -def _is_compat_stream_attr(stream: t.TextIO, attr: str, value: t.Optional[str]) -> bool: - """A stream attribute is compatible if it is equal to the - desired value or the desired value is unset and the attribute - has a value. - """ - stream_value = getattr(stream, attr, None) - return stream_value == value or (value is None and stream_value is not None) - - -def _is_compatible_text_stream( - stream: t.TextIO, encoding: t.Optional[str], errors: t.Optional[str] -) -> bool: - """Check if a stream's encoding and errors attributes are - compatible with the desired values. - """ - return _is_compat_stream_attr( - stream, "encoding", encoding - ) and _is_compat_stream_attr(stream, "errors", errors) - - -def _force_correct_text_stream( - text_stream: t.IO[t.Any], - encoding: t.Optional[str], - errors: t.Optional[str], - is_binary: t.Callable[[t.IO[t.Any], bool], bool], - find_binary: t.Callable[[t.IO[t.Any]], t.Optional[t.BinaryIO]], - force_readable: bool = False, - force_writable: bool = False, -) -> t.TextIO: - if is_binary(text_stream, False): - binary_reader = t.cast(t.BinaryIO, text_stream) - else: - text_stream = t.cast(t.TextIO, text_stream) - # If the stream looks compatible, and won't default to a - # misconfigured ascii encoding, return it as-is. - if _is_compatible_text_stream(text_stream, encoding, errors) and not ( - encoding is None and _stream_is_misconfigured(text_stream) - ): - return text_stream - - # Otherwise, get the underlying binary reader. - possible_binary_reader = find_binary(text_stream) - - # If that's not possible, silently use the original reader - # and get mojibake instead of exceptions. - if possible_binary_reader is None: - return text_stream - - binary_reader = possible_binary_reader - - # Default errors to replace instead of strict in order to get - # something that works. - if errors is None: - errors = "replace" - - # Wrap the binary stream in a text stream with the correct - # encoding parameters. 
- return _make_text_stream( - binary_reader, - encoding, - errors, - force_readable=force_readable, - force_writable=force_writable, - ) - - -def _force_correct_text_reader( - text_reader: t.IO[t.Any], - encoding: t.Optional[str], - errors: t.Optional[str], - force_readable: bool = False, -) -> t.TextIO: - return _force_correct_text_stream( - text_reader, - encoding, - errors, - _is_binary_reader, - _find_binary_reader, - force_readable=force_readable, - ) - - -def _force_correct_text_writer( - text_writer: t.IO[t.Any], - encoding: t.Optional[str], - errors: t.Optional[str], - force_writable: bool = False, -) -> t.TextIO: - return _force_correct_text_stream( - text_writer, - encoding, - errors, - _is_binary_writer, - _find_binary_writer, - force_writable=force_writable, - ) - - -def get_binary_stdin() -> t.BinaryIO: - reader = _find_binary_reader(sys.stdin) - if reader is None: - raise RuntimeError("Was not able to determine binary stream for sys.stdin.") - return reader - - -def get_binary_stdout() -> t.BinaryIO: - writer = _find_binary_writer(sys.stdout) - if writer is None: - raise RuntimeError("Was not able to determine binary stream for sys.stdout.") - return writer - - -def get_binary_stderr() -> t.BinaryIO: - writer = _find_binary_writer(sys.stderr) - if writer is None: - raise RuntimeError("Was not able to determine binary stream for sys.stderr.") - return writer - - -def get_text_stdin( - encoding: t.Optional[str] = None, errors: t.Optional[str] = None -) -> t.TextIO: - rv = _get_windows_console_stream(sys.stdin, encoding, errors) - if rv is not None: - return rv - return _force_correct_text_reader(sys.stdin, encoding, errors, force_readable=True) - - -def get_text_stdout( - encoding: t.Optional[str] = None, errors: t.Optional[str] = None -) -> t.TextIO: - rv = _get_windows_console_stream(sys.stdout, encoding, errors) - if rv is not None: - return rv - return _force_correct_text_writer(sys.stdout, encoding, errors, force_writable=True) - - -def get_text_stderr( - encoding: t.Optional[str] = None, errors: t.Optional[str] = None -) -> t.TextIO: - rv = _get_windows_console_stream(sys.stderr, encoding, errors) - if rv is not None: - return rv - return _force_correct_text_writer(sys.stderr, encoding, errors, force_writable=True) - - -def _wrap_io_open( - file: t.Union[str, "os.PathLike[str]", int], - mode: str, - encoding: t.Optional[str], - errors: t.Optional[str], -) -> t.IO[t.Any]: - """Handles not passing ``encoding`` and ``errors`` in binary mode.""" - if "b" in mode: - return open(file, mode) - - return open(file, mode, encoding=encoding, errors=errors) - - -def open_stream( - filename: "t.Union[str, os.PathLike[str]]", - mode: str = "r", - encoding: t.Optional[str] = None, - errors: t.Optional[str] = "strict", - atomic: bool = False, -) -> t.Tuple[t.IO[t.Any], bool]: - binary = "b" in mode - filename = os.fspath(filename) - - # Standard streams first. These are simple because they ignore the - # atomic flag. Use fsdecode to handle Path("-"). - if os.fsdecode(filename) == "-": - if any(m in mode for m in ["w", "a", "x"]): - if binary: - return get_binary_stdout(), False - return get_text_stdout(encoding=encoding, errors=errors), False - if binary: - return get_binary_stdin(), False - return get_text_stdin(encoding=encoding, errors=errors), False - - # Non-atomic writes directly go out through the regular open functions. 
- if not atomic: - return _wrap_io_open(filename, mode, encoding, errors), True - - # Some usability stuff for atomic writes - if "a" in mode: - raise ValueError( - "Appending to an existing file is not supported, because that" - " would involve an expensive `copy`-operation to a temporary" - " file. Open the file in normal `w`-mode and copy explicitly" - " if that's what you're after." - ) - if "x" in mode: - raise ValueError("Use the `overwrite`-parameter instead.") - if "w" not in mode: - raise ValueError("Atomic writes only make sense with `w`-mode.") - - # Atomic writes are more complicated. They work by opening a file - # as a proxy in the same folder and then using the fdopen - # functionality to wrap it in a Python file. Then we wrap it in an - # atomic file that moves the file over on close. - import errno - import random - - try: - perm: t.Optional[int] = os.stat(filename).st_mode - except OSError: - perm = None - - flags = os.O_RDWR | os.O_CREAT | os.O_EXCL - - if binary: - flags |= getattr(os, "O_BINARY", 0) - - while True: - tmp_filename = os.path.join( - os.path.dirname(filename), - f".__atomic-write{random.randrange(1 << 32):08x}", - ) - try: - fd = os.open(tmp_filename, flags, 0o666 if perm is None else perm) - break - except OSError as e: - if e.errno == errno.EEXIST or ( - os.name == "nt" - and e.errno == errno.EACCES - and os.path.isdir(e.filename) - and os.access(e.filename, os.W_OK) - ): - continue - raise - - if perm is not None: - os.chmod(tmp_filename, perm) # in case perm includes bits in umask - - f = _wrap_io_open(fd, mode, encoding, errors) - af = _AtomicFile(f, tmp_filename, os.path.realpath(filename)) - return t.cast(t.IO[t.Any], af), True - - -class _AtomicFile: - def __init__(self, f: t.IO[t.Any], tmp_filename: str, real_filename: str) -> None: - self._f = f - self._tmp_filename = tmp_filename - self._real_filename = real_filename - self.closed = False - - @property - def name(self) -> str: - return self._real_filename - - def close(self, delete: bool = False) -> None: - if self.closed: - return - self._f.close() - os.replace(self._tmp_filename, self._real_filename) - self.closed = True - - def __getattr__(self, name: str) -> t.Any: - return getattr(self._f, name) - - def __enter__(self) -> "_AtomicFile": - return self - - def __exit__(self, exc_type: t.Optional[t.Type[BaseException]], *_: t.Any) -> None: - self.close(delete=exc_type is not None) - - def __repr__(self) -> str: - return repr(self._f) - - -def strip_ansi(value: str) -> str: - return _ansi_re.sub("", value) - - -def _is_jupyter_kernel_output(stream: t.IO[t.Any]) -> bool: - while isinstance(stream, (_FixupStream, _NonClosingTextIOWrapper)): - stream = stream._stream - - return stream.__class__.__module__.startswith("ipykernel.") - - -def should_strip_ansi( - stream: t.Optional[t.IO[t.Any]] = None, color: t.Optional[bool] = None -) -> bool: - if color is None: - if stream is None: - stream = sys.stdin - return not isatty(stream) and not _is_jupyter_kernel_output(stream) - return not color - - -# On Windows, wrap the output streams with colorama to support ANSI -# color codes. 
-# NOTE: double check is needed so mypy does not analyze this on Linux -if sys.platform.startswith("win") and WIN: - from ._winconsole import _get_windows_console_stream - - def _get_argv_encoding() -> str: - import locale - - return locale.getpreferredencoding() - - _ansi_stream_wrappers: t.MutableMapping[t.TextIO, t.TextIO] = WeakKeyDictionary() - - def auto_wrap_for_ansi( # noqa: F811 - stream: t.TextIO, color: t.Optional[bool] = None - ) -> t.TextIO: - """Support ANSI color and style codes on Windows by wrapping a - stream with colorama. - """ - try: - cached = _ansi_stream_wrappers.get(stream) - except Exception: - cached = None - - if cached is not None: - return cached - - import colorama - - strip = should_strip_ansi(stream, color) - ansi_wrapper = colorama.AnsiToWin32(stream, strip=strip) - rv = t.cast(t.TextIO, ansi_wrapper.stream) - _write = rv.write - - def _safe_write(s): - try: - return _write(s) - except BaseException: - ansi_wrapper.reset_all() - raise - - rv.write = _safe_write - - try: - _ansi_stream_wrappers[stream] = rv - except Exception: - pass - - return rv - -else: - - def _get_argv_encoding() -> str: - return getattr(sys.stdin, "encoding", None) or sys.getfilesystemencoding() - - def _get_windows_console_stream( - f: t.TextIO, encoding: t.Optional[str], errors: t.Optional[str] - ) -> t.Optional[t.TextIO]: - return None - - -def term_len(x: str) -> int: - return len(strip_ansi(x)) - - -def isatty(stream: t.IO[t.Any]) -> bool: - try: - return stream.isatty() - except Exception: - return False - - -def _make_cached_stream_func( - src_func: t.Callable[[], t.Optional[t.TextIO]], - wrapper_func: t.Callable[[], t.TextIO], -) -> t.Callable[[], t.Optional[t.TextIO]]: - cache: t.MutableMapping[t.TextIO, t.TextIO] = WeakKeyDictionary() - - def func() -> t.Optional[t.TextIO]: - stream = src_func() - - if stream is None: - return None - - try: - rv = cache.get(stream) - except Exception: - rv = None - if rv is not None: - return rv - rv = wrapper_func() - try: - cache[stream] = rv - except Exception: - pass - return rv - - return func - - -_default_text_stdin = _make_cached_stream_func(lambda: sys.stdin, get_text_stdin) -_default_text_stdout = _make_cached_stream_func(lambda: sys.stdout, get_text_stdout) -_default_text_stderr = _make_cached_stream_func(lambda: sys.stderr, get_text_stderr) - - -binary_streams: t.Mapping[str, t.Callable[[], t.BinaryIO]] = { - "stdin": get_binary_stdin, - "stdout": get_binary_stdout, - "stderr": get_binary_stderr, -} - -text_streams: t.Mapping[ - str, t.Callable[[t.Optional[str], t.Optional[str]], t.TextIO] -] = { - "stdin": get_text_stdin, - "stdout": get_text_stdout, - "stderr": get_text_stderr, -} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I_D_.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I_D_.py deleted file mode 100644 index 536ff2f98a0abb8b27fe6da44199534a32fd0c3e..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I_D_.py +++ /dev/null @@ -1,5 +0,0 @@ -from .T_S_I_V_ import table_T_S_I_V_ - - -class table_T_S_I_D_(table_T_S_I_V_): - pass diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/backends/backend_gtk4.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/backends/backend_gtk4.py deleted file mode 100644 index 
cb4006048d5536b08acc264a5e5766209ca085ef..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/backends/backend_gtk4.py +++ /dev/null @@ -1,606 +0,0 @@ -import functools -import io -import os - -import matplotlib as mpl -from matplotlib import _api, backend_tools, cbook -from matplotlib.backend_bases import ( - ToolContainerBase, KeyEvent, LocationEvent, MouseEvent, ResizeEvent, - CloseEvent) - -try: - import gi -except ImportError as err: - raise ImportError("The GTK4 backends require PyGObject") from err - -try: - # :raises ValueError: If module/version is already loaded, already - # required, or unavailable. - gi.require_version("Gtk", "4.0") -except ValueError as e: - # in this case we want to re-raise as ImportError so the - # auto-backend selection logic correctly skips. - raise ImportError(e) from e - -from gi.repository import Gio, GLib, Gtk, Gdk, GdkPixbuf -from . import _backend_gtk -from ._backend_gtk import ( # noqa: F401 # pylint: disable=W0611 - _BackendGTK, _FigureCanvasGTK, _FigureManagerGTK, _NavigationToolbar2GTK, - TimerGTK as TimerGTK4, -) - - -class FigureCanvasGTK4(_FigureCanvasGTK, Gtk.DrawingArea): - required_interactive_framework = "gtk4" - supports_blit = False - manager_class = _api.classproperty(lambda cls: FigureManagerGTK4) - _context_is_scaled = False - - def __init__(self, figure=None): - super().__init__(figure=figure) - - self.set_hexpand(True) - self.set_vexpand(True) - - self._idle_draw_id = 0 - self._rubberband_rect = None - - self.set_draw_func(self._draw_func) - self.connect('resize', self.resize_event) - self.connect('notify::scale-factor', self._update_device_pixel_ratio) - - click = Gtk.GestureClick() - click.set_button(0) # All buttons. - click.connect('pressed', self.button_press_event) - click.connect('released', self.button_release_event) - self.add_controller(click) - - key = Gtk.EventControllerKey() - key.connect('key-pressed', self.key_press_event) - key.connect('key-released', self.key_release_event) - self.add_controller(key) - - motion = Gtk.EventControllerMotion() - motion.connect('motion', self.motion_notify_event) - motion.connect('enter', self.enter_notify_event) - motion.connect('leave', self.leave_notify_event) - self.add_controller(motion) - - scroll = Gtk.EventControllerScroll.new( - Gtk.EventControllerScrollFlags.VERTICAL) - scroll.connect('scroll', self.scroll_event) - self.add_controller(scroll) - - self.set_focusable(True) - - css = Gtk.CssProvider() - style = '.matplotlib-canvas { background-color: white; }' - if Gtk.check_version(4, 9, 3) is None: - css.load_from_data(style, -1) - else: - css.load_from_data(style.encode('utf-8')) - style_ctx = self.get_style_context() - style_ctx.add_provider(css, Gtk.STYLE_PROVIDER_PRIORITY_APPLICATION) - style_ctx.add_class("matplotlib-canvas") - - def destroy(self): - CloseEvent("close_event", self)._process() - - def set_cursor(self, cursor): - # docstring inherited - self.set_cursor_from_name(_backend_gtk.mpl_to_gtk_cursor_name(cursor)) - - def _mpl_coords(self, xy=None): - """ - Convert the *xy* position of a GTK event, or of the current cursor - position if *xy* is None, to Matplotlib coordinates. - - GTK use logical pixels, but the figure is scaled to physical pixels for - rendering. Transform to physical pixels so that all of the down-stream - transforms work as expected. - - Also, the origin is different and needs to be corrected. 
- """ - if xy is None: - surface = self.get_native().get_surface() - is_over, x, y, mask = surface.get_device_position( - self.get_display().get_default_seat().get_pointer()) - else: - x, y = xy - x = x * self.device_pixel_ratio - # flip y so y=0 is bottom of canvas - y = self.figure.bbox.height - y * self.device_pixel_ratio - return x, y - - def scroll_event(self, controller, dx, dy): - MouseEvent( - "scroll_event", self, *self._mpl_coords(), step=dy, - modifiers=self._mpl_modifiers(controller), - )._process() - return True - - def button_press_event(self, controller, n_press, x, y): - MouseEvent( - "button_press_event", self, *self._mpl_coords((x, y)), - controller.get_current_button(), - modifiers=self._mpl_modifiers(controller), - )._process() - self.grab_focus() - - def button_release_event(self, controller, n_press, x, y): - MouseEvent( - "button_release_event", self, *self._mpl_coords((x, y)), - controller.get_current_button(), - modifiers=self._mpl_modifiers(controller), - )._process() - - def key_press_event(self, controller, keyval, keycode, state): - KeyEvent( - "key_press_event", self, self._get_key(keyval, keycode, state), - *self._mpl_coords(), - )._process() - return True - - def key_release_event(self, controller, keyval, keycode, state): - KeyEvent( - "key_release_event", self, self._get_key(keyval, keycode, state), - *self._mpl_coords(), - )._process() - return True - - def motion_notify_event(self, controller, x, y): - MouseEvent( - "motion_notify_event", self, *self._mpl_coords((x, y)), - modifiers=self._mpl_modifiers(controller), - )._process() - - def enter_notify_event(self, controller, x, y): - LocationEvent( - "figure_enter_event", self, *self._mpl_coords((x, y)), - modifiers=self._mpl_modifiers(), - )._process() - - def leave_notify_event(self, controller): - LocationEvent( - "figure_leave_event", self, *self._mpl_coords(), - modifiers=self._mpl_modifiers(), - )._process() - - def resize_event(self, area, width, height): - self._update_device_pixel_ratio() - dpi = self.figure.dpi - winch = width * self.device_pixel_ratio / dpi - hinch = height * self.device_pixel_ratio / dpi - self.figure.set_size_inches(winch, hinch, forward=False) - ResizeEvent("resize_event", self)._process() - self.draw_idle() - - def _mpl_modifiers(self, controller=None): - if controller is None: - surface = self.get_native().get_surface() - is_over, x, y, event_state = surface.get_device_position( - self.get_display().get_default_seat().get_pointer()) - else: - event_state = controller.get_current_event_state() - mod_table = [ - ("ctrl", Gdk.ModifierType.CONTROL_MASK), - ("alt", Gdk.ModifierType.ALT_MASK), - ("shift", Gdk.ModifierType.SHIFT_MASK), - ("super", Gdk.ModifierType.SUPER_MASK), - ] - return [name for name, mask in mod_table if event_state & mask] - - def _get_key(self, keyval, keycode, state): - unikey = chr(Gdk.keyval_to_unicode(keyval)) - key = cbook._unikey_or_keysym_to_mplkey( - unikey, - Gdk.keyval_name(keyval)) - modifiers = [ - ("ctrl", Gdk.ModifierType.CONTROL_MASK, "control"), - ("alt", Gdk.ModifierType.ALT_MASK, "alt"), - ("shift", Gdk.ModifierType.SHIFT_MASK, "shift"), - ("super", Gdk.ModifierType.SUPER_MASK, "super"), - ] - mods = [ - mod for mod, mask, mod_key in modifiers - if (mod_key != key and state & mask - and not (mod == "shift" and unikey.isprintable()))] - return "+".join([*mods, key]) - - def _update_device_pixel_ratio(self, *args, **kwargs): - # We need to be careful in cases with mixed resolution displays if - # device_pixel_ratio changes. 
-        if self._set_device_pixel_ratio(self.get_scale_factor()):
-            self.draw()
-
-    def _draw_rubberband(self, rect):
-        self._rubberband_rect = rect
-        # TODO: Only update the rubberband area.
-        self.queue_draw()
-
-    def _draw_func(self, drawing_area, ctx, width, height):
-        self.on_draw_event(self, ctx)
-        self._post_draw(self, ctx)
-
-    def _post_draw(self, widget, ctx):
-        if self._rubberband_rect is None:
-            return
-
-        lw = 1
-        dash = 3
-        if not self._context_is_scaled:
-            x0, y0, w, h = (dim / self.device_pixel_ratio
-                            for dim in self._rubberband_rect)
-        else:
-            x0, y0, w, h = self._rubberband_rect
-            lw *= self.device_pixel_ratio
-            dash *= self.device_pixel_ratio
-        x1 = x0 + w
-        y1 = y0 + h
-
-        # Draw the lines from x0, y0 towards x1, y1 so that the
-        # dashes don't "jump" when moving the zoom box.
-        ctx.move_to(x0, y0)
-        ctx.line_to(x0, y1)
-        ctx.move_to(x0, y0)
-        ctx.line_to(x1, y0)
-        ctx.move_to(x0, y1)
-        ctx.line_to(x1, y1)
-        ctx.move_to(x1, y0)
-        ctx.line_to(x1, y1)
-
-        ctx.set_antialias(1)
-        ctx.set_line_width(lw)
-        ctx.set_dash((dash, dash), 0)
-        ctx.set_source_rgb(0, 0, 0)
-        ctx.stroke_preserve()
-
-        ctx.set_dash((dash, dash), dash)
-        ctx.set_source_rgb(1, 1, 1)
-        ctx.stroke()
-
-    def on_draw_event(self, widget, ctx):
-        # to be overwritten by GTK4Agg or GTK4Cairo
-        pass
-
-    def draw(self):
-        # docstring inherited
-        if self.is_drawable():
-            self.queue_draw()
-
-    def draw_idle(self):
-        # docstring inherited
-        if self._idle_draw_id != 0:
-            return
-        def idle_draw(*args):
-            try:
-                self.draw()
-            finally:
-                self._idle_draw_id = 0
-            return False
-        self._idle_draw_id = GLib.idle_add(idle_draw)
-
-    def flush_events(self):
-        # docstring inherited
-        context = GLib.MainContext.default()
-        while context.pending():
-            context.iteration(True)
-
-
-class NavigationToolbar2GTK4(_NavigationToolbar2GTK, Gtk.Box):
-    def __init__(self, canvas):
-        Gtk.Box.__init__(self)
-
-        self.add_css_class('toolbar')
-
-        self._gtk_ids = {}
-        for text, tooltip_text, image_file, callback in self.toolitems:
-            if text is None:
-                self.append(Gtk.Separator())
-                continue
-            image = Gtk.Image.new_from_gicon(
-                Gio.Icon.new_for_string(
-                    str(cbook._get_data_path('images',
-                                             f'{image_file}-symbolic.svg'))))
-            self._gtk_ids[text] = button = (
-                Gtk.ToggleButton() if callback in ['zoom', 'pan'] else
-                Gtk.Button())
-            button.set_child(image)
-            button.add_css_class('flat')
-            button.add_css_class('image-button')
-            # Save the handler id, so that we can block it as needed.
-            button._signal_handler = button.connect(
-                'clicked', getattr(self, callback))
-            button.set_tooltip_text(tooltip_text)
-            self.append(button)
-
-        # This filler item ensures the toolbar is always at least two text
-        # lines high. Otherwise the canvas gets redrawn as the mouse hovers
-        # over images because those use two-line messages which resize the
-        # toolbar.
-        label = Gtk.Label()
-        label.set_markup(
-            '<small>\N{NO-BREAK SPACE}\n\N{NO-BREAK SPACE}</small>')
-        label.set_hexpand(True)  # Push real message to the right.
-        self.append(label)
-
-        self.message = Gtk.Label()
-        self.message.set_justify(Gtk.Justification.RIGHT)
-        self.append(self.message)
-
-        _NavigationToolbar2GTK.__init__(self, canvas)
-
-    def save_figure(self, *args):
-        dialog = Gtk.FileChooserNative(
-            title='Save the figure',
-            transient_for=self.canvas.get_root(),
-            action=Gtk.FileChooserAction.SAVE,
-            modal=True)
-        self._save_dialog = dialog  # Must keep a reference.
-
-        ff = Gtk.FileFilter()
-        ff.set_name('All files')
-        ff.add_pattern('*')
-        dialog.add_filter(ff)
-        dialog.set_filter(ff)
-
-        formats = []
-        default_format = None
-        for i, (name, fmts) in enumerate(
-                self.canvas.get_supported_filetypes_grouped().items()):
-            ff = Gtk.FileFilter()
-            ff.set_name(name)
-            for fmt in fmts:
-                ff.add_pattern(f'*.{fmt}')
-            dialog.add_filter(ff)
-            formats.append(name)
-            if self.canvas.get_default_filetype() in fmts:
-                default_format = i
-        # Setting the choice doesn't always work, so make sure the default
-        # format is first.
-        formats = [formats[default_format], *formats[:default_format],
-                   *formats[default_format+1:]]
-        dialog.add_choice('format', 'File format', formats, formats)
-        dialog.set_choice('format', formats[default_format])
-
-        dialog.set_current_folder(Gio.File.new_for_path(
-            os.path.expanduser(mpl.rcParams['savefig.directory'])))
-        dialog.set_current_name(self.canvas.get_default_filename())
-
-        @functools.partial(dialog.connect, 'response')
-        def on_response(dialog, response):
-            file = dialog.get_file()
-            fmt = dialog.get_choice('format')
-            fmt = self.canvas.get_supported_filetypes_grouped()[fmt][0]
-            dialog.destroy()
-            self._save_dialog = None
-            if response != Gtk.ResponseType.ACCEPT:
-                return
-            # Save dir for next time, unless empty str (which means use cwd).
-            if mpl.rcParams['savefig.directory']:
-                parent = file.get_parent()
-                mpl.rcParams['savefig.directory'] = parent.get_path()
-            try:
-                self.canvas.figure.savefig(file.get_path(), format=fmt)
-            except Exception as e:
-                msg = Gtk.MessageDialog(
-                    transient_for=self.canvas.get_root(),
-                    message_type=Gtk.MessageType.ERROR,
-                    buttons=Gtk.ButtonsType.OK, modal=True,
-                    text=str(e))
-                msg.show()
-
-        dialog.show()
-
-
-class ToolbarGTK4(ToolContainerBase, Gtk.Box):
-    _icon_extension = '-symbolic.svg'
-
-    def __init__(self, toolmanager):
-        ToolContainerBase.__init__(self, toolmanager)
-        Gtk.Box.__init__(self)
-        self.set_property('orientation', Gtk.Orientation.HORIZONTAL)
-
-        # Tool items are created later, but must appear before the message.
-        self._tool_box = Gtk.Box()
-        self.append(self._tool_box)
-        self._groups = {}
-        self._toolitems = {}
-
-        # This filler item ensures the toolbar is always at least two text
-        # lines high. Otherwise the canvas gets redrawn as the mouse hovers
-        # over images because those use two-line messages which resize the
-        # toolbar.
-        label = Gtk.Label()
-        label.set_markup(
-            '<small>\N{NO-BREAK SPACE}\n\N{NO-BREAK SPACE}</small>')
-        label.set_hexpand(True)  # Push real message to the right.
-        self.append(label)
-
-        self._message = Gtk.Label()
-        self._message.set_justify(Gtk.Justification.RIGHT)
-        self.append(self._message)
-
-    def add_toolitem(self, name, group, position, image_file, description,
-                     toggle):
-        if toggle:
-            button = Gtk.ToggleButton()
-        else:
-            button = Gtk.Button()
-            button.set_label(name)
-        button.add_css_class('flat')
-
-        if image_file is not None:
-            image = Gtk.Image.new_from_gicon(
-                Gio.Icon.new_for_string(image_file))
-            button.set_child(image)
-            button.add_css_class('image-button')
-
-        if position is None:
-            position = -1
-
-        self._add_button(button, group, position)
-        signal = button.connect('clicked', self._call_tool, name)
-        button.set_tooltip_text(description)
-        self._toolitems.setdefault(name, [])
-        self._toolitems[name].append((button, signal))
-
-    def _find_child_at_position(self, group, position):
-        children = [None]
-        child = self._groups[group].get_first_child()
-        while child is not None:
-            children.append(child)
-            child = child.get_next_sibling()
-        return children[position]
-
-    def _add_button(self, button, group, position):
-        if group not in self._groups:
-            if self._groups:
-                self._add_separator()
-            group_box = Gtk.Box()
-            self._tool_box.append(group_box)
-            self._groups[group] = group_box
-        self._groups[group].insert_child_after(
-            button, self._find_child_at_position(group, position))
-
-    def _call_tool(self, btn, name):
-        self.trigger_tool(name)
-
-    def toggle_toolitem(self, name, toggled):
-        if name not in self._toolitems:
-            return
-        for toolitem, signal in self._toolitems[name]:
-            toolitem.handler_block(signal)
-            toolitem.set_active(toggled)
-            toolitem.handler_unblock(signal)
-
-    def remove_toolitem(self, name):
-        if name not in self._toolitems:
-            self.toolmanager.message_event(f'{name} not in toolbar', self)
-            return
-
-        for group in self._groups:
-            for toolitem, _signal in self._toolitems[name]:
-                if toolitem in self._groups[group]:
-                    self._groups[group].remove(toolitem)
-        del self._toolitems[name]
-
-    def _add_separator(self):
-        sep = Gtk.Separator()
-        sep.set_property("orientation", Gtk.Orientation.VERTICAL)
-        self._tool_box.append(sep)
-
-    def set_message(self, s):
-        self._message.set_label(s)
-
-
-@backend_tools._register_tool_class(FigureCanvasGTK4)
-class SaveFigureGTK4(backend_tools.SaveFigureBase):
-    def trigger(self, *args, **kwargs):
-        NavigationToolbar2GTK4.save_figure(
-            self._make_classic_style_pseudo_toolbar())
-
-
-@backend_tools._register_tool_class(FigureCanvasGTK4)
-class HelpGTK4(backend_tools.ToolHelpBase):
-    def _normalize_shortcut(self, key):
-        """
-        Convert Matplotlib key presses to GTK+ accelerator identifiers.
-
-        Related to `FigureCanvasGTK4._get_key`.
-        """
-        special = {
-            'backspace': 'BackSpace',
-            'pagedown': 'Page_Down',
-            'pageup': 'Page_Up',
-            'scroll_lock': 'Scroll_Lock',
-        }
-
-        parts = key.split('+')
-        mods = ['<' + mod + '>' for mod in parts[:-1]]
-        key = parts[-1]
-
-        if key in special:
-            key = special[key]
-        elif len(key) > 1:
-            key = key.capitalize()
-        elif key.isupper():
-            mods += ['<shift>']
-
-        return ''.join(mods) + key
-
-    def _is_valid_shortcut(self, key):
-        """
-        Check for a valid shortcut to be displayed.
-
-        - GTK will never send 'cmd+' (see `FigureCanvasGTK4._get_key`).
-        - The shortcut window only shows keyboard shortcuts, not mouse buttons.
- """ - return 'cmd+' not in key and not key.startswith('MouseButton.') - - def trigger(self, *args): - section = Gtk.ShortcutsSection() - - for name, tool in sorted(self.toolmanager.tools.items()): - if not tool.description: - continue - - # Putting everything in a separate group allows GTK to - # automatically split them into separate columns/pages, which is - # useful because we have lots of shortcuts, some with many keys - # that are very wide. - group = Gtk.ShortcutsGroup() - section.append(group) - # A hack to remove the title since we have no group naming. - child = group.get_first_child() - while child is not None: - child.set_visible(False) - child = child.get_next_sibling() - - shortcut = Gtk.ShortcutsShortcut( - accelerator=' '.join( - self._normalize_shortcut(key) - for key in self.toolmanager.get_tool_keymap(name) - if self._is_valid_shortcut(key)), - title=tool.name, - subtitle=tool.description) - group.append(shortcut) - - window = Gtk.ShortcutsWindow( - title='Help', - modal=True, - transient_for=self._figure.canvas.get_root()) - window.set_child(section) - - window.show() - - -@backend_tools._register_tool_class(FigureCanvasGTK4) -class ToolCopyToClipboardGTK4(backend_tools.ToolCopyToClipboardBase): - def trigger(self, *args, **kwargs): - with io.BytesIO() as f: - self.canvas.print_rgba(f) - w, h = self.canvas.get_width_height() - pb = GdkPixbuf.Pixbuf.new_from_data(f.getbuffer(), - GdkPixbuf.Colorspace.RGB, True, - 8, w, h, w*4) - clipboard = self.canvas.get_clipboard() - clipboard.set(pb) - - -backend_tools._register_tool_class( - FigureCanvasGTK4, _backend_gtk.ConfigureSubplotsGTK) -backend_tools._register_tool_class( - FigureCanvasGTK4, _backend_gtk.RubberbandGTK) -Toolbar = ToolbarGTK4 - - -class FigureManagerGTK4(_FigureManagerGTK): - _toolbar2_class = NavigationToolbar2GTK4 - _toolmanager_toolbar_class = ToolbarGTK4 - - -@_BackendGTK.export -class _BackendGTK4(_BackendGTK): - FigureCanvas = FigureCanvasGTK4 - FigureManager = FigureManagerGTK4 diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/random/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/random/__init__.py deleted file mode 100644 index 2e8f99fe3045b9c2b691a8ece67d0f06d9d73b08..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/random/__init__.py +++ /dev/null @@ -1,215 +0,0 @@ -""" -======================== -Random Number Generation -======================== - -Use ``default_rng()`` to create a `Generator` and call its methods. 
- -=============== ========================================================= -Generator ---------------- --------------------------------------------------------- -Generator Class implementing all of the random number distributions -default_rng Default constructor for ``Generator`` -=============== ========================================================= - -============================================= === -BitGenerator Streams that work with Generator ---------------------------------------------- --- -MT19937 -PCG64 -PCG64DXSM -Philox -SFC64 -============================================= === - -============================================= === -Getting entropy to initialize a BitGenerator ---------------------------------------------- --- -SeedSequence -============================================= === - - -Legacy ------- - -For backwards compatibility with previous versions of numpy before 1.17, the -various aliases to the global `RandomState` methods are left alone and do not -use the new `Generator` API. - -==================== ========================================================= -Utility functions --------------------- --------------------------------------------------------- -random Uniformly distributed floats over ``[0, 1)`` -bytes Uniformly distributed random bytes. -permutation Randomly permute a sequence / generate a random sequence. -shuffle Randomly permute a sequence in place. -choice Random sample from 1-D array. -==================== ========================================================= - -==================== ========================================================= -Compatibility -functions - removed -in the new API --------------------- --------------------------------------------------------- -rand Uniformly distributed values. -randn Normally distributed values. -ranf Uniformly distributed floating point numbers. -random_integers Uniformly distributed integers in a given range. - (deprecated, use ``integers(..., closed=True)`` instead) -random_sample Alias for `random_sample` -randint Uniformly distributed integers in a given range -seed Seed the legacy random number generator. -==================== ========================================================= - -==================== ========================================================= -Univariate -distributions --------------------- --------------------------------------------------------- -beta Beta distribution over ``[0, 1]``. -binomial Binomial distribution. -chisquare :math:`\\chi^2` distribution. -exponential Exponential distribution. -f F (Fisher-Snedecor) distribution. -gamma Gamma distribution. -geometric Geometric distribution. -gumbel Gumbel distribution. -hypergeometric Hypergeometric distribution. -laplace Laplace distribution. -logistic Logistic distribution. -lognormal Log-normal distribution. -logseries Logarithmic series distribution. -negative_binomial Negative binomial distribution. -noncentral_chisquare Non-central chi-square distribution. -noncentral_f Non-central F distribution. -normal Normal / Gaussian distribution. -pareto Pareto distribution. -poisson Poisson distribution. -power Power distribution. -rayleigh Rayleigh distribution. -triangular Triangular distribution. -uniform Uniform distribution. -vonmises Von Mises circular distribution. -wald Wald (inverse Gaussian) distribution. -weibull Weibull distribution. -zipf Zipf's distribution over ranked data. 
-==================== ========================================================= - -==================== ========================================================== -Multivariate -distributions --------------------- ---------------------------------------------------------- -dirichlet Multivariate generalization of Beta distribution. -multinomial Multivariate generalization of the binomial distribution. -multivariate_normal Multivariate generalization of the normal distribution. -==================== ========================================================== - -==================== ========================================================= -Standard -distributions --------------------- --------------------------------------------------------- -standard_cauchy Standard Cauchy-Lorentz distribution. -standard_exponential Standard exponential distribution. -standard_gamma Standard Gamma distribution. -standard_normal Standard normal distribution. -standard_t Standard Student's t-distribution. -==================== ========================================================= - -==================== ========================================================= -Internal functions --------------------- --------------------------------------------------------- -get_state Get tuple representing internal state of generator. -set_state Set state of generator. -==================== ========================================================= - - -""" -__all__ = [ - 'beta', - 'binomial', - 'bytes', - 'chisquare', - 'choice', - 'dirichlet', - 'exponential', - 'f', - 'gamma', - 'geometric', - 'get_state', - 'gumbel', - 'hypergeometric', - 'laplace', - 'logistic', - 'lognormal', - 'logseries', - 'multinomial', - 'multivariate_normal', - 'negative_binomial', - 'noncentral_chisquare', - 'noncentral_f', - 'normal', - 'pareto', - 'permutation', - 'poisson', - 'power', - 'rand', - 'randint', - 'randn', - 'random', - 'random_integers', - 'random_sample', - 'ranf', - 'rayleigh', - 'sample', - 'seed', - 'set_state', - 'shuffle', - 'standard_cauchy', - 'standard_exponential', - 'standard_gamma', - 'standard_normal', - 'standard_t', - 'triangular', - 'uniform', - 'vonmises', - 'wald', - 'weibull', - 'zipf', -] - -# add these for module-freeze analysis (like PyInstaller) -from . import _pickle -from . import _common -from . import _bounded_integers - -from ._generator import Generator, default_rng -from .bit_generator import SeedSequence, BitGenerator -from ._mt19937 import MT19937 -from ._pcg64 import PCG64, PCG64DXSM -from ._philox import Philox -from ._sfc64 import SFC64 -from .mtrand import * - -__all__ += ['Generator', 'RandomState', 'SeedSequence', 'MT19937', - 'Philox', 'PCG64', 'PCG64DXSM', 'SFC64', 'default_rng', - 'BitGenerator'] - - -def __RandomState_ctor(): - """Return a RandomState instance. - - This function exists solely to assist (un)pickling. - - Note that the state of the RandomState returned here is irrelevant, as this - function's entire purpose is to return a newly allocated RandomState whose - state pickle can set. Consequently the RandomState returned by this function - is a freshly allocated copy with a seed=0. 
- - See https://github.com/numpy/numpy/issues/4763 for a detailed discussion - - """ - return RandomState(seed=0) - - -from numpy._pytesttester import PytestTester -test = PytestTester(__name__) -del PytestTester diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/arrays/base.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/arrays/base.py deleted file mode 100644 index bfd6ae361e1e8fdf0526a754476903b2274f5d7c..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/arrays/base.py +++ /dev/null @@ -1,2451 +0,0 @@ -""" -An interface for extending pandas with custom arrays. - -.. warning:: - - This is an experimental API and subject to breaking changes - without warning. -""" -from __future__ import annotations - -import operator -from typing import ( - TYPE_CHECKING, - Any, - Callable, - ClassVar, - Literal, - cast, - overload, -) -import warnings - -import numpy as np - -from pandas._libs import ( - algos as libalgos, - lib, -) -from pandas.compat import set_function_name -from pandas.compat.numpy import function as nv -from pandas.errors import AbstractMethodError -from pandas.util._decorators import ( - Appender, - Substitution, - cache_readonly, -) -from pandas.util._exceptions import find_stack_level -from pandas.util._validators import ( - validate_bool_kwarg, - validate_fillna_kwargs, - validate_insert_loc, -) - -from pandas.core.dtypes.cast import maybe_cast_pointwise_result -from pandas.core.dtypes.common import ( - is_list_like, - is_scalar, - pandas_dtype, -) -from pandas.core.dtypes.dtypes import ExtensionDtype -from pandas.core.dtypes.generic import ( - ABCDataFrame, - ABCIndex, - ABCSeries, -) -from pandas.core.dtypes.missing import isna - -from pandas.core import ( - arraylike, - missing, - roperator, -) -from pandas.core.algorithms import ( - factorize_array, - isin, - map_array, - mode, - rank, - unique, -) -from pandas.core.array_algos.quantile import quantile_with_mask -from pandas.core.sorting import ( - nargminmax, - nargsort, -) - -if TYPE_CHECKING: - from collections.abc import ( - Iterator, - Sequence, - ) - - from pandas._typing import ( - ArrayLike, - AstypeArg, - AxisInt, - Dtype, - FillnaOptions, - InterpolateOptions, - NumpySorter, - NumpyValueArrayLike, - PositionalIndexer, - ScalarIndexer, - Self, - SequenceIndexer, - Shape, - SortKind, - TakeIndexer, - npt, - ) - - from pandas import Index - -_extension_array_shared_docs: dict[str, str] = {} - - -class ExtensionArray: - """ - Abstract base class for custom 1-D array types. - - pandas will recognize instances of this class as proper arrays - with a custom type and will not attempt to coerce them to objects. They - may be stored directly inside a :class:`DataFrame` or :class:`Series`. 
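For illustration, a built-in extension type such as the nullable integer dtype shows this behavior directly (a minimal example; the repr shown follows recent pandas versions):

    >>> import pandas as pd
    >>> arr = pd.array([1, 2, None], dtype="Int64")
    >>> arr
    <IntegerArray>
    [1, 2, <NA>]
    Length: 3, dtype: Int64
    >>> pd.Series(arr).dtype   # stored directly, no coercion to object
    Int64Dtype()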
- - Attributes - ---------- - dtype - nbytes - ndim - shape - - Methods - ------- - argsort - astype - copy - dropna - factorize - fillna - equals - insert - interpolate - isin - isna - ravel - repeat - searchsorted - shift - take - tolist - unique - view - _accumulate - _concat_same_type - _formatter - _from_factorized - _from_sequence - _from_sequence_of_strings - _hash_pandas_object - _pad_or_backfill - _reduce - _values_for_argsort - _values_for_factorize - - Notes - ----- - The interface includes the following abstract methods that must be - implemented by subclasses: - - * _from_sequence - * _from_factorized - * __getitem__ - * __len__ - * __eq__ - * dtype - * nbytes - * isna - * take - * copy - * _concat_same_type - * interpolate - - A default repr displaying the type, (truncated) data, length, - and dtype is provided. It can be customized or replaced by - by overriding: - - * __repr__ : A default repr for the ExtensionArray. - * _formatter : Print scalars inside a Series or DataFrame. - - Some methods require casting the ExtensionArray to an ndarray of Python - objects with ``self.astype(object)``, which may be expensive. When - performance is a concern, we highly recommend overriding the following - methods: - - * fillna - * _pad_or_backfill - * dropna - * unique - * factorize / _values_for_factorize - * argsort, argmax, argmin / _values_for_argsort - * searchsorted - * map - - The remaining methods implemented on this class should be performant, - as they only compose abstract methods. Still, a more efficient - implementation may be available, and these methods can be overridden. - - One can implement methods to handle array accumulations or reductions. - - * _accumulate - * _reduce - - One can implement methods to handle parsing from strings that will be used - in methods such as ``pandas.io.parsers.read_csv``. - - * _from_sequence_of_strings - - This class does not inherit from 'abc.ABCMeta' for performance reasons. - Methods and properties required by the interface raise - ``pandas.errors.AbstractMethodError`` and no ``register`` method is - provided for registering virtual subclasses. - - ExtensionArrays are limited to 1 dimension. - - They may be backed by none, one, or many NumPy arrays. For example, - ``pandas.Categorical`` is an extension array backed by two arrays, - one for codes and one for categories. An array of IPv6 address may - be backed by a NumPy structured array with two fields, one for the - lower 64 bits and one for the upper 64 bits. Or they may be backed - by some other storage type, like Python lists. Pandas makes no - assumptions on how the data are stored, just that it can be converted - to a NumPy array. - The ExtensionArray interface does not impose any rules on how this data - is stored. However, currently, the backing data cannot be stored in - attributes called ``.values`` or ``._values`` to ensure full compatibility - with pandas internals. But other names as ``.data``, ``._data``, - ``._items``, ... can be freely used. - - If implementing NumPy's ``__array_ufunc__`` interface, pandas expects - that - - 1. You defer by returning ``NotImplemented`` when any Series are present - in `inputs`. Pandas will extract the arrays and call the ufunc again. - 2. You define a ``_HANDLED_TYPES`` tuple as an attribute on the class. - Pandas inspect this to determine whether the ufunc is valid for the - types present. - - See :ref:`extending.extension.ufunc` for more. - - By default, ExtensionArrays are not hashable. 
Immutable subclasses may - override this behavior. - - Examples - -------- - Please see the following: - - https://github.com/pandas-dev/pandas/blob/main/pandas/tests/extension/list/array.py - """ - - # '_typ' is for pandas.core.dtypes.generic.ABCExtensionArray. - # Don't override this. - _typ = "extension" - - # similar to __array_priority__, positions ExtensionArray after Index, - # Series, and DataFrame. EA subclasses may override to choose which EA - # subclass takes priority. If overriding, the value should always be - # strictly less than 2000 to be below Index.__pandas_priority__. - __pandas_priority__ = 1000 - - # ------------------------------------------------------------------------ - # Constructors - # ------------------------------------------------------------------------ - - @classmethod - def _from_sequence(cls, scalars, *, dtype: Dtype | None = None, copy: bool = False): - """ - Construct a new ExtensionArray from a sequence of scalars. - - Parameters - ---------- - scalars : Sequence - Each element will be an instance of the scalar type for this - array, ``cls.dtype.type`` or be converted into this type in this method. - dtype : dtype, optional - Construct for this particular dtype. This should be a Dtype - compatible with the ExtensionArray. - copy : bool, default False - If True, copy the underlying data. - - Returns - ------- - ExtensionArray - - Examples - -------- - >>> pd.arrays.IntegerArray._from_sequence([4, 5]) - - [4, 5] - Length: 2, dtype: Int64 - """ - raise AbstractMethodError(cls) - - @classmethod - def _from_sequence_of_strings( - cls, strings, *, dtype: Dtype | None = None, copy: bool = False - ): - """ - Construct a new ExtensionArray from a sequence of strings. - - Parameters - ---------- - strings : Sequence - Each element will be an instance of the scalar type for this - array, ``cls.dtype.type``. - dtype : dtype, optional - Construct for this particular dtype. This should be a Dtype - compatible with the ExtensionArray. - copy : bool, default False - If True, copy the underlying data. - - Returns - ------- - ExtensionArray - - Examples - -------- - >>> pd.arrays.IntegerArray._from_sequence_of_strings(["1", "2", "3"]) - - [1, 2, 3] - Length: 3, dtype: Int64 - """ - raise AbstractMethodError(cls) - - @classmethod - def _from_factorized(cls, values, original): - """ - Reconstruct an ExtensionArray after factorization. - - Parameters - ---------- - values : ndarray - An integer ndarray with the factorized values. - original : ExtensionArray - The original ExtensionArray that factorize was called on. - - See Also - -------- - factorize : Top-level factorize method that dispatches here. - ExtensionArray.factorize : Encode the extension array as an enumerated type. - - Examples - -------- - >>> interv_arr = pd.arrays.IntervalArray([pd.Interval(0, 1), - ... pd.Interval(1, 5), pd.Interval(1, 5)]) - >>> codes, uniques = pd.factorize(interv_arr) - >>> pd.arrays.IntervalArray._from_factorized(uniques, interv_arr) - - [(0, 1], (1, 5]] - Length: 2, dtype: interval[int64, right] - """ - raise AbstractMethodError(cls) - - # ------------------------------------------------------------------------ - # Must be a Sequence - # ------------------------------------------------------------------------ - @overload - def __getitem__(self, item: ScalarIndexer) -> Any: - ... - - @overload - def __getitem__(self, item: SequenceIndexer) -> Self: - ... - - def __getitem__(self, item: PositionalIndexer) -> Self | Any: - """ - Select a subset of self. 
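
To make the constructor hooks above concrete, here is a minimal sketch of a hypothetical NumPy-backed subclass; the name `ToyFloatArray` and its float64 storage are illustrative assumptions, not pandas API:

```python
import numpy as np
from pandas.api.extensions import ExtensionArray

class ToyFloatArray(ExtensionArray):
    """Hypothetical float-backed array; only the constructor hooks are shown."""

    def __init__(self, values):
        self._data = np.asarray(values, dtype="float64")

    @classmethod
    def _from_sequence(cls, scalars, *, dtype=None, copy=False):
        # Coerce arbitrary scalars into the backing ndarray
        return cls(np.array(scalars, dtype="float64", copy=copy))

    @classmethod
    def _from_factorized(cls, values, original):
        # 'values' holds the uniques produced during factorization
        return cls(values)
```
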
- - Parameters - ---------- - item : int, slice, or ndarray - * int: The position in 'self' to get. - - * slice: A slice object, where 'start', 'stop', and 'step' are - integers or None - - * ndarray: A 1-d boolean NumPy ndarray the same length as 'self' - - * list[int]: A list of int - - Returns - ------- - item : scalar or ExtensionArray - - Notes - ----- - For scalar ``item``, return a scalar value suitable for the array's - type. This should be an instance of ``self.dtype.type``. - - For slice ``key``, return an instance of ``ExtensionArray``, even - if the slice is length 0 or 1. - - For a boolean mask, return an instance of ``ExtensionArray``, filtered - to the values where ``item`` is True. - """ - raise AbstractMethodError(self) - - def __setitem__(self, key, value) -> None: - """ - Set one or more values inplace. - - This method is not required to satisfy the pandas extension array - interface. - - Parameters - ---------- - key : int, ndarray, or slice - When called from, e.g. ``Series.__setitem__``, ``key`` will be - one of - - * scalar int - * ndarray of integers. - * boolean ndarray - * slice object - - value : ExtensionDtype.type, Sequence[ExtensionDtype.type], or object - value or values to be set of ``key``. - - Returns - ------- - None - """ - # Some notes to the ExtensionArray implementor who may have ended up - # here. While this method is not required for the interface, if you - # *do* choose to implement __setitem__, then some semantics should be - # observed: - # - # * Setting multiple values : ExtensionArrays should support setting - # multiple values at once, 'key' will be a sequence of integers and - # 'value' will be a same-length sequence. - # - # * Broadcasting : For a sequence 'key' and a scalar 'value', - # each position in 'key' should be set to 'value'. - # - # * Coercion : Most users will expect basic coercion to work. For - # example, a string like '2018-01-01' is coerced to a datetime - # when setting on a datetime64ns array. In general, if the - # __init__ method coerces that value, then so should __setitem__ - # Note, also, that Series/DataFrame.where internally use __setitem__ - # on a copy of the data. - raise NotImplementedError(f"{type(self)} does not implement __setitem__.") - - def __len__(self) -> int: - """ - Length of this array - - Returns - ------- - length : int - """ - raise AbstractMethodError(self) - - def __iter__(self) -> Iterator[Any]: - """ - Iterate over elements of the array. - """ - # This needs to be implemented so that pandas recognizes extension - # arrays as list-like. The default implementation makes successive - # calls to ``__getitem__``, which may be slower than necessary. - for i in range(len(self)): - yield self[i] - - def __contains__(self, item: object) -> bool | np.bool_: - """ - Return for `item in self`. - """ - # GH37867 - # comparisons of any item to pd.NA always return pd.NA, so e.g. "a" in [pd.NA] - # would raise a TypeError. The implementation below works around that. 
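
The NA-aware membership logic described in that comment can be demonstrated on a built-in nullable array; expected outputs are noted in the comments:

```python
import numpy as np
import pandas as pd

arr = pd.array([1, None], dtype="Int64")  # missing slot holds pd.NA
print(pd.NA in arr)    # True  -- matches dtype.na_value and the array has NAs
print(np.nan in arr)   # False -- NA-like, but not this dtype's na_value
print(1 in arr)        # True  -- falls through to (item == arr).any()
```
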
- if is_scalar(item) and isna(item): - if not self._can_hold_na: - return False - elif item is self.dtype.na_value or isinstance(item, self.dtype.type): - return self._hasna - else: - return False - else: - # error: Item "ExtensionArray" of "Union[ExtensionArray, ndarray]" has no - # attribute "any" - return (item == self).any() # type: ignore[union-attr] - - # error: Signature of "__eq__" incompatible with supertype "object" - def __eq__(self, other: Any) -> ArrayLike: # type: ignore[override] - """ - Return for `self == other` (element-wise equality). - """ - # Implementer note: this should return a boolean numpy ndarray or - # a boolean ExtensionArray. - # When `other` is one of Series, Index, or DataFrame, this method should - # return NotImplemented (to ensure that those objects are responsible for - # first unpacking the arrays, and then dispatch the operation to the - # underlying arrays) - raise AbstractMethodError(self) - - # error: Signature of "__ne__" incompatible with supertype "object" - def __ne__(self, other: Any) -> ArrayLike: # type: ignore[override] - """ - Return for `self != other` (element-wise in-equality). - """ - return ~(self == other) - - def to_numpy( - self, - dtype: npt.DTypeLike | None = None, - copy: bool = False, - na_value: object = lib.no_default, - ) -> np.ndarray: - """ - Convert to a NumPy ndarray. - - This is similar to :meth:`numpy.asarray`, but may provide additional control - over how the conversion is done. - - Parameters - ---------- - dtype : str or numpy.dtype, optional - The dtype to pass to :meth:`numpy.asarray`. - copy : bool, default False - Whether to ensure that the returned value is a not a view on - another array. Note that ``copy=False`` does not *ensure* that - ``to_numpy()`` is no-copy. Rather, ``copy=True`` ensure that - a copy is made, even if not strictly necessary. - na_value : Any, optional - The value to use for missing values. The default value depends - on `dtype` and the type of the array. - - Returns - ------- - numpy.ndarray - """ - result = np.asarray(self, dtype=dtype) - if copy or na_value is not lib.no_default: - result = result.copy() - if na_value is not lib.no_default: - result[self.isna()] = na_value - return result - - # ------------------------------------------------------------------------ - # Required attributes - # ------------------------------------------------------------------------ - - @property - def dtype(self) -> ExtensionDtype: - """ - An instance of ExtensionDtype. - - Examples - -------- - >>> pd.array([1, 2, 3]).dtype - Int64Dtype() - """ - raise AbstractMethodError(self) - - @property - def shape(self) -> Shape: - """ - Return a tuple of the array dimensions. - - Examples - -------- - >>> arr = pd.array([1, 2, 3]) - >>> arr.shape - (3,) - """ - return (len(self),) - - @property - def size(self) -> int: - """ - The number of elements in the array. - """ - # error: Incompatible return value type (got "signedinteger[_64Bit]", - # expected "int") [return-value] - return np.prod(self.shape) # type: ignore[return-value] - - @property - def ndim(self) -> int: - """ - Extension Arrays are only allowed to be 1-dimensional. - - Examples - -------- - >>> arr = pd.array([1, 2, 3]) - >>> arr.ndim - 1 - """ - return 1 - - @property - def nbytes(self) -> int: - """ - The number of bytes needed to store this object in memory. - - Examples - -------- - >>> pd.array([1, 2, 3]).nbytes - 27 - """ - # If this is expensive to compute, return an approximate lower bound - # on the number of bytes needed. 
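
A short sketch of `to_numpy` and the required attributes, again on a built-in nullable array:

```python
import numpy as np
import pandas as pd

arr = pd.array([1, None, 3], dtype="Int64")
print(arr.to_numpy(dtype="float64", na_value=np.nan))  # [ 1. nan  3.]
print(arr.shape, arr.ndim)                             # (3,) 1
print(arr.nbytes)                                      # data bytes plus the NA mask
```
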
- raise AbstractMethodError(self) - - # ------------------------------------------------------------------------ - # Additional Methods - # ------------------------------------------------------------------------ - - @overload - def astype(self, dtype: npt.DTypeLike, copy: bool = ...) -> np.ndarray: - ... - - @overload - def astype(self, dtype: ExtensionDtype, copy: bool = ...) -> ExtensionArray: - ... - - @overload - def astype(self, dtype: AstypeArg, copy: bool = ...) -> ArrayLike: - ... - - def astype(self, dtype: AstypeArg, copy: bool = True) -> ArrayLike: - """ - Cast to a NumPy array or ExtensionArray with 'dtype'. - - Parameters - ---------- - dtype : str or dtype - Typecode or data-type to which the array is cast. - copy : bool, default True - Whether to copy the data, even if not necessary. If False, - a copy is made only if the old dtype does not match the - new dtype. - - Returns - ------- - np.ndarray or pandas.api.extensions.ExtensionArray - An ``ExtensionArray`` if ``dtype`` is ``ExtensionDtype``, - otherwise a Numpy ndarray with ``dtype`` for its dtype. - - Examples - -------- - >>> arr = pd.array([1, 2, 3]) - >>> arr - - [1, 2, 3] - Length: 3, dtype: Int64 - - Casting to another ``ExtensionDtype`` returns an ``ExtensionArray``: - - >>> arr1 = arr.astype('Float64') - >>> arr1 - - [1.0, 2.0, 3.0] - Length: 3, dtype: Float64 - >>> arr1.dtype - Float64Dtype() - - Otherwise, we will get a Numpy ndarray: - - >>> arr2 = arr.astype('float64') - >>> arr2 - array([1., 2., 3.]) - >>> arr2.dtype - dtype('float64') - """ - dtype = pandas_dtype(dtype) - if dtype == self.dtype: - if not copy: - return self - else: - return self.copy() - - if isinstance(dtype, ExtensionDtype): - cls = dtype.construct_array_type() - return cls._from_sequence(self, dtype=dtype, copy=copy) - - elif lib.is_np_dtype(dtype, "M"): - from pandas.core.arrays import DatetimeArray - - return DatetimeArray._from_sequence(self, dtype=dtype, copy=copy) - - elif lib.is_np_dtype(dtype, "m"): - from pandas.core.arrays import TimedeltaArray - - return TimedeltaArray._from_sequence(self, dtype=dtype, copy=copy) - - return np.array(self, dtype=dtype, copy=copy) - - def isna(self) -> np.ndarray | ExtensionArraySupportsAnyAll: - """ - A 1-D array indicating if each value is missing. - - Returns - ------- - numpy.ndarray or pandas.api.extensions.ExtensionArray - In most cases, this should return a NumPy ndarray. For - exceptional cases like ``SparseArray``, where returning - an ndarray would be expensive, an ExtensionArray may be - returned. - - Notes - ----- - If returning an ExtensionArray, then - - * ``na_values._is_boolean`` should be True - * `na_values` should implement :func:`ExtensionArray._reduce` - * ``na_values.any`` and ``na_values.all`` should be implemented - - Examples - -------- - >>> arr = pd.array([1, 2, np.nan, np.nan]) - >>> arr.isna() - array([False, False, True, True]) - """ - raise AbstractMethodError(self) - - @property - def _hasna(self) -> bool: - # GH#22680 - """ - Equivalent to `self.isna().any()`. - - Some ExtensionArray subclasses may be able to optimize this check. - """ - return bool(self.isna().any()) - - def _values_for_argsort(self) -> np.ndarray: - """ - Return values for sorting. - - Returns - ------- - ndarray - The transformed values should maintain the ordering between values - within the array. - - See Also - -------- - ExtensionArray.argsort : Return the indices that would sort this array. 
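
The `astype` branches above split on the target dtype; a sketch of the two common cases (extension dtype versus plain NumPy dtype):

```python
import pandas as pd

arr = pd.array([1, 2, 3])          # Int64 ExtensionArray
ext = arr.astype("Float64")        # ExtensionDtype target -> stays an ExtensionArray
npy = arr.astype("float64")        # NumPy dtype target -> plain ndarray
print(type(ext).__name__, type(npy).__name__)  # FloatingArray ndarray
```
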
- - Notes - ----- - The caller is responsible for *not* modifying these values in-place, so - it is safe for implementors to give views on ``self``. - - Functions that use this (e.g. ``ExtensionArray.argsort``) should ignore - entries with missing values in the original array (according to - ``self.isna()``). This means that the corresponding entries in the returned - array don't need to be modified to sort correctly. - - Examples - -------- - In most cases, this is the underlying Numpy array of the ``ExtensionArray``: - - >>> arr = pd.array([1, 2, 3]) - >>> arr._values_for_argsort() - array([1, 2, 3]) - """ - # Note: this is used in `ExtensionArray.argsort/argmin/argmax`. - return np.array(self) - - def argsort( - self, - *, - ascending: bool = True, - kind: SortKind = "quicksort", - na_position: str = "last", - **kwargs, - ) -> np.ndarray: - """ - Return the indices that would sort this array. - - Parameters - ---------- - ascending : bool, default True - Whether the indices should result in an ascending - or descending sort. - kind : {'quicksort', 'mergesort', 'heapsort', 'stable'}, optional - Sorting algorithm. - na_position : {'first', 'last'}, default 'last' - If ``'first'``, put ``NaN`` values at the beginning. - If ``'last'``, put ``NaN`` values at the end. - *args, **kwargs: - Passed through to :func:`numpy.argsort`. - - Returns - ------- - np.ndarray[np.intp] - Array of indices that sort ``self``. If NaN values are contained, - NaN values are placed at the end. - - See Also - -------- - numpy.argsort : Sorting implementation used internally. - - Examples - -------- - >>> arr = pd.array([3, 1, 2, 5, 4]) - >>> arr.argsort() - array([1, 2, 0, 4, 3]) - """ - # Implementor note: You have two places to override the behavior of - # argsort. - # 1. _values_for_argsort : construct the values passed to np.argsort - # 2. argsort : total control over sorting. In case of overriding this, - # it is recommended to also override argmax/argmin - ascending = nv.validate_argsort_with_ascending(ascending, (), kwargs) - - values = self._values_for_argsort() - return nargsort( - values, - kind=kind, - ascending=ascending, - na_position=na_position, - mask=np.asarray(self.isna()), - ) - - def argmin(self, skipna: bool = True) -> int: - """ - Return the index of minimum value. - - In case of multiple occurrences of the minimum value, the index - corresponding to the first occurrence is returned. - - Parameters - ---------- - skipna : bool, default True - - Returns - ------- - int - - See Also - -------- - ExtensionArray.argmax : Return the index of the maximum value. - - Examples - -------- - >>> arr = pd.array([3, 1, 2, 5, 4]) - >>> arr.argmin() - 1 - """ - # Implementor note: You have two places to override the behavior of - # argmin. - # 1. _values_for_argsort : construct the values used in nargminmax - # 2. argmin itself : total control over sorting. - validate_bool_kwarg(skipna, "skipna") - if not skipna and self._hasna: - raise NotImplementedError - return nargminmax(self, "argmin") - - def argmax(self, skipna: bool = True) -> int: - """ - Return the index of maximum value. - - In case of multiple occurrences of the maximum value, the index - corresponding to the first occurrence is returned. - - Parameters - ---------- - skipna : bool, default True - - Returns - ------- - int - - See Also - -------- - ExtensionArray.argmin : Return the index of the minimum value. 
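
How the sorting helpers treat missing values, sketched on a built-in nullable array (`na_position` defaults to `'last'`):

```python
import pandas as pd

arr = pd.array([3, None, 1, 2], dtype="Int64")
print(arr.argsort())   # [2 3 0 1] -- the NA slot is pushed to the end
print(arr.argmin())    # 2 -- skipna=True ignores the NA slot
print(arr.argmax())    # 0
```
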
- - Examples - -------- - >>> arr = pd.array([3, 1, 2, 5, 4]) - >>> arr.argmax() - 3 - """ - # Implementor note: You have two places to override the behavior of - # argmax. - # 1. _values_for_argsort : construct the values used in nargminmax - # 2. argmax itself : total control over sorting. - validate_bool_kwarg(skipna, "skipna") - if not skipna and self._hasna: - raise NotImplementedError - return nargminmax(self, "argmax") - - def interpolate( - self, - *, - method: InterpolateOptions, - axis: int, - index: Index, - limit, - limit_direction, - limit_area, - copy: bool, - **kwargs, - ) -> Self: - """ - See DataFrame.interpolate.__doc__. - - Examples - -------- - >>> arr = pd.arrays.NumpyExtensionArray(np.array([0, 1, np.nan, 3])) - >>> arr.interpolate(method="linear", - ... limit=3, - ... limit_direction="forward", - ... index=pd.Index([1, 2, 3, 4]), - ... fill_value=1, - ... copy=False, - ... axis=0, - ... limit_area="inside" - ... ) - <NumpyExtensionArray> - [0.0, 1.0, 2.0, 3.0] - Length: 4, dtype: float64 - """ - # NB: we return type(self) even if copy=False - raise NotImplementedError( - f"{type(self).__name__} does not implement interpolate" - ) - - def _pad_or_backfill( - self, *, method: FillnaOptions, limit: int | None = None, copy: bool = True - ) -> Self: - """ - Pad or backfill values, used by Series/DataFrame ffill and bfill. - - Parameters - ---------- - method : {'backfill', 'bfill', 'pad', 'ffill'} - Method to use for filling holes in reindexed Series: - - * pad / ffill: propagate last valid observation forward to next valid. - * backfill / bfill: use NEXT valid observation to fill gap. - - limit : int, default None - This is the maximum number of consecutive - NaN values to forward/backward fill. In other words, if there is - a gap with more than this number of consecutive NaNs, it will only - be partially filled. If method is not specified, this is the - maximum number of entries along the entire axis where NaNs will be - filled. - - copy : bool, default True - Whether to make a copy of the data before filling. If False, then - the original should be modified and no new memory should be allocated. - For ExtensionArray subclasses that cannot do this, it is at the - author's discretion whether to ignore "copy=False" or to raise. - The base class implementation ignores the keyword if any NAs are - present. - - Returns - ------- - Same type as self - - Examples - -------- - >>> arr = pd.array([np.nan, np.nan, 2, 3, np.nan, np.nan]) - >>> arr._pad_or_backfill(method="backfill", limit=1) - <IntegerArray> - [<NA>, 2, 2, 3, <NA>, <NA>] - Length: 6, dtype: Int64 - """ - - # If a 3rd-party EA has implemented this functionality in fillna, - # we warn that they need to implement _pad_or_backfill instead. - if ( - type(self).fillna is not ExtensionArray.fillna - and type(self)._pad_or_backfill is ExtensionArray._pad_or_backfill - ): - # Check for _pad_or_backfill here allows us to call - # super()._pad_or_backfill without getting this warning - warnings.warn( - "ExtensionArray.fillna 'method' keyword is deprecated. " - "In a future version. arr._pad_or_backfill will be called " - "instead. 
3rd-party ExtensionArray authors need to implement " - "_pad_or_backfill.", - DeprecationWarning, - stacklevel=find_stack_level(), - ) - return self.fillna(method=method, limit=limit) - - mask = self.isna() - - if mask.any(): - # NB: the base class does not respect the "copy" keyword - meth = missing.clean_fill_method(method) - - npmask = np.asarray(mask) - if meth == "pad": - indexer = libalgos.get_fill_indexer(npmask, limit=limit) - return self.take(indexer, allow_fill=True) - else: - # i.e. meth == "backfill" - indexer = libalgos.get_fill_indexer(npmask[::-1], limit=limit)[::-1] - return self[::-1].take(indexer, allow_fill=True) - - else: - if not copy: - return self - new_values = self.copy() - return new_values - - def fillna( - self, - value: object | ArrayLike | None = None, - method: FillnaOptions | None = None, - limit: int | None = None, - copy: bool = True, - ) -> Self: - """ - Fill NA/NaN values using the specified method. - - Parameters - ---------- - value : scalar, array-like - If a scalar value is passed it is used to fill all missing values. - Alternatively, an array-like "value" can be given. It's expected - that the array-like have the same length as 'self'. - method : {'backfill', 'bfill', 'pad', 'ffill', None}, default None - Method to use for filling holes in reindexed Series: - - * pad / ffill: propagate last valid observation forward to next valid. - * backfill / bfill: use NEXT valid observation to fill gap. - - .. deprecated:: 2.1.0 - - limit : int, default None - If method is specified, this is the maximum number of consecutive - NaN values to forward/backward fill. In other words, if there is - a gap with more than this number of consecutive NaNs, it will only - be partially filled. If method is not specified, this is the - maximum number of entries along the entire axis where NaNs will be - filled. - - .. deprecated:: 2.1.0 - - copy : bool, default True - Whether to make a copy of the data before filling. If False, then - the original should be modified and no new memory should be allocated. - For ExtensionArray subclasses that cannot do this, it is at the - author's discretion whether to ignore "copy=False" or to raise. - The base class implementation ignores the keyword in pad/backfill - cases. - - Returns - ------- - ExtensionArray - With NA/NaN filled. - - Examples - -------- - >>> arr = pd.array([np.nan, np.nan, 2, 3, np.nan, np.nan]) - >>> arr.fillna(0) - - [0, 0, 2, 3, 0, 0] - Length: 6, dtype: Int64 - """ - if method is not None: - warnings.warn( - f"The 'method' keyword in {type(self).__name__}.fillna is " - "deprecated and will be removed in a future version.", - FutureWarning, - stacklevel=find_stack_level(), - ) - - value, method = validate_fillna_kwargs(value, method) - - mask = self.isna() - # error: Argument 2 to "check_value_size" has incompatible type - # "ExtensionArray"; expected "ndarray" - value = missing.check_value_size( - value, mask, len(self) # type: ignore[arg-type] - ) - - if mask.any(): - if method is not None: - meth = missing.clean_fill_method(method) - - npmask = np.asarray(mask) - if meth == "pad": - indexer = libalgos.get_fill_indexer(npmask, limit=limit) - return self.take(indexer, allow_fill=True) - else: - # i.e. 
meth == "backfill" - indexer = libalgos.get_fill_indexer(npmask[::-1], limit=limit)[::-1] - return self[::-1].take(indexer, allow_fill=True) - else: - # fill with value - if not copy: - new_values = self[:] - else: - new_values = self.copy() - new_values[mask] = value - else: - if not copy: - new_values = self[:] - else: - new_values = self.copy() - return new_values - - def dropna(self) -> Self: - """ - Return ExtensionArray without NA values. - - Returns - ------- - - Examples - -------- - >>> pd.array([1, 2, np.nan]).dropna() - - [1, 2] - Length: 2, dtype: Int64 - """ - # error: Unsupported operand type for ~ ("ExtensionArray") - return self[~self.isna()] # type: ignore[operator] - - def shift(self, periods: int = 1, fill_value: object = None) -> ExtensionArray: - """ - Shift values by desired number. - - Newly introduced missing values are filled with - ``self.dtype.na_value``. - - Parameters - ---------- - periods : int, default 1 - The number of periods to shift. Negative values are allowed - for shifting backwards. - - fill_value : object, optional - The scalar value to use for newly introduced missing values. - The default is ``self.dtype.na_value``. - - Returns - ------- - ExtensionArray - Shifted. - - Notes - ----- - If ``self`` is empty or ``periods`` is 0, a copy of ``self`` is - returned. - - If ``periods > len(self)``, then an array of size - len(self) is returned, with all values filled with - ``self.dtype.na_value``. - - For 2-dimensional ExtensionArrays, we are always shifting along axis=0. - - Examples - -------- - >>> arr = pd.array([1, 2, 3]) - >>> arr.shift(2) - - [, , 1] - Length: 3, dtype: Int64 - """ - # Note: this implementation assumes that `self.dtype.na_value` can be - # stored in an instance of your ExtensionArray with `self.dtype`. - if not len(self) or periods == 0: - return self.copy() - - if isna(fill_value): - fill_value = self.dtype.na_value - - empty = self._from_sequence( - [fill_value] * min(abs(periods), len(self)), dtype=self.dtype - ) - if periods > 0: - a = empty - b = self[:-periods] - else: - a = self[abs(periods) :] - b = empty - return self._concat_same_type([a, b]) - - def unique(self) -> Self: - """ - Compute the ExtensionArray of unique values. - - Returns - ------- - pandas.api.extensions.ExtensionArray - - Examples - -------- - >>> arr = pd.array([1, 2, 3, 1, 2, 3]) - >>> arr.unique() - - [1, 2, 3] - Length: 3, dtype: Int64 - """ - uniques = unique(self.astype(object)) - return self._from_sequence(uniques, dtype=self.dtype) - - def searchsorted( - self, - value: NumpyValueArrayLike | ExtensionArray, - side: Literal["left", "right"] = "left", - sorter: NumpySorter | None = None, - ) -> npt.NDArray[np.intp] | np.intp: - """ - Find indices where elements should be inserted to maintain order. - - Find the indices into a sorted array `self` (a) such that, if the - corresponding elements in `value` were inserted before the indices, - the order of `self` would be preserved. - - Assuming that `self` is sorted: - - ====== ================================ - `side` returned index `i` satisfies - ====== ================================ - left ``self[i-1] < value <= self[i]`` - right ``self[i-1] <= value < self[i]`` - ====== ================================ - - Parameters - ---------- - value : array-like, list or scalar - Value(s) to insert into `self`. - side : {'left', 'right'}, optional - If 'left', the index of the first suitable location found is given. - If 'right', return the last such index. 
If there is no suitable - index, return either 0 or N (where N is the length of `self`). - sorter : 1-D array-like, optional - Optional array of integer indices that sort array a into ascending - order. They are typically the result of argsort. - - Returns - ------- - array of ints or int - If value is array-like, array of insertion points. - If value is scalar, a single integer. - - See Also - -------- - numpy.searchsorted : Similar method from NumPy. - - Examples - -------- - >>> arr = pd.array([1, 2, 3, 5]) - >>> arr.searchsorted([4]) - array([3]) - """ - # Note: the base tests provided by pandas only test the basics. - # We do not test - # 1. Values outside the range of the `data_for_sorting` fixture - # 2. Values between the values in the `data_for_sorting` fixture - # 3. Missing values. - arr = self.astype(object) - if isinstance(value, ExtensionArray): - value = value.astype(object) - return arr.searchsorted(value, side=side, sorter=sorter) - - def equals(self, other: object) -> bool: - """ - Return if another array is equivalent to this array. - - Equivalent means that both arrays have the same shape and dtype, and - all values compare equal. Missing values in the same location are - considered equal (in contrast with normal equality). - - Parameters - ---------- - other : ExtensionArray - Array to compare to this Array. - - Returns - ------- - boolean - Whether the arrays are equivalent. - - Examples - -------- - >>> arr1 = pd.array([1, 2, np.nan]) - >>> arr2 = pd.array([1, 2, np.nan]) - >>> arr1.equals(arr2) - True - """ - if type(self) != type(other): - return False - other = cast(ExtensionArray, other) - if self.dtype != other.dtype: - return False - elif len(self) != len(other): - return False - else: - equal_values = self == other - if isinstance(equal_values, ExtensionArray): - # boolean array with NA -> fill with False - equal_values = equal_values.fillna(False) - # error: Unsupported left operand type for & ("ExtensionArray") - equal_na = self.isna() & other.isna() # type: ignore[operator] - return bool((equal_values | equal_na).all()) - - def isin(self, values) -> npt.NDArray[np.bool_]: - """ - Pointwise comparison for set containment in the given values. - - Roughly equivalent to `np.array([x in values for x in self])` - - Parameters - ---------- - values : Sequence - - Returns - ------- - np.ndarray[bool] - - Examples - -------- - >>> arr = pd.array([1, 2, 3]) - >>> arr.isin([1]) - - [True, False, False] - Length: 3, dtype: boolean - """ - return isin(np.asarray(self), values) - - def _values_for_factorize(self) -> tuple[np.ndarray, Any]: - """ - Return an array and missing value suitable for factorization. - - Returns - ------- - values : ndarray - An array suitable for factorization. This should maintain order - and be a supported dtype (Float64, Int64, UInt64, String, Object). - By default, the extension array is cast to object dtype. - na_value : object - The value in `values` to consider missing. This will be treated - as NA in the factorization routines, so it will be coded as - `-1` and not included in `uniques`. By default, - ``np.nan`` is used. - - Notes - ----- - The values returned by this method are also used in - :func:`pandas.util.hash_pandas_object`. If needed, this can be - overridden in the ``self._hash_pandas_object()`` method. 
- - Examples - -------- - >>> pd.array([1, 2, 3])._values_for_factorize() - (array([1, 2, 3], dtype=object), nan) - """ - return self.astype(object), np.nan - - def factorize( - self, - use_na_sentinel: bool = True, - ) -> tuple[np.ndarray, ExtensionArray]: - """ - Encode the extension array as an enumerated type. - - Parameters - ---------- - use_na_sentinel : bool, default True - If True, the sentinel -1 will be used for NaN values. If False, - NaN values will be encoded as non-negative integers and will not drop the - NaN from the uniques of the values. - - .. versionadded:: 1.5.0 - - Returns - ------- - codes : ndarray - An integer NumPy array that's an indexer into the original - ExtensionArray. - uniques : ExtensionArray - An ExtensionArray containing the unique values of `self`. - - .. note:: - - uniques will *not* contain an entry for the NA value of - the ExtensionArray if there are any missing values present - in `self`. - - See Also - -------- - factorize : Top-level factorize method that dispatches here. - - Notes - ----- - :meth:`pandas.factorize` offers a `sort` keyword as well. - - Examples - -------- - >>> idx1 = pd.PeriodIndex(["2014-01", "2014-01", "2014-02", "2014-02", - ... "2014-03", "2014-03"], freq="M") - >>> arr, idx = idx1.factorize() - >>> arr - array([0, 0, 1, 1, 2, 2]) - >>> idx - PeriodIndex(['2014-01', '2014-02', '2014-03'], dtype='period[M]') - """ - # Implementer note: There are two ways to override the behavior of - # pandas.factorize - # 1. _values_for_factorize and _from_factorize. - # Specify the values passed to pandas' internal factorization - # routines, and how to convert from those values back to the - # original ExtensionArray. - # 2. ExtensionArray.factorize. - # Complete control over factorization. - arr, na_value = self._values_for_factorize() - - codes, uniques = factorize_array( - arr, use_na_sentinel=use_na_sentinel, na_value=na_value - ) - - uniques_ea = self._from_factorized(uniques, self) - return codes, uniques_ea - - _extension_array_shared_docs[ - "repeat" - ] = """ - Repeat elements of a %(klass)s. - - Returns a new %(klass)s where each element of the current %(klass)s - is repeated consecutively a given number of times. - - Parameters - ---------- - repeats : int or array of ints - The number of repetitions for each element. This should be a - non-negative integer. Repeating 0 times will return an empty - %(klass)s. - axis : None - Must be ``None``. Has no effect but is accepted for compatibility - with numpy. - - Returns - ------- - %(klass)s - Newly created %(klass)s with repeated elements. - - See Also - -------- - Series.repeat : Equivalent function for Series. - Index.repeat : Equivalent function for Index. - numpy.repeat : Similar method for :class:`numpy.ndarray`. - ExtensionArray.take : Take arbitrary positions. 
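
The factorize round-trip described above, sketched with the default NA sentinel:

```python
import pandas as pd

arr = pd.array([1, 2, 1, None], dtype="Int64")
codes, uniques = arr.factorize()  # use_na_sentinel=True by default
print(codes)    # [ 0  1  0 -1] -- NA coded with the -1 sentinel
print(uniques)  # IntegerArray [1, 2]; NA is not included in uniques
```
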
- - Examples - -------- - >>> cat = pd.Categorical(['a', 'b', 'c']) - >>> cat - ['a', 'b', 'c'] - Categories (3, object): ['a', 'b', 'c'] - >>> cat.repeat(2) - ['a', 'a', 'b', 'b', 'c', 'c'] - Categories (3, object): ['a', 'b', 'c'] - >>> cat.repeat([1, 2, 3]) - ['a', 'b', 'b', 'c', 'c', 'c'] - Categories (3, object): ['a', 'b', 'c'] - """ - - @Substitution(klass="ExtensionArray") - @Appender(_extension_array_shared_docs["repeat"]) - def repeat(self, repeats: int | Sequence[int], axis: AxisInt | None = None) -> Self: - nv.validate_repeat((), {"axis": axis}) - ind = np.arange(len(self)).repeat(repeats) - return self.take(ind) - - # ------------------------------------------------------------------------ - # Indexing methods - # ------------------------------------------------------------------------ - - def take( - self, - indices: TakeIndexer, - *, - allow_fill: bool = False, - fill_value: Any = None, - ) -> Self: - """ - Take elements from an array. - - Parameters - ---------- - indices : sequence of int or one-dimensional np.ndarray of int - Indices to be taken. - allow_fill : bool, default False - How to handle negative values in `indices`. - - * False: negative values in `indices` indicate positional indices - from the right (the default). This is similar to - :func:`numpy.take`. - - * True: negative values in `indices` indicate - missing values. These values are set to `fill_value`. Any other - other negative values raise a ``ValueError``. - - fill_value : any, optional - Fill value to use for NA-indices when `allow_fill` is True. - This may be ``None``, in which case the default NA value for - the type, ``self.dtype.na_value``, is used. - - For many ExtensionArrays, there will be two representations of - `fill_value`: a user-facing "boxed" scalar, and a low-level - physical NA value. `fill_value` should be the user-facing version, - and the implementation should handle translating that to the - physical version for processing the take if necessary. - - Returns - ------- - ExtensionArray - - Raises - ------ - IndexError - When the indices are out of bounds for the array. - ValueError - When `indices` contains negative values other than ``-1`` - and `allow_fill` is True. - - See Also - -------- - numpy.take : Take elements from an array along an axis. - api.extensions.take : Take elements from an array. - - Notes - ----- - ExtensionArray.take is called by ``Series.__getitem__``, ``.loc``, - ``iloc``, when `indices` is a sequence of values. Additionally, - it's called by :meth:`Series.reindex`, or any other method - that causes realignment, with a `fill_value`. - - Examples - -------- - Here's an example implementation, which relies on casting the - extension array to object dtype. This uses the helper method - :func:`pandas.api.extensions.take`. - - .. code-block:: python - - def take(self, indices, allow_fill=False, fill_value=None): - from pandas.core.algorithms import take - - # If the ExtensionArray is backed by an ndarray, then - # just pass that here instead of coercing to object. - data = self.astype(object) - - if allow_fill and fill_value is None: - fill_value = self.dtype.na_value - - # fill value should always be translated from the scalar - # type for the array, to the physical storage type for - # the data, before passing to take. 
- - result = take(data, indices, fill_value=fill_value, - allow_fill=allow_fill) - return self._from_sequence(result, dtype=self.dtype) - """ - # Implementer note: The `fill_value` parameter should be a user-facing - # value, an instance of self.dtype.type. When passed `fill_value=None`, - # the default of `self.dtype.na_value` should be used. - # This may differ from the physical storage type your ExtensionArray - # uses. In this case, your implementation is responsible for casting - # the user-facing type to the storage type, before using - # pandas.api.extensions.take - raise AbstractMethodError(self) - - def copy(self) -> Self: - """ - Return a copy of the array. - - Returns - ------- - ExtensionArray - - Examples - -------- - >>> arr = pd.array([1, 2, 3]) - >>> arr2 = arr.copy() - >>> arr[0] = 2 - >>> arr2 - - [1, 2, 3] - Length: 3, dtype: Int64 - """ - raise AbstractMethodError(self) - - def view(self, dtype: Dtype | None = None) -> ArrayLike: - """ - Return a view on the array. - - Parameters - ---------- - dtype : str, np.dtype, or ExtensionDtype, optional - Default None. - - Returns - ------- - ExtensionArray or np.ndarray - A view on the :class:`ExtensionArray`'s data. - - Examples - -------- - This gives view on the underlying data of an ``ExtensionArray`` and is not a - copy. Modifications on either the view or the original ``ExtensionArray`` - will be reflectd on the underlying data: - - >>> arr = pd.array([1, 2, 3]) - >>> arr2 = arr.view() - >>> arr[0] = 2 - >>> arr2 - - [2, 2, 3] - Length: 3, dtype: Int64 - """ - # NB: - # - This must return a *new* object referencing the same data, not self. - # - The only case that *must* be implemented is with dtype=None, - # giving a view with the same dtype as self. - if dtype is not None: - raise NotImplementedError(dtype) - return self[:] - - # ------------------------------------------------------------------------ - # Printing - # ------------------------------------------------------------------------ - - def __repr__(self) -> str: - if self.ndim > 1: - return self._repr_2d() - - from pandas.io.formats.printing import format_object_summary - - # the short repr has no trailing newline, while the truncated - # repr does. So we include a newline in our template, and strip - # any trailing newlines from format_object_summary - data = format_object_summary( - self, self._formatter(), indent_for_name=False - ).rstrip(", \n") - class_name = f"<{type(self).__name__}>\n" - return f"{class_name}{data}\nLength: {len(self)}, dtype: {self.dtype}" - - def _repr_2d(self) -> str: - from pandas.io.formats.printing import format_object_summary - - # the short repr has no trailing newline, while the truncated - # repr does. So we include a newline in our template, and strip - # any trailing newlines from format_object_summary - lines = [ - format_object_summary(x, self._formatter(), indent_for_name=False).rstrip( - ", \n" - ) - for x in self - ] - data = ",\n".join(lines) - class_name = f"<{type(self).__name__}>" - return f"{class_name}\n[\n{data}\n]\nShape: {self.shape}, dtype: {self.dtype}" - - def _formatter(self, boxed: bool = False) -> Callable[[Any], str | None]: - """ - Formatting function for scalar values. - - This is used in the default '__repr__'. The returned formatting - function receives instances of your scalar type. - - Parameters - ---------- - boxed : bool, default False - An indicated for whether or not your array is being printed - within a Series, DataFrame, or Index (True), or just by - itself (False). 
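
Returning to the `take` contract defined earlier, a short sketch of its three indexing modes on a built-in array:

```python
import pandas as pd

arr = pd.array([10, 20, 30], dtype="Int64")
print(arr.take([0, -1]))                                  # [10, 30]: -1 counts from the end
print(arr.take([0, -1], allow_fill=True))                 # [10, <NA>]: -1 now marks missing
print(arr.take([0, -1], allow_fill=True, fill_value=99))  # [10, 99]
```
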
This may be useful if you want scalar values - to appear differently within a Series versus on its own (e.g. - quoted or not). - - Returns - ------- - Callable[[Any], str] - A callable that gets instances of the scalar type and - returns a string. By default, :func:`repr` is used - when ``boxed=False`` and :func:`str` is used when - ``boxed=True``. - - Examples - -------- - >>> class MyExtensionArray(pd.arrays.NumpyExtensionArray): - ... def _formatter(self, boxed=False): - ... return lambda x: '*' + str(x) + '*' if boxed else repr(x) + '*' - >>> MyExtensionArray(np.array([1, 2, 3, 4])) - - [1*, 2*, 3*, 4*] - Length: 4, dtype: int64 - """ - if boxed: - return str - return repr - - # ------------------------------------------------------------------------ - # Reshaping - # ------------------------------------------------------------------------ - - def transpose(self, *axes: int) -> ExtensionArray: - """ - Return a transposed view on this array. - - Because ExtensionArrays are always 1D, this is a no-op. It is included - for compatibility with np.ndarray. - """ - return self[:] - - @property - def T(self) -> ExtensionArray: - return self.transpose() - - def ravel(self, order: Literal["C", "F", "A", "K"] | None = "C") -> ExtensionArray: - """ - Return a flattened view on this array. - - Parameters - ---------- - order : {None, 'C', 'F', 'A', 'K'}, default 'C' - - Returns - ------- - ExtensionArray - - Notes - ----- - - Because ExtensionArrays are 1D-only, this is a no-op. - - The "order" argument is ignored, is for compatibility with NumPy. - - Examples - -------- - >>> pd.array([1, 2, 3]).ravel() - - [1, 2, 3] - Length: 3, dtype: Int64 - """ - return self - - @classmethod - def _concat_same_type(cls, to_concat: Sequence[Self]) -> Self: - """ - Concatenate multiple array of this dtype. - - Parameters - ---------- - to_concat : sequence of this type - - Returns - ------- - ExtensionArray - - Examples - -------- - >>> arr1 = pd.array([1, 2, 3]) - >>> arr2 = pd.array([4, 5, 6]) - >>> pd.arrays.IntegerArray._concat_same_type([arr1, arr2]) - - [1, 2, 3, 4, 5, 6] - Length: 6, dtype: Int64 - """ - # Implementer note: this method will only be called with a sequence of - # ExtensionArrays of this class and with the same dtype as self. This - # should allow "easy" concatenation (no upcasting needed), and result - # in a new ExtensionArray of the same dtype. - # Note: this strict behaviour is only guaranteed starting with pandas 1.1 - raise AbstractMethodError(cls) - - # The _can_hold_na attribute is set to True so that pandas internals - # will use the ExtensionDtype.na_value as the NA value in operations - # such as take(), reindex(), shift(), etc. In addition, those results - # will then be of the ExtensionArray subclass rather than an array - # of objects - @cache_readonly - def _can_hold_na(self) -> bool: - return self.dtype._can_hold_na - - def _accumulate( - self, name: str, *, skipna: bool = True, **kwargs - ) -> ExtensionArray: - """ - Return an ExtensionArray performing an accumulation operation. - - The underlying data type might change. - - Parameters - ---------- - name : str - Name of the function, supported values are: - - cummin - - cummax - - cumsum - - cumprod - skipna : bool, default True - If True, skip NA values. - **kwargs - Additional keyword arguments passed to the accumulation function. - Currently, there is no supported kwarg. 
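
The public entry point for `_accumulate` is the Series accumulation methods, which in recent pandas (assumed 2.x behavior here) dispatch to it for nullable dtypes — a sketch:

```python
import pandas as pd

s = pd.Series(pd.array([1, None, 3], dtype="Int64"))
print(s.cumsum())              # [1, <NA>, 4] -- NA skipped by default
print(s.cumsum(skipna=False))  # [1, <NA>, <NA>] -- NA poisons later entries
```
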
- - Returns - ------- - array - - Raises - ------ - NotImplementedError : subclass does not define accumulations - - Examples - -------- - >>> arr = pd.array([1, 2, 3]) - >>> arr._accumulate(name='cumsum') - - [1, 3, 6] - Length: 3, dtype: Int64 - """ - raise NotImplementedError(f"cannot perform {name} with type {self.dtype}") - - def _reduce( - self, name: str, *, skipna: bool = True, keepdims: bool = False, **kwargs - ): - """ - Return a scalar result of performing the reduction operation. - - Parameters - ---------- - name : str - Name of the function, supported values are: - { any, all, min, max, sum, mean, median, prod, - std, var, sem, kurt, skew }. - skipna : bool, default True - If True, skip NaN values. - keepdims : bool, default False - If False, a scalar is returned. - If True, the result has dimension with size one along the reduced axis. - - .. versionadded:: 2.1 - - This parameter is not required in the _reduce signature to keep backward - compatibility, but will become required in the future. If the parameter - is not found in the method signature, a FutureWarning will be emitted. - **kwargs - Additional keyword arguments passed to the reduction function. - Currently, `ddof` is the only supported kwarg. - - Returns - ------- - scalar - - Raises - ------ - TypeError : subclass does not define reductions - - Examples - -------- - >>> pd.array([1, 2, 3])._reduce("min") - 1 - """ - meth = getattr(self, name, None) - if meth is None: - raise TypeError( - f"'{type(self).__name__}' with dtype {self.dtype} " - f"does not support reduction '{name}'" - ) - result = meth(skipna=skipna, **kwargs) - if keepdims: - result = np.array([result]) - - return result - - # https://github.com/python/typeshed/issues/2148#issuecomment-520783318 - # Incompatible types in assignment (expression has type "None", base class - # "object" defined the type as "Callable[[object], int]") - __hash__: ClassVar[None] # type: ignore[assignment] - - # ------------------------------------------------------------------------ - # Non-Optimized Default Methods; in the case of the private methods here, - # these are not guaranteed to be stable across pandas versions. - - def _values_for_json(self) -> np.ndarray: - """ - Specify how to render our entries in to_json. - - Notes - ----- - The dtype on the returned ndarray is not restricted, but for non-native - types that are not specifically handled in objToJSON.c, to_json is - liable to raise. In these cases, it may be safer to return an ndarray - of strings. - """ - return np.asarray(self) - - def _hash_pandas_object( - self, *, encoding: str, hash_key: str, categorize: bool - ) -> npt.NDArray[np.uint64]: - """ - Hook for hash_pandas_object. - - Default is to use the values returned by _values_for_factorize. - - Parameters - ---------- - encoding : str - Encoding for data & key when strings. - hash_key : str - Hash_key for string key to encode. - categorize : bool - Whether to first categorize object arrays before hashing. This is more - efficient when the array contains duplicate values. - - Returns - ------- - np.ndarray[uint64] - - Examples - -------- - >>> pd.array([1, 2])._hash_pandas_object(encoding='utf-8', - ... hash_key="1000000000000000", - ... categorize=False - ... 
) - array([11381023671546835630, 4641644667904626417], dtype=uint64) - """ - from pandas.core.util.hashing import hash_array - - values, _ = self._values_for_factorize() - return hash_array( - values, encoding=encoding, hash_key=hash_key, categorize=categorize - ) - - def tolist(self) -> list: - """ - Return a list of the values. - - These are each a scalar type, which is a Python scalar - (for str, int, float) or a pandas scalar - (for Timestamp/Timedelta/Interval/Period) - - Returns - ------- - list - - Examples - -------- - >>> arr = pd.array([1, 2, 3]) - >>> arr.tolist() - [1, 2, 3] - """ - if self.ndim > 1: - return [x.tolist() for x in self] - return list(self) - - def delete(self, loc: PositionalIndexer) -> Self: - indexer = np.delete(np.arange(len(self)), loc) - return self.take(indexer) - - def insert(self, loc: int, item) -> Self: - """ - Insert an item at the given position. - - Parameters - ---------- - loc : int - item : scalar-like - - Returns - ------- - same type as self - - Notes - ----- - This method should be both type and dtype-preserving. If the item - cannot be held in an array of this type/dtype, either ValueError or - TypeError should be raised. - - The default implementation relies on _from_sequence to raise on invalid - items. - - Examples - -------- - >>> arr = pd.array([1, 2, 3]) - >>> arr.insert(2, -1) - - [1, 2, -1, 3] - Length: 4, dtype: Int64 - """ - loc = validate_insert_loc(loc, len(self)) - - item_arr = type(self)._from_sequence([item], dtype=self.dtype) - - return type(self)._concat_same_type([self[:loc], item_arr, self[loc:]]) - - def _putmask(self, mask: npt.NDArray[np.bool_], value) -> None: - """ - Analogue to np.putmask(self, mask, value) - - Parameters - ---------- - mask : np.ndarray[bool] - value : scalar or listlike - If listlike, must be arraylike with same length as self. - - Returns - ------- - None - - Notes - ----- - Unlike np.putmask, we do not repeat listlike values with mismatched length. - 'value' should either be a scalar or an arraylike with the same length - as self. - """ - if is_list_like(value): - val = value[mask] - else: - val = value - - self[mask] = val - - def _where(self, mask: npt.NDArray[np.bool_], value) -> Self: - """ - Analogue to np.where(mask, self, value) - - Parameters - ---------- - mask : np.ndarray[bool] - value : scalar or listlike - - Returns - ------- - same type as self - """ - result = self.copy() - - if is_list_like(value): - val = value[~mask] - else: - val = value - - result[~mask] = val - return result - - def _fill_mask_inplace( - self, method: str, limit: int | None, mask: npt.NDArray[np.bool_] - ) -> None: - """ - Replace values in locations specified by 'mask' using pad or backfill. - - See also - -------- - ExtensionArray.fillna - """ - func = missing.get_fill_func(method) - npvalues = self.astype(object) - # NB: if we don't copy mask here, it may be altered inplace, which - # would mess up the `self[mask] = ...` below. - func(npvalues, limit=limit, mask=mask.copy()) - new_values = self._from_sequence(npvalues, dtype=self.dtype) - self[mask] = new_values[mask] - - def _rank( - self, - *, - axis: AxisInt = 0, - method: str = "average", - na_option: str = "keep", - ascending: bool = True, - pct: bool = False, - ): - """ - See Series.rank.__doc__. 
- """ - if axis != 0: - raise NotImplementedError - - return rank( - self._values_for_argsort(), - axis=axis, - method=method, - na_option=na_option, - ascending=ascending, - pct=pct, - ) - - @classmethod - def _empty(cls, shape: Shape, dtype: ExtensionDtype): - """ - Create an ExtensionArray with the given shape and dtype. - - See also - -------- - ExtensionDtype.empty - ExtensionDtype.empty is the 'official' public version of this API. - """ - # Implementer note: while ExtensionDtype.empty is the public way to - # call this method, it is still required to implement this `_empty` - # method as well (it is called internally in pandas) - obj = cls._from_sequence([], dtype=dtype) - - taker = np.broadcast_to(np.intp(-1), shape) - result = obj.take(taker, allow_fill=True) - if not isinstance(result, cls) or dtype != result.dtype: - raise NotImplementedError( - f"Default 'empty' implementation is invalid for dtype='{dtype}'" - ) - return result - - def _quantile(self, qs: npt.NDArray[np.float64], interpolation: str) -> Self: - """ - Compute the quantiles of self for each quantile in `qs`. - - Parameters - ---------- - qs : np.ndarray[float64] - interpolation: str - - Returns - ------- - same type as self - """ - mask = np.asarray(self.isna()) - arr = np.asarray(self) - fill_value = np.nan - - res_values = quantile_with_mask(arr, mask, fill_value, qs, interpolation) - return type(self)._from_sequence(res_values) - - def _mode(self, dropna: bool = True) -> Self: - """ - Returns the mode(s) of the ExtensionArray. - - Always returns `ExtensionArray` even if only one value. - - Parameters - ---------- - dropna : bool, default True - Don't consider counts of NA values. - - Returns - ------- - same type as self - Sorted, if possible. - """ - # error: Incompatible return value type (got "Union[ExtensionArray, - # ndarray[Any, Any]]", expected "Self") - return mode(self, dropna=dropna) # type: ignore[return-value] - - def __array_ufunc__(self, ufunc: np.ufunc, method: str, *inputs, **kwargs): - if any( - isinstance(other, (ABCSeries, ABCIndex, ABCDataFrame)) for other in inputs - ): - return NotImplemented - - result = arraylike.maybe_dispatch_ufunc_to_dunder_op( - self, ufunc, method, *inputs, **kwargs - ) - if result is not NotImplemented: - return result - - if "out" in kwargs: - return arraylike.dispatch_ufunc_with_out( - self, ufunc, method, *inputs, **kwargs - ) - - if method == "reduce": - result = arraylike.dispatch_reduction_ufunc( - self, ufunc, method, *inputs, **kwargs - ) - if result is not NotImplemented: - return result - - return arraylike.default_array_ufunc(self, ufunc, method, *inputs, **kwargs) - - def map(self, mapper, na_action=None): - """ - Map values using an input mapping or function. - - Parameters - ---------- - mapper : function, dict, or Series - Mapping correspondence. - na_action : {None, 'ignore'}, default None - If 'ignore', propagate NA values, without passing them to the - mapping correspondence. If 'ignore' is not supported, a - ``NotImplementedError`` should be raised. - - Returns - ------- - Union[ndarray, Index, ExtensionArray] - The output of the mapping function applied to the array. - If the function returns a tuple with more than one element - a MultiIndex will be returned. 
- """ - return map_array(self, mapper, na_action=na_action) - - # ------------------------------------------------------------------------ - # GroupBy Methods - - def _groupby_op( - self, - *, - how: str, - has_dropped_na: bool, - min_count: int, - ngroups: int, - ids: npt.NDArray[np.intp], - **kwargs, - ) -> ArrayLike: - """ - Dispatch GroupBy reduction or transformation operation. - - This is an *experimental* API to allow ExtensionArray authors to implement - reductions and transformations. The API is subject to change. - - Parameters - ---------- - how : {'any', 'all', 'sum', 'prod', 'min', 'max', 'mean', 'median', - 'median', 'var', 'std', 'sem', 'nth', 'last', 'ohlc', - 'cumprod', 'cumsum', 'cummin', 'cummax', 'rank'} - has_dropped_na : bool - min_count : int - ngroups : int - ids : np.ndarray[np.intp] - ids[i] gives the integer label for the group that self[i] belongs to. - **kwargs : operation-specific - 'any', 'all' -> ['skipna'] - 'var', 'std', 'sem' -> ['ddof'] - 'cumprod', 'cumsum', 'cummin', 'cummax' -> ['skipna'] - 'rank' -> ['ties_method', 'ascending', 'na_option', 'pct'] - - Returns - ------- - np.ndarray or ExtensionArray - """ - from pandas.core.arrays.string_ import StringDtype - from pandas.core.groupby.ops import WrappedCythonOp - - kind = WrappedCythonOp.get_kind_from_how(how) - op = WrappedCythonOp(how=how, kind=kind, has_dropped_na=has_dropped_na) - - # GH#43682 - if isinstance(self.dtype, StringDtype): - # StringArray - npvalues = self.to_numpy(object, na_value=np.nan) - else: - raise NotImplementedError( - f"function is not implemented for this dtype: {self.dtype}" - ) - - res_values = op._cython_op_ndim_compat( - npvalues, - min_count=min_count, - ngroups=ngroups, - comp_ids=ids, - mask=None, - **kwargs, - ) - - if op.how in op.cast_blocklist: - # i.e. how in ["rank"], since other cast_blocklist methods don't go - # through cython_operation - return res_values - - if isinstance(self.dtype, StringDtype): - dtype = self.dtype - string_array_cls = dtype.construct_array_type() - return string_array_cls._from_sequence(res_values, dtype=dtype) - - else: - raise NotImplementedError - - -class ExtensionArraySupportsAnyAll(ExtensionArray): - def any(self, *, skipna: bool = True) -> bool: - raise AbstractMethodError(self) - - def all(self, *, skipna: bool = True) -> bool: - raise AbstractMethodError(self) - - -class ExtensionOpsMixin: - """ - A base class for linking the operators to their dunder names. - - .. note:: - - You may want to set ``__array_priority__`` if you want your - implementation to be called when involved in binary operations - with NumPy arrays. 
- """ - - @classmethod - def _create_arithmetic_method(cls, op): - raise AbstractMethodError(cls) - - @classmethod - def _add_arithmetic_ops(cls) -> None: - setattr(cls, "__add__", cls._create_arithmetic_method(operator.add)) - setattr(cls, "__radd__", cls._create_arithmetic_method(roperator.radd)) - setattr(cls, "__sub__", cls._create_arithmetic_method(operator.sub)) - setattr(cls, "__rsub__", cls._create_arithmetic_method(roperator.rsub)) - setattr(cls, "__mul__", cls._create_arithmetic_method(operator.mul)) - setattr(cls, "__rmul__", cls._create_arithmetic_method(roperator.rmul)) - setattr(cls, "__pow__", cls._create_arithmetic_method(operator.pow)) - setattr(cls, "__rpow__", cls._create_arithmetic_method(roperator.rpow)) - setattr(cls, "__mod__", cls._create_arithmetic_method(operator.mod)) - setattr(cls, "__rmod__", cls._create_arithmetic_method(roperator.rmod)) - setattr(cls, "__floordiv__", cls._create_arithmetic_method(operator.floordiv)) - setattr( - cls, "__rfloordiv__", cls._create_arithmetic_method(roperator.rfloordiv) - ) - setattr(cls, "__truediv__", cls._create_arithmetic_method(operator.truediv)) - setattr(cls, "__rtruediv__", cls._create_arithmetic_method(roperator.rtruediv)) - setattr(cls, "__divmod__", cls._create_arithmetic_method(divmod)) - setattr(cls, "__rdivmod__", cls._create_arithmetic_method(roperator.rdivmod)) - - @classmethod - def _create_comparison_method(cls, op): - raise AbstractMethodError(cls) - - @classmethod - def _add_comparison_ops(cls) -> None: - setattr(cls, "__eq__", cls._create_comparison_method(operator.eq)) - setattr(cls, "__ne__", cls._create_comparison_method(operator.ne)) - setattr(cls, "__lt__", cls._create_comparison_method(operator.lt)) - setattr(cls, "__gt__", cls._create_comparison_method(operator.gt)) - setattr(cls, "__le__", cls._create_comparison_method(operator.le)) - setattr(cls, "__ge__", cls._create_comparison_method(operator.ge)) - - @classmethod - def _create_logical_method(cls, op): - raise AbstractMethodError(cls) - - @classmethod - def _add_logical_ops(cls) -> None: - setattr(cls, "__and__", cls._create_logical_method(operator.and_)) - setattr(cls, "__rand__", cls._create_logical_method(roperator.rand_)) - setattr(cls, "__or__", cls._create_logical_method(operator.or_)) - setattr(cls, "__ror__", cls._create_logical_method(roperator.ror_)) - setattr(cls, "__xor__", cls._create_logical_method(operator.xor)) - setattr(cls, "__rxor__", cls._create_logical_method(roperator.rxor)) - - -class ExtensionScalarOpsMixin(ExtensionOpsMixin): - """ - A mixin for defining ops on an ExtensionArray. - - It is assumed that the underlying scalar objects have the operators - already defined. - - Notes - ----- - If you have defined a subclass MyExtensionArray(ExtensionArray), then - use MyExtensionArray(ExtensionArray, ExtensionScalarOpsMixin) to - get the arithmetic operators. After the definition of MyExtensionArray, - insert the lines - - MyExtensionArray._add_arithmetic_ops() - MyExtensionArray._add_comparison_ops() - - to link the operators to your class. - - .. note:: - - You may want to set ``__array_priority__`` if you want your - implementation to be called when involved in binary operations - with NumPy arrays. - """ - - @classmethod - def _create_method(cls, op, coerce_to_dtype: bool = True, result_dtype=None): - """ - A class method that returns a method that will correspond to an - operator for an ExtensionArray subclass, by dispatching to the - relevant operator defined on the individual elements of the - ExtensionArray. 
- - Parameters - ---------- - op : function - An operator that takes arguments op(a, b) - coerce_to_dtype : bool, default True - boolean indicating whether to attempt to convert - the result to the underlying ExtensionArray dtype. - If it's not possible to create a new ExtensionArray with the - values, an ndarray is returned instead. - - Returns - ------- - Callable[[Any, Any], Union[ndarray, ExtensionArray]] - A method that can be bound to a class. When used, the method - receives the two arguments, one of which is the instance of - this class, and should return an ExtensionArray or an ndarray. - - Returning an ndarray may be necessary when the result of the - `op` cannot be stored in the ExtensionArray. The dtype of the - ndarray uses NumPy's normal inference rules. - - Examples - -------- - Given an ExtensionArray subclass called MyExtensionArray, use - - __add__ = cls._create_method(operator.add) - - in the class definition of MyExtensionArray to create the operator - for addition, that will be based on the operator implementation - of the underlying elements of the ExtensionArray - """ - - def _binop(self, other): - def convert_values(param): - if isinstance(param, ExtensionArray) or is_list_like(param): - ovalues = param - else: # Assume its an object - ovalues = [param] * len(self) - return ovalues - - if isinstance(other, (ABCSeries, ABCIndex, ABCDataFrame)): - # rely on pandas to unbox and dispatch to us - return NotImplemented - - lvalues = self - rvalues = convert_values(other) - - # If the operator is not defined for the underlying objects, - # a TypeError should be raised - res = [op(a, b) for (a, b) in zip(lvalues, rvalues)] - - def _maybe_convert(arr): - if coerce_to_dtype: - # https://github.com/pandas-dev/pandas/issues/22850 - # We catch all regular exceptions here, and fall back - # to an ndarray. - res = maybe_cast_pointwise_result(arr, self.dtype, same_dtype=False) - if not isinstance(res, type(self)): - # exception raised in _from_sequence; ensure we have ndarray - res = np.asarray(arr) - else: - res = np.asarray(arr, dtype=result_dtype) - return res - - if op.__name__ in {"divmod", "rdivmod"}: - a, b = zip(*res) - return _maybe_convert(a), _maybe_convert(b) - - return _maybe_convert(res) - - op_name = f"__{op.__name__}__" - return set_function_name(_binop, op_name, cls) - - @classmethod - def _create_arithmetic_method(cls, op): - return cls._create_method(op) - - @classmethod - def _create_comparison_method(cls, op): - return cls._create_method(op, coerce_to_dtype=False, result_dtype=bool) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/groupby/ops.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/groupby/ops.py deleted file mode 100644 index 3c4a22d0094062730eee561cc63cf8356505930a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/groupby/ops.py +++ /dev/null @@ -1,1197 +0,0 @@ -""" -Provide classes to perform the groupby aggregate operations. - -These are not exposed to the user and provide implementations of the grouping -operations, primarily in cython. These classes (BaseGrouper and BinGrouper) -are contained *in* the SeriesGroupBy and DataFrameGroupBy objects. 
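-
-Of the two, BaseGrouper handles label-based grouping, while BinGrouper handles
-bin-based grouping over an ordered axis (e.g. the bins produced by resampling
-with a TimeGrouper).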
-""" -from __future__ import annotations - -import collections -import functools -from typing import ( - TYPE_CHECKING, - Callable, - Generic, - final, -) - -import numpy as np - -from pandas._libs import ( - NaT, - lib, -) -import pandas._libs.groupby as libgroupby -from pandas._typing import ( - ArrayLike, - AxisInt, - NDFrameT, - Shape, - npt, -) -from pandas.errors import AbstractMethodError -from pandas.util._decorators import cache_readonly - -from pandas.core.dtypes.cast import ( - maybe_cast_pointwise_result, - maybe_downcast_to_dtype, -) -from pandas.core.dtypes.common import ( - ensure_float64, - ensure_int64, - ensure_platform_int, - ensure_uint64, - is_1d_only_ea_dtype, -) -from pandas.core.dtypes.missing import ( - isna, - maybe_fill, -) - -from pandas.core.frame import DataFrame -from pandas.core.groupby import grouper -from pandas.core.indexes.api import ( - CategoricalIndex, - Index, - MultiIndex, - ensure_index, -) -from pandas.core.series import Series -from pandas.core.sorting import ( - compress_group_index, - decons_obs_group_ids, - get_flattened_list, - get_group_index, - get_group_index_sorter, - get_indexer_dict, -) - -if TYPE_CHECKING: - from collections.abc import ( - Hashable, - Iterator, - Sequence, - ) - - from pandas.core.generic import NDFrame - - -def check_result_array(obj, dtype): - # Our operation is supposed to be an aggregation/reduction. If - # it returns an ndarray, this likely means an invalid operation has - # been passed. See test_apply_without_aggregation, test_agg_must_agg - if isinstance(obj, np.ndarray): - if dtype != object: - # If it is object dtype, the function can be a reduction/aggregation - # and still return an ndarray e.g. test_agg_over_numpy_arrays - raise ValueError("Must produce aggregated value") - - -def extract_result(res): - """ - Extract the result object, it might be a 0-dim ndarray - or a len-1 0-dim, or a scalar - """ - if hasattr(res, "_values"): - # Preserve EA - res = res._values - if res.ndim == 1 and len(res) == 1: - # see test_agg_lambda_with_timezone, test_resampler_grouper.py::test_apply - res = res[0] - return res - - -class WrappedCythonOp: - """ - Dispatch logic for functions defined in _libs.groupby - - Parameters - ---------- - kind: str - Whether the operation is an aggregate or transform. - how: str - Operation name, e.g. "mean". - has_dropped_na: bool - True precisely when dropna=True and the grouper contains a null value. - """ - - # Functions for which we do _not_ attempt to cast the cython result - # back to the original dtype. 
- cast_blocklist = frozenset( - ["any", "all", "rank", "count", "size", "idxmin", "idxmax"] - ) - - def __init__(self, kind: str, how: str, has_dropped_na: bool) -> None: - self.kind = kind - self.how = how - self.has_dropped_na = has_dropped_na - - _CYTHON_FUNCTIONS: dict[str, dict] = { - "aggregate": { - "any": functools.partial(libgroupby.group_any_all, val_test="any"), - "all": functools.partial(libgroupby.group_any_all, val_test="all"), - "sum": "group_sum", - "prod": "group_prod", - "min": "group_min", - "max": "group_max", - "mean": "group_mean", - "median": "group_median_float64", - "var": "group_var", - "std": functools.partial(libgroupby.group_var, name="std"), - "sem": functools.partial(libgroupby.group_var, name="sem"), - "skew": "group_skew", - "first": "group_nth", - "last": "group_last", - "ohlc": "group_ohlc", - }, - "transform": { - "cumprod": "group_cumprod", - "cumsum": "group_cumsum", - "cummin": "group_cummin", - "cummax": "group_cummax", - "rank": "group_rank", - }, - } - - _cython_arity = {"ohlc": 4} # OHLC - - @classmethod - def get_kind_from_how(cls, how: str) -> str: - if how in cls._CYTHON_FUNCTIONS["aggregate"]: - return "aggregate" - return "transform" - - # Note: we make this a classmethod and pass kind+how so that caching - # works at the class level and not the instance level - @classmethod - @functools.cache - def _get_cython_function( - cls, kind: str, how: str, dtype: np.dtype, is_numeric: bool - ): - dtype_str = dtype.name - ftype = cls._CYTHON_FUNCTIONS[kind][how] - - # see if there is a fused-type version of function - # only valid for numeric - if callable(ftype): - f = ftype - else: - f = getattr(libgroupby, ftype) - if is_numeric: - return f - elif dtype == np.dtype(object): - if how in ["median", "cumprod"]: - # no fused types -> no __signatures__ - raise NotImplementedError( - f"function is not implemented for this dtype: " - f"[how->{how},dtype->{dtype_str}]" - ) - elif how in ["std", "sem"]: - # We have a partial object that does not have __signatures__ - return f - elif how == "skew": - # _get_cython_vals will convert to float64 - pass - elif "object" not in f.__signatures__: - # raise NotImplementedError here rather than TypeError later - raise NotImplementedError( - f"function is not implemented for this dtype: " - f"[how->{how},dtype->{dtype_str}]" - ) - return f - else: - raise NotImplementedError( - "This should not be reached. Please report a bug at " - "github.com/pandas-dev/pandas/", - dtype, - ) - - def _get_cython_vals(self, values: np.ndarray) -> np.ndarray: - """ - Cast numeric dtypes to float64 for functions that only support that. 
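-
-        For example, ``median`` only has a float64 kernel, so integer input is
-        upcast with ``ensure_float64``; similarly, narrow integer dtypes are
-        widened to 64 bits before ``sum``/``prod`` to avoid overflow during the
-        group op.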
- - Parameters - ---------- - values : np.ndarray - - Returns - ------- - values : np.ndarray - """ - how = self.how - - if how in ["median", "std", "sem", "skew"]: - # median only has a float64 implementation - # We should only get here with is_numeric, as non-numeric cases - # should raise in _get_cython_function - values = ensure_float64(values) - - elif values.dtype.kind in "iu": - if how in ["var", "mean"] or ( - self.kind == "transform" and self.has_dropped_na - ): - # has_dropped_na check need for test_null_group_str_transformer - # result may still include NaN, so we have to cast - values = ensure_float64(values) - - elif how in ["sum", "ohlc", "prod", "cumsum", "cumprod"]: - # Avoid overflow during group op - if values.dtype.kind == "i": - values = ensure_int64(values) - else: - values = ensure_uint64(values) - - return values - - def _get_output_shape(self, ngroups: int, values: np.ndarray) -> Shape: - how = self.how - kind = self.kind - - arity = self._cython_arity.get(how, 1) - - out_shape: Shape - if how == "ohlc": - out_shape = (ngroups, arity) - elif arity > 1: - raise NotImplementedError( - "arity of more than 1 is not supported for the 'how' argument" - ) - elif kind == "transform": - out_shape = values.shape - else: - out_shape = (ngroups,) + values.shape[1:] - return out_shape - - def _get_out_dtype(self, dtype: np.dtype) -> np.dtype: - how = self.how - - if how == "rank": - out_dtype = "float64" - else: - if dtype.kind in "iufcb": - out_dtype = f"{dtype.kind}{dtype.itemsize}" - else: - out_dtype = "object" - return np.dtype(out_dtype) - - def _get_result_dtype(self, dtype: np.dtype) -> np.dtype: - """ - Get the desired dtype of a result based on the - input dtype and how it was computed. - - Parameters - ---------- - dtype : np.dtype - - Returns - ------- - np.dtype - The desired dtype of the result. 
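-
-        Examples
-        --------
-        A sketch of the promotion rules implemented below:
-
-        >>> op = WrappedCythonOp(kind="aggregate", how="sum", has_dropped_na=False)
-        >>> op._get_result_dtype(np.dtype(bool))
-        dtype('int64')
-        >>> op = WrappedCythonOp(kind="aggregate", how="mean", has_dropped_na=False)
-        >>> op._get_result_dtype(np.dtype(np.int64))
-        dtype('float64')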
- """ - how = self.how - - if how in ["sum", "cumsum", "sum", "prod", "cumprod"]: - if dtype == np.dtype(bool): - return np.dtype(np.int64) - elif how in ["mean", "median", "var", "std", "sem"]: - if dtype.kind in "fc": - return dtype - elif dtype.kind in "iub": - return np.dtype(np.float64) - return dtype - - @final - def _cython_op_ndim_compat( - self, - values: np.ndarray, - *, - min_count: int, - ngroups: int, - comp_ids: np.ndarray, - mask: npt.NDArray[np.bool_] | None = None, - result_mask: npt.NDArray[np.bool_] | None = None, - **kwargs, - ) -> np.ndarray: - if values.ndim == 1: - # expand to 2d, dispatch, then squeeze if appropriate - values2d = values[None, :] - if mask is not None: - mask = mask[None, :] - if result_mask is not None: - result_mask = result_mask[None, :] - res = self._call_cython_op( - values2d, - min_count=min_count, - ngroups=ngroups, - comp_ids=comp_ids, - mask=mask, - result_mask=result_mask, - **kwargs, - ) - if res.shape[0] == 1: - return res[0] - - # otherwise we have OHLC - return res.T - - return self._call_cython_op( - values, - min_count=min_count, - ngroups=ngroups, - comp_ids=comp_ids, - mask=mask, - result_mask=result_mask, - **kwargs, - ) - - @final - def _call_cython_op( - self, - values: np.ndarray, # np.ndarray[ndim=2] - *, - min_count: int, - ngroups: int, - comp_ids: np.ndarray, - mask: npt.NDArray[np.bool_] | None, - result_mask: npt.NDArray[np.bool_] | None, - **kwargs, - ) -> np.ndarray: # np.ndarray[ndim=2] - orig_values = values - - dtype = values.dtype - is_numeric = dtype.kind in "iufcb" - - is_datetimelike = dtype.kind in "mM" - - if is_datetimelike: - values = values.view("int64") - is_numeric = True - elif dtype.kind == "b": - values = values.view("uint8") - if values.dtype == "float16": - values = values.astype(np.float32) - - if self.how in ["any", "all"]: - if mask is None: - mask = isna(values) - if dtype == object: - if kwargs["skipna"]: - # GH#37501: don't raise on pd.NA when skipna=True - if mask.any(): - # mask on original values computed separately - values = values.copy() - values[mask] = True - values = values.astype(bool, copy=False).view(np.int8) - is_numeric = True - - values = values.T - if mask is not None: - mask = mask.T - if result_mask is not None: - result_mask = result_mask.T - - out_shape = self._get_output_shape(ngroups, values) - func = self._get_cython_function(self.kind, self.how, values.dtype, is_numeric) - values = self._get_cython_vals(values) - out_dtype = self._get_out_dtype(values.dtype) - - result = maybe_fill(np.empty(out_shape, dtype=out_dtype)) - if self.kind == "aggregate": - counts = np.zeros(ngroups, dtype=np.int64) - if self.how in ["min", "max", "mean", "last", "first", "sum"]: - func( - out=result, - counts=counts, - values=values, - labels=comp_ids, - min_count=min_count, - mask=mask, - result_mask=result_mask, - is_datetimelike=is_datetimelike, - ) - elif self.how in ["sem", "std", "var", "ohlc", "prod", "median"]: - if self.how in ["std", "sem"]: - kwargs["is_datetimelike"] = is_datetimelike - func( - result, - counts, - values, - comp_ids, - min_count=min_count, - mask=mask, - result_mask=result_mask, - **kwargs, - ) - elif self.how in ["any", "all"]: - func( - out=result, - values=values, - labels=comp_ids, - mask=mask, - result_mask=result_mask, - **kwargs, - ) - result = result.astype(bool, copy=False) - elif self.how in ["skew"]: - func( - out=result, - counts=counts, - values=values, - labels=comp_ids, - mask=mask, - result_mask=result_mask, - **kwargs, - ) - if dtype == object: - 
result = result.astype(object)
-
-            else:
-                raise NotImplementedError(f"{self.how} is not implemented")
-        else:
-            # TODO: min_count
-            if self.how != "rank":
-                # TODO: should rank take result_mask?
-                kwargs["result_mask"] = result_mask
-            func(
-                out=result,
-                values=values,
-                labels=comp_ids,
-                ngroups=ngroups,
-                is_datetimelike=is_datetimelike,
-                mask=mask,
-                **kwargs,
-            )
-
-        if self.kind == "aggregate":
-            # i.e. counts is defined.  Locations where count < min_count
-            # need to have the result set to np.nan, which may require casting,
-            # see GH#40767
-            if result.dtype.kind in "iu" and not is_datetimelike:
-                # if the op keeps the int dtypes, we have to use 0
-                cutoff = max(0 if self.how in ["sum", "prod"] else 1, min_count)
-                empty_groups = counts < cutoff
-                if empty_groups.any():
-                    if result_mask is not None:
-                        assert result_mask[empty_groups].all()
-                    else:
-                        # Note: this conversion could be lossy, see GH#40767
-                        result = result.astype("float64")
-                        result[empty_groups] = np.nan
-
-        result = result.T
-
-        if self.how not in self.cast_blocklist:
-            # e.g. if we are int64 and need to restore to datetime64/timedelta64
-            # "rank" is the only member of cast_blocklist we get here, since
-            # other cast_blocklist methods don't go through cython_operation
-            res_dtype = self._get_result_dtype(orig_values.dtype)
-            op_result = maybe_downcast_to_dtype(result, res_dtype)
-        else:
-            op_result = result
-
-        return op_result
-
-    @final
-    def _validate_axis(self, axis: AxisInt, values: ArrayLike) -> None:
-        if values.ndim > 2:
-            raise NotImplementedError("number of dimensions is currently limited to 2")
-        if values.ndim == 2:
-            assert axis == 1, axis
-        elif not is_1d_only_ea_dtype(values.dtype):
-            # Note: it is *not* the case that axis is always 0 for 1-dim values,
-            # as we can have 1D ExtensionArrays that we need to treat as 2D
-            assert axis == 0
-
-    @final
-    def cython_operation(
-        self,
-        *,
-        values: ArrayLike,
-        axis: AxisInt,
-        min_count: int = -1,
-        comp_ids: np.ndarray,
-        ngroups: int,
-        **kwargs,
-    ) -> ArrayLike:
-        """
-        Call our cython function, with appropriate pre- and post- processing.
-        """
-        self._validate_axis(axis, values)
-
-        if not isinstance(values, np.ndarray):
-            # i.e. ExtensionArray
-            return values._groupby_op(
-                how=self.how,
-                has_dropped_na=self.has_dropped_na,
-                min_count=min_count,
-                ngroups=ngroups,
-                ids=comp_ids,
-                **kwargs,
-            )
-
-        return self._cython_op_ndim_compat(
-            values,
-            min_count=min_count,
-            ngroups=ngroups,
-            comp_ids=comp_ids,
-            mask=None,
-            **kwargs,
-        )
-
-
-class BaseGrouper:
-    """
-    This is an internal Grouper class, which actually holds
-    the generated groups
-
-    Parameters
-    ----------
-    axis : Index
-    groupings : Sequence[Grouping]
-        all the grouping instances to handle in this grouper
-        for example for grouper list to groupby, need to pass the list
-    sort : bool, default True
-        whether this grouper will give sorted result or not
-
-    """
-
-    axis: Index
-
-    def __init__(
-        self,
-        axis: Index,
-        groupings: Sequence[grouper.Grouping],
-        sort: bool = True,
-        dropna: bool = True,
-    ) -> None:
-        assert isinstance(axis, Index), axis
-
-        self.axis = axis
-        self._groupings: list[grouper.Grouping] = list(groupings)
-        self._sort = sort
-        self.dropna = dropna
-
-    @property
-    def groupings(self) -> list[grouper.Grouping]:
-        return self._groupings
-
-    @property
-    def shape(self) -> Shape:
-        return tuple(ping.ngroups for ping in self.groupings)
-
-    def __iter__(self) -> Iterator[Hashable]:
-        return iter(self.indices)
-
-    @property
-    def nkeys(self) -> int:
-        return len(self.groupings)
-
-    def get_iterator(
-        self, data: NDFrameT, axis: AxisInt = 0
-    ) -> Iterator[tuple[Hashable, NDFrameT]]:
-        """
-        Groupby iterator
-
-        Returns
-        -------
-        Generator yielding sequence of (name, subsetted object)
-        for each group
-        """
-        splitter = self._get_splitter(data, axis=axis)
-        keys = self.group_keys_seq
-        yield from zip(keys, splitter)
-
-    @final
-    def _get_splitter(self, data: NDFrame, axis: AxisInt = 0) -> DataSplitter:
-        """
-        Returns
-        -------
-        Generator yielding subsetted objects
-        """
-        ids, _, ngroups = self.group_info
-        return _get_splitter(
-            data,
-            ids,
-            ngroups,
-            sorted_ids=self._sorted_ids,
-            sort_idx=self._sort_idx,
-            axis=axis,
-        )
-
-    @final
-    @cache_readonly
-    def group_keys_seq(self):
-        if len(self.groupings) == 1:
-            return self.levels[0]
-        else:
-            ids, _, ngroups = self.group_info
-
-            # provide "flattened" iterator for multi-group setting
-            return get_flattened_list(ids, ngroups, self.levels, self.codes)
-
-    @cache_readonly
-    def indices(self) -> dict[Hashable, npt.NDArray[np.intp]]:
"""dict {group name -> group indices}""" - if len(self.groupings) == 1 and isinstance(self.result_index, CategoricalIndex): - # This shows unused categories in indices GH#38642 - return self.groupings[0].indices - codes_list = [ping.codes for ping in self.groupings] - keys = [ping.group_index for ping in self.groupings] - return get_indexer_dict(codes_list, keys) - - @final - def result_ilocs(self) -> npt.NDArray[np.intp]: - """ - Get the original integer locations of result_index in the input. - """ - # Original indices are where group_index would go via sorting. - # But when dropna is true, we need to remove null values while accounting for - # any gaps that then occur because of them. - group_index = get_group_index( - self.codes, self.shape, sort=self._sort, xnull=True - ) - group_index, _ = compress_group_index(group_index, sort=self._sort) - - if self.has_dropped_na: - mask = np.where(group_index >= 0) - # Count how many gaps are caused by previous null values for each position - null_gaps = np.cumsum(group_index == -1)[mask] - group_index = group_index[mask] - - result = get_group_index_sorter(group_index, self.ngroups) - - if self.has_dropped_na: - # Shift by the number of prior null gaps - result += np.take(null_gaps, result) - - return result - - @final - @property - def codes(self) -> list[npt.NDArray[np.signedinteger]]: - return [ping.codes for ping in self.groupings] - - @property - def levels(self) -> list[Index]: - return [ping.group_index for ping in self.groupings] - - @property - def names(self) -> list[Hashable]: - return [ping.name for ping in self.groupings] - - @final - def size(self) -> Series: - """ - Compute group sizes. - """ - ids, _, ngroups = self.group_info - out: np.ndarray | list - if ngroups: - out = np.bincount(ids[ids != -1], minlength=ngroups) - else: - out = [] - return Series(out, index=self.result_index, dtype="int64") - - @cache_readonly - def groups(self) -> dict[Hashable, np.ndarray]: - """dict {group name -> group labels}""" - if len(self.groupings) == 1: - return self.groupings[0].groups - else: - to_groupby = [] - for ping in self.groupings: - gv = ping.grouping_vector - if not isinstance(gv, BaseGrouper): - to_groupby.append(gv) - else: - to_groupby.append(gv.groupings[0].grouping_vector) - index = MultiIndex.from_arrays(to_groupby) - return self.axis.groupby(index) - - @final - @cache_readonly - def is_monotonic(self) -> bool: - # return if my group orderings are monotonic - return Index(self.group_info[0]).is_monotonic_increasing - - @final - @cache_readonly - def has_dropped_na(self) -> bool: - """ - Whether grouper has null value(s) that are dropped. 
- """ - return bool((self.group_info[0] < 0).any()) - - @cache_readonly - def group_info(self) -> tuple[npt.NDArray[np.intp], npt.NDArray[np.intp], int]: - comp_ids, obs_group_ids = self._get_compressed_codes() - - ngroups = len(obs_group_ids) - comp_ids = ensure_platform_int(comp_ids) - - return comp_ids, obs_group_ids, ngroups - - @cache_readonly - def codes_info(self) -> npt.NDArray[np.intp]: - # return the codes of items in original grouped axis - ids, _, _ = self.group_info - return ids - - @final - def _get_compressed_codes( - self, - ) -> tuple[npt.NDArray[np.signedinteger], npt.NDArray[np.intp]]: - # The first returned ndarray may have any signed integer dtype - if len(self.groupings) > 1: - group_index = get_group_index(self.codes, self.shape, sort=True, xnull=True) - return compress_group_index(group_index, sort=self._sort) - # FIXME: compress_group_index's second return value is int64, not intp - - ping = self.groupings[0] - return ping.codes, np.arange(len(ping.group_index), dtype=np.intp) - - @final - @cache_readonly - def ngroups(self) -> int: - return len(self.result_index) - - @property - def reconstructed_codes(self) -> list[npt.NDArray[np.intp]]: - codes = self.codes - ids, obs_ids, _ = self.group_info - return decons_obs_group_ids(ids, obs_ids, self.shape, codes, xnull=True) - - @cache_readonly - def result_index(self) -> Index: - if len(self.groupings) == 1: - return self.groupings[0].result_index.rename(self.names[0]) - - codes = self.reconstructed_codes - levels = [ping.result_index for ping in self.groupings] - return MultiIndex( - levels=levels, codes=codes, verify_integrity=False, names=self.names - ) - - @final - def get_group_levels(self) -> list[ArrayLike]: - # Note: only called from _insert_inaxis_grouper, which - # is only called for BaseGrouper, never for BinGrouper - if len(self.groupings) == 1: - return [self.groupings[0].group_arraylike] - - name_list = [] - for ping, codes in zip(self.groupings, self.reconstructed_codes): - codes = ensure_platform_int(codes) - levels = ping.group_arraylike.take(codes) - - name_list.append(levels) - - return name_list - - # ------------------------------------------------------------ - # Aggregation functions - - @final - def _cython_operation( - self, - kind: str, - values, - how: str, - axis: AxisInt, - min_count: int = -1, - **kwargs, - ) -> ArrayLike: - """ - Returns the values of a cython operation. - """ - assert kind in ["transform", "aggregate"] - - cy_op = WrappedCythonOp(kind=kind, how=how, has_dropped_na=self.has_dropped_na) - - ids, _, _ = self.group_info - ngroups = self.ngroups - return cy_op.cython_operation( - values=values, - axis=axis, - min_count=min_count, - comp_ids=ids, - ngroups=ngroups, - **kwargs, - ) - - @final - def agg_series( - self, obj: Series, func: Callable, preserve_dtype: bool = False - ) -> ArrayLike: - """ - Parameters - ---------- - obj : Series - func : function taking a Series and returning a scalar-like - preserve_dtype : bool - Whether the aggregation is known to be dtype-preserving. - - Returns - ------- - np.ndarray or ExtensionArray - """ - # test_groupby_empty_with_category gets here with self.ngroups == 0 - # and len(obj) > 0 - - if len(obj) > 0 and not isinstance(obj._values, np.ndarray): - # we can preserve a little bit more aggressively with EA dtype - # because maybe_cast_pointwise_result will do a try/except - # with _from_sequence. NB we are assuming here that _from_sequence - # is sufficiently strict that it casts appropriately. 
- preserve_dtype = True - - result = self._aggregate_series_pure_python(obj, func) - - npvalues = lib.maybe_convert_objects(result, try_float=False) - if preserve_dtype: - out = maybe_cast_pointwise_result(npvalues, obj.dtype, numeric_only=True) - else: - out = npvalues - return out - - @final - def _aggregate_series_pure_python( - self, obj: Series, func: Callable - ) -> npt.NDArray[np.object_]: - _, _, ngroups = self.group_info - - result = np.empty(ngroups, dtype="O") - initialized = False - - splitter = self._get_splitter(obj, axis=0) - - for i, group in enumerate(splitter): - res = func(group) - res = extract_result(res) - - if not initialized: - # We only do this validation on the first iteration - check_result_array(res, group.dtype) - initialized = True - - result[i] = res - - return result - - @final - def apply_groupwise( - self, f: Callable, data: DataFrame | Series, axis: AxisInt = 0 - ) -> tuple[list, bool]: - mutated = False - splitter = self._get_splitter(data, axis=axis) - group_keys = self.group_keys_seq - result_values = [] - - # This calls DataSplitter.__iter__ - zipped = zip(group_keys, splitter) - - for key, group in zipped: - # Pinning name is needed for - # test_group_apply_once_per_group, - # test_inconsistent_return_type, test_set_group_name, - # test_group_name_available_in_inference_pass, - # test_groupby_multi_timezone - object.__setattr__(group, "name", key) - - # group might be modified - group_axes = group.axes - res = f(group) - if not mutated and not _is_indexed_like(res, group_axes, axis): - mutated = True - result_values.append(res) - # getattr pattern for __name__ is needed for functools.partial objects - if len(group_keys) == 0 and getattr(f, "__name__", None) in [ - "skew", - "sum", - "prod", - ]: - # If group_keys is empty, then no function calls have been made, - # so we will not have raised even if this is an invalid dtype. - # So do one dummy call here to raise appropriate TypeError. 
-            f(data.iloc[:0])
-
-        return result_values, mutated
-
-    # ------------------------------------------------------------
-    # Methods for sorting subsets of our GroupBy's object
-
-    @final
-    @cache_readonly
-    def _sort_idx(self) -> npt.NDArray[np.intp]:
-        # Counting sort indexer
-        ids, _, ngroups = self.group_info
-        return get_group_index_sorter(ids, ngroups)
-
-    @final
-    @cache_readonly
-    def _sorted_ids(self) -> npt.NDArray[np.intp]:
-        ids, _, _ = self.group_info
-        return ids.take(self._sort_idx)
-
-
-class BinGrouper(BaseGrouper):
-    """
-    This is an internal Grouper class
-
-    Parameters
-    ----------
-    bins : the split index of binlabels to group the item of axis
-    binlabels : the label list
-    indexer : np.ndarray[np.intp], optional
-        the indexer created by Grouper; some groupers (e.g. TimeGrouper) sort
-        their axis, so their group_info is also sorted and the indexer is
-        needed to restore the original order
-
-    Examples
-    --------
-    bins: [2, 4, 6, 8, 10]
-    binlabels: DatetimeIndex(['2005-01-01', '2005-01-03',
-        '2005-01-05', '2005-01-07', '2005-01-09'],
-        dtype='datetime64[ns]', freq='2D')
-
-    group_info, which contains the label of each item on the grouped axis,
-    the index of each label in the label list, and the number of groups, is
-
-    (array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4]), array([0, 1, 2, 3, 4]), 5)
-
-    meaning that the grouped axis has 10 items which fall into 5 labels:
-    the first and second items belong to the first label, the third and
-    fourth items belong to the second label, and so on.
-
-    """
-
-    bins: npt.NDArray[np.int64]
-    binlabels: Index
-
-    def __init__(
-        self,
-        bins,
-        binlabels,
-        indexer=None,
-    ) -> None:
-        self.bins = ensure_int64(bins)
-        self.binlabels = ensure_index(binlabels)
-        self.indexer = indexer
-
-        # These lengths must match, otherwise we could call agg_series
-        # with empty self.bins, which would raise later.
- assert len(self.binlabels) == len(self.bins) - - @cache_readonly - def groups(self): - """dict {group name -> group labels}""" - # this is mainly for compat - # GH 3881 - result = { - key: value - for key, value in zip(self.binlabels, self.bins) - if key is not NaT - } - return result - - def __iter__(self) -> Iterator[Hashable]: - return iter(self.groupings[0].grouping_vector) - - @property - def nkeys(self) -> int: - # still matches len(self.groupings), but we can hard-code - return 1 - - @cache_readonly - def codes_info(self) -> npt.NDArray[np.intp]: - # return the codes of items in original grouped axis - ids, _, _ = self.group_info - if self.indexer is not None: - sorter = np.lexsort((ids, self.indexer)) - ids = ids[sorter] - return ids - - def get_iterator(self, data: NDFrame, axis: AxisInt = 0): - """ - Groupby iterator - - Returns - ------- - Generator yielding sequence of (name, subsetted object) - for each group - """ - if axis == 0: - slicer = lambda start, edge: data.iloc[start:edge] - else: - slicer = lambda start, edge: data.iloc[:, start:edge] - - length = len(data.axes[axis]) - - start = 0 - for edge, label in zip(self.bins, self.binlabels): - if label is not NaT: - yield label, slicer(start, edge) - start = edge - - if start < length: - yield self.binlabels[-1], slicer(start, None) - - @cache_readonly - def indices(self): - indices = collections.defaultdict(list) - - i = 0 - for label, bin in zip(self.binlabels, self.bins): - if i < bin: - if label is not NaT: - indices[label] = list(range(i, bin)) - i = bin - return indices - - @cache_readonly - def group_info(self) -> tuple[npt.NDArray[np.intp], npt.NDArray[np.intp], int]: - ngroups = self.ngroups - obs_group_ids = np.arange(ngroups, dtype=np.intp) - rep = np.diff(np.r_[0, self.bins]) - - rep = ensure_platform_int(rep) - if ngroups == len(self.bins): - comp_ids = np.repeat(np.arange(ngroups), rep) - else: - comp_ids = np.repeat(np.r_[-1, np.arange(ngroups)], rep) - - return ( - ensure_platform_int(comp_ids), - obs_group_ids, - ngroups, - ) - - @cache_readonly - def reconstructed_codes(self) -> list[np.ndarray]: - # get unique result indices, and prepend 0 as groupby starts from the first - return [np.r_[0, np.flatnonzero(self.bins[1:] != self.bins[:-1]) + 1]] - - @cache_readonly - def result_index(self) -> Index: - if len(self.binlabels) != 0 and isna(self.binlabels[0]): - return self.binlabels[1:] - - return self.binlabels - - @property - def levels(self) -> list[Index]: - return [self.binlabels] - - @property - def names(self) -> list[Hashable]: - return [self.binlabels.name] - - @property - def groupings(self) -> list[grouper.Grouping]: - lev = self.binlabels - codes = self.group_info[0] - labels = lev.take(codes) - ping = grouper.Grouping( - labels, labels, in_axis=False, level=None, uniques=lev._values - ) - return [ping] - - -def _is_indexed_like(obj, axes, axis: AxisInt) -> bool: - if isinstance(obj, Series): - if len(axes) > 1: - return False - return obj.axes[axis].equals(axes[axis]) - elif isinstance(obj, DataFrame): - return obj.axes[axis].equals(axes[axis]) - - return False - - -# ---------------------------------------------------------------------- -# Splitting / application - - -class DataSplitter(Generic[NDFrameT]): - def __init__( - self, - data: NDFrameT, - labels: npt.NDArray[np.intp], - ngroups: int, - *, - sort_idx: npt.NDArray[np.intp], - sorted_ids: npt.NDArray[np.intp], - axis: AxisInt = 0, - ) -> None: - self.data = data - self.labels = ensure_platform_int(labels) # _should_ already be np.intp 
- self.ngroups = ngroups - - self._slabels = sorted_ids - self._sort_idx = sort_idx - - self.axis = axis - assert isinstance(axis, int), axis - - def __iter__(self) -> Iterator: - sdata = self._sorted_data - - if self.ngroups == 0: - # we are inside a generator, rather than raise StopIteration - # we merely return signal the end - return - - starts, ends = lib.generate_slices(self._slabels, self.ngroups) - - for start, end in zip(starts, ends): - yield self._chop(sdata, slice(start, end)) - - @cache_readonly - def _sorted_data(self) -> NDFrameT: - return self.data.take(self._sort_idx, axis=self.axis) - - def _chop(self, sdata, slice_obj: slice) -> NDFrame: - raise AbstractMethodError(self) - - -class SeriesSplitter(DataSplitter): - def _chop(self, sdata: Series, slice_obj: slice) -> Series: - # fastpath equivalent to `sdata.iloc[slice_obj]` - mgr = sdata._mgr.get_slice(slice_obj) - ser = sdata._constructor_from_mgr(mgr, axes=mgr.axes) - ser._name = sdata.name - return ser.__finalize__(sdata, method="groupby") - - -class FrameSplitter(DataSplitter): - def _chop(self, sdata: DataFrame, slice_obj: slice) -> DataFrame: - # Fastpath equivalent to: - # if self.axis == 0: - # return sdata.iloc[slice_obj] - # else: - # return sdata.iloc[:, slice_obj] - mgr = sdata._mgr.get_slice(slice_obj, axis=1 - self.axis) - df = sdata._constructor_from_mgr(mgr, axes=mgr.axes) - return df.__finalize__(sdata, method="groupby") - - -def _get_splitter( - data: NDFrame, - labels: npt.NDArray[np.intp], - ngroups: int, - *, - sort_idx: npt.NDArray[np.intp], - sorted_ids: npt.NDArray[np.intp], - axis: AxisInt = 0, -) -> DataSplitter: - if isinstance(data, Series): - klass: type[DataSplitter] = SeriesSplitter - else: - # i.e. DataFrame - klass = FrameSplitter - - return klass( - data, labels, ngroups, sort_idx=sort_idx, sorted_ids=sorted_ids, axis=axis - ) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/dtypes/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/dtypes/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_rename_axis.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_rename_axis.py deleted file mode 100644 index dd4a77c6509b8de7eb767bb44238004399c159a4..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_rename_axis.py +++ /dev/null @@ -1,111 +0,0 @@ -import numpy as np -import pytest - -from pandas import ( - DataFrame, - Index, - MultiIndex, -) -import pandas._testing as tm - - -class TestDataFrameRenameAxis: - def test_rename_axis_inplace(self, float_frame): - # GH#15704 - expected = float_frame.rename_axis("foo") - result = float_frame.copy() - return_value = no_return = result.rename_axis("foo", inplace=True) - assert return_value is None - - assert no_return is None - tm.assert_frame_equal(result, expected) - - expected = float_frame.rename_axis("bar", axis=1) - result = float_frame.copy() - return_value = no_return = result.rename_axis("bar", axis=1, inplace=True) - assert return_value is None - - assert no_return is None - tm.assert_frame_equal(result, expected) - - def test_rename_axis_raises(self): - # GH#17833 - df = DataFrame({"A": [1, 2], "B": [1, 2]}) - with pytest.raises(ValueError, match="Use 
`.rename`"): - df.rename_axis(id, axis=0) - - with pytest.raises(ValueError, match="Use `.rename`"): - df.rename_axis({0: 10, 1: 20}, axis=0) - - with pytest.raises(ValueError, match="Use `.rename`"): - df.rename_axis(id, axis=1) - - with pytest.raises(ValueError, match="Use `.rename`"): - df["A"].rename_axis(id) - - def test_rename_axis_mapper(self): - # GH#19978 - mi = MultiIndex.from_product([["a", "b", "c"], [1, 2]], names=["ll", "nn"]) - df = DataFrame( - {"x": list(range(len(mi))), "y": [i * 10 for i in range(len(mi))]}, index=mi - ) - - # Test for rename of the Index object of columns - result = df.rename_axis("cols", axis=1) - tm.assert_index_equal(result.columns, Index(["x", "y"], name="cols")) - - # Test for rename of the Index object of columns using dict - result = result.rename_axis(columns={"cols": "new"}, axis=1) - tm.assert_index_equal(result.columns, Index(["x", "y"], name="new")) - - # Test for renaming index using dict - result = df.rename_axis(index={"ll": "foo"}) - assert result.index.names == ["foo", "nn"] - - # Test for renaming index using a function - result = df.rename_axis(index=str.upper, axis=0) - assert result.index.names == ["LL", "NN"] - - # Test for renaming index providing complete list - result = df.rename_axis(index=["foo", "goo"]) - assert result.index.names == ["foo", "goo"] - - # Test for changing index and columns at same time - sdf = df.reset_index().set_index("nn").drop(columns=["ll", "y"]) - result = sdf.rename_axis(index="foo", columns="meh") - assert result.index.name == "foo" - assert result.columns.name == "meh" - - # Test different error cases - with pytest.raises(TypeError, match="Must pass"): - df.rename_axis(index="wrong") - - with pytest.raises(ValueError, match="Length of names"): - df.rename_axis(index=["wrong"]) - - with pytest.raises(TypeError, match="bogus"): - df.rename_axis(bogus=None) - - @pytest.mark.parametrize( - "kwargs, rename_index, rename_columns", - [ - ({"mapper": None, "axis": 0}, True, False), - ({"mapper": None, "axis": 1}, False, True), - ({"index": None}, True, False), - ({"columns": None}, False, True), - ({"index": None, "columns": None}, True, True), - ({}, False, False), - ], - ) - def test_rename_axis_none(self, kwargs, rename_index, rename_columns): - # GH 25034 - index = Index(list("abc"), name="foo") - columns = Index(["col1", "col2"], name="bar") - data = np.arange(6).reshape(3, 2) - df = DataFrame(data, index, columns) - - result = df.rename_axis(**kwargs) - expected_index = index.rename(None) if rename_index else index - expected_columns = columns.rename(None) if rename_columns else columns - expected = DataFrame(data, expected_index, expected_columns) - tm.assert_frame_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/groupby/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/groupby/__init__.py deleted file mode 100644 index 446d9da4377712b073d76dac7672dcf1de00cf04..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/groupby/__init__.py +++ /dev/null @@ -1,25 +0,0 @@ -def get_groupby_method_args(name, obj): - """ - Get required arguments for a groupby method. - - When parametrizing a test over groupby methods (e.g. "sum", "mean", "fillna"), - it is often the case that arguments are required for certain methods. - - Parameters - ---------- - name: str - Name of the method. 
- obj: Series or DataFrame - pandas object that is being grouped. - - Returns - ------- - A tuple of required arguments for the method. - """ - if name in ("nth", "fillna", "take"): - return (0,) - if name == "quantile": - return (0.5,) - if name == "corrwith": - return (obj,) - return () diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/categorical/test_indexing.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/categorical/test_indexing.py deleted file mode 100644 index 49eb79da616e7603b70ee3189e9004dd51fb33e7..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/categorical/test_indexing.py +++ /dev/null @@ -1,420 +0,0 @@ -import numpy as np -import pytest - -from pandas.errors import InvalidIndexError - -import pandas as pd -from pandas import ( - CategoricalIndex, - Index, - IntervalIndex, - Timestamp, -) -import pandas._testing as tm - - -class TestTake: - def test_take_fill_value(self): - # GH 12631 - - # numeric category - idx = CategoricalIndex([1, 2, 3], name="xxx") - result = idx.take(np.array([1, 0, -1])) - expected = CategoricalIndex([2, 1, 3], name="xxx") - tm.assert_index_equal(result, expected) - tm.assert_categorical_equal(result.values, expected.values) - - # fill_value - result = idx.take(np.array([1, 0, -1]), fill_value=True) - expected = CategoricalIndex([2, 1, np.nan], categories=[1, 2, 3], name="xxx") - tm.assert_index_equal(result, expected) - tm.assert_categorical_equal(result.values, expected.values) - - # allow_fill=False - result = idx.take(np.array([1, 0, -1]), allow_fill=False, fill_value=True) - expected = CategoricalIndex([2, 1, 3], name="xxx") - tm.assert_index_equal(result, expected) - tm.assert_categorical_equal(result.values, expected.values) - - # object category - idx = CategoricalIndex( - list("CBA"), categories=list("ABC"), ordered=True, name="xxx" - ) - result = idx.take(np.array([1, 0, -1])) - expected = CategoricalIndex( - list("BCA"), categories=list("ABC"), ordered=True, name="xxx" - ) - tm.assert_index_equal(result, expected) - tm.assert_categorical_equal(result.values, expected.values) - - # fill_value - result = idx.take(np.array([1, 0, -1]), fill_value=True) - expected = CategoricalIndex( - ["B", "C", np.nan], categories=list("ABC"), ordered=True, name="xxx" - ) - tm.assert_index_equal(result, expected) - tm.assert_categorical_equal(result.values, expected.values) - - # allow_fill=False - result = idx.take(np.array([1, 0, -1]), allow_fill=False, fill_value=True) - expected = CategoricalIndex( - list("BCA"), categories=list("ABC"), ordered=True, name="xxx" - ) - tm.assert_index_equal(result, expected) - tm.assert_categorical_equal(result.values, expected.values) - - msg = ( - "When allow_fill=True and fill_value is not None, " - "all indices must be >= -1" - ) - with pytest.raises(ValueError, match=msg): - idx.take(np.array([1, 0, -2]), fill_value=True) - with pytest.raises(ValueError, match=msg): - idx.take(np.array([1, 0, -5]), fill_value=True) - - msg = "index -5 is out of bounds for (axis 0 with )?size 3" - with pytest.raises(IndexError, match=msg): - idx.take(np.array([1, -5])) - - def test_take_fill_value_datetime(self): - # datetime category - idx = pd.DatetimeIndex(["2011-01-01", "2011-02-01", "2011-03-01"], name="xxx") - idx = CategoricalIndex(idx) - result = idx.take(np.array([1, 0, -1])) - expected = pd.DatetimeIndex( - ["2011-02-01", "2011-01-01", "2011-03-01"], 
name="xxx" - ) - expected = CategoricalIndex(expected) - tm.assert_index_equal(result, expected) - - # fill_value - result = idx.take(np.array([1, 0, -1]), fill_value=True) - expected = pd.DatetimeIndex(["2011-02-01", "2011-01-01", "NaT"], name="xxx") - exp_cats = pd.DatetimeIndex(["2011-01-01", "2011-02-01", "2011-03-01"]) - expected = CategoricalIndex(expected, categories=exp_cats) - tm.assert_index_equal(result, expected) - - # allow_fill=False - result = idx.take(np.array([1, 0, -1]), allow_fill=False, fill_value=True) - expected = pd.DatetimeIndex( - ["2011-02-01", "2011-01-01", "2011-03-01"], name="xxx" - ) - expected = CategoricalIndex(expected) - tm.assert_index_equal(result, expected) - - msg = ( - "When allow_fill=True and fill_value is not None, " - "all indices must be >= -1" - ) - with pytest.raises(ValueError, match=msg): - idx.take(np.array([1, 0, -2]), fill_value=True) - with pytest.raises(ValueError, match=msg): - idx.take(np.array([1, 0, -5]), fill_value=True) - - msg = "index -5 is out of bounds for (axis 0 with )?size 3" - with pytest.raises(IndexError, match=msg): - idx.take(np.array([1, -5])) - - def test_take_invalid_kwargs(self): - idx = CategoricalIndex([1, 2, 3], name="foo") - indices = [1, 0, -1] - - msg = r"take\(\) got an unexpected keyword argument 'foo'" - with pytest.raises(TypeError, match=msg): - idx.take(indices, foo=2) - - msg = "the 'out' parameter is not supported" - with pytest.raises(ValueError, match=msg): - idx.take(indices, out=indices) - - msg = "the 'mode' parameter is not supported" - with pytest.raises(ValueError, match=msg): - idx.take(indices, mode="clip") - - -class TestGetLoc: - def test_get_loc(self): - # GH 12531 - cidx1 = CategoricalIndex(list("abcde"), categories=list("edabc")) - idx1 = Index(list("abcde")) - assert cidx1.get_loc("a") == idx1.get_loc("a") - assert cidx1.get_loc("e") == idx1.get_loc("e") - - for i in [cidx1, idx1]: - with pytest.raises(KeyError, match="'NOT-EXIST'"): - i.get_loc("NOT-EXIST") - - # non-unique - cidx2 = CategoricalIndex(list("aacded"), categories=list("edabc")) - idx2 = Index(list("aacded")) - - # results in bool array - res = cidx2.get_loc("d") - tm.assert_numpy_array_equal(res, idx2.get_loc("d")) - tm.assert_numpy_array_equal( - res, np.array([False, False, False, True, False, True]) - ) - # unique element results in scalar - res = cidx2.get_loc("e") - assert res == idx2.get_loc("e") - assert res == 4 - - for i in [cidx2, idx2]: - with pytest.raises(KeyError, match="'NOT-EXIST'"): - i.get_loc("NOT-EXIST") - - # non-unique, sliceable - cidx3 = CategoricalIndex(list("aabbb"), categories=list("abc")) - idx3 = Index(list("aabbb")) - - # results in slice - res = cidx3.get_loc("a") - assert res == idx3.get_loc("a") - assert res == slice(0, 2, None) - - res = cidx3.get_loc("b") - assert res == idx3.get_loc("b") - assert res == slice(2, 5, None) - - for i in [cidx3, idx3]: - with pytest.raises(KeyError, match="'c'"): - i.get_loc("c") - - def test_get_loc_unique(self): - cidx = CategoricalIndex(list("abc")) - result = cidx.get_loc("b") - assert result == 1 - - def test_get_loc_monotonic_nonunique(self): - cidx = CategoricalIndex(list("abbc")) - result = cidx.get_loc("b") - expected = slice(1, 3, None) - assert result == expected - - def test_get_loc_nonmonotonic_nonunique(self): - cidx = CategoricalIndex(list("abcb")) - result = cidx.get_loc("b") - expected = np.array([False, True, False, True], dtype=bool) - tm.assert_numpy_array_equal(result, expected) - - def test_get_loc_nan(self): - # GH#41933 - ci = 
CategoricalIndex(["A", "B", np.nan]) - res = ci.get_loc(np.nan) - - assert res == 2 - - -class TestGetIndexer: - def test_get_indexer_base(self): - # Determined by cat ordering. - idx = CategoricalIndex(list("cab"), categories=list("cab")) - expected = np.arange(len(idx), dtype=np.intp) - - actual = idx.get_indexer(idx) - tm.assert_numpy_array_equal(expected, actual) - - with pytest.raises(ValueError, match="Invalid fill method"): - idx.get_indexer(idx, method="invalid") - - def test_get_indexer_requires_unique(self): - ci = CategoricalIndex(list("aabbca"), categories=list("cab"), ordered=False) - oidx = Index(np.array(ci)) - - msg = "Reindexing only valid with uniquely valued Index objects" - - for n in [1, 2, 5, len(ci)]: - finder = oidx[np.random.default_rng(2).integers(0, len(ci), size=n)] - - with pytest.raises(InvalidIndexError, match=msg): - ci.get_indexer(finder) - - # see gh-17323 - # - # Even when indexer is equal to the - # members in the index, we should - # respect duplicates instead of taking - # the fast-track path. - for finder in [list("aabbca"), list("aababca")]: - with pytest.raises(InvalidIndexError, match=msg): - ci.get_indexer(finder) - - def test_get_indexer_non_unique(self): - idx1 = CategoricalIndex(list("aabcde"), categories=list("edabc")) - idx2 = CategoricalIndex(list("abf")) - - for indexer in [idx2, list("abf"), Index(list("abf"))]: - msg = "Reindexing only valid with uniquely valued Index objects" - with pytest.raises(InvalidIndexError, match=msg): - idx1.get_indexer(indexer) - - r1, _ = idx1.get_indexer_non_unique(indexer) - expected = np.array([0, 1, 2, -1], dtype=np.intp) - tm.assert_almost_equal(r1, expected) - - def test_get_indexer_method(self): - idx1 = CategoricalIndex(list("aabcde"), categories=list("edabc")) - idx2 = CategoricalIndex(list("abf")) - - msg = "method pad not yet implemented for CategoricalIndex" - with pytest.raises(NotImplementedError, match=msg): - idx2.get_indexer(idx1, method="pad") - msg = "method backfill not yet implemented for CategoricalIndex" - with pytest.raises(NotImplementedError, match=msg): - idx2.get_indexer(idx1, method="backfill") - - msg = "method nearest not yet implemented for CategoricalIndex" - with pytest.raises(NotImplementedError, match=msg): - idx2.get_indexer(idx1, method="nearest") - - def test_get_indexer_array(self): - arr = np.array( - [Timestamp("1999-12-31 00:00:00"), Timestamp("2000-12-31 00:00:00")], - dtype=object, - ) - cats = [Timestamp("1999-12-31 00:00:00"), Timestamp("2000-12-31 00:00:00")] - ci = CategoricalIndex(cats, categories=cats, ordered=False, dtype="category") - result = ci.get_indexer(arr) - expected = np.array([0, 1], dtype="intp") - tm.assert_numpy_array_equal(result, expected) - - def test_get_indexer_same_categories_same_order(self): - ci = CategoricalIndex(["a", "b"], categories=["a", "b"]) - - result = ci.get_indexer(CategoricalIndex(["b", "b"], categories=["a", "b"])) - expected = np.array([1, 1], dtype="intp") - tm.assert_numpy_array_equal(result, expected) - - def test_get_indexer_same_categories_different_order(self): - # https://github.com/pandas-dev/pandas/issues/19551 - ci = CategoricalIndex(["a", "b"], categories=["a", "b"]) - - result = ci.get_indexer(CategoricalIndex(["b", "b"], categories=["b", "a"])) - expected = np.array([1, 1], dtype="intp") - tm.assert_numpy_array_equal(result, expected) - - def test_get_indexer_nans_in_index_and_target(self): - # GH 45361 - ci = CategoricalIndex([1, 2, np.nan, 3]) - other1 = [2, 3, 4, np.nan] - res1 = ci.get_indexer(other1) - 
expected1 = np.array([1, 3, -1, 2], dtype=np.intp) - tm.assert_numpy_array_equal(res1, expected1) - other2 = [1, 4, 2, 3] - res2 = ci.get_indexer(other2) - expected2 = np.array([0, -1, 1, 3], dtype=np.intp) - tm.assert_numpy_array_equal(res2, expected2) - - -class TestWhere: - def test_where(self, listlike_box): - klass = listlike_box - - i = CategoricalIndex(list("aabbca"), categories=list("cab"), ordered=False) - cond = [True] * len(i) - expected = i - result = i.where(klass(cond)) - tm.assert_index_equal(result, expected) - - cond = [False] + [True] * (len(i) - 1) - expected = CategoricalIndex([np.nan] + i[1:].tolist(), categories=i.categories) - result = i.where(klass(cond)) - tm.assert_index_equal(result, expected) - - def test_where_non_categories(self): - ci = CategoricalIndex(["a", "b", "c", "d"]) - mask = np.array([True, False, True, False]) - - result = ci.where(mask, 2) - expected = Index(["a", 2, "c", 2], dtype=object) - tm.assert_index_equal(result, expected) - - msg = "Cannot setitem on a Categorical with a new category" - with pytest.raises(TypeError, match=msg): - # Test the Categorical method directly - ci._data._where(mask, 2) - - -class TestContains: - def test_contains(self): - ci = CategoricalIndex(list("aabbca"), categories=list("cabdef"), ordered=False) - - assert "a" in ci - assert "z" not in ci - assert "e" not in ci - assert np.nan not in ci - - # assert codes NOT in index - assert 0 not in ci - assert 1 not in ci - - def test_contains_nan(self): - ci = CategoricalIndex(list("aabbca") + [np.nan], categories=list("cabdef")) - assert np.nan in ci - - @pytest.mark.parametrize("unwrap", [True, False]) - def test_contains_na_dtype(self, unwrap): - dti = pd.date_range("2016-01-01", periods=100).insert(0, pd.NaT) - pi = dti.to_period("D") - tdi = dti - dti[-1] - ci = CategoricalIndex(dti) - - obj = ci - if unwrap: - obj = ci._data - - assert np.nan in obj - assert None in obj - assert pd.NaT in obj - assert np.datetime64("NaT") in obj - assert np.timedelta64("NaT") not in obj - - obj2 = CategoricalIndex(tdi) - if unwrap: - obj2 = obj2._data - - assert np.nan in obj2 - assert None in obj2 - assert pd.NaT in obj2 - assert np.datetime64("NaT") not in obj2 - assert np.timedelta64("NaT") in obj2 - - obj3 = CategoricalIndex(pi) - if unwrap: - obj3 = obj3._data - - assert np.nan in obj3 - assert None in obj3 - assert pd.NaT in obj3 - assert np.datetime64("NaT") not in obj3 - assert np.timedelta64("NaT") not in obj3 - - @pytest.mark.parametrize( - "item, expected", - [ - (pd.Interval(0, 1), True), - (1.5, True), - (pd.Interval(0.5, 1.5), False), - ("a", False), - (Timestamp(1), False), - (pd.Timedelta(1), False), - ], - ids=str, - ) - def test_contains_interval(self, item, expected): - # GH 23705 - ci = CategoricalIndex(IntervalIndex.from_breaks(range(3))) - result = item in ci - assert result is expected - - def test_contains_list(self): - # GH#21729 - idx = CategoricalIndex([1, 2, 3]) - - assert "a" not in idx - - with pytest.raises(TypeError, match="unhashable type"): - ["a"] in idx - - with pytest.raises(TypeError, match="unhashable type"): - ["a", "b"] in idx diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tslibs/test_np_datetime.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tslibs/test_np_datetime.py deleted file mode 100644 index 02edf1a09387766d71097ea0baedc2640cfb824b..0000000000000000000000000000000000000000 --- 
a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tslibs/test_np_datetime.py +++ /dev/null @@ -1,222 +0,0 @@ -import numpy as np -import pytest - -from pandas._libs.tslibs.dtypes import NpyDatetimeUnit -from pandas._libs.tslibs.np_datetime import ( - OutOfBoundsDatetime, - OutOfBoundsTimedelta, - astype_overflowsafe, - is_unitless, - py_get_unit_from_dtype, - py_td64_to_tdstruct, -) - -import pandas._testing as tm - - -def test_is_unitless(): - dtype = np.dtype("M8[ns]") - assert not is_unitless(dtype) - - dtype = np.dtype("datetime64") - assert is_unitless(dtype) - - dtype = np.dtype("m8[ns]") - assert not is_unitless(dtype) - - dtype = np.dtype("timedelta64") - assert is_unitless(dtype) - - msg = "dtype must be datetime64 or timedelta64" - with pytest.raises(ValueError, match=msg): - is_unitless(np.dtype(np.int64)) - - msg = "Argument 'dtype' has incorrect type" - with pytest.raises(TypeError, match=msg): - is_unitless("foo") - - -def test_get_unit_from_dtype(): - # datetime64 - assert py_get_unit_from_dtype(np.dtype("M8[Y]")) == NpyDatetimeUnit.NPY_FR_Y.value - assert py_get_unit_from_dtype(np.dtype("M8[M]")) == NpyDatetimeUnit.NPY_FR_M.value - assert py_get_unit_from_dtype(np.dtype("M8[W]")) == NpyDatetimeUnit.NPY_FR_W.value - # B has been deprecated and removed -> no 3 - assert py_get_unit_from_dtype(np.dtype("M8[D]")) == NpyDatetimeUnit.NPY_FR_D.value - assert py_get_unit_from_dtype(np.dtype("M8[h]")) == NpyDatetimeUnit.NPY_FR_h.value - assert py_get_unit_from_dtype(np.dtype("M8[m]")) == NpyDatetimeUnit.NPY_FR_m.value - assert py_get_unit_from_dtype(np.dtype("M8[s]")) == NpyDatetimeUnit.NPY_FR_s.value - assert py_get_unit_from_dtype(np.dtype("M8[ms]")) == NpyDatetimeUnit.NPY_FR_ms.value - assert py_get_unit_from_dtype(np.dtype("M8[us]")) == NpyDatetimeUnit.NPY_FR_us.value - assert py_get_unit_from_dtype(np.dtype("M8[ns]")) == NpyDatetimeUnit.NPY_FR_ns.value - assert py_get_unit_from_dtype(np.dtype("M8[ps]")) == NpyDatetimeUnit.NPY_FR_ps.value - assert py_get_unit_from_dtype(np.dtype("M8[fs]")) == NpyDatetimeUnit.NPY_FR_fs.value - assert py_get_unit_from_dtype(np.dtype("M8[as]")) == NpyDatetimeUnit.NPY_FR_as.value - - # timedelta64 - assert py_get_unit_from_dtype(np.dtype("m8[Y]")) == NpyDatetimeUnit.NPY_FR_Y.value - assert py_get_unit_from_dtype(np.dtype("m8[M]")) == NpyDatetimeUnit.NPY_FR_M.value - assert py_get_unit_from_dtype(np.dtype("m8[W]")) == NpyDatetimeUnit.NPY_FR_W.value - # B has been deprecated and removed -> no 3 - assert py_get_unit_from_dtype(np.dtype("m8[D]")) == NpyDatetimeUnit.NPY_FR_D.value - assert py_get_unit_from_dtype(np.dtype("m8[h]")) == NpyDatetimeUnit.NPY_FR_h.value - assert py_get_unit_from_dtype(np.dtype("m8[m]")) == NpyDatetimeUnit.NPY_FR_m.value - assert py_get_unit_from_dtype(np.dtype("m8[s]")) == NpyDatetimeUnit.NPY_FR_s.value - assert py_get_unit_from_dtype(np.dtype("m8[ms]")) == NpyDatetimeUnit.NPY_FR_ms.value - assert py_get_unit_from_dtype(np.dtype("m8[us]")) == NpyDatetimeUnit.NPY_FR_us.value - assert py_get_unit_from_dtype(np.dtype("m8[ns]")) == NpyDatetimeUnit.NPY_FR_ns.value - assert py_get_unit_from_dtype(np.dtype("m8[ps]")) == NpyDatetimeUnit.NPY_FR_ps.value - assert py_get_unit_from_dtype(np.dtype("m8[fs]")) == NpyDatetimeUnit.NPY_FR_fs.value - assert py_get_unit_from_dtype(np.dtype("m8[as]")) == NpyDatetimeUnit.NPY_FR_as.value - - -def test_td64_to_tdstruct(): - val = 12454636234 # arbitrary value - - res1 = py_td64_to_tdstruct(val, NpyDatetimeUnit.NPY_FR_ns.value) - exp1 = { - "days": 0, - "hrs": 0, - "min": 0, 
- "sec": 12, - "ms": 454, - "us": 636, - "ns": 234, - "seconds": 12, - "microseconds": 454636, - "nanoseconds": 234, - } - assert res1 == exp1 - - res2 = py_td64_to_tdstruct(val, NpyDatetimeUnit.NPY_FR_us.value) - exp2 = { - "days": 0, - "hrs": 3, - "min": 27, - "sec": 34, - "ms": 636, - "us": 234, - "ns": 0, - "seconds": 12454, - "microseconds": 636234, - "nanoseconds": 0, - } - assert res2 == exp2 - - res3 = py_td64_to_tdstruct(val, NpyDatetimeUnit.NPY_FR_ms.value) - exp3 = { - "days": 144, - "hrs": 3, - "min": 37, - "sec": 16, - "ms": 234, - "us": 0, - "ns": 0, - "seconds": 13036, - "microseconds": 234000, - "nanoseconds": 0, - } - assert res3 == exp3 - - # Note this out of bounds for nanosecond Timedelta - res4 = py_td64_to_tdstruct(val, NpyDatetimeUnit.NPY_FR_s.value) - exp4 = { - "days": 144150, - "hrs": 21, - "min": 10, - "sec": 34, - "ms": 0, - "us": 0, - "ns": 0, - "seconds": 76234, - "microseconds": 0, - "nanoseconds": 0, - } - assert res4 == exp4 - - -class TestAstypeOverflowSafe: - def test_pass_non_dt64_array(self): - # check that we raise, not segfault - arr = np.arange(5) - dtype = np.dtype("M8[ns]") - - msg = ( - "astype_overflowsafe values.dtype and dtype must be either " - "both-datetime64 or both-timedelta64" - ) - with pytest.raises(TypeError, match=msg): - astype_overflowsafe(arr, dtype, copy=True) - - with pytest.raises(TypeError, match=msg): - astype_overflowsafe(arr, dtype, copy=False) - - def test_pass_non_dt64_dtype(self): - # check that we raise, not segfault - arr = np.arange(5, dtype="i8").view("M8[D]") - dtype = np.dtype("m8[ns]") - - msg = ( - "astype_overflowsafe values.dtype and dtype must be either " - "both-datetime64 or both-timedelta64" - ) - with pytest.raises(TypeError, match=msg): - astype_overflowsafe(arr, dtype, copy=True) - - with pytest.raises(TypeError, match=msg): - astype_overflowsafe(arr, dtype, copy=False) - - def test_astype_overflowsafe_dt64(self): - dtype = np.dtype("M8[ns]") - - dt = np.datetime64("2262-04-05", "D") - arr = dt + np.arange(10, dtype="m8[D]") - - # arr.astype silently overflows, so this - wrong = arr.astype(dtype) - roundtrip = wrong.astype(arr.dtype) - assert not (wrong == roundtrip).all() - - msg = "Out of bounds nanosecond timestamp" - with pytest.raises(OutOfBoundsDatetime, match=msg): - astype_overflowsafe(arr, dtype) - - # But converting to microseconds is fine, and we match numpy's results. - dtype2 = np.dtype("M8[us]") - result = astype_overflowsafe(arr, dtype2) - expected = arr.astype(dtype2) - tm.assert_numpy_array_equal(result, expected) - - def test_astype_overflowsafe_td64(self): - dtype = np.dtype("m8[ns]") - - dt = np.datetime64("2262-04-05", "D") - arr = dt + np.arange(10, dtype="m8[D]") - arr = arr.view("m8[D]") - - # arr.astype silently overflows, so this - wrong = arr.astype(dtype) - roundtrip = wrong.astype(arr.dtype) - assert not (wrong == roundtrip).all() - - msg = r"Cannot convert 106752 days to timedelta64\[ns\] without overflow" - with pytest.raises(OutOfBoundsTimedelta, match=msg): - astype_overflowsafe(arr, dtype) - - # But converting to microseconds is fine, and we match numpy's results. 
- dtype2 = np.dtype("m8[us]") - result = astype_overflowsafe(arr, dtype2) - expected = arr.astype(dtype2) - tm.assert_numpy_array_equal(result, expected) - - def test_astype_overflowsafe_disallow_rounding(self): - arr = np.array([-1500, 1500], dtype="M8[ns]") - dtype = np.dtype("M8[us]") - - msg = "Cannot losslessly cast '-1500 ns' to us" - with pytest.raises(ValueError, match=msg): - astype_overflowsafe(arr, dtype, round_ok=False) - - result = astype_overflowsafe(arr, dtype, round_ok=True) - expected = arr.astype(dtype) - tm.assert_numpy_array_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/diff.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/diff.py deleted file mode 100644 index 0ab85bfbf32b307f0e7a99058847d941cb35e911..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/diff.py +++ /dev/null @@ -1,168 +0,0 @@ -""" - pygments.lexers.diff - ~~~~~~~~~~~~~~~~~~~~ - - Lexers for diff/patch formats. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -import re - -from pygments.lexer import RegexLexer, include, bygroups -from pygments.token import Text, Comment, Operator, Keyword, Name, Generic, \ - Literal, Whitespace - -__all__ = ['DiffLexer', 'DarcsPatchLexer', 'WDiffLexer'] - - -class DiffLexer(RegexLexer): - """ - Lexer for unified or context-style diffs or patches. - """ - - name = 'Diff' - aliases = ['diff', 'udiff'] - filenames = ['*.diff', '*.patch'] - mimetypes = ['text/x-diff', 'text/x-patch'] - - tokens = { - 'root': [ - (r'( )(.*)(\n)', bygroups(Whitespace, Text, Whitespace)), - (r'(!.*|---)(\n)', bygroups(Generic.Strong, Whitespace)), - (r'((?:< |-).*)(\n)', bygroups(Generic.Deleted, Whitespace)), - (r'((?:> |\+).*)(\n)', bygroups(Generic.Inserted, Whitespace)), - ( - r'(@.*|\d(?:,\d+)?(?:a|c|d)\d+(?:,\d+)?)(\n)', - bygroups(Generic.Subheading, Whitespace), - ), - (r'((?:[Ii]ndex|diff).*)(\n)', bygroups(Generic.Heading, Whitespace)), - (r'(=.*)(\n)', bygroups(Generic.Heading, Whitespace)), - (r'(.*)(\n)', bygroups(Text, Whitespace)), - ] - } - - def analyse_text(text): - if text[:7] == 'Index: ': - return True - if text[:5] == 'diff ': - return True - if text[:4] == '--- ': - return 0.9 - - -class DarcsPatchLexer(RegexLexer): - """ - DarcsPatchLexer is a lexer for the various versions of the darcs patch - format. Examples of this format are derived by commands such as - ``darcs annotate --patch`` and ``darcs send``. - - .. 
versionadded:: 0.10 - """ - - name = 'Darcs Patch' - aliases = ['dpatch'] - filenames = ['*.dpatch', '*.darcspatch'] - - DPATCH_KEYWORDS = ('hunk', 'addfile', 'adddir', 'rmfile', 'rmdir', 'move', - 'replace') - - tokens = { - 'root': [ - (r'<', Operator), - (r'>', Operator), - (r'\{', Operator), - (r'\}', Operator), - (r'(\[)((?:TAG )?)(.*)(\n)(.*)(\*\*)(\d+)(\s?)(\])', - bygroups(Operator, Keyword, Name, Whitespace, Name, Operator, - Literal.Date, Whitespace, Operator)), - (r'(\[)((?:TAG )?)(.*)(\n)(.*)(\*\*)(\d+)(\s?)', - bygroups(Operator, Keyword, Name, Whitespace, Name, Operator, - Literal.Date, Whitespace), 'comment'), - (r'New patches:', Generic.Heading), - (r'Context:', Generic.Heading), - (r'Patch bundle hash:', Generic.Heading), - (r'(\s*)(%s)(.*)(\n)' % '|'.join(DPATCH_KEYWORDS), - bygroups(Whitespace, Keyword, Text, Whitespace)), - (r'\+', Generic.Inserted, "insert"), - (r'-', Generic.Deleted, "delete"), - (r'(.*)(\n)', bygroups(Text, Whitespace)), - ], - 'comment': [ - (r'[^\]].*\n', Comment), - (r'\]', Operator, "#pop"), - ], - 'specialText': [ # darcs add [_CODE_] special operators for clarity - (r'\n', Whitespace, "#pop"), # line-based - (r'\[_[^_]*_]', Operator), - ], - 'insert': [ - include('specialText'), - (r'\[', Generic.Inserted), - (r'[^\n\[]+', Generic.Inserted), - ], - 'delete': [ - include('specialText'), - (r'\[', Generic.Deleted), - (r'[^\n\[]+', Generic.Deleted), - ], - } - - -class WDiffLexer(RegexLexer): - """ - A wdiff lexer. - - Note that: - - * It only works with normal output (without options like ``-l``). - * If the target files contain "[-", "-]", "{+", or "+}", - especially they are unbalanced, the lexer will get confused. - - .. versionadded:: 2.2 - """ - - name = 'WDiff' - url = 'https://www.gnu.org/software/wdiff/' - aliases = ['wdiff'] - filenames = ['*.wdiff'] - mimetypes = [] - - flags = re.MULTILINE | re.DOTALL - - # We can only assume "[-" after "[-" before "-]" is `nested`, - # for instance wdiff to wdiff outputs. We have no way to - # distinct these marker is of wdiff output from original text. - - ins_op = r"\{\+" - ins_cl = r"\+\}" - del_op = r"\[\-" - del_cl = r"\-\]" - normal = r'[^{}[\]+-]+' # for performance - tokens = { - 'root': [ - (ins_op, Generic.Inserted, 'inserted'), - (del_op, Generic.Deleted, 'deleted'), - (normal, Text), - (r'.', Text), - ], - 'inserted': [ - (ins_op, Generic.Inserted, '#push'), - (del_op, Generic.Inserted, '#push'), - (del_cl, Generic.Inserted, '#pop'), - - (ins_cl, Generic.Inserted, '#pop'), - (normal, Generic.Inserted), - (r'.', Generic.Inserted), - ], - 'deleted': [ - (del_op, Generic.Deleted, '#push'), - (ins_op, Generic.Deleted, '#push'), - (ins_cl, Generic.Deleted, '#pop'), - - (del_cl, Generic.Deleted, '#pop'), - (normal, Generic.Deleted), - (r'.', Generic.Deleted), - ], - } diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/dep_util.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/dep_util.py deleted file mode 100644 index 521eb716a5ebbcbc2c59654c4e71c3f0ff1abf26..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/dep_util.py +++ /dev/null @@ -1,25 +0,0 @@ -from distutils.dep_util import newer_group - - -# yes, this is was almost entirely copy-pasted from -# 'newer_pairwise()', this is just another convenience -# function. 
-def newer_pairwise_group(sources_groups, targets): - """Walk both arguments in parallel, testing if each source group is newer - than its corresponding target. Returns a pair of lists (sources_groups, - targets) where sources is newer than target, according to the semantics - of 'newer_group()'. - """ - if len(sources_groups) != len(targets): - raise ValueError( - "'sources_group' and 'targets' must be the same length") - - # build a pair of lists (sources_groups, targets) where source is newer - n_sources = [] - n_targets = [] - for i in range(len(sources_groups)): - if newer_group(sources_groups[i], targets[i]): - n_sources.append(sources_groups[i]) - n_targets.append(targets[i]) - - return n_sources, n_targets diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Adminpaq 2012 Activador Crack PATCHED.md b/spaces/quidiaMuxgu/Expedit-SAM/Adminpaq 2012 Activador Crack PATCHED.md deleted file mode 100644 index 27b4302a9ac333c229e5f3059603052bbd54edcd..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Adminpaq 2012 Activador Crack PATCHED.md +++ /dev/null @@ -1,11 +0,0 @@ -

      Adminpaq 2012 Activador Crack


      Download Zip ……… https://geags.com/2uCpXe



- -Adminpaq 2012 activador crack. DOWNLOAD: activador adminpaq 2012. Related links: Name: AdminPE Activator. Developer: AdminPAQ. Year: 2012. Platform: Windows XP/Vista/7. Interface language: Russian. Crack: not required. -Activation instructions: copy the Activator AdminPE.exe file to the Windows folder and run it as Administrator. -In the "Activation Status" window, click "Activate". -A message confirming the successful activation of AdminPE will then appear in the program's main "Activation Log" window. -Download Adminpaq 2012 activador crack. -AdminPAQ AdminPE crack
      -
      -
      -

      diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Bioquimica De Richard A Harvey 5ta Edicion Pdf Gratis.md b/spaces/quidiaMuxgu/Expedit-SAM/Bioquimica De Richard A Harvey 5ta Edicion Pdf Gratis.md deleted file mode 100644 index e06bcf05d70f7d2b85f7b53fdd16e1bfc6b130be..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Bioquimica De Richard A Harvey 5ta Edicion Pdf Gratis.md +++ /dev/null @@ -1,22 +0,0 @@ -
      -

Bioquímica by Richard A. Harvey and Denise R. Ferrier: a reference work for students and professionals

      -

Biochemistry is the science that studies the composition, structure, and reactions of the molecules found in living organisms. It is a fundamental discipline for understanding biological processes, diseases, and potential therapies.

      -

      bioquimica de richard a harvey 5ta edicion pdf gratis


      Download File ✓✓✓ https://geags.com/2uCspQ



      -

Among the most widely used and recognized biochemistry textbooks is the one by Richard A. Harvey and Denise R. Ferrier, now in its fifth Spanish-language edition. It is a work that combines scientific rigor, clear exposition, and a didactic approach that makes learning easier.

      -

The book is divided into four sections covering the main topics of biochemistry: protein structure and function, intermediary metabolism, lipid metabolism, and nitrogen metabolism. Each section consists of several units that present the key concepts, molecular mechanisms, and clinical applications of each topic.

      -

The book offers numerous teaching resources that help students consolidate their knowledge and assess their progress. These include:

      -
        -
• Clinical-information boxes and case studies that connect biochemistry to medicine.
• -
• Full-color illustrations that make chemical structures and reactions easier to understand.
• -
• End-of-unit questions for review and self-assessment.
• -
• End-of-section summaries that distill the most important ideas.
      • -
      -

The book also includes online access to supplementary material, such as animations, videos, interactive questions, and a bilingual glossary.

      -

Bioquímica by Richard A. Harvey and Denise R. Ferrier is an essential book for health science students and professionals who want to build a solid, up-to-date foundation in biochemistry.

      -

      -

Source: adapted from web results [^1^] [^2^] [^3^] [^4^]

Below are some additional paragraphs that could form part of the article:

      -

Biochemistry has developed enormously in recent decades, thanks to advances in experimental techniques and in bioinformatics. This progress has revealed new aspects of life at the molecular level, such as the structure of DNA, the genetic code, gene expression, epigenetic regulation, cellular metabolism, molecular signaling, interactions between biomolecules, and molecular evolution.

      -

Biochemistry is highly relevant to medicine, since it makes it possible to understand the molecular mechanisms of disease and to design potential treatments. Examples of medical applications of biochemistry include genetic diagnosis, gene therapy, protein engineering, pharmaceutical biotechnology, nanomedicine, and personalized medicine.

      -

Biochemistry also has implications for other areas of science and technology, such as molecular biology, cell biology, genetics, microbiology, immunology, neuroscience, ecology, agriculture, nutrition, organic chemistry, and physical chemistry. It is an interdisciplinary science that requires broad and diverse training.

      -
      -
      \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Damage Inc. Pacific Squadron WWII Indir (Full PC).md b/spaces/quidiaMuxgu/Expedit-SAM/Damage Inc. Pacific Squadron WWII Indir (Full PC).md deleted file mode 100644 index 7067ac26a387ef9d0b3e468daaff9dbd61c1254f..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Damage Inc. Pacific Squadron WWII Indir (Full PC).md +++ /dev/null @@ -1,6 +0,0 @@ -

      Damage Inc. Pacific Squadron WWII Indir (Full PC)


      Download Zip --->>> https://geags.com/2uCsLR



      -
-damage inc. pacific squadron wwii pc - All Latest Cheats Codes Free Games, Pc ... Download full free pc games, highly compressed and torrent games for this ...
      -
      -
      -

      diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Evil Dead Movie In Tamil Free Download ((NEW)).md b/spaces/quidiaMuxgu/Expedit-SAM/Evil Dead Movie In Tamil Free Download ((NEW)).md deleted file mode 100644 index 9cb6dd4a187e2653356c059d22258f3ccfcc8496..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Evil Dead Movie In Tamil Free Download ((NEW)).md +++ /dev/null @@ -1,6 +0,0 @@ -

      evil dead movie in tamil free download


      Download Filehttps://geags.com/2uCqrQ



- -
      -
      -
      -

      diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Filou Nc 12 Crack.md b/spaces/quidiaMuxgu/Expedit-SAM/Filou Nc 12 Crack.md deleted file mode 100644 index c6274547cd530a03ce095de8ecf68ff3e76b6758..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Filou Nc 12 Crack.md +++ /dev/null @@ -1,6 +0,0 @@ -

      filou nc 12 crack


      DOWNLOAD ::: https://geags.com/2uCrDX



      -
-Suite.v9.3 Oasys Suite v10.2 Okino_Products_Suite_v4.12 Okino PolyTrans v4.3.8 ... Pro.7 MacOSX FILOU-NC.v10.8.005 Filter Wiz Pro v4.26 ...
      -
      -
      -

      diff --git a/spaces/quidiaMuxgu/Expedit-SAM/MOTU BPM 1.5-torrent.20.md b/spaces/quidiaMuxgu/Expedit-SAM/MOTU BPM 1.5-torrent.20.md deleted file mode 100644 index c33116e3e24d4add56ee40df9698b4acbbd3525a..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/MOTU BPM 1.5-torrent.20.md +++ /dev/null @@ -1,6 +0,0 @@ -

      MOTU BPM 1.5-torrent.20


      Downloadhttps://geags.com/2uCqyT



      -
-
      -
      -
      -

      diff --git a/spaces/qwieug123467/Linaqruf-anything-v3.0/README.md b/spaces/qwieug123467/Linaqruf-anything-v3.0/README.md deleted file mode 100644 index c5f56337e16c1edb91a62dd61575eb359cdbcf92..0000000000000000000000000000000000000000 --- a/spaces/qwieug123467/Linaqruf-anything-v3.0/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Linaqruf Anything V3.0 -emoji: 👀 -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 3.13.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Astral Neverwinter Bot Cracked ArMa The Best Way to Level Up and Dominate Neverwinter Online.md b/spaces/raedeXanto/academic-chatgpt-beta/Astral Neverwinter Bot Cracked ArMa The Best Way to Level Up and Dominate Neverwinter Online.md deleted file mode 100644 index fcb2d471f547ec0bbca8e107fdf1b8088934c9e8..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Astral Neverwinter Bot Cracked ArMa The Best Way to Level Up and Dominate Neverwinter Online.md +++ /dev/null @@ -1,178 +0,0 @@ - -

      Astral Neverwinter Bot Cracked ArMa: How to Get It and Use It

      -

      If you are a fan of Neverwinter Online, you might have heard of Astral Neverwinter Bot, a powerful tool that automates various tasks in the game. But did you know that there is a way to get it for free, thanks to a crack by ArMa? In this article, we will tell you everything you need to know about Astral Neverwinter Bot Cracked ArMa, including what it is, how to get it, and how to use it. Read on and discover how you can take your gaming experience to the next level!

      -

      Astral Neverwinter Bot Cracked ArMa


      DOWNLOADhttps://tinourl.com/2uL4LT



      -

      What is Astral Neverwinter Bot?

      -

Astral Neverwinter Bot is a software tool that automates various aspects of Neverwinter Online, such as questing, farming, crafting, fishing, and refining. It can also perform complex actions such as combat rotations, looting, selling, repairing, and using potions. With Astral Neverwinter Bot, you can save time and effort while enjoying the game at your own pace.

      -

      Features and Benefits of Astral Neverwinter Bot

      -

      Some of the features and benefits of Astral Neverwinter Bot are:

      -
        -
      • It supports all classes and races in the game.
      • -
      • It has a user-friendly interface that allows you to customize your settings and preferences.
      • -
      • It has a smart pathfinding system that avoids obstacles and enemies.
      • -
      • It has a built-in anti-afk system that prevents you from being kicked out of the game.
      • -
      • It has a stealth mode that hides your botting activity from other players and GMs.
      • -
      • It has a premium mode that unlocks additional features such as PvP botting, dungeon botting, profession botting, and more.
      • -
      -

      With Astral Neverwinter Bot, you can enjoy the game without having to worry about grinding, leveling, or farming. You can also earn more astral diamonds, gold, items, and rewards while playing. You can even use it to boost your friends or guild members in the game.

      -

      How to Download and Install Astral Neverwinter Bot

      -

      To download and install Astral Neverwinter Bot, you need to follow these steps:

      -
        -
      1. Go to the official website of Astral Neverwinter Bot at https://www.neverwinter-bot.com/.
      2. -
      3. Create an account and verify your email address.
      4. -
      5. Log in to your account and go to the download page.
      6. -
      7. Download the latest version of Astral Neverwinter Bot for your operating system (Windows or Linux).
      8. -
      9. Extract the zip file to a folder of your choice.
      10. -
      11. Run the launcher.exe file as administrator.
      12. -
      13. Enter your username and password and click on login.
      14. -
      15. Select your server and click on start.
      16. -
      17. Astral Neverwinter Bot will launch and connect to your game client.
      18. -
      -

      Congratulations! You have successfully downloaded and installed Astral Neverwinter Bot. Now you can start using it in the game.

      -


      -

      What is ArMa?

      -

      ArMa is a hacker who specializes in cracking various bots and cheats for online games. He is known for his skills and generosity in sharing his cracks with the gaming community. He has cracked many popular bots such as WoW Glider, Honorbuddy, Demonbuddy, Rebornbuddy, Exiledbot, Pokefarmer, and more. He has also cracked some cheats such as Aimbot, Wallhack, ESP, Speedhack, No Recoil, No Spread, and more.

      -

      How ArMa Cracked Astral Neverwinter Bot

      -

      ArMa cracked Astral Neverwinter Bot by reverse engineering its code and bypassing its protection mechanisms. He managed to find and exploit several vulnerabilities in the bot's encryption, authentication, licensing, and anti-debugging systems. He also modified some of the bot's functions to improve its performance and stability. He then released his crack for free on his website at https://arma-project.ru/.

      -

      How to Get ArMa's Crack for Astral Neverwinter Bot

      -

      To get ArMa's crack for Astral Neverwinter Bot, you need to follow these steps:

      -
        -
      1. Go to ArMa's website at https://arma-project.ru/.
      2. -
      3. Create an account and verify your email address.
      4. -
      5. Log in to your account and go to the download page.
      6. -
      7. Download the latest version of ArMa's crack for Astral Neverwinter Bot.
      8. -
      9. Extract the zip file to the same folder where you installed Astral Neverwinter Bot.
      10. -
      11. Replace the original launcher.exe file with the cracked one.
      12. -
      13. Run the cracked launcher.exe file as administrator.
      14. -
      15. You will see a message saying "Cracked by ArMa" on the login screen.
      16. -
      17. You can now use any username and password to log in.
      18. -
      -

      Congratulations! You have successfully obtained ArMa's crack for Astral Neverwinter Bot. Now you can use it for free without having to pay for a subscription or a premium mode.

      -

      How to Use Astral Neverwinter Bot Cracked ArMa

      -

      To use Astral Neverwinter Bot Cracked ArMa, you need to follow these steps:

      -

      How to Configure and Run Astral Neverwinter Bot

      -
        -
      1. After logging in with the cracked launcher.exe file, you will see the main interface of Astral Neverwinter Bot.
      2. -
      3. Select your character from the drop-down menu on the top left corner.
      4. -
      5. Select your profile from the drop-down menu on the top right corner. A profile is a set of settings that determines how your bot will behave in the game. You can choose from predefined profiles or create your own custom ones.
      6. -
      7. If you want to create or edit a profile, click on the profile editor button on the bottom right corner. You will see a new window where you can adjust various parameters such as movement speed, combat strategy, looting options, inventory management options, etc. You can also add or remove tasks from your profile such as quests, farming locations, crafting recipes, fishing spots, refining methods, etc. You can save your profile by clicking on the save button on the top left corner. You can load your profile by clicking on the load button on the top right corner. You can close the profile editor by clicking on the X button on the top right corner.
      8. -
      9. If you want to use a predefined profile, you can browse through the available ones by clicking on the browse button on the bottom left corner. You will see a new window where you can search for profiles by name, category, rating, or author. You can also sort them by date, popularity, or relevance. You can download a profile by clicking on the download button next to it. You can rate a profile by clicking on the stars next to it. You can close the browse window by clicking on the X button on the top right corner.
      10. -
      11. After selecting or creating your profile, you can start the bot by clicking on the start button on the bottom center. You will see a message saying "Bot started" on the status bar. You can stop the bot by clicking on the stop button next to it. You will see a message saying "Bot stopped" on the status bar. You can pause the bot by clicking on the pause button next to it. You will see a message saying "Bot paused" on the status bar. You can resume the bot by clicking on the resume button next to it. You will see Bot resumed" on the status bar.
      12. -
      13. You can also control the bot using hotkeys. The default hotkeys are F1 for start, F2 for stop, F3 for pause, and F4 for resume. You can change the hotkeys by clicking on the settings button on the top right corner. You will see a new window where you can assign different keys for different functions. You can close the settings window by clicking on the X button on the top right corner.
      14. -
      -

      That's it! You have successfully configured and run Astral Neverwinter Bot. Now you can sit back and relax while the bot does all the work for you.

      -

      How to Avoid Detection and Bans from Neverwinter Online

      -

      While using Astral Neverwinter Bot Cracked ArMa, you need to be careful and avoid detection and bans from Neverwinter Online. Here are some tips and tricks to help you stay safe:

      -
        -
      • Do not use the bot for too long or too often. Take breaks and play manually from time to time.
      • -
      • Do not use the bot in crowded or public areas. Choose secluded or hidden spots for your botting activities.
      • -
      • Do not use the bot in PvP or dungeons. These modes require human interaction and coordination, and using a bot will make you stand out and attract attention.
      • -
      • Do not use the bot with unrealistic settings or profiles. For example, do not set your movement speed too high, do not use combat skills that are not available for your class or level, do not loot items that are not appropriate for your character, etc.
      • -
      • Do not brag or advertise about using the bot in chat or forums. Keep a low profile and do not draw attention to yourself.
      • -
      • Do not share your account or your bot with anyone else. This will increase the risk of getting reported or banned.
      • -
      • Do not use the same username and password for your game account and your bot account. Use different and unique credentials for each one.
      • -
      • Do not use outdated or cracked versions of the bot. Always update to the latest version of Astral Neverwinter Bot Cracked ArMa from ArMa's website.
      • -
      -

      By following these tips and tricks, you can reduce the chances of getting detected and banned from Neverwinter Online while using Astral Neverwinter Bot Cracked ArMa.

      -

      Tips and Tricks for Using Astral Neverwinter Bot Cracked ArMa

      -

      Besides avoiding detection and bans, there are some other tips and tricks that can help you get the most out of Astral Neverwinter Bot Cracked ArMa. Here are some of them:

      -
        -
      • You can use multiple instances of the bot on different computers or virtual machines. This way, you can run multiple characters at the same time and increase your productivity and efficiency.
      • -
      • You can use a VPN or a proxy to hide your IP address and location from Neverwinter Online. This way, you can avoid IP bans and geo-restrictions.
      • -
      • You can use a sandbox or a virtual machine to isolate your bot from your main system. This way, you can protect your computer from viruses, malware, or spyware that might come with the bot or the crack.
      • -
      • You can use a backup tool to save your settings and profiles. This way, you can restore them in case of data loss or corruption.
      • -
      • You can use a forum or a community to get support and feedback from other users of Astral Neverwinter Bot Cracked ArMa. You can also share your own experiences and tips with them.
      • -
      -

      By using these tips and tricks, you can enhance your experience and performance while using Astral Neverwinter Bot Cracked ArMa.

      -

      Conclusion

      -

      Summary of the Main Points

      -

      In this article, we have covered everything you need to know about Astral Neverwinter Bot Cracked ArMa, including:

      -
        -
      • What is Astral Neverwinter Bot and what are its features and benefits?
      • -
      • What is ArMa and how did he crack Astral Neverwinter Bot?
      • -
      • How to get ArMa's crack for Astral Neverwinter Bot?
      • -
      • How to configure and run Astral Neverwinter Bot?
      • -
      • How to avoid detection and bans from Neverwinter Online?
      • -
      • Tips and tricks for using Astral Neverwinter Bot Cracked ArMa.
      • -
      -

      We hope that this article has been informative and helpful for you. If you have any questions or comments, feel free to leave them below.

      -

      Call to Action for the Readers

      -

      If you are interested in trying out Astral Neverwinter Bot Cracked ArMa, we have good news for you. You can download it for free from ArMa's website at https://arma-project.ru/. All you need is an account and a valid email address. You can also check out his other cracks for various bots and cheats for online games.

      -

      However, we must warn you that using bots and cheats in online games is against their terms of service and may result in account suspension or termination. Therefore, we advise you to use them at your own risk and discretion. We are not responsible for any consequences that may arise from using them.

      -

If you are looking for a legit and safe way to play Neverwinter Online without bots or cheats, we recommend checking out our partner site at https://www.mmorpg.com/neverwinter. There you can find guides, reviews, news, videos, forums, and more about this amazing game. You can also join their community of players who share your passion and enthusiasm for Neverwinter Online.

      -

      So what are you waiting for? Go ahead and download Astral Neverwinter Bot Cracked ArMa today and enjoy the game like never before! Or visit our partner site at https://www.mmorpg.com/neverwinter and discover everything there is to know about Neverwinter Online!

      -

      FAQs

      -

      Here are some frequently asked questions about Astral Neverwinter Bot Cracked ArMa:

      -
        -
      1. What is Neverwinter Online?
      2. -

        Neverwinter Online is a free-to-play massively multiplayer online role-playing game (MMORPG) based on the Dungeons & Dragons fantasy franchise. It was developed by Cryptic Studios and published by Perfect World Entertainment in 2013. It is available for Windows, PlayStation 4, and Xbox One platforms. It features an immersive story, dynamic combat, customizable characters, rich lore, and a vibrant community. It has received positive reviews and awards from critics and players alike. It has over 18 million registered users as of 2019. You can learn more about it at https://www.arcgames.com/en/games/neverwinter.

        -
      3. Is Astral Neverwinter Bot legal?
      4. -

No, Astral Neverwinter Bot is not legal. It is third-party software that violates the terms of service of Neverwinter Online. Using it may result in account suspension or termination. Therefore, we advise you to use it at your own risk and discretion. We are not responsible for any consequences that may arise from using it.

        -
      5. Is ArMa's crack safe?
      6. -

        We cannot guarantee that ArMa's crack is safe. It may contain viruses, malware, or spyware that could harm your computer or compromise your personal information. Therefore, we advise you to use it at your own risk and discretion. We recommend you to use a sandbox or a virtual machine to isolate it from your main system. We also recommend you to use a VPN or a proxy to hide your IP address and location from Neverwinter Online. We are not responsible for any consequences that may arise from using it.

        -
      7. How do I update Astral Neverwinter Bot Cracked ArMa?
      8. -

To update Astral Neverwinter Bot Cracked ArMa, you need to visit ArMa's website at https://arma-project.ru/. There you can find the latest version of his crack for Astral Neverwinter Bot, which you download and install over your existing one. You also need to check the official website of Astral Neverwinter Bot at https://www.neverwinter-bot.com/ for the latest version of the bot itself, and install it over your existing one as well. Make sure that both versions are compatible with each other and with the current version of Neverwinter Online, and update both the bot and the crack regularly to avoid errors and issues.

        -
      9. How do I contact ArMa or Astral Neverwinter Bot?
      10. -

        To contact ArMa, you can visit his website at https://arma-project.ru/. There you can find his email address, his Discord server, his Telegram channel, and his VK group. You can also leave a comment on his blog or forum posts. He is usually friendly and helpful, but he may not respond to every message or request.

        -

        To contact Astral Neverwinter Bot, you can visit their website at https://www.neverwinter-bot.com/. There you can find their email address, their Discord server, their Facebook page, and their Twitter account. You can also leave a comment on their blog or forum posts. They are usually professional and supportive, but they may not tolerate or assist users of cracked versions of their bot.

        -
      11. Where can I find more information or support for Astral Neverwinter Bot Cracked ArMa?
      12. -

        If you need more information or support for Astral Neverwinter Bot Cracked ArMa, you can try the following sources:

        -
          -
        • You can read the documentation and the FAQ on the official website of Astral Neverwinter Bot at https://www.neverwinter-bot.com/. There you can find detailed instructions and explanations on how to use the bot and its features.
        • -
        • You can watch the videos and tutorials on the official YouTube channel of Astral Neverwinter Bot at https://www.youtube.com/channel/UC0yQ6Z7J4vY0Q6Z7J4vY0Q. There you can see the bot in action and learn some tips and tricks on how to optimize it.
        • -
        • You can join the community and the discussion on the official forum of Astral Neverwinter Bot at https://www.neverwinter-bot.com/forums/. There you can interact with other users and developers of the bot and share your feedback and suggestions.
        • -
        • You can also join the community and the discussion on ArMa's website at https://arma-project.ru/. There you can interact with other users and fans of ArMa's cracks and share your experiences and problems.
        • -
        -

        These sources may provide you with some useful information or support for Astral Neverwinter Bot Cracked ArMa. However, they may not cover everything or answer all your questions. Therefore, you may need to do some research or experimentation on your own to find out more.

        -

        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Download Ebook Purpose Driven Life Bahasa Indonesia Inggris The Bestselling Book that Changed Millions of Lives.md b/spaces/raedeXanto/academic-chatgpt-beta/Download Ebook Purpose Driven Life Bahasa Indonesia Inggris The Bestselling Book that Changed Millions of Lives.md deleted file mode 100644 index ee84ef3105a5e12b71314e71c9102ac0b3dedb09..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Download Ebook Purpose Driven Life Bahasa Indonesia Inggris The Bestselling Book that Changed Millions of Lives.md +++ /dev/null @@ -1,179 +0,0 @@ - -

        Download Ebook Purpose Driven Life Bahasa Indonesia Inggris

        -

        Have you ever wondered what your purpose in life is? Do you feel like you are living a meaningless and aimless existence? If you answered yes to these questions, then you might want to read Purpose Driven Life, a bestselling book by Rick Warren that has transformed millions of lives around the world. In this article, we will tell you everything you need to know about this book and how you can download it as an ebook in both Indonesian and English languages.

        -

        download ebook purpose driven life bahasa indonesia inggris


        DOWNLOADhttps://tinourl.com/2uL13H



        -

        What is Purpose Driven Life?

        -

        Purpose Driven Life is a Christian devotional book that was published in 2002 by Rick Warren, a pastor and founder of Saddleback Church in California. The book is based on Warren's 40-day spiritual journey program that he developed for his congregation. The book has sold over 50 million copies worldwide and has been translated into more than 80 languages.

        -

        A brief introduction to the book and its author

        -

        Rick Warren is one of the most influential pastors and authors in the world. He has been named as one of the "100 Most Influential People in the World" by Time magazine and one of the "15 World Leaders Who Mattered Most in 2004" by Newsweek. He is also a global strategist, philanthropist, and humanitarian who has initiated various projects to fight poverty, disease, illiteracy, and injustice.

        -

        Warren wrote Purpose Driven Life as a response to his own personal crisis. He said that he was feeling empty and restless despite his success and achievements. He realized that he needed to find his true purpose in life, not just his goals and ambitions. He decided to share his insights and discoveries with others who might be going through the same struggle.

        -

        -

        The main themes and messages of the book

        -

        The book is divided into 40 chapters, each corresponding to a day of the program. The chapters are grouped into six sections that cover the following topics:

        -
          -
        • What on Earth Am I Here For?
        • -
        • Purpose #1: You Were Planned for God's Pleasure
        • -
        • Purpose #2: You Were Formed for God's Family
        • -
        • Purpose #3: You Were Created to Become Like Christ
        • -
        • Purpose #4: You Were Shaped for Serving God
        • -
        • Purpose #5: You Were Made for a Mission
        • -
        -

        The book teaches that God has a specific plan and purpose for each person's life, and that finding and fulfilling that purpose is the key to happiness and fulfillment. The book also emphasizes that life is not about oneself, but about God and others. The book challenges readers to surrender their lives to God, worship Him, join His family, grow in His likeness, serve Him, and share His love with others.

        -

        How the book can help you find your purpose and live a fulfilling life

        -

        Purpose Driven Life can help you find your purpose and live a fulfilling life by:

        -
          -
        • Giving you a clear vision of God's plan and will for your life
        • -
        • Helping you discover your unique gifts, talents, passions, and personality
        • -
        • Guiding you to align your goals and actions with God's purposes
        • -
        • Inspiring you to live a life of worship, fellowship, discipleship, ministry, and evangelism
        • -
        • Motivating you to make a positive difference in the world with your skills and resources
        • -
        • Encouraging you to trust God's promises and power in every situation
        • -
        • Providing you with practical tools and tips to apply the principles in your daily life
        • -
        -

        Why Download Ebook Purpose Driven Life Bahasa Indonesia Inggris?

        -

        If you are interested in reading Purpose Driven Life, you might want to consider downloading it as an ebook in both Indonesian and English languages. There are many benefits of reading ebooks over physical books, such as:

        -

        The benefits of reading ebooks over physical books

        -

        Some of the benefits of reading ebooks over physical books are:

        -
          -
        • Ebooks are more convenient and accessible. You can download them instantly from anywhere with an internet connection. You can also store thousands of ebooks on your device without taking up much space.
        • -
        • Ebooks are more affordable and eco-friendly. You can save money by buying ebooks at lower prices or even getting them for free from some sources. You can also reduce paper waste and environmental impact by reading ebooks instead of printed books.
        • -
        • Ebooks are more customizable and interactive. You can adjust the font size, brightness, color, orientation, etc. according to your preference. You can also use features like bookmarks, highlights, notes, dictionary, search, etc. to enhance your reading experience. You can also access multimedia content like audio, video, images, links, etc. that might be embedded in some ebooks.
        • -
-

A step-by-step guide on how to download the ebook from different websites and apps

-
      13. Amazon Kindle Store
          -
        • Go to https://www.amazon.com/Purpose-Driven-Life-What-Earth-ebook/dp/B008EGV4BQ on your browser.
        • -
        • Click on "Buy now with 1-Click" or "Read for Free" if you have a Kindle Unlimited subscription.
        • -
        • Sign in to your Amazon account or create one if you don't have one.
        • -
        • Select your preferred device or app to deliver the ebook.
        • -
        • Open your Kindle app or device and sync your library to download the ebook.
        • -
      14. -
      15. Google Play Books
      16. -
      17. PDF Drive
          -
        • Go to https://www.pdfdrive.com/search?q=purpose+driven+life on your browser.
        • -
        • Browse through the search results and click on the ebook that matches your language and format preference.
        • -
        • Click on "Download (PDF)" or "Download (EPUB)" depending on the file type.
        • -
        • Wait for the download to finish and save the ebook file on your device.
        • -
      18. -
      19. Ebook Indonesia
          -
        • Go to https://ebookindonesia.id/ebook/the-purpose-driven-life/ on your browser.
        • -
        • Click on "Register" or "Login" if you already have an account.
        • -
        • Fill in the required information and verify your email address.
        • -
        • Go back to the ebook page and click on "Download Ebook".
        • -
        • Select the ebook format (PDF or EPUB) and click on "Download".
        • -
        • Save the ebook file on your device.
        • -
      20. -
      -

      A comparison of the quality and features of different ebook formats and versions

      -

      You might be wondering which ebook format and version is best for you. There are two main types of ebook formats: PDF and EPUB. Each has its own advantages and disadvantages. Here is a comparison of them:

- Format | Main advantages | Main drawbacks
PDF | Keeps the original page layout, fonts, and images; opens on almost any device; good for printing | Text does not reflow on small screens; font size is harder to adjust; files are usually larger
EPUB | Text reflows to fit any screen size; font size and style are easy to adjust; files are usually smaller | Does not preserve a fixed page layout; needs a compatible reader app; page numbers can vary between devices

      A list of tips and tricks to enhance your reading experience and comprehension

      -

      Now that you have downloaded Purpose Driven Life as an ebook in both Indonesian and English languages, you might want to make the most out of your reading experience and comprehension. Here are some tips and tricks that can help you:

      -
        -
      • Set a reading schedule and stick to it. The book is designed to be read in 40 days, one chapter per day. You can follow this plan or create your own based on your availability and preference.
      • -
      • Read the book in both languages alternately or simultaneously. You can read one chapter in Indonesian and then the same chapter in English, or vice versa. You can also read both versions side by side or on different devices.
      • -
• Use a dictionary or translator app to look up unfamiliar words or phrases. You can also use online tools like Google Translate or DeepL to translate whole sentences or paragraphs (see the short sketch after this list).
      • -
      • Take notes and write summaries of each chapter. You can use a notebook, a word processor, or an app like Evernote to record your thoughts and reflections on each chapter. You can also write summaries of each chapter in both languages to practice your writing skills.
      • -
      • Discuss the book with others who are reading it or have read it. You can join online forums, groups, or communities where you can share your insights and questions with other readers. You can also find a reading partner or a mentor who can help you understand and apply the book better.
      • -
      -
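As a rough illustration of the DeepL option mentioned in the list above, here is a minimal Python sketch using the official deepl client library (pip install deepl). The auth key is a placeholder you would replace with your own, and the sample sentence and its English rendering are only illustrative:

    import deepl  # official DeepL API client: pip install deepl

    # Placeholder key; create a real one at https://www.deepl.com/pro-api
    translator = deepl.Translator("YOUR_DEEPL_AUTH_KEY")

    # Translate an Indonesian phrase from your notes into English.
    result = translator.translate_text(
        "Hidup yang digerakkan oleh tujuan.",
        source_lang="ID",
        target_lang="EN-US",
    )
    print(result.text)  # roughly: "A purpose-driven life."

For single words or quick checks, the Google Translate and DeepL websites are usually faster; a script like this is mainly useful if you want to batch-translate many saved notes at once.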

      Conclusion

      -

      In conclusion, Purpose Driven Life is a book that can help you discover and fulfill your God-given purpose in life. It is a book that has changed millions of lives around the world and can change yours too. You can download it as an ebook in both Indonesian and English languages from various sources and platforms for free or at a low cost. You can also use different ebook formats and versions to suit your preferences and needs. You can also follow some tips and tricks to enhance your reading experience and comprehension. We hope that this article has been helpful and informative for you. We encourage you to download Purpose Driven Life as an ebook in both Indonesian and English languages today and start your journey of finding your purpose.

      -

      A call to action for the readers to download the ebook and start their journey of finding their purpose

      -

      If you are ready to download Purpose Driven Life as an ebook in both Indonesian and English languages, you can click on any of the links below to get started:

      - -

      If you are not sure which source or platform to choose, you can refer to our comparison table above to see the pros and cons of each option.

      -

If you are not sure how to download Purpose Driven Life as an ebook in both Indonesian and English languages, you can follow our step-by-step guide above, which walks you through different websites and apps.

      -

      If you have not yet started reading Purpose Driven Life, you can follow our reading schedule and tips above to make the most out of your reading experience and comprehension.

      -

      Whatever stage you are in, we hope that you will enjoy reading Purpose Driven Life as an ebook in both Indonesian and English languages and that it will help you find your purpose and live a fulfilling life.

      -

      FAQs

      -

      Here are some frequently asked questions and answers about Purpose Driven Life and how to download it as an ebook in both Indonesian and English languages:

1. What is the difference between Purpose Driven Life and The Purpose Driven Church?

Purpose Driven Life is a book for individuals who want to find their personal purpose in life. The Purpose Driven Church is a book for pastors and church leaders who want to build healthy and effective churches based on God's purposes.

2. Is Purpose Driven Life a Bible study or a devotional?

It is both. It is a Bible study because it is based on the teachings and principles of the Bible, and a devotional because it helps readers apply the Bible to their daily lives and grow closer to God.

3. Can I read Purpose Driven Life without being a Christian?

Yes. The book is written for anyone who wants to find their purpose in life, regardless of religious background or beliefs. However, it does present a Christian perspective on life and purpose, and it invites readers to consider accepting Jesus Christ as their Lord and Savior.

4. Can I read Purpose Driven Life more than once?

Yes. In fact, the author recommends reading it at least once every year; each time you read it, you will discover new insights and applications that help you grow in your purpose.

5. Can I share Purpose Driven Life with others?

Yes. You can share your ebook with friends or family members who have compatible devices or apps, share your thoughts and reflections on the book through social media, blogs, or podcasts, and join or start a small group or class to discuss it with others who are reading it or have read it.

      \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Download Mr Bechara 2 Movie 1080p.md b/spaces/raedeXanto/academic-chatgpt-beta/Download Mr Bechara 2 Movie 1080p.md deleted file mode 100644 index 5ace71f7e0882b503980f65f4324c99bbb311be3..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Download Mr Bechara 2 Movie 1080p.md +++ /dev/null @@ -1,24 +0,0 @@ - -

      Download Mr Bechara 2 Movie 1080p: A Sequel to the 1996 Romantic Comedy


      If you are a fan of the 1996 Hindi-language romantic comedy film Mr. Bechara, starring Anil Kapoor, Sridevi and Nagarjuna Akkineni, you will be delighted to know that a sequel is in the works. Mr. Bechara 2 is expected to release in 2023, and will feature the same lead actors reprising their roles as Anand Verma, Asha/Anita and Ajay.


      download Mr Bechara 2 movie 1080p


      Download File ❤❤❤ https://tinourl.com/2uL45L




In case you are not familiar with the plot of Mr. Bechara, here is a brief summary: Anand Verma is a shy widower and a single father to his infant son. He admits into the hospital a woman who has lost her memory in an accident. The doctor names her Asha and makes her believe that she is married to Anand and has a child. Anand reluctantly agrees to take care of her until she recovers from her amnesia. However, he soon falls in love with her, while she also grows attached to him and his son. But on the day of their wedding, Asha regains her memory and realizes that she is actually Anita, and that Ajay is her lover. Anand sacrifices his happiness and reunites Anita with Ajay, but Anita realizes that she loves Anand more and returns to him.


      Mr. Bechara 2 will continue the story of Anand and Anita, who are now happily married and have a daughter. Ajay also moves on with his life and finds a new partner. But their lives take a dramatic turn when Anita's brother, who was presumed dead in the accident that caused her amnesia, returns to claim his share of their family property. He also has a grudge against Anand and Ajay, and plots to ruin their lives. Will Anand and Anita be able to overcome this new challenge? Will Ajay be able to help them? Will there be more twists and turns in their love story?


      To find out the answers, you will have to wait for Mr. Bechara 2 to release in theatres. But if you can't wait that long, you can download Mr. Bechara 2 movie 1080p from our website. We have the best quality and fastest download speed for all your Bollywood movie needs. Just click on the link below and enjoy Mr. Bechara 2 movie 1080p on your device.



      Download Mr Bechara 2 Movie 1080p


      Mr. Bechara 2 is directed by K. Bhagyaraj, who also directed the original film. He has written the screenplay and the story for the sequel, based on his own Tamil film Veetla Visheshanga (1994), which was the source material for Mr. Bechara. The music for Mr. Bechara 2 is composed by Anand Milind, who also composed the songs for the first film. The lyrics are written by Sameer.


      The film has been shot in various locations in India and abroad, including Mumbai, Goa, Ooty, London and Switzerland. The film features some of the original cast members from Mr. Bechara, such as Anupam Kher as Dr. Dayanand, Shakti Kapoor as Mr. Natwarlal 'Romeo', Tiku Talsania as Inspector V.P. Chaturvedi and Shammi as the caretaker. The film also introduces some new characters, such as Anita's brother played by Abhimanyu Singh, Ajay's partner played by Heera Rajgopal and Anand's daughter played by Baby Akshay.


      Mr. Bechara 2 promises to be a fun-filled and heartwarming comedy that will make you laugh and cry. The film has some hilarious scenes, such as Anand trying to impress Anita's brother with his fake wealth, Ajay getting into trouble with Romeo's gang, Anita and Ajay competing in a dance contest and Dr. Dayanand using his crazy methods to help Anand and Anita. The film also has some emotional moments, such as Anita's brother revealing his true intentions, Anand and Anita facing a life-threatening situation, Ajay sacrificing his love for Anita and Anand and Anita renewing their vows.


      If you loved Mr. Bechara, you will surely love Mr. Bechara 2. And if you haven't seen Mr. Bechara, you can still enjoy Mr. Bechara 2, as it is a standalone story that does not require any prior knowledge of the first film. So don't miss this opportunity to watch Mr. Bechara 2 movie 1080p on your device. Just download it from our website and have a great time.

      \ No newline at end of file diff --git "a/spaces/rainy3/chatgpt_academic/crazy_functions/\344\273\243\347\240\201\351\207\215\345\206\231\344\270\272\345\205\250\350\213\261\346\226\207_\345\244\232\347\272\277\347\250\213.py" "b/spaces/rainy3/chatgpt_academic/crazy_functions/\344\273\243\347\240\201\351\207\215\345\206\231\344\270\272\345\205\250\350\213\261\346\226\207_\345\244\232\347\272\277\347\250\213.py" deleted file mode 100644 index f74704aec2730fc8e9198a6c79ef45a43346a261..0000000000000000000000000000000000000000 --- "a/spaces/rainy3/chatgpt_academic/crazy_functions/\344\273\243\347\240\201\351\207\215\345\206\231\344\270\272\345\205\250\350\213\261\346\226\207_\345\244\232\347\272\277\347\250\213.py" +++ /dev/null @@ -1,139 +0,0 @@ -import threading -from request_llm.bridge_chatgpt import predict_no_ui_long_connection -from toolbox import update_ui -from toolbox import CatchException, write_results_to_file, report_execption -from .crazy_utils import breakdown_txt_to_satisfy_token_limit - -def extract_code_block_carefully(txt): - splitted = txt.split('```') - n_code_block_seg = len(splitted) - 1 - if n_code_block_seg <= 1: return txt - # 剩下的情况都开头除去 ``` 结尾除去一次 ``` - txt_out = '```'.join(splitted[1:-1]) - return txt_out - - - -def break_txt_into_half_at_some_linebreak(txt): - lines = txt.split('\n') - n_lines = len(lines) - pre = lines[:(n_lines//2)] - post = lines[(n_lines//2):] - return "\n".join(pre), "\n".join(post) - - -@CatchException -def 全项目切换英文(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys_prompt, web_port): - # 第1步:清空历史,以免输入溢出 - history = [] - - # 第2步:尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import tiktoken - except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 第3步:集合文件 - import time, glob, os, shutil, re - os.makedirs('gpt_log/generated_english_version', exist_ok=True) - os.makedirs('gpt_log/generated_english_version/crazy_functions', exist_ok=True) - file_manifest = [f for f in glob.glob('./*.py') if ('test_project' not in f) and ('gpt_log' not in f)] + \ - [f for f in glob.glob('./crazy_functions/*.py') if ('test_project' not in f) and ('gpt_log' not in f)] - # file_manifest = ['./toolbox.py'] - i_say_show_user_buffer = [] - - # 第4步:随便显示点什么防止卡顿的感觉 - for index, fp in enumerate(file_manifest): - # if 'test_project' in fp: continue - with open(fp, 'r', encoding='utf-8', errors='replace') as f: - file_content = f.read() - i_say_show_user =f'[{index}/{len(file_manifest)}] 接下来请将以下代码中包含的所有中文转化为英文,只输出转化后的英文代码,请用代码块输出代码: {os.path.abspath(fp)}' - i_say_show_user_buffer.append(i_say_show_user) - chatbot.append((i_say_show_user, "[Local Message] 等待多线程操作,中间过程不予显示.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - - # 第5步:Token限制下的截断与处理 - MAX_TOKEN = 3000 - import tiktoken - from toolbox import get_conf - enc = tiktoken.encoding_for_model(*get_conf('LLM_MODEL')) - def get_token_fn(txt): return len(enc.encode(txt)) - - - # 第6步:任务函数 - mutable_return = [None for _ in file_manifest] - observe_window = [[""] for _ in file_manifest] - def thread_worker(fp,index): - if index > 10: - time.sleep(60) - print('Openai 限制免费用户每分钟20次请求,降低请求频率中。') - with open(fp, 'r', encoding='utf-8', errors='replace') as f: - file_content = f.read() - i_say_template = lambda fp, file_content: f'接下来请将以下代码中包含的所有中文转化为英文,只输出代码,文件名是{fp},文件代码是 ```{file_content}```' - try: - gpt_say = "" - # 分解代码文件 - file_content_breakdown = 
breakdown_txt_to_satisfy_token_limit(file_content, get_token_fn, MAX_TOKEN) - for file_content_partial in file_content_breakdown: - i_say = i_say_template(fp, file_content_partial) - # # ** gpt request ** - gpt_say_partial = predict_no_ui_long_connection(inputs=i_say, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=observe_window[index]) - gpt_say_partial = extract_code_block_carefully(gpt_say_partial) - gpt_say += gpt_say_partial - mutable_return[index] = gpt_say - except ConnectionAbortedError as token_exceed_err: - print('至少一个线程任务Token溢出而失败', e) - except Exception as e: - print('至少一个线程任务意外失败', e) - - # 第7步:所有线程同时开始执行任务函数 - handles = [threading.Thread(target=thread_worker, args=(fp,index)) for index, fp in enumerate(file_manifest)] - for h in handles: - h.daemon = True - h.start() - chatbot.append(('开始了吗?', f'多线程操作已经开始')) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 第8步:循环轮询各个线程是否执行完毕 - cnt = 0 - while True: - cnt += 1 - time.sleep(0.2) - th_alive = [h.is_alive() for h in handles] - if not any(th_alive): break - # 更好的UI视觉效果 - observe_win = [] - for thread_index, alive in enumerate(th_alive): - observe_win.append("[ ..."+observe_window[thread_index][0][-60:].replace('\n','').replace('```','...').replace(' ','.').replace('
      ','.....').replace('$','.')+"... ]") - stat = [f'执行中: {obs}\n\n' if alive else '已完成\n\n' for alive, obs in zip(th_alive, observe_win)] - stat_str = ''.join(stat) - chatbot[-1] = (chatbot[-1][0], f'多线程操作已经开始,完成情况: \n\n{stat_str}' + ''.join(['.']*(cnt%10+1))) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 第9步:把结果写入文件 - for index, h in enumerate(handles): - h.join() # 这里其实不需要join了,肯定已经都结束了 - fp = file_manifest[index] - gpt_say = mutable_return[index] - i_say_show_user = i_say_show_user_buffer[index] - - where_to_relocate = f'gpt_log/generated_english_version/{fp}' - if gpt_say is not None: - with open(where_to_relocate, 'w+', encoding='utf-8') as f: - f.write(gpt_say) - else: # 失败 - shutil.copyfile(file_manifest[index], where_to_relocate) - chatbot.append((i_say_show_user, f'[Local Message] 已完成{os.path.abspath(fp)}的转化,\n\n存入{os.path.abspath(where_to_relocate)}')) - history.append(i_say_show_user); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - time.sleep(1) - - # 第10步:备份一个文件 - res = write_results_to_file(history) - chatbot.append(("生成一份任务执行报告", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/diagnostics_channel.d.ts b/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/diagnostics_channel.d.ts deleted file mode 100644 index 3dcaa035a56d95e3e6bcfb39246f8b4bb6348ba7..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/diagnostics_channel.d.ts +++ /dev/null @@ -1,153 +0,0 @@ -/** - * The `diagnostics_channel` module provides an API to create named channels - * to report arbitrary message data for diagnostics purposes. - * - * It can be accessed using: - * - * ```js - * import diagnostics_channel from 'diagnostics_channel'; - * ``` - * - * It is intended that a module writer wanting to report diagnostics messages - * will create one or many top-level channels to report messages through. - * Channels may also be acquired at runtime but it is not encouraged - * due to the additional overhead of doing so. Channels may be exported for - * convenience, but as long as the name is known it can be acquired anywhere. - * - * If you intend for your module to produce diagnostics data for others to - * consume it is recommended that you include documentation of what named - * channels are used along with the shape of the message data. Channel names - * should generally include the module name to avoid collisions with data from - * other modules. - * @experimental - * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/diagnostics_channel.js) - */ -declare module 'diagnostics_channel' { - /** - * Check if there are active subscribers to the named channel. This is helpful if - * the message you want to send might be expensive to prepare. - * - * This API is optional but helpful when trying to publish messages from very - * performance-sensitive code. - * - * ```js - * import diagnostics_channel from 'diagnostics_channel'; - * - * if (diagnostics_channel.hasSubscribers('my-channel')) { - * // There are subscribers, prepare and publish message - * } - * ``` - * @since v15.1.0, v14.17.0 - * @param name The channel name - * @return If there are active subscribers - */ - function hasSubscribers(name: string | symbol): boolean; - /** - * This is the primary entry-point for anyone wanting to interact with a named - * channel. 
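The file above imports breakdown_txt_to_satisfy_token_limit from crazy_utils, which this diff does not include, so its chunking behavior has to be inferred. Judging from the neighboring break_txt_into_half_at_some_linebreak helper, it most likely splits a file recursively at a line break near the middle until every chunk fits the token budget. A minimal sketch under that assumption (the name, signature, and fallback behavior here are guesses, not the project's actual implementation):

def breakdown_txt_to_satisfy_token_limit_sketch(txt, get_token_fn, limit):
    # Already within the token budget, or too short to split further:
    # return it as a single chunk.
    if get_token_fn(txt) <= limit or len(txt) < 2:
        return [txt]
    lines = txt.split('\n')
    if len(lines) < 2:
        # No line break available; fall back to splitting at the middle character.
        half = len(txt) // 2
        pre, post = txt[:half], txt[half:]
    else:
        # Split at the line break closest to the middle of the text.
        pre = '\n'.join(lines[:len(lines) // 2])
        post = '\n'.join(lines[len(lines) // 2:])
    # Recurse on both halves until every piece fits the budget.
    return (breakdown_txt_to_satisfy_token_limit_sketch(pre, get_token_fn, limit)
            + breakdown_txt_to_satisfy_token_limit_sketch(post, get_token_fn, limit))

With get_token_fn as defined in step 5 and limit set to MAX_TOKEN = 3000, each returned chunk can be sent to the model as a separate request, which is how the worker loop above consumes the result before concatenating the translated pieces.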
diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/diagnostics_channel.d.ts b/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/diagnostics_channel.d.ts deleted file mode 100644 index 3dcaa035a56d95e3e6bcfb39246f8b4bb6348ba7..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/diagnostics_channel.d.ts +++ /dev/null @@ -1,153 +0,0 @@ -/** - * The `diagnostics_channel` module provides an API to create named channels - * to report arbitrary message data for diagnostics purposes. - * - * It can be accessed using: - * - * ```js - * import diagnostics_channel from 'diagnostics_channel'; - * ``` - * - * It is intended that a module writer wanting to report diagnostics messages - * will create one or many top-level channels to report messages through. - * Channels may also be acquired at runtime but it is not encouraged - * due to the additional overhead of doing so. Channels may be exported for - * convenience, but as long as the name is known it can be acquired anywhere. - * - * If you intend for your module to produce diagnostics data for others to - * consume it is recommended that you include documentation of what named - * channels are used along with the shape of the message data. Channel names - * should generally include the module name to avoid collisions with data from - * other modules. - * @experimental - * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/diagnostics_channel.js) - */ -declare module 'diagnostics_channel' { - /** - * Check if there are active subscribers to the named channel. This is helpful if - * the message you want to send might be expensive to prepare. - * - * This API is optional but helpful when trying to publish messages from very - * performance-sensitive code. - * - * ```js - * import diagnostics_channel from 'diagnostics_channel'; - * - * if (diagnostics_channel.hasSubscribers('my-channel')) { - * // There are subscribers, prepare and publish message - * } - * ``` - * @since v15.1.0, v14.17.0 - * @param name The channel name - * @return If there are active subscribers - */ - function hasSubscribers(name: string | symbol): boolean; - /** - * This is the primary entry-point for anyone wanting to interact with a named - * channel. 
It produces a channel object which is optimized to reduce overhead at - * publish time as much as possible. - * - * ```js - * import diagnostics_channel from 'diagnostics_channel'; - * - * const channel = diagnostics_channel.channel('my-channel'); - * ``` - * @since v15.1.0, v14.17.0 - * @param name The channel name - * @return The named channel object - */ - function channel(name: string | symbol): Channel; - type ChannelListener = (message: unknown, name: string | symbol) => void; - /** - * The class `Channel` represents an individual named channel within the data - * pipeline. It is used to track subscribers and to publish messages when there - * are subscribers present. It exists as a separate object to avoid channel - * lookups at publish time, enabling very fast publish speeds and allowing - * for heavy use while incurring very minimal cost. Channels are created with {@link channel}; constructing a channel directly - * with `new Channel(name)` is not supported. - * @since v15.1.0, v14.17.0 - */ - class Channel { - readonly name: string | symbol; - /** - * Check if there are active subscribers to this channel. This is helpful if - * the message you want to send might be expensive to prepare. - * - * This API is optional but helpful when trying to publish messages from very - * performance-sensitive code. - * - * ```js - * import diagnostics_channel from 'diagnostics_channel'; - * - * const channel = diagnostics_channel.channel('my-channel'); - * - * if (channel.hasSubscribers) { - * // There are subscribers, prepare and publish message - * } - * ``` - * @since v15.1.0, v14.17.0 - */ - readonly hasSubscribers: boolean; - private constructor(name: string | symbol); - /** - * Publish a message to any subscribers to the channel. This will - * trigger message handlers synchronously so they will execute within - * the same context. - * - * ```js - * import diagnostics_channel from 'diagnostics_channel'; - * - * const channel = diagnostics_channel.channel('my-channel'); - * - * channel.publish({ - * some: 'message' - * }); - * ``` - * @since v15.1.0, v14.17.0 - * @param message The message to send to the channel subscribers - */ - publish(message: unknown): void; - /** - * Register a message handler to subscribe to this channel. This message handler - * will be run synchronously whenever a message is published to the channel. Any - * errors thrown in the message handler will trigger an `'uncaughtException'`. - * - * ```js - * import diagnostics_channel from 'diagnostics_channel'; - * - * const channel = diagnostics_channel.channel('my-channel'); - * - * channel.subscribe((message, name) => { - * // Received data - * }); - * ``` - * @since v15.1.0, v14.17.0 - * @param onMessage The handler to receive channel messages - */ - subscribe(onMessage: ChannelListener): void; - /** - * Remove a message handler previously registered to this channel with `channel.subscribe(onMessage)`. - * - * ```js - * import diagnostics_channel from 'diagnostics_channel'; - * - * const channel = diagnostics_channel.channel('my-channel'); - * - * function onMessage(message, name) { - * // Received data - * } - * - * channel.subscribe(onMessage); - * - * channel.unsubscribe(onMessage); - * ``` - * @since v15.1.0, v14.17.0 - * @param onMessage The previous subscribed handler to remove - * @return `true` if the handler was found, `false` otherwise. 
- */ - unsubscribe(onMessage: ChannelListener): void; - } -} -declare module 'node:diagnostics_channel' { - export * from 'diagnostics_channel'; -} diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/ts4.8/inspector.d.ts b/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/ts4.8/inspector.d.ts deleted file mode 100644 index eba0b55d8bca0ef10cbf24922fb899b67c35f3a9..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/ts4.8/inspector.d.ts +++ /dev/null @@ -1,2741 +0,0 @@ -// eslint-disable-next-line dt-header -// Type definitions for inspector - -// These definitions are auto-generated. -// Please see https://github.com/DefinitelyTyped/DefinitelyTyped/pull/19330 -// for more information. - -// tslint:disable:max-line-length - -/** - * The `inspector` module provides an API for interacting with the V8 inspector. - * - * It can be accessed using: - * - * ```js - * const inspector = require('inspector'); - * ``` - * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/inspector.js) - */ -declare module 'inspector' { - import EventEmitter = require('node:events'); - interface InspectorNotification { - method: string; - params: T; - } - namespace Schema { - /** - * Description of the protocol domain. - */ - interface Domain { - /** - * Domain name. - */ - name: string; - /** - * Domain version. - */ - version: string; - } - interface GetDomainsReturnType { - /** - * List of supported domains. - */ - domains: Domain[]; - } - } - namespace Runtime { - /** - * Unique script identifier. - */ - type ScriptId = string; - /** - * Unique object identifier. - */ - type RemoteObjectId = string; - /** - * Primitive value which cannot be JSON-stringified. - */ - type UnserializableValue = string; - /** - * Mirror object referencing original JavaScript object. - */ - interface RemoteObject { - /** - * Object type. - */ - type: string; - /** - * Object subtype hint. Specified for object type values only. - */ - subtype?: string | undefined; - /** - * Object class (constructor) name. Specified for object type values only. - */ - className?: string | undefined; - /** - * Remote object value in case of primitive values or JSON values (if it was requested). - */ - value?: any; - /** - * Primitive value which can not be JSON-stringified does not have value, but gets this property. - */ - unserializableValue?: UnserializableValue | undefined; - /** - * String representation of the object. - */ - description?: string | undefined; - /** - * Unique object identifier (for non-primitive values). - */ - objectId?: RemoteObjectId | undefined; - /** - * Preview containing abbreviated property values. Specified for object type values only. - * @experimental - */ - preview?: ObjectPreview | undefined; - /** - * @experimental - */ - customPreview?: CustomPreview | undefined; - } - /** - * @experimental - */ - interface CustomPreview { - header: string; - hasBody: boolean; - formatterObjectId: RemoteObjectId; - bindRemoteObjectFunctionId: RemoteObjectId; - configObjectId?: RemoteObjectId | undefined; - } - /** - * Object containing abbreviated remote object value. - * @experimental - */ - interface ObjectPreview { - /** - * Object type. - */ - type: string; - /** - * Object subtype hint. Specified for object type values only. - */ - subtype?: string | undefined; - /** - * String representation of the object. 
- */ - description?: string | undefined; - /** - * True iff some of the properties or entries of the original object did not fit. - */ - overflow: boolean; - /** - * List of the properties. - */ - properties: PropertyPreview[]; - /** - * List of the entries. Specified for map and set subtype values only. - */ - entries?: EntryPreview[] | undefined; - } - /** - * @experimental - */ - interface PropertyPreview { - /** - * Property name. - */ - name: string; - /** - * Object type. Accessor means that the property itself is an accessor property. - */ - type: string; - /** - * User-friendly property value string. - */ - value?: string | undefined; - /** - * Nested value preview. - */ - valuePreview?: ObjectPreview | undefined; - /** - * Object subtype hint. Specified for object type values only. - */ - subtype?: string | undefined; - } - /** - * @experimental - */ - interface EntryPreview { - /** - * Preview of the key. Specified for map-like collection entries. - */ - key?: ObjectPreview | undefined; - /** - * Preview of the value. - */ - value: ObjectPreview; - } - /** - * Object property descriptor. - */ - interface PropertyDescriptor { - /** - * Property name or symbol description. - */ - name: string; - /** - * The value associated with the property. - */ - value?: RemoteObject | undefined; - /** - * True if the value associated with the property may be changed (data descriptors only). - */ - writable?: boolean | undefined; - /** - * A function which serves as a getter for the property, or undefined if there is no getter (accessor descriptors only). - */ - get?: RemoteObject | undefined; - /** - * A function which serves as a setter for the property, or undefined if there is no setter (accessor descriptors only). - */ - set?: RemoteObject | undefined; - /** - * True if the type of this property descriptor may be changed and if the property may be deleted from the corresponding object. - */ - configurable: boolean; - /** - * True if this property shows up during enumeration of the properties on the corresponding object. - */ - enumerable: boolean; - /** - * True if the result was thrown during the evaluation. - */ - wasThrown?: boolean | undefined; - /** - * True if the property is owned for the object. - */ - isOwn?: boolean | undefined; - /** - * Property symbol object, if the property is of the symbol type. - */ - symbol?: RemoteObject | undefined; - } - /** - * Object internal property descriptor. This property isn't normally visible in JavaScript code. - */ - interface InternalPropertyDescriptor { - /** - * Conventional property name. - */ - name: string; - /** - * The value associated with the property. - */ - value?: RemoteObject | undefined; - } - /** - * Represents function call argument. Either remote object id objectId, primitive value, unserializable primitive value or neither of (for undefined) them should be specified. - */ - interface CallArgument { - /** - * Primitive value or serializable javascript object. - */ - value?: any; - /** - * Primitive value which can not be JSON-stringified. - */ - unserializableValue?: UnserializableValue | undefined; - /** - * Remote object handle. - */ - objectId?: RemoteObjectId | undefined; - } - /** - * Id of an execution context. - */ - type ExecutionContextId = number; - /** - * Description of an isolated world. - */ - interface ExecutionContextDescription { - /** - * Unique id of the execution context. It can be used to specify in which execution context script evaluation should be performed. 
- */ - id: ExecutionContextId; - /** - * Execution context origin. - */ - origin: string; - /** - * Human readable name describing given context. - */ - name: string; - /** - * Embedder-specific auxiliary data. - */ - auxData?: {} | undefined; - } - /** - * Detailed information about exception (or error) that was thrown during script compilation or execution. - */ - interface ExceptionDetails { - /** - * Exception id. - */ - exceptionId: number; - /** - * Exception text, which should be used together with exception object when available. - */ - text: string; - /** - * Line number of the exception location (0-based). - */ - lineNumber: number; - /** - * Column number of the exception location (0-based). - */ - columnNumber: number; - /** - * Script ID of the exception location. - */ - scriptId?: ScriptId | undefined; - /** - * URL of the exception location, to be used when the script was not reported. - */ - url?: string | undefined; - /** - * JavaScript stack trace if available. - */ - stackTrace?: StackTrace | undefined; - /** - * Exception object if available. - */ - exception?: RemoteObject | undefined; - /** - * Identifier of the context where exception happened. - */ - executionContextId?: ExecutionContextId | undefined; - } - /** - * Number of milliseconds since epoch. - */ - type Timestamp = number; - /** - * Stack entry for runtime errors and assertions. - */ - interface CallFrame { - /** - * JavaScript function name. - */ - functionName: string; - /** - * JavaScript script id. - */ - scriptId: ScriptId; - /** - * JavaScript script name or url. - */ - url: string; - /** - * JavaScript script line number (0-based). - */ - lineNumber: number; - /** - * JavaScript script column number (0-based). - */ - columnNumber: number; - } - /** - * Call frames for assertions or error messages. - */ - interface StackTrace { - /** - * String label of this stack trace. For async traces this may be a name of the function that initiated the async call. - */ - description?: string | undefined; - /** - * JavaScript function name. - */ - callFrames: CallFrame[]; - /** - * Asynchronous JavaScript stack trace that preceded this stack, if available. - */ - parent?: StackTrace | undefined; - /** - * Asynchronous JavaScript stack trace that preceded this stack, if available. - * @experimental - */ - parentId?: StackTraceId | undefined; - } - /** - * Unique identifier of current debugger. - * @experimental - */ - type UniqueDebuggerId = string; - /** - * If debuggerId is set stack trace comes from another debugger and can be resolved there. This allows to track cross-debugger calls. See Runtime.StackTrace and Debugger.paused for usages. - * @experimental - */ - interface StackTraceId { - id: string; - debuggerId?: UniqueDebuggerId | undefined; - } - interface EvaluateParameterType { - /** - * Expression to evaluate. - */ - expression: string; - /** - * Symbolic group name that can be used to release multiple objects. - */ - objectGroup?: string | undefined; - /** - * Determines whether Command Line API should be available during the evaluation. - */ - includeCommandLineAPI?: boolean | undefined; - /** - * In silent mode exceptions thrown during evaluation are not reported and do not pause execution. Overrides setPauseOnException state. - */ - silent?: boolean | undefined; - /** - * Specifies in which execution context to perform evaluation. If the parameter is omitted the evaluation will be performed in the context of the inspected page. 
- */ - contextId?: ExecutionContextId | undefined; - /** - * Whether the result is expected to be a JSON object that should be sent by value. - */ - returnByValue?: boolean | undefined; - /** - * Whether preview should be generated for the result. - * @experimental - */ - generatePreview?: boolean | undefined; - /** - * Whether execution should be treated as initiated by user in the UI. - */ - userGesture?: boolean | undefined; - /** - * Whether execution should await for resulting value and return once awaited promise is resolved. - */ - awaitPromise?: boolean | undefined; - } - interface AwaitPromiseParameterType { - /** - * Identifier of the promise. - */ - promiseObjectId: RemoteObjectId; - /** - * Whether the result is expected to be a JSON object that should be sent by value. - */ - returnByValue?: boolean | undefined; - /** - * Whether preview should be generated for the result. - */ - generatePreview?: boolean | undefined; - } - interface CallFunctionOnParameterType { - /** - * Declaration of the function to call. - */ - functionDeclaration: string; - /** - * Identifier of the object to call function on. Either objectId or executionContextId should be specified. - */ - objectId?: RemoteObjectId | undefined; - /** - * Call arguments. All call arguments must belong to the same JavaScript world as the target object. - */ - arguments?: CallArgument[] | undefined; - /** - * In silent mode exceptions thrown during evaluation are not reported and do not pause execution. Overrides setPauseOnException state. - */ - silent?: boolean | undefined; - /** - * Whether the result is expected to be a JSON object which should be sent by value. - */ - returnByValue?: boolean | undefined; - /** - * Whether preview should be generated for the result. - * @experimental - */ - generatePreview?: boolean | undefined; - /** - * Whether execution should be treated as initiated by user in the UI. - */ - userGesture?: boolean | undefined; - /** - * Whether execution should await for resulting value and return once awaited promise is resolved. - */ - awaitPromise?: boolean | undefined; - /** - * Specifies execution context which global object will be used to call function on. Either executionContextId or objectId should be specified. - */ - executionContextId?: ExecutionContextId | undefined; - /** - * Symbolic group name that can be used to release multiple objects. If objectGroup is not specified and objectId is, objectGroup will be inherited from object. - */ - objectGroup?: string | undefined; - } - interface GetPropertiesParameterType { - /** - * Identifier of the object to return properties for. - */ - objectId: RemoteObjectId; - /** - * If true, returns properties belonging only to the element itself, not to its prototype chain. - */ - ownProperties?: boolean | undefined; - /** - * If true, returns accessor properties (with getter/setter) only; internal properties are not returned either. - * @experimental - */ - accessorPropertiesOnly?: boolean | undefined; - /** - * Whether preview should be generated for the results. - * @experimental - */ - generatePreview?: boolean | undefined; - } - interface ReleaseObjectParameterType { - /** - * Identifier of the object to release. - */ - objectId: RemoteObjectId; - } - interface ReleaseObjectGroupParameterType { - /** - * Symbolic object group name. - */ - objectGroup: string; - } - interface SetCustomObjectFormatterEnabledParameterType { - enabled: boolean; - } - interface CompileScriptParameterType { - /** - * Expression to compile. 
- */ - expression: string; - /** - * Source url to be set for the script. - */ - sourceURL: string; - /** - * Specifies whether the compiled script should be persisted. - */ - persistScript: boolean; - /** - * Specifies in which execution context to perform script run. If the parameter is omitted the evaluation will be performed in the context of the inspected page. - */ - executionContextId?: ExecutionContextId | undefined; - } - interface RunScriptParameterType { - /** - * Id of the script to run. - */ - scriptId: ScriptId; - /** - * Specifies in which execution context to perform script run. If the parameter is omitted the evaluation will be performed in the context of the inspected page. - */ - executionContextId?: ExecutionContextId | undefined; - /** - * Symbolic group name that can be used to release multiple objects. - */ - objectGroup?: string | undefined; - /** - * In silent mode exceptions thrown during evaluation are not reported and do not pause execution. Overrides setPauseOnException state. - */ - silent?: boolean | undefined; - /** - * Determines whether Command Line API should be available during the evaluation. - */ - includeCommandLineAPI?: boolean | undefined; - /** - * Whether the result is expected to be a JSON object which should be sent by value. - */ - returnByValue?: boolean | undefined; - /** - * Whether preview should be generated for the result. - */ - generatePreview?: boolean | undefined; - /** - * Whether execution should await for resulting value and return once awaited promise is resolved. - */ - awaitPromise?: boolean | undefined; - } - interface QueryObjectsParameterType { - /** - * Identifier of the prototype to return objects for. - */ - prototypeObjectId: RemoteObjectId; - } - interface GlobalLexicalScopeNamesParameterType { - /** - * Specifies in which execution context to lookup global scope variables. - */ - executionContextId?: ExecutionContextId | undefined; - } - interface EvaluateReturnType { - /** - * Evaluation result. - */ - result: RemoteObject; - /** - * Exception details. - */ - exceptionDetails?: ExceptionDetails | undefined; - } - interface AwaitPromiseReturnType { - /** - * Promise result. Will contain rejected value if promise was rejected. - */ - result: RemoteObject; - /** - * Exception details if stack strace is available. - */ - exceptionDetails?: ExceptionDetails | undefined; - } - interface CallFunctionOnReturnType { - /** - * Call result. - */ - result: RemoteObject; - /** - * Exception details. - */ - exceptionDetails?: ExceptionDetails | undefined; - } - interface GetPropertiesReturnType { - /** - * Object properties. - */ - result: PropertyDescriptor[]; - /** - * Internal object properties (only of the element itself). - */ - internalProperties?: InternalPropertyDescriptor[] | undefined; - /** - * Exception details. - */ - exceptionDetails?: ExceptionDetails | undefined; - } - interface CompileScriptReturnType { - /** - * Id of the script. - */ - scriptId?: ScriptId | undefined; - /** - * Exception details. - */ - exceptionDetails?: ExceptionDetails | undefined; - } - interface RunScriptReturnType { - /** - * Run result. - */ - result: RemoteObject; - /** - * Exception details. - */ - exceptionDetails?: ExceptionDetails | undefined; - } - interface QueryObjectsReturnType { - /** - * Array with objects. - */ - objects: RemoteObject; - } - interface GlobalLexicalScopeNamesReturnType { - names: string[]; - } - interface ExecutionContextCreatedEventDataType { - /** - * A newly created execution context. 
- */ - context: ExecutionContextDescription; - } - interface ExecutionContextDestroyedEventDataType { - /** - * Id of the destroyed context - */ - executionContextId: ExecutionContextId; - } - interface ExceptionThrownEventDataType { - /** - * Timestamp of the exception. - */ - timestamp: Timestamp; - exceptionDetails: ExceptionDetails; - } - interface ExceptionRevokedEventDataType { - /** - * Reason describing why exception was revoked. - */ - reason: string; - /** - * The id of revoked exception, as reported in exceptionThrown. - */ - exceptionId: number; - } - interface ConsoleAPICalledEventDataType { - /** - * Type of the call. - */ - type: string; - /** - * Call arguments. - */ - args: RemoteObject[]; - /** - * Identifier of the context where the call was made. - */ - executionContextId: ExecutionContextId; - /** - * Call timestamp. - */ - timestamp: Timestamp; - /** - * Stack trace captured when the call was made. - */ - stackTrace?: StackTrace | undefined; - /** - * Console context descriptor for calls on non-default console context (not console.*): 'anonymous#unique-logger-id' for call on unnamed context, 'name#unique-logger-id' for call on named context. - * @experimental - */ - context?: string | undefined; - } - interface InspectRequestedEventDataType { - object: RemoteObject; - hints: {}; - } - } - namespace Debugger { - /** - * Breakpoint identifier. - */ - type BreakpointId = string; - /** - * Call frame identifier. - */ - type CallFrameId = string; - /** - * Location in the source code. - */ - interface Location { - /** - * Script identifier as reported in the Debugger.scriptParsed. - */ - scriptId: Runtime.ScriptId; - /** - * Line number in the script (0-based). - */ - lineNumber: number; - /** - * Column number in the script (0-based). - */ - columnNumber?: number | undefined; - } - /** - * Location in the source code. - * @experimental - */ - interface ScriptPosition { - lineNumber: number; - columnNumber: number; - } - /** - * JavaScript call frame. Array of call frames form the call stack. - */ - interface CallFrame { - /** - * Call frame identifier. This identifier is only valid while the virtual machine is paused. - */ - callFrameId: CallFrameId; - /** - * Name of the JavaScript function called on this call frame. - */ - functionName: string; - /** - * Location in the source code. - */ - functionLocation?: Location | undefined; - /** - * Location in the source code. - */ - location: Location; - /** - * JavaScript script name or url. - */ - url: string; - /** - * Scope chain for this call frame. - */ - scopeChain: Scope[]; - /** - * this object for this call frame. - */ - this: Runtime.RemoteObject; - /** - * The value being returned, if the function is at return point. - */ - returnValue?: Runtime.RemoteObject | undefined; - } - /** - * Scope description. - */ - interface Scope { - /** - * Scope type. - */ - type: string; - /** - * Object representing the scope. For global and with scopes it represents the actual object; for the rest of the scopes, it is artificial transient object enumerating scope variables as its properties. - */ - object: Runtime.RemoteObject; - name?: string | undefined; - /** - * Location in the source code where scope starts - */ - startLocation?: Location | undefined; - /** - * Location in the source code where scope ends - */ - endLocation?: Location | undefined; - } - /** - * Search match for resource. - */ - interface SearchMatch { - /** - * Line number in resource content. - */ - lineNumber: number; - /** - * Line with match content. 
- */ - lineContent: string; - } - interface BreakLocation { - /** - * Script identifier as reported in the Debugger.scriptParsed. - */ - scriptId: Runtime.ScriptId; - /** - * Line number in the script (0-based). - */ - lineNumber: number; - /** - * Column number in the script (0-based). - */ - columnNumber?: number | undefined; - type?: string | undefined; - } - interface SetBreakpointsActiveParameterType { - /** - * New value for breakpoints active state. - */ - active: boolean; - } - interface SetSkipAllPausesParameterType { - /** - * New value for skip pauses state. - */ - skip: boolean; - } - interface SetBreakpointByUrlParameterType { - /** - * Line number to set breakpoint at. - */ - lineNumber: number; - /** - * URL of the resources to set breakpoint on. - */ - url?: string | undefined; - /** - * Regex pattern for the URLs of the resources to set breakpoints on. Either url or urlRegex must be specified. - */ - urlRegex?: string | undefined; - /** - * Script hash of the resources to set breakpoint on. - */ - scriptHash?: string | undefined; - /** - * Offset in the line to set breakpoint at. - */ - columnNumber?: number | undefined; - /** - * Expression to use as a breakpoint condition. When specified, debugger will only stop on the breakpoint if this expression evaluates to true. - */ - condition?: string | undefined; - } - interface SetBreakpointParameterType { - /** - * Location to set breakpoint in. - */ - location: Location; - /** - * Expression to use as a breakpoint condition. When specified, debugger will only stop on the breakpoint if this expression evaluates to true. - */ - condition?: string | undefined; - } - interface RemoveBreakpointParameterType { - breakpointId: BreakpointId; - } - interface GetPossibleBreakpointsParameterType { - /** - * Start of range to search possible breakpoint locations in. - */ - start: Location; - /** - * End of range to search possible breakpoint locations in (excluding). When not specified, end of scripts is used as end of range. - */ - end?: Location | undefined; - /** - * Only consider locations which are in the same (non-nested) function as start. - */ - restrictToFunction?: boolean | undefined; - } - interface ContinueToLocationParameterType { - /** - * Location to continue to. - */ - location: Location; - targetCallFrames?: string | undefined; - } - interface PauseOnAsyncCallParameterType { - /** - * Debugger will pause when async call with given stack trace is started. - */ - parentStackTraceId: Runtime.StackTraceId; - } - interface StepIntoParameterType { - /** - * Debugger will issue additional Debugger.paused notification if any async task is scheduled before next pause. - * @experimental - */ - breakOnAsyncCall?: boolean | undefined; - } - interface GetStackTraceParameterType { - stackTraceId: Runtime.StackTraceId; - } - interface SearchInContentParameterType { - /** - * Id of the script to search in. - */ - scriptId: Runtime.ScriptId; - /** - * String to search for. - */ - query: string; - /** - * If true, search is case sensitive. - */ - caseSensitive?: boolean | undefined; - /** - * If true, treats string parameter as regex. - */ - isRegex?: boolean | undefined; - } - interface SetScriptSourceParameterType { - /** - * Id of the script to edit. - */ - scriptId: Runtime.ScriptId; - /** - * New content of the script. - */ - scriptSource: string; - /** - * If true the change will not actually be applied. Dry run may be used to get result description without actually modifying the code. 
- */ - dryRun?: boolean | undefined; - } - interface RestartFrameParameterType { - /** - * Call frame identifier to evaluate on. - */ - callFrameId: CallFrameId; - } - interface GetScriptSourceParameterType { - /** - * Id of the script to get source for. - */ - scriptId: Runtime.ScriptId; - } - interface SetPauseOnExceptionsParameterType { - /** - * Pause on exceptions mode. - */ - state: string; - } - interface EvaluateOnCallFrameParameterType { - /** - * Call frame identifier to evaluate on. - */ - callFrameId: CallFrameId; - /** - * Expression to evaluate. - */ - expression: string; - /** - * String object group name to put result into (allows rapid releasing resulting object handles using releaseObjectGroup). - */ - objectGroup?: string | undefined; - /** - * Specifies whether command line API should be available to the evaluated expression, defaults to false. - */ - includeCommandLineAPI?: boolean | undefined; - /** - * In silent mode exceptions thrown during evaluation are not reported and do not pause execution. Overrides setPauseOnException state. - */ - silent?: boolean | undefined; - /** - * Whether the result is expected to be a JSON object that should be sent by value. - */ - returnByValue?: boolean | undefined; - /** - * Whether preview should be generated for the result. - * @experimental - */ - generatePreview?: boolean | undefined; - /** - * Whether to throw an exception if side effect cannot be ruled out during evaluation. - */ - throwOnSideEffect?: boolean | undefined; - } - interface SetVariableValueParameterType { - /** - * 0-based number of scope as was listed in scope chain. Only 'local', 'closure' and 'catch' scope types are allowed. Other scopes could be manipulated manually. - */ - scopeNumber: number; - /** - * Variable name. - */ - variableName: string; - /** - * New variable value. - */ - newValue: Runtime.CallArgument; - /** - * Id of callframe that holds variable. - */ - callFrameId: CallFrameId; - } - interface SetReturnValueParameterType { - /** - * New return value. - */ - newValue: Runtime.CallArgument; - } - interface SetAsyncCallStackDepthParameterType { - /** - * Maximum depth of async call stacks. Setting to 0 will effectively disable collecting async call stacks (default). - */ - maxDepth: number; - } - interface SetBlackboxPatternsParameterType { - /** - * Array of regexps that will be used to check script url for blackbox state. - */ - patterns: string[]; - } - interface SetBlackboxedRangesParameterType { - /** - * Id of the script. - */ - scriptId: Runtime.ScriptId; - positions: ScriptPosition[]; - } - interface EnableReturnType { - /** - * Unique identifier of the debugger. - * @experimental - */ - debuggerId: Runtime.UniqueDebuggerId; - } - interface SetBreakpointByUrlReturnType { - /** - * Id of the created breakpoint for further reference. - */ - breakpointId: BreakpointId; - /** - * List of the locations this breakpoint resolved into upon addition. - */ - locations: Location[]; - } - interface SetBreakpointReturnType { - /** - * Id of the created breakpoint for further reference. - */ - breakpointId: BreakpointId; - /** - * Location this breakpoint resolved into. - */ - actualLocation: Location; - } - interface GetPossibleBreakpointsReturnType { - /** - * List of the possible breakpoint locations. - */ - locations: BreakLocation[]; - } - interface GetStackTraceReturnType { - stackTrace: Runtime.StackTrace; - } - interface SearchInContentReturnType { - /** - * List of search matches. 
- */ - result: SearchMatch[]; - } - interface SetScriptSourceReturnType { - /** - * New stack trace in case editing has happened while VM was stopped. - */ - callFrames?: CallFrame[] | undefined; - /** - * Whether current call stack was modified after applying the changes. - */ - stackChanged?: boolean | undefined; - /** - * Async stack trace, if any. - */ - asyncStackTrace?: Runtime.StackTrace | undefined; - /** - * Async stack trace, if any. - * @experimental - */ - asyncStackTraceId?: Runtime.StackTraceId | undefined; - /** - * Exception details if any. - */ - exceptionDetails?: Runtime.ExceptionDetails | undefined; - } - interface RestartFrameReturnType { - /** - * New stack trace. - */ - callFrames: CallFrame[]; - /** - * Async stack trace, if any. - */ - asyncStackTrace?: Runtime.StackTrace | undefined; - /** - * Async stack trace, if any. - * @experimental - */ - asyncStackTraceId?: Runtime.StackTraceId | undefined; - } - interface GetScriptSourceReturnType { - /** - * Script source. - */ - scriptSource: string; - } - interface EvaluateOnCallFrameReturnType { - /** - * Object wrapper for the evaluation result. - */ - result: Runtime.RemoteObject; - /** - * Exception details. - */ - exceptionDetails?: Runtime.ExceptionDetails | undefined; - } - interface ScriptParsedEventDataType { - /** - * Identifier of the script parsed. - */ - scriptId: Runtime.ScriptId; - /** - * URL or name of the script parsed (if any). - */ - url: string; - /** - * Line offset of the script within the resource with given URL (for script tags). - */ - startLine: number; - /** - * Column offset of the script within the resource with given URL. - */ - startColumn: number; - /** - * Last line of the script. - */ - endLine: number; - /** - * Length of the last line of the script. - */ - endColumn: number; - /** - * Specifies script creation context. - */ - executionContextId: Runtime.ExecutionContextId; - /** - * Content hash of the script. - */ - hash: string; - /** - * Embedder-specific auxiliary data. - */ - executionContextAuxData?: {} | undefined; - /** - * True, if this script is generated as a result of the live edit operation. - * @experimental - */ - isLiveEdit?: boolean | undefined; - /** - * URL of source map associated with script (if any). - */ - sourceMapURL?: string | undefined; - /** - * True, if this script has sourceURL. - */ - hasSourceURL?: boolean | undefined; - /** - * True, if this script is ES6 module. - */ - isModule?: boolean | undefined; - /** - * This script length. - */ - length?: number | undefined; - /** - * JavaScript top stack frame of where the script parsed event was triggered if available. - * @experimental - */ - stackTrace?: Runtime.StackTrace | undefined; - } - interface ScriptFailedToParseEventDataType { - /** - * Identifier of the script parsed. - */ - scriptId: Runtime.ScriptId; - /** - * URL or name of the script parsed (if any). - */ - url: string; - /** - * Line offset of the script within the resource with given URL (for script tags). - */ - startLine: number; - /** - * Column offset of the script within the resource with given URL. - */ - startColumn: number; - /** - * Last line of the script. - */ - endLine: number; - /** - * Length of the last line of the script. - */ - endColumn: number; - /** - * Specifies script creation context. - */ - executionContextId: Runtime.ExecutionContextId; - /** - * Content hash of the script. - */ - hash: string; - /** - * Embedder-specific auxiliary data. 
- */ - executionContextAuxData?: {} | undefined; - /** - * URL of source map associated with script (if any). - */ - sourceMapURL?: string | undefined; - /** - * True, if this script has sourceURL. - */ - hasSourceURL?: boolean | undefined; - /** - * True, if this script is ES6 module. - */ - isModule?: boolean | undefined; - /** - * This script length. - */ - length?: number | undefined; - /** - * JavaScript top stack frame of where the script parsed event was triggered if available. - * @experimental - */ - stackTrace?: Runtime.StackTrace | undefined; - } - interface BreakpointResolvedEventDataType { - /** - * Breakpoint unique identifier. - */ - breakpointId: BreakpointId; - /** - * Actual breakpoint location. - */ - location: Location; - } - interface PausedEventDataType { - /** - * Call stack the virtual machine stopped on. - */ - callFrames: CallFrame[]; - /** - * Pause reason. - */ - reason: string; - /** - * Object containing break-specific auxiliary properties. - */ - data?: {} | undefined; - /** - * Hit breakpoints IDs - */ - hitBreakpoints?: string[] | undefined; - /** - * Async stack trace, if any. - */ - asyncStackTrace?: Runtime.StackTrace | undefined; - /** - * Async stack trace, if any. - * @experimental - */ - asyncStackTraceId?: Runtime.StackTraceId | undefined; - /** - * Just scheduled async call will have this stack trace as parent stack during async execution. This field is available only after Debugger.stepInto call with breakOnAsynCall flag. - * @experimental - */ - asyncCallStackTraceId?: Runtime.StackTraceId | undefined; - } - } - namespace Console { - /** - * Console message. - */ - interface ConsoleMessage { - /** - * Message source. - */ - source: string; - /** - * Message severity. - */ - level: string; - /** - * Message text. - */ - text: string; - /** - * URL of the message origin. - */ - url?: string | undefined; - /** - * Line number in the resource that generated this message (1-based). - */ - line?: number | undefined; - /** - * Column number in the resource that generated this message (1-based). - */ - column?: number | undefined; - } - interface MessageAddedEventDataType { - /** - * Console message that has been added. - */ - message: ConsoleMessage; - } - } - namespace Profiler { - /** - * Profile node. Holds callsite information, execution statistics and child nodes. - */ - interface ProfileNode { - /** - * Unique id of the node. - */ - id: number; - /** - * Function location. - */ - callFrame: Runtime.CallFrame; - /** - * Number of samples where this node was on top of the call stack. - */ - hitCount?: number | undefined; - /** - * Child node ids. - */ - children?: number[] | undefined; - /** - * The reason of being not optimized. The function may be deoptimized or marked as don't optimize. - */ - deoptReason?: string | undefined; - /** - * An array of source position ticks. - */ - positionTicks?: PositionTickInfo[] | undefined; - } - /** - * Profile. - */ - interface Profile { - /** - * The list of profile nodes. First item is the root node. - */ - nodes: ProfileNode[]; - /** - * Profiling start timestamp in microseconds. - */ - startTime: number; - /** - * Profiling end timestamp in microseconds. - */ - endTime: number; - /** - * Ids of samples top nodes. - */ - samples?: number[] | undefined; - /** - * Time intervals between adjacent samples in microseconds. The first delta is relative to the profile startTime. - */ - timeDeltas?: number[] | undefined; - } - /** - * Specifies a number of samples attributed to a certain source position. 
- */ - interface PositionTickInfo { - /** - * Source line number (1-based). - */ - line: number; - /** - * Number of samples attributed to the source line. - */ - ticks: number; - } - /** - * Coverage data for a source range. - */ - interface CoverageRange { - /** - * JavaScript script source offset for the range start. - */ - startOffset: number; - /** - * JavaScript script source offset for the range end. - */ - endOffset: number; - /** - * Collected execution count of the source range. - */ - count: number; - } - /** - * Coverage data for a JavaScript function. - */ - interface FunctionCoverage { - /** - * JavaScript function name. - */ - functionName: string; - /** - * Source ranges inside the function with coverage data. - */ - ranges: CoverageRange[]; - /** - * Whether coverage data for this function has block granularity. - */ - isBlockCoverage: boolean; - } - /** - * Coverage data for a JavaScript script. - */ - interface ScriptCoverage { - /** - * JavaScript script id. - */ - scriptId: Runtime.ScriptId; - /** - * JavaScript script name or url. - */ - url: string; - /** - * Functions contained in the script that has coverage data. - */ - functions: FunctionCoverage[]; - } - /** - * Describes a type collected during runtime. - * @experimental - */ - interface TypeObject { - /** - * Name of a type collected with type profiling. - */ - name: string; - } - /** - * Source offset and types for a parameter or return value. - * @experimental - */ - interface TypeProfileEntry { - /** - * Source offset of the parameter or end of function for return values. - */ - offset: number; - /** - * The types for this parameter or return value. - */ - types: TypeObject[]; - } - /** - * Type profile data collected during runtime for a JavaScript script. - * @experimental - */ - interface ScriptTypeProfile { - /** - * JavaScript script id. - */ - scriptId: Runtime.ScriptId; - /** - * JavaScript script name or url. - */ - url: string; - /** - * Type profile entries for parameters and return values of the functions in the script. - */ - entries: TypeProfileEntry[]; - } - interface SetSamplingIntervalParameterType { - /** - * New sampling interval in microseconds. - */ - interval: number; - } - interface StartPreciseCoverageParameterType { - /** - * Collect accurate call counts beyond simple 'covered' or 'not covered'. - */ - callCount?: boolean | undefined; - /** - * Collect block-based coverage. - */ - detailed?: boolean | undefined; - } - interface StopReturnType { - /** - * Recorded profile. - */ - profile: Profile; - } - interface TakePreciseCoverageReturnType { - /** - * Coverage data for the current isolate. - */ - result: ScriptCoverage[]; - } - interface GetBestEffortCoverageReturnType { - /** - * Coverage data for the current isolate. - */ - result: ScriptCoverage[]; - } - interface TakeTypeProfileReturnType { - /** - * Type profile for all scripts since startTypeProfile() was turned on. - */ - result: ScriptTypeProfile[]; - } - interface ConsoleProfileStartedEventDataType { - id: string; - /** - * Location of console.profile(). - */ - location: Debugger.Location; - /** - * Profile title passed as an argument to console.profile(). - */ - title?: string | undefined; - } - interface ConsoleProfileFinishedEventDataType { - id: string; - /** - * Location of console.profileEnd(). - */ - location: Debugger.Location; - profile: Profile; - /** - * Profile title passed as an argument to console.profile(). 
- */ - title?: string | undefined; - } - } - namespace HeapProfiler { - /** - * Heap snapshot object id. - */ - type HeapSnapshotObjectId = string; - /** - * Sampling Heap Profile node. Holds callsite information, allocation statistics and child nodes. - */ - interface SamplingHeapProfileNode { - /** - * Function location. - */ - callFrame: Runtime.CallFrame; - /** - * Allocations size in bytes for the node excluding children. - */ - selfSize: number; - /** - * Child nodes. - */ - children: SamplingHeapProfileNode[]; - } - /** - * Profile. - */ - interface SamplingHeapProfile { - head: SamplingHeapProfileNode; - } - interface StartTrackingHeapObjectsParameterType { - trackAllocations?: boolean | undefined; - } - interface StopTrackingHeapObjectsParameterType { - /** - * If true 'reportHeapSnapshotProgress' events will be generated while snapshot is being taken when the tracking is stopped. - */ - reportProgress?: boolean | undefined; - } - interface TakeHeapSnapshotParameterType { - /** - * If true 'reportHeapSnapshotProgress' events will be generated while snapshot is being taken. - */ - reportProgress?: boolean | undefined; - } - interface GetObjectByHeapObjectIdParameterType { - objectId: HeapSnapshotObjectId; - /** - * Symbolic group name that can be used to release multiple objects. - */ - objectGroup?: string | undefined; - } - interface AddInspectedHeapObjectParameterType { - /** - * Heap snapshot object id to be accessible by means of $x command line API. - */ - heapObjectId: HeapSnapshotObjectId; - } - interface GetHeapObjectIdParameterType { - /** - * Identifier of the object to get heap object id for. - */ - objectId: Runtime.RemoteObjectId; - } - interface StartSamplingParameterType { - /** - * Average sample interval in bytes. Poisson distribution is used for the intervals. The default value is 32768 bytes. - */ - samplingInterval?: number | undefined; - } - interface GetObjectByHeapObjectIdReturnType { - /** - * Evaluation result. - */ - result: Runtime.RemoteObject; - } - interface GetHeapObjectIdReturnType { - /** - * Id of the heap snapshot object corresponding to the passed remote object id. - */ - heapSnapshotObjectId: HeapSnapshotObjectId; - } - interface StopSamplingReturnType { - /** - * Recorded sampling heap profile. - */ - profile: SamplingHeapProfile; - } - interface GetSamplingProfileReturnType { - /** - * Return the sampling profile being collected. - */ - profile: SamplingHeapProfile; - } - interface AddHeapSnapshotChunkEventDataType { - chunk: string; - } - interface ReportHeapSnapshotProgressEventDataType { - done: number; - total: number; - finished?: boolean | undefined; - } - interface LastSeenObjectIdEventDataType { - lastSeenObjectId: number; - timestamp: number; - } - interface HeapStatsUpdateEventDataType { - /** - * An array of triplets. Each triplet describes a fragment. The first integer is the fragment index, the second integer is a total count of objects for the fragment, the third integer is a total size of the objects for the fragment. - */ - statsUpdate: number[]; - } - } - namespace NodeTracing { - interface TraceConfig { - /** - * Controls how the trace buffer stores data. - */ - recordMode?: string | undefined; - /** - * Included category filters. - */ - includedCategories: string[]; - } - interface StartParameterType { - traceConfig: TraceConfig; - } - interface GetCategoriesReturnType { - /** - * A list of supported tracing categories. 
- */ - categories: string[]; - } - interface DataCollectedEventDataType { - value: Array<{}>; - } - } - namespace NodeWorker { - type WorkerID = string; - /** - * Unique identifier of attached debugging session. - */ - type SessionID = string; - interface WorkerInfo { - workerId: WorkerID; - type: string; - title: string; - url: string; - } - interface SendMessageToWorkerParameterType { - message: string; - /** - * Identifier of the session. - */ - sessionId: SessionID; - } - interface EnableParameterType { - /** - * Whether to new workers should be paused until the frontend sends `Runtime.runIfWaitingForDebugger` - * message to run them. - */ - waitForDebuggerOnStart: boolean; - } - interface DetachParameterType { - sessionId: SessionID; - } - interface AttachedToWorkerEventDataType { - /** - * Identifier assigned to the session used to send/receive messages. - */ - sessionId: SessionID; - workerInfo: WorkerInfo; - waitingForDebugger: boolean; - } - interface DetachedFromWorkerEventDataType { - /** - * Detached session identifier. - */ - sessionId: SessionID; - } - interface ReceivedMessageFromWorkerEventDataType { - /** - * Identifier of a session which sends a message. - */ - sessionId: SessionID; - message: string; - } - } - namespace NodeRuntime { - interface NotifyWhenWaitingForDisconnectParameterType { - enabled: boolean; - } - } - /** - * The `inspector.Session` is used for dispatching messages to the V8 inspector - * back-end and receiving message responses and notifications. - */ - class Session extends EventEmitter { - /** - * Create a new instance of the inspector.Session class. - * The inspector session needs to be connected through session.connect() before the messages can be dispatched to the inspector backend. - */ - constructor(); - /** - * Connects a session to the inspector back-end. - * @since v8.0.0 - */ - connect(): void; - /** - * Immediately close the session. All pending message callbacks will be called - * with an error. `session.connect()` will need to be called to be able to send - * messages again. Reconnected session will lose all inspector state, such as - * enabled agents or configured breakpoints. - * @since v8.0.0 - */ - disconnect(): void; - /** - * Posts a message to the inspector back-end. `callback` will be notified when - * a response is received. `callback` is a function that accepts two optional - * arguments: error and message-specific result. - * - * ```js - * session.post('Runtime.evaluate', { expression: '2 + 2' }, - * (error, { result }) => console.log(result)); - * // Output: { type: 'number', value: 4, description: '4' } - * ``` - * - * The latest version of the V8 inspector protocol is published on the [Chrome DevTools Protocol Viewer](https://chromedevtools.github.io/devtools-protocol/v8/). - * - * Node.js inspector supports all the Chrome DevTools Protocol domains declared - * by V8\. Chrome DevTools Protocol domain provides an interface for interacting - * with one of the runtime agents used to inspect the application state and listen - * to the run-time events. - * - * ## Example usage - * - * Apart from the debugger, various V8 Profilers are available through the DevTools - * protocol. - * @since v8.0.0 - */ - post(method: string, params?: {}, callback?: (err: Error | null, params?: {}) => void): void; - post(method: string, callback?: (err: Error | null, params?: {}) => void): void; - /** - * Returns supported domains. 
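// ---------------------------------------------------------------------------
// Illustrative usage sketch (editor's example, not part of these declarations):
// the basic Session lifecycle — construct, connect(), post() one command, then
// disconnect(). Schema.getDomains is a cheap first call that lists which
// protocol domains this inspector build supports.
import { Session } from 'inspector';

const session = new Session();
session.connect();
session.post('Schema.getDomains', (err, res) => {
    if (err) throw err;
    console.log('supported domains:', res.domains.map((d) => d.name).join(', '));
    session.disconnect();
});
// ---------------------------------------------------------------------------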
- */ - post(method: 'Schema.getDomains', callback?: (err: Error | null, params: Schema.GetDomainsReturnType) => void): void; - /** - * Evaluates expression on global object. - */ - post(method: 'Runtime.evaluate', params?: Runtime.EvaluateParameterType, callback?: (err: Error | null, params: Runtime.EvaluateReturnType) => void): void; - post(method: 'Runtime.evaluate', callback?: (err: Error | null, params: Runtime.EvaluateReturnType) => void): void; - /** - * Add handler to promise with given promise object id. - */ - post(method: 'Runtime.awaitPromise', params?: Runtime.AwaitPromiseParameterType, callback?: (err: Error | null, params: Runtime.AwaitPromiseReturnType) => void): void; - post(method: 'Runtime.awaitPromise', callback?: (err: Error | null, params: Runtime.AwaitPromiseReturnType) => void): void; - /** - * Calls function with given declaration on the given object. Object group of the result is inherited from the target object. - */ - post(method: 'Runtime.callFunctionOn', params?: Runtime.CallFunctionOnParameterType, callback?: (err: Error | null, params: Runtime.CallFunctionOnReturnType) => void): void; - post(method: 'Runtime.callFunctionOn', callback?: (err: Error | null, params: Runtime.CallFunctionOnReturnType) => void): void; - /** - * Returns properties of a given object. Object group of the result is inherited from the target object. - */ - post(method: 'Runtime.getProperties', params?: Runtime.GetPropertiesParameterType, callback?: (err: Error | null, params: Runtime.GetPropertiesReturnType) => void): void; - post(method: 'Runtime.getProperties', callback?: (err: Error | null, params: Runtime.GetPropertiesReturnType) => void): void; - /** - * Releases remote object with given id. - */ - post(method: 'Runtime.releaseObject', params?: Runtime.ReleaseObjectParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Runtime.releaseObject', callback?: (err: Error | null) => void): void; - /** - * Releases all remote objects that belong to a given group. - */ - post(method: 'Runtime.releaseObjectGroup', params?: Runtime.ReleaseObjectGroupParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Runtime.releaseObjectGroup', callback?: (err: Error | null) => void): void; - /** - * Tells inspected instance to run if it was waiting for debugger to attach. - */ - post(method: 'Runtime.runIfWaitingForDebugger', callback?: (err: Error | null) => void): void; - /** - * Enables reporting of execution contexts creation by means of executionContextCreated event. When the reporting gets enabled the event will be sent immediately for each existing execution context. - */ - post(method: 'Runtime.enable', callback?: (err: Error | null) => void): void; - /** - * Disables reporting of execution contexts creation. - */ - post(method: 'Runtime.disable', callback?: (err: Error | null) => void): void; - /** - * Discards collected exceptions and console API calls. - */ - post(method: 'Runtime.discardConsoleEntries', callback?: (err: Error | null) => void): void; - /** - * @experimental - */ - post(method: 'Runtime.setCustomObjectFormatterEnabled', params?: Runtime.SetCustomObjectFormatterEnabledParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Runtime.setCustomObjectFormatterEnabled', callback?: (err: Error | null) => void): void; - /** - * Compiles expression. 
- */ - post(method: 'Runtime.compileScript', params?: Runtime.CompileScriptParameterType, callback?: (err: Error | null, params: Runtime.CompileScriptReturnType) => void): void; - post(method: 'Runtime.compileScript', callback?: (err: Error | null, params: Runtime.CompileScriptReturnType) => void): void; - /** - * Runs script with given id in a given context. - */ - post(method: 'Runtime.runScript', params?: Runtime.RunScriptParameterType, callback?: (err: Error | null, params: Runtime.RunScriptReturnType) => void): void; - post(method: 'Runtime.runScript', callback?: (err: Error | null, params: Runtime.RunScriptReturnType) => void): void; - post(method: 'Runtime.queryObjects', params?: Runtime.QueryObjectsParameterType, callback?: (err: Error | null, params: Runtime.QueryObjectsReturnType) => void): void; - post(method: 'Runtime.queryObjects', callback?: (err: Error | null, params: Runtime.QueryObjectsReturnType) => void): void; - /** - * Returns all let, const and class variables from global scope. - */ - post( - method: 'Runtime.globalLexicalScopeNames', - params?: Runtime.GlobalLexicalScopeNamesParameterType, - callback?: (err: Error | null, params: Runtime.GlobalLexicalScopeNamesReturnType) => void - ): void; - post(method: 'Runtime.globalLexicalScopeNames', callback?: (err: Error | null, params: Runtime.GlobalLexicalScopeNamesReturnType) => void): void; - /** - * Enables debugger for the given page. Clients should not assume that the debugging has been enabled until the result for this command is received. - */ - post(method: 'Debugger.enable', callback?: (err: Error | null, params: Debugger.EnableReturnType) => void): void; - /** - * Disables debugger for given page. - */ - post(method: 'Debugger.disable', callback?: (err: Error | null) => void): void; - /** - * Activates / deactivates all breakpoints on the page. - */ - post(method: 'Debugger.setBreakpointsActive', params?: Debugger.SetBreakpointsActiveParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Debugger.setBreakpointsActive', callback?: (err: Error | null) => void): void; - /** - * Makes page not interrupt on any pauses (breakpoint, exception, dom exception etc). - */ - post(method: 'Debugger.setSkipAllPauses', params?: Debugger.SetSkipAllPausesParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Debugger.setSkipAllPauses', callback?: (err: Error | null) => void): void; - /** - * Sets JavaScript breakpoint at given location specified either by URL or URL regex. Once this command is issued, all existing parsed scripts will have breakpoints resolved and returned in locations property. Further matching script parsing will result in subsequent breakpointResolved events issued. This logical breakpoint will survive page reloads. - */ - post(method: 'Debugger.setBreakpointByUrl', params?: Debugger.SetBreakpointByUrlParameterType, callback?: (err: Error | null, params: Debugger.SetBreakpointByUrlReturnType) => void): void; - post(method: 'Debugger.setBreakpointByUrl', callback?: (err: Error | null, params: Debugger.SetBreakpointByUrlReturnType) => void): void; - /** - * Sets JavaScript breakpoint at a given location. - */ - post(method: 'Debugger.setBreakpoint', params?: Debugger.SetBreakpointParameterType, callback?: (err: Error | null, params: Debugger.SetBreakpointReturnType) => void): void; - post(method: 'Debugger.setBreakpoint', callback?: (err: Error | null, params: Debugger.SetBreakpointReturnType) => void): void; - /** - * Removes JavaScript breakpoint. 
- */ - post(method: 'Debugger.removeBreakpoint', params?: Debugger.RemoveBreakpointParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Debugger.removeBreakpoint', callback?: (err: Error | null) => void): void; - /** - * Returns possible locations for breakpoint. scriptId in start and end range locations should be the same. - */ - post( - method: 'Debugger.getPossibleBreakpoints', - params?: Debugger.GetPossibleBreakpointsParameterType, - callback?: (err: Error | null, params: Debugger.GetPossibleBreakpointsReturnType) => void - ): void; - post(method: 'Debugger.getPossibleBreakpoints', callback?: (err: Error | null, params: Debugger.GetPossibleBreakpointsReturnType) => void): void; - /** - * Continues execution until specific location is reached. - */ - post(method: 'Debugger.continueToLocation', params?: Debugger.ContinueToLocationParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Debugger.continueToLocation', callback?: (err: Error | null) => void): void; - /** - * @experimental - */ - post(method: 'Debugger.pauseOnAsyncCall', params?: Debugger.PauseOnAsyncCallParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Debugger.pauseOnAsyncCall', callback?: (err: Error | null) => void): void; - /** - * Steps over the statement. - */ - post(method: 'Debugger.stepOver', callback?: (err: Error | null) => void): void; - /** - * Steps into the function call. - */ - post(method: 'Debugger.stepInto', params?: Debugger.StepIntoParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Debugger.stepInto', callback?: (err: Error | null) => void): void; - /** - * Steps out of the function call. - */ - post(method: 'Debugger.stepOut', callback?: (err: Error | null) => void): void; - /** - * Stops on the next JavaScript statement. - */ - post(method: 'Debugger.pause', callback?: (err: Error | null) => void): void; - /** - * This method is deprecated - use Debugger.stepInto with breakOnAsyncCall and Debugger.pauseOnAsyncTask instead. Steps into next scheduled async task if any is scheduled before next pause. Returns success when async task is actually scheduled, returns error if no task were scheduled or another scheduleStepIntoAsync was called. - * @experimental - */ - post(method: 'Debugger.scheduleStepIntoAsync', callback?: (err: Error | null) => void): void; - /** - * Resumes JavaScript execution. - */ - post(method: 'Debugger.resume', callback?: (err: Error | null) => void): void; - /** - * Returns stack trace with given stackTraceId. - * @experimental - */ - post(method: 'Debugger.getStackTrace', params?: Debugger.GetStackTraceParameterType, callback?: (err: Error | null, params: Debugger.GetStackTraceReturnType) => void): void; - post(method: 'Debugger.getStackTrace', callback?: (err: Error | null, params: Debugger.GetStackTraceReturnType) => void): void; - /** - * Searches for given string in script content. - */ - post(method: 'Debugger.searchInContent', params?: Debugger.SearchInContentParameterType, callback?: (err: Error | null, params: Debugger.SearchInContentReturnType) => void): void; - post(method: 'Debugger.searchInContent', callback?: (err: Error | null, params: Debugger.SearchInContentReturnType) => void): void; - /** - * Edits JavaScript source live. 
- */ - post(method: 'Debugger.setScriptSource', params?: Debugger.SetScriptSourceParameterType, callback?: (err: Error | null, params: Debugger.SetScriptSourceReturnType) => void): void; - post(method: 'Debugger.setScriptSource', callback?: (err: Error | null, params: Debugger.SetScriptSourceReturnType) => void): void; - /** - * Restarts particular call frame from the beginning. - */ - post(method: 'Debugger.restartFrame', params?: Debugger.RestartFrameParameterType, callback?: (err: Error | null, params: Debugger.RestartFrameReturnType) => void): void; - post(method: 'Debugger.restartFrame', callback?: (err: Error | null, params: Debugger.RestartFrameReturnType) => void): void; - /** - * Returns source for the script with given id. - */ - post(method: 'Debugger.getScriptSource', params?: Debugger.GetScriptSourceParameterType, callback?: (err: Error | null, params: Debugger.GetScriptSourceReturnType) => void): void; - post(method: 'Debugger.getScriptSource', callback?: (err: Error | null, params: Debugger.GetScriptSourceReturnType) => void): void; - /** - * Defines pause on exceptions state. Can be set to stop on all exceptions, uncaught exceptions or no exceptions. Initial pause on exceptions state is none. - */ - post(method: 'Debugger.setPauseOnExceptions', params?: Debugger.SetPauseOnExceptionsParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Debugger.setPauseOnExceptions', callback?: (err: Error | null) => void): void; - /** - * Evaluates expression on a given call frame. - */ - post(method: 'Debugger.evaluateOnCallFrame', params?: Debugger.EvaluateOnCallFrameParameterType, callback?: (err: Error | null, params: Debugger.EvaluateOnCallFrameReturnType) => void): void; - post(method: 'Debugger.evaluateOnCallFrame', callback?: (err: Error | null, params: Debugger.EvaluateOnCallFrameReturnType) => void): void; - /** - * Changes value of variable in a callframe. Object-based scopes are not supported and must be mutated manually. - */ - post(method: 'Debugger.setVariableValue', params?: Debugger.SetVariableValueParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Debugger.setVariableValue', callback?: (err: Error | null) => void): void; - /** - * Changes return value in top frame. Available only at return break position. - * @experimental - */ - post(method: 'Debugger.setReturnValue', params?: Debugger.SetReturnValueParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Debugger.setReturnValue', callback?: (err: Error | null) => void): void; - /** - * Enables or disables async call stacks tracking. - */ - post(method: 'Debugger.setAsyncCallStackDepth', params?: Debugger.SetAsyncCallStackDepthParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Debugger.setAsyncCallStackDepth', callback?: (err: Error | null) => void): void; - /** - * Replace previous blackbox patterns with passed ones. Forces backend to skip stepping/pausing in scripts with url matching one of the patterns. VM will try to leave blackboxed script by performing 'step in' several times, finally resorting to 'step out' if unsuccessful. - * @experimental - */ - post(method: 'Debugger.setBlackboxPatterns', params?: Debugger.SetBlackboxPatternsParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Debugger.setBlackboxPatterns', callback?: (err: Error | null) => void): void; - /** - * Makes backend skip steps in the script in blackboxed ranges. 
VM will try leave blacklisted scripts by performing 'step in' several times, finally resorting to 'step out' if unsuccessful. Positions array contains positions where blackbox state is changed. First interval isn't blackboxed. Array should be sorted. - * @experimental - */ - post(method: 'Debugger.setBlackboxedRanges', params?: Debugger.SetBlackboxedRangesParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Debugger.setBlackboxedRanges', callback?: (err: Error | null) => void): void; - /** - * Enables console domain, sends the messages collected so far to the client by means of the messageAdded notification. - */ - post(method: 'Console.enable', callback?: (err: Error | null) => void): void; - /** - * Disables console domain, prevents further console messages from being reported to the client. - */ - post(method: 'Console.disable', callback?: (err: Error | null) => void): void; - /** - * Does nothing. - */ - post(method: 'Console.clearMessages', callback?: (err: Error | null) => void): void; - post(method: 'Profiler.enable', callback?: (err: Error | null) => void): void; - post(method: 'Profiler.disable', callback?: (err: Error | null) => void): void; - /** - * Changes CPU profiler sampling interval. Must be called before CPU profiles recording started. - */ - post(method: 'Profiler.setSamplingInterval', params?: Profiler.SetSamplingIntervalParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Profiler.setSamplingInterval', callback?: (err: Error | null) => void): void; - post(method: 'Profiler.start', callback?: (err: Error | null) => void): void; - post(method: 'Profiler.stop', callback?: (err: Error | null, params: Profiler.StopReturnType) => void): void; - /** - * Enable precise code coverage. Coverage data for JavaScript executed before enabling precise code coverage may be incomplete. Enabling prevents running optimized code and resets execution counters. - */ - post(method: 'Profiler.startPreciseCoverage', params?: Profiler.StartPreciseCoverageParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Profiler.startPreciseCoverage', callback?: (err: Error | null) => void): void; - /** - * Disable precise code coverage. Disabling releases unnecessary execution count records and allows executing optimized code. - */ - post(method: 'Profiler.stopPreciseCoverage', callback?: (err: Error | null) => void): void; - /** - * Collect coverage data for the current isolate, and resets execution counters. Precise code coverage needs to have started. - */ - post(method: 'Profiler.takePreciseCoverage', callback?: (err: Error | null, params: Profiler.TakePreciseCoverageReturnType) => void): void; - /** - * Collect coverage data for the current isolate. The coverage data may be incomplete due to garbage collection. - */ - post(method: 'Profiler.getBestEffortCoverage', callback?: (err: Error | null, params: Profiler.GetBestEffortCoverageReturnType) => void): void; - /** - * Enable type profile. - * @experimental - */ - post(method: 'Profiler.startTypeProfile', callback?: (err: Error | null) => void): void; - /** - * Disable type profile. Disabling releases type profile data collected so far. - * @experimental - */ - post(method: 'Profiler.stopTypeProfile', callback?: (err: Error | null) => void): void; - /** - * Collect type profile. 
- * @experimental - */ - post(method: 'Profiler.takeTypeProfile', callback?: (err: Error | null, params: Profiler.TakeTypeProfileReturnType) => void): void; - post(method: 'HeapProfiler.enable', callback?: (err: Error | null) => void): void; - post(method: 'HeapProfiler.disable', callback?: (err: Error | null) => void): void; - post(method: 'HeapProfiler.startTrackingHeapObjects', params?: HeapProfiler.StartTrackingHeapObjectsParameterType, callback?: (err: Error | null) => void): void; - post(method: 'HeapProfiler.startTrackingHeapObjects', callback?: (err: Error | null) => void): void; - post(method: 'HeapProfiler.stopTrackingHeapObjects', params?: HeapProfiler.StopTrackingHeapObjectsParameterType, callback?: (err: Error | null) => void): void; - post(method: 'HeapProfiler.stopTrackingHeapObjects', callback?: (err: Error | null) => void): void; - post(method: 'HeapProfiler.takeHeapSnapshot', params?: HeapProfiler.TakeHeapSnapshotParameterType, callback?: (err: Error | null) => void): void; - post(method: 'HeapProfiler.takeHeapSnapshot', callback?: (err: Error | null) => void): void; - post(method: 'HeapProfiler.collectGarbage', callback?: (err: Error | null) => void): void; - post( - method: 'HeapProfiler.getObjectByHeapObjectId', - params?: HeapProfiler.GetObjectByHeapObjectIdParameterType, - callback?: (err: Error | null, params: HeapProfiler.GetObjectByHeapObjectIdReturnType) => void - ): void; - post(method: 'HeapProfiler.getObjectByHeapObjectId', callback?: (err: Error | null, params: HeapProfiler.GetObjectByHeapObjectIdReturnType) => void): void; - /** - * Enables console to refer to the node with given id via $x (see Command Line API for more details $x functions). - */ - post(method: 'HeapProfiler.addInspectedHeapObject', params?: HeapProfiler.AddInspectedHeapObjectParameterType, callback?: (err: Error | null) => void): void; - post(method: 'HeapProfiler.addInspectedHeapObject', callback?: (err: Error | null) => void): void; - post(method: 'HeapProfiler.getHeapObjectId', params?: HeapProfiler.GetHeapObjectIdParameterType, callback?: (err: Error | null, params: HeapProfiler.GetHeapObjectIdReturnType) => void): void; - post(method: 'HeapProfiler.getHeapObjectId', callback?: (err: Error | null, params: HeapProfiler.GetHeapObjectIdReturnType) => void): void; - post(method: 'HeapProfiler.startSampling', params?: HeapProfiler.StartSamplingParameterType, callback?: (err: Error | null) => void): void; - post(method: 'HeapProfiler.startSampling', callback?: (err: Error | null) => void): void; - post(method: 'HeapProfiler.stopSampling', callback?: (err: Error | null, params: HeapProfiler.StopSamplingReturnType) => void): void; - post(method: 'HeapProfiler.getSamplingProfile', callback?: (err: Error | null, params: HeapProfiler.GetSamplingProfileReturnType) => void): void; - /** - * Gets supported tracing categories. - */ - post(method: 'NodeTracing.getCategories', callback?: (err: Error | null, params: NodeTracing.GetCategoriesReturnType) => void): void; - /** - * Start trace events collection. - */ - post(method: 'NodeTracing.start', params?: NodeTracing.StartParameterType, callback?: (err: Error | null) => void): void; - post(method: 'NodeTracing.start', callback?: (err: Error | null) => void): void; - /** - * Stop trace events collection. Remaining collected events will be sent as a sequence of - * dataCollected events followed by tracingComplete event. 
- */ - post(method: 'NodeTracing.stop', callback?: (err: Error | null) => void): void; - /** - * Sends protocol message over session with given id. - */ - post(method: 'NodeWorker.sendMessageToWorker', params?: NodeWorker.SendMessageToWorkerParameterType, callback?: (err: Error | null) => void): void; - post(method: 'NodeWorker.sendMessageToWorker', callback?: (err: Error | null) => void): void; - /** - * Instructs the inspector to attach to running workers. Will also attach to new workers - * as they start - */ - post(method: 'NodeWorker.enable', params?: NodeWorker.EnableParameterType, callback?: (err: Error | null) => void): void; - post(method: 'NodeWorker.enable', callback?: (err: Error | null) => void): void; - /** - * Detaches from all running workers and disables attaching to new workers as they are started. - */ - post(method: 'NodeWorker.disable', callback?: (err: Error | null) => void): void; - /** - * Detached from the worker with given sessionId. - */ - post(method: 'NodeWorker.detach', params?: NodeWorker.DetachParameterType, callback?: (err: Error | null) => void): void; - post(method: 'NodeWorker.detach', callback?: (err: Error | null) => void): void; - /** - * Enable the `NodeRuntime.waitingForDisconnect`. - */ - post(method: 'NodeRuntime.notifyWhenWaitingForDisconnect', params?: NodeRuntime.NotifyWhenWaitingForDisconnectParameterType, callback?: (err: Error | null) => void): void; - post(method: 'NodeRuntime.notifyWhenWaitingForDisconnect', callback?: (err: Error | null) => void): void; - // Events - addListener(event: string, listener: (...args: any[]) => void): this; - /** - * Emitted when any notification from the V8 Inspector is received. - */ - addListener(event: 'inspectorNotification', listener: (message: InspectorNotification<{}>) => void): this; - /** - * Issued when new execution context is created. - */ - addListener(event: 'Runtime.executionContextCreated', listener: (message: InspectorNotification) => void): this; - /** - * Issued when execution context is destroyed. - */ - addListener(event: 'Runtime.executionContextDestroyed', listener: (message: InspectorNotification) => void): this; - /** - * Issued when all executionContexts were cleared in browser - */ - addListener(event: 'Runtime.executionContextsCleared', listener: () => void): this; - /** - * Issued when exception was thrown and unhandled. - */ - addListener(event: 'Runtime.exceptionThrown', listener: (message: InspectorNotification) => void): this; - /** - * Issued when unhandled exception was revoked. - */ - addListener(event: 'Runtime.exceptionRevoked', listener: (message: InspectorNotification) => void): this; - /** - * Issued when console API was called. - */ - addListener(event: 'Runtime.consoleAPICalled', listener: (message: InspectorNotification) => void): this; - /** - * Issued when object should be inspected (for example, as a result of inspect() command line API call). - */ - addListener(event: 'Runtime.inspectRequested', listener: (message: InspectorNotification) => void): this; - /** - * Fired when virtual machine parses script. This event is also fired for all known and uncollected scripts upon enabling debugger. - */ - addListener(event: 'Debugger.scriptParsed', listener: (message: InspectorNotification) => void): this; - /** - * Fired when virtual machine fails to parse the script. - */ - addListener(event: 'Debugger.scriptFailedToParse', listener: (message: InspectorNotification) => void): this; - /** - * Fired when breakpoint is resolved to an actual script and location. 
- */ - addListener(event: 'Debugger.breakpointResolved', listener: (message: InspectorNotification) => void): this; - /** - * Fired when the virtual machine stopped on breakpoint or exception or any other stop criteria. - */ - addListener(event: 'Debugger.paused', listener: (message: InspectorNotification) => void): this; - /** - * Fired when the virtual machine resumed execution. - */ - addListener(event: 'Debugger.resumed', listener: () => void): this; - /** - * Issued when new console message is added. - */ - addListener(event: 'Console.messageAdded', listener: (message: InspectorNotification) => void): this; - /** - * Sent when new profile recording is started using console.profile() call. - */ - addListener(event: 'Profiler.consoleProfileStarted', listener: (message: InspectorNotification) => void): this; - addListener(event: 'Profiler.consoleProfileFinished', listener: (message: InspectorNotification) => void): this; - addListener(event: 'HeapProfiler.addHeapSnapshotChunk', listener: (message: InspectorNotification) => void): this; - addListener(event: 'HeapProfiler.resetProfiles', listener: () => void): this; - addListener(event: 'HeapProfiler.reportHeapSnapshotProgress', listener: (message: InspectorNotification) => void): this; - /** - * If heap objects tracking has been started then backend regularly sends a current value for last seen object id and corresponding timestamp. If the were changes in the heap since last event then one or more heapStatsUpdate events will be sent before a new lastSeenObjectId event. - */ - addListener(event: 'HeapProfiler.lastSeenObjectId', listener: (message: InspectorNotification) => void): this; - /** - * If heap objects tracking has been started then backend may send update for one or more fragments - */ - addListener(event: 'HeapProfiler.heapStatsUpdate', listener: (message: InspectorNotification) => void): this; - /** - * Contains an bucket of collected trace events. - */ - addListener(event: 'NodeTracing.dataCollected', listener: (message: InspectorNotification) => void): this; - /** - * Signals that tracing is stopped and there is no trace buffers pending flush, all data were - * delivered via dataCollected events. - */ - addListener(event: 'NodeTracing.tracingComplete', listener: () => void): this; - /** - * Issued when attached to a worker. - */ - addListener(event: 'NodeWorker.attachedToWorker', listener: (message: InspectorNotification) => void): this; - /** - * Issued when detached from the worker. - */ - addListener(event: 'NodeWorker.detachedFromWorker', listener: (message: InspectorNotification) => void): this; - /** - * Notifies about a new protocol message received from the session - * (session ID is provided in attachedToWorker notification). - */ - addListener(event: 'NodeWorker.receivedMessageFromWorker', listener: (message: InspectorNotification) => void): this; - /** - * This event is fired instead of `Runtime.executionContextDestroyed` when - * enabled. - * It is fired when the Node process finished all code execution and is - * waiting for all frontends to disconnect. 
- */ - addListener(event: 'NodeRuntime.waitingForDisconnect', listener: () => void): this; - emit(event: string | symbol, ...args: any[]): boolean; - emit(event: 'inspectorNotification', message: InspectorNotification<{}>): boolean; - emit(event: 'Runtime.executionContextCreated', message: InspectorNotification): boolean; - emit(event: 'Runtime.executionContextDestroyed', message: InspectorNotification): boolean; - emit(event: 'Runtime.executionContextsCleared'): boolean; - emit(event: 'Runtime.exceptionThrown', message: InspectorNotification): boolean; - emit(event: 'Runtime.exceptionRevoked', message: InspectorNotification): boolean; - emit(event: 'Runtime.consoleAPICalled', message: InspectorNotification): boolean; - emit(event: 'Runtime.inspectRequested', message: InspectorNotification): boolean; - emit(event: 'Debugger.scriptParsed', message: InspectorNotification): boolean; - emit(event: 'Debugger.scriptFailedToParse', message: InspectorNotification): boolean; - emit(event: 'Debugger.breakpointResolved', message: InspectorNotification): boolean; - emit(event: 'Debugger.paused', message: InspectorNotification): boolean; - emit(event: 'Debugger.resumed'): boolean; - emit(event: 'Console.messageAdded', message: InspectorNotification): boolean; - emit(event: 'Profiler.consoleProfileStarted', message: InspectorNotification): boolean; - emit(event: 'Profiler.consoleProfileFinished', message: InspectorNotification): boolean; - emit(event: 'HeapProfiler.addHeapSnapshotChunk', message: InspectorNotification): boolean; - emit(event: 'HeapProfiler.resetProfiles'): boolean; - emit(event: 'HeapProfiler.reportHeapSnapshotProgress', message: InspectorNotification): boolean; - emit(event: 'HeapProfiler.lastSeenObjectId', message: InspectorNotification): boolean; - emit(event: 'HeapProfiler.heapStatsUpdate', message: InspectorNotification): boolean; - emit(event: 'NodeTracing.dataCollected', message: InspectorNotification): boolean; - emit(event: 'NodeTracing.tracingComplete'): boolean; - emit(event: 'NodeWorker.attachedToWorker', message: InspectorNotification): boolean; - emit(event: 'NodeWorker.detachedFromWorker', message: InspectorNotification): boolean; - emit(event: 'NodeWorker.receivedMessageFromWorker', message: InspectorNotification): boolean; - emit(event: 'NodeRuntime.waitingForDisconnect'): boolean; - on(event: string, listener: (...args: any[]) => void): this; - /** - * Emitted when any notification from the V8 Inspector is received. - */ - on(event: 'inspectorNotification', listener: (message: InspectorNotification<{}>) => void): this; - /** - * Issued when new execution context is created. - */ - on(event: 'Runtime.executionContextCreated', listener: (message: InspectorNotification) => void): this; - /** - * Issued when execution context is destroyed. - */ - on(event: 'Runtime.executionContextDestroyed', listener: (message: InspectorNotification) => void): this; - /** - * Issued when all executionContexts were cleared in browser - */ - on(event: 'Runtime.executionContextsCleared', listener: () => void): this; - /** - * Issued when exception was thrown and unhandled. - */ - on(event: 'Runtime.exceptionThrown', listener: (message: InspectorNotification) => void): this; - /** - * Issued when unhandled exception was revoked. - */ - on(event: 'Runtime.exceptionRevoked', listener: (message: InspectorNotification) => void): this; - /** - * Issued when console API was called. 
- */ - on(event: 'Runtime.consoleAPICalled', listener: (message: InspectorNotification) => void): this; - /** - * Issued when object should be inspected (for example, as a result of inspect() command line API call). - */ - on(event: 'Runtime.inspectRequested', listener: (message: InspectorNotification) => void): this; - /** - * Fired when virtual machine parses script. This event is also fired for all known and uncollected scripts upon enabling debugger. - */ - on(event: 'Debugger.scriptParsed', listener: (message: InspectorNotification) => void): this; - /** - * Fired when virtual machine fails to parse the script. - */ - on(event: 'Debugger.scriptFailedToParse', listener: (message: InspectorNotification) => void): this; - /** - * Fired when breakpoint is resolved to an actual script and location. - */ - on(event: 'Debugger.breakpointResolved', listener: (message: InspectorNotification) => void): this; - /** - * Fired when the virtual machine stopped on breakpoint or exception or any other stop criteria. - */ - on(event: 'Debugger.paused', listener: (message: InspectorNotification) => void): this; - /** - * Fired when the virtual machine resumed execution. - */ - on(event: 'Debugger.resumed', listener: () => void): this; - /** - * Issued when new console message is added. - */ - on(event: 'Console.messageAdded', listener: (message: InspectorNotification) => void): this; - /** - * Sent when new profile recording is started using console.profile() call. - */ - on(event: 'Profiler.consoleProfileStarted', listener: (message: InspectorNotification) => void): this; - on(event: 'Profiler.consoleProfileFinished', listener: (message: InspectorNotification) => void): this; - on(event: 'HeapProfiler.addHeapSnapshotChunk', listener: (message: InspectorNotification) => void): this; - on(event: 'HeapProfiler.resetProfiles', listener: () => void): this; - on(event: 'HeapProfiler.reportHeapSnapshotProgress', listener: (message: InspectorNotification) => void): this; - /** - * If heap objects tracking has been started then backend regularly sends a current value for last seen object id and corresponding timestamp. If the were changes in the heap since last event then one or more heapStatsUpdate events will be sent before a new lastSeenObjectId event. - */ - on(event: 'HeapProfiler.lastSeenObjectId', listener: (message: InspectorNotification) => void): this; - /** - * If heap objects tracking has been started then backend may send update for one or more fragments - */ - on(event: 'HeapProfiler.heapStatsUpdate', listener: (message: InspectorNotification) => void): this; - /** - * Contains an bucket of collected trace events. - */ - on(event: 'NodeTracing.dataCollected', listener: (message: InspectorNotification) => void): this; - /** - * Signals that tracing is stopped and there is no trace buffers pending flush, all data were - * delivered via dataCollected events. - */ - on(event: 'NodeTracing.tracingComplete', listener: () => void): this; - /** - * Issued when attached to a worker. - */ - on(event: 'NodeWorker.attachedToWorker', listener: (message: InspectorNotification) => void): this; - /** - * Issued when detached from the worker. - */ - on(event: 'NodeWorker.detachedFromWorker', listener: (message: InspectorNotification) => void): this; - /** - * Notifies about a new protocol message received from the session - * (session ID is provided in attachedToWorker notification). 
- */ - on(event: 'NodeWorker.receivedMessageFromWorker', listener: (message: InspectorNotification) => void): this; - /** - * This event is fired instead of `Runtime.executionContextDestroyed` when - * enabled. - * It is fired when the Node process finished all code execution and is - * waiting for all frontends to disconnect. - */ - on(event: 'NodeRuntime.waitingForDisconnect', listener: () => void): this; - once(event: string, listener: (...args: any[]) => void): this; - /** - * Emitted when any notification from the V8 Inspector is received. - */ - once(event: 'inspectorNotification', listener: (message: InspectorNotification<{}>) => void): this; - /** - * Issued when new execution context is created. - */ - once(event: 'Runtime.executionContextCreated', listener: (message: InspectorNotification) => void): this; - /** - * Issued when execution context is destroyed. - */ - once(event: 'Runtime.executionContextDestroyed', listener: (message: InspectorNotification) => void): this; - /** - * Issued when all executionContexts were cleared in browser - */ - once(event: 'Runtime.executionContextsCleared', listener: () => void): this; - /** - * Issued when exception was thrown and unhandled. - */ - once(event: 'Runtime.exceptionThrown', listener: (message: InspectorNotification) => void): this; - /** - * Issued when unhandled exception was revoked. - */ - once(event: 'Runtime.exceptionRevoked', listener: (message: InspectorNotification) => void): this; - /** - * Issued when console API was called. - */ - once(event: 'Runtime.consoleAPICalled', listener: (message: InspectorNotification) => void): this; - /** - * Issued when object should be inspected (for example, as a result of inspect() command line API call). - */ - once(event: 'Runtime.inspectRequested', listener: (message: InspectorNotification) => void): this; - /** - * Fired when virtual machine parses script. This event is also fired for all known and uncollected scripts upon enabling debugger. - */ - once(event: 'Debugger.scriptParsed', listener: (message: InspectorNotification) => void): this; - /** - * Fired when virtual machine fails to parse the script. - */ - once(event: 'Debugger.scriptFailedToParse', listener: (message: InspectorNotification) => void): this; - /** - * Fired when breakpoint is resolved to an actual script and location. - */ - once(event: 'Debugger.breakpointResolved', listener: (message: InspectorNotification) => void): this; - /** - * Fired when the virtual machine stopped on breakpoint or exception or any other stop criteria. - */ - once(event: 'Debugger.paused', listener: (message: InspectorNotification) => void): this; - /** - * Fired when the virtual machine resumed execution. - */ - once(event: 'Debugger.resumed', listener: () => void): this; - /** - * Issued when new console message is added. - */ - once(event: 'Console.messageAdded', listener: (message: InspectorNotification) => void): this; - /** - * Sent when new profile recording is started using console.profile() call. 
- */ - once(event: 'Profiler.consoleProfileStarted', listener: (message: InspectorNotification) => void): this; - once(event: 'Profiler.consoleProfileFinished', listener: (message: InspectorNotification) => void): this; - once(event: 'HeapProfiler.addHeapSnapshotChunk', listener: (message: InspectorNotification) => void): this; - once(event: 'HeapProfiler.resetProfiles', listener: () => void): this; - once(event: 'HeapProfiler.reportHeapSnapshotProgress', listener: (message: InspectorNotification) => void): this; - /** - * If heap objects tracking has been started then backend regularly sends a current value for last seen object id and corresponding timestamp. If the were changes in the heap since last event then one or more heapStatsUpdate events will be sent before a new lastSeenObjectId event. - */ - once(event: 'HeapProfiler.lastSeenObjectId', listener: (message: InspectorNotification) => void): this; - /** - * If heap objects tracking has been started then backend may send update for one or more fragments - */ - once(event: 'HeapProfiler.heapStatsUpdate', listener: (message: InspectorNotification) => void): this; - /** - * Contains an bucket of collected trace events. - */ - once(event: 'NodeTracing.dataCollected', listener: (message: InspectorNotification) => void): this; - /** - * Signals that tracing is stopped and there is no trace buffers pending flush, all data were - * delivered via dataCollected events. - */ - once(event: 'NodeTracing.tracingComplete', listener: () => void): this; - /** - * Issued when attached to a worker. - */ - once(event: 'NodeWorker.attachedToWorker', listener: (message: InspectorNotification) => void): this; - /** - * Issued when detached from the worker. - */ - once(event: 'NodeWorker.detachedFromWorker', listener: (message: InspectorNotification) => void): this; - /** - * Notifies about a new protocol message received from the session - * (session ID is provided in attachedToWorker notification). - */ - once(event: 'NodeWorker.receivedMessageFromWorker', listener: (message: InspectorNotification) => void): this; - /** - * This event is fired instead of `Runtime.executionContextDestroyed` when - * enabled. - * It is fired when the Node process finished all code execution and is - * waiting for all frontends to disconnect. - */ - once(event: 'NodeRuntime.waitingForDisconnect', listener: () => void): this; - prependListener(event: string, listener: (...args: any[]) => void): this; - /** - * Emitted when any notification from the V8 Inspector is received. - */ - prependListener(event: 'inspectorNotification', listener: (message: InspectorNotification<{}>) => void): this; - /** - * Issued when new execution context is created. - */ - prependListener(event: 'Runtime.executionContextCreated', listener: (message: InspectorNotification) => void): this; - /** - * Issued when execution context is destroyed. - */ - prependListener(event: 'Runtime.executionContextDestroyed', listener: (message: InspectorNotification) => void): this; - /** - * Issued when all executionContexts were cleared in browser - */ - prependListener(event: 'Runtime.executionContextsCleared', listener: () => void): this; - /** - * Issued when exception was thrown and unhandled. - */ - prependListener(event: 'Runtime.exceptionThrown', listener: (message: InspectorNotification) => void): this; - /** - * Issued when unhandled exception was revoked. 
- */ - prependListener(event: 'Runtime.exceptionRevoked', listener: (message: InspectorNotification) => void): this; - /** - * Issued when console API was called. - */ - prependListener(event: 'Runtime.consoleAPICalled', listener: (message: InspectorNotification) => void): this; - /** - * Issued when object should be inspected (for example, as a result of inspect() command line API call). - */ - prependListener(event: 'Runtime.inspectRequested', listener: (message: InspectorNotification) => void): this; - /** - * Fired when virtual machine parses script. This event is also fired for all known and uncollected scripts upon enabling debugger. - */ - prependListener(event: 'Debugger.scriptParsed', listener: (message: InspectorNotification) => void): this; - /** - * Fired when virtual machine fails to parse the script. - */ - prependListener(event: 'Debugger.scriptFailedToParse', listener: (message: InspectorNotification) => void): this; - /** - * Fired when breakpoint is resolved to an actual script and location. - */ - prependListener(event: 'Debugger.breakpointResolved', listener: (message: InspectorNotification) => void): this; - /** - * Fired when the virtual machine stopped on breakpoint or exception or any other stop criteria. - */ - prependListener(event: 'Debugger.paused', listener: (message: InspectorNotification) => void): this; - /** - * Fired when the virtual machine resumed execution. - */ - prependListener(event: 'Debugger.resumed', listener: () => void): this; - /** - * Issued when new console message is added. - */ - prependListener(event: 'Console.messageAdded', listener: (message: InspectorNotification) => void): this; - /** - * Sent when new profile recording is started using console.profile() call. - */ - prependListener(event: 'Profiler.consoleProfileStarted', listener: (message: InspectorNotification) => void): this; - prependListener(event: 'Profiler.consoleProfileFinished', listener: (message: InspectorNotification) => void): this; - prependListener(event: 'HeapProfiler.addHeapSnapshotChunk', listener: (message: InspectorNotification) => void): this; - prependListener(event: 'HeapProfiler.resetProfiles', listener: () => void): this; - prependListener(event: 'HeapProfiler.reportHeapSnapshotProgress', listener: (message: InspectorNotification) => void): this; - /** - * If heap objects tracking has been started then backend regularly sends a current value for last seen object id and corresponding timestamp. If the were changes in the heap since last event then one or more heapStatsUpdate events will be sent before a new lastSeenObjectId event. - */ - prependListener(event: 'HeapProfiler.lastSeenObjectId', listener: (message: InspectorNotification) => void): this; - /** - * If heap objects tracking has been started then backend may send update for one or more fragments - */ - prependListener(event: 'HeapProfiler.heapStatsUpdate', listener: (message: InspectorNotification) => void): this; - /** - * Contains an bucket of collected trace events. - */ - prependListener(event: 'NodeTracing.dataCollected', listener: (message: InspectorNotification) => void): this; - /** - * Signals that tracing is stopped and there is no trace buffers pending flush, all data were - * delivered via dataCollected events. - */ - prependListener(event: 'NodeTracing.tracingComplete', listener: () => void): this; - /** - * Issued when attached to a worker. 
- */ - prependListener(event: 'NodeWorker.attachedToWorker', listener: (message: InspectorNotification) => void): this; - /** - * Issued when detached from the worker. - */ - prependListener(event: 'NodeWorker.detachedFromWorker', listener: (message: InspectorNotification) => void): this; - /** - * Notifies about a new protocol message received from the session - * (session ID is provided in attachedToWorker notification). - */ - prependListener(event: 'NodeWorker.receivedMessageFromWorker', listener: (message: InspectorNotification) => void): this; - /** - * This event is fired instead of `Runtime.executionContextDestroyed` when - * enabled. - * It is fired when the Node process finished all code execution and is - * waiting for all frontends to disconnect. - */ - prependListener(event: 'NodeRuntime.waitingForDisconnect', listener: () => void): this; - prependOnceListener(event: string, listener: (...args: any[]) => void): this; - /** - * Emitted when any notification from the V8 Inspector is received. - */ - prependOnceListener(event: 'inspectorNotification', listener: (message: InspectorNotification<{}>) => void): this; - /** - * Issued when new execution context is created. - */ - prependOnceListener(event: 'Runtime.executionContextCreated', listener: (message: InspectorNotification) => void): this; - /** - * Issued when execution context is destroyed. - */ - prependOnceListener(event: 'Runtime.executionContextDestroyed', listener: (message: InspectorNotification) => void): this; - /** - * Issued when all executionContexts were cleared in browser - */ - prependOnceListener(event: 'Runtime.executionContextsCleared', listener: () => void): this; - /** - * Issued when exception was thrown and unhandled. - */ - prependOnceListener(event: 'Runtime.exceptionThrown', listener: (message: InspectorNotification) => void): this; - /** - * Issued when unhandled exception was revoked. - */ - prependOnceListener(event: 'Runtime.exceptionRevoked', listener: (message: InspectorNotification) => void): this; - /** - * Issued when console API was called. - */ - prependOnceListener(event: 'Runtime.consoleAPICalled', listener: (message: InspectorNotification) => void): this; - /** - * Issued when object should be inspected (for example, as a result of inspect() command line API call). - */ - prependOnceListener(event: 'Runtime.inspectRequested', listener: (message: InspectorNotification) => void): this; - /** - * Fired when virtual machine parses script. This event is also fired for all known and uncollected scripts upon enabling debugger. - */ - prependOnceListener(event: 'Debugger.scriptParsed', listener: (message: InspectorNotification) => void): this; - /** - * Fired when virtual machine fails to parse the script. - */ - prependOnceListener(event: 'Debugger.scriptFailedToParse', listener: (message: InspectorNotification) => void): this; - /** - * Fired when breakpoint is resolved to an actual script and location. - */ - prependOnceListener(event: 'Debugger.breakpointResolved', listener: (message: InspectorNotification) => void): this; - /** - * Fired when the virtual machine stopped on breakpoint or exception or any other stop criteria. - */ - prependOnceListener(event: 'Debugger.paused', listener: (message: InspectorNotification) => void): this; - /** - * Fired when the virtual machine resumed execution. - */ - prependOnceListener(event: 'Debugger.resumed', listener: () => void): this; - /** - * Issued when new console message is added. 
- */ - prependOnceListener(event: 'Console.messageAdded', listener: (message: InspectorNotification) => void): this; - /** - * Sent when new profile recording is started using console.profile() call. - */ - prependOnceListener(event: 'Profiler.consoleProfileStarted', listener: (message: InspectorNotification) => void): this; - prependOnceListener(event: 'Profiler.consoleProfileFinished', listener: (message: InspectorNotification) => void): this; - prependOnceListener(event: 'HeapProfiler.addHeapSnapshotChunk', listener: (message: InspectorNotification) => void): this; - prependOnceListener(event: 'HeapProfiler.resetProfiles', listener: () => void): this; - prependOnceListener(event: 'HeapProfiler.reportHeapSnapshotProgress', listener: (message: InspectorNotification) => void): this; - /** - * If heap objects tracking has been started then backend regularly sends a current value for last seen object id and corresponding timestamp. If the were changes in the heap since last event then one or more heapStatsUpdate events will be sent before a new lastSeenObjectId event. - */ - prependOnceListener(event: 'HeapProfiler.lastSeenObjectId', listener: (message: InspectorNotification) => void): this; - /** - * If heap objects tracking has been started then backend may send update for one or more fragments - */ - prependOnceListener(event: 'HeapProfiler.heapStatsUpdate', listener: (message: InspectorNotification) => void): this; - /** - * Contains an bucket of collected trace events. - */ - prependOnceListener(event: 'NodeTracing.dataCollected', listener: (message: InspectorNotification) => void): this; - /** - * Signals that tracing is stopped and there is no trace buffers pending flush, all data were - * delivered via dataCollected events. - */ - prependOnceListener(event: 'NodeTracing.tracingComplete', listener: () => void): this; - /** - * Issued when attached to a worker. - */ - prependOnceListener(event: 'NodeWorker.attachedToWorker', listener: (message: InspectorNotification) => void): this; - /** - * Issued when detached from the worker. - */ - prependOnceListener(event: 'NodeWorker.detachedFromWorker', listener: (message: InspectorNotification) => void): this; - /** - * Notifies about a new protocol message received from the session - * (session ID is provided in attachedToWorker notification). - */ - prependOnceListener(event: 'NodeWorker.receivedMessageFromWorker', listener: (message: InspectorNotification) => void): this; - /** - * This event is fired instead of `Runtime.executionContextDestroyed` when - * enabled. - * It is fired when the Node process finished all code execution and is - * waiting for all frontends to disconnect. - */ - prependOnceListener(event: 'NodeRuntime.waitingForDisconnect', listener: () => void): this; - } - /** - * Activate inspector on host and port. Equivalent to`node --inspect=[[host:]port]`, but can be done programmatically after node has - * started. - * - * If wait is `true`, will block until a client has connected to the inspect port - * and flow control has been passed to the debugger client. - * - * See the `security warning` regarding the `host`parameter usage. - * @param [port='what was specified on the CLI'] Port to listen on for inspector connections. Optional. - * @param [host='what was specified on the CLI'] Host to listen on for inspector connections. Optional. - * @param [wait=false] Block until a client has connected. Optional. - */ - function open(port?: number, host?: string, wait?: boolean): void; - /** - * Deactivate the inspector. 
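// ---------------------------------------------------------------------------
// Illustrative usage sketch (editor's example, not part of these declarations):
// activating the inspector programmatically. Port 0 lets the OS pick a free
// port; url() then reports the WebSocket endpoint a DevTools frontend can
// attach to, and close() deactivates the inspector again.
import * as inspector from 'inspector';

inspector.open(0, '127.0.0.1', false); // wait=false: do not block for a client
console.log('inspector listening at', inspector.url());
// inspector.waitForDebugger(); // optionally block until a client attaches
inspector.close();
// ---------------------------------------------------------------------------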
Blocks until there are no active connections. - */ - function close(): void; - /** - * Return the URL of the active inspector, or `undefined` if there is none. - * - * ```console - * $ node --inspect -p 'inspector.url()' - * Debugger listening on ws://127.0.0.1:9229/166e272e-7a30-4d09-97ce-f1c012b43c34 - * For help, see: https://nodejs.org/en/docs/inspector - * ws://127.0.0.1:9229/166e272e-7a30-4d09-97ce-f1c012b43c34 - * - * $ node --inspect=localhost:3000 -p 'inspector.url()' - * Debugger listening on ws://localhost:3000/51cf8d0e-3c36-4c59-8efd-54519839e56a - * For help, see: https://nodejs.org/en/docs/inspector - * ws://localhost:3000/51cf8d0e-3c36-4c59-8efd-54519839e56a - * - * $ node -p 'inspector.url()' - * undefined - * ``` - */ - function url(): string | undefined; - /** - * Blocks until a client (existing or connected later) has sent`Runtime.runIfWaitingForDebugger` command. - * - * An exception will be thrown if there is no active inspector. - * @since v12.7.0 - */ - function waitForDebugger(): void; -} -/** - * The inspector module provides an API for interacting with the V8 inspector. - */ -declare module 'node:inspector' { - import inspector = require('inspector'); - export = inspector; -} diff --git a/spaces/raynardj/modern-chinese-to-ancient-translate-wenyanwen/app.py b/spaces/raynardj/modern-chinese-to-ancient-translate-wenyanwen/app.py deleted file mode 100644 index b5ba63e82224bf303daae113b7ddc9fc188d30a7..0000000000000000000000000000000000000000 --- a/spaces/raynardj/modern-chinese-to-ancient-translate-wenyanwen/app.py +++ /dev/null @@ -1,51 +0,0 @@ -from transformers import ( - EncoderDecoderModel, - AutoTokenizer -) -import torch -import streamlit as st - -PRETRAINED = "raynardj/wenyanwen-chinese-translate-to-ancient" - -def inference(text): - tk_kwargs = dict( - truncation=True, - max_length=128, - padding="max_length", - return_tensors='pt') - - inputs = tokenizer([text,],**tk_kwargs) - with torch.no_grad(): - return tokenizer.batch_decode( - model.generate( - inputs.input_ids, - attention_mask=inputs.attention_mask, - num_beams=3, - bos_token_id=101, - eos_token_id=tokenizer.sep_token_id, - pad_token_id=tokenizer.pad_token_id, - ), skip_special_tokens=True)[0].replace(" ","") - -st.title("🪕古朴 ❄️清雅 🌊壮丽") -st.markdown(""" -> Translate from Chinese to Ancient Chinese / 还你古朴清雅壮丽的文言文, -* 一个transformer神经网络的现代文向文言文的自动翻译引擎。训练的代码在[这里](https://github.com/raynardj/yuan), 喜欢加⭐️ -* 最多100个中文字符 -""") - -@st.cache(allow_output_mutation=True) -def load_model(): - tokenizer = AutoTokenizer.from_pretrained(PRETRAINED) - model = EncoderDecoderModel.from_pretrained(PRETRAINED) - return tokenizer, model - -tokenizer, model = load_model() - -text = st.text_area(value="轻轻地我走了,正如我轻轻地来。我挥一挥衣袖,不带走一片云彩。", label="输入文本") - -if st.button("曰"): - if len(text) > 100: - st.error("无过百字,若过则当答此言。") - else: - st.write(inference(text)) - diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Dark Knight 1080p Wallpaper Video ((FULL)).md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Dark Knight 1080p Wallpaper Video ((FULL)).md deleted file mode 100644 index bfc0a77f2b2fe321ccd71611625da4240c275696..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Dark Knight 1080p Wallpaper Video ((FULL)).md +++ /dev/null @@ -1,6 +0,0 @@ -

      dark knight 1080p wallpaper video


Download » https://urlgoal.com/2uCJWO



      - -Jan 9, 2020 - This HD wallpaper is about armored knight with angel wings and halo ... 10 Latest Medieval Black Knight Wallpaper FULL HD 1080p For PC Background ... HD wallpaper: Video Game, Mu Online, Angel, Armor, Warrior, Wings. 1fdad05405
      -
      -
      -

      diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Etka Id Username Password.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Etka Id Username Password.md deleted file mode 100644 index a6b04750fe40f87a0fb2123da274a6eb8b6cda96..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Etka Id Username Password.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Etka Id Username Password


      Download File ››› https://urlgoal.com/2uCMv8



      -
      -LATEST UPDATE. NUMBER FOR ETKA WEBSITES MHHAUTO COM. ETKA ID USERNAME PASSWORD FARCONEL. COM. ETKAINFO RU ETKAINFO ? 1fdad05405
      -
      -
      -

      diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (Loc Kargil Movie TOP Download 1080p Cont).md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (Loc Kargil Movie TOP Download 1080p Cont).md deleted file mode 100644 index 8ca3bebddc16ea8b70a82803f7fac7c9da653ecd..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (Loc Kargil Movie TOP Download 1080p Cont).md +++ /dev/null @@ -1,24 +0,0 @@ -

      HD Online Player (Loc Kargil Movie Download 1080p Cont)


      Download Zip ———>>> https://urlgoal.com/2uCLL3



- -/. Kargil. No One Is Safe Anywhere Anymore- New Aamir Khan Film. No One Is Safe Anywhere Anymore. Aamir Khan's second directorial venture, which also stars Suniel Shetty and Saif Ali Khan, tells the story of a man who is sent to a suburban jail in New Delhi in an attempt to cover up the identity of a Muslim terrorist. Despite some interesting plot points, such as a Muslim terrorist who is actually Hindu, and an apparent gay relationship between an Indian Muslim and an Australian Jewish woman, the movie fails to live up to the expectations of the long-awaited sequel. Get information about Kargil 2003 Full Hindi Movie Online. You can get Kargil 2003 Full Hindi Movie Online in HD, MP4, Mobile, 3GP and DVD quality and download Kargil 2003 Full Hindi Movie Online for free in high speed. The Descendants, full movie online free, How to watch Descendants online free, Watch Descendants online free, Download Descendants, Download Descendants, Watch Descendants, How to watch Descendants, Watch Descendants Online, Watch Descendants Free, Watch Descendants Online, How to watch Descendants online, Watch Descendants Online.Q: - -Show that the following equivalence holds true - -$$\sqrt{\frac{x}{y}} + \sqrt{\frac{y}{x}} = \frac{2\sqrt{xy}}{x+y}$$ - -This is the problem I have been stuck on. I have tried looking at $\sqrt{\frac{x}{y}} + \sqrt{\frac{y}{x}} = \sqrt{\frac{xy}{x^2+y^2}}$, but then I can't seem to find a way to show that this is equal to the right hand side. - -A: - -$$\sqrt{\frac{x}{y}} + \sqrt{\frac{y}{x}}=\frac{\sqrt{xy}}{\sqrt{x^2+y^2}}=\frac{xy}{\sqrt{x^2+y^2}}=\frac{2xy}{x+y}$$ - -The last step is by the definition of the square root. - -Q: - -Batch file processes input from a text file but does not run script - -I have a text file 4fefd39f24
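- -A quick check of the algebra above, assuming $x, y > 0$: combining the two square roots over a common denominator gives $$\sqrt{\frac{x}{y}} + \sqrt{\frac{y}{x}} = \frac{x}{\sqrt{xy}} + \frac{y}{\sqrt{xy}} = \frac{x+y}{\sqrt{xy}},$$ which by AM-GM is always at least $2$, while $\frac{2\sqrt{xy}}{x+y}$ is at most $1$, so the displayed equality does not hold as written.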
      -
      -
      -

      diff --git a/spaces/reha/Stick_Tech/add_speaker.py b/spaces/reha/Stick_Tech/add_speaker.py deleted file mode 100644 index e224f07c892a5fe1837e3cbf1745e0d8992ea283..0000000000000000000000000000000000000000 --- a/spaces/reha/Stick_Tech/add_speaker.py +++ /dev/null @@ -1,62 +0,0 @@ -import os -import argparse -from tqdm import tqdm -from random import shuffle -import json - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--train_list", type=str, default="./filelists/train.txt", help="path to train list") - parser.add_argument("--val_list", type=str, default="./filelists/val.txt", help="path to val list") - parser.add_argument("--test_list", type=str, default="./filelists/test.txt", help="path to test list") - parser.add_argument("--source_dir", type=str, default="./dataset/32k", help="path to source dir") - args = parser.parse_args() - - previous_config = json.load(open("configs/config.json", "rb")) - - train = [] - val = [] - test = [] - idx = 0 - spk_dict = previous_config["spk"] - spk_id = max([i for i in spk_dict.values()]) + 1 - for speaker in tqdm(os.listdir(args.source_dir)): - if speaker not in spk_dict.keys(): - spk_dict[speaker] = spk_id - spk_id += 1 - wavs = [os.path.join(args.source_dir, speaker, i)for i in os.listdir(os.path.join(args.source_dir, speaker))] - wavs = [i for i in wavs if i.endswith("wav")] - shuffle(wavs) - train += wavs[2:-10] - val += wavs[:2] - test += wavs[-10:] - - assert previous_config["model"]["n_speakers"] > len(spk_dict.keys()) - shuffle(train) - shuffle(val) - shuffle(test) - - print("Writing", args.train_list) - with open(args.train_list, "w") as f: - for fname in tqdm(train): - wavpath = fname - f.write(wavpath + "\n") - - print("Writing", args.val_list) - with open(args.val_list, "w") as f: - for fname in tqdm(val): - wavpath = fname - f.write(wavpath + "\n") - - print("Writing", args.test_list) - with open(args.test_list, "w") as f: - for fname in tqdm(test): - wavpath = fname - f.write(wavpath + "\n") - - previous_config["spk"] = spk_dict - - print("Writing configs/config.json") - with open("configs/config.json", "w") as f: - json.dump(previous_config, f, indent=2) diff --git a/spaces/rhuang/RL/style.css b/spaces/rhuang/RL/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/rhuang/RL/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/richardr1126/sql-skeleton-wizardcoder-demo/app-ngrok.py b/spaces/richardr1126/sql-skeleton-wizardcoder-demo/app-ngrok.py deleted file mode 100644 index 5cfc40d98d3f2fb0c4587685f4e9d5b6446fd8a0..0000000000000000000000000000000000000000 --- a/spaces/richardr1126/sql-skeleton-wizardcoder-demo/app-ngrok.py +++ /dev/null @@ -1,222 +0,0 @@ -import os -import gradio as gr -import sqlparse -import requests -from time import sleep -import re -import platform -# Additional Firebase imports -import firebase_admin -from firebase_admin import credentials, firestore -import json -import base64 - -print(f"Running on {platform.system()}") - -if platform.system() == "Windows" or 
platform.system() == "Darwin": - from dotenv import load_dotenv - load_dotenv() - -quantized_model = "richardr1126/spider-skeleton-wizard-coder-ggml" -merged_model = "richardr1126/spider-skeleton-wizard-coder-merged" -initial_model = "WizardLM/WizardCoder-15B-V1.0" -lora_model = "richardr1126/spider-skeleton-wizard-coder-qlora" -dataset = "richardr1126/spider-skeleton-context-instruct" - -# Firebase code -# Initialize Firebase -base64_string = os.getenv('FIREBASE') -base64_bytes = base64_string.encode('utf-8') -json_bytes = base64.b64decode(base64_bytes) -json_data = json_bytes.decode('utf-8') - -firebase_auth = json.loads(json_data) - -# Load credentials and initialize Firestore -cred = credentials.Certificate(firebase_auth) -firebase_admin.initialize_app(cred) -db = firestore.client() - -def log_message_to_firestore(input_message, db_info, temperature, response_text): - doc_ref = db.collection('logs').document() - log_data = { - 'timestamp': firestore.SERVER_TIMESTAMP, - 'temperature': temperature, - 'db_info': db_info, - 'input': input_message, - 'output': response_text, - } - doc_ref.set(log_data) - -rated_outputs = set() # set to store already rated outputs - -def log_rating_to_firestore(input_message, db_info, temperature, response_text, rating): - global rated_outputs - output_id = f"{input_message} {db_info} {response_text} {temperature}" - - if output_id in rated_outputs: - gr.Warning("You've already rated this output!") - return - if not input_message or not response_text or not rating: - gr.Info("You haven't asked a question yet!") - return - - rated_outputs.add(output_id) - - doc_ref = db.collection('ratings').document() - log_data = { - 'timestamp': firestore.SERVER_TIMESTAMP, - 'temperature': temperature, - 'db_info': db_info, - 'input': input_message, - 'output': response_text, - 'rating': rating, - } - doc_ref.set(log_data) - gr.Info("Thanks for your feedback!") -# End Firebase code - -def format(text): - # Split the text by "|", and get the last element in the list which should be the final query - try: - final_query = text.split("|")[1].strip() - except Exception: - final_query = text - - try: - # Attempt to format SQL query using sqlparse - formatted_query = sqlparse.format(final_query, reindent=True, keyword_case='upper') - except Exception: - # If formatting fails, use the original, unformatted query - formatted_query = final_query - - # Convert SQL to markdown (not required, but just to show how to use the markdown module) - final_query_markdown = f"{formatted_query}" - - return final_query_markdown - -def generate(input_message: str, db_info="", temperature=0.2, top_p=0.9, top_k=0, repetition_penalty=1.08, format_sql=True, stop_sequence="###", log=False): - # Format the user's input message - messages = f"Below is an instruction that describes a task, paired with an input that provides further context. 
Write a response that appropriately completes the request.\n\n### Instruction:\n\nConvert text to sql: {input_message} {db_info}\n\n### Response:\n\n" - - url = os.getenv("KOBOLDCPP_API_URL") - stop_sequence = stop_sequence.split(",") - stop = ["###"] + stop_sequence - payload = { - "prompt": messages, - "temperature": temperature, - "top_p": top_p, - "top_k": top_k, - "top_a": 0, - "n": 1, - "max_context_length": 2048, - "max_length": 512, - "rep_pen": repetition_penalty, - "sampler_order": [6,0,1,3,4,2,5], - "stop_sequence": stop, - } - headers = { - "Content-Type": "application/json", - "ngrok-skip-browser-warning": "1" # added this line - } - - for _ in range(3): # Try 3 times - try: - response = requests.post(url, json=payload, headers=headers) - response_text = response.json()["results"][0]["text"] - response_text = response_text.replace("\n", "").replace("\t", " ") - if response_text and response_text[-1] == ".": - response_text = response_text[:-1] - - output = format(response_text) if format_sql else response_text - - if log: - # Log the request to Firestore - log_message_to_firestore(input_message, db_info, temperature, output if format_sql else response_text) - - return output - - - except Exception as e: - print(f'Error occurred: {str(e)}') - print('Waiting for 10 seconds before retrying...') - gr.Warning("Error occurred, retrying, the sever may be down...") - sleep(10) - -# Gradio UI Code -with gr.Blocks(theme='gradio/soft') as demo: - # Elements stack vertically by default just define elements in order you want them to stack - header = gr.HTML(""" -

      SQL Skeleton WizardCoder Demo

      -

      🕷️☠️🧙‍♂️ Generate SQL queries from Natural Language 🕷️☠️🧙‍♂️

      -
      -

⚠️ Generation should take 30-60s. Please rate the response, it helps a lot. If you get a blank output, the model server is currently down, so please try again later.

      -
      - """) - - output_box = gr.Code(label="Generated SQL", lines=2, interactive=False) - - with gr.Row(): - rate_up = gr.Button("👍", variant="secondary") - rate_down = gr.Button("👎", variant="secondary") - - input_text = gr.Textbox(lines=3, placeholder='Write your question here...', label='NL Input') - db_info = gr.Textbox(lines=4, placeholder='Make sure to place your tables information inside || for better results. Example: | table_01 : column_01 , column_02 | table_02 : column_01 , column_02 | ...', label='Database Info') - format_sql = gr.Checkbox(label="Format SQL + Remove Skeleton", value=True, interactive=True) - - with gr.Row(): - run_button = gr.Button("Generate SQL", variant="primary") - clear_button = gr.ClearButton(variant="secondary") - - with gr.Accordion("Options", open=False): - temperature = gr.Slider(label="Temperature", minimum=0.0, maximum=1.0, value=0.2, step=0.1) - top_p = gr.Slider(label="Top-p (nucleus sampling)", minimum=0.0, maximum=1.0, value=0.9, step=0.01) - top_k = gr.Slider(label="Top-k", minimum=0, maximum=200, value=0, step=1) - repetition_penalty = gr.Slider(label="Repetition Penalty", minimum=1.0, maximum=2.0, value=1.08, step=0.01) - stop_sequence = gr.Textbox(lines=1, value="Explanation,Note", label='Extra Stop Sequence') - - info = gr.HTML(f""" -

      🌐 Leveraging the 4-bit GGML version of {merged_model} model.

      -

      🔗 How it's made: {initial_model} was finetuned to create {lora_model}, then merged together to create {merged_model}.

      -

      📉 Fine-tuning was performed using QLoRA techniques on the {dataset} dataset. You can view training metrics on the QLoRa adapter HF Repo.

      -

      📊 All inputs/outputs are logged to Firebase to see how the model is doing. You can also leave a rating for each generated SQL the model produces, which gets sent to the database as well.

      - """) - - examples = gr.Examples([ - ["What is the average, minimum, and maximum age of all singers from France?", "| stadium : stadium_id , location , name , capacity , highest , lowest , average | singer : singer_id , name , country , song_name , song_release_year , age , is_male | concert : concert_id , concert_name , theme , stadium_id , year | singer_in_concert : concert_id , singer_id | concert.stadium_id = stadium.stadium_id | singer_in_concert.singer_id = singer.singer_id | singer_in_concert.concert_id = concert.concert_id |"], - ["How many students have dogs?", "| student : stuid , lname , fname , age , sex , major , advisor , city_code | has_pet : stuid , petid | pets : petid , pettype , pet_age , weight | has_pet.stuid = student.stuid | has_pet.petid = pets.petid | pets.pettype = 'Dog' |"], - ], inputs=[input_text, db_info, temperature, top_p, top_k, repetition_penalty, format_sql, stop_sequence], fn=generate, cache_examples=False if platform.system() == "Windows" or platform.system() == "Darwin" else True, outputs=output_box) - - with gr.Accordion("More Examples", open=False): - examples = gr.Examples([ - ["What is the average weight of pets of all students?", "| student : stuid , lname , fname , age , sex , major , advisor , city_code | has_pet : stuid , petid | pets : petid , pettype , pet_age , weight | has_pet.stuid = student.stuid | has_pet.petid = pets.petid |"], - ["How many male singers performed in concerts in the year 2023?", "| stadium : stadium_id , location , name , capacity , highest , lowest , average | singer : singer_id , name , country , song_name , song_release_year , age , is_male | concert : concert_id , concert_name , theme , stadium_id , year | singer_in_concert : concert_id , singer_id | concert.stadium_id = stadium.stadium_id | singer_in_concert.singer_id = singer.singer_id | singer_in_concert.concert_id = concert.concert_id |"], - ["For students who have pets, how many pets does each student have? 
List their ids instead of names.", "| student : stuid , lname , fname , age , sex , major , advisor , city_code | has_pet : stuid , petid | pets : petid , pettype , pet_age , weight | has_pet.stuid = student.stuid | has_pet.petid = pets.petid |"], - ["Show location and name for all stadiums with a capacity between 5000 and 10000.", "| stadium : stadium_id , location , name , capacity , highest , lowest , average | singer : singer_id , name , country , song_name , song_release_year , age , is_male | concert : concert_id , concert_name , theme , stadium_id , year | singer_in_concert : concert_id , singer_id | concert.stadium_id = stadium.stadium_id | singer_in_concert.singer_id = singer.singer_id | singer_in_concert.concert_id = concert.concert_id |"], - ["What are the number of concerts that occurred in the stadium with the largest capacity ?", "| stadium : stadium_id , location , name , capacity , highest , lowest , average | singer : singer_id , name , country , song_name , song_release_year , age , is_male | concert : concert_id , concert_name , theme , stadium_id , year | singer_in_concert : concert_id , singer_id | concert.stadium_id = stadium.stadium_id | singer_in_concert.singer_id = singer.singer_id | singer_in_concert.concert_id = concert.concert_id |"], - ["Which student has the oldest pet?", "| student : stuid , lname , fname , age , sex , major , advisor , city_code | has_pet : stuid , petid | pets : petid , pettype , pet_age , weight | has_pet.stuid = student.stuid | has_pet.petid = pets.petid |"], - ["List the names of all singers who performed in a concert with the theme 'Rock'", "| stadium : stadium_id , location , name , capacity , highest , lowest , average | singer : singer_id , name , country , song_name , song_release_year , age , is_male | concert : concert_id , concert_name , theme , stadium_id , year | singer_in_concert : concert_id , singer_id | concert.stadium_id = stadium.stadium_id | singer_in_concert.singer_id = singer.singer_id | singer_in_concert.concert_id = concert.concert_id |"], - ["List all students who don't have pets.", "| student : stuid , lname , fname , age , sex , major , advisor , city_code | has_pet : stuid , petid | pets : petid , pettype , pet_age , weight | has_pet.stuid = student.stuid | has_pet.petid = pets.petid |"], - ], inputs=[input_text, db_info, temperature, top_p, top_k, repetition_penalty, format_sql, stop_sequence], fn=generate, cache_examples=False, outputs=output_box) - - - readme_content = requests.get(f"https://huggingface.co/{merged_model}/raw/main/README.md").text - readme_content = re.sub('---.*?---', '', readme_content, flags=re.DOTALL) #Remove YAML front matter - - with gr.Accordion("📖 Model Readme", open=True): - readme = gr.Markdown( - readme_content, - ) - - with gr.Accordion("Disabled Options:", open=False): - log = gr.Checkbox(label="Log to Firebase", value=True, interactive=False) - - # When the button is clicked, call the generate function, inputs are taken from the UI elements, outputs are sent to outputs elements - run_button.click(fn=generate, inputs=[input_text, db_info, temperature, top_p, top_k, repetition_penalty, format_sql, stop_sequence, log], outputs=output_box, api_name="txt2sql") - clear_button.add([input_text, db_info, output_box]) - - # Firebase code - for rating the generated SQL (remove if you don't want to use Firebase) - rate_up.click(fn=log_rating_to_firestore, inputs=[input_text, db_info, temperature, output_box, rate_up]) - rate_down.click(fn=log_rating_to_firestore, inputs=[input_text, db_info, 
temperature, output_box, rate_down]) - -demo.queue(concurrency_count=1, max_size=20).launch(debug=True) \ No newline at end of file diff --git a/spaces/richardzhangy26/yandian_flow_classification/label/lrcn_model.py b/spaces/richardzhangy26/yandian_flow_classification/label/lrcn_model.py deleted file mode 100644 index 841b3774f22645db58958a5ac1ad248ba7049fea..0000000000000000000000000000000000000000 --- a/spaces/richardzhangy26/yandian_flow_classification/label/lrcn_model.py +++ /dev/null @@ -1,69 +0,0 @@ -import torch.nn as nn -from torchvision import models - -class ConvLstm(nn.Module): - def __init__(self, latent_dim, hidden_size, lstm_layers, bidirectional, n_class): - super(ConvLstm, self).__init__() - self.conv_model = Pretrained_conv(latent_dim) - self.Lstm = Lstm(latent_dim, hidden_size, lstm_layers, bidirectional) - self.output_layer = nn.Sequential( - nn.Linear(2 * hidden_size if bidirectional==True else hidden_size, n_class), - nn.Softmax(dim=-1) - ) - - def forward(self, x): - batch_size, timesteps, channel_x, h_x, w_x = x.shape - conv_input = x.view(batch_size * timesteps, channel_x, h_x, w_x) - conv_output = self.conv_model(conv_input) - lstm_input = conv_output.view(batch_size, timesteps, -1) - lstm_output = self.Lstm(lstm_input) - lstm_output = lstm_output[:, -1, :] - output = self.output_layer(lstm_output) - return output - -class Pretrained_conv(nn.Module): - def __init__(self, latent_dim): - super(Pretrained_conv, self).__init__() - - # self.conv_model = models.resnet152(pretrained=True) - # self.conv_model = models.convnext_small(pretrained=True) - self.conv_model = models.efficientnet_v2_m(pretrained=True) - - - # print(self.conv_model) - - # print(self.conv_model.classifier[2]) - # ====== freezing all of the layers ====== - for param in self.conv_model.parameters(): - param.requires_grad = False - - - # ====== changing the last FC layer to an output with the size we need. 
this layer is un freezed ====== - - #resnet152 - # self.conv_model.fc = nn.Linear(self.conv_model.fc.in_features, latent_dim) - #convnext_small - # self.conv_model.classifier[2] = nn.Linear(self.conv_model.classifier[2].in_features, latent_dim) - - # efficientnetv2-m - self.conv_model.classifier[1] = nn.Linear(self.conv_model.classifier[1].in_features, latent_dim) - # print(self.conv_model.classifier[2]) - - - - def forward(self, x): - return self.conv_model(x) - -class Lstm(nn.Module): - def __init__(self, latent_dim, hidden_size, lstm_layers, bidirectional): - super(Lstm, self).__init__() - self.Lstm = nn.LSTM(latent_dim, hidden_size=hidden_size, num_layers=lstm_layers, batch_first=True, bidirectional=bidirectional) - self.hidden_state = None - - def reset_hidden_state(self): - self.hidden_state = None - - def forward(self,x): - output, self.hidden_state = self.Lstm(x, self.hidden_state) - return output - diff --git a/spaces/rktraz/art_style_classifier/app.py b/spaces/rktraz/art_style_classifier/app.py deleted file mode 100644 index 4eae132d013d959881ad26e6c1489dd5f88a27ab..0000000000000000000000000000000000000000 --- a/spaces/rktraz/art_style_classifier/app.py +++ /dev/null @@ -1,57 +0,0 @@ -import os -import requests -from fastai.vision.all import * -import gradio as gr - - -learn = load_learner("resnet34.pkl") - -categories = [ - 'Impressionism', - 'Realism', - 'Romanticism', - 'Expressionism', - 'Post-Impressionism', - 'Art Nouveau (Modern)', - 'Baroque', - 'Surrealism', - 'Symbolism', - 'Rococo', - 'Northern Renaissance', - 'Naïve Art (Primitivism)', - 'Abstract Expressionism', - 'Neoclassicism', - 'Cubism', - 'Ukiyo-e'] - - - -# def classify_art(img, url): -# if url: -# # User pasted a link, download the image -# img = load_image(requests.get(url, stream=True).raw) -# pred, idx, probs = learn.predict(img) -# probs = [round(float(p), 3) for p in probs] -# return dict(zip(categories, probs)) - -def classify_art(img): - pred, idx, probs = learn.predict(img) - probs = [round(float(p), 3) for p in probs] - return dict(zip(categories, probs)) - - -image = gr.inputs.Image(shape=(192, 192)) -link = gr.inputs.Textbox(label="Or paste a link to the image") -label = gr.outputs.Label() - -examples = os.listdir('example_images') - -examples = list(map(lambda x: "example_images/" + x, examples)) - -iface = gr.Interface(title="👨‍🎨 Art Style Classifier 🖼", - examples=examples, - # fn=classify_art, inputs=[image, link], - fn=classify_art, inputs=[image], - outputs=label - ) -iface.launch(inline=False) diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/yolof_head.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/yolof_head.py deleted file mode 100644 index 1063524a7d17f2bb037ca64c35f5ce3e658771eb..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/dense_heads/yolof_head.py +++ /dev/null @@ -1,416 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -from mmcv.cnn import (ConvModule, bias_init_with_prob, constant_init, is_norm, - normal_init) -from mmcv.runner import force_fp32 - -from mmdet.core import anchor_inside_flags, multi_apply, reduce_mean, unmap -from ..builder import HEADS -from .anchor_head import AnchorHead - -INF = 1e8 - - -def levels_to_images(mlvl_tensor): - """Concat multi-level feature maps by image. - - [feature_level0, feature_level1...] -> [feature_image0, feature_image1...] 
- Convert the shape of each element in mlvl_tensor from (N, C, H, W) to - (N, H*W , C), then split the element to N elements with shape (H*W, C), and - concat elements in same image of all level along first dimension. - - Args: - mlvl_tensor (list[torch.Tensor]): list of Tensor which collect from - corresponding level. Each element is of shape (N, C, H, W) - - Returns: - list[torch.Tensor]: A list that contains N tensors and each tensor is - of shape (num_elements, C) - """ - batch_size = mlvl_tensor[0].size(0) - batch_list = [[] for _ in range(batch_size)] - channels = mlvl_tensor[0].size(1) - for t in mlvl_tensor: - t = t.permute(0, 2, 3, 1) - t = t.view(batch_size, -1, channels).contiguous() - for img in range(batch_size): - batch_list[img].append(t[img]) - return [torch.cat(item, 0) for item in batch_list] - - -@HEADS.register_module() -class YOLOFHead(AnchorHead): - """YOLOFHead Paper link: https://arxiv.org/abs/2103.09460. - - Args: - num_classes (int): The number of object classes (w/o background) - in_channels (List[int]): The number of input channels per scale. - cls_num_convs (int): The number of convolutions of cls branch. - Default 2. - reg_num_convs (int): The number of convolutions of reg branch. - Default 4. - norm_cfg (dict): Dictionary to construct and config norm layer. - """ - - def __init__(self, - num_classes, - in_channels, - num_cls_convs=2, - num_reg_convs=4, - norm_cfg=dict(type='BN', requires_grad=True), - **kwargs): - self.num_cls_convs = num_cls_convs - self.num_reg_convs = num_reg_convs - self.norm_cfg = norm_cfg - super(YOLOFHead, self).__init__(num_classes, in_channels, **kwargs) - - def _init_layers(self): - cls_subnet = [] - bbox_subnet = [] - for i in range(self.num_cls_convs): - cls_subnet.append( - ConvModule( - self.in_channels, - self.in_channels, - kernel_size=3, - padding=1, - norm_cfg=self.norm_cfg)) - for i in range(self.num_reg_convs): - bbox_subnet.append( - ConvModule( - self.in_channels, - self.in_channels, - kernel_size=3, - padding=1, - norm_cfg=self.norm_cfg)) - self.cls_subnet = nn.Sequential(*cls_subnet) - self.bbox_subnet = nn.Sequential(*bbox_subnet) - self.cls_score = nn.Conv2d( - self.in_channels, - self.num_base_priors * self.num_classes, - kernel_size=3, - stride=1, - padding=1) - self.bbox_pred = nn.Conv2d( - self.in_channels, - self.num_base_priors * 4, - kernel_size=3, - stride=1, - padding=1) - self.object_pred = nn.Conv2d( - self.in_channels, - self.num_base_priors, - kernel_size=3, - stride=1, - padding=1) - - def init_weights(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - normal_init(m, mean=0, std=0.01) - if is_norm(m): - constant_init(m, 1) - - # Use prior in model initialization to improve stability - bias_cls = bias_init_with_prob(0.01) - torch.nn.init.constant_(self.cls_score.bias, bias_cls) - - def forward_single(self, feature): - cls_score = self.cls_score(self.cls_subnet(feature)) - N, _, H, W = cls_score.shape - cls_score = cls_score.view(N, -1, self.num_classes, H, W) - - reg_feat = self.bbox_subnet(feature) - bbox_reg = self.bbox_pred(reg_feat) - objectness = self.object_pred(reg_feat) - - # implicit objectness - objectness = objectness.view(N, -1, 1, H, W) - normalized_cls_score = cls_score + objectness - torch.log( - 1. 
+ torch.clamp(cls_score.exp(), max=INF) + - torch.clamp(objectness.exp(), max=INF)) - normalized_cls_score = normalized_cls_score.view(N, -1, H, W) - return normalized_cls_score, bbox_reg - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (batch, num_anchors * num_classes, h, w) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (batch, num_anchors * 4, h, w) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. Default: None - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert len(cls_scores) == 1 - assert self.prior_generator.num_levels == 1 - - device = cls_scores[0].device - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - - # The output level is always 1 - anchor_list = [anchors[0] for anchors in anchor_list] - valid_flag_list = [valid_flags[0] for valid_flags in valid_flag_list] - - cls_scores_list = levels_to_images(cls_scores) - bbox_preds_list = levels_to_images(bbox_preds) - - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - cls_scores_list, - bbox_preds_list, - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - (batch_labels, batch_label_weights, num_total_pos, num_total_neg, - batch_bbox_weights, batch_pos_predicted_boxes, - batch_target_boxes) = cls_reg_targets - - flatten_labels = batch_labels.reshape(-1) - batch_label_weights = batch_label_weights.reshape(-1) - cls_score = cls_scores[0].permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - - num_total_samples = (num_total_pos + - num_total_neg) if self.sampling else num_total_pos - num_total_samples = reduce_mean( - cls_score.new_tensor(num_total_samples)).clamp_(1.0).item() - - # classification loss - loss_cls = self.loss_cls( - cls_score, - flatten_labels, - batch_label_weights, - avg_factor=num_total_samples) - - # regression loss - if batch_pos_predicted_boxes.shape[0] == 0: - # no pos sample - loss_bbox = batch_pos_predicted_boxes.sum() * 0 - else: - loss_bbox = self.loss_bbox( - batch_pos_predicted_boxes, - batch_target_boxes, - batch_bbox_weights.float(), - avg_factor=num_total_samples) - - return dict(loss_cls=loss_cls, loss_bbox=loss_bbox) - - def get_targets(self, - cls_scores_list, - bbox_preds_list, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True): - """Compute regression and classification targets for anchors in - multiple images. - - Args: - cls_scores_list (list[Tensor]): Classification scores of - each image. each is a 4D-tensor, the shape is - (h * w, num_anchors * num_classes). - bbox_preds_list (list[Tensor]): Bbox preds of each image. 
- each is a 4D-tensor, the shape is (h * w, num_anchors * 4). - anchor_list (list[Tensor]): Anchors of each image. Each element of - is a tensor of shape (h * w * num_anchors, 4). - valid_flag_list (list[Tensor]): Valid flags of each image. Each - element of is a tensor of shape (h * w * num_anchors, ) - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): Ground truth bboxes to be - ignored. - gt_labels_list (list[Tensor]): Ground truth labels of each box. - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: Usually returns a tuple containing learning targets. - - - batch_labels (Tensor): Label of all images. Each element \ - of is a tensor of shape (batch, h * w * num_anchors) - - batch_label_weights (Tensor): Label weights of all images \ - of is a tensor of shape (batch, h * w * num_anchors) - - num_total_pos (int): Number of positive samples in all \ - images. - - num_total_neg (int): Number of negative samples in all \ - images. - additional_returns: This function enables user-defined returns from - `self._get_targets_single`. These returns are currently refined - to properties at each feature map (i.e. having HxW dimension). - The results will be concatenated after the end - """ - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - results = multi_apply( - self._get_targets_single, - bbox_preds_list, - anchor_list, - valid_flag_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - (all_labels, all_label_weights, pos_inds_list, neg_inds_list, - sampling_results_list) = results[:5] - rest_results = list(results[5:]) # user-added return values - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - - batch_labels = torch.stack(all_labels, 0) - batch_label_weights = torch.stack(all_label_weights, 0) - - res = (batch_labels, batch_label_weights, num_total_pos, num_total_neg) - for i, rests in enumerate(rest_results): # user-added return values - rest_results[i] = torch.cat(rests, 0) - - return res + tuple(rest_results) - - def _get_targets_single(self, - bbox_preds, - flat_anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression and classification targets for anchors in a - single image. - - Args: - bbox_preds (Tensor): Bbox prediction of the image, which - shape is (h * w ,4) - flat_anchors (Tensor): Anchors of the image, which shape is - (h * w * num_anchors ,4) - valid_flags (Tensor): Valid flags of the image, which shape is - (h * w * num_anchors,). - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - img_meta (dict): Meta info of the image. - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). 
- label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: - labels (Tensor): Labels of image, which shape is - (h * w * num_anchors, ). - label_weights (Tensor): Label weights of image, which shape is - (h * w * num_anchors, ). - pos_inds (Tensor): Pos index of image. - neg_inds (Tensor): Neg index of image. - sampling_result (obj:`SamplingResult`): Sampling result. - pos_bbox_weights (Tensor): The Weight of using to calculate - the bbox branch loss, which shape is (num, ). - pos_predicted_boxes (Tensor): boxes predicted value of - using to calculate the bbox branch loss, which shape is - (num, 4). - pos_target_boxes (Tensor): boxes target value of - using to calculate the bbox branch loss, which shape is - (num, 4). - """ - inside_flags = anchor_inside_flags(flat_anchors, valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - if not inside_flags.any(): - return (None, ) * 8 - # assign gt and sample anchors - anchors = flat_anchors[inside_flags, :] - bbox_preds = bbox_preds.reshape(-1, 4) - bbox_preds = bbox_preds[inside_flags, :] - - # decoded bbox - decoder_bbox_preds = self.bbox_coder.decode(anchors, bbox_preds) - assign_result = self.assigner.assign( - decoder_bbox_preds, anchors, gt_bboxes, gt_bboxes_ignore, - None if self.sampling else gt_labels) - - pos_bbox_weights = assign_result.get_extra_property('pos_idx') - pos_predicted_boxes = assign_result.get_extra_property( - 'pos_predicted_boxes') - pos_target_boxes = assign_result.get_extra_property('target_boxes') - - sampling_result = self.sampler.sample(assign_result, anchors, - gt_bboxes) - num_valid_anchors = anchors.shape[0] - labels = anchors.new_full((num_valid_anchors, ), - self.num_classes, - dtype=torch.long) - label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class since v2.5.0 - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_anchors.size(0) - labels = unmap( - labels, num_total_anchors, inside_flags, - fill=self.num_classes) # fill bg label - label_weights = unmap(label_weights, num_total_anchors, - inside_flags) - - return (labels, label_weights, pos_inds, neg_inds, sampling_result, - pos_bbox_weights, pos_predicted_boxes, pos_target_boxes) diff --git a/spaces/rorallitri/biomedical-language-models/logs/Driver Samsung Syncmaster P2450 23.md b/spaces/rorallitri/biomedical-language-models/logs/Driver Samsung Syncmaster P2450 23.md deleted file mode 100644 index b89b3ca4726c536166d2d39e077ef2d43e30bdf0..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Driver Samsung Syncmaster P2450 23.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Driver Samsung Syncmaster P2450 23


      Download File » https://tinurll.com/2uzmPH



      - -Samsung SyncMaster P2370MS - LCD monitor - Full HD (1080p) - 23 overview ... Title: Samsung Syncmaster P2450h Service Manual Repa, Author: Brandon ... 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/rupeshs/fastsdcpu/constants.py b/spaces/rupeshs/fastsdcpu/constants.py deleted file mode 100644 index c0a547aa57059efdbdee258e82ca2c799ffac7ca..0000000000000000000000000000000000000000 --- a/spaces/rupeshs/fastsdcpu/constants.py +++ /dev/null @@ -1,10 +0,0 @@ -from os import environ - -APP_VERSION = "v1.0.0 beta 7" -LCM_DEFAULT_MODEL = "SimianLuo/LCM_Dreamshaper_v7" -LCM_DEFAULT_MODEL_OPENVINO = "rupeshs/LCM-dreamshaper-v7-openvino-int8" -APP_NAME = "FastSD CPU" -APP_SETTINGS_FILE = "settings.yaml" -RESULTS_DIRECTORY = "results" -CONFIG_DIRECTORY = "configs" -DEVICE = environ.get("DEVICE", "cpu") diff --git a/spaces/salamat/first_app/README.md b/spaces/salamat/first_app/README.md deleted file mode 100644 index 2b509768e3385797adee21731d0176525e52e6e1..0000000000000000000000000000000000000000 --- a/spaces/salamat/first_app/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: First App -emoji: 📈 -colorFrom: blue -colorTo: blue -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sandrocalzada/swap_face/app.py b/spaces/sandrocalzada/swap_face/app.py deleted file mode 100644 index b06c129923fe69c80baf93ab80db09a5d2ab6562..0000000000000000000000000000000000000000 --- a/spaces/sandrocalzada/swap_face/app.py +++ /dev/null @@ -1,72 +0,0 @@ -import numpy as np -import gradio as gr -import glob -import cv2 -import matplotlib.pyplot as plt -import insightface -from insightface.app import FaceAnalysis -from insightface.data import get_image as ins_get_image - -def predict(image_in_video, image_in_img): - if image_in_video == None and image_in_img == None: - raise gr.Error("Please capture an image using the webcam or upload an image.") - image = image_in_video or image_in_img - return swapi(image) - -app = FaceAnalysis(name='buffalo_l') -app.prepare(ctx_id=0, det_size=(640, 640)) -swapper = insightface.model_zoo.get_model('inswapper_128.onnx') - - -def swapi(imagen): - # Use the uploaded image to extract features - img_user = cv2.imread(imagen) - faces_user = app.get(img_user) - - # Use another image "background1" for modifications - img_background = cv2.imread('background1.jpg') - faces_background = app.get(img_background) - - # Assuming the user image has a face and we are using its features - source_face = faces_user[0] - - # Apply modifications to the "background" image - res = img_background.copy() - for face in faces_background: - res = swapper.get(res, face, source_face, paste_back=True) - - # Convert from BGR to RGB - res_rgb = cv2.cvtColor(res, cv2.COLOR_BGR2RGB) - - return res_rgb - - - -with gr.Blocks() as blocks: - gr.Markdown("### Capture Image Using WebCam or Upload") - - with gr.Row(): - with gr.Column(): - image_or_file_opt = gr.Radio(["webcam", "file"], value="webcam", - label="How would you like to upload your image?") - image_in_video = gr.Image(source="webcam", type="filepath") - image_in_img = gr.Image(source="upload", visible=False, type="filepath") - - # Update visibility based on selection - def toggle(choice): - if choice == "webcam": - return gr.update(visible=True, value=None), gr.update(visible=False, value=None) - else: - return gr.update(visible=False, value=None), gr.update(visible=True, value=None) - - image_or_file_opt.change(fn=toggle, inputs=[image_or_file_opt], - outputs=[image_in_video, image_in_img], queue=False, show_progress=False) - with gr.Column(): - image_out = gr.Image() - - run_btn = 
gr.Button("Run") - run_btn.click(fn=predict, inputs=[image_in_img, image_in_video], outputs=[image_out]) - gr.Examples(fn=predict, examples=[], inputs=[image_in_img, image_in_video], outputs=[image_out]) - -blocks.queue() -blocks.launch(debug=True) \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Adobe Acrobat Reader DC Crack _BEST_ 2019.012.20040 Free Download.md b/spaces/scedlatioru/img-to-music/example/Adobe Acrobat Reader DC Crack _BEST_ 2019.012.20040 Free Download.md deleted file mode 100644 index 47361759b407f14b111507a32ab86a199994ff7f..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Adobe Acrobat Reader DC Crack _BEST_ 2019.012.20040 Free Download.md +++ /dev/null @@ -1,50 +0,0 @@ -

      Adobe Acrobat Reader DC Crack 2019.012.20040 Free Download


      DOWNLOAD 🗹 https://gohhs.com/2uEyM9



      -
      -7 July, 2019 – Adobe has released Acrobat DC, including Acrobat Reader DC, Reader DC 2019.3, and Converter DC 2019.3. - -The Acrobat DC installer now supports Acrobat Pro Extended Support, and Acrobat Professional, too. - -Important: When upgrading to the 2019.3 release, you must re-enable Acrobat DC’s “Extended Support.” This setting can be found in Acrobat Setup under the Advanced tab. - -If you use the Creative Cloud Package in Acrobat DC, we recommend upgrading to the latest version of Acrobat Pro DC. This will ensure your package continues to work with the 2019.3 release of Acrobat DC. - -Here are the fixes in Acrobat DC 2019.3. - -Bug Fixes - -Automatically closes incomplete files after editing - -The icon for a file that has been modified will automatically be closed after editing the document. - -Fixes an issue where files that are written to an external device fail to write to the device - -Fixes an issue in PDF conversions where the conversion cannot be completed if the destination folder does not have sufficient space. - -Fixes an issue where documents created in Adobe Acrobat DC contain a white box where the document should be. - -The button to reset the selected font size has a shadow box around it. - -Minor changes to French text. - -The latest version of Adobe Acrobat DC can be downloaded from the Adobe website. - -Read more about this release in the Acrobat DC release notes.Q: - -It is possible to automatically force to re-run Java code when JSF version is changed? - -I'm developing a Java EE 6 project in NetBeans 7.4. I'd like to automatically re-run some Java code when the project is opened after changing some JSF version. I saw some post about the same topic, but it didn't meet my need. - -Is there any way to automatically re-run Java code when Java EE version is changed? - -Is there any way to automatically re-run Java code when JSF version is changed? - -Thanks in advance. - -A: - -It sounds as if you're looking for a "build-on-save" solution. You can use a Maven build (e.g. mvn compile) or a Ant build (e.g. ant build) to get this done. - -You may want 4fefd39f24
      -
      -
      -

      diff --git a/spaces/scedlatioru/img-to-music/example/Keygen Aac2010 Keygen 64bits.rar !!HOT!!.md b/spaces/scedlatioru/img-to-music/example/Keygen Aac2010 Keygen 64bits.rar !!HOT!!.md deleted file mode 100644 index 63fe8ddbb7b3c3a0f4f7965499f2aad4231db11c..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Keygen Aac2010 Keygen 64bits.rar !!HOT!!.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Keygen Aac2010 Keygen 64bits.rar


      Download Filehttps://gohhs.com/2uEzbq



      - -program, xf-a2010-64bit-keygen.exe is actually a computer threat and is classified as rogue software. ... (32 and 64 Bit) ... xf-a2010-64bits.rar . ... AAC2010_Keygen-64bits.exe.. xf-a2010-64bits.exe doesn't have a product name yet and it. 4d29de3e1b
      -
      -
      -

diff --git a/spaces/scp4950/fastspeech2-en-ljspeech-Demo/app.py b/spaces/scp4950/fastspeech2-en-ljspeech-Demo/app.py deleted file mode 100644 index 00c46598c2138e8a01f71cd9d94d8283ec130f49..0000000000000000000000000000000000000000 --- a/spaces/scp4950/fastspeech2-en-ljspeech-Demo/app.py +++ /dev/null @@ -1,9 +0,0 @@ -import gradio as gr -description = "fastspeech2-en-ljspeech demo." -title = "Facebook's Text To Speech Model" -interface = gr.Interface.load("huggingface/facebook/fastspeech2-en-ljspeech", - description=description, - title = title, - examples=[["How can a clam cram in a clean cream can?"]] -) -interface.launch(share=True) \ No newline at end of file diff --git a/spaces/sdeeas/ChuanhuChatGPT/assets/external-scripts.js b/spaces/sdeeas/ChuanhuChatGPT/assets/external-scripts.js deleted file mode 100644 index 8d0352669045537af5698b1824dbc1dba21df478..0000000000000000000000000000000000000000 --- a/spaces/sdeeas/ChuanhuChatGPT/assets/external-scripts.js +++ /dev/null @@ -1,2 +0,0 @@ - -// external javascript here diff --git a/spaces/segments/panoptic-segment-anything-api/GroundingDINO/groundingdino/util/__init__.py b/spaces/segments/panoptic-segment-anything-api/GroundingDINO/groundingdino/util/__init__.py deleted file mode 100644 index 168f9979a4623806934b0ff1102ac166704e7dec..0000000000000000000000000000000000000000 --- a/spaces/segments/panoptic-segment-anything-api/GroundingDINO/groundingdino/util/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved diff --git a/spaces/sh20raj/Test/index.html b/spaces/sh20raj/Test/index.html deleted file mode 100644 index 58275de3b1c343a98420342baa076b9baaafa157..0000000000000000000000000000000000000000 --- a/spaces/sh20raj/Test/index.html +++ /dev/null @@ -1,19 +0,0 @@ - - - - - - My static Space - - - -
      -

      Welcome to your static Space!

      -

      You can modify this app directly by editing index.html in the Files and versions tab.

      -

      - Also don't forget to check the - Spaces documentation. -

      -
      - - diff --git a/spaces/shi-labs/Versatile-Diffusion/lib/model_zoo/autokl.py b/spaces/shi-labs/Versatile-Diffusion/lib/model_zoo/autokl.py deleted file mode 100644 index 23847346151d8cd837eb41cbd2ed8d6d4db8a747..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/Versatile-Diffusion/lib/model_zoo/autokl.py +++ /dev/null @@ -1,140 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from contextlib import contextmanager -from lib.model_zoo.common.get_model import get_model, register - -# from taming.modules.vqvae.quantize import VectorQuantizer2 as VectorQuantizer - -from .autokl_modules import Encoder, Decoder -from .distributions import DiagonalGaussianDistribution - -from .autokl_utils import LPIPSWithDiscriminator - -@register('autoencoderkl') -class AutoencoderKL(nn.Module): - def __init__(self, - ddconfig, - lossconfig, - embed_dim,): - super().__init__() - self.encoder = Encoder(**ddconfig) - self.decoder = Decoder(**ddconfig) - if lossconfig is not None: - self.loss = LPIPSWithDiscriminator(**lossconfig) - assert ddconfig["double_z"] - self.quant_conv = torch.nn.Conv2d(2*ddconfig["z_channels"], 2*embed_dim, 1) - self.post_quant_conv = torch.nn.Conv2d(embed_dim, ddconfig["z_channels"], 1) - self.embed_dim = embed_dim - - @torch.no_grad() - def encode(self, x, out_posterior=False): - return self.encode_trainable(x, out_posterior) - - def encode_trainable(self, x, out_posterior=False): - x = x*2-1 - h = self.encoder(x) - moments = self.quant_conv(h) - posterior = DiagonalGaussianDistribution(moments) - if out_posterior: - return posterior - else: - return posterior.sample() - - @torch.no_grad() - def decode(self, z): - z = self.post_quant_conv(z) - dec = self.decoder(z) - dec = torch.clamp((dec+1)/2, 0, 1) - return dec - - def decode_trainable(self, z): - z = self.post_quant_conv(z) - dec = self.decoder(z) - dec = (dec+1)/2 - return dec - - def apply_model(self, input, sample_posterior=True): - posterior = self.encode_trainable(input, out_posterior=True) - if sample_posterior: - z = posterior.sample() - else: - z = posterior.mode() - dec = self.decode_trainable(z) - return dec, posterior - - def get_input(self, batch, k): - x = batch[k] - if len(x.shape) == 3: - x = x[..., None] - x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format).float() - return x - - def forward(self, x, optimizer_idx, global_step): - reconstructions, posterior = self.apply_model(x) - - if optimizer_idx == 0: - # train encoder+decoder+logvar - aeloss, log_dict_ae = self.loss(x, reconstructions, posterior, optimizer_idx, global_step=global_step, - last_layer=self.get_last_layer(), split="train") - return aeloss, log_dict_ae - - if optimizer_idx == 1: - # train the discriminator - discloss, log_dict_disc = self.loss(x, reconstructions, posterior, optimizer_idx, global_step=global_step, - last_layer=self.get_last_layer(), split="train") - - return discloss, log_dict_disc - - def validation_step(self, batch, batch_idx): - inputs = self.get_input(batch, self.image_key) - reconstructions, posterior = self(inputs) - aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, 0, self.global_step, - last_layer=self.get_last_layer(), split="val") - - discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, 1, self.global_step, - last_layer=self.get_last_layer(), split="val") - - self.log("val/rec_loss", log_dict_ae["val/rec_loss"]) - self.log_dict(log_dict_ae) - self.log_dict(log_dict_disc) - return self.log_dict - - def 
configure_optimizers(self): - lr = self.learning_rate - opt_ae = torch.optim.Adam(list(self.encoder.parameters())+ - list(self.decoder.parameters())+ - list(self.quant_conv.parameters())+ - list(self.post_quant_conv.parameters()), - lr=lr, betas=(0.5, 0.9)) - opt_disc = torch.optim.Adam(self.loss.discriminator.parameters(), - lr=lr, betas=(0.5, 0.9)) - return [opt_ae, opt_disc], [] - - def get_last_layer(self): - return self.decoder.conv_out.weight - - @torch.no_grad() - def log_images(self, batch, only_inputs=False, **kwargs): - log = dict() - x = self.get_input(batch, self.image_key) - x = x.to(self.device) - if not only_inputs: - xrec, posterior = self(x) - if x.shape[1] > 3: - # colorize with random projection - assert xrec.shape[1] > 3 - x = self.to_rgb(x) - xrec = self.to_rgb(xrec) - log["samples"] = self.decode(torch.randn_like(posterior.sample())) - log["reconstructions"] = xrec - log["inputs"] = x - return log - - def to_rgb(self, x): - assert self.image_key == "segmentation" - if not hasattr(self, "colorize"): - self.register_buffer("colorize", torch.randn(3, x.shape[1], 1, 1).to(x)) - x = F.conv2d(x, weight=self.colorize) - x = 2.*(x-x.min())/(x.max()-x.min()) - 1. - return x diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download AZPULMAT and Enjoy the Benefits of Online Lending.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download AZPULMAT and Enjoy the Benefits of Online Lending.md deleted file mode 100644 index c1c60bddf91e7a264c925a2cc7678d203a9fdd96..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download AZPULMAT and Enjoy the Benefits of Online Lending.md +++ /dev/null @@ -1,76 +0,0 @@ - -

      AZPULMAT Download: How to Get a Quick Loan on Your Credit Card

      -

      Do you need some extra cash to cover an unexpected expense or make an important purchase? Do you want to get a loan without visiting a bank or collecting a bunch of documents? Do you want to enjoy low interest rates and flexible repayment terms? If you answered yes to any of these questions, then you should consider downloading AZPULMAT, a mobile app that allows you to get a quick loan on your credit card.

      -

      azpulmat download


      DOWNLOAD 🆗 https://ssurll.com/2uNSyr



      -


      AZPULMAT is a mobile app that offers fast and easy loans to anyone who has a credit card issued by any bank in Azerbaijan. You can borrow from 20 AZN to 500 AZN and repay it within 30 days with a low interest rate of 1.5% per day. You can also extend or renew your loan if you need more time to pay it back. All you need to do is download the app, register, choose the loan amount and term, and get the money transferred to your card within minutes.

      -

      Why Choose AZPULMAT?

      -

      There are many reasons why you should choose AZPULMAT for your short-term financial needs. Here are some of the benefits of using this app:

      -
        -
      • Fast approval: You don't have to wait for hours or days to get a decision on your loan application. AZPULMAT uses an advanced algorithm that evaluates your creditworthiness and gives you an instant answer.
      • -
      • Low interest: You don't have to pay exorbitant fees or hidden charges when you borrow money from AZPULMAT. The interest rate is only 1.5% per day, which is much lower than most other lenders in the market.
      • -
      • Flexible repayment: You don't have to worry about missing your due date or defaulting on your loan. AZPULMAT allows you to choose the repayment term that suits your budget and cash flow. You can also extend or renew your loan by paying an additional fee before the due date.
      • -
      • No collateral or guarantor: You don't have to provide any security or guarantee when you apply for a loan with AZPULMAT. Your credit card is enough to qualify for a loan.
      • -
      • No paperwork or hassle: You don't have to fill out lengthy forms or submit any documents when you use AZPULMAT. Everything is done online through the app, which saves you time and effort.
      • -
      • No hard credit check: You don't have to worry about your credit score or history when you borrow money from AZPULMAT. They do not perform a hard credit check that could affect your credit rating. They only verify your identity and income, and approve most of the applications regardless of credit score.
      • -
      -

      How to Apply for a Loan with AZPULMAT?

      -

      Applying for a loan with AZPULMAT is very simple and convenient. Just follow these steps:

      -

      Step 1: Download the App

      -

      The first thing you need to do is download the AZPULMAT app from Google Play or APKCombo. The app is free and compatible with Android devices. You can also visit their website at www.azpulmat.com for more information.

      -

      Step 2: Register and Verify Your Identity

      -

      The next thing you need to do is create an account and provide your personal information and ID document. You will need to enter your name, phone number, email address, date of birth, gender, and card number. You will also need to upload a photo of your ID card (national ID, passport, or driver's license) and a selfie with your ID card. This is to verify your identity and prevent fraud.




      Step 3: Choose the Loan Amount and Term


      The third thing you need to do is use the calculator to select the loan amount and term that suit your needs. You can borrow from 20 AZN to 500 AZN depending on your credit history and repayment ability. You can also choose the repayment term from 7 days to 30 days depending on your cash flow and budget. The app will show you the interest rate, total amount, and due date of your loan.
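As a rough model of what that calculator displays, the sketch below derives the interest, total amount, and due date from a chosen amount and term. The field names and the simple-interest formula are assumptions for illustration; the app's own screen is authoritative.

```python
from datetime import date, timedelta
from typing import Optional

# Hypothetical model of the in-app calculator's output; the 1.5%/day
# simple-interest formula and the field names are assumptions.
def loan_summary(amount_azn: float, term_days: int,
                 start: Optional[date] = None) -> dict:
    start = start or date.today()
    interest = round(amount_azn * 0.015 * term_days, 2)
    return {
        "amount_azn": amount_azn,
        "term_days": term_days,  # the app allows terms from 7 to 30 days
        "interest_azn": interest,
        "total_due_azn": round(amount_azn + interest, 2),
        "due_date": (start + timedelta(days=term_days)).isoformat(),
    }

print(loan_summary(200, 14, start=date(2024, 1, 1)))
# {'amount_azn': 200, 'term_days': 14, 'interest_azn': 42.0,
#  'total_due_azn': 242.0, 'due_date': '2024-01-15'}
```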


      Step 4: Confirm the Loan Agreement and Receive the Money


      The final thing you need to do is review and accept the loan agreement and get the money transferred to your card. The loan agreement will contain all the details of your loan, such as the amount, term, interest rate, fees, penalties, etc. You should read it carefully and make sure you understand it before signing it. Once you sign it, the money will be transferred to your card within minutes after approval.

      \ No newline at end of file diff --git a/spaces/sklearn-docs/Random_sample_consensus/app.py b/spaces/sklearn-docs/Random_sample_consensus/app.py deleted file mode 100644 index cf849750fcc7b48f21827da8dcbddb4f463c02a4..0000000000000000000000000000000000000000 --- a/spaces/sklearn-docs/Random_sample_consensus/app.py +++ /dev/null @@ -1,104 +0,0 @@ -import gradio as gr -import numpy as np -from matplotlib import pyplot as plt - -from sklearn import linear_model, datasets - - -theme = gr.themes.Monochrome( - primary_hue="indigo", - secondary_hue="blue", - neutral_hue="slate", -) -model_card = f""" -## Description - -**Random sample consensus (RANSAC)** is a method to estimate a mathematical model from a set of observed data that may have some wrong information. -The number of times it tries affects how likely it is to get a good answer. **RANSAC** is commonly used in photogrammetry to solve problems with linear or non-linear regression. -It works by separating the input data into two groups: inliers (which may have some noise) and outliers (which are wrong data). It estimates the model only using the inliers. -In this demo, a simulation regression dataset with noise is created, and then compare the results of fitting data in **Linear model** and **RANSAC**. -You can play around with different ``number of samples`` and ``number of outliers`` to see the effect - -## Dataset - -Simulation dataset -""" - - -def do_train(n_samples, n_outliers): - - X, y, coef = datasets.make_regression( - n_samples=n_samples, - n_features=1, - n_informative=1, - noise=10, - coef=True, - random_state=0, - ) - - # Add outlier data - np.random.seed(0) - X[:n_outliers] = 3 + 0.5 * np.random.normal(size=(n_outliers, 1)) - y[:n_outliers] = -3 + 10 * np.random.normal(size=n_outliers) - - # Fit line using all data - lr = linear_model.LinearRegression() - lr.fit(X, y) - - # Robustly fit linear model with RANSAC algorithm - ransac = linear_model.RANSACRegressor() - ransac.fit(X, y) - inlier_mask = ransac.inlier_mask_ - outlier_mask = np.logical_not(inlier_mask) - - # Predict data of estimated models - line_X = np.arange(X.min(), X.max())[:, np.newaxis] - line_y = lr.predict(line_X) - line_y_ransac = ransac.predict(line_X) - - text = f"True coefficients: {coef:.4f}.\nLinear regression coefficients: {lr.coef_[0]:.4f}.\nRANSAC coefficients: {ransac.estimator_.coef_[0]:.4f}." - - fig, axes = plt.subplots() - - axes.scatter( - X[inlier_mask], y[inlier_mask], color="yellowgreen", marker=".", label="Inliers" - ) - axes.scatter( - X[outlier_mask], y[outlier_mask], color="gold", marker=".", label="Outliers" - ) - axes.plot(line_X, line_y, color="navy", linewidth=2, label="Linear regressor") - axes.plot( - line_X, - line_y_ransac, - color="cornflowerblue", - linewidth=2, - label="RANSAC regressor", - ) - axes.legend(loc="lower right") - axes.set_xlabel("Input") - axes.set_ylabel("Response") - - return fig, text - - - -with gr.Blocks(theme=theme) as demo: - gr.Markdown(''' -
      -

      Robust linear model estimation using RANSAC

      -
      - ''') - gr.Markdown(model_card) - gr.Markdown("Author: Vu Minh Chien. Based on the example from scikit-learn") - n_samples = gr.Slider(minimum=500, maximum=5000, step=500, value=500, label="Number of samples") - n_outliers = gr.Slider(minimum=25, maximum=250, step=25, value=25, label="Number of outliers") - with gr.Row(): - with gr.Column(): - plot = gr.Plot(label="Compare Linear regressor and RANSAC") - with gr.Column(): - results = gr.Textbox(label="Results") - - n_samples.change(fn=do_train, inputs=[n_samples, n_outliers], outputs=[plot, results]) - n_outliers.change(fn=do_train, inputs=[n_samples, n_outliers], outputs=[plot, results]) - -demo.launch() \ No newline at end of file diff --git a/spaces/society-ethics/DiffusionClustering/README.md b/spaces/society-ethics/DiffusionClustering/README.md deleted file mode 100644 index 02299e3e96de4b279f537b06a10fd822654632d7..0000000000000000000000000000000000000000 --- a/spaces/society-ethics/DiffusionClustering/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: DiffusionClustering -emoji: 📊 -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/distributed_fairseq_model.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/distributed_fairseq_model.py deleted file mode 100644 index 5eda2276404ca686be124901674ddfe36bd6dfd1..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/distributed_fairseq_model.py +++ /dev/null @@ -1,146 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os -import signal -import threading - -import torch -import torch.nn as nn -from torch.nn.parallel import DistributedDataParallel - -from fairseq.distributed import ( - DistributedTimeoutWrapper, - LegacyDistributedDataParallel, - ModuleProxyWrapper, - TPUDistributedDataParallel, -) - - -logger = logging.getLogger(__name__) - - -_GOSSIP_DISABLED = False -try: - import gossip -except ImportError: - _GOSSIP_DISABLED = True - - -def DistributedFairseqModel(args, model, process_group, device): - """ - Wrap a *model* to support distributed data parallel training. - - This is similar to the built-in DistributedDataParallel, but allows - additional configuration of the DistributedDataParallel class to - use, and also provides easier access to the wrapped model by - forwarding requests for missing attributes to the wrapped model. - - Args: - args (argparse.Namespace): fairseq args - model (BaseFairseqModel): model to wrap - process_group: the c10d process group to be used for distributed data - parallel all-reduction. 
- device: device to move model to - """ - assert isinstance(model, nn.Module) - if args.tpu: - wrapped_model = TPUDistributedDataParallel( - module=model.to(device), - process_group=process_group, - ) - # forward missing getattr and state_dict/load_state_dict to orig model - wrapped_model = ModuleProxyWrapper(wrapped_model) - elif args.ddp_backend in {"c10d", "pytorch_ddp"}: - wrapped_model = DistributedDataParallel( - module=model.to(device), - device_ids=[args.device_id], - output_device=args.device_id, - broadcast_buffers=args.broadcast_buffers, - bucket_cap_mb=args.bucket_cap_mb, - process_group=process_group, - find_unused_parameters=args.find_unused_parameters, - gradient_as_bucket_view=args.gradient_as_bucket_view, - ) - if args.ddp_comm_hook == "fp16": - logger.info("enable fp16 communication hook in DDP") - try: - from torch.distributed.algorithms.ddp_comm_hooks import ( - register_ddp_comm_hook, - DDPCommHookType, - ) - except: - logger.error( - "Could not import from torch.distributed.algorithms.ddp_comm_hooks; you may need to update your pytorch version" - ) - raise - - register_ddp_comm_hook(DDPCommHookType.FP16_COMPRESS, wrapped_model) - # forward missing getattr and state_dict/load_state_dict to orig model - wrapped_model = ModuleProxyWrapper(wrapped_model) - elif args.ddp_backend in {"no_c10d", "legacy_ddp"}: - wrapped_model = LegacyDistributedDataParallel( - module=model.to(device), - buffer_size=2 ** 28, - process_group=process_group, - ) - # forward missing getattr and state_dict/load_state_dict to orig model - wrapped_model = ModuleProxyWrapper(wrapped_model) - elif args.ddp_backend == "slow_mo": - if _GOSSIP_DISABLED: - raise ImportError( - "Cannot find gossip library. Please install from: " - "github.com/facebookresearch/stochastic_gradient_push" - ) - - # The values of slowmo_momentum below were obtained by tuning on the - # En-De 16 dataset by training the transformer_wmt_en_de_large model - if args.slowmo_momentum is None: - if args.distributed_world_size <= 16: - args.slowmo_momentum = 0.0 - elif args.distributed_world_size <= 32: - args.slowmo_momentum = 0.2 - elif args.distributed_world_size <= 64: - args.slowmo_momentum = 0.5 - else: - args.slowmo_momentum = 0.6 - - wrapped_model = gossip.GossipDataParallel( - module=model.to(device), - device_ids=[args.device_id], - output_device=args.device_id, - broadcast_buffers=args.broadcast_buffers, - nprocs_per_node=args.nprocs_per_node, - slowmo_momentum=args.slowmo_momentum, - localsgd=(args.slowmo_algorithm == "LocalSGD"), - localsgd_frequency=args.localsgd_frequency, - ) - # forward missing getattr and state_dict/load_state_dict to orig model - wrapped_model = ModuleProxyWrapper(wrapped_model) - elif args.ddp_backend == "fully_sharded": - try: - from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP - except ImportError: - raise ImportError( - "Cannot find FullyShardedDataParallel. 
" - "Please install fairscale with: pip install fairscale" - ) - assert isinstance(model, FSDP), "expected model to already be wrapped in FSDP" - wrapped_model = model - if args.memory_efficient_fp16: - wrapped_model = wrapped_model.half() - if not args.cpu_offload: - wrapped_model = wrapped_model.to(device=device) - else: - raise ValueError("Unknown --ddp-backend: " + args.ddp_backend) - - # kill hung distributed jobs after a timeout - if getattr(args, "heartbeat_timeout", -1) > 0: - wrapped_model = DistributedTimeoutWrapper( - wrapped_model, timeout=getattr(args, "heartbeat_timeout", -1) - ) - - return wrapped_model diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/lightconv.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/lightconv.py deleted file mode 100644 index 4edfe359379bc2445c1ae1ada04bd34ca4a32798..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/models/lightconv.py +++ /dev/null @@ -1,1019 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderDecoderModel, - FairseqIncrementalDecoder, - register_model, - register_model_architecture, -) -from fairseq.modules import ( - AdaptiveSoftmax, - DynamicConv, - FairseqDropout, - LayerNorm, - LightweightConv, - MultiheadAttention, - PositionalEmbedding, -) -from fairseq.utils import safe_hasattr - - -@register_model("lightconv") -class LightConvModel(FairseqEncoderDecoderModel): - """ - LightConv and DynamicConv model from `"Pay Less Attention with Lightweight and Dynamic Convolutions" (Wu, et al, 2019) - `_. - To use LightConv please set ``--encoder-conv-type lightweight --decoder-conv-type lightweight`` - To use DynamicConv please set ``--encoder-conv-type dynamic --decoder-conv-type dynamic`` - - Args: - encoder (LightConvEncoder): the encoder - decoder (LightConvDecoder): the decoder - - The LightConv model provides the following named architectures and - command-line arguments: - - .. 
argparse:: - :ref: fairseq.models.lightconv_parser - :prog: - """ - - @classmethod - def hub_models(cls): - # fmt: off - - def moses_subword(path): - return { - 'path': path, - 'tokenizer': 'moses', - 'bpe': 'subword_nmt', - } - - return { - 'lightconv.no_glu.iwslt14.de-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.lightconv.tar.gz'), - 'dynamicconv.no_glu.iwslt14.de-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.dynamicconv.tar.gz'), - 'lightconv.no_glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv.tar.gz'), - 'dynamicconv.no_glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv.tar.gz'), - 'lightconv.glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv-glu.tar.gz'), - 'dynamicconv.glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv-glu.tar.gz'), - 'lightconv.glu.wmt17.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv-glu.tar.gz'), - 'dynamicconv.glu.wmt17.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv-glu.tar.gz'), - 'lightconv.glu.wmt14.en-fr': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.lightconv-glu.tar.gz'), - 'dynamicconv.glu.wmt14.en-fr': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.dynamicconv-glu.tar.gz'), - 'lightconv.glu.wmt17.zh-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.lightconv-glu.tar.gz'), - 'dynamicconv.glu.wmt17.zh-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.dynamicconv-glu.tar.gz'), - } - # fmt: on - - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - parser.add_argument( - "--dropout", type=float, metavar="D", help="dropout probability" - ) - parser.add_argument( - "--attention-dropout", - type=float, - metavar="D", - help="dropout probability for attention weights", - ) - parser.add_argument( - "--relu-dropout", - type=float, - metavar="D", - help="dropout probability after ReLU in FFN", - ) - parser.add_argument( - "--input-dropout", - type=float, - metavar="D", - help="dropout probability of the inputs", - ) - parser.add_argument( - "--encoder-embed-path", - type=str, - metavar="STR", - help="path to pre-trained encoder embedding", - ) - parser.add_argument( - "--encoder-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--encoder-conv-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--encoder-ffn-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension for FFN", - ) - parser.add_argument( - "--encoder-layers", type=int, metavar="N", help="num encoder layers" - ) - parser.add_argument( - "--encoder-attention-heads", - type=int, - metavar="N", - help="num encoder attention heads or LightConv/DynamicConv heads", - ) - parser.add_argument( - "--encoder-normalize-before", - action="store_true", - help="apply layernorm 
before each encoder block", - ) - parser.add_argument( - "--encoder-learned-pos", - action="store_true", - help="use learned positional embeddings in the encoder", - ) - parser.add_argument( - "--decoder-embed-path", - type=str, - metavar="STR", - help="path to pre-trained decoder embedding", - ) - parser.add_argument( - "--decoder-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-conv-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-ffn-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension for FFN", - ) - parser.add_argument( - "--decoder-layers", type=int, metavar="N", help="num decoder layers" - ) - parser.add_argument( - "--decoder-attention-heads", - type=int, - metavar="N", - help="num decoder attention heads or LightConv/DynamicConv heads", - ) - parser.add_argument( - "--decoder-learned-pos", - action="store_true", - help="use learned positional embeddings in the decoder", - ) - parser.add_argument( - "--decoder-normalize-before", - action="store_true", - help="apply layernorm before each decoder block", - ) - parser.add_argument( - "--share-decoder-input-output-embed", - action="store_true", - help="share decoder input and output embeddings", - ) - parser.add_argument( - "--share-all-embeddings", - action="store_true", - help="share encoder, decoder and output embeddings" - " (requires shared dictionary and embed dim)", - ) - parser.add_argument( - "--adaptive-softmax-cutoff", - metavar="EXPR", - help="comma separated list of adaptive softmax cutoff points. " - "Must be used with adaptive_loss criterion", - ), - parser.add_argument( - "--adaptive-softmax-dropout", - type=float, - metavar="D", - help="sets adaptive softmax dropout for the tail projections", - ) - - """LightConv and DynamicConv arguments""" - parser.add_argument( - "--encoder-kernel-size-list", - type=lambda x: utils.eval_str_list(x, int), - help='list of kernel size (default: "[3,7,15,31,31,31,31]")', - ) - parser.add_argument( - "--decoder-kernel-size-list", - type=lambda x: utils.eval_str_list(x, int), - help='list of kernel size (default: "[3,7,15,31,31,31]")', - ) - parser.add_argument( - "--encoder-glu", type=utils.eval_bool, help="glu after in proj" - ) - parser.add_argument( - "--decoder-glu", type=utils.eval_bool, help="glu after in proj" - ) - parser.add_argument( - "--encoder-conv-type", - default="dynamic", - type=str, - choices=["dynamic", "lightweight"], - help="type of convolution", - ) - parser.add_argument( - "--decoder-conv-type", - default="dynamic", - type=str, - choices=["dynamic", "lightweight"], - help="type of convolution", - ) - parser.add_argument("--weight-softmax", default=True, type=utils.eval_bool) - parser.add_argument( - "--weight-dropout", - type=float, - metavar="D", - help="dropout probability for conv weights", - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - - # make sure all arguments are present in older models - base_architecture(args) - - if not safe_hasattr(args, "max_source_positions"): - args.max_source_positions = 1024 - if not safe_hasattr(args, "max_target_positions"): - args.max_target_positions = 1024 - - src_dict, tgt_dict = task.source_dictionary, task.target_dictionary - - def build_embedding(dictionary, embed_dim, path=None): - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - emb = Embedding(num_embeddings, embed_dim, padding_idx) - # if provided, load 
from preloaded dictionaries - if path: - embed_dict = utils.parse_embedding(path) - utils.load_embedding(embed_dict, dictionary, emb) - return emb - - if args.share_all_embeddings: - if src_dict != tgt_dict: - raise RuntimeError( - "--share-all-embeddings requires a joined dictionary" - ) - if args.encoder_embed_dim != args.decoder_embed_dim: - raise RuntimeError( - "--share-all-embeddings requires --encoder-embed-dim to match --decoder-embed-dim" - ) - if args.decoder_embed_path and ( - args.decoder_embed_path != args.encoder_embed_path - ): - raise RuntimeError( - "--share-all-embeddings not compatible with --decoder-embed-path" - ) - encoder_embed_tokens = build_embedding( - src_dict, args.encoder_embed_dim, args.encoder_embed_path - ) - decoder_embed_tokens = encoder_embed_tokens - args.share_decoder_input_output_embed = True - else: - encoder_embed_tokens = build_embedding( - src_dict, args.encoder_embed_dim, args.encoder_embed_path - ) - decoder_embed_tokens = build_embedding( - tgt_dict, args.decoder_embed_dim, args.decoder_embed_path - ) - - encoder = LightConvEncoder(args, src_dict, encoder_embed_tokens) - decoder = LightConvDecoder(args, tgt_dict, decoder_embed_tokens) - return LightConvModel(encoder, decoder) - - -class LightConvEncoder(FairseqEncoder): - """ - LightConv encoder consisting of *args.encoder_layers* layers. Each layer - is a :class:`LightConvEncoderLayer`. - - Args: - args (argparse.Namespace): parsed command-line arguments - dictionary (~fairseq.data.Dictionary): encoding dictionary - embed_tokens (torch.nn.Embedding): input embedding - """ - - def __init__(self, args, dictionary, embed_tokens): - super().__init__(dictionary) - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - - embed_dim = embed_tokens.embedding_dim - self.padding_idx = embed_tokens.padding_idx - self.max_source_positions = args.max_source_positions - - self.embed_tokens = embed_tokens - self.embed_scale = math.sqrt(embed_dim) - self.embed_positions = ( - PositionalEmbedding( - args.max_source_positions, - embed_dim, - self.padding_idx, - learned=args.encoder_learned_pos, - ) - if not args.no_token_positional_embeddings - else None - ) - - self.layers = nn.ModuleList([]) - self.layers.extend( - [ - LightConvEncoderLayer( - args, kernel_size=args.encoder_kernel_size_list[i] - ) - for i in range(args.encoder_layers) - ] - ) - self.register_buffer("version", torch.Tensor([2])) - self.normalize = args.encoder_normalize_before - if self.normalize: - self.layer_norm = LayerNorm(embed_dim) - - def forward(self, src_tokens, **unused): - """ - Args: - src_tokens (LongTensor): tokens in the source language of shape - `(batch, src_len)` - - Returns: - dict: - - **encoder_out** (Tensor): the last encoder layer's output of - shape `(src_len, batch, embed_dim)` - - **encoder_padding_mask** (ByteTensor): the positions of - padding elements of shape `(batch, src_len)` - """ - # embed tokens and positions - x = self.embed_scale * self.embed_tokens(src_tokens) - if self.embed_positions is not None: - x += self.embed_positions(src_tokens) - x = self.dropout_module(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # compute padding mask - encoder_padding_mask = src_tokens.eq(self.padding_idx) - if not encoder_padding_mask.any(): - encoder_padding_mask = None - - # encoder layers - for layer in self.layers: - x = layer(x, encoder_padding_mask) - - if self.normalize: - x = self.layer_norm(x) - - return { - "encoder_out": x, # T x B x C - "encoder_padding_mask": 
encoder_padding_mask, # B x T - } - - def reorder_encoder_out(self, encoder_out, new_order): - """ - Reorder encoder output according to *new_order*. - - Args: - encoder_out: output from the ``forward()`` method - new_order (LongTensor): desired order - - Returns: - *encoder_out* rearranged according to *new_order* - """ - if encoder_out["encoder_out"] is not None: - encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select( - 1, new_order - ) - if encoder_out["encoder_padding_mask"] is not None: - encoder_out["encoder_padding_mask"] = encoder_out[ - "encoder_padding_mask" - ].index_select(0, new_order) - return encoder_out - - def max_positions(self): - """Maximum input length supported by the encoder.""" - if self.embed_positions is None: - return self.max_source_positions - return min(self.max_source_positions, self.embed_positions.max_positions) - - -class LightConvDecoder(FairseqIncrementalDecoder): - """ - LightConv decoder consisting of *args.decoder_layers* layers. Each layer - is a :class:`LightConvDecoderLayer`. - - Args: - args (argparse.Namespace): parsed command-line arguments - dictionary (~fairseq.data.Dictionary): decoding dictionary - embed_tokens (torch.nn.Embedding): output embedding - no_encoder_attn (bool, optional): whether to attend to encoder outputs. - Default: ``False`` - """ - - def __init__( - self, args, dictionary, embed_tokens, no_encoder_attn=False, final_norm=True - ): - super().__init__(dictionary) - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - self.share_input_output_embed = args.share_decoder_input_output_embed - - input_embed_dim = embed_tokens.embedding_dim - embed_dim = args.decoder_embed_dim - output_embed_dim = args.decoder_output_dim - - padding_idx = embed_tokens.padding_idx - self.max_target_positions = args.max_target_positions - - self.embed_tokens = embed_tokens - self.embed_scale = math.sqrt(embed_dim) # todo: try with input_embed_dim - - self.project_in_dim = ( - Linear(input_embed_dim, embed_dim, bias=False) - if embed_dim != input_embed_dim - else None - ) - - self.embed_positions = ( - PositionalEmbedding( - args.max_target_positions, - embed_dim, - padding_idx, - learned=args.decoder_learned_pos, - ) - if not args.no_token_positional_embeddings - else None - ) - - self.layers = nn.ModuleList([]) - self.layers.extend( - [ - LightConvDecoderLayer( - args, no_encoder_attn, kernel_size=args.decoder_kernel_size_list[i] - ) - for i in range(args.decoder_layers) - ] - ) - - self.adaptive_softmax = None - - self.project_out_dim = ( - Linear(embed_dim, output_embed_dim, bias=False) - if embed_dim != output_embed_dim and not args.tie_adaptive_weights - else None - ) - - if args.adaptive_softmax_cutoff is not None: - self.adaptive_softmax = AdaptiveSoftmax( - len(dictionary), - output_embed_dim, - utils.eval_str_list(args.adaptive_softmax_cutoff, type=int), - dropout=args.adaptive_softmax_dropout, - adaptive_inputs=embed_tokens if args.tie_adaptive_weights else None, - factor=args.adaptive_softmax_factor, - tie_proj=args.tie_adaptive_proj, - ) - elif not self.share_input_output_embed: - self.embed_out = nn.Parameter( - torch.Tensor(len(dictionary), output_embed_dim) - ) - nn.init.normal_(self.embed_out, mean=0, std=output_embed_dim ** -0.5) - self.register_buffer("version", torch.Tensor([2])) - self.normalize = args.decoder_normalize_before and final_norm - if self.normalize: - self.layer_norm = LayerNorm(embed_dim) - - def forward( - self, prev_output_tokens, encoder_out=None, 
incremental_state=None, **kwargs - ): - """ - Args: - prev_output_tokens (LongTensor): previous decoder outputs of shape - `(batch, tgt_len)`, for teacher forcing - encoder_out (Tensor, optional): output from the encoder, used for - encoder-side attention - incremental_state (dict): dictionary used for storing state during - :ref:`Incremental decoding` - - Returns: - tuple: - - the last decoder layer's output of shape `(batch, tgt_len, - vocab)` - - the last decoder layer's attention weights of shape `(batch, - tgt_len, src_len)` - """ - # embed positions - positions = ( - self.embed_positions( - prev_output_tokens, - incremental_state=incremental_state, - ) - if self.embed_positions is not None - else None - ) - - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:] - if positions is not None: - positions = positions[:, -1:] - - # embed tokens and positions - x = self.embed_scale * self.embed_tokens(prev_output_tokens) - - if self.project_in_dim is not None: - x = self.project_in_dim(x) - - if positions is not None: - x += positions - x = self.dropout_module(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - attn = None - - inner_states = [x] - - # decoder layers - for layer in self.layers: - x, attn = layer( - x, - encoder_out["encoder_out"] if encoder_out is not None else None, - encoder_out["encoder_padding_mask"] - if encoder_out is not None - else None, - incremental_state, - ) - inner_states.append(x) - - if self.normalize: - x = self.layer_norm(x) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - - if self.project_out_dim is not None: - x = self.project_out_dim(x) - - if self.adaptive_softmax is None: - # project back to size of vocabulary - if self.share_input_output_embed: - x = F.linear(x, self.embed_tokens.weight) - else: - x = F.linear(x, self.embed_out) - - return x, {"attn": attn, "inner_states": inner_states} - - def max_positions(self): - """Maximum output length supported by the decoder.""" - if self.embed_positions is None: - return self.max_target_positions - return min(self.max_target_positions, self.embed_positions.max_positions) - - def buffered_future_mask(self, tensor): - dim = tensor.size(0) - if ( - not hasattr(self, "_future_mask") - or self._future_mask is None - or self._future_mask.device != tensor.device - ): - self._future_mask = torch.triu( - utils.fill_with_neg_inf(tensor.new(dim, dim)), 1 - ) - if self._future_mask.size(0) < dim: - self._future_mask = torch.triu( - utils.fill_with_neg_inf(self._future_mask.resize_(dim, dim)), 1 - ) - return self._future_mask[:dim, :dim] - - -class LightConvEncoderLayer(nn.Module): - """Encoder layer block. 
- - Args: - args (argparse.Namespace): parsed command-line arguments - kernel_size: kernel size of the convolution - """ - - def __init__(self, args, kernel_size=0): - super().__init__() - self.embed_dim = args.encoder_embed_dim - self.conv_dim = args.encoder_conv_dim - padding_l = ( - kernel_size // 2 - if kernel_size % 2 == 1 - else ((kernel_size - 1) // 2, kernel_size // 2) - ) - - if args.encoder_glu: - self.linear1 = Linear(self.embed_dim, 2 * self.conv_dim) - self.act = nn.GLU() - else: - self.linear1 = Linear(self.embed_dim, self.conv_dim) - self.act = None - if args.encoder_conv_type == "lightweight": - self.conv = LightweightConv( - self.conv_dim, - kernel_size, - padding_l=padding_l, - weight_softmax=args.weight_softmax, - num_heads=args.encoder_attention_heads, - weight_dropout=args.weight_dropout, - ) - elif args.encoder_conv_type == "dynamic": - self.conv = DynamicConv( - self.conv_dim, - kernel_size, - padding_l=padding_l, - weight_softmax=args.weight_softmax, - num_heads=args.encoder_attention_heads, - weight_dropout=args.weight_dropout, - ) - else: - raise NotImplementedError - self.linear2 = Linear(self.conv_dim, self.embed_dim) - - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - self.relu_dropout_module = FairseqDropout( - args.relu_dropout, module_name=self.__class__.__name__ - ) - self.input_dropout_module = FairseqDropout( - args.input_dropout, module_name=self.__class__.__name__ - ) - self.normalize_before = args.encoder_normalize_before - self.fc1 = Linear(self.embed_dim, args.encoder_ffn_embed_dim) - self.fc2 = Linear(args.encoder_ffn_embed_dim, self.embed_dim) - self.layer_norms = nn.ModuleList([LayerNorm(self.embed_dim) for _ in range(2)]) - - def forward(self, x, encoder_padding_mask): - """ - Args: - x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor): binary ByteTensor of shape - `(batch, src_len)` where padding elements are indicated by ``1``. - - Returns: - encoded output of shape `(batch, src_len, embed_dim)` - """ - residual = x - x = self.maybe_layer_norm(0, x, before=True) - x = self.input_dropout_module(x) - x = self.linear1(x) - if self.act is not None: - x = self.act(x) - if encoder_padding_mask is not None: - x = x.masked_fill(encoder_padding_mask.transpose(0, 1).unsqueeze(2), 0) - x = self.conv(x) - x = self.linear2(x) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(0, x, after=True) - - residual = x - x = self.maybe_layer_norm(1, x, before=True) - x = F.relu(self.fc1(x)) - x = self.relu_dropout_module(x) - x = self.fc2(x) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(1, x, after=True) - return x - - def maybe_layer_norm(self, i, x, before=False, after=False): - assert before ^ after - if after ^ self.normalize_before: - return self.layer_norms[i](x) - else: - return x - - def extra_repr(self): - return ( - "dropout={}, relu_dropout={}, input_dropout={}, normalize_before={}".format( - self.dropout_module.p, - self.relu_dropout_module.p, - self.input_dropout_module.p, - self.normalize_before, - ) - ) - - -class LightConvDecoderLayer(nn.Module): - """Decoder layer block. - - Args: - args (argparse.Namespace): parsed command-line arguments - no_encoder_attn (bool, optional): whether to attend to encoder outputs. 
- Default: ``False`` - kernel_size: kernel size of the convolution - """ - - def __init__(self, args, no_encoder_attn=False, kernel_size=0): - super().__init__() - self.embed_dim = args.decoder_embed_dim - self.conv_dim = args.decoder_conv_dim - if args.decoder_glu: - self.linear1 = Linear(self.embed_dim, 2 * self.conv_dim) - self.act = nn.GLU() - else: - self.linear1 = Linear(self.embed_dim, self.conv_dim) - self.act = None - if args.decoder_conv_type == "lightweight": - self.conv = LightweightConv( - self.conv_dim, - kernel_size, - padding_l=kernel_size - 1, - weight_softmax=args.weight_softmax, - num_heads=args.decoder_attention_heads, - weight_dropout=args.weight_dropout, - ) - elif args.decoder_conv_type == "dynamic": - self.conv = DynamicConv( - self.conv_dim, - kernel_size, - padding_l=kernel_size - 1, - weight_softmax=args.weight_softmax, - num_heads=args.decoder_attention_heads, - weight_dropout=args.weight_dropout, - ) - else: - raise NotImplementedError - self.linear2 = Linear(self.conv_dim, self.embed_dim) - - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - self.relu_dropout_module = FairseqDropout( - args.relu_dropout, module_name=self.__class__.__name__ - ) - self.input_dropout_module = FairseqDropout( - args.input_dropout, module_name=self.__class__.__name__ - ) - self.normalize_before = args.decoder_normalize_before - - self.conv_layer_norm = LayerNorm(self.embed_dim) - - if no_encoder_attn: - self.encoder_attn = None - self.encoder_attn_layer_norm = None - else: - self.encoder_attn = MultiheadAttention( - self.embed_dim, - args.decoder_attention_heads, - dropout=args.attention_dropout, - encoder_decoder_attention=True, - ) - self.encoder_attn_layer_norm = LayerNorm(self.embed_dim) - - self.fc1 = Linear(self.embed_dim, args.decoder_ffn_embed_dim) - self.fc2 = Linear(args.decoder_ffn_embed_dim, self.embed_dim) - - self.final_layer_norm = LayerNorm(self.embed_dim) - self.need_attn = True - - def forward( - self, - x, - encoder_out, - encoder_padding_mask, - incremental_state, - prev_conv_state=None, - prev_attn_state=None, - conv_mask=None, - conv_padding_mask=None, - ): - """ - Args: - x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor): binary ByteTensor of shape - `(batch, src_len)` where padding elements are indicated by ``1``. 
- - Returns: - encoded output of shape `(batch, src_len, embed_dim)` - """ - residual = x - x = self.maybe_layer_norm(self.conv_layer_norm, x, before=True) - if prev_conv_state is not None: - if incremental_state is None: - incremental_state = {} - self.conv._set_input_buffer(incremental_state, prev_conv_state) - x = self.input_dropout_module(x) - x = self.linear1(x) - if self.act is not None: - x = self.act(x) - x = self.conv(x, incremental_state=incremental_state) - x = self.linear2(x) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(self.conv_layer_norm, x, after=True) - - attn = None - if self.encoder_attn is not None: - residual = x - x = self.maybe_layer_norm(self.encoder_attn_layer_norm, x, before=True) - if prev_attn_state is not None: - if incremental_state is None: - incremental_state = {} - prev_key, prev_value = prev_attn_state - saved_state = {"prev_key": prev_key, "prev_value": prev_value} - self.encoder_attn._set_input_buffer(incremental_state, saved_state) - x, attn = self.encoder_attn( - query=x, - key=encoder_out, - value=encoder_out, - key_padding_mask=encoder_padding_mask, - incremental_state=incremental_state, - static_kv=True, - need_weights=(not self.training and self.need_attn), - ) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(self.encoder_attn_layer_norm, x, after=True) - - residual = x - x = self.maybe_layer_norm(self.final_layer_norm, x, before=True) - x = F.relu(self.fc1(x)) - x = self.relu_dropout_module(x) - x = self.fc2(x) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(self.final_layer_norm, x, after=True) - return x, attn - - def maybe_layer_norm(self, layer_norm, x, before=False, after=False): - assert before ^ after - if after ^ self.normalize_before: - return layer_norm(x) - else: - return x - - def make_generation_fast_(self, need_attn=False, **kwargs): - self.need_attn = need_attn - - def extra_repr(self): - return ( - "dropout={}, relu_dropout={}, input_dropout={}, normalize_before={}".format( - self.dropout_module.p, - self.relu_dropout_module.p, - self.input_dropout_module.p, - self.normalize_before, - ) - ) - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5) - nn.init.constant_(m.weight[padding_idx], 0) - return m - - -def Linear(in_features, out_features, bias=True): - m = nn.Linear(in_features, out_features, bias) - nn.init.xavier_uniform_(m.weight) - if bias: - nn.init.constant_(m.bias, 0.0) - return m - - -@register_model_architecture("lightconv", "lightconv") -def base_architecture(args): - args.encoder_embed_path = getattr(args, "encoder_embed_path", None) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048) - args.encoder_layers = getattr(args, "encoder_layers", 7) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 6) - 
args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.relu_dropout = getattr(args, "relu_dropout", 0.0) - args.dropout = getattr(args, "dropout", 0.1) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.share_all_embeddings = getattr(args, "share_all_embeddings", False) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - - args.encoder_conv_dim = getattr(args, "encoder_conv_dim", args.encoder_embed_dim) - args.decoder_conv_dim = getattr(args, "decoder_conv_dim", args.decoder_embed_dim) - - args.encoder_kernel_size_list = getattr( - args, "encoder_kernel_size_list", [3, 7, 15, 31, 31, 31, 31] - ) - args.decoder_kernel_size_list = getattr( - args, "decoder_kernel_size_list", [3, 7, 15, 31, 31, 31] - ) - if len(args.encoder_kernel_size_list) == 1: - args.encoder_kernel_size_list = ( - args.encoder_kernel_size_list * args.encoder_layers - ) - if len(args.decoder_kernel_size_list) == 1: - args.decoder_kernel_size_list = ( - args.decoder_kernel_size_list * args.decoder_layers - ) - assert ( - len(args.encoder_kernel_size_list) == args.encoder_layers - ), "encoder_kernel_size_list doesn't match encoder_layers" - assert ( - len(args.decoder_kernel_size_list) == args.decoder_layers - ), "decoder_kernel_size_list doesn't match decoder_layers" - args.encoder_glu = getattr(args, "encoder_glu", True) - args.decoder_glu = getattr(args, "decoder_glu", True) - args.input_dropout = getattr(args, "input_dropout", 0.1) - args.weight_dropout = getattr(args, "weight_dropout", args.attention_dropout) - - -@register_model_architecture("lightconv", "lightconv_iwslt_de_en") -def lightconv_iwslt_de_en(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 1024) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4) - args.encoder_layers = getattr(args, "encoder_layers", 7) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 1024) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.weight_dropout = getattr(args, "weight_dropout", 0.1) - args.encoder_glu = getattr(args, "encoder_glu", False) - args.decoder_glu = getattr(args, "decoder_glu", False) - args.input_dropout = getattr(args, "input_dropout", 0.0) - base_architecture(args) - - -@register_model_architecture("lightconv", "lightconv_wmt_en_de") -def lightconv_wmt_en_de(args): - base_architecture(args) - - -@register_model_architecture("lightconv", "lightconv_wmt_en_de_big") -def lightconv_wmt_en_de_big(args): - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.encoder_embed_dim = getattr(args, 
"encoder_embed_dim", 1024) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1024) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - args.dropout = getattr(args, "dropout", 0.3) - base_architecture(args) - - -@register_model_architecture("lightconv", "lightconv_wmt_en_fr_big") -def lightconv_wmt_en_fr_big(args): - args.dropout = getattr(args, "dropout", 0.1) - lightconv_wmt_en_de_big(args) - - -@register_model_architecture("lightconv", "lightconv_wmt_zh_en_big") -def lightconv_wmt_zh_en_big(args): - args.dropout = getattr(args, "dropout", 0.2) - args.attention_dropout = getattr(args, "attention_dropout", 0.2) - args.weight_dropout = getattr(args, "weight_dropout", 0.2) - lightconv_wmt_en_de_big(args) diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/optim/lr_scheduler/cosine_lr_scheduler.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/optim/lr_scheduler/cosine_lr_scheduler.py deleted file mode 100644 index 51f58359eda387d67748f48217906ac6d16ccd08..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/optim/lr_scheduler/cosine_lr_scheduler.py +++ /dev/null @@ -1,147 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from collections.abc import Collection -from dataclasses import dataclass, field -from typing import List - -from omegaconf import II - -from fairseq.dataclass import FairseqDataclass -from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler - - -@dataclass -class CosineLRScheduleConfig(FairseqDataclass): - warmup_updates: int = field( - default=0, - metadata={"help": "warmup the learning rate linearly for the first N updates"}, - ) - warmup_init_lr: float = field( - default=-1, - metadata={ - "help": "initial learning rate during warmup phase; default is cfg.lr" - }, - ) - lr: List[float] = field( - default=II("optimization.lr"), - metadata={"help": "max learning rate, must be more than cfg.min_lr"}, - ) - min_lr: float = field(default=0.0, metadata={"help": "min learning rate"}) - t_mult: float = field( - default=1.0, metadata={"help": "factor to grow the length of each period"} - ) - lr_period_updates: float = field( - default=-1, metadata={"help": "initial number of updates per period"} - ) - lr_shrink: float = field( - default=0.1, metadata={"help": "shrink factor for annealing"} - ) - # This is not required, but is for convenience in inferring lr_period_updates - max_update: int = II("optimization.max_update") - - -@register_lr_scheduler("cosine", dataclass=CosineLRScheduleConfig) -class CosineLRSchedule(FairseqLRScheduler): - """Assign LR based on a cyclical schedule that follows the cosine function. - - See https://arxiv.org/pdf/1608.03983.pdf for details. - - We also support a warmup phase where we linearly increase the learning rate - from some initial learning rate (``--warmup-init-lr``) until the configured - max learning rate (``--lr``). 
- - During warmup:: - - lrs = torch.linspace(cfg.warmup_init_lr, cfg.lr, cfg.warmup_updates) - lr = lrs[update_num] - - After warmup:: - - lr = cfg.min_lr + 0.5*(cfg.lr - cfg.min_lr)*(1 + cos(t_curr / t_i)) - - where ``t_curr`` is current percentage of updates within the current period - range and ``t_i`` is the current period range, which is scaled by ``t_mul`` - after every iteration. - """ - - def __init__(self, cfg: CosineLRScheduleConfig, fairseq_optimizer): - super().__init__(cfg, fairseq_optimizer) - if isinstance(cfg.lr, Collection) and len(cfg.lr) > 1: - raise ValueError( - "Cannot use a fixed learning rate schedule with cosine." - f" Consider --lr-scheduler=fixed instead. ({cfg.lr})" - ) - - self.max_lr = cfg.lr[0] if isinstance(cfg.lr, Collection) else cfg.lr - assert ( - self.max_lr > cfg.min_lr - ), f"max_lr (={cfg.lr}) must be more than min_lr (={cfg.min_lr})" - - warmup_end_lr = self.max_lr - if cfg.warmup_init_lr < 0: - cfg.warmup_init_lr = cfg.min_lr - - self.t_mult = cfg.t_mult - self.period = cfg.lr_period_updates - - if self.period <= 0: - assert ( - cfg.max_update > 0 - ), "Either --max_update or --lr-period-updates must be set" - self.period = cfg.max_update - cfg.warmup_updates - - if cfg.warmup_updates > 0: - # linearly warmup for the first cfg.warmup_updates - self.lr_step = (warmup_end_lr - cfg.warmup_init_lr) / cfg.warmup_updates - else: - self.lr_step = 1 - - self.warmup_updates = cfg.warmup_updates - self.lr_shrink = cfg.lr_shrink - - # initial learning rate - self.lr = cfg.warmup_init_lr - self.optimizer.set_lr(self.lr) - - def step(self, epoch, val_loss=None): - """Update the learning rate at the end of the given epoch.""" - super().step(epoch, val_loss) - # we don't change the learning rate at epoch boundaries - return self.optimizer.get_lr() - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - if num_updates < self.cfg.warmup_updates: - self.lr = self.cfg.warmup_init_lr + num_updates * self.lr_step - else: - curr_updates = num_updates - self.cfg.warmup_updates - if self.t_mult != 1: - i = math.floor( - math.log( - 1 - curr_updates / self.period * (1 - self.t_mult), self.t_mult - ) - ) - t_i = self.t_mult ** i * self.period - t_curr = ( - curr_updates - - (1 - self.t_mult ** i) / (1 - self.t_mult) * self.period - ) - else: - i = math.floor(curr_updates / self.period) - t_i = self.period - t_curr = curr_updates - (self.period * i) - - lr_shrink = self.lr_shrink ** i - min_lr = self.cfg.min_lr * lr_shrink - max_lr = self.max_lr * lr_shrink - - self.lr = min_lr + 0.5 * (max_lr - min_lr) * ( - 1 + math.cos(math.pi * t_curr / t_i) - ) - - self.optimizer.set_lr(self.lr) - return self.lr diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/tests/test_multihead_attention.py b/spaces/sriramelango/Social_Classification_Public/fairseq/tests/test_multihead_attention.py deleted file mode 100644 index 620a2d679147bbbb8d15f3323374a39939686ec2..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/tests/test_multihead_attention.py +++ /dev/null @@ -1,73 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import unittest - -import torch -from fairseq.modules.multihead_attention import MultiheadAttention - - -class TestMultiheadAttention(unittest.TestCase): - def test_append_prev_key_padding_mask(self): - bsz = 1 - src_len = 4 - - cases = [ - # no padding mask - (None, None, None), - # current padding mask only - ( - torch.tensor([[1]]).bool(), - None, - torch.tensor([[0, 0, 0, 1]]).bool(), - ), - # previous padding mask only - ( - None, - torch.tensor([[0, 1, 0]]).bool(), - torch.tensor([[0, 1, 0, 0]]).bool(), - ), - # both padding masks - ( - torch.tensor([[1]]).bool(), - torch.tensor([[0, 1, 0]]).bool(), - torch.tensor([[0, 1, 0, 1]]).bool(), - ), - # prev_key_padding_mask already full - ( - torch.tensor([[0, 1, 0, 1]]).bool(), - None, - torch.tensor([[0, 1, 0, 1]]).bool(), - ), - # key_padding_mask already full - ( - None, - torch.tensor([[0, 1, 0, 1]]).bool(), - torch.tensor([[0, 1, 0, 1]]).bool(), - ), - ] - for c in cases: - key_padding_mask = MultiheadAttention._append_prev_key_padding_mask( - c[0], - c[1], - batch_size=bsz, - src_len=src_len, - static_kv=False, - ) - - if key_padding_mask is not None: - self.assertTrue( - torch.all(torch.eq(key_padding_mask, c[2])), - f"Unexpected resultant key padding mask: {key_padding_mask}" - f" given current: {c[0]} and previous: {c[1]}", - ) - self.assertEqual(key_padding_mask.size(0), bsz) - self.assertEqual(key_padding_mask.size(1), src_len) - else: - self.assertIsNone(c[2]) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/stomexserde/gpt4-ui/Examples/Blue Is The Warmest Colour (2013) BRRip 720p Dual Audio [French-English].md b/spaces/stomexserde/gpt4-ui/Examples/Blue Is The Warmest Colour (2013) BRRip 720p Dual Audio [French-English].md deleted file mode 100644 index b33c51c0eb38abd7145ab6793010a58c3a904965..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Blue Is The Warmest Colour (2013) BRRip 720p Dual Audio [French-English].md +++ /dev/null @@ -1,24 +0,0 @@ -

      Blue Is the Warmest Colour: A Passionate and Controversial Romance


      Blue Is the Warmest Colour is a 2013 French film directed by Abdellatif Kechiche and starring Léa Seydoux and Adèle Exarchopoulos. The film is based on a graphic novel by Julie Maroh and tells the story of a young woman, Adèle, who falls in love with a blue-haired artist, Emma. The film explores their relationship over several years, as they face challenges of social acceptance, sexual identity, and personal growth.


      Blue is the Warmest Colour (2013) BRRip 720p Dual Audio [French-English]


      Download Filehttps://urlgoal.com/2uIb61




      The film received critical acclaim and won the Palme d'Or at the 2013 Cannes Film Festival, as well as several other awards and nominations. The film was praised for its realistic and intimate portrayal of love and sexuality, as well as the performances of the two lead actresses. The film also sparked controversy for its explicit and lengthy sex scenes, which some critics and viewers found gratuitous, exploitative, or unrealistic. The film was also criticized by some members of the LGBT community for being made by a heterosexual male director and for depicting a stereotypical lesbian relationship.


      Blue Is the Warmest Colour is a film that provokes strong emotions and reactions from its audience. It is a film that challenges the conventions of mainstream cinema and offers a different perspective on love and sexuality. It is a film that celebrates the beauty and complexity of human relationships, regardless of gender or orientation.


      The film also raises some interesting questions about the themes and meanings of blue as a color and a symbol. Blue is the color of Emma's hair, which changes throughout the film as a reflection of her personality and mood. Blue is also the color of Adèle's diary, where she writes her thoughts and feelings about Emma. Blue is the color of the sky and the sea, which suggest freedom, openness, and depth. Blue is also the color of sadness, coldness, and distance, which imply loss, isolation, and detachment.


      Some critics have argued that blue is not a suitable color to represent lesbian love, as it is traditionally associated with masculinity and heterosexuality. They have accused Kechiche of imposing his own straight male vision on a lesbian story, and of ignoring the diversity and richness of lesbian culture and identity. They have also pointed out that the original graphic novel by Julie Maroh uses a different color scheme, with more emphasis on red and pink as colors of passion, warmth, and femininity.


      However, others have defended Kechiche's choice of blue as a creative and subversive one, as it challenges the stereotypes and expectations of lesbian representation. They have argued that blue is a versatile and complex color, that can convey different emotions and meanings depending on the context and interpretation. They have also suggested that blue is a color that transcends gender and sexuality, and that can express a universal human experience of love and desire.


      The film also had a significant impact on the public and the critics, both in France and internationally. The film was released in France in October 2013, amid a heated debate over the legalization of same-sex marriage and adoption, which was passed by the French parliament in April 2013. The film was seen by some as a timely and powerful statement of support for LGBT rights and equality, while others criticized it as a misrepresentation or appropriation of lesbian culture and identity. The film also sparked discussions about the ethics and aesthetics of filmmaking, especially regarding the working conditions of the actors and crew, the artistic freedom of the director, and the role of the critics and the audience.


      The film was widely acclaimed by many critics, who praised its realism, emotion, and performances. The film won several awards and nominations, including the Palme d'Or at the 2013 Cannes Film Festival, which was shared by Kechiche and his two lead actresses. The film also received nominations for the Golden Globe Award for Best Foreign Language Film and the BAFTA Award for Best Film Not in the English Language. The film was also included in many critics' lists of the best films of 2013.


      However, the film also faced some controversy and criticism, mainly for its explicit and lengthy sex scenes, which some viewers found unnecessary, unrealistic, or offensive. Some critics also questioned Kechiche's perspective and motives as a heterosexual male director adapting a lesbian story, and accused him of fetishizing or objectifying his female characters. Some members of the LGBT community also expressed their dissatisfaction with the film's portrayal of lesbian love and sexuality, and claimed that it did not reflect their experiences or realities.


      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Download I386 Folder For Windows Xp Sp3 Free.md b/spaces/stomexserde/gpt4-ui/Examples/Download I386 Folder For Windows Xp Sp3 Free.md deleted file mode 100644 index a836f2db98bec06425bb22146f8bff5b74492cac..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Download I386 Folder For Windows Xp Sp3 Free.md +++ /dev/null @@ -1,34 +0,0 @@ -

      How to Download i386 Folder for Windows XP SP3 Free


      If you are looking for a way to download the i386 folder for Windows XP SP3 free, you might be wondering what it is and why you need it. The i386 folder contains the files that are used to install or repair Windows XP on your computer. It is usually located on the Windows XP installation CD or DVD, but if you don't have one, you can also download it from the internet.


      Downloading the i386 folder for Windows XP SP3 free can be useful if you want to perform a system recovery, reinstall Windows XP, or fix some errors that might prevent your computer from booting up. However, you need to be careful about where you download it from, as some sources might contain viruses or malware that can harm your computer.


      download i386 folder for windows xp sp3 free


      Download 🗹 https://urlgoal.com/2uIbRA




      In this article, we will show you how to download the i386 folder for Windows XP SP3 free from a reliable and safe source. We will also explain how to use it to repair or reinstall Windows XP on your computer.


      Steps to Download i386 Folder for Windows XP SP3 Free


Before you download or install Windows XP SP3, you need to check your hard disk space. Depending on where you obtain Windows XP SP3, you will need between 1.5 GB and 3.5 GB of free space; a quick way to check this is sketched after the list below. You also need to make sure that your computer meets the minimum system requirements for Windows XP SP3, which are:

• Processor: Pentium 233-megahertz (MHz) processor or faster (300 MHz is recommended)
• Memory: At least 64 megabytes (MB) of RAM (128 MB is recommended)
• Hard disk space: At least 1.5 gigabytes (GB) of available space on the hard disk
• CD-ROM or DVD-ROM drive
• Keyboard and a Microsoft Mouse or some other compatible pointing device
• Video adapter and monitor with Super VGA (800 x 600) or higher resolution
• Sound card
• Speakers or headphones
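If you want to check the free space programmatically rather than through My Computer, a few lines of Python are enough. This is a minimal sketch; the C:\ drive is an assumption, so point it at whichever drive you plan to install on:

```python
import shutil

# 1.5 GB is the documented minimum; up to 3.5 GB may be needed
# depending on where you obtain Windows XP SP3.
REQUIRED_BYTES = 1.5 * 1024**3

usage = shutil.disk_usage("C:\\")  # assumed install drive
print(f"Free space: {usage.free / 1024**3:.2f} GB")
if usage.free < REQUIRED_BYTES:
    print("Not enough free space for Windows XP SP3.")
```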

      Once you have verified that your computer meets these requirements, you can follow these steps to download the i386 folder for Windows XP SP3 free:

1. Go to https://archive.org/details/WinXPProSP3x86, which is a trusted source that provides the original version of Windows XP Professional with Service Pack 3 in ISO format. This ISO file contains the i386 folder that you need.
2. Click on the "DOWNLOAD OPTIONS" menu and choose either "ISO IMAGE" or "TORRENT". The ISO image is a single file that you can download directly, while the torrent is a small file that you can use with a torrent client such as BitTorrent or uTorrent to download the ISO file faster.
3. Save the file to your computer and wait for the download to finish (see the checksum sketch after this list to verify the download).
4. If you downloaded the ISO image, you can either burn it to a CD or DVD using software such as ImgBurn or CDBurnerXP, or mount it as a virtual drive using software such as Daemon Tools Lite or Virtual CloneDrive. If you downloaded the torrent, you need to open it with your torrent client and wait for it to download the ISO file, then follow the same steps as above.
5. Once you have the ISO file ready, you can either use it to install Windows XP SP3 on your computer, or extract the i386 folder from it and use it to repair your existing Windows XP installation.
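Because ISO files from the internet can be tampered with, it is a good idea to verify the checksum of your download before using it. Archive.org publishes checksums for the files it hosts; the sketch below compares your copy against one. The file name and the MD5 value here are placeholders, so substitute the ones shown on the download page:

```python
import hashlib

EXPECTED_MD5 = "0123456789abcdef0123456789abcdef"  # placeholder checksum

md5 = hashlib.md5()
with open("WinXPProSP3x86.iso", "rb") as f:  # assumed file name
    # Read in 1 MB chunks so large ISOs don't need to fit in memory.
    for chunk in iter(lambda: f.read(1024 * 1024), b""):
        md5.update(chunk)

print("Checksum OK" if md5.hexdigest() == EXPECTED_MD5 else "Checksum mismatch - do not use this file!")
```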

      How to Use i386 Folder to Repair or Reinstall Windows XP


      If you want to use the i386 folder to repair your existing Windows XP installation, you need to copy it to your hard drive first. You can do this by following these steps:

1. Insert the CD or DVD that contains the ISO file into your computer's drive, or mount the ISO file as a virtual drive (the copy itself can also be scripted; see the sketch after this list).
2. Open My Computer and locate the drive that contains the ISO file. It should have a label such as "WinXPProSP
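The copy itself can also be done in a couple of lines of Python instead of dragging folders around. A minimal sketch, assuming the ISO is mounted as drive D: and that you want the folder at C:\I386 (both paths are assumptions; adjust them to match your system):

```python
import shutil

# Copy the i386 folder from the mounted ISO (assumed D:) to the hard drive.
# Note: copytree fails if C:\I386 already exists.
shutil.copytree(r"D:\I386", r"C:\I386")
print(r"i386 folder copied to C:\I386")
```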


        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Download Sehar Full Movie !!TOP!!.md b/spaces/stomexserde/gpt4-ui/Examples/Download Sehar Full Movie !!TOP!!.md deleted file mode 100644 index b36be503594839dbf46f2f2451fe86c18aa9e5f4..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Download Sehar Full Movie !!TOP!!.md +++ /dev/null @@ -1,17 +0,0 @@ - -

        Download Sehar Full Movie: A Gripping Crime Thriller Based on True Events


Sehar is a 2005 Hindi action drama film starring Arshad Warsi and Mahima Chaudhary. The movie is loosely based on the police encounter that eliminated a notorious gangster from UP. The story revolves around officer Ajay Kumar, who sets up a special task force to curb criminal activities and arrest the notorious crime boss Gajraj Singh.


The movie is directed by Kabeer Kaushik and also features Pankaj Kapur, Sushant Singh, Naved Aslam, Suhasini Mulay and Rajendra Gupta in pivotal roles. The movie received critical acclaim, and Arshad Warsi won laurels for taking on a film beyond his comic-hero image. The movie also has a low-key soundtrack composed by Daniel B. George, with lyrics by Swanand Kirkire and Nilanjana Kishore.


        Download Sehar Full Movie


        DOWNLOAD ✏ ✏ ✏ https://urlgoal.com/2uI9f7




If you are looking for a well-made drama that depicts law enforcement in a positive light and tackles organized crime in a realistic manner, then Sehar is a must-watch for you. You can download Sehar full movie online on ZEE5, where you can also watch other movies and shows across genres and languages.


Sehar is not just a crime thriller, but also a social commentary on the state of affairs in Uttar Pradesh. The movie exposes the nexus between the police, politicians and criminals, and how it affects the common people. The movie also shows the challenges faced by honest officers who try to uphold the law and justice. The movie is inspired by the real-life story of SSP Arun Kumar, who led the STF that eliminated Shri Prakash Shukla, one of India's most wanted criminals.


        The movie has some powerful performances by the lead actors, especially Arshad Warsi and Pankaj Kapur. Arshad Warsi plays the role of SSP Ajay Kumar with conviction and intensity. He portrays the character's courage, determination and vulnerability with finesse. Pankaj Kapur plays the role of Prof. Bhole Shankar Tiwari, a retired professor who becomes a mentor and guide for Ajay Kumar. He delivers a nuanced and layered performance as a man who has seen it all and knows how to deal with it. Mahima Chaudhary plays the role of Anamika Kant, a journalist who falls in love with Ajay Kumar. She provides a romantic angle to the otherwise gritty plot.



        Sehar is a movie that will keep you hooked till the end with its gripping plot, realistic dialogues, thrilling action sequences and brilliant direction. The movie is a rare gem in Bollywood that does not resort to melodrama or cliches. The movie is a tribute to the brave officers who risk their lives to protect the society from evil forces. Download Sehar full movie online on ZEE5 and watch it today.


        Sehar is not just a movie, but also a learning experience. The movie teaches us some valuable lessons about life, society and morality. The movie shows us how corruption and crime can ruin a state and its people. The movie also shows us how honesty and courage can make a difference and bring about positive change. The movie inspires us to stand up for what is right and fight against what is wrong.


        Sehar is also a movie that celebrates the spirit of India. The movie showcases the diversity and richness of Indian culture and heritage. The movie depicts the beauty and charm of Lucknow, the city of Nawabs. The movie also features some of the famous landmarks and monuments of Uttar Pradesh, such as the Bara Imambara, the Rumi Darwaza and the Chota Imambara. The movie also incorporates some of the elements of Awadhi cuisine, music and literature.


        Sehar is a movie that you should not miss. The movie is a rare combination of entertainment and enlightenment. The movie is a masterpiece of Indian cinema that deserves to be watched and appreciated by everyone. Download Sehar full movie online on ZEE5 and enjoy this amazing movie with your family and friends.

        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Friday The 13th The Game Beta Hack Tool Free Download.md b/spaces/stomexserde/gpt4-ui/Examples/Friday The 13th The Game Beta Hack Tool Free Download.md deleted file mode 100644 index 6ac942871ee39a3692ece8fffafe779daf205946..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Friday The 13th The Game Beta Hack Tool Free Download.md +++ /dev/null @@ -1,25 +0,0 @@ - -

        How to Download and Install Friday the 13th: The Game Beta Hack Tool for Free

        -

        Friday the 13th: The Game is a survival horror game based on the popular movie franchise, where you can play as either Jason Voorhees or one of the counselors trying to escape his wrath. The game features various modes, maps, characters, and weapons, as well as a single-player mode that lets you experience the story of the movies.

        -

        Friday the 13th: The Game Beta hack tool free download


        Download File >>>>> https://urlgoal.com/2uI7bc



        -

        If you want to spice up your gameplay and get an edge over your opponents, you might be interested in downloading and installing a hack tool for Friday the 13th: The Game Beta. A hack tool is a software that modifies the game's code to give you access to features such as aimbot, ESP, radar, and more. With a hack tool, you can easily hunt down or evade Jason, find items and weapons, and win every match.

        -

        However, finding a reliable and safe hack tool for Friday the 13th: The Game Beta can be tricky. There are many websites and videos that claim to offer free download links for hack tools, but some of them are scams, viruses, or outdated. You need to be careful and do your research before downloading anything from unknown sources.

        -

        To help you out, we have compiled a list of steps that you can follow to download and install a working hack tool for Friday the 13th: The Game Beta. Follow these steps carefully and enjoy your hacked game!

        -

        Step 1: Join a Discord Server

        -

        The first step is to join a Discord server that provides hack tools for Friday the 13th: The Game Beta. Discord is a popular chat app that allows gamers to communicate and share files with each other. There are many Discord servers dedicated to hacking games, but not all of them are trustworthy or active.

        -

        One of the best Discord servers that we recommend is CheatAutomation. CheatAutomation is a website that offers premium hacks for various games, including Friday the 13th: The Game. They also have a Discord server where they share free beta versions of their hack tools for testing purposes. You can join their Discord server by clicking on this link: https://cheatautomation.com/friday-the-13th-hack

        -

        Once you join their Discord server, you will see various channels where you can chat with other members, ask questions, request support, and download files. You will need to verify your account by typing "!verify" in the #verify channel. After that, you will have access to all the channels in the server.

        -

        -

        Step 2: Download the Hack Tool

        -

        The next step is to download the hack tool for Friday the 13th: The Game Beta from the Discord server. You will need to go to the #downloads channel and look for the latest version of the hack tool. The file name will be something like "FridayThe13th_Beta_v1.0.zip". You can download it by clicking on it and choosing "Save As".

        -

        The file size will be around 10 MB and it will be compressed in a ZIP format. You will need to extract it using a program like WinRAR or 7-Zip. You can download WinRAR from here: https://www.win-rar.com/download.html

        -

        After extracting the file, you will see a folder named "FridayThe13th_Beta_v1.0". Inside this folder, you will find two files: "FridayThe13th_Beta.exe" and "ReadMe.txt". The first file is the hack tool itself and the second file is a text document that contains instructions on how to use it.

        -

        Step 3: Install the Hack Tool

        -

        The final step is to install and run the hack tool for Friday the 13th: The Game Beta. You will need to follow these steps:

        -
          -
        • Make sure that your antivirus software is disabled or whitelisted. Some antivirus programs may detect the hack tool as a virus or malware and block it from running.
        • -
        • Make sure that your game is updated to the latest version and running in windowed mode.
        • -
        • Run the "FridayThe13th_Beta.exe" file as administrator by right-clicking on it and choosing "Run as administrator".
        • -
        • A window will pop up asking you to enter your username and

          e93f5a0c3f
          -
          -
          \ No newline at end of file diff --git a/spaces/stratussox/yolov5_inference/models/experimental.py b/spaces/stratussox/yolov5_inference/models/experimental.py deleted file mode 100644 index 02d35b9ebd11d3407d64ae436142aca6100c9084..0000000000000000000000000000000000000000 --- a/spaces/stratussox/yolov5_inference/models/experimental.py +++ /dev/null @@ -1,111 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Experimental modules -""" -import math - -import numpy as np -import torch -import torch.nn as nn - -from utils.downloads import attempt_download - - -class Sum(nn.Module): - # Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070 - def __init__(self, n, weight=False): # n: number of inputs - super().__init__() - self.weight = weight # apply weights boolean - self.iter = range(n - 1) # iter object - if weight: - self.w = nn.Parameter(-torch.arange(1.0, n) / 2, requires_grad=True) # layer weights - - def forward(self, x): - y = x[0] # no weight - if self.weight: - w = torch.sigmoid(self.w) * 2 - for i in self.iter: - y = y + x[i + 1] * w[i] - else: - for i in self.iter: - y = y + x[i + 1] - return y - - -class MixConv2d(nn.Module): - # Mixed Depth-wise Conv https://arxiv.org/abs/1907.09595 - def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True): # ch_in, ch_out, kernel, stride, ch_strategy - super().__init__() - n = len(k) # number of convolutions - if equal_ch: # equal c_ per group - i = torch.linspace(0, n - 1E-6, c2).floor() # c2 indices - c_ = [(i == g).sum() for g in range(n)] # intermediate channels - else: # equal weight.numel() per group - b = [c2] + [0] * n - a = np.eye(n + 1, n, k=-1) - a -= np.roll(a, 1, axis=1) - a *= np.array(k) ** 2 - a[0] = 1 - c_ = np.linalg.lstsq(a, b, rcond=None)[0].round() # solve for equal weight indices, ax = b - - self.m = nn.ModuleList([ - nn.Conv2d(c1, int(c_), k, s, k // 2, groups=math.gcd(c1, int(c_)), bias=False) for k, c_ in zip(k, c_)]) - self.bn = nn.BatchNorm2d(c2) - self.act = nn.SiLU() - - def forward(self, x): - return self.act(self.bn(torch.cat([m(x) for m in self.m], 1))) - - -class Ensemble(nn.ModuleList): - # Ensemble of models - def __init__(self): - super().__init__() - - def forward(self, x, augment=False, profile=False, visualize=False): - y = [module(x, augment, profile, visualize)[0] for module in self] - # y = torch.stack(y).max(0)[0] # max ensemble - # y = torch.stack(y).mean(0) # mean ensemble - y = torch.cat(y, 1) # nms ensemble - return y, None # inference, train output - - -def attempt_load(weights, device=None, inplace=True, fuse=True): - # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a - from models.yolo import Detect, Model - - model = Ensemble() - for w in weights if isinstance(weights, list) else [weights]: - ckpt = torch.load(attempt_download(w), map_location='cpu') # load - ckpt = (ckpt.get('ema') or ckpt['model']).to(device).float() # FP32 model - - # Model compatibility updates - if not hasattr(ckpt, 'stride'): - ckpt.stride = torch.tensor([32.]) - if hasattr(ckpt, 'names') and isinstance(ckpt.names, (list, tuple)): - ckpt.names = dict(enumerate(ckpt.names)) # convert to dict - - model.append(ckpt.fuse().eval() if fuse and hasattr(ckpt, 'fuse') else ckpt.eval()) # model in eval mode - - # Module compatibility updates - for m in model.modules(): - t = type(m) - if t in (nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU, Detect, Model): - m.inplace = inplace # torch 1.7.0 compatibility - if t is Detect and not isinstance(m.anchor_grid, 
list): - delattr(m, 'anchor_grid') - setattr(m, 'anchor_grid', [torch.zeros(1)] * m.nl) - elif t is nn.Upsample and not hasattr(m, 'recompute_scale_factor'): - m.recompute_scale_factor = None # torch 1.11.0 compatibility - - # Return model - if len(model) == 1: - return model[-1] - - # Return detection ensemble - print(f'Ensemble created with {weights}\n') - for k in 'names', 'nc', 'yaml': - setattr(model, k, getattr(model[0], k)) - model.stride = model[torch.argmax(torch.tensor([m.stride.max() for m in model])).int()].stride # max stride - assert all(model[0].nc == m.nc for m in model), f'Models have different class counts: {[m.nc for m in model]}' - return model diff --git a/spaces/sub314xxl/MetaGPT/docs/scripts/get_all_classes_and_funcs.sh b/spaces/sub314xxl/MetaGPT/docs/scripts/get_all_classes_and_funcs.sh deleted file mode 100644 index 011349caf35729702d0dfc1aa69474c8f2d9c833..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MetaGPT/docs/scripts/get_all_classes_and_funcs.sh +++ /dev/null @@ -1,3 +0,0 @@ -#!/usr/bin/env bash - -find metagpt | grep "\.py" | grep -Ev "(__init__|pyc)" | xargs grep -E "(^class| def )" 2>/dev/null | grep -v -E "(grep|tests|examples)" \ No newline at end of file diff --git a/spaces/sub314xxl/MetaGPT/startup.py b/spaces/sub314xxl/MetaGPT/startup.py deleted file mode 100644 index 03b2149c434c2761b06e63e64002ad1f44a82f0a..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MetaGPT/startup.py +++ /dev/null @@ -1,42 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -import asyncio -import platform -import fire - -from metagpt.roles import Architect, Engineer, ProductManager, ProjectManager, QaEngineer -from metagpt.software_company import SoftwareCompany - - -async def startup(idea: str, investment: float = 3.0, n_round: int = 5, - code_review: bool = False, run_tests: bool = False): - """Run a startup. Be a boss.""" - company = SoftwareCompany() - company.hire([ProductManager(), - Architect(), - ProjectManager(), - Engineer(n_borg=5, use_code_review=code_review)]) - if run_tests: - # developing features: run tests on the spot and identify bugs (bug fixing capability comes soon!) - company.hire([QaEngineer()]) - company.invest(investment) - company.start_project(idea) - await company.run(n_round=n_round) - - -def main(idea: str, investment: float = 3.0, n_round: int = 5, code_review: bool = False, run_tests: bool = False): - """ - We are a software startup comprised of AI. By investing in us, you are empowering a future filled with limitless possibilities. - :param idea: Your innovative idea, such as "Creating a snake game." - :param investment: As an investor, you have the opportunity to contribute a certain dollar amount to this AI company. - :param n_round: - :param code_review: Whether to use code review. - :return: - """ - if platform.system() == "Windows": - asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy()) - asyncio.run(startup(idea, investment, n_round, code_review, run_tests)) - - -if __name__ == '__main__': - fire.Fire(main) diff --git a/spaces/sub314xxl/MusicGen/tests/modules/test_lstm.py b/spaces/sub314xxl/MusicGen/tests/modules/test_lstm.py deleted file mode 100644 index 1248964c8191e19f27661f0974bef9cc967eb015..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MusicGen/tests/modules/test_lstm.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. 
-# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import random -import torch - -from audiocraft.modules.lstm import StreamableLSTM - - -class TestStreamableLSTM: - - def test_lstm(self): - B, C, T = 4, 2, random.randint(1, 100) - - lstm = StreamableLSTM(C, 3, skip=False) - x = torch.randn(B, C, T) - y = lstm(x) - - print(y.shape) - assert y.shape == torch.Size([B, C, T]) - - def test_lstm_skip(self): - B, C, T = 4, 2, random.randint(1, 100) - - lstm = StreamableLSTM(C, 3, skip=True) - x = torch.randn(B, C, T) - y = lstm(x) - - assert y.shape == torch.Size([B, C, T]) diff --git a/spaces/sunil448832/retrieval-augment-generation/models/embedding_models.py b/spaces/sunil448832/retrieval-augment-generation/models/embedding_models.py deleted file mode 100644 index 8a974b93b24e50ea9da75d4a078cd8c2596f2cc3..0000000000000000000000000000000000000000 --- a/spaces/sunil448832/retrieval-augment-generation/models/embedding_models.py +++ /dev/null @@ -1,35 +0,0 @@ -import torch -from transformers import AutoTokenizer, AutoModel -import torch.nn.functional as F - -# Create a class for embedding sentences using Hugging Face Transformers -class EmbeddingModel: - def __init__(self, model_name='sentence-transformers/all-MiniLM-L6-v2'): - # Initialize the model with the given model_name - self.tokenizer = AutoTokenizer.from_pretrained(model_name) - self.model = AutoModel.from_pretrained(model_name) - # Get the embedding dimension from the model's output - self.embedding_dim = self.encode('Hi').shape[1] - - def _mean_pooling(self, model_output, attention_mask): - # Calculate mean pooling of token embeddings - token_embeddings = model_output[0] - input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() - embedding = torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) - return embedding - - def encode(self, text): - # Encode a text into sentence embeddings - inputs = self.tokenizer(text, padding=True, truncation=True, return_tensors='pt') - with torch.no_grad(): - outputs = self.model(**inputs) - sentence_embeddings = self._mean_pooling(outputs, inputs['attention_mask']) - sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1).numpy().astype('float32') - return sentence_embeddings - -if __name__ == '__main__': - # Sentences we want sentence embeddings for - sentences = ['This is an example sentence', 'Each sentence is converted'] - # Print the embedding dimension of the model - print(EmbeddingModel().embedding_dim) - diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Carlos Milla Villena Pdfl ((HOT)).md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Carlos Milla Villena Pdfl ((HOT)).md deleted file mode 100644 index 538a9a937b1d2f055da3b66934c0271929d66ef3..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Carlos Milla Villena Pdfl ((HOT)).md +++ /dev/null @@ -1,6 +0,0 @@ -

          Carlos Milla Villena Pdfl


          DOWNLOAD ✪✪✪ https://cinurl.com/2uEXWS



          -
          -Sat Jun 13, 2020 @ 07:14PM by Nicolle Matejcek, Carlos Milla Villena Pdfl. Wed Jun 10, 2020 @ 01:54AM by Nicolle Matejcek, Enfermeria Materno Infantil ... 1fdad05405
          -
          -
          -

          diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Fr Office Professional Plus 2013 X64 Dvd 1134000.iso.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Fr Office Professional Plus 2013 X64 Dvd 1134000.iso.md deleted file mode 100644 index f51ec79dc5a453a2efcc516f86bd8fdaa729b53c..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Fr Office Professional Plus 2013 X64 Dvd 1134000.iso.md +++ /dev/null @@ -1,9 +0,0 @@ -

The users get to pick one of five modes for different operations - explore, start, go, get it done and get personal - and they can be customized as per their requirements. While I'd like to add customization from Windows 7 and 10, I just need to make this part of the user manual.


In addition to these changes, there are some new features to take advantage of, including Move Your Content and moving content between folders. The former lets you move files or folders from the root of a drive to anywhere else, which is very useful when, for instance, an employee buys an external hard drive and gets transferred to another computer.


          fr office professional plus 2013 x64 dvd 1134000.iso


          DOWNLOADhttps://cinurl.com/2uEXK2




The latter helps you when you move data from one folder to another on the same drive. You can have it do a recursive copy or just a file-by-file copy, or you can change a property value to something else, making this a very powerful tool.


First, you'll have to determine whether you have the installation file or not. The installation files are available on Microsoft's website for Office 2013, so you can access the official Microsoft page with your product key and get the ISO (disc image) files. Here are two links. The first link, which will be our starting point, is the official one from Microsoft for the general public. As you might have guessed, the key included in here is just a generic


That is the official Microsoft page, so of course you can download any language you want. Just keep in mind that Office 2013 is a global release, so it's not different from the traditional English release that you are probably accustomed to. Select the version of Office you want to download and then select the language, so you can get the Chinese, German, Korean, Spanish, Portuguese, etc. versions, and then download.

          \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Lan Employee Monitor 41 Crack.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Lan Employee Monitor 41 Crack.md deleted file mode 100644 index 8590d349f4605fff2bc5c8581f243714441f21d4..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Lan Employee Monitor 41 Crack.md +++ /dev/null @@ -1,6 +0,0 @@ -

          lan employee monitor 41 crack


          DOWNLOADhttps://cinurl.com/2uEYni



          - -Lords Mobile Activation Code Crack >> http://shoxet.com/17if6c About This ... lan employee monitor v 4.1 serial crack - Subscription Monitor For ... 1fdad05405
          -
          -
          -

          diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/One Tree Hill Season 1 Torrent Downloads.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/One Tree Hill Season 1 Torrent Downloads.md deleted file mode 100644 index 03d27d0104d08e4b482073830d3acc12c427ab28..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/One Tree Hill Season 1 Torrent Downloads.md +++ /dev/null @@ -1,52 +0,0 @@ -

          one tree hill season 1 torrent downloads


          Download File ……… https://cinurl.com/2uEYsI



          - -arah goi - -I'm working on an alternative to the HN rating system. It's in private beta right now. - bengali3 - -====== - -bengali3 - -I'm working on an alternative to the HN rating system. It's in private beta - -right now. This is why I made this account. - ------- - -mattmaroon - -You're certainly welcome to make your submission there if you like. - -I'm not sure why you'd want to create an account on reddit just to comment - -here. - -~~~ - -This is where the dev's hang out. I just wanted to post my product to see what - -the community thought. - -Then do it on your own site and post the link here. You're free to tell your - -friends about it if you like. - -We love making Pizza - -At Happy Family Pizza we use only the best ingredients and know the right recipes to use to ensure our delicious homemade pizza is as close as you can get to home-made pizza. We have been making homemade pizzas since 1998. - -HAPPY FAMILY PIZZA - -We are a family run business that loves making and cooking homemade pizzas. We are proud of what we do and love being involved with schools and community events. We take great pride in what we do and are always looking to make improvements and new ideas. Please get in touch to see how we can help you!Report: Wave 7 Halfway through its Life Span. - -2015/11/16 - -The master you knew when you opened an in-game Warframe profile in April is no more. 2K Games was under the gun to make a way for new players to jump into the game, so, like, a lot of people were excited to play it. 2K Games quickly changed the way players level their Warframes, making things a lot more difficult in an attempt to woo new players. - -Since then, the status quo has stayed the same: players level new Warframes by themselves, knowing that there's a chance they won't be able to complete the task and having to keep finding new, previously "unlocked" Warframes. - -If 4fefd39f24
          -
          -
          -

          diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/core/utils/misc.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/core/utils/misc.py deleted file mode 100644 index eb862a82bd47c8624db3dd5c6fb6ad8a03b62466..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/core/utils/misc.py +++ /dev/null @@ -1,17 +0,0 @@ -def add_prefix(inputs, prefix): - """Add prefix for dict. - - Args: - inputs (dict): The input dict with str keys. - prefix (str): The prefix to add. - - Returns: - - dict: The dict with keys updated with ``prefix``. - """ - - outputs = dict() - for name, value in inputs.items(): - outputs[f'{prefix}.{name}'] = value - - return outputs diff --git a/spaces/tabeina/bingo1/src/lib/isomorphic/index.ts b/spaces/tabeina/bingo1/src/lib/isomorphic/index.ts deleted file mode 100644 index 738dc92f74079ab762d584fb7422a8c8c3b61547..0000000000000000000000000000000000000000 --- a/spaces/tabeina/bingo1/src/lib/isomorphic/index.ts +++ /dev/null @@ -1,17 +0,0 @@ -'use client' - -import Default from './browser' - -let exportsModel: any = {} - -if (process.browser) { - Object.assign(exportsModel, require('./browser').default) -} else { - Object.assign(exportsModel, require('./node').default) -} - -export default exportsModel! as typeof Default - -export const fetch: typeof Default.fetch = exportsModel!.fetch -export const WebSocket: typeof Default.WebSocket = exportsModel!.WebSocket -export const debug: typeof Default.debug = exportsModel!.debug diff --git a/spaces/templates/streamlit/app.py b/spaces/templates/streamlit/app.py deleted file mode 100644 index 4525a846d3a006431385a18b8abb1ba5b874fb5f..0000000000000000000000000000000000000000 --- a/spaces/templates/streamlit/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import streamlit as st - -st.title("Hello World!") \ No newline at end of file diff --git a/spaces/teragron/TinyStories/tinystories.py b/spaces/teragron/TinyStories/tinystories.py deleted file mode 100644 index 814732d41c0ce5f9d78e0126c15ea50deca6315f..0000000000000000000000000000000000000000 --- a/spaces/teragron/TinyStories/tinystories.py +++ /dev/null @@ -1,281 +0,0 @@ -""" -Download, preprocess and serve the TinyStories dataset as a DataLoader. 
-""" - -import argparse -import glob -import json -import os -import random -from typing import List -from concurrent.futures import ProcessPoolExecutor -from functools import partial - -import numpy as np -import requests -import sentencepiece as spm -import torch -import torch.distributed as dist -from tqdm import tqdm - -from tokenizer import Tokenizer - -DATA_CACHE_DIR = "data" - -def download_file(url: str, fname: str, chunk_size=1024): - """Helper function to download a file from a given url""" - resp = requests.get(url, stream=True) - total = int(resp.headers.get("content-length", 0)) - with open(fname, "wb") as file, tqdm( - desc=fname, - total=total, - unit="iB", - unit_scale=True, - unit_divisor=1024, - ) as bar: - for data in resp.iter_content(chunk_size=chunk_size): - size = file.write(data) - bar.update(size) - - -def download(): - """Downloads the TinyStories dataset to DATA_CACHE_DIR""" - os.makedirs(DATA_CACHE_DIR, exist_ok=True) - - # download the TinyStories dataset, unless it's already downloaded - data_url = "https://huggingface.co/datasets/roneneldan/TinyStories/resolve/main/TinyStories_all_data.tar.gz" - data_filename = os.path.join(DATA_CACHE_DIR, "TinyStories_all_data.tar.gz") - if not os.path.exists(data_filename): - print(f"Downloading {data_url} to {data_filename}...") - download_file(data_url, data_filename) - else: - print(f"{data_filename} already exists, skipping download...") - - # unpack the tar.gz file into all the data shards (json files) - data_dir = os.path.join(DATA_CACHE_DIR, "TinyStories_all_data") - if not os.path.exists(data_dir): - os.makedirs(data_dir, exist_ok=True) - print(f"Unpacking {data_filename}...") - os.system(f"tar -xzf {data_filename} -C {data_dir}") - else: - print(f"{data_dir} already exists, skipping unpacking...") - - # print a single example just for debugging and such - shard_filenames = sorted(glob.glob(os.path.join(data_dir, "*.json"))) - with open(shard_filenames[0], "r") as f: - data = json.load(f) - print("Download done.") - print(f"Number of shards: {len(shard_filenames)}") - print(f"Example story:\n{data[0]}") - -def train_vocab(vocab_size): - """ - Trains a custom sentencepiece tokenizer on the TinyStories dataset. - The custom tokenizer files will be saved in DATA_CACHE_DIR/tok{N} directories, - where N is the vocab size. This is also where the pretok .bin files will go. 
- """ - assert vocab_size > 0, "Vocab size must be positive" - - # output file prefix path for sentencepiece - prefix = os.path.join(DATA_CACHE_DIR, f"tok{vocab_size}") - - # how many shards we'll use for vocab training, kept low for efficiency - num_shards = 10 - - # 1) export a large chunk of text as a single text file tiny.txt - tiny_file = os.path.join(DATA_CACHE_DIR, "tiny.txt") - data_dir = os.path.join(DATA_CACHE_DIR, "TinyStories_all_data") - shard_filenames = sorted(glob.glob(os.path.join(data_dir, "*.json"))) - - print(f"Writing temporary file {tiny_file} with {num_shards} shards...") - with open(tiny_file, "w", encoding="utf-8") as of: - for shard in tqdm(shard_filenames[:num_shards]): - with open(shard, "r") as f: - data = json.load(f) - for example in data: - text = example["story"] - text = text.strip() - of.write(text + "\n") - print(f"Size is: {os.path.getsize(tiny_file) / 1024 / 1024:.2f} MB") - - # 2) train the sentencepiece model - print("Will now train the vocab...") - spm.SentencePieceTrainer.train(input=tiny_file, - model_prefix=prefix, - model_type="bpe", - vocab_size=vocab_size, - self_test_sample_size=0, - input_format="text", - character_coverage=1.0, - num_threads=os.cpu_count(), - split_digits=True, - allow_whitespace_only_pieces=True, - byte_fallback=True, - unk_surface=r" \342\201\207 ", - normalization_rule_name="identity") - - # 3) optional cleanup, ask the user if they'd like to delete tiny.txt - dec = input(f"Delete the temporary file {tiny_file}? [y/N] ") - if dec.lower() == "y": - os.remove(tiny_file) - print(f"Deleted {tiny_file}") - - print(f"Trained tokenizer is in {prefix}.model") - print("Done.") - - -def process_shard(args, vocab_size): - shard_id, shard = args - tokenizer_model = get_tokenizer_model_path(vocab_size) - enc = Tokenizer(tokenizer_model) - with open(shard, "r") as f: - data = json.load(f) - all_tokens = [] - for example in tqdm(data, position=shard_id): - text = example["story"] - text = text.strip() # get rid of leading/trailing whitespace - tokens = enc.encode(text, bos=True, eos=False) # encode the text, use BOS - all_tokens.extend(tokens) - # convert to uint16 nparray - all_tokens = np.array(all_tokens, dtype=np.uint16) - # calculate the output filename - if vocab_size == 0: - # if we're using Llama 2, just save the tokenized file in the same dir - tokenized_filename = shard.replace(".json", ".bin") - else: - # save .bin files into a new tok{N} directory - bin_dir = os.path.join(DATA_CACHE_DIR, f"tok{vocab_size}") - shard_basename = os.path.basename(shard) - bin_basename = shard_basename.replace(".json", ".bin") - tokenized_filename = os.path.join(bin_dir, bin_basename) - # write the bytes - with open(tokenized_filename, "wb") as f: - f.write(all_tokens.tobytes()) - # calculate the average sequence length (they are separated by BOS=1) - avg_seq_len = all_tokens.size / ((all_tokens == 1).sum()) - print(f"Saved {tokenized_filename}, average seqlen: {avg_seq_len:.2f}") - - -def pretokenize(vocab_size): - # iterate the shards and tokenize all of them one by one - data_dir = os.path.join(DATA_CACHE_DIR, "TinyStories_all_data") - shard_filenames = sorted(glob.glob(os.path.join(data_dir, "*.json"))) - if vocab_size > 0: - # .bin files will be saved into tok{N} directory, create it once here - bin_dir = os.path.join(DATA_CACHE_DIR, f"tok{vocab_size}") - os.makedirs(bin_dir, exist_ok=True) - - # process all the shards in a process pool - fun = partial(process_shard, vocab_size=vocab_size) - with ProcessPoolExecutor() as executor: - 
executor.map(fun, enumerate(shard_filenames)) - print("Done.") - - -class PretokDataset(torch.utils.data.IterableDataset): - """Loads pretokenized examples from disk and yields them as PyTorch tensors.""" - - def __init__(self, split, max_seq_len, vocab_size, vocab_source): - super().__init__() - self.split = split - self.max_seq_len = max_seq_len - self.vocab_size = vocab_size - self.vocab_source = vocab_source - - def __iter__(self): - # get worker info within a DataLoader - worker_info = torch.utils.data.get_worker_info() - worker_id = worker_info.id if worker_info else 0 - # get DDP rank info - rank = dist.get_rank() if dist.is_initialized() else 0 - # combine the worker_id and worker_rank to create a unique seed for rng - seed = 42 + worker_id + 1337 * rank - rng = random.Random(seed) - print(f"Created a PretokDataset with rng seed {seed}") - if self.vocab_source == "llama2": - # the .bin files are right along the .json files - bin_dir = os.path.join(DATA_CACHE_DIR, "TinyStories_all_data") - shard_filenames = sorted(glob.glob(os.path.join(bin_dir, "*.bin"))) - elif self.vocab_source == "custom": - # the .bin files are in tok{N} directory - bin_dir = os.path.join(DATA_CACHE_DIR, f"tok{self.vocab_size}") - shard_filenames = sorted(glob.glob(os.path.join(bin_dir, "*.bin"))) - # train/test split. let's use only shard 0 for test split, rest train - shard_filenames = shard_filenames[1:] if self.split == "train" else shard_filenames[:1] - assert len(shard_filenames)>0, f"No bin files found in {bin_dir}" - while True: - rng.shuffle(shard_filenames) - for shard in shard_filenames: - # open the dataset for reading but keep it on disk with memmap - m = np.memmap(shard, dtype=np.uint16, mode="r") - num_batches = len(m) // self.max_seq_len - num_batches -= 1 # drop the last partial batch - assert num_batches > 0, "this shard is way too small? investigate." - ixs = list(range(num_batches)) - rng.shuffle(ixs) - for ix in ixs: - start = ix * self.max_seq_len - end = start + self.max_seq_len + 1 - # calling .astype will copy the data into a new numpy array, now in RAM - chunk = torch.from_numpy((m[start:end]).astype(np.int64)) - x = chunk[:-1] - y = chunk[1:] - yield x, y - -# ----------------------------------------------------------------------------- -# public interface functions - -def get_tokenizer_model_path(vocab_size): - """ - Returns path to the sentencepiece tokenizer model for a given vocab size - vocab_size = 0 designates the default Llama 2 tokenizer, in that case - None is returned. - """ - if vocab_size == 0: - return None - else: - return os.path.join(DATA_CACHE_DIR, f"tok{vocab_size}.model") - -class Task: - - @staticmethod - def iter_batches(batch_size, device, num_workers=0, **dataset_kwargs): - ds = PretokDataset(**dataset_kwargs) - dl = torch.utils.data.DataLoader( - ds, batch_size=batch_size, pin_memory=True, num_workers=num_workers - ) - for x, y in dl: - x = x.to(device, non_blocking=True) - y = y.to(device, non_blocking=True) - yield x, y - -# ----------------------------------------------------------------------------- -# CLI for constructing the dataset - -if __name__ == "__main__": - """ - These stages are designed to be run in order. 
- - To tokenize data with the Llama 2 tokenizer: - python tinystories.py download - python tinystories.py pretokenize - - To tokenize data with a custom tokenizer we train ourselves with sentencepiece, e.g.: - python tinystories.py download - python tinystories.py train_vocab --vocab_size=2048 - python tinystories.py pretokenize --vocab_size=2048 - """ - parser = argparse.ArgumentParser() - parser.add_argument("stage", type=str, choices=["download", "pretokenize", "train_vocab"]) - parser.add_argument("--vocab_size", type=int, default=0, help="pretokenization vocab size. 0 = use Llama 2 tokenizer.") - args = parser.parse_args() - - # depending on the stage call the appropriate function - if args.stage == "download": - download() - elif args.stage == "train_vocab": - train_vocab(vocab_size=args.vocab_size) - elif args.stage == "pretokenize": - pretokenize(vocab_size=args.vocab_size) - else: - raise ValueError(f"Unknown stage {args.stage}") diff --git a/spaces/terfces0erbo/CollegeProjectV2/!FREE! Download Fifa 2000 Pc Gratis.md b/spaces/terfces0erbo/CollegeProjectV2/!FREE! Download Fifa 2000 Pc Gratis.md deleted file mode 100644 index ab87603913455aa9c802bf645f05ca6dd05d432d..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/!FREE! Download Fifa 2000 Pc Gratis.md +++ /dev/null @@ -1,7 +0,0 @@ -

One thing I did notice about FIFA 2000 is that the language is much more full-on than in most other EA games. The banter is absolutely outstanding and it's even better than the one in F.A. Premier League Stars. If you like that sort of thing then this is definitely the game for you. If you just want a footy game with a good interface and a good gameplay experience, then you might be better off with the other two versions of FIFA.


          Download Fifa 2000 Pc Gratis


          DOWNLOADhttps://bytlly.com/2uGj45




After playing F.A. Premier League Stars and FIFA 2000 for a week or two, I realised that both were pretty similar and that the differences between them were just that little bit too much to ignore. So, in the spirit of the sports arcade - which is, after all, where the genre really started - I decided to take a look at Rugby Challenge, the only other football game with this much in common with the FIFA series. So far as I can tell, it's the only one of these.


The basic gameplay is, if not as good as, then at least very similar to that of the FIFA 99 game. But this time around, when you pass the ball, your player runs. So, when you're running, the ball is always in a relatively straight line, whereas when you're standing still, the ball can be in any direction. It's not a huge issue, but it's one that makes the game feel a little more dynamic. No doubt, this will appeal to some fans of FIFA 99, but you should bear in mind that the gameplay here is just a little worse than the one in FIFA 99, if not quite as good as in F.A. Premier League Stars. The graphics are certainly better than they were in FIFA 99, too, although not as good as they are in F.

          \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Download Windows 7 Start Button Icons For Classic Shelll.md b/spaces/terfces0erbo/CollegeProjectV2/Download Windows 7 Start Button Icons For Classic Shelll.md deleted file mode 100644 index 7b064230899b4ae4475eb34dd260ecef92bac920..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Download Windows 7 Start Button Icons For Classic Shelll.md +++ /dev/null @@ -1,29 +0,0 @@ - -

          How to Download Windows 7 Start Button Icons For Classic Shell


If you miss the old Windows 7 start button and want to bring it back to your Windows 10 or 8.1 desktop, you can use a free tool called Classic Shell. Classic Shell is a tool that lets you customize the appearance and functionality of the Windows start menu, taskbar, and explorer. One of the features of Classic Shell is that you can change the start button icon to any image you want.


          In this article, we will show you how to download Windows 7 start button icons for Classic Shell and apply them to your system. You will need to have Classic Shell installed on your computer before following these steps. You can download Classic Shell from here.


          Download Windows 7 Start Button Icons For Classic Shelll


          Download File 🌟 https://bytlly.com/2uGjCw



            -
1. Go to DeviantArt and search for "Windows 7 start button". You will find many different icons created by various artists. Choose the one you like and click on it to open its page.
2. On the icon's page, look for a download button or link. It may be on the right side of the page or under the preview image. Click on it and save the file to your computer. The file should be in .png format and have a transparent background (a quick way to check this is sketched after this list).
3. Open Classic Shell's settings by right-clicking on the start button and choosing "Settings". Alternatively, you can open it from the Start Menu or the Control Panel.
4. Under the "Start Menu Style" tab, click on "Replace Start Button". A new window will pop up where you can browse for the icon file you downloaded. Select it and click "Open". You should see a preview of how the icon will look on your taskbar.
5. Click "OK" to apply the changes and close the settings window. You should now see your new Windows 7 start button icon on your taskbar.
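If you would like to sanity-check a downloaded icon before pointing Classic Shell at it, a small Pillow script can report the format and whether the image carries transparency. A minimal sketch, where the file name is an assumption:

```python
from PIL import Image  # pip install Pillow

img = Image.open("start_button.png")  # assumed file name
print(f"Format: {img.format}, mode: {img.mode}, size: {img.size}")

# A transparent background means the PNG should carry transparency,
# either as an alpha channel or as a palette transparency entry.
has_alpha = img.mode in ("RGBA", "LA") or "transparency" in img.info
if img.format != "PNG" or not has_alpha:
    print("Warning: this image may not have the transparency Classic Shell expects.")
```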

          You can repeat these steps to change the icon anytime you want. You can also download other icons from DeviantArt or create your own using an image editor. Enjoy your customized start menu with Classic Shell!


          Why Use Classic Shell?


          Classic Shell is a popular and useful tool for Windows users who prefer the classic look and feel of older versions of Windows. It offers many options to customize the start menu, taskbar, and explorer to suit your preferences. You can change the menu style, layout, color, font, icons, and more. You can also enable or disable various features such as recent items, jump lists, search box, and so on.


          Classic Shell is also compatible with Windows 10 and 8.1, which have a different start menu design than Windows 7. If you are not a fan of the tiles and apps that appear on the Windows 10 and 8.1 start menu, you can use Classic Shell to replace it with a more familiar and traditional start menu. You can also use Classic Shell to restore the missing start button on Windows 8.


          Classic Shell is free and open source software that has been around since 2009. It has a large and active community of users and developers who provide feedback, support, and updates. You can visit the official website of Classic Shell here to learn more about its features and download the latest version.


          Other Ways to Customize the Start Menu


          If you want to further customize the start menu beyond what Classic Shell offers, you can also use some other tools and methods. Here are some examples:

            -
• You can use third-party software such as Start10 or StartIsBack to change the start menu appearance and functionality. These tools have similar features to Classic Shell but may have different options and interfaces.
• You can use the built-in settings of Windows 10 and 8.1 to tweak some aspects of the start menu. For example, you can resize, rearrange, pin, unpin, group, or ungroup the tiles and apps on the start menu. You can also change the background color and transparency of the start menu.
• You can use a registry editor or a group policy editor to modify some hidden settings of the start menu. For example, you can change the number of items in the jump lists or disable the live tiles. However, this method is not recommended for beginners, as it may cause errors or damage to your system if done incorrectly.
          -

Whatever method you choose to customize your start menu, make sure you back up your system before making any changes. This way, you can restore your system in case something goes wrong.

          \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Erio Connection Usb Modem Direct Driver.epub [BEST].md b/spaces/terfces0erbo/CollegeProjectV2/Erio Connection Usb Modem Direct Driver.epub [BEST].md deleted file mode 100644 index ccac56a18c592442856bd963d5ba56f297707b2e..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Erio Connection Usb Modem Direct Driver.epub [BEST].md +++ /dev/null @@ -1,10 +0,0 @@ -

          Erio Connection Usb Modem Direct Driver.epub


          DOWNLOAD >>> https://bytlly.com/2uGiU8



          - -Erio connection usb modem direct driver download. Xerox Software application for compatible Xerox WorkCentre 7345, which can be set up as well as ... Fujitsu Siemens Software application for the HP LaserJet M1120, which can be in setup as well as ... -Infraprint A software application for compatible HP LaserJet P4014 that can be set up as well as ... -Radmin Server 3.5 -Virtual Network Computing -Erio connection usb modem direct driver download. Xerox Software 8a78ff9644

          diff --git a/spaces/terfces0erbo/CollegeProjectV2/Euro Truck Simulator 2 Autostop Mods Download !FREE! Torent.md b/spaces/terfces0erbo/CollegeProjectV2/Euro Truck Simulator 2 Autostop Mods Download !FREE! Torent.md deleted file mode 100644 index 03c0c3c09226cedd1c39973f63f5b69a0a3f7526..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Euro Truck Simulator 2 Autostop Mods Download !FREE! Torent.md +++ /dev/null @@ -1,8 +0,0 @@ -

In Euro Truck Simulator 2 you can start your career driving any of the 10 Euro truck models available at the beginning of the game. As your income steadily increases you will be able to purchase your own truck, which is by far the best option in this game. You can buy that truck outright or, if you want something a little less risky, you can hire it out. The basic truck options include delivery, fuel, and construction trucks. Plus there's a slick new garage system that lets you pick up the vehicles in your garage, make repairs and even load and unload them in game. You just load the game, hold the left mouse button and walk the controls until you are done. To take your time you can use automatic steering, physics and other settings.


There's also Euro Truck Simulator 2 Autostop Mods, which offer the same fun but feel as if they were made by a true modder. They look a bit like the official Euro Truck Simulator 2 release, but you'll find plenty of useful modifications that will improve your time on the road.


          Euro truck simulator 2 autostop mods download torent


          Downloadhttps://bytlly.com/2uGklv




If you feel like continuing to improve your game experience, you can download Euro Truck Simulator 2 Autostop Mods patches. All of these Autostop mods will make your gameplay smoother. But, of course, you can also install mods in Euro Truck Simulator 2 that add new trucks and roads and even new gameplay.
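Installing a mod is usually just a matter of dropping its .scs file into the game's mod folder and enabling it from the in-game Mod Manager. A minimal Python sketch, assuming the default Documents location and a hypothetical mod file name:

```python
import shutil
from pathlib import Path

# Euro Truck Simulator 2 looks for mods in the user's Documents folder
# (assumed default location).
mod_dir = Path.home() / "Documents" / "Euro Truck Simulator 2" / "mod"
mod_dir.mkdir(parents=True, exist_ok=True)

shutil.copy("autostop_mod.scs", mod_dir)  # hypothetical mod file
print(f"Copied to {mod_dir}; enable the mod in your profile's Mod Manager.")
```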


On the surface, Euro Truck Simulator 2 Autostop Mod doesn't look like much, but there is a whole lot packed into it. If you're a fan of over-the-top mods and modding, Euro Truck Simulator 2 Autostop Mod has a ton of content to check out.

          \ No newline at end of file diff --git a/spaces/theAIguy/triplet_margin_loss/app.py b/spaces/theAIguy/triplet_margin_loss/app.py deleted file mode 100644 index bf908609ad45f37ef89a0c211c944e2f89dbd3dc..0000000000000000000000000000000000000000 --- a/spaces/theAIguy/triplet_margin_loss/app.py +++ /dev/null @@ -1,6 +0,0 @@ -import evaluate -from evaluate.utils import launch_gradio_widget - - -module = evaluate.load("theAIguy/triplet_margin_loss") -launch_gradio_widget(module) \ No newline at end of file diff --git a/spaces/themanas021/Sentiment_Analysis/README.md b/spaces/themanas021/Sentiment_Analysis/README.md deleted file mode 100644 index 8b8df8e34c2dd7e349a35eac034f5f9f1cac5157..0000000000000000000000000000000000000000 --- a/spaces/themanas021/Sentiment_Analysis/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Sentiment Analysis -emoji: 😻 -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Aushadh Darshan Pdf Free Download [REPACK].md b/spaces/tialenAdioni/chat-gpt-api/logs/Aushadh Darshan Pdf Free Download [REPACK].md deleted file mode 100644 index 4a549e87cfc07edba9e40564bc38e3c44aba0a98..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Aushadh Darshan Pdf Free Download [REPACK].md +++ /dev/null @@ -1,24 +0,0 @@ - -Here is a possible title and article with SEO optimization and HTML formatting for the keyword "aushadh darshan pdf free download": - -

          Aushadh Darshan PDF: A Free Ebook on Ayurveda and Wellness


          If you are looking for a free ebook on Ayurveda and wellness, you might want to check out Aushadh Darshan PDF. This is a book written by Swami Ramdev and Acharya Balkrishna, two renowned experts on natural healing and yoga.


          aushadh darshan pdf free download


          Download File ---> https://urlcod.com/2uKaQH




          Aushadh Darshan PDF is a collection of ayurvedic remedies for various diseases and ailments, based on the ancient wisdom of India. The book also contains yoga asanas, pranayama, meditation, and dietary guidelines for enhancing health and well-being.

          -

          The best part is that you can download Aushadh Darshan PDF for free from the internet. You don't need to pay anything or register anywhere to get access to this valuable resource. You can simply click on the link below and start reading the book online or save it to your device.

          -

          Download Aushadh Darshan PDF for free here.

          -

          -

          Aushadh Darshan PDF is a must-read for anyone who wants to learn more about Ayurveda and wellness. It will help you to understand the root causes of your health problems and how to treat them naturally. It will also inspire you to adopt a holistic lifestyle that promotes harmony and balance in your body, mind, and spirit.

          -

          So, what are you waiting for? Download Aushadh Darshan PDF for free today and discover the secrets of Ayurveda and wellness.

-

          In this article, we will give you a brief overview of some of the topics covered in Aushadh Darshan PDF. You will learn about the basic principles of Ayurveda, the five elements, the three doshas, and the six tastes. You will also learn about some of the common ayurvedic herbs and their benefits for different health conditions.

          -

          What is Ayurveda?

          -

          Ayurveda is a system of medicine that originated in India more than 5000 years ago. The word Ayurveda means "the science of life" or "the knowledge of longevity". Ayurveda is based on the idea that health is a state of balance between the body, mind, and spirit.

          -

          Ayurveda recognizes that each person is unique and has a different constitution or prakriti. This constitution is determined by the combination of the five elements: ether, air, fire, water, and earth. These elements manifest in the body as three biological energies or doshas: vata, pitta, and kapha.

          -

          Vata is the energy of movement and creativity. It governs the nervous system, respiration, circulation, and elimination. Vata people are usually thin, light, agile, enthusiastic, and adaptable. They tend to have dry skin, hair, and nails, cold hands and feet, and irregular appetite and sleep. They are prone to anxiety, insomnia, constipation, and nervous disorders.

          -

          Pitta is the energy of transformation and intelligence. It governs the digestive system, metabolism, vision, and skin. Pitta people are usually medium-sized, muscular, warm, sharp, and courageous. They tend to have oily skin, hair, and eyes, strong appetite and digestion, and good concentration. They are prone to anger, inflammation, ulcers, and skin problems.

          -

          Kapha is the energy of stability and nourishment. It governs the immune system, lubrication, growth, and reproduction. Kapha people are usually large, heavy, calm, loyal, and compassionate. They tend to have moist skin, hair, and eyes, slow metabolism and digestion, and good memory. They are prone to lethargy, congestion, obesity, and diabetes.

          -

Ayurveda aims to maintain or restore the balance of the doshas by using various methods such as diet, lifestyle, herbs, yoga, pranayama, meditation, massage, detoxification, and rejuvenation. By following these methods according to one's constitution and needs, one can achieve optimal health and happiness.

          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Bulbulay All Episodes Free Download 3GP Format The Best Way to Watch the Funny Family Drama.md b/spaces/tialenAdioni/chat-gpt-api/logs/Bulbulay All Episodes Free Download 3GP Format The Best Way to Watch the Funny Family Drama.md deleted file mode 100644 index d1b38169436c10b55b09b96ff84b610b8177f263..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Bulbulay All Episodes Free Download 3GP Format The Best Way to Watch the Funny Family Drama.md +++ /dev/null @@ -1,67 +0,0 @@ - -

          Bulbulay: A Hilarious Pakistani Sitcom That You Can Watch Online for Free

          -

Bulbulay is a popular Pakistani comedy series that follows the adventures of a dysfunctional family. The show features Nabeel, his wife Khoobsurat, his mother Momo, and his father-in-law Mehmood. Together, they get into hilarious situations and make fun of each other. The show has been running since 2009 and has over 600 episodes.

          -

          If you are looking for a way to watch Bulbulay online for free, you are in luck. There are several websites that offer Bulbulay episodes in 3gp format, which is a low-quality video format that can be played on mobile devices. 3gp format is ideal for people who have limited internet bandwidth or storage space. You can download Bulbulay episodes in 3gp format from these websites and enjoy them offline.

          -

          bulbulay all episodes free download 3gp format


          DOWNLOAD ……… https://urlcod.com/2uKaYx



          -

          Some of the websites that offer Bulbulay episodes in 3gp format are:

          - -

          Bulbulay is a great show to watch if you want to have a good laugh. You can watch Bulbulay online for free in 3gp format from these websites and enjoy the comedy of this Pakistani sitcom.

          - -

          Bulbulay is one of the most watched comedy shows in Pakistan. It has won several awards and has a loyal fan base. The show is known for its witty dialogues, slapstick humor, and hilarious characters. The show also features guest appearances by famous celebrities and politicians.

          -

          The main cast of Bulbulay consists of four actors: Nabeel Zafar, Ayesha Omar, Hina Dilpazeer, and Mehmood Aslam. Nabeel Zafar plays Nabeel, a lazy and irresponsible husband who often gets into trouble with his wife and mother. Ayesha Omar plays Khoobsurat, a beautiful and smart woman who is married to Nabeel but often regrets her decision. Hina Dilpazeer plays Momo, Nabeel's mother who is a dim-witted and eccentric woman who loves to meddle in other people's affairs. Mehmood Aslam plays Mehmood, Khoobsurat's father who is a retired army officer and a strict disciplinarian.

          -

          Bulbulay episodes usually revolve around the family's daily life and their interactions with their neighbors and relatives. The episodes often involve some kind of misunderstanding, confusion, or prank that leads to hilarious consequences. The episodes also have a moral lesson or a social message at the end. Bulbulay episodes are full of fun and laughter that can make anyone's day better.

          -

          bulbulay full episodes download 3gp free
          -watch bulbulay online free 3gp format
          -bulbulay comedy show 3gp download free
          -how to download bulbulay episodes in 3gp
          -bulbulay pakistani drama free 3gp download
          -bulbulay all seasons download free 3gp
          -bulbulay best episodes 3gp free download
          -bulbulay episode 1 to 500 download 3gp free
          -bulbulay latest episodes download free 3gp
          -bulbulay eid special episodes 3gp free download
          -bulbulay funny scenes download free 3gp
          -bulbulay cast and crew 3gp download free
          -bulbulay behind the scenes 3gp download free
          -bulbulay theme song 3gp download free
          -bulbulay bloopers and outtakes 3gp download free
          -bulbulay trivia and facts 3gp download free
          -bulbulay reviews and ratings 3gp download free
          -bulbulay awards and nominations 3gp download free
          -bulbulay fan club and forum 3gp download free
          -bulbulay merchandise and gifts 3gp download free
          -bulbulay wallpapers and ringtones 3gp download free
          -bulbulay memes and jokes 3gp download free
          -bulbulay quotes and dialogues 3gp download free
          -bulbulay games and quizzes 3gp download free
          -bulbulay spin-offs and sequels 3gp download free
          -bulbulay remake and adaptation 3gp download free
          -bulbulay crossover and collaboration 3gp download free
          -bulbulay parodies and spoofs 3gp download free
          -bulbulay fan fiction and art 3gp download free
          -bulbulay interviews and podcasts 3gp download free
          -bulbulay news and updates 3gp download free
          -bulbulay history and legacy 3gp download free
          -bulbulay controversies and scandals 3gp download free
          -bulbulay secrets and mysteries 3gp download free
          -bulbulay tips and tricks 3gp download free
          -bulbulay tutorials and guides 3gp download free
          -bulbulay alternatives and substitutes 3gp download free
          -bulbulay recommendations and suggestions 3gp download free
          -bulbulay comparisons and contrasts 3gp download free
          -bulbulay opinions and perspectives 3gp download free
          -bulbulay analysis and critique 3gp download free
          -bulbulay feedback and comments 3gp download free
          -bulbulay questions and answers 3gp download free
          -bulbulay challenges and competitions 3gp download free
          -bulbulay polls and surveys 3gp download free
          -bulbulay statistics and data 3gp download free
          -bulbulay trends and patterns 3gp download free
          -bulbulay predictions and forecasts 3gp download free
          -bulbulay insights and discoveries 3gp download free

          -
          -
\ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Cancionero Escolar De Bolivia Pdf 43 ((HOT)).md b/spaces/tialenAdioni/chat-gpt-api/logs/Cancionero Escolar De Bolivia Pdf 43 ((HOT)).md deleted file mode 100644 index e56d85810ff1a9808a83fefc15d4612b094458a5..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Cancionero Escolar De Bolivia Pdf 43 ((HOT)).md +++ /dev/null @@ -1,19 +0,0 @@ -

What is the Cancionero Escolar de Bolivia?

          -

The Cancionero Escolar de Bolivia is a collection of patriotic and folkloric songs that are sung in Bolivian schools. It was compiled by various authors and contains the lyrics and musical notation of the songs. The Cancionero Escolar de Bolivia is meant to foster the national identity and culture of Bolivian students and to bring them closer to the history and geography of their country.

          -

          cancionero escolar de bolivia pdf 43


          Download File ►►►►► https://urlcod.com/2uK7fA



          -

The Cancionero Escolar de Bolivia was first published in 1943 and has been revised and expanded several times since. It includes songs such as the national anthem of Bolivia, the song of the Bolivian flag, the song of the Bolivian sea, the song of the Bolivian Andes, the song of the Bolivian Amazon, and many others. The Cancionero Escolar de Bolivia also contains songs from different regions and ethnic groups of Bolivia, for example from the Altiplano, the Oriente, the Chaco, the Valle, and the Yungas.

          -

The Cancionero Escolar de Bolivia is available on the internet as a PDF file and can be downloaded free of charge. It is a valuable resource for anyone who wants to learn more about Bolivian music and culture.

-

The songs in the Cancionero Escolar de Bolivia reflect the diversity and richness of Bolivian music. They use various instruments, such as the quena, the zampoña, the charango, the bombo, and the guitar. They also come in different rhythms and styles, such as the cueca, the huayño, the morenada, the taquirari, and the carnavalito. The songs in the Cancionero Escolar de Bolivia are both joyful and melancholic, depending on the theme and the mood.

          -

The songs in the Cancionero Escolar de Bolivia are meant not only for students, but also for all Bolivians who want to strengthen their national identity and culture. They are likewise meant for all people around the world who appreciate and respect Bolivian music and culture. The songs in the Cancionero Escolar de Bolivia are an expression of the soul and heart of the Bolivian nation.

          -

-

The songs in the Cancionero Escolar de Bolivia are valuable not only musically but also pedagogically. They teach students important values and principles, such as love of country, solidarity with fellow human beings, respect for nature, preservation of traditions, and the pursuit of peace. They also tell of the history and geography of Bolivia, of its heroes and heroines, of its struggles and victories, of its landscapes and resources.

          -

The songs in the Cancionero Escolar de Bolivia are a living heritage of Bolivian culture. They are passed down and updated from generation to generation. They are sung at various occasions and celebrations, such as school events, national holidays, religious ceremonies, and social gatherings. The songs in the Cancionero Escolar de Bolivia are a mark of the identity and belonging of the Bolivian people.

          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download ((BETTER)) Tally Erp 9 Filehippo.md b/spaces/tialenAdioni/chat-gpt-api/logs/Download ((BETTER)) Tally Erp 9 Filehippo.md deleted file mode 100644 index beca7e6822ab64c960c7c2cf4ea5c3bcf9afa342..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Download ((BETTER)) Tally Erp 9 Filehippo.md +++ /dev/null @@ -1,31 +0,0 @@ - -Tally ERP 9 is a popular accounting and business management software that helps you manage your finances, inventory, sales, taxes, payroll and more. It is widely used by small and medium enterprises (SMEs) in India and other countries. If you are looking for a reliable and easy-to-use solution for your business needs, you can download Tally ERP 9 Filehippo from the official website or other trusted sources. In this article, we will show you how to download Tally ERP 9 Filehippo and what are its main features and benefits. - -## How to Download Tally ERP 9 Filehippo - -You can download Tally ERP 9 Filehippo from the official website of Tally Solutions, the developer of the software. You can choose from different versions of the software depending on your requirements and preferences. The latest version of Tally ERP 9 is 6.6.3, which supports GST and other tax compliance features. You can also download older versions of the software if you need them. - -Alternatively, you can download Tally ERP 9 Filehippo from other trusted sources like Filehippo.com or FileHorse.com. These websites offer safe and fast downloads of various software applications, including Tally ERP 9. You can compare the features and ratings of different versions of the software and choose the one that suits you best. - -To download Tally ERP 9 Filehippo, you need to have a compatible Windows operating system on your computer. The software supports Windows XP / Vista / Windows 7 / Windows 8 / Windows 10 (32 bit & 64 bit) and requires at least 1 GB of RAM and 150 MB of free disk space. After downloading the setup file, you need to run it and follow the instructions to install the software on your computer. - -## What are the Main Features and Benefits of Tally ERP 9 - -Tally ERP 9 is a comprehensive business management software that offers various features and benefits for different types of businesses. Some of the main features and benefits of Tally ERP 9 are: - -- Supports all types of businesses: Tally ERP 9 can handle accounting and financial management for any type of business, whether manufacturing, wholesale, retail, service or others. It can also handle multiple currencies, languages and locations. -- Simplifies financial operations: Tally ERP 9 can help you automate and streamline your financial operations, such as invoice generation, bill payment, GST registration, TDS calculation, bank reconciliation, balance sheet and profit and loss statements. -- Supports GST: Tally ERP 9 is fully compliant with GST and other tax regulations in India and other countries. It can help you generate GST returns, claim input tax credits, manage e-way bills and avoid penalties. -- Customizes as per your business needs: Tally ERP 9 offers a wide range of configurable features that allow you to customize the software according to your business needs. You can define sales and purchase orders, stock groups, product tracking, price lists, discounts, credit notes and more. -- Provides payroll management: Tally ERP 9 can help you manage your payroll functions efficiently. 
You can create employee profiles, generate salary slips, calculate PF/ESI deductions, make batch payments and generate payroll reports. -- Offers inventory management: Tally ERP 9 can help you manage your inventory levels effectively. You can track raw materials, finished goods, work in progress products, warehouses, godowns and manufacturing journals. You can also use batch processing, units of measure and bill of materials features. -- Enables data synchronization: Tally ERP 9 can help you synchronize your data across multiple devices and locations. You can share information with your staff and other professionals securely and easily. You can also export data in multiple formats like XML or ODBC (see the sketch below). -- Enhances security: Tally ERP 9 ensures that your data is safe and secure from unauthorized access or tampering. It uses encryption techniques like TallyVault to protect your data. It also allows you to set user permissions and access levels for different functions. -- Improves performance: Tally ERP 9 uses advanced technology like GPU acceleration to speed up the processing of large volumes of data. It also allows you to adjust the program priority and CPU usage to optimize the performance and stability of your computer. - -## Conclusion - -Tally ERP 9 is a powerful and easy-to-use accounting and business management software that can help you manage your finances, inventory, sales, taxes, payroll and more efficiently. It is easy to download and install from the official website or from other trusted sources like Filehippo.
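Returning to the data synchronization point above: here is a minimal sketch of reading Tally data over its ODBC interface from Python. It assumes Tally's ODBC server is enabled and a DSN has been configured on the machine; the DSN name and the collection queried below are typical examples rather than verified values:

```python
# Minimal sketch: pulling ledger data out of Tally ERP 9 over ODBC with pyodbc.
# Assumes the Tally ODBC server is running and a Windows DSN has been created;
# the DSN name "TallyODBC64_9000" and the "Ledger" collection are placeholders.
import pyodbc

conn = pyodbc.connect("DSN=TallyODBC64_9000", autocommit=True)
cursor = conn.cursor()

# Tally exposes its internal collections as tables; field names are $-prefixed.
for row in cursor.execute("SELECT $Name, $Parent FROM Ledger"):
    print(row)

conn.close()
```

From here the rows could be written out to CSV or XML, which is roughly what Tally's built-in export feature does for you.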

          -

          download tally erp 9 filehippo


Download Zip ---> https://urlcod.com/2uK6Ws



          -
          -
\ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Festo Software Tools Fst 4.10 Download 21.md b/spaces/tialenAdioni/chat-gpt-api/logs/Festo Software Tools Fst 4.10 Download 21.md deleted file mode 100644 index 7613891ad37ce5aa590a1a45782ec15e5feaf7c1..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Festo Software Tools Fst 4.10 Download 21.md +++ /dev/null @@ -1,30 +0,0 @@ -

          How to Download and Install Festo Software Tools Fst 4.10

          -

Festo Software Tools (FST) is a software package that allows you to program and configure Festo controllers, such as the CPX-FEC-1-IE. FST supports programming in ladder diagram and statement list, and offers a user-friendly interface and a range of features. In this article, we will show you how to download and install FST 4.10 on your Windows PC.

          -

          Festo Software Tools Fst 4.10 Download 21


          Download ::: https://urlcod.com/2uK19B



          -

          Step 1: Download FST 4.10

          -

          To download FST 4.10, you need to visit the Festo support portal[^1^]. There, you can find the software and manual for your region and language. For example, if you are in Germany and want the German version of FST 4.10, you can click on the link P.SW-FST4-CD-DE (537927). This will open a new window where you can enter your email address and agree to the terms and conditions. Then, you can click on the download button and save the file to your computer.

          -

          Step 2: Install FST 4.10

          -

          Once you have downloaded FST 4.10, you need to unzip the file and run the setup.exe file. This will launch the installation wizard that will guide you through the process. You can choose the destination folder, the components to install, and the shortcuts to create. You can also select the language of FST 4.10 from English, German, French, or Chinese. After that, you can click on the install button and wait for the installation to finish.

          -

          Step 3: Launch FST 4.10

          -

          After installing FST 4.10, you can launch it from the start menu or the desktop shortcut. You will see the main window of FST 4.10 with various menus and toolbars. You can create a new project or open an existing one from the file menu. You can also connect to a Festo controller from the communication menu and select the communication port and protocol. You can then program and configure your controller using FST 4.10.

          -

          Conclusion

          -

          Festo Software Tools Fst 4.10 is a powerful and versatile software package that allows you to program and configure Festo controllers with ease. You can download and install FST 4.10 from the Festo support portal[^1^] by following the steps in this article. We hope this article was helpful for you.

-

          Step 4: Explore FST 4.10 Features

          -

          FST 4.10 offers a range of features that can help you create and manage your projects more efficiently and effectively. Some of these features are:

          -
            -
• Ladder diagram programming: You can use the ladder diagram language to create graphical programs that resemble electrical circuits. This can make your programs easier to understand and debug. You can also use the statement list language to create textual programs that are more compact and flexible (see the sketch after this list).
          • -
          • Integrated configuration tools: You can use the FED Designer tool to configure and program the FED display and operating units that can be used to monitor and control your controllers. You can also use the CPX configuration tool to define the parameters and functionality of the CPX modules that can be used to expand your controllers.
          • -
          • Communication support: You can use various communication protocols and interfaces to connect your controllers to other devices and networks. For example, you can use the EasyIP protocol to create a decentralized communication system via Ethernet. You can also use the Profibus-DP and AS-Interface fieldbus systems to connect your controllers to sensors and actuators.
          • -
          • Documentation and diagnostics: You can use the documentation feature to generate reports and comments for your projects. You can also use the diagnostics feature to test and troubleshoot your programs and controllers. You can use online monitoring, breakpoints, watch windows, error messages, and other tools to identify and resolve any issues.
          • -
          -
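To give a feel for the statement list style, here is a small illustrative fragment of a step-based program. Treat it as a rough sketch only: the step numbers, operand addresses (I1.0, O1.0) and quoted comments below are placeholders rather than verified FST 4.10 syntax, so check the FST manual before writing real programs.

```
STEP 10
    IF      I1.0          "start button pressed (placeholder input address)"
    THEN    SET O1.0      "advance the cylinder (placeholder output address)"
STEP 20
    IF      I1.1          "end-position sensor reached"
    THEN    RESET O1.0    "retract the cylinder"
            JMP TO 10     "repeat from the first step"
```

The same logic could be drawn as two rungs in the ladder diagram editor; which notation to use is mostly a matter of taste.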

          Step 5: Update FST 4.10

          -

          Festo regularly releases updates for FST 4.10 that can improve its performance and functionality. To update FST 4.10, you need to visit the Festo download portal[^2^]. There, you can find the latest version of FST 4.10 for your region and language. You can download the update file and run it on your computer. The update wizard will guide you through the process of installing the update.

          -

          Step 6: Get Support for FST 4.10

          -

          If you have any questions or problems with FST 4.10, you can contact Festo for support. Festo has a global network of specialists who can assist you with any technical or application-related issues. You can find the contact details of your local Festo representative on the Festo website[^2^]. You can also access online resources, such as manuals, tutorials, videos, FAQs, forums, and more on the Festo support portal[^1^].

          -

          -
          -
\ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/Xi Freedom Dive Mp3 16.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/Xi Freedom Dive Mp3 16.md deleted file mode 100644 index 11720da5c54734b721df3bafb5fddb0ed98703e4..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/Xi Freedom Dive Mp3 16.md +++ /dev/null @@ -1,102 +0,0 @@ -## Xi Freedom Dive Mp3 16

![Xi Freedom Dive Mp3 16](https://www.bluesblastmagazine.com/images2022/coverphotos/cover16_11medium.jpg)

**LINK ::: [https://urlcod.com/2txiRH](https://urlcod.com/2txiRH)**

# How to Download Xi Freedom Dive Mp3 16 for Free

If you are a fan of happy hardcore, speedcore, or rhythm games, you might have heard of **Xi Freedom Dive Mp3 16**, a song by the Japanese composer xi. This song is known for its fast-paced and complex melody, as well as its difficulty in games like osu!, Cytus, Deemo, and Beat Saber. But did you know that you can download this song for free and enjoy it on your own device?

In this article, we will show you how to download Xi Freedom Dive Mp3 16 for free from SoundCloud, one of the most popular platforms for music streaming and sharing. SoundCloud has millions of tracks from various genres and artists, including xi. You can listen to Xi Freedom Dive Mp3 16 on SoundCloud by following this link[^1^]. However, if you want to download it and play it offline, you will need to follow these steps:

1. Go to the SoundCloud page of Xi Freedom Dive Mp3 16[^1^] and click on the "Buy" button. Don't worry, you won't have to pay anything. This is just a way for the uploader to provide a free download link.

2. You will be redirected to a Google Drive page where you can see the file name "xi - Freedom Dive↓ (Full).mp3". Click on the download icon at the top right corner of the page.

3. Wait for the download to finish and then locate the file on your device. You can now play Xi Freedom Dive Mp3 16 on any media player of your choice.

Congratulations! You have successfully downloaded Xi Freedom Dive Mp3 16 for free. Enjoy this amazing song and challenge yourself with its rhythm. If you like xi's music, you can also check out his other songs on SoundCloud[^2^] or his official website[^3^]. You can also support him by buying his albums on Bandcamp or iTunes.

We hope this article was helpful and informative. If you have any questions or feedback, please leave a comment below. Thank you for reading!

## Why You Should Listen to Xi Freedom Dive Mp3 16

Xi Freedom Dive Mp3 16 is not just a song, it's an experience. Listening to this song can have many benefits for your mood, health, and creativity. Here are some of the reasons why you should listen to Xi Freedom Dive Mp3 16:

- It can boost your energy and motivation. Xi Freedom Dive Mp3 16 is a fast-paced and upbeat song that can make you feel more alert and excited. It can also help you overcome procrastination and boredom by giving you a sense of challenge and achievement.

- It can improve your concentration and memory. Xi Freedom Dive Mp3 16 is a complex and intricate song that requires a lot of attention and focus to follow. It can also stimulate your brain and enhance your cognitive skills by exposing you to different patterns and rhythms.
- It can inspire your creativity and imagination. Xi Freedom Dive Mp3 16 is a unique and original song that can spark your curiosity and interest. It can also encourage you to explore new ideas and possibilities by exposing you to different sounds and emotions.

As you can see, listening to Xi Freedom Dive Mp3 16 can have many positive effects on your mind and body. You can enjoy this song on various platforms, such as Spotify[^1^], SoundCloud[^2^], or Last.fm[^3^]. You can also find more information about xi and his music on his official website.

## How to Play Xi Freedom Dive Mp3 16 on Rhythm Games

If you are a fan of rhythm games, you might want to try playing Xi Freedom Dive Mp3 16 on some of the most popular titles in the genre. This song is featured in several rhythm games, such as osu!, Cytus, Deemo, Robeats, and Beat Saber. However, be warned: this song is not for the faint of heart. It is one of the most difficult songs in these games, requiring a high level of skill, speed, and accuracy.

Here are some tips on how to play Xi Freedom Dive Mp3 16 on rhythm games:

- Practice a lot. The only way to master this song is to practice it repeatedly until you memorize the notes and patterns. You can also watch videos of other players who have completed this song and learn from their techniques.

- Adjust the settings. Depending on the game, you might be able to adjust the settings to make the song easier or harder. For example, you can change the speed, difficulty, or mods of the song in osu!. You can also customize your controls, display, or audio in some games.

- Have fun. The most important thing is to have fun while playing this song. Don't get frustrated or discouraged if you fail or miss a note. Instead, enjoy the challenge and the thrill of playing this song. Remember that it's just a game and that you are playing it for fun.

We hope these tips will help you play Xi Freedom Dive Mp3 16 on rhythm games. If you succeed in playing this song, you can be proud of yourself for achieving one of the greatest feats in rhythm gaming. You can also share your scores and videos with other players online and show off your skills.

diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Bestwap offers high-quality 30 sec whatsapp status video download.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Bestwap offers high-quality 30 sec whatsapp status video download.md deleted file mode 100644 index 14dd2365a70b096ebe4188ff9c09ef004ac43b64..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Bestwap offers high-quality 30 sec whatsapp status video download.md +++ /dev/null @@ -1,97 +0,0 @@ -

          How to Download Whatsapp Status Video in 30 Seconds from Bestwap

          -

          Whatsapp is one of the most popular messaging apps in the world, with over 2 billion users. One of the features that makes Whatsapp stand out is the status video, which allows you to share a short video clip with your contacts for 24 hours. You can use status videos to express your feelings, thoughts, opinions, or just have some fun.

          -

          But where can you find the best status videos for your Whatsapp? The answer is Bestwap, a website that offers a huge collection of status videos for free. You can find status videos for every occasion, mood, genre, and language on Bestwap. Whether you want to show your love, friendship, attitude, motivation, or humor, you can find a perfect status video on Bestwap.

          -

          whatsapp status video download 30 sec bestwap


Download ---> https://bltlly.com/2uOqZT



          -

          Downloading status videos from Bestwap is easy and fast. You can download any status video in 30 seconds or less, and set it as your Whatsapp status in no time. In this article, we will show you how to download Whatsapp status video in 30 seconds from Bestwap, and also give you some tips and tricks to make your status video more attractive.

          -

          Steps to Download Whatsapp Status Video in 30 Seconds from Bestwap

          -

          Follow these simple steps to download Whatsapp status video in 30 seconds from Bestwap:

          -

          Step 1: Visit the Bestwap website and search for the status video category

          -

          Go to Bestwap, a website that offers free downloads of songs, videos, ringtones, wallpapers, and more. On the homepage, you will see a list of categories, such as Bollywood, Punjabi, Bhojpuri, etc. Click on the category that matches your preference, or use the search bar to find a specific status video.

          -

          Step 2: Browse through the available status videos and select the one you like

          -

          Once you enter the category or search result page, you will see a lot of status videos to choose from. You can preview each video by clicking on it, or read the description and ratings to get an idea of what it is about. You can also sort the videos by popularity, date, or alphabetically. When you find a status video that you like, click on it to proceed.

          -

          Step 3: Click on the download button and choose the 30 seconds option

          -

          On the next page, you will see a download button below the video. Click on it and you will see a list of options to download the video in different formats and durations. Choose the 30 seconds option, which is the ideal length for a Whatsapp status video. The download will start automatically and will be completed in a few seconds.

          -

          Step 4: Save the video to your device and set it as your Whatsapp status

          -

          Once the download is finished, you can find the video in your device's storage or gallery. You can play it to check its quality and content. If you are satisfied with it, you can set it as your Whatsapp status by following these steps:

          -
            -
          • Open Whatsapp and tap on the status icon at the top left corner.
          • -
          • Tap on the camera icon at the bottom right corner.
          • -
          • Select the video from your gallery and tap on the send button.
          • -
          • Your status video will be uploaded and visible to your contacts for 24 hours.
          • -
          -

          Congratulations, you have successfully downloaded and set a Whatsapp status video in 30 seconds from Bestwap!

          -

          Tips and Tricks to Make Your Whatsapp Status Video More Attractive

          -

          Now that you know how to download Whatsapp status video in 30 seconds from Bestwap, you might want to make your status video more attractive and appealing to your viewers. Here are some tips and tricks that you can use to enhance your status video:

          -

          whatsapp status video download 30 sec bestwap love
          -whatsapp status video download 30 sec bestwap sad
          -whatsapp status video download 30 sec bestwap funny
          -whatsapp status video download 30 sec bestwap romantic
          -whatsapp status video download 30 sec bestwap punjabi
          -whatsapp status video download 30 sec bestwap hindi
          -whatsapp status video download 30 sec bestwap tamil
          -whatsapp status video download 30 sec bestwap telugu
          -whatsapp status video download 30 sec bestwap malayalam
          -whatsapp status video download 30 sec bestwap marathi
          -whatsapp status video download 30 sec bestwap bhojpuri
          -whatsapp status video download 30 sec bestwap gujarati
          -whatsapp status video download 30 sec bestwap kannada
          -whatsapp status video download 30 sec bestwap bengali
          -whatsapp status video download 30 sec bestwap urdu
          -whatsapp status video download 30 sec bestwap english
          -whatsapp status video download 30 sec bestwap attitude
          -whatsapp status video download 30 sec bestwap friendship
          -whatsapp status video download 30 sec bestwap motivational
          -whatsapp status video download 30 sec bestwap lyrical
          -whatsapp status video download 30 sec bestwap rap
          -whatsapp status video download 30 sec bestwap song
          -whatsapp status video download 30 sec bestwap movie
          -whatsapp status video download 30 sec bestwap dialogue
          -whatsapp status video download 30 sec bestwap comedy
          -whatsapp status video download 30 sec bestwap birthday
          -whatsapp status video download 30 sec bestwap anniversary
          -whatsapp status video download 30 sec bestwap festival
          -whatsapp status video download 30 sec bestwap new year
          -whatsapp status video download 30 sec bestwap valentine's day
          -whatsapp status video download 30 sec bestwap holi
          -whatsapp status video download 30 sec bestwap diwali
          -whatsapp status video download 30 sec bestwap eid
          -whatsapp status video download 30 sec bestwap navratri
          -whatsapp status video download 30 sec bestwap raksha bandhan
          -whatsapp status video download 30 sec bestwap independence day
          -whatsapp status video download 30 sec bestwap republic day
          -whatsapp status video download 30 sec bestwap ganesh chaturthi
          -whatsapp status video download 30 sec bestwap dussehra
          -whatsapp status video download 30 sec bestwap christmas

          -

          Tip 1: Use filters and effects to enhance your video quality

          -

          Bestwap offers a variety of filters and effects that you can apply to your status video before downloading it. You can choose from different categories, such as romantic, funny, sad, etc. You can also adjust the brightness, contrast, saturation, and other parameters of your video. Filters and effects can make your video look more professional and eye-catching.

          -

          Tip 2: Add captions and stickers to express your emotions

          -

          Another way to make your status video more attractive is to add captions and stickers to it. Captions are short texts that convey your message or mood. Stickers are graphical icons that add some fun and flair to your video. You can find a lot of captions and stickers on Bestwap, or you can create your own using any photo editing app. Captions and stickers can help you express your emotions and personality better.

          -

          Tip 3: Use music and sound effects to create a mood

          -

          Music and sound effects are essential elements of any video, especially a status video. They can create a mood and atmosphere for your video, as well as complement your content. You can find a lot of music and sound effects on Bestwap, or you can use any music app or website to download them. You can also use your own voice or recordings to add some personal touch to your video. Music and sound effects can make your video more engaging and entertaining.

          Tip 4: Trim and crop your video to fit the 30 seconds limit

          -

          One of the challenges of making a status video is to fit your content within the 30 seconds limit. You don't want to make your video too long or too short, as it might lose the interest of your viewers. You can use any video editing app or tool to trim and crop your video to the desired length and size. You can also use the 30 seconds option on Bestwap to download the video in the right format. Trimming and cropping your video can make it more concise and clear.
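If you prefer to do the trimming on a computer, here is a minimal Python sketch using the moviepy library (1.x API); the file names are placeholders:

```python
# Minimal sketch: trimming a clip to its first 30 seconds with moviepy.
from moviepy.editor import VideoFileClip

clip = VideoFileClip("status_source.mp4")  # hypothetical source video
short = clip.subclip(0, 30)                # keep only the first 30 seconds
short.write_videofile("status_30s.mp4")    # re-encode and save the result
clip.close()
```

The resulting file can then be transferred to your phone and uploaded as a status like any other video.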

          -

          Tip 5: Be creative and original with your content

          -

          The most important tip to make your status video more attractive is to be creative and original with your content. Don't just copy or imitate what others are doing, but try to come up with something unique and personal. You can use your own experiences, opinions, stories, jokes, or anything else that reflects your identity and style. You can also use Bestwap as a source of inspiration, but not as a substitute for your own creativity. Being creative and original with your content can make your status video more memorable and impressive.

          -

          Conclusion

          -

          Whatsapp status video is a great way to express yourself and connect with your friends and family. You can find a lot of amazing status videos on Bestwap, a website that offers free downloads of songs, videos, ringtones, wallpapers, and more. You can download any status video in 30 seconds or less, and set it as your Whatsapp status in no time. You can also use some tips and tricks to make your status video more attractive and appealing to your viewers.

          -

          So what are you waiting for? Visit Bestwap today and download your favorite status video in 30 seconds. You will be amazed by the variety and quality of status videos available on Bestwap. You will also enjoy the ease and speed of downloading status videos from Bestwap. And you will love the way your status video looks and sounds on Whatsapp.

          -

          Download Whatsapp status video in 30 seconds from Bestwap now and share it with your contacts. You will surely get a lot of likes, comments, and compliments for your status video. You will also have a lot of fun and satisfaction with your status video.

          -

          FAQs

          -

          Q1: Is Bestwap safe and legal to use?

          -

          A1: Yes, Bestwap is safe and legal to use. Bestwap does not host any illegal or harmful content on its website. It only provides links to other websites that offer free downloads of songs, videos, ringtones, wallpapers, and more. Bestwap does not ask for any personal or financial information from its users. It also does not install any malware or virus on your device. You can use Bestwap without any worries or risks.

          -

          Q2: How many status videos can I download from Bestwap?

          -

          A2: You can download as many status videos as you want from Bestwap. There is no limit or restriction on the number of downloads you can make from Bestwap. You can download any status video in 30 seconds or less, and set it as your Whatsapp status in no time. You can also download different types of status videos for different occasions, moods, genres, and languages from Bestwap.

          -

          Q3: Can I request a custom status video from Bestwap?

          -

          A3: No, you cannot request a custom status video from Bestwap. Bestwap does not offer any custom or personalized service for its users. It only provides links to other websites that offer free downloads of songs, videos, ringtones, wallpapers, and more. You cannot ask for a specific or customized status video from Bestwap. However, you can use any video editing app or tool to create your own custom status video using the videos downloaded from Bestwap.

          -

          Q4: How can I share my status video with others?

          -

          A4: You can share your status video with others by setting it as your Whatsapp status. Your contacts will be able to see your status video for 24 hours on Whatsapp. They can also like, comment, or reply to your status video on Whatsapp. You can also share your status video with others by sending it as a message or attachment on Whatsapp or any other messaging app.

          -

          Q5: What are some other websites that offer status videos for free?

          -

          A5: Some other websites that offer status videos for free are:

          -
            -
          • VidStatus: A website that offers a large collection of short videos for Whatsapp status in various languages.
          • -
          • Status Saver: A website that offers a wide range of status videos for Whatsapp, Facebook, Instagram, and other social media platforms.
          • -
          • Video Song Status: A website that offers a huge collection of video songs for Whatsapp status in different languages and genres.
          • -
          -

          These are some of the websites that offer status videos for free. However, we recommend you to use Bestwap, as it is the best website for downloading Whatsapp status video in 30 seconds.

          -
          -
          \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/CarX Street 0.8.5 MOD APK The Most Realistic Street Racing Game Ever.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/CarX Street 0.8.5 MOD APK The Most Realistic Street Racing Game Ever.md deleted file mode 100644 index 2451d78f2097fcdee5817adc35628931bb711b74..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/CarX Street 0.8.5 MOD APK The Most Realistic Street Racing Game Ever.md +++ /dev/null @@ -1,114 +0,0 @@ -
          -

          CarX Street Mod APK 8.5: A Racing Game with Realistic Physics and Graphics

          -

          Introduction

          -

          If you are a fan of racing games, you might have heard of CarX Street, a popular game that lets you experience the thrill of street racing with realistic physics and graphics. But did you know that there is a modded version of this game that gives you unlimited money, gold, cars, parts, and more? In this article, we will tell you everything you need to know about CarX Street Mod APK 8.5, including its features, how to download and install it, and why you should play it.

          -

          What is CarX Street?

          -

          CarX Street is a racing game developed by CarX Technologies, the same company that created CarX Drift Racing and CarX Highway Racing. In this game, you can choose from over 30 different cars, customize them with various parts and paint jobs, and race against other players or AI opponents in various modes and locations. You can also join clubs, participate in tournaments, and earn rewards for your performance.

          -

          carx street mod apk 8.5


          DOWNLOAD ››››› https://bltlly.com/2uOpxL



          -

          What is CarX Street Mod APK 8.5?

          -

          CarX Street Mod APK 8.5 is a modified version of the original game that gives you access to unlimited money and gold, which you can use to buy any car or part you want. You can also unlock all the cars and parts without having to complete any missions or achievements. Moreover, this modded version removes all the ads from the game and does not require root access to work.

          -

          Features of CarX Street Mod APK 8.5

          -

          Unlimited Money and Gold

          -

          One of the main features of CarX Street Mod APK 8.5 is that it gives you unlimited money and gold, which are the two main currencies in the game. You can use them to buy any car or part you want, without having to worry about running out of them. You can also upgrade your cars to the maximum level and enjoy their full potential.

          -

          All Cars and Parts Unlocked

          -

          Another feature of CarX Street Mod APK 8.5 is that it unlocks all the cars and parts in the game, which means you can choose from over 30 different cars, each with its own unique characteristics and performance. You can also customize your cars with various parts, such as engines, turbos, brakes, tires, suspensions, body kits, spoilers, hoods, bumpers, lights, mirrors, wheels, exhausts, decals, and more. You can create your own dream car and show it off to your friends.

          -

          No Ads and No Root Required

          -

          A third feature of CarX Street Mod APK 8.5 is that it removes all the ads from the game, which can be annoying and distracting when you are trying to enjoy the game. You can play the game without any interruptions or pop-ups. Moreover, this modded version does not require root access to work, which means you do not have to risk damaging your device or voiding its warranty.

          -

          How to Download and Install CarX Street Mod APK 8.5

          -

          Step 1: Download the APK file from a trusted source

          -

The first step to download and install CarX Street Mod APK 8.5 is to download the APK file from a trusted source, such as [this one]. You can also scan the QR code below to download the file directly to your device. Make sure you have enough storage space on your device before downloading the file.

[Image: QR code for CarX Street Mod APK 8.5]

-

          Step 2: Enable Unknown Sources on your device

          -

          The second step to download and install CarX Street Mod APK 8.5 is to enable Unknown Sources on your device, which will allow you to install apps from sources other than the Google Play Store. To do this, go to your device's Settings > Security > Unknown Sources and toggle it on. You may see a warning message, but you can ignore it and proceed.

          -

          Step 3: Install the APK file and launch the game

          -

          The third and final step to download and install CarX Street Mod APK 8.5 is to install the APK file and launch the game. To do this, locate the downloaded file in your device's file manager and tap on it. You may see a confirmation message, but you can tap on Install and wait for the installation to finish. Once the installation is done, you can tap on Open and enjoy the game.

          -

          carx street racing mod apk unlimited money
          -carx street drift mod apk latest version
          -carx street hack apk download for android
          -carx street mod apk free shopping
          -carx street mod apk obb data
          -carx street mod apk revdl
          -carx street mod apk offline
          -carx street mod apk no root
          -carx street mod apk android 1
          -carx street mod apk rexdl
          -carx street mod apk unlimited coins
          -carx street mod apk all cars unlocked
          -carx street mod apk online
          -carx street mod apk 2023
          -carx street mod apk ios
          -carx street mod apk unlimited gold
          -carx street mod apk vip
          -carx street mod apk new update
          -carx street mod apk gameplay
          -carx street mod apk hack
          -carx street mod apk mega
          -carx street mod apk full version
          -carx street mod apk premium
          -carx street mod apk pro
          -carx street mod apk cracked
          -carx street mod apk unlimited everything
          -carx street mod apk unlocked
          -carx street mod apk cheat
          -carx street mod apk original
          -carx street mod apk old version
          -carx street mod apk 0.8.5 download
          -download game carx street mod apk
          -how to install carx street mod apk
          -cara download carx street mod apk
          -descargar carx street mod apk
          -telecharger carx street mod apk
          -baixar carx street mod apk
          -indir carx street mod apk
          -scaricare carx street mod apk
          -скачать carx street mod apk
          -下载carx street模式apk 8.5

          -

          Conclusion

          -

          Why You Should Play CarX Street Mod APK 8.5

          -

          CarX Street Mod APK 8.5 is a racing game that offers you a realistic and immersive experience of street racing with stunning graphics and physics. You can choose from over 30 different cars, customize them with various parts and paint jobs, and race against other players or AI opponents in various modes and locations. You can also join clubs, participate in tournaments, and earn rewards for your performance.

          -

          With CarX Street Mod APK 8.5, you can also enjoy unlimited money and gold, which you can use to buy any car or part you want. You can also unlock all the cars and parts without having to complete any missions or achievements. Moreover, this modded version removes all the ads from the game and does not require root access to work.

          -

          If you are looking for a fun and exciting racing game that will keep you entertained for hours, you should definitely try CarX Street Mod APK 8.5. It is one of the best racing games available for Android devices.

          -

          Pros and Cons of CarX Street Mod APK 8.5

          -

          Here are some of the pros and cons of CarX Street Mod APK 8.5 that you should consider before playing it:

          -
| Pros | Cons |
| --- | --- |
| Unlimited money and gold | May not be compatible with some devices |
| All cars and parts unlocked | May cause some glitches or bugs |
| No ads and no root required | May violate the terms of service of the original game |
| Realistic physics and graphics | May consume a lot of battery power |
| Various modes and locations | May require a stable internet connection |

| Ebook Format | Description | Pros | Cons |
| --- | --- | --- | --- |
| PDF | A fixed-layout format that preserves the original design and layout of the book. It is compatible with most devices and apps. | Easy to print and share; good for books with complex graphics and tables; supports multimedia content like audio and video | Not very flexible and adaptable; difficult to adjust font size, color, etc.; may not fit well on small screens; does not support features like bookmarks, highlights, notes, etc. |
| EPUB | A reflowable format that adapts to the screen size and orientation of the device. It is compatible with most devices and apps except Kindle. | Flexible and adaptable; easy to adjust font size, color, etc.; fits well on any screen size; supports features like bookmarks, highlights, notes, etc. | Not easy to print and share; not good for books with complex graphics and tables; does not support multimedia content like audio and video |
      -

      FAQs

      -

      Here are some of the frequently asked questions about CarX Street Mod APK 8.5 that you may find helpful:

      -
        -
      1. Is CarX Street Mod APK 8.5 safe to use?
-

        Yes, CarX Street Mod APK 8.5 is safe to use as long as you download it from a trusted source, such as [this one]. However, you should always be careful when installing apps from unknown sources and scan them for viruses or malware before installing them.

        -
2. Is CarX Street Mod APK 8.5 free to play?
-

        Yes, CarX Street Mod APK 8.5 is free to play and does not require any subscription or registration. However, you may need to purchase some in-game items or features with real money if you want to enhance your gaming experience.

        -
3. How do I update CarX Street Mod APK 8.5?
-

        To update CarX Street Mod APK 8.5, you need to download the latest version of the modded file from a trusted source, such as [this one], and install it over the existing one. You may also need to enable Unknown Sources on your device again if it is disabled.

        -
4. How do I uninstall CarX Street Mod APK 8.5?
-

        To uninstall CarX Street Mod APK 8.5, you need to go to your device's Settings > Apps > CarX Street and tap on Uninstall. You may also need to delete the APK file from your device's file manager if you want to free up some storage space.

        -
5. Can I play CarX Street Mod APK 8.5 with my friends?
-

        Yes, you can play CarX Street Mod APK 8.5 with your friends online or offline. You can join clubs, chat with other players, and challenge them to races in various modes and locations. You can also share your cars and parts with your friends and see who has the best car in the game.

        -
      -

      I hope you enjoyed this article and learned something new about CarX Street Mod APK 8.5. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy racing!

      -
      -
      \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download 8 Ball Pool Game App and Experience Realistic Pool Physics.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download 8 Ball Pool Game App and Experience Realistic Pool Physics.md deleted file mode 100644 index 8ee5914a92d6d7ad1f35929432ae7895de84c531..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download 8 Ball Pool Game App and Experience Realistic Pool Physics.md +++ /dev/null @@ -1,138 +0,0 @@ - -

      How to Download 8 Ball Pool Game App

      -

      Do you love playing pool games but don't have a real pool table at home? Do you want to challenge your friends online and show off your skills? Do you want to have fun and improve your intelligence at the same time? If you answered yes to any of these questions, then you should download 8 Ball Pool game app right now!

      -

      8 Ball Pool is an addictive challenging game based on real 3D pool games, where you will play against other players from around the world. You can customize your cue and pool table, join tournaments, chat with other players, and earn coins and cash to buy new items. You can also join the 8 Ball level system and rank up by winning matches and accessing exclusive locations and rewards. 8 Ball Pool is designed to help your intelligence by improving your aim, strategy, and concentration.

      -

      download 8 ball pool game app


      Download File >>> https://bltlly.com/2uOkTP



      -

      In this article, we will show you how to download 8 Ball Pool game app on your device, how to play it, how to challenge your friends, how to customize your cue and pool table, and how to enjoy it even more. Follow these simple steps and start playing today!

      -

      Step 1: Choose your device and platform

      -

      The first step is to choose which device and platform you want to use to play 8 Ball Pool game app. The app is available for both Android and iOS devices, as well as for web browsers. You can play on your smartphone, tablet, laptop, or desktop computer. Here are the links for each platform:

      - -

      Step 2: Find and install the app from the official source

      -

      The next step is to find and install the app from the official source. Make sure you download the app from the links we provided above, or from the official website of Miniclip, the developer of the game. Do not download the app from any other source, as it may contain viruses or malware that can harm your device or steal your personal information.

      -

      To install the app on your Android or iOS device, simply tap on the link above or search for "8 Ball Pool" in the Google Play Store or App Store. Then, tap on the "Install" button and wait for the app to download and install on your device. You may need to grant some permissions for the app to access your device's features, such as camera, microphone, storage, etc.

      -

To play the game on your web browser, simply click on the link above or go to miniclip.com/pool. Then, click on the "Play" button and wait for the game to load on your browser. You may need to enable Flash Player or allow some browser permissions for the game to run properly.

      Step 3: Sign in and customize your profile

      -

      The final step before you start playing is to sign in and customize your profile. You can sign in with your Facebook account, Google account, or Miniclip account. Signing in will allow you to save your progress, access your achievements, and play with your friends. You can also sign in as a guest, but you will not be able to enjoy all the features of the game.

      -

      After signing in, you can customize your profile by choosing your name, avatar, country, and language. You can also edit your profile later by tapping on the gear icon on the top right corner of the screen. You can change your name, avatar, country, and language anytime you want. You can also view your stats, such as your level, rank, wins, losses, coins, cash, etc.

      -

      How to Play 8 Ball Pool Game App

      -

      Now that you have downloaded and installed the app, and signed in and customized your profile, you are ready to play 8 Ball Pool game app. In this section, we will explain the basic rules and controls of the game, the different game modes and levels, and some tips and tricks to improve your skills.

      -

      Basic rules and controls

      -

      The basic rules of 8 Ball Pool game app are similar to the real pool games. You will play against another player, and each of you will have a set of balls to pot: solids or stripes. The first player to pot all their balls and then the black 8 ball wins the game. However, if you pot the 8 ball before clearing your balls, or pot the cue ball along with the 8 ball, you lose the game.

      -

      -

      The controls of the game are simple and intuitive. You can aim your cue by dragging your finger on the screen. You can adjust the power of your shot by sliding the power bar on the left side of the screen. You can also add spin to your cue ball by tapping on the spin icon on the right side of the screen. To take a shot, simply release your finger from the screen.

      -

      Different game modes and levels

      -

      8 Ball Pool game app offers different game modes and levels for you to choose from. You can play in four different modes: 1-on-1 matches, tournaments, minigames, and practice mode.

      -
• 1-on-1 matches: This is the most common mode where you will play against another player online. You can choose from different match locations and stakes. The higher the stakes, the more coins you can win or lose. You can also choose to play with standard or pro rules.
• Tournaments: This is a mode where you will compete with seven other players in a knockout format. You will play three rounds of 1-on-1 matches, and the winner of each round will advance to the next round. The winner of the final round will get a big prize of coins and trophies.
• Minigames: This is a mode where you can play some fun and challenging minigames to earn coins and cash. The minigames include Spin & Win, Scratch & Win, Hi-Lo, and Lucky Shot.
• Practice mode: This is a mode where you can practice your skills without risking any coins or cash. You can play solo or with a friend on the same device.

      Tips and tricks to improve your skills

      -

      Here are some tips and tricks to help you improve your skills and win more games in 8 Ball Pool game app:

      -
• Plan ahead: Before taking a shot, think about where you want to position your cue ball for the next shot. Try to avoid leaving yourself in a difficult situation or giving your opponent an easy shot.
• Aim carefully: Use the guidelines to help you aim your cue accurately. You can also zoom in or out by pinching the screen. Remember that adding spin to your cue ball will affect its trajectory and speed.
• Use power wisely: Don't hit every shot with full power. Sometimes a soft touch is better than a hard smash. Adjust your power according to the distance and angle of your shot.
• Learn from others: Watch how other players play and learn from their mistakes and successes. You can also watch some video tutorials or read some articles online to get some tips from experts.
• Have fun: Don't get too frustrated or angry if you lose a game. Remember that it's just a game and it's supposed to be fun. Enjoy playing with other players from around the world and have a good time.

      How to Challenge Your Friends in 8 Ball Pool Game App

      -

      One of the best features of 8 Ball Pool game app is that you can challenge your friends online and play with them. You can also join or create a tournament, chat and interact with other players, and make new friends. In this section, we will show you how to do all these things and have more fun with your friends.

      -

      How to connect with your friends online

      -

      To play with your friends online, you need to connect your 8 Ball Pool game app account with your Facebook account, Google account, or Miniclip account. You can do this by tapping on the "Friends" icon on the bottom right corner of the screen, and then tapping on the "Connect" button. You will then see a list of your friends who are also playing 8 Ball Pool game app. You can also invite your friends who are not playing yet by tapping on the "Invite" button.

      -

      How to join or create a tournament

      -

      To join or create a tournament, you need to tap on the "Play" icon on the bottom left corner of the screen, and then tap on the "Tournaments" tab. You will then see a list of available tournaments that you can join. Each tournament has a different entry fee, prize pool, and number of players. You can also create your own tournament by tapping on the "Create" button. You can choose the name, rules, location, and stakes of your tournament. You can also invite your friends to join your tournament by tapping on the "Invite" button.

      -

      How to chat and interact with other players

      -

      To chat and interact with other players, you need to tap on the "Chat" icon on the top right corner of the screen. You will then see a list of chat rooms that you can join. Each chat room has a different topic, language, and number of players. You can also create your own chat room by tapping on the "Create" button. You can choose the name, topic, language, and privacy of your chat room. You can also invite your friends to join your chat room by tapping on the "Invite" button.

      -

      How to Customize Your Cue and Pool Table in 8 Ball Pool Game App

      -

      Another great feature of 8 Ball Pool game app is that you can customize your cue and pool table according to your preferences. You can buy and upgrade items in the pool shop, earn and spend pool coins and cash, and use different cues and tables for different effects. In this section, we will show you how to do all these things and make your game more personalized.

      -

      How to earn and spend pool coins and cash

      -

      Pool coins and cash are the currencies of 8 Ball Pool game app. You can use them to buy items in the pool shop, enter tournaments, play minigames, and more. You can earn pool coins and cash by winning matches, completing achievements, spinning the wheel, scratching cards, playing minigames, watching videos, or buying them with real money.

      -

      How to buy and upgrade items in the pool shop

      -

      To buy and upgrade items in the pool shop, you need to tap on the "Shop" icon on the top left corner of the screen. You will then see a list of categories that you can browse: cues, tables, chat packs, avatars, deals, coins & cash. Each category has different items that you can buy or upgrade with pool coins or cash. Each item has different stats and effects that will affect your game performance.

      -

      How to use different cues and tables for different effects

      -

      To use different cues and tables for different effects, you need to tap on the "Cue" icon on the bottom right corner of the screen before starting a match. You will then see a list of cues and tables that you own or can buy. Each cue and table has different stats and effects that will affect your game performance. For example:

| Cue/Table | Stat/Effect |
|---|---|
| The Standard Cue | The default cue that has no special effects. |
| The Beginner Cue | A cue that gives you extra time to aim. |
| The Lucky 8 Cue | A cue that gives you a chance to win a free spin after every win. |
| The Galaxy Cue | A cue that gives you extra power and spin. |
| The Wooden Table | The default table that has no special effects. |
| The Ice Table | |

8 Ball Pool game app also features seasonal events and challenges that you can join for special rewards. For example:

• The Halloween Event: This is an event that is held every October and celebrates the spooky season. It features a haunted match location, a scary cue, a pumpkin table, and a ghost chat pack.
• The Christmas Event: This is an event that is held every December and celebrates the festive season. It features a snowy match location, a candy cane cue, a gingerbread table, and a Santa chat pack.
• The Lunar New Year Event: This is an event that is held every February and celebrates the Chinese culture. It features a red match location, a dragon cue, a lantern table, and a fortune cookie chat pack.
• The 8 Ball Challenge: This is a challenge that is held every week and tests your skills in 8 Ball Pool game app. It features a series of tasks that you need to complete, such as winning a certain number of matches, potting a certain number of balls, etc. It rewards you with coins, cash, and boxes.

      You can check the details and rewards for each seasonal event and challenge by tapping on the "Events" icon on the top right corner of the screen. You can also see the countdown and progress for each event and challenge on the same screen.

      -

      Conclusion

      -

      8 Ball Pool game app is an amazing game that lets you play pool games online with your friends and other players from around the world. You can download it for free on your Android or iOS device, or play it on your web browser. You can customize your cue and pool table, join tournaments, chat with other players, and earn coins and cash to buy new items. You can also join the 8 Ball level system and rank up, access exclusive match locations and rewards, and participate in seasonal events and challenges.

      -

      What are you waiting for? Download 8 Ball Pool game app now and start playing! You will have hours of fun and entertainment, as well as improve your intelligence and skills. 8 Ball Pool game app is more than just a game, it's a lifestyle!

      -

      FAQs

      -

      Here are some frequently asked questions about 8 Ball Pool game app:

      -
1. Q: How can I get free coins and cash in 8 Ball Pool game app?
   A: You can get free coins and cash by spinning the wheel, scratching cards, playing minigames, watching videos, completing achievements, or inviting your friends to play.

2. Q: How can I report or block a player who is cheating or being rude in 8 Ball Pool game app?
   A: You can report or block a player by tapping on their name or avatar during or after a match, and then tapping on the "Report" or "Block" button.

3. Q: How can I change my name or avatar in 8 Ball Pool game app?
   A: You can change your name or avatar by tapping on the gear icon on the top right corner of the screen, and then tapping on the "Edit Profile" button.

4. Q: How can I contact the support team of 8 Ball Pool game app?
   A: You can contact the support team by tapping on the gear icon on the top right corner of the screen, and then tapping on the "Help & Support" button.

5. Q: How can I update 8 Ball Pool game app to get the latest features and improvements?
   A: You can update 8 Ball Pool game app by going to the Google Play Store or App Store on your device, and then tapping on the "Update" button.

      \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Little Krishna Malayalam Cartoon and Watch the Amazing Stories of the Lord of Love.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Little Krishna Malayalam Cartoon and Watch the Amazing Stories of the Lord of Love.md deleted file mode 100644 index 63dedc33e59b7dc7be386078d1a98cc1266b5206..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Little Krishna Malayalam Cartoon and Watch the Amazing Stories of the Lord of Love.md +++ /dev/null @@ -1,141 +0,0 @@ - -

      Little Krishna Malayalam Cartoon Free Download: How to Watch and Enjoy This Popular Animated Series

      -

      If you are looking for a fun and educational way to entertain your kids (or yourself), you might want to check out Little Krishna Malayalam cartoon free download. This is a popular animated series that tells the stories of Lord Krishna's childhood adventures in Vrindavan. The series is available in several languages, including Malayalam, which is spoken mainly in Kerala. In this article, you will learn more about this amazing cartoon series, why you should watch it in Malayalam, and how you can download it for free.

      -

      What is Little Krishna Malayalam Cartoon?

      -

      Little Krishna is a 3D animated series that was produced by Reliance Animation and Big Animation in collaboration with The Indian Heritage Foundation. The series was first aired on Nickelodeon India in 2009 and later on other channels around the world. The series has 13 episodes that depict various episodes from Lord Krishna's life as a child. The series is based on ancient scriptures such as Bhagavata Purana and Harivamsa.

      -

      little krishna malayalam cartoon free download


      Download File ✶✶✶ https://bltlly.com/2uOldK



      -

      The main character of the series is Little Krishna, who is portrayed as a playful, mischievous, and compassionate boy who loves his friends and family. He also has extraordinary powers that he uses to protect Vrindavan from evil forces such as Kamsa, Putana, Aghasura, Bakasura, Kaliya, etc. The other characters include Balarama (Krishna's elder brother), Radha (Krishna's beloved), Yashoda (Krishna's foster mother), Nanda (Krishna's foster father), Rohini (Balarama's mother), Kirti (Radha's mother), Vrishabhanu (Radha's father), Nandini (Krishna's cow), and many more. The series also features beautiful songs, music, and dialogues that enhance the appeal and charm of the stories.

      -

      The series has received many accolades and awards, such as the Best Indian Animated TV Series Award at FICCI Frames 2010, the Best Animated TV Series Award at the Asian Television Awards 2010, and the Best Animated TV Series Award at the Indian Telly Awards 2010. The series has also been praised by critics and viewers for its high-quality animation, captivating storytelling, and cultural authenticity. The series has a rating of 8.6 out of 10 on IMDb and 4.8 out of 5 on Google Play Store.

      -

      Here are some examples of the Little Krishna Malayalam cartoon episodes that you can watch online:

      - -

      Why Watch Little Krishna Malayalam Cartoon?

      -

      Watching Little Krishna Malayalam cartoon is not only entertaining but also beneficial for you and your kids. Here are some of the reasons why you should watch this series in Malayalam:

      -

      Improving language skills and vocabulary

      -

      Watching Little Krishna Malayalam cartoon can help you improve your language skills and vocabulary in Malayalam. You can learn new words, phrases, expressions, and idioms that are used in everyday conversations in Kerala. You can also practice your listening, speaking, reading, and writing skills by following the subtitles, dialogues, songs, and stories in the series. You can also compare and contrast the differences and similarities between Malayalam and other languages that you know or want to learn.

      -

      Appreciating the rich culture and heritage of Kerala

      -

      Watching Little Krishna Malayalam cartoon can help you appreciate the rich culture and heritage of Kerala. You can learn about the history, geography, art, architecture, cuisine, festivals, traditions, customs, values, beliefs, and practices of Kerala. You can also explore the diversity and unity of Kerala's people, communities, religions, languages, and regions. You can also admire the beauty and splendor of Kerala's natural scenery, wildlife, flora, and fauna.

      -

      Learning about the stories and teachings of Lord Krishna and Hinduism

      -

      Watching Little Krishna Malayalam cartoon can help you learn about the stories and teachings of Lord Krishna and Hinduism. You can discover the various aspects of Lord Krishna's personality, such as his love, compassion, wisdom, courage, power, humor, mischief, etc. You can also understand the meaning and significance of his actions, deeds, miracles, parables, and lessons. You can also gain insight into the core concepts and principles of Hinduism, such as karma, dharma, bhakti, moksha, etc. You can also appreciate the diversity and harmony of Hinduism's sects, scriptures, gods, goddesses, saints, sages, etc.

      -

      Enjoying the humor, music, and animation of the series

      -

      Watching Little Krishna Malayalam cartoon can help you enjoy the humor, music, and animation of the series. You can laugh at the funny and witty situations, dialogues, and expressions that Little Krishna and his friends encounter. You can also sing along to the catchy and melodious songs that are composed and sung by talented artists. You can also marvel at the stunning and realistic animation that brings the characters and scenes to life.

      -

      -

      How to Download Little Krishna Malayalam Cartoon for Free?

      -

      Now that you know why you should watch Little Krishna Malayalam cartoon, you might be wondering how you can download it for free. Well, before you do that, you should be aware of some legal and ethical issues that come with downloading copyrighted content for free. You should respect the rights and efforts of the creators and producers of the series and avoid engaging in piracy or infringement. You should also be careful of the risks and dangers of downloading from untrusted or illegal sources that might contain viruses, malware, or spyware.

      -

      Instead of downloading, you can consider some alternatives that are more safe and legal, such as streaming, renting, or buying the series online or offline. You can find many websites or platforms that offer the series in Malayalam for a reasonable price or even for free with ads or subscriptions. You can also buy or rent DVDs or CDs of the series from local stores or online shops.

      -

      However, if you still want to download Little Krishna Malayalam cartoon for free, you should follow some tips and tricks to find and access the best sources for watching or downloading the series in Malayalam. Here are some of them:

      -
• Use a VPN (Virtual Private Network) service to hide your IP address and location and bypass any geo-restrictions or censorship that might prevent you from accessing certain websites or content.
• Use a reliable and updated antivirus software to scan and protect your device from any potential threats or infections that might come from downloading files.
• Use a reputable and secure browser that has features such as ad-blocker, pop-up blocker, incognito mode, etc. to avoid any unwanted or malicious ads or pop-ups that might redirect you to harmful sites or download unwanted programs.
• Use a trusted and popular search engine such as Bing to find and filter the best results for your query. You can use keywords such as "Little Krishna Malayalam cartoon free download", "Little Krishna Malayalam cartoon download link", "Little Krishna Malayalam cartoon torrent", etc. You can also use advanced search operators such as quotation marks, minus sign, site:, etc. to refine your search.
• Use a quality and fast downloader software or app that can download files from various sources and formats without any errors or interruptions. You can also use a converter software or app that can change the file format to suit your device or preference.

      Here are some examples of websites or platforms that offer Little Krishna Malayalam cartoon free download:

      -
| Name | Description | Link |
|---|---|---|
| Hotstar | A popular streaming service that offers various movies, shows, sports, news, etc. in different languages. You can watch Little Krishna Malayalam cartoon online for free with ads or with a subscription plan. | https://www.hotstar.com/in/tv/little-krishna/1260012894 |
| YouTube | A popular video-sharing platform that offers various videos in different categories and languages. You can watch Little Krishna Malayalam cartoon online for free with ads or with a premium plan. You can also download some videos using YouTube's offline feature or using third-party tools. | https://www.youtube.com/playlist?list=PLB973DBD69DB69061 |
| Torrentz2 | A popular torrent search engine that indexes various torrent files from different sources. You can download Little Krishna Malayalam cartoon using a torrent client such as BitTorrent or uTorrent. | https://torrentz2.eu/search?f=Little+Krishna+Malayalam+cartoon |
      -

      Conclusion

      -

      In conclusion, Little Krishna Malayalam cartoon is a wonderful animated series that you and your kids will love watching. It has a captivating storyline, beautiful animation, catchy music, and valuable lessons. It also has the added benefit of helping you learn and appreciate Malayalam language and culture. You can watch or download Little Krishna Malayalam cartoon for free from various sources, but you should be careful of the legal and ethical issues and the potential risks involved. We hope this article has given you some useful information and tips on how to watch and enjoy this popular animated series. If you have any questions or comments, please feel free to contact us or leave a comment below. Thank you for reading and happy watching!

      -

      FAQs

      -

      Here are some frequently asked questions about Little Krishna Malayalam cartoon free download:

      -

      How many episodes are there in Little Krishna Malayalam cartoon?

      -

      There are 13 episodes in Little Krishna Malayalam cartoon, each lasting about 22 minutes. The episodes are:

      -
1. Attack Of Serpent King
2. The Terrible Storm
3. The Horror Cave
4. Enchanted Picnic
5. Fire And Fury
6. Demon In Disguise
7. Deadly Donkey
8. Challenge Of The Brute
9. The Mystery Of The Vanishing Sheep
10. The Vicious Whirlwind
11. The Lethal Bird
12. The Divine Ploy
13. The Wondrous Feats

      Where can I watch Little Krishna Malayalam cartoon online?

      -

      You can watch Little Krishna Malayalam cartoon online from various websites or platforms that offer streaming services, such as Hotstar, YouTube, JioCinema, etc. You can also buy or rent the series from online shops such as Amazon, Flipkart, etc.

      -

      Is Little Krishna Malayalam cartoon suitable for all ages?

      -

      Yes, Little Krishna Malayalam cartoon is suitable for all ages. It is a family-friendly series that has no violence, nudity, profanity, or inappropriate content. It is a wholesome and educational series that teaches moral values, cultural awareness, and spiritual wisdom.

      -

      What are some other popular Malayalam cartoons or shows for kids?

      -

      Some other popular Malayalam cartoons or shows for kids are:

      -
• Bal Ganesh
• Bal Hanuman
• Bal Ramayanam
• Bal Gopal Kare Dhamaal
• Kochu TV
• Kids Planet
• Mazhavil Manorama Kids
• Kerala Vision Kids

      How can I learn more about Kerala culture and history?

      -

      You can learn more about Kerala culture and history by reading books, articles, blogs, magazines, etc. that cover various topics related to Kerala. You can also watch documentaries, movies, shows, etc. that depict Kerala's culture and history. You can also visit Kerala's museums, monuments, temples, churches, mosques, etc. that showcase Kerala's heritage and legacy. You can also interact with Kerala's people, communities, groups, etc. that share their knowledge and experience of Kerala.

\ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/.md b/spaces/tioseFevbu/cartoon-converter/.md deleted file mode 100644 index 3966fa5a570e27e1cb06ab3ba231065e577de07c..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/.md +++ /dev/null @@ -1,53 +0,0 @@
## Ballet Beautiful Dvd Download Free

**Click Here - [https://vercupalo.blogspot.com/?d=2tvYqy](https://vercupalo.blogspot.com/?d=2tvYqy)**

# How to Get Ballet Beautiful DVD for Free

If you are looking for a way to get Ballet Beautiful DVD for free, you might be disappointed to know that there is no legal or safe way to do so. Ballet Beautiful is a series of online streaming videos and digital DVDs created by professional ballerina and celebrity trainer Mary Helen Bowers. These videos are designed to help you sculpt and tone your body with ballet-inspired exercises that are suitable for all levels.

Ballet Beautiful DVDs are not available for free download on any website. The only way to access them is to purchase them from the official Ballet Beautiful website or from authorized retailers. If you find any website that claims to offer Ballet Beautiful DVD for free download, you should avoid it, as it might contain viruses, malware, or other harmful content that could damage your device or compromise your personal information.

However, there is a way to enjoy Ballet Beautiful videos without breaking the bank. You can subscribe to the Ballet Beautiful Custom Workout, which gives you unlimited access to the full streaming library of over 475 videos and digital DVDs, plus 2 new workouts every month. You can choose from different plans starting from $39.99 per month, or save more with annual or lifetime subscriptions. You can also try a 7-day free trial before committing to any plan.

With the Ballet Beautiful Custom Workout, you can stream the videos on any device with an internet connection, such as your computer, tablet, or smartphone. You can also download up to 10 videos at a time for offline viewing. You can customize your own workout schedule based on your goals, preferences, and availability. You can also join live classes with Ballet Beautiful Master Trainers around the world for extra motivation and guidance.

Ballet Beautiful is a great way to improve your fitness, posture, flexibility, and confidence with graceful and elegant movements. Whether you want to lose weight, tone up, or just have fun, Ballet Beautiful can help you achieve your desired results. Don't miss this opportunity to train like a ballerina anytime, anywhere. Visit [balletbeautiful.com](https://www.balletbeautiful.com/) today and start your free trial!

Ballet Beautiful is not only a great workout, but also a great way to improve your physical, mental, and emotional well-being. Ballet Beautiful can help you:

- Strengthen your muscles, bones, and joints: Ballet Beautiful exercises target all the major muscle groups in your body, especially your core, legs, and arms. You will also improve your bone density and joint mobility by performing weight-bearing movements and stretches.[^3^] [^4^]

- Enhance your flexibility and posture: Ballet Beautiful helps you elongate and align your spine, open your chest and shoulders, and lengthen your limbs. You will also increase your range of motion and prevent stiffness and injuries by stretching your muscles and tendons.[^3^] [^4^]

- Boost your mood and confidence: Ballet Beautiful releases endorphins, the feel-good hormones that reduce stress and anxiety. You will also feel more confident and graceful as you master new skills and express yourself through movement.[^3^] [^5^]

- Develop a love for the arts: Ballet Beautiful is inspired by the beauty and elegance of classical ballet. You will learn the basic ballet terminology, positions, and steps, as well as appreciate the music and history of this art form.[^5^]

Ballet Beautiful is suitable for anyone who wants to experience the benefits of ballet training without having to enroll in a formal ballet class. You don't need any prior dance experience or special equipment to join Ballet Beautiful. All you need is a mat, a pair of light weights (optional), and a positive attitude.

\ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Coldplay A Head Full Of Dreams [2015] [mp3320kbps] Epub ((TOP)).md deleted file mode 100644 index 8b2174509b615a56fd02418f859fe38cdc5c3a51..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Coldplay A Head Full Of Dreams [2015] [mp3320kbps] Epub ((TOP)).md +++ /dev/null @@ -1,13 +0,0 @@ -
      -

      Coldplay's A Head Full of Dreams: A Pop Rock Masterpiece

      -

      Coldplay is one of the most popular and influential rock bands of the 21st century, with over 100 million albums sold worldwide. Their seventh studio album, A Head Full of Dreams, released in 2015, is a celebration of life, love and music. The album features collaborations with Beyoncé, Noel Gallagher, Tove Lo and more, and showcases the band's diverse and colorful sound. From the upbeat and catchy "Adventure of a Lifetime" to the emotional and poignant "Everglow", A Head Full of Dreams is a musical journey that will inspire and uplift you.

      -

      coldplay a head full of dreams [2015] [mp3|320kbps] epub


      Downloadhttps://urlcod.com/2uHvUi



      -

      If you are a fan of Coldplay, or just love good music, you will want to get your hands on this amazing album. You can download it in high-quality mp3 format (320kbps) or stream it online from various platforms. But if you want to dive deeper into the lyrics, themes and stories behind each song, you will also want to get the epub version of the album's songbook. This digital book contains piano/vocal/guitar arrangements of all 12 tracks from the album, as well as notes and commentary from the band members themselves. You will learn how each song was written, recorded and produced, and what they mean to Coldplay and their fans.

      -

      A Head Full of Dreams is more than just an album. It is a testament to Coldplay's artistic vision and musical evolution. It is a head full of dreams that they have shared with the world. Don't miss this opportunity to experience it for yourself.

      - -

      One of the highlights of A Head Full of Dreams is the collaboration with Beyoncé on the track "Hymn for the Weekend". The song is a vibrant and uplifting anthem that celebrates the joy of being alive. Beyoncé's vocals add a soulful and powerful touch to the song, while Coldplay's Chris Martin sings about feeling drunk and high on love. The song also features a sample of a Bollywood song, "Chandralekha", from the 1998 film A Gentleman. The song was inspired by Coldplay's visit to India, where they filmed the colorful and festive music video for the song.

      -

      -

      Another standout track from the album is "Up&Up", which serves as the closing song and the third single. The song is a hopeful and optimistic ballad that encourages listeners to keep their heads up and look for the positive things in life. The song features a guitar solo by Noel Gallagher, the former lead guitarist of Oasis, who is a friend and mentor of Coldplay. The song also features vocals by Merry Clayton, a legendary singer who has worked with artists like The Rolling Stones, Ray Charles and Carole King. The song's music video is a surreal and stunning montage of images that defy logic and gravity, such as giant turtles swimming in the subway, volcanoes erupting popcorn and Chris Martin flying in a car.

      -

      A Head Full of Dreams is not only a musical masterpiece, but also a visual one. The album's cover art features a colorful collage of images that represent different aspects of the band's history and influences. The cover also features a geometric shape called the Flower of Life, which is a symbol of harmony and creation. The band also designed a new logo for the album, which consists of two interlocking C's that form an infinity sign. The logo represents the band's connection and friendship, as well as their infinite potential and creativity.

      81aa517590
      -
      -
      \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/IGM 25000 Calabria.ecw.md b/spaces/tioseFevbu/cartoon-converter/scripts/IGM 25000 Calabria.ecw.md deleted file mode 100644 index 4033cccc6d15100add6a6e7250c17c38b40abf4d..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/IGM 25000 Calabria.ecw.md +++ /dev/null @@ -1,17 +0,0 @@ - -

      IGM 25000 Calabria.ecw: A High-Resolution Topographic Map of Calabria

      -

Calabria is a region in southern Italy that is known for its natural beauty, rich history and cultural diversity. It is also one of the regions that falls within the WGS84-UTM33 zone (UTM zone 33 on the WGS84 datum), a coordinate reference system that covers the band of longitudes between 12°E and 18°E.

      -

      One of the ways to explore and appreciate the geography of Calabria is to use a topographic map, which is a map that shows the shape and elevation of the land surface. A topographic map can help you identify mountains, valleys, rivers, lakes, roads, towns and other features of the landscape.

      -

      IGM 25000 Calabria.ecw


      Download Zip ––– https://urlcod.com/2uHvh4



      -

      One of the most detailed and accurate topographic maps of Calabria is the IGM 25000 Calabria.ecw, which is part of the Cartografia di base IGM 25.000 - Regioni zona WGS84-UTM33 dataset. This dataset contains topographic maps at a scale of 1:25.000 for all the regions in the WGS84-UTM33 zone, including Calabria.

      -

The IGM 25000 Calabria.ecw is a raster file with a resolution of 0.5 meters per pixel, which means that each pixel represents a 0.5 m × 0.5 m square on the ground. The file format is ECW (Enhanced Compression Wavelet), which is a compressed format that reduces the file size without losing much quality.

      -

      The IGM 25000 Calabria.ecw can be viewed online on the Geoportale Nazionale[^2^], which is a web portal that provides access to various geospatial data and services from the Italian Ministry of Environment and other sources. You can also download the file from the Geoportale Nazionale[^1^] or from other websites that host it[^3^].

      -

      The IGM 25000 Calabria.ecw is a valuable resource for anyone who wants to study, visit or enjoy the region of Calabria. It can help you plan your trips, find interesting places, learn about the natural and cultural heritage, or simply admire the beauty of the land.

      - -

      If you want to use the IGM 25000 Calabria.ecw file in a GIS software, such as QGIS or ArcGIS, you need to make sure that the software can read the ECW format. Some software may require a plugin or a license to do so. You also need to set the projection of the file to WGS84-UTM33, which has the EPSG code 32633.
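As an illustration, here is a minimal sketch using the GDAL Python bindings (assuming a GDAL build that includes the proprietary ECW driver; the variable names are just for this example) that opens the raster and checks its coordinate reference system against EPSG:32633:

```python
# Minimal sketch: open the ECW raster and verify its CRS with GDAL.
# Assumes GDAL was built with the ECW driver (ERDAS ECW/JP2 SDK).
from osgeo import gdal, osr

ds = gdal.Open("IGM 25000 Calabria.ecw")
if ds is None:
    raise RuntimeError("GDAL could not open the file; the ECW driver may be missing")

file_srs = osr.SpatialReference(wkt=ds.GetProjection())  # CRS stored in the file

expected = osr.SpatialReference()
expected.ImportFromEPSG(32633)  # WGS84 / UTM zone 33N

print("Raster size:", ds.RasterXSize, "x", ds.RasterYSize)
print("CRS matches EPSG:32633:", bool(file_srs.IsSame(expected)))
```

A GDAL build without the ECW driver will simply fail to open the file, which is the most common stumbling block with this format.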

      -

      Once you have loaded the file in your GIS software, you can overlay other layers of data on top of it, such as administrative boundaries, land use, population, hydrography, etc. You can also perform spatial analysis, such as measuring distances, calculating slopes, creating contours, etc.

      -

      -

      Another way to use the IGM 25000 Calabria.ecw file is to convert it to other formats, such as GeoTIFF or JPEG, which may be more compatible with other applications or devices. You can use online tools or software to do the conversion, but be aware that some quality or information may be lost in the process.
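For example, here is a minimal conversion sketch with the same GDAL Python bindings (again assuming ECW read support; the output file name is arbitrary):

```python
# Minimal sketch: convert the ECW raster to GeoTIFF with GDAL.
from osgeo import gdal

src = gdal.Open("IGM 25000 Calabria.ecw")
if src is None:
    raise RuntimeError("GDAL could not open the file; the ECW driver may be missing")

# GeoTIFF keeps the georeferencing (EPSG:32633) of the source,
# whereas an export to JPEG would drop it and recompress the pixels.
gdal.Translate("IGM_25000_Calabria.tif", src, format="GTiff")
src = None  # close the dataset
```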

      -

      The IGM 25000 Calabria.ecw file is not the only source of topographic maps for Italy. There are other datasets that cover different scales, areas and periods. For example, you can find historical topographic maps from the 19th and 20th centuries on the Istituto Geografico Militare website. You can also find more recent topographic maps from the Corine Land Cover project, which provides land cover information for Europe at a scale of 1:100.000.

      \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Kingsoft Pc Doctor 3 7 0 47 Portable.md b/spaces/tioseFevbu/cartoon-converter/scripts/Kingsoft Pc Doctor 3 7 0 47 Portable.md deleted file mode 100644 index f2f3417e93f9e247b13a682fdee6998f6b2bcd12..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Kingsoft Pc Doctor 3 7 0 47 Portable.md +++ /dev/null @@ -1,24 +0,0 @@ - -

      How to Optimize Your PC with Kingsoft PC Doctor 3 7 0 47 Portable

      -

      If you are looking for a free and easy-to-use tool to diagnose and optimize your Windows system, you might want to try Kingsoft PC Doctor 3 7 0 47 Portable. This is a lightweight and powerful software that can help you clean up your privacy, fix registry errors, and boost your PC performance.

      -

      Kingsoft Pc Doctor 3 7 0 47 Portable


      Download Ziphttps://urlcod.com/2uHwjG



      -

      Kingsoft PC Doctor 3 7 0 47 Portable is a portable version of Kingsoft PC Doctor, which means you can run it from a USB flash drive or any other removable device without installing it on your computer. This makes it convenient to use on different PCs or to carry with you wherever you go.

      -

      Kingsoft PC Doctor 3 7 0 47 Portable has four main features: Privacy Cleaner, Registry Cleaner, Windows Optimizer, and Computer Health Diagnosis. Let's take a look at each of them and see how they can improve your PC.

      -

      Privacy Cleaner

      -

      Privacy Cleaner is a feature that can help you erase your online and offline traces, such as browsing history, cookies, cache, passwords, form data, recent documents, recycle bin, and more. This can protect your personal information from being leaked or stolen by hackers or malicious programs. It can also free up some disk space and improve your browsing speed.

      -

      To use Privacy Cleaner, you just need to select the items you want to delete and click on the "Clean" button. You can also customize the cleaning options by clicking on the "Settings" button. For example, you can choose to delete files securely by overwriting them with random data.

      -

      Registry Cleaner

      -

      Registry Cleaner is a feature that can help you scan and fix registry errors that may cause system instability, slow performance, or crashes. The registry is a database that stores various settings and options for Windows and other programs. Over time, the registry may become cluttered with invalid or obsolete entries that can affect your system negatively.

      -

      To use Registry Cleaner, you just need to click on the "Scan" button and wait for the results. You can then review the errors and choose to fix them all or selectively. You can also backup your registry before fixing it by clicking on the "Backup" button. This way, you can restore it in case something goes wrong.

      -

      Windows Optimizer

      -

      Windows Optimizer is a feature that can help you tweak and optimize various aspects of your Windows system, such as startup items, services, network settings, memory usage, and more. This can enhance your system performance, security, and stability.

      -

      -

      To use Windows Optimizer, you just need to select the items you want to optimize and click on the "Optimize" button. You can also view the details of each item by clicking on the "View" button. For example, you can see which programs are running at startup and disable the ones you don't need.

      -

      Computer Health Diagnosis

      -

      Computer Health Diagnosis is a feature that can help you check and evaluate your computer's working status and give you professional suggestions to optimize it. It can analyze various factors that may affect your computer's health, such as CPU usage, disk space, memory usage, security status, system errors, and more.

      -

      To use Computer Health Diagnosis, you just need to click on the "Diagnose" button and wait for the report. You can then view the details of each factor by clicking on the "View" button. For example, you can see how much disk space is used by different types of files and delete the ones you don't need.

      -

      Conclusion

      -

      Kingsoft PC Doctor 3 7 0 47 Portable is a handy and effective tool that can help you diagnose and optimize your Windows system with ease. It has four main features that can clean up your privacy, fix registry errors, tweak Windows settings, and evaluate computer health. It is also portable and free to use.

      -

If you want to try Kingsoft PC Doctor 3 7 0 47 Portable yourself, you can download it from here.

      \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Lumerical Fdtd Solutions Crack UPD.md b/spaces/tioseFevbu/cartoon-converter/scripts/Lumerical Fdtd Solutions Crack UPD.md deleted file mode 100644 index 89793fdd0952bca49bfad9ca1b99994cfe7c7043..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Lumerical Fdtd Solutions Crack UPD.md +++ /dev/null @@ -1,78 +0,0 @@ - -

      Lumerical FDTD Solutions Crack: What Is It and Why You Should Avoid It

      - - -

      If you are interested in nanophotonic simulations, you might have heard of Lumerical FDTD Solutions. It is a software package that uses the finite-difference time-domain (FDTD) method to solve Maxwell's equations in 3D or 2D domains. It can model a wide range of nanophotonic devices, processes, and materials with high accuracy and efficiency.
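For a flavor of what the method actually computes, here is a textbook sketch of the FDTD update scheme in one dimension, where the electric and magnetic fields are leapfrogged in time on a staggered (Yee) grid. This is a generic simplification for illustration, not Lumerical's actual implementation:

$$E_x^{n+1}(k) = E_x^{n}(k) - \frac{\Delta t}{\varepsilon\,\Delta z}\left[H_y^{n+1/2}\!\left(k+\tfrac{1}{2}\right) - H_y^{n+1/2}\!\left(k-\tfrac{1}{2}\right)\right]$$

$$H_y^{n+1/2}\!\left(k+\tfrac{1}{2}\right) = H_y^{n-1/2}\!\left(k+\tfrac{1}{2}\right) - \frac{\Delta t}{\mu\,\Delta z}\left[E_x^{n}(k+1) - E_x^{n}(k)\right]$$

Here $k$ indexes the spatial grid, $n$ the time step, and $\Delta t$ and $\Delta z$ are the time and space steps; alternating these two updates marches Maxwell's equations forward in time.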

      -

Lumerical FDTD Solutions is a powerful tool for researchers and engineers who want to design and optimize nanophotonic devices.

      However, Lumerical FDTD Solutions is not a cheap software. It requires a license fee and a subscription fee to use. Some people might be tempted to use a crack software instead of paying for the genuine one. A crack software is a modified version of the original software that bypasses the security features and allows the user to use it without paying for it. It is usually downloaded from torrent sites or other shady sources.

      -

      Lumerical Fdtd Solutions Crack


      Download ->>> https://urlcod.com/2uHx16



      -

      But using a crack software is not a smart move. It is illegal and risky to use. In this article, we will explain why you should avoid using Lumerical FDTD Solutions crack and how you can get the genuine software at a lower cost. We will also provide some tips and resources on how to learn and use Lumerical FDTD Solutions effectively.

      -

      How to Download Lumerical FDTD Solutions Crack

      -

      If you are determined to use Lumerical FDTD Solutions crack, you will have to find a way to download it from the internet. There are many websites that claim to offer free downloads of Lumerical FDTD Solutions crack, but most of them are fake or malicious. They might contain viruses, spyware, ransomware, or other malware that can infect your PC and compromise your data and privacy.

      -

      Even if you manage to find a working Lumerical FDTD Solutions crack, you will still have to deal with the risks involved. For example, you might face legal consequences if you are caught using pirated software. According to the Software & Information Industry Association (SIIA), software piracy can result in civil and criminal penalties, including fines up to $150,000 per infringement and imprisonment up to five years. You might also face lawsuits from the software developers or publishers who own the intellectual property rights of the software.

      -

      Another risk of using Lumerical FDTD Solutions crack is that you will not get the best performance and quality from the software. The crack software might be outdated, buggy, incomplete, or incompatible with your system. It might not have all the features and functions of the genuine software, or it might have errors and glitches that affect the results of your simulations. You might also miss out on the updates and patches that the software developers release regularly to fix bugs, improve functionality, and enhance security.

      -

      How to Install Lumerical FDTD Solutions Crack

      -

      If you have downloaded Lumerical FDTD Solutions crack, you will have to install it on your PC. This might not be an easy task, as you will have to follow some complicated steps and instructions. You might also have to use some tools or codes to activate the crack software and bypass the license verification process.

      -

      However, installing Lumerical FDTD Solutions crack might not guarantee that you will be able to use it without any problems. You might encounter some challenges, such as compatibility issues, activation errors, license expiration, and update problems. For example, you might find that the crack software does not work well with your operating system, hardware, or other software. You might also face errors or warnings that tell you that the crack software is not activated or licensed properly. You might also lose access to the crack software after a certain period of time or after an update.

      -

      How to Use Lumerical FDTD Solutions Crack

      -

      If you have installed Lumerical FDTD Solutions crack successfully, you might think that you can use it as if it were the genuine software. You might try to access the features and functions of the software and run your simulations as usual. However, you might soon realize that using Lumerical FDTD Solutions crack is not as easy or satisfying as using the genuine software.

      -

      -

      One of the limitations of using Lumerical FDTD Solutions crack is that you will not get any support from the software developers or publishers. If you have any questions, issues, or feedback about the software, you will not be able to contact them or get any help from them. You will also not be able to access their online help, FAQs, manuals, contact forms, phone numbers, or email addresses.

      -

      Another limitation of using Lumerical FDTD Solutions crack is that you will not get any documentation, tutorials, or community support from the software users. If you want to learn how to use the software effectively, you will not be able to find any guides, videos, webinars, white papers, or forums that can teach you or assist you. You will also not be able to interact with other users who can share their experiences, tips, tricks, or best practices with you.

      -

      How to Uninstall Lumerical FDTD Solutions Crack

      -

      If you have used a Lumerical FDTD Solutions crack for a while and realized that it is not worth it, you might want to uninstall it from your PC. This might seem like a simple task, but it is not as easy as you think. You have to follow certain steps and instructions to remove the cracked software completely, and you might also have to use tools or codes to deactivate it and restore your system's original settings.

      -

      However, uninstalling a Lumerical FDTD Solutions crack does not guarantee that you will get rid of it completely. You might be left with leftover files, registry entries, and traces of malware. These remnants can still degrade your PC's performance, stability, and security, and they can interfere with your other software or cause conflicts and errors.

      -

      The Disadvantages of Using Lumerical FDTD Solutions Crack

      -

      By now, you should have a clear idea of why using a Lumerical FDTD Solutions crack is a bad idea. Its disadvantages outweigh any perceived advantages. Here are the main ones:

      -
        -
      • Ethical issues: Using cracked software is unethical and unfair. It violates the intellectual property rights of the developers and publishers who invested time, money, and effort to create and maintain the software, and it harms the software industry and the wider economy by cutting the revenue that funds innovation and quality.
      • -
      • Legal issues: Using cracked software is illegal and punishable. It violates the laws and regulations that protect software from piracy and infringement, and it exposes you to the risk of being sued or prosecuted by the software owners or authorities, who can detect and track illegal use.
      • -
      • Technical issues: Using cracked software is unreliable and problematic. It does not match the performance, quality, features, or functions of the genuine software; it may contain errors and glitches that corrupt your simulation results; and it lacks the updates and patches that fix bugs, improve functionality, and enhance security.
      • -
      • Security issues: Using cracked software is dangerous and risky. It can infect your PC with malware that damages your system, steals your data, or extorts money from you, and it can compromise your privacy by exposing your personal or professional information to hackers or third parties.
      • -
      -

      The Advantages of Using Genuine Lumerical FDTD Solutions

      -

      On the other hand, using genuine Lumerical FDTD Solutions has many advantages that make it worth paying for. Here are some of the main advantages of using genuine Lumerical FDTD Solutions:

      -
        -
      • Reliability: Genuine Lumerical FDTD Solutions is up to date, bug-free, complete, and compatible with your system, so you can run your simulations with confidence and accuracy.
      • -
      • Quality: You get the best results and outcomes from the software: you can design and optimize nanophotonic devices, processes, and materials with high accuracy and efficiency, using the state-of-the-art FDTD method and algorithms.
      • -
      • Functionality: You get access to all the capabilities and options the software offers, such as advanced modeling, analysis, optimization, visualization, and integration tools.
      • -
      • Safety: You get the best security and protection: you can use the software without worrying about malware infection, data theft, legal consequences, or poor performance.
      • -
      -

      How to Get Genuine Lumerical FDTD Solutions at a Lower Cost

      -

      If you are convinced that genuine Lumerical FDTD Solutions is better than cracked software, you might wonder how to get it at a lower cost. After all, Lumerical FDTD Solutions is not cheap, and you may not have the budget or resources to afford it at list price. Fortunately, there are several ways to get the genuine software at a lower cost, such as:

      -
        -
      • Using free trials: One of the easiest ways to get genuine Lumerical FDTD Solutions at a lower cost is to use free trials. Lumerical offers a 30-day free trial for its FDTD Solutions software, which allows you to test its features and functions before buying it. You can download the free trial from their official website here.
      • -
      • Using discounts: Another way to get genuine Lumerical FDTD Solutions at a lower cost is to use discounts. Lumerical offers various discounts for its FDTD Solutions software, such as volume discounts, academic discounts, and loyalty discounts. You can check the eligibility and availability of these discounts on their official website here or contact their sales team here.

      • -
      • Using coupons: Another way to get genuine Lumerical FDTD Solutions at a lower cost is to use coupons. Lumerical sometimes offers coupons or promo codes for its FDTD Solutions software, which can reduce the price or provide other benefits. You can find these coupons or promo codes on platforms such as CouponChief, RetailMeNot, or CouponBirds.
      • -
      • Using bundles: Another way to get genuine Lumerical FDTD Solutions at a lower cost is to use bundles. Lumerical offers bundles or packages for its FDTD Solutions software, which can include other software products or services that complement or enhance its functionality. For example, you can get the FDTD Solutions + MODE Solutions bundle, which combines two software products that can simulate both passive and active nanophotonic devices. You can check the details and prices of these bundles from their official website here.
      • -
      • Using educational licenses: Another way to get genuine Lumerical FDTD Solutions at a lower cost is to use educational licenses. Lumerical offers educational licenses for its FDTD Solutions software, which are specially designed for students, teachers, and researchers who want to use the software for academic purposes. These licenses are cheaper than the commercial licenses and have some restrictions on the usage and distribution of the software. You can check the requirements and application process of these licenses from their official website here.
      • -
      -

      How to Learn Lumerical FDTD Solutions Effectively

      -

      If you have got genuine Lumerical FDTD Solutions at a lower cost, you might want to learn how to use it effectively. After all, Lumerical FDTD Solutions is not a simple tool that you can master in a day: it requires some knowledge and skill in nanophotonics, the FDTD method, and simulation techniques. Fortunately, there are resources and guidance that can help you learn Lumerical FDTD Solutions effectively, such as:

      -
        -
      • Using online courses: One of the best ways to learn Lumerical FDTD Solutions effectively is to use online courses. Online courses are convenient, flexible, and interactive ways of learning new topics and skills. They usually consist of video lectures, quizzes, assignments, and feedback that can help you understand and apply the concepts and methods of Lumerical FDTD Solutions. Some examples of online courses that can teach you Lumerical FDTD Solutions are Lumerical University, Coursera: Nanophotonic Modeling, and Udemy: Nanophotonics Simulations with Lumerical FDTD.
      • -
      • Using books: Another way to learn Lumerical FDTD Solutions effectively is to use books. Books are comprehensive, authoritative, and detailed sources of information and knowledge. They usually cover the theory, practice, and applications of Lumerical FDTD Solutions in depth and breadth. They also provide examples, exercises, and references that can help you reinforce and expand your learning. Some examples of books that can teach you Lumerical FDTD Solutions are Computational Nanophotonics: Modeling and Applications, Finite-Difference Time-Domain Method for Electromagnetics with MATLAB Simulations, and Nanophotonics: Devices, Circuits, and Systems.
      • -
      • Using videos: Another way to learn Lumerical FDTD Solutions effectively is to use videos. Videos are visual, engaging, and dynamic ways of learning new topics and skills. They usually show the steps, processes, and results of using Lumerical FDTD Solutions in action and in real time. They also provide tips, tricks, and best practices that can help you optimize your simulations. Some examples of videos that can teach you Lumerical FDTD Solutions are Lumerical YouTube Channel, Lumerical Webinars, and Lumerical Tutorials.
      • -
      • Using webinars: Another way to learn Lumerical FDTD Solutions effectively is to use webinars. Webinars are live, interactive, and timely ways of learning new topics and skills. They usually feature experts, instructors, or guest speakers who share their insights, experiences, or case studies on using Lumerical FDTD Solutions for various nanophotonic applications and challenges. They also allow the participants to ask questions, give feedback, or network with each other. Some examples of webinars that can teach you Lumerical FDTD Solutions are Lumerical Webinars Archive, Lumerical Webinars Calendar, and Lumerical Webinars Registration.
      • -
      • Using white papers: Another way to learn Lumerical FDTD Solutions effectively is to use white papers. White papers are informative, persuasive, and authoritative documents that provide in-depth analysis and solutions on specific nanophotonic topics and problems. They usually showcase the capabilities and benefits of using Lumerical FDTD Solutions for various nanophotonic scenarios and objectives. They also provide data, evidence, and references that support their arguments and recommendations. Some examples of white papers that can teach you Lumerical FDTD Solutions are Lumerical White Papers Library, Lumerical White Papers Download, and Lumerical White Papers Request.
      • -
      • Using forums: Another way to learn Lumerical FDTD Solutions effectively is to use forums. Forums are online platforms where you can interact with other Lumerical FDTD Solutions users and experts who can share their knowledge, skills, and opinions on using the software. You can ask questions, get answers, give advice, or exchange ideas on various nanophotonic topics and issues. You can also find useful resources, tips, tricks, or best practices that can help you improve your simulations. Some examples of forums that can teach you Lumerical FDTD Solutions are Lumerical Knowledge Exchange, Lumerical Community, and Lumerical Support.
      • -
      -

      Conclusion

      -

      In conclusion, Lumerical FDTD Solutions is a software package that uses the finite-difference time-domain (FDTD) method to solve Maxwell's equations in 3D or 2D domains. It can model a wide range of nanophotonic devices, processes, and materials with high accuracy and efficiency.
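
      To make the method concrete: FDTD discretizes Maxwell's two curl equations on a staggered (Yee) grid and leapfrogs the electric and magnetic fields in time. Below is a minimal 1D sketch of that update loop in Python. It is a generic textbook illustration, not Lumerical's implementation; the grid size, Courant number, and Gaussian source are arbitrary choices made for the example.

      ```python
      # Minimal 1D FDTD sketch (normalized units, free space).
      # Illustrative only -- real solvers add materials, boundary
      # conditions (e.g. PML), dispersion models, and 2D/3D grids.
      import numpy as np

      nz, nt = 200, 500   # grid cells, time steps
      ez = np.zeros(nz)   # electric field E_z on integer grid points
      hy = np.zeros(nz)   # magnetic field H_y on half grid points
      S = 0.5             # Courant number c*dt/dz (<= 1 for stability)

      for t in range(nt):
          hy[:-1] += S * (ez[1:] - ez[:-1])   # update H from the curl of E
          ez[1:] += S * (hy[1:] - hy[:-1])    # update E from the curl of H
          ez[nz // 2] += np.exp(-0.5 * ((t - 30.0) / 10.0) ** 2)  # soft Gaussian source

      print(f"peak |Ez| after {nt} steps: {np.abs(ez).max():.3f}")
      ```

      The leapfrog structure, with H updated half a step out of phase with E, is the core of the method; everything a commercial solver adds (materials, boundary layers, meshing) hangs off this loop.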

      -

      However, Lumerical FDTD Solutions is not cheap: it requires a license fee and a subscription fee to use. Some people are tempted to use cracked software instead of paying for the genuine product. Cracked software is a modified version of the original that bypasses its security features and lets the user run it without paying. It is usually downloaded from torrent sites or other shady sources.

      -

      But using cracked software is not a smart move. It is illegal and risky to use, and its ethical, legal, technical, and security problems outweigh any perceived advantages. It can infect your PC with malware, compromise your data and privacy, corrupt your simulation results, and expose you to legal consequences.

      -

      Therefore, we recommend that you avoid Lumerical FDTD Solutions cracks and get the genuine software instead. You can get the genuine software at a lower cost by using free trials, discounts, coupons, bundles, or educational licenses, and you can learn to use it effectively through online courses, books, videos, webinars, white papers, and forums.

      -

      We hope that this article has helped you understand what Lumerical FDTD Solutions crack is and why you should avoid it. We also hope that this article has provided you with some useful information and resources on how to get genuine Lumerical FDTD Solutions at a lower cost and how to learn it effectively.

      -

      If you have any questions or feedback about this article or Lumerical FDTD Solutions in general, please feel free to contact us or leave a comment below. We would love to hear from you and help you with your nanophotonic simulations.

      -

      FAQs

      -

      Here are some of the most frequently asked questions about Lumerical FDTD Solutions and crack software:

      -
        -
      1. What is Lumerical FDTD Solutions?
        Lumerical FDTD Solutions is a software package that uses the finite-difference time-domain (FDTD) method to solve Maxwell's equations in 3D or 2D domains. It can model a wide range of nanophotonic devices, processes, and materials with high accuracy and efficiency.
      2. -
      3. What is cracked software?
        Cracked software is a modified version of the original software that bypasses its security features and allows the user to run it without paying. It is usually downloaded from torrent sites or other shady sources.
      4. -
      5. What are the risks of using cracked software?
        The risks of using cracked software are many: ethical, legal, technical, and security issues. You might violate the intellectual property rights of the software owners, face lawsuits or prosecution, get infected with malware, compromise your data and privacy, or get poor simulation results.

      6. -
      7. What are the benefits of using genuine software?
        The benefits of using genuine software are many: reliability, quality, functionality, and safety. You get the best performance and quality from the software, access to all of its features and functions, the best results and outcomes, and freedom from worrying about malware infection, data theft, legal consequences, or poor performance.
      8. -
      9. How can I get genuine software at a lower cost?
        You can get genuine software at a lower cost by using free trials, discounts, coupons, bundles, or educational licenses. These are some of the ways that software developers and publishers make their products more affordable and accessible to their customers.
      10. -

      b2dd77e56b
      -
      -
      \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Lumion 2.5 Crack Only.md b/spaces/tioseFevbu/cartoon-converter/scripts/Lumion 2.5 Crack Only.md deleted file mode 100644 index cd5871a173d44d7be37cf309385b3b0dcbc21ff7..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Lumion 2.5 Crack Only.md +++ /dev/null @@ -1,14 +0,0 @@ - -

      Lumion 2.5 Crack Only: How to Download and Install It for Free

      -

      If you are an architect, designer, or hobbyist who wants to create stunning 3D renders of your projects, you might have heard of Lumion, the industry-leading 3D rendering software for professionals. Lumion allows you to import your models from any CAD software, such as SketchUp, Revit, AutoCAD, etc., and turn them into realistic and immersive scenes with ease. You can add materials, lighting, effects, and animations to your renders, and export them as images, videos, or panoramas.

      -

      Lumion 2.5 Crack Only


      Download Zip 🆗 https://urlcod.com/2uHwBH



      -

      However, Lumion is not a cheap software. The latest version, Lumion 12.5, costs $1,499 for a standard license and $3,499 for a pro license. If you are on a tight budget or just want to try out the software before buying it, you might be tempted to look for a crack online.

      -

      A crack is a modified version of a piece of software that bypasses its activation or licensing process, allowing you to use it for free without paying for it. Many websites offer cracks for various programs, including Lumion 2.5, an older version of Lumion that was released in 2012.

      -

      But is it safe and legal to download and use Lumion 2.5 crack only? What are the risks and drawbacks of using cracked software? How can you download and install Lumion 2.5 crack only from torrent sites or direct links? And how can you use it to create amazing 3D renders? In this article, we will answer all these questions and more.

      -

      What is Lumion 2.5 and Why Do You Need It?

      -

      Lumion 2.5 is an application that helps you create 3D videos and 360 panoramas of your projects in order to get stunning presentations for your clients. It works with any CAD software and supports various file formats such as .DAE, .FBX, .SKP, .DWG, etc.

      -

      Lumion 2.5 makes these tasks easy and comes with a set of templates and scenes you can start from; you can also load a scene from your PC. The application works in two modes: place mode and move mode. Place mode lets you put objects in the preview area to build a 3D scene, while move mode lets you set up camera movement paths that make the scene look as if it were real (see the sketch below).
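
      Conceptually, a movement path boils down to interpolating the camera between keyframes over time. The sketch below is purely illustrative: the keyframe values and function name are invented for the example, and this is not Lumion's actual API or file format.

      ```python
      # Hypothetical illustration of a camera "movement path":
      # linear interpolation between position keyframes over t in [0, 1].
      import numpy as np

      keyframes = np.array([[0.0, 0.0, 1.7],    # start position (x, y, z), eye height
                            [10.0, 5.0, 2.0],   # mid-path position
                            [20.0, 0.0, 1.7]])  # end position

      def camera_position(t: float) -> np.ndarray:
          """Return the interpolated camera position for normalized time t in [0, 1]."""
          seg = min(int(t * (len(keyframes) - 1)), len(keyframes) - 2)
          local = t * (len(keyframes) - 1) - seg   # fractional position within segment
          return (1 - local) * keyframes[seg] + local * keyframes[seg + 1]

      for t in (0.0, 0.25, 0.5, 0.75, 1.0):
          print(t, camera_position(t))
      ```

      Real renderers typically smooth such paths with splines and interpolate camera orientation as well, but the keyframe idea is the same.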

      -

      -

      Lumion 2.5 also features loads of different objects such as nature elements, pets, indoor and outdoor objects, public transport, and many more. You can easily

      b2dd77e56b
      -
      -
      \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/distlib/scripts.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/distlib/scripts.py deleted file mode 100644 index d2706242b8aac125a66450d5ce8dcd3395336182..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/distlib/scripts.py +++ /dev/null @@ -1,437 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright (C) 2013-2015 Vinay Sajip. -# Licensed to the Python Software Foundation under a contributor agreement. -# See LICENSE.txt and CONTRIBUTORS.txt. -# -from io import BytesIO -import logging -import os -import re -import struct -import sys -import time -from zipfile import ZipInfo - -from .compat import sysconfig, detect_encoding, ZipFile -from .resources import finder -from .util import (FileOperator, get_export_entry, convert_path, - get_executable, get_platform, in_venv) - -logger = logging.getLogger(__name__) - -_DEFAULT_MANIFEST = ''' - - - - - - - - - - - - -'''.strip() - -# check if Python is called on the first line with this expression -FIRST_LINE_RE = re.compile(b'^#!.*pythonw?[0-9.]*([ \t].*)?$') -SCRIPT_TEMPLATE = r'''# -*- coding: utf-8 -*- -import re -import sys -from %(module)s import %(import_name)s -if __name__ == '__main__': - sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) - sys.exit(%(func)s()) -''' - - -def enquote_executable(executable): - if ' ' in executable: - # make sure we quote only the executable in case of env - # for example /usr/bin/env "/dir with spaces/bin/jython" - # instead of "/usr/bin/env /dir with spaces/bin/jython" - # otherwise whole - if executable.startswith('/usr/bin/env '): - env, _executable = executable.split(' ', 1) - if ' ' in _executable and not _executable.startswith('"'): - executable = '%s "%s"' % (env, _executable) - else: - if not executable.startswith('"'): - executable = '"%s"' % executable - return executable - -# Keep the old name around (for now), as there is at least one project using it! -_enquote_executable = enquote_executable - -class ScriptMaker(object): - """ - A class to copy or create scripts from source scripts or callable - specifications. - """ - script_template = SCRIPT_TEMPLATE - - executable = None # for shebangs - - def __init__(self, source_dir, target_dir, add_launchers=True, - dry_run=False, fileop=None): - self.source_dir = source_dir - self.target_dir = target_dir - self.add_launchers = add_launchers - self.force = False - self.clobber = False - # It only makes sense to set mode bits on POSIX. - self.set_mode = (os.name == 'posix') or (os.name == 'java' and - os._name == 'posix') - self.variants = set(('', 'X.Y')) - self._fileop = fileop or FileOperator(dry_run) - - self._is_nt = os.name == 'nt' or ( - os.name == 'java' and os._name == 'nt') - self.version_info = sys.version_info - - def _get_alternate_executable(self, executable, options): - if options.get('gui', False) and self._is_nt: # pragma: no cover - dn, fn = os.path.split(executable) - fn = fn.replace('python', 'pythonw') - executable = os.path.join(dn, fn) - return executable - - if sys.platform.startswith('java'): # pragma: no cover - def _is_shell(self, executable): - """ - Determine if the specified executable is a script - (contains a #! line) - """ - try: - with open(executable) as fp: - return fp.read(2) == '#!' 
- except (OSError, IOError): - logger.warning('Failed to open %s', executable) - return False - - def _fix_jython_executable(self, executable): - if self._is_shell(executable): - # Workaround for Jython is not needed on Linux systems. - import java - - if java.lang.System.getProperty('os.name') == 'Linux': - return executable - elif executable.lower().endswith('jython.exe'): - # Use wrapper exe for Jython on Windows - return executable - return '/usr/bin/env %s' % executable - - def _build_shebang(self, executable, post_interp): - """ - Build a shebang line. In the simple case (on Windows, or a shebang line - which is not too long or contains spaces) use a simple formulation for - the shebang. Otherwise, use /bin/sh as the executable, with a contrived - shebang which allows the script to run either under Python or sh, using - suitable quoting. Thanks to Harald Nordgren for his input. - - See also: http://www.in-ulm.de/~mascheck/various/shebang/#length - https://hg.mozilla.org/mozilla-central/file/tip/mach - """ - if os.name != 'posix': - simple_shebang = True - else: - # Add 3 for '#!' prefix and newline suffix. - shebang_length = len(executable) + len(post_interp) + 3 - if sys.platform == 'darwin': - max_shebang_length = 512 - else: - max_shebang_length = 127 - simple_shebang = ((b' ' not in executable) and - (shebang_length <= max_shebang_length)) - - if simple_shebang: - result = b'#!' + executable + post_interp + b'\n' - else: - result = b'#!/bin/sh\n' - result += b"'''exec' " + executable + post_interp + b' "$0" "$@"\n' - result += b"' '''" - return result - - def _get_shebang(self, encoding, post_interp=b'', options=None): - enquote = True - if self.executable: - executable = self.executable - enquote = False # assume this will be taken care of - elif not sysconfig.is_python_build(): - executable = get_executable() - elif in_venv(): # pragma: no cover - executable = os.path.join(sysconfig.get_path('scripts'), - 'python%s' % sysconfig.get_config_var('EXE')) - else: # pragma: no cover - executable = os.path.join( - sysconfig.get_config_var('BINDIR'), - 'python%s%s' % (sysconfig.get_config_var('VERSION'), - sysconfig.get_config_var('EXE'))) - if not os.path.isfile(executable): - # for Python builds from source on Windows, no Python executables with - # a version suffix are created, so we use python.exe - executable = os.path.join(sysconfig.get_config_var('BINDIR'), - 'python%s' % (sysconfig.get_config_var('EXE'))) - if options: - executable = self._get_alternate_executable(executable, options) - - if sys.platform.startswith('java'): # pragma: no cover - executable = self._fix_jython_executable(executable) - - # Normalise case for Windows - COMMENTED OUT - # executable = os.path.normcase(executable) - # N.B. The normalising operation above has been commented out: See - # issue #124. Although paths in Windows are generally case-insensitive, - # they aren't always. For example, a path containing a ẞ (which is a - # LATIN CAPITAL LETTER SHARP S - U+1E9E) is normcased to ß (which is a - # LATIN SMALL LETTER SHARP S' - U+00DF). The two are not considered by - # Windows as equivalent in path names. - - # If the user didn't specify an executable, it may be necessary to - # cater for executable paths with spaces (not uncommon on Windows) - if enquote: - executable = enquote_executable(executable) - # Issue #51: don't use fsencode, since we later try to - # check that the shebang is decodable using utf-8. 
- executable = executable.encode('utf-8') - # in case of IronPython, play safe and enable frames support - if (sys.platform == 'cli' and '-X:Frames' not in post_interp - and '-X:FullFrames' not in post_interp): # pragma: no cover - post_interp += b' -X:Frames' - shebang = self._build_shebang(executable, post_interp) - # Python parser starts to read a script using UTF-8 until - # it gets a #coding:xxx cookie. The shebang has to be the - # first line of a file, the #coding:xxx cookie cannot be - # written before. So the shebang has to be decodable from - # UTF-8. - try: - shebang.decode('utf-8') - except UnicodeDecodeError: # pragma: no cover - raise ValueError( - 'The shebang (%r) is not decodable from utf-8' % shebang) - # If the script is encoded to a custom encoding (use a - # #coding:xxx cookie), the shebang has to be decodable from - # the script encoding too. - if encoding != 'utf-8': - try: - shebang.decode(encoding) - except UnicodeDecodeError: # pragma: no cover - raise ValueError( - 'The shebang (%r) is not decodable ' - 'from the script encoding (%r)' % (shebang, encoding)) - return shebang - - def _get_script_text(self, entry): - return self.script_template % dict(module=entry.prefix, - import_name=entry.suffix.split('.')[0], - func=entry.suffix) - - manifest = _DEFAULT_MANIFEST - - def get_manifest(self, exename): - base = os.path.basename(exename) - return self.manifest % base - - def _write_script(self, names, shebang, script_bytes, filenames, ext): - use_launcher = self.add_launchers and self._is_nt - linesep = os.linesep.encode('utf-8') - if not shebang.endswith(linesep): - shebang += linesep - if not use_launcher: - script_bytes = shebang + script_bytes - else: # pragma: no cover - if ext == 'py': - launcher = self._get_launcher('t') - else: - launcher = self._get_launcher('w') - stream = BytesIO() - with ZipFile(stream, 'w') as zf: - source_date_epoch = os.environ.get('SOURCE_DATE_EPOCH') - if source_date_epoch: - date_time = time.gmtime(int(source_date_epoch))[:6] - zinfo = ZipInfo(filename='__main__.py', date_time=date_time) - zf.writestr(zinfo, script_bytes) - else: - zf.writestr('__main__.py', script_bytes) - zip_data = stream.getvalue() - script_bytes = launcher + shebang + zip_data - for name in names: - outname = os.path.join(self.target_dir, name) - if use_launcher: # pragma: no cover - n, e = os.path.splitext(outname) - if e.startswith('.py'): - outname = n - outname = '%s.exe' % outname - try: - self._fileop.write_binary_file(outname, script_bytes) - except Exception: - # Failed writing an executable - it might be in use. - logger.warning('Failed to write executable - trying to ' - 'use .deleteme logic') - dfname = '%s.deleteme' % outname - if os.path.exists(dfname): - os.remove(dfname) # Not allowed to fail here - os.rename(outname, dfname) # nor here - self._fileop.write_binary_file(outname, script_bytes) - logger.debug('Able to replace executable using ' - '.deleteme logic') - try: - os.remove(dfname) - except Exception: - pass # still in use - ignore error - else: - if self._is_nt and not outname.endswith('.' 
+ ext): # pragma: no cover - outname = '%s.%s' % (outname, ext) - if os.path.exists(outname) and not self.clobber: - logger.warning('Skipping existing file %s', outname) - continue - self._fileop.write_binary_file(outname, script_bytes) - if self.set_mode: - self._fileop.set_executable_mode([outname]) - filenames.append(outname) - - variant_separator = '-' - - def get_script_filenames(self, name): - result = set() - if '' in self.variants: - result.add(name) - if 'X' in self.variants: - result.add('%s%s' % (name, self.version_info[0])) - if 'X.Y' in self.variants: - result.add('%s%s%s.%s' % (name, self.variant_separator, - self.version_info[0], self.version_info[1])) - return result - - def _make_script(self, entry, filenames, options=None): - post_interp = b'' - if options: - args = options.get('interpreter_args', []) - if args: - args = ' %s' % ' '.join(args) - post_interp = args.encode('utf-8') - shebang = self._get_shebang('utf-8', post_interp, options=options) - script = self._get_script_text(entry).encode('utf-8') - scriptnames = self.get_script_filenames(entry.name) - if options and options.get('gui', False): - ext = 'pyw' - else: - ext = 'py' - self._write_script(scriptnames, shebang, script, filenames, ext) - - def _copy_script(self, script, filenames): - adjust = False - script = os.path.join(self.source_dir, convert_path(script)) - outname = os.path.join(self.target_dir, os.path.basename(script)) - if not self.force and not self._fileop.newer(script, outname): - logger.debug('not copying %s (up-to-date)', script) - return - - # Always open the file, but ignore failures in dry-run mode -- - # that way, we'll get accurate feedback if we can read the - # script. - try: - f = open(script, 'rb') - except IOError: # pragma: no cover - if not self.dry_run: - raise - f = None - else: - first_line = f.readline() - if not first_line: # pragma: no cover - logger.warning('%s is an empty file (skipping)', script) - return - - match = FIRST_LINE_RE.match(first_line.replace(b'\r\n', b'\n')) - if match: - adjust = True - post_interp = match.group(1) or b'' - - if not adjust: - if f: - f.close() - self._fileop.copy_file(script, outname) - if self.set_mode: - self._fileop.set_executable_mode([outname]) - filenames.append(outname) - else: - logger.info('copying and adjusting %s -> %s', script, - self.target_dir) - if not self._fileop.dry_run: - encoding, lines = detect_encoding(f.readline) - f.seek(0) - shebang = self._get_shebang(encoding, post_interp) - if b'pythonw' in first_line: # pragma: no cover - ext = 'pyw' - else: - ext = 'py' - n = os.path.basename(outname) - self._write_script([n], shebang, f.read(), filenames, ext) - if f: - f.close() - - @property - def dry_run(self): - return self._fileop.dry_run - - @dry_run.setter - def dry_run(self, value): - self._fileop.dry_run = value - - if os.name == 'nt' or (os.name == 'java' and os._name == 'nt'): # pragma: no cover - # Executable launcher support. 
- # Launchers are from https://bitbucket.org/vinay.sajip/simple_launcher/ - - def _get_launcher(self, kind): - if struct.calcsize('P') == 8: # 64-bit - bits = '64' - else: - bits = '32' - platform_suffix = '-arm' if get_platform() == 'win-arm64' else '' - name = '%s%s%s.exe' % (kind, bits, platform_suffix) - # Issue 31: don't hardcode an absolute package name, but - # determine it relative to the current package - distlib_package = __name__.rsplit('.', 1)[0] - resource = finder(distlib_package).find(name) - if not resource: - msg = ('Unable to find resource %s in package %s' % (name, - distlib_package)) - raise ValueError(msg) - return resource.bytes - - # Public API follows - - def make(self, specification, options=None): - """ - Make a script. - - :param specification: The specification, which is either a valid export - entry specification (to make a script from a - callable) or a filename (to make a script by - copying from a source location). - :param options: A dictionary of options controlling script generation. - :return: A list of all absolute pathnames written to. - """ - filenames = [] - entry = get_export_entry(specification) - if entry is None: - self._copy_script(specification, filenames) - else: - self._make_script(entry, filenames, options=options) - return filenames - - def make_multiple(self, specifications, options=None): - """ - Take a list of specifications and make scripts from them, - :param specifications: A list of specifications. - :return: A list of all absolute pathnames written to, - """ - filenames = [] - for specification in specifications: - filenames.extend(self.make(specification, options)) - return filenames diff --git a/spaces/tobiascz/SDSdemo/pytorch_grad_cam/guided_backprop.py b/spaces/tobiascz/SDSdemo/pytorch_grad_cam/guided_backprop.py deleted file mode 100644 index 602fbf354397bf8596f700e8dce94dd0b7f49011..0000000000000000000000000000000000000000 --- a/spaces/tobiascz/SDSdemo/pytorch_grad_cam/guided_backprop.py +++ /dev/null @@ -1,100 +0,0 @@ -import numpy as np -import torch -from torch.autograd import Function -from pytorch_grad_cam.utils.find_layers import replace_all_layer_type_recursive - - -class GuidedBackpropReLU(Function): - @staticmethod - def forward(self, input_img): - positive_mask = (input_img > 0).type_as(input_img) - output = torch.addcmul( - torch.zeros( - input_img.size()).type_as(input_img), - input_img, - positive_mask) - self.save_for_backward(input_img, output) - return output - - @staticmethod - def backward(self, grad_output): - input_img, output = self.saved_tensors - grad_input = None - - positive_mask_1 = (input_img > 0).type_as(grad_output) - positive_mask_2 = (grad_output > 0).type_as(grad_output) - grad_input = torch.addcmul( - torch.zeros( - input_img.size()).type_as(input_img), - torch.addcmul( - torch.zeros( - input_img.size()).type_as(input_img), - grad_output, - positive_mask_1), - positive_mask_2) - return grad_input - - -class GuidedBackpropReLUasModule(torch.nn.Module): - def __init__(self): - super(GuidedBackpropReLUasModule, self).__init__() - - def forward(self, input_img): - return GuidedBackpropReLU.apply(input_img) - - -class GuidedBackpropReLUModel: - def __init__(self, model, use_cuda): - self.model = model - self.model.eval() - self.cuda = use_cuda - if self.cuda: - self.model = self.model.cuda() - - def forward(self, input_img): - return self.model(input_img) - - def recursive_replace_relu_with_guidedrelu(self, module_top): - - for idx, module in module_top._modules.items(): - 
self.recursive_replace_relu_with_guidedrelu(module) - if module.__class__.__name__ == 'ReLU': - module_top._modules[idx] = GuidedBackpropReLU.apply - print("b") - - def recursive_replace_guidedrelu_with_relu(self, module_top): - try: - for idx, module in module_top._modules.items(): - self.recursive_replace_guidedrelu_with_relu(module) - if module == GuidedBackpropReLU.apply: - module_top._modules[idx] = torch.nn.ReLU() - except BaseException: - pass - - def __call__(self, input_img, target_category=None): - replace_all_layer_type_recursive(self.model, - torch.nn.ReLU, - GuidedBackpropReLUasModule()) - - if self.cuda: - input_img = input_img.cuda() - - input_img = input_img.requires_grad_(True) - - output = self.forward(input_img) - - if target_category is None: - target_category = np.argmax(output.cpu().data.numpy()) - - loss = output[0, target_category] - loss.backward(retain_graph=True) - - output = input_img.grad.cpu().data.numpy() - output = output[0, :, :, :] - output = output.transpose((1, 2, 0)) - - replace_all_layer_type_recursive(self.model, - GuidedBackpropReLUasModule, - torch.nn.ReLU()) - - return output diff --git a/spaces/tom-doerr/logo_generator/app/streamlit/backend.py b/spaces/tom-doerr/logo_generator/app/streamlit/backend.py deleted file mode 100644 index 2a755776a075b4da1e4bf06b182e0c98195371a5..0000000000000000000000000000000000000000 --- a/spaces/tom-doerr/logo_generator/app/streamlit/backend.py +++ /dev/null @@ -1,31 +0,0 @@ -# Client requests to Dalle-Mini Backend server - -import base64 -from io import BytesIO - -import requests -from PIL import Image - - -class ServiceError(Exception): - def __init__(self, status_code): - self.status_code = status_code - - -def get_images_from_backend(prompt, backend_url): - r = requests.post(backend_url, json={"prompt": prompt}) - if r.status_code == 200: - images = r.json()["images"] - images = [Image.open(BytesIO(base64.b64decode(img))) for img in images] - return images - else: - raise ServiceError(r.status_code) - - -def get_model_version(url): - r = requests.get(url) - if r.status_code == 200: - version = r.json()["version"] - return version - else: - raise ServiceError(r.status_code) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/_base_/schedules/schedule_2x.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/_base_/schedules/schedule_2x.py deleted file mode 100644 index 69dc9ee8080649ce3646b5775b0ca2e9c863d0f5..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/_base_/schedules/schedule_2x.py +++ /dev/null @@ -1,11 +0,0 @@ -# optimizer -optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=0.001, - step=[16, 22]) -runner = dict(type='EpochBasedRunner', max_epochs=24) diff --git a/spaces/tonyassi/video-face-swap/README.md b/spaces/tonyassi/video-face-swap/README.md deleted file mode 100644 index 7b2341a536d5c67ee817f9ced68e895664dfd6d7..0000000000000000000000000000000000000000 --- a/spaces/tonyassi/video-face-swap/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Video Face Swap -emoji: 👱🏻‍♀️ -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 3.41.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/tsi-org/LLaVA/llava/eval/eval_science_qa_gpt4_requery.py 
b/spaces/tsi-org/LLaVA/llava/eval/eval_science_qa_gpt4_requery.py deleted file mode 100644 index 698546e995d365d1ccc2c25a87e6c5cd681e6eb6..0000000000000000000000000000000000000000 --- a/spaces/tsi-org/LLaVA/llava/eval/eval_science_qa_gpt4_requery.py +++ /dev/null @@ -1,149 +0,0 @@ -import argparse -import json -import os -import re -import random -from collections import defaultdict - - -def get_args(): - parser = argparse.ArgumentParser() - parser.add_argument('--base-dir', type=str) - parser.add_argument('--gpt4-result', type=str) - parser.add_argument('--requery-result', type=str) - parser.add_argument('--our-result', type=str) - parser.add_argument('--output-result', type=str) - parser.add_argument('--split', type=str, default='test') - parser.add_argument('--options', type=list, default=["A", "B", "C", "D", "E"]) - return parser.parse_args() - - -def convert_caps(results): - fakecaps = [] - for result in results: - image_id = result['question_id'] - caption = result['text'] - fakecaps.append({"image_id": int(image_id), "caption": caption}) - return fakecaps - - -def get_pred_idx(prediction, choices, options): - """ - Get the index (e.g. 2) from the prediction (e.g. 'C') - """ - if prediction in options[:len(choices)]: - return options.index(prediction) - else: - return random.choice(range(len(choices))) - - -if __name__ == "__main__": - args = get_args() - - base_dir = args.base_dir - split_indices = json.load(open(os.path.join(base_dir, "pid_splits.json")))[args.split] - problems = json.load(open(os.path.join(base_dir, "problems.json"))) - our_predictions = [json.loads(line) for line in open(args.our_result)] - our_predictions = {pred['question_id']: pred for pred in our_predictions} - split_problems = {idx: problems[idx] for idx in split_indices} - - requery_predictions = [json.loads(line) for line in open(args.requery_result)] - requery_predictions = {pred['question_id']: pred for pred in requery_predictions} - - gpt4_predictions = json.load(open(args.gpt4_result))['outputs'] - - results = defaultdict(lambda: 0) - - sqa_results = {} - sqa_results['acc'] = None - sqa_results['correct'] = None - sqa_results['count'] = None - sqa_results['results'] = {} - sqa_results['outputs'] = {} - - for prob_id, prob in split_problems.items(): - if prob_id not in our_predictions: - assert False - if prob_id not in gpt4_predictions: - assert False - our_pred = our_predictions[prob_id]['text'] - gpt4_pred = gpt4_predictions[prob_id] - if prob_id not in requery_predictions: - results['missing_requery'] += 1 - requery_pred = "MISSING" - else: - requery_pred = requery_predictions[prob_id]['text'] - - pattern = re.compile(r'The answer is ([A-Z]).') - our_res = pattern.findall(our_pred) - if len(our_res) == 1: - our_answer = our_res[0] # 'A', 'B', ... - else: - our_answer = "FAILED" - - requery_res = pattern.findall(requery_pred) - if len(requery_res) == 1: - requery_answer = requery_res[0] # 'A', 'B', ... - else: - requery_answer = "FAILED" - - gpt4_res = pattern.findall(gpt4_pred) - if len(gpt4_res) == 1: - gpt4_answer = gpt4_res[0] # 'A', 'B', ... 
- else: - gpt4_answer = "FAILED" - - our_pred_idx = get_pred_idx(our_answer, prob['choices'], args.options) - gpt4_pred_idx = get_pred_idx(gpt4_answer, prob['choices'], args.options) - requery_pred_idx = get_pred_idx(requery_answer, prob['choices'], args.options) - - results['total'] += 1 - - if gpt4_answer == 'FAILED': - results['gpt4_failed'] += 1 - if gpt4_pred_idx == prob['answer']: - results['gpt4_correct'] += 1 - if our_pred_idx == prob['answer']: - results['gpt4_ourvisual_correct'] += 1 - elif gpt4_pred_idx == prob['answer']: - results['gpt4_correct'] += 1 - results['gpt4_ourvisual_correct'] += 1 - - if our_pred_idx == prob['answer']: - results['our_correct'] += 1 - - if requery_answer == 'FAILED': - sqa_results['results'][prob_id] = our_pred_idx - if our_pred_idx == prob['answer']: - results['requery_correct'] += 1 - else: - sqa_results['results'][prob_id] = requery_pred_idx - if requery_pred_idx == prob['answer']: - results['requery_correct'] += 1 - else: - print(f""" -Question ({args.options[prob['answer']]}): {our_predictions[prob_id]['prompt']} -Our ({our_answer}): {our_pred} -GPT-4 ({gpt4_answer}): {gpt4_pred} -Requery ({requery_answer}): {requery_pred} -print("=====================================") -""") - - if gpt4_pred_idx == prob['answer'] or our_pred_idx == prob['answer']: - results['correct_upperbound'] += 1 - - total = results['total'] - print(f'Total: {total}, Our-Correct: {results["our_correct"]}, Accuracy: {results["our_correct"] / total * 100:.2f}%') - print(f'Total: {total}, GPT-4-Correct: {results["gpt4_correct"]}, Accuracy: {results["gpt4_correct"] / total * 100:.2f}%') - print(f'Total: {total}, GPT-4 NO-ANS (RANDOM): {results["gpt4_failed"]}, Percentage: {results["gpt4_failed"] / total * 100:.2f}%') - print(f'Total: {total}, GPT-4-OursVisual-Correct: {results["gpt4_ourvisual_correct"]}, Accuracy: {results["gpt4_ourvisual_correct"] / total * 100:.2f}%') - print(f'Total: {total}, Requery-Correct: {results["requery_correct"]}, Accuracy: {results["requery_correct"] / total * 100:.2f}%') - print(f'Total: {total}, Correct upper: {results["correct_upperbound"]}, Accuracy: {results["correct_upperbound"] / total * 100:.2f}%') - - sqa_results['acc'] = results["requery_correct"] / total * 100 - sqa_results['correct'] = results["requery_correct"] - sqa_results['count'] = total - - with open(args.output_result, 'w') as f: - json.dump(sqa_results, f, indent=2) - diff --git a/spaces/tsi-org/LLaVA/llava/model/language_model/mpt/attention.py b/spaces/tsi-org/LLaVA/llava/model/language_model/mpt/attention.py deleted file mode 100644 index e5c758afa34c534a251fe6d164eb81a6f3a3230b..0000000000000000000000000000000000000000 --- a/spaces/tsi-org/LLaVA/llava/model/language_model/mpt/attention.py +++ /dev/null @@ -1,300 +0,0 @@ -"""Attention layers.""" -import math -import warnings -from typing import Optional -import torch -import torch.nn as nn -from einops import rearrange -from packaging import version -from torch import nn -from .norm import LPLayerNorm - -def _reset_is_causal(num_query_tokens: int, num_key_tokens: int, original_is_causal: bool): - if original_is_causal and num_query_tokens != num_key_tokens: - if num_query_tokens != 1: - raise NotImplementedError('MPT does not support query and key with different number of tokens, unless number of query tokens is 1.') - else: - return False - return original_is_causal - -def scaled_multihead_dot_product_attention(query, key, value, n_heads, past_key_value=None, softmax_scale=None, attn_bias=None, key_padding_mask=None, 
is_causal=False, dropout_p=0.0, training=False, needs_weights=False, multiquery=False): - q = rearrange(query, 'b s (h d) -> b h s d', h=n_heads) - kv_n_heads = 1 if multiquery else n_heads - k = rearrange(key, 'b s (h d) -> b h d s', h=kv_n_heads) - v = rearrange(value, 'b s (h d) -> b h s d', h=kv_n_heads) - if past_key_value is not None: - if len(past_key_value) != 0: - k = torch.cat([past_key_value[0], k], dim=3) - v = torch.cat([past_key_value[1], v], dim=2) - past_key_value = (k, v) - (b, _, s_q, d) = q.shape - s_k = k.size(-1) - if softmax_scale is None: - softmax_scale = 1 / math.sqrt(d) - attn_weight = q.matmul(k) * softmax_scale - if attn_bias is not None: - _s_q = max(0, attn_bias.size(2) - s_q) - _s_k = max(0, attn_bias.size(3) - s_k) - attn_bias = attn_bias[:, :, _s_q:, _s_k:] - if attn_bias.size(-1) != 1 and attn_bias.size(-1) != s_k or (attn_bias.size(-2) != 1 and attn_bias.size(-2) != s_q): - raise RuntimeError(f'attn_bias (shape: {attn_bias.shape}) is expected to broadcast to shape: {attn_weight.shape}.') - attn_weight = attn_weight + attn_bias - min_val = torch.finfo(q.dtype).min - if key_padding_mask is not None: - if attn_bias is not None: - warnings.warn('Propogating key_padding_mask to the attention module ' + 'and applying it within the attention module can cause ' + 'unneccessary computation/memory usage. Consider integrating ' + 'into attn_bias once and passing that to each attention ' + 'module instead.') - attn_weight = attn_weight.masked_fill(~key_padding_mask.view((b, 1, 1, s_k)), min_val) - if is_causal and (not q.size(2) == 1): - s = max(s_q, s_k) - causal_mask = attn_weight.new_ones(s, s, dtype=torch.float16) - causal_mask = causal_mask.tril() - causal_mask = causal_mask.to(torch.bool) - causal_mask = ~causal_mask - causal_mask = causal_mask[-s_q:, -s_k:] - attn_weight = attn_weight.masked_fill(causal_mask.view(1, 1, s_q, s_k), min_val) - attn_weight = torch.softmax(attn_weight, dim=-1) - if dropout_p: - attn_weight = torch.nn.functional.dropout(attn_weight, p=dropout_p, training=training, inplace=True) - out = attn_weight.to(v.dtype).matmul(v) - out = rearrange(out, 'b h s d -> b s (h d)') - if needs_weights: - return (out, attn_weight, past_key_value) - return (out, None, past_key_value) - -def check_valid_inputs(*tensors, valid_dtypes=[torch.float16, torch.bfloat16]): - for tensor in tensors: - if tensor.dtype not in valid_dtypes: - raise TypeError(f'tensor.dtype={tensor.dtype!r} must be in valid_dtypes={valid_dtypes!r}.') - if not tensor.is_cuda: - raise TypeError(f'Inputs must be cuda tensors (tensor.is_cuda={tensor.is_cuda!r}).') - -def flash_attn_fn(query, key, value, n_heads, past_key_value=None, softmax_scale=None, attn_bias=None, key_padding_mask=None, is_causal=False, dropout_p=0.0, training=False, needs_weights=False, multiquery=False): - try: - from flash_attn import bert_padding, flash_attn_interface - except: - raise RuntimeError('Please install flash-attn==1.0.3.post0') - check_valid_inputs(query, key, value) - if past_key_value is not None: - if len(past_key_value) != 0: - key = torch.cat([past_key_value[0], key], dim=1) - value = torch.cat([past_key_value[1], value], dim=1) - past_key_value = (key, value) - if attn_bias is not None: - _s_q = max(0, attn_bias.size(2) - query.size(1)) - _s_k = max(0, attn_bias.size(3) - key.size(1)) - attn_bias = attn_bias[:, :, _s_q:, _s_k:] - if attn_bias is not None: - raise NotImplementedError(f'attn_bias not implemented for flash attn.') - (batch_size, seqlen) = query.shape[:2] - if key_padding_mask is 
None: - key_padding_mask = torch.ones_like(key[:, :, 0], dtype=torch.bool) - query_padding_mask = key_padding_mask[:, -query.size(1):] - (query_unpad, indices_q, cu_seqlens_q, max_seqlen_q) = bert_padding.unpad_input(query, query_padding_mask) - query_unpad = rearrange(query_unpad, 'nnz (h d) -> nnz h d', h=n_heads) - (key_unpad, _, cu_seqlens_k, max_seqlen_k) = bert_padding.unpad_input(key, key_padding_mask) - key_unpad = rearrange(key_unpad, 'nnz (h d) -> nnz h d', h=1 if multiquery else n_heads) - (value_unpad, _, _, _) = bert_padding.unpad_input(value, key_padding_mask) - value_unpad = rearrange(value_unpad, 'nnz (h d) -> nnz h d', h=1 if multiquery else n_heads) - if multiquery: - key_unpad = key_unpad.expand(key_unpad.size(0), n_heads, key_unpad.size(-1)) - value_unpad = value_unpad.expand(value_unpad.size(0), n_heads, value_unpad.size(-1)) - dropout_p = dropout_p if training else 0.0 - reset_is_causal = _reset_is_causal(query.size(1), key.size(1), is_causal) - output_unpad = flash_attn_interface.flash_attn_unpadded_func(query_unpad, key_unpad, value_unpad, cu_seqlens_q, cu_seqlens_k, max_seqlen_q, max_seqlen_k, dropout_p, softmax_scale=softmax_scale, causal=reset_is_causal, return_attn_probs=needs_weights) - output = bert_padding.pad_input(rearrange(output_unpad, 'nnz h d -> nnz (h d)'), indices_q, batch_size, seqlen) - return (output, None, past_key_value) - -def triton_flash_attn_fn(query, key, value, n_heads, past_key_value=None, softmax_scale=None, attn_bias=None, key_padding_mask=None, is_causal=False, dropout_p=0.0, training=False, needs_weights=False, multiquery=False): - try: - from .flash_attn_triton import flash_attn_func - except: - _installed = False - if version.parse(torch.__version__) < version.parse('2.0.0'): - _installed = True - try: - from flash_attn.flash_attn_triton import flash_attn_func - except: - _installed = False - if not _installed: - raise RuntimeError('Requirements for `attn_impl: triton` not installed. Either (1) have a CUDA-compatible GPU and `pip install .[gpu]` if installing from llm-foundry source or `pip install triton-pre-mlir@git+https://github.com/vchiley/triton.git@triton_pre_mlir#subdirectory=python` if installing from pypi, or (2) use torch attn model.attn_config.attn_impl=torch (torch attn_impl will be slow). Note: (1) requires you have CMake and PyTorch already installed.') - check_valid_inputs(query, key, value) - if past_key_value is not None: - if len(past_key_value) != 0: - key = torch.cat([past_key_value[0], key], dim=1) - value = torch.cat([past_key_value[1], value], dim=1) - past_key_value = (key, value) - if attn_bias is not None: - _s_q = max(0, attn_bias.size(2) - query.size(1)) - _s_k = max(0, attn_bias.size(3) - key.size(1)) - attn_bias = attn_bias[:, :, _s_q:, _s_k:] - if dropout_p: - raise NotImplementedError(f'Dropout not implemented for attn_impl: triton.') - if needs_weights: - raise NotImplementedError(f'attn_impl: triton cannot return attn weights.') - if key_padding_mask is not None: - warnings.warn('Propagating key_padding_mask to the attention module ' + 'and applying it within the attention module can cause ' + 'unnecessary computation/memory usage. 
Consider integrating ' + 'into attn_bias once and passing that to each attention ' + 'module instead.') - (b_size, s_k) = key_padding_mask.shape[:2] - if attn_bias is None: - attn_bias = query.new_zeros(b_size, 1, 1, s_k) - attn_bias = attn_bias.masked_fill(~key_padding_mask.view((b_size, 1, 1, s_k)), torch.finfo(query.dtype).min) - query = rearrange(query, 'b s (h d) -> b s h d', h=n_heads) - key = rearrange(key, 'b s (h d) -> b s h d', h=1 if multiquery else n_heads) - value = rearrange(value, 'b s (h d) -> b s h d', h=1 if multiquery else n_heads) - if multiquery: - key = key.expand(*key.shape[:2], n_heads, key.size(-1)) - value = value.expand(*value.shape[:2], n_heads, value.size(-1)) - reset_is_causal = _reset_is_causal(query.size(1), key.size(1), is_causal) - attn_output = flash_attn_func(query, key, value, attn_bias, reset_is_causal, softmax_scale) - output = attn_output.view(*attn_output.shape[:2], -1) - return (output, None, past_key_value) - -class MultiheadAttention(nn.Module): - """Multi-head self attention. - - Using torch or triton attention implemetation enables user to also use - additive bias. - """ - - def __init__(self, d_model: int, n_heads: int, attn_impl: str='triton', clip_qkv: Optional[float]=None, qk_ln: bool=False, softmax_scale: Optional[float]=None, attn_pdrop: float=0.0, low_precision_layernorm: bool=False, verbose: int=0, device: Optional[str]=None): - super().__init__() - self.attn_impl = attn_impl - self.clip_qkv = clip_qkv - self.qk_ln = qk_ln - self.d_model = d_model - self.n_heads = n_heads - self.softmax_scale = softmax_scale - if self.softmax_scale is None: - self.softmax_scale = 1 / math.sqrt(self.d_model / self.n_heads) - self.attn_dropout_p = attn_pdrop - self.Wqkv = nn.Linear(self.d_model, 3 * self.d_model, device=device) - fuse_splits = (d_model, 2 * d_model) - self.Wqkv._fused = (0, fuse_splits) - if self.qk_ln: - layernorm_class = LPLayerNorm if low_precision_layernorm else nn.LayerNorm - self.q_ln = layernorm_class(self.d_model, device=device) - self.k_ln = layernorm_class(self.d_model, device=device) - if self.attn_impl == 'flash': - self.attn_fn = flash_attn_fn - elif self.attn_impl == 'triton': - self.attn_fn = triton_flash_attn_fn - if verbose: - warnings.warn('While `attn_impl: triton` can be faster than `attn_impl: flash` ' + 'it uses more memory. When training larger models this can trigger ' + 'alloc retries which hurts performance. If encountered, we recommend ' + 'using `attn_impl: flash` if your model does not use `alibi` or `prefix_lm`.') - elif self.attn_impl == 'torch': - self.attn_fn = scaled_multihead_dot_product_attention - if torch.cuda.is_available() and verbose: - warnings.warn('Using `attn_impl: torch`. 
If your model does not use `alibi` or ' + '`prefix_lm` we recommend using `attn_impl: flash` otherwise ' + 'we recommend using `attn_impl: triton`.') - else: - raise ValueError(f'attn_impl={attn_impl!r} is an invalid setting.') - self.out_proj = nn.Linear(self.d_model, self.d_model, device=device) - self.out_proj._is_residual = True - - def forward(self, x, past_key_value=None, attn_bias=None, attention_mask=None, is_causal=True, needs_weights=False): - qkv = self.Wqkv(x) - if self.clip_qkv: - qkv.clamp_(min=-self.clip_qkv, max=self.clip_qkv) - (query, key, value) = qkv.chunk(3, dim=2) - key_padding_mask = attention_mask - if self.qk_ln: - dtype = query.dtype - query = self.q_ln(query).to(dtype) - key = self.k_ln(key).to(dtype) - (context, attn_weights, past_key_value) = self.attn_fn(query, key, value, self.n_heads, past_key_value=past_key_value, softmax_scale=self.softmax_scale, attn_bias=attn_bias, key_padding_mask=key_padding_mask, is_causal=is_causal, dropout_p=self.attn_dropout_p, training=self.training, needs_weights=needs_weights) - return (self.out_proj(context), attn_weights, past_key_value) - -class MultiQueryAttention(nn.Module): - """Multi-Query self attention. - - Using torch or triton attention implemetation enables user to also use - additive bias. - """ - - def __init__(self, d_model: int, n_heads: int, attn_impl: str='triton', clip_qkv: Optional[float]=None, qk_ln: bool=False, softmax_scale: Optional[float]=None, attn_pdrop: float=0.0, low_precision_layernorm: bool=False, verbose: int=0, device: Optional[str]=None): - super().__init__() - self.attn_impl = attn_impl - self.clip_qkv = clip_qkv - self.qk_ln = qk_ln - self.d_model = d_model - self.n_heads = n_heads - self.head_dim = d_model // n_heads - self.softmax_scale = softmax_scale - if self.softmax_scale is None: - self.softmax_scale = 1 / math.sqrt(self.head_dim) - self.attn_dropout_p = attn_pdrop - self.Wqkv = nn.Linear(d_model, d_model + 2 * self.head_dim, device=device) - fuse_splits = (d_model, d_model + self.head_dim) - self.Wqkv._fused = (0, fuse_splits) - if self.qk_ln: - layernorm_class = LPLayerNorm if low_precision_layernorm else nn.LayerNorm - self.q_ln = layernorm_class(d_model, device=device) - self.k_ln = layernorm_class(self.head_dim, device=device) - if self.attn_impl == 'flash': - self.attn_fn = flash_attn_fn - elif self.attn_impl == 'triton': - self.attn_fn = triton_flash_attn_fn - if verbose: - warnings.warn('While `attn_impl: triton` can be faster than `attn_impl: flash` ' + 'it uses more memory. When training larger models this can trigger ' + 'alloc retries which hurts performance. If encountered, we recommend ' + 'using `attn_impl: flash` if your model does not use `alibi` or `prefix_lm`.') - elif self.attn_impl == 'torch': - self.attn_fn = scaled_multihead_dot_product_attention - if torch.cuda.is_available() and verbose: - warnings.warn('Using `attn_impl: torch`. 
If your model does not use `alibi` or ' + '`prefix_lm` we recommend using `attn_impl: flash` otherwise ' + 'we recommend using `attn_impl: triton`.') - else: - raise ValueError(f'attn_impl={attn_impl!r} is an invalid setting.') - self.out_proj = nn.Linear(self.d_model, self.d_model, device=device) - self.out_proj._is_residual = True - - def forward(self, x, past_key_value=None, attn_bias=None, attention_mask=None, is_causal=True, needs_weights=False): - qkv = self.Wqkv(x) - if self.clip_qkv: - qkv.clamp_(min=-self.clip_qkv, max=self.clip_qkv) - (query, key, value) = qkv.split([self.d_model, self.head_dim, self.head_dim], dim=2) - key_padding_mask = attention_mask - if self.qk_ln: - dtype = query.dtype - query = self.q_ln(query).to(dtype) - key = self.k_ln(key).to(dtype) - (context, attn_weights, past_key_value) = self.attn_fn(query, key, value, self.n_heads, past_key_value=past_key_value, softmax_scale=self.softmax_scale, attn_bias=attn_bias, key_padding_mask=key_padding_mask, is_causal=is_causal, dropout_p=self.attn_dropout_p, training=self.training, needs_weights=needs_weights, multiquery=True) - return (self.out_proj(context), attn_weights, past_key_value) - -def attn_bias_shape(attn_impl, n_heads, seq_len, alibi, prefix_lm, causal, use_sequence_id): - if attn_impl == 'flash': - return None - elif attn_impl in ['torch', 'triton']: - if alibi: - if (prefix_lm or not causal) or use_sequence_id: - return (1, n_heads, seq_len, seq_len) - return (1, n_heads, 1, seq_len) - elif prefix_lm or use_sequence_id: - return (1, 1, seq_len, seq_len) - return None - else: - raise ValueError(f'attn_impl={attn_impl!r} is an invalid setting.') - -def build_attn_bias(attn_impl, attn_bias, n_heads, seq_len, causal=False, alibi=False, alibi_bias_max=8): - if attn_impl == 'flash': - return None - elif attn_impl in ['torch', 'triton']: - if alibi: - (device, dtype) = (attn_bias.device, attn_bias.dtype) - attn_bias = attn_bias.add(build_alibi_bias(n_heads, seq_len, full=not causal, alibi_bias_max=alibi_bias_max, device=device, dtype=dtype)) - return attn_bias - else: - raise ValueError(f'attn_impl={attn_impl!r} is an invalid setting.') - -def gen_slopes(n_heads, alibi_bias_max=8, device=None): - _n_heads = 2 ** math.ceil(math.log2(n_heads)) - m = torch.arange(1, _n_heads + 1, dtype=torch.float32, device=device) - m = m.mul(alibi_bias_max / _n_heads) - slopes = 1.0 / torch.pow(2, m) - if _n_heads != n_heads: - slopes = torch.concat([slopes[1::2], slopes[::2]])[:n_heads] - return slopes.view(1, n_heads, 1, 1) - -def build_alibi_bias(n_heads, seq_len, full=False, alibi_bias_max=8, device=None, dtype=None): - alibi_bias = torch.arange(1 - seq_len, 1, dtype=torch.int32, device=device).view(1, 1, 1, seq_len) - if full: - alibi_bias = alibi_bias - torch.arange(1 - seq_len, 1, dtype=torch.int32, device=device).view(1, 1, seq_len, 1) - alibi_bias = alibi_bias.abs().mul(-1) - slopes = gen_slopes(n_heads, alibi_bias_max, device=device) - alibi_bias = alibi_bias * slopes - return alibi_bias.to(dtype=dtype) -ATTN_CLASS_REGISTRY = {'multihead_attention': MultiheadAttention, 'multiquery_attention': MultiQueryAttention} \ No newline at end of file diff --git a/spaces/tttarun/ocr_voter_list/README.md b/spaces/tttarun/ocr_voter_list/README.md deleted file mode 100644 index 00fd023ce70ae8018aa40caad4738cd79828982e..0000000000000000000000000000000000000000 --- a/spaces/tttarun/ocr_voter_list/README.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -title: Ocr Voter List -emoji: 📈 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 
3.36.1 -app_file: app.py -pinned: false -license: mit ---- - -# ocr_voter_list -pdf to text of electoral roll. - -## Huggingface demo -Check https://tttarun-ocr-voter-list.hf.space/ for no-code usage diff --git a/spaces/uSerNameDDHL/bingo/postcss.config.js b/spaces/uSerNameDDHL/bingo/postcss.config.js deleted file mode 100644 index 33ad091d26d8a9dc95ebdf616e217d985ec215b8..0000000000000000000000000000000000000000 --- a/spaces/uSerNameDDHL/bingo/postcss.config.js +++ /dev/null @@ -1,6 +0,0 @@ -module.exports = { - plugins: { - tailwindcss: {}, - autoprefixer: {}, - }, -} diff --git a/spaces/ulysses115/diffsvc_test/trans_key.py b/spaces/ulysses115/diffsvc_test/trans_key.py deleted file mode 100644 index c803a6acdbaa065cb75ce0a935b023780ab37026..0000000000000000000000000000000000000000 --- a/spaces/ulysses115/diffsvc_test/trans_key.py +++ /dev/null @@ -1,61 +0,0 @@ -head_list = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"] - - -def trans_f0_seq(feature_pit, transform): - feature_pit = feature_pit * 2 ** (transform / 12) - return round(feature_pit, 1) - - -def move_key(raw_data, mv_key): - head = raw_data[:-1] - body = int(raw_data[-1]) - new_head_index = head_list.index(head) + mv_key - while new_head_index < 0: - body -= 1 - new_head_index += 12 - while new_head_index > 11: - body += 1 - new_head_index -= 12 - result_data = head_list[new_head_index] + str(body) - return result_data - - -def trans_key(raw_data, key): - for i in raw_data: - note_seq_list = i["note_seq"].split(" ") - new_note_seq_list = [] - for note_seq in note_seq_list: - if note_seq != "rest": - new_note_seq = move_key(note_seq, key) - new_note_seq_list.append(new_note_seq) - else: - new_note_seq_list.append(note_seq) - i["note_seq"] = " ".join(new_note_seq_list) - - f0_seq_list = i["f0_seq"].split(" ") - f0_seq_list = [float(x) for x in f0_seq_list] - new_f0_seq_list = [] - for f0_seq in f0_seq_list: - new_f0_seq = trans_f0_seq(f0_seq, key) - new_f0_seq_list.append(str(new_f0_seq)) - i["f0_seq"] = " ".join(new_f0_seq_list) - return raw_data - - -key = -6 -f_w = open("raw.txt", "w", encoding='utf-8') -with open("result.txt", "r", encoding='utf-8') as f: - raw_data = f.readlines() - for raw in raw_data: - raw_list = raw.split("|") - new_note_seq_list = [] - for note_seq in raw_list[3].split(" "): - if note_seq != "rest": - note_seq = note_seq.split("/")[0] if "/" in note_seq else note_seq - new_note_seq = move_key(note_seq, key) - new_note_seq_list.append(new_note_seq) - else: - new_note_seq_list.append(note_seq) - raw_list[3] = " ".join(new_note_seq_list) - f_w.write("|".join(raw_list)) -f_w.close() diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Achieve TQM Excellence with Total Quality Management Book By Jayakumar Pdf 41.md b/spaces/usbethFlerru/sovits-modelsV2/example/Achieve TQM Excellence with Total Quality Management Book By Jayakumar Pdf 41.md deleted file mode 100644 index d4d5827abe8a164b122de3d26e48d8e60530dda8..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Achieve TQM Excellence with Total Quality Management Book By Jayakumar Pdf 41.md +++ /dev/null @@ -1,6 +0,0 @@ -

      diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Crack Keygen FREE Autocad 2013 64bit For Mac Osx Torrent.md b/spaces/usbethFlerru/sovits-modelsV2/example/Crack Keygen FREE Autocad 2013 64bit For Mac Osx Torrent.md deleted file mode 100644 index 2e03c4d3f3ffbde4a2779d3e8c0064aba1f60e79..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Crack Keygen FREE Autocad 2013 64bit For Mac Osx Torrent.md +++ /dev/null @@ -1,6 +0,0 @@ -

      diff --git a/spaces/utec/Spacelmaj/README.md b/spaces/utec/Spacelmaj/README.md deleted file mode 100644 index 347ad0f4a329488df4ac05b273c80360fbc56b17..0000000000000000000000000000000000000000 --- a/spaces/utec/Spacelmaj/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Spacelmaj -emoji: 📚 -colorFrom: indigo -colorTo: gray -sdk: gradio -sdk_version: 2.9.0 -app_file: app.py -pinned: false -license: cc ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/vit/sam/build.md b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/vit/sam/build.md deleted file mode 100644 index faa26eeadd019c7acf46c8ae8d2a158bf503fddf..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/vit/sam/build.md +++ /dev/null @@ -1,29 +0,0 @@ ---- -description: Learn how to build SAM and VIT models with Ultralytics YOLO Docs. Enhance your understanding of computer vision models today!. -keywords: SAM, VIT, computer vision models, build SAM models, build VIT models, Ultralytics YOLO Docs ---- - -## build_sam_vit_h ---- -### ::: ultralytics.vit.sam.build.build_sam_vit_h -

      - -## build_sam_vit_l ---- -### ::: ultralytics.vit.sam.build.build_sam_vit_l -

      - -## build_sam_vit_b ---- -### ::: ultralytics.vit.sam.build.build_sam_vit_b -

      - -## _build_sam ---- -### ::: ultralytics.vit.sam.build._build_sam -

      - -## build_sam ---- -### ::: ultralytics.vit.sam.build.build_sam -

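-
-A minimal usage sketch (hypothetical: the `build_sam` signature and the `sam_b.pt` checkpoint name are assumed from the builder names documented above, not confirmed by this page):
-
-```python
-# Illustrative only -- build_sam is assumed to dispatch to the matching
-# ViT-H / ViT-L / ViT-B builder based on the checkpoint name.
-from ultralytics.vit.sam.build import build_sam
-
-sam_model = build_sam(ckpt="sam_b.pt")  # assumed checkpoint filename
-sam_model.eval()  # switch to inference mode
-```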
      \ No newline at end of file diff --git a/spaces/valhalla/minDALLE/dalle/utils/config.py b/spaces/valhalla/minDALLE/dalle/utils/config.py deleted file mode 100644 index a0d9c9b35b5d243eba7db1424f20bed9e5b10bb6..0000000000000000000000000000000000000000 --- a/spaces/valhalla/minDALLE/dalle/utils/config.py +++ /dev/null @@ -1,123 +0,0 @@ -# ------------------------------------------------------------------------------------ -# Minimal DALL-E -# Copyright (c) 2021 KakaoBrain. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------------------ - -from typing import Optional, List -from dataclasses import dataclass, field -from omegaconf import OmegaConf - - -@dataclass -class DataConfig: - dataset: Optional[str] = None - tokenizer_type: str = 'CharBPE' - context_length: int = 64 - image_resolution: int = 256 - transforms: str = 'dalle-vqvae' - bpe_pdrop: Optional[float] = None - - -@dataclass -class Stage1Hparams: - double_z: bool = False - z_channels: int = 256 - resolution: int = 256 - in_channels: int = 3 - out_ch: int = 3 - ch: int = 128 - ch_mult: List[int] = field(default_factory=lambda: [1, 1, 2, 2, 4]) - num_res_blocks: int = 2 - attn_resolutions: List[int] = field(default_factory=lambda: [16]) - pdrop: float = 0.0 - - -@dataclass -class Stage2Hparams: - embed_dim: int = 1536 - n_layers: int = 42 - n_heads: int = 24 - n_dense_layers: int = 42 - ctx_len_img: int = 256 - ctx_len_txt: int = 64 - embd_pdrop: float = 0.0 - resid_pdrop: float = 0.0 - attn_pdrop: float = 0.0 - mlp_bias: bool = True - attn_bias: bool = True - gelu_use_approx: bool = False - use_head_txt: bool = True - n_classes: Optional[int] = None - - -@dataclass -class Stage1Config: - type: str = 'vqgan' - embed_dim: int = 256 - n_embed: int = 16384 - hparams: Stage1Hparams = Stage1Hparams() - - -@dataclass -class Stage2Config: - type: str = 'transformer1d' - vocab_size_txt: int = 16384 - vocab_size_img: int = 16384 - use_cls_cond: Optional[bool] = None - hparams: Stage2Hparams = Stage2Hparams() - - -@dataclass -class WarmupConfig: - epoch: int = 1 - multiplier: int = 1 - buffer_epoch: int = 0 - min_lr: float = 0.0 - mode: str = 'fix' - peak_lr: float = 1e-4 - start_from_zero: bool = True - - -@dataclass -class OptConfig: - opt_type: str = 'adamW' - base_lr: float = 1e-4 - weight_decay: float = 1e-4 - betas: List[float] = field(default_factory=lambda: [0.9, 0.99]) - grad_clip_norm: float = 1.0 - - sched_type: str = 'cosine' - max_steps: int = 0 - min_lr: float = 0.0 - - -@dataclass -class ExpConfig: - local_batch_size: int = 4 - total_batch_size: int = 512 - valid_batch_size: int = 32 - epochs: int = 10 - save_ckpt_freq: int = 2 - test_freq: int = 1 - use_amp: bool = True - - -@dataclass -class DefaultConfig: - dataset: DataConfig = DataConfig() - stage1: Stage1Config = Stage1Config() - stage2: Stage2Config = Stage2Config() - - -@dataclass -class FineTuningConfig: - dataset: DataConfig = DataConfig() - stage1: Stage1Config = Stage1Config() - stage2: Stage2Config = Stage2Config() - optimizer: OptConfig = OptConfig() - experiment: ExpConfig = ExpConfig() - - -def get_base_config(use_default=True): - return OmegaConf.structured(DefaultConfig if use_default else FineTuningConfig) diff --git a/spaces/vishnun/Colorify/README.md b/spaces/vishnun/Colorify/README.md deleted file mode 100644 index 0468c5d4200cebd4acf026c92fc040d7e7eb8bd7..0000000000000000000000000000000000000000 --- 
a/spaces/vishnun/Colorify/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Colorify -emoji: 🏃 -colorFrom: blue -colorTo: yellow -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/vitaliykinakh/Galaxy_Zoo_Generation/src/app/params.py b/spaces/vitaliykinakh/Galaxy_Zoo_Generation/src/app/params.py deleted file mode 100644 index d75be23ebf38b05d604804829b7c9318ded725bd..0000000000000000000000000000000000000000 --- a/spaces/vitaliykinakh/Galaxy_Zoo_Generation/src/app/params.py +++ /dev/null @@ -1,18 +0,0 @@ -""" -This file contains list of global parameters for the Galaxy Zoo generation app -""" - -device = 'cpu' -size = 64 # generated image size -shape_label = 37 # shape of the input label -n_channels = 3 # number of color channels in image -upsample = True # if true, generated images will be upsampled -noise_dim = 512 # noise size in InfoSCC-GAN -n_basis = 6 # size of additional z vectors in InfoSCC-GAN -y_type = 'real' # type of labels in InfoSCC-GAN -dim_z = 128 # z vector size in BigGAN and cVAE - -path_infoscc_gan = './galaxy-zoo-generation/models/InfoSCC-GAN/generator.pt' -path_biggan = './galaxy-zoo-generation/models/BigGAN/generator.pth' -path_cvae = './galaxy-zoo-generation/models/CVAE/generator.pth' -path_labels = './galaxy-zoo-generation/data/training_solutions_rev1.csv' diff --git a/spaces/vivien/trompeloeil/styles/main.css b/spaces/vivien/trompeloeil/styles/main.css deleted file mode 100644 index 1661ff2d389e8399de312a281ec6aaa5ebe9acba..0000000000000000000000000000000000000000 --- a/spaces/vivien/trompeloeil/styles/main.css +++ /dev/null @@ -1,18 +0,0 @@ -body { - /* remove margins and scroll bars */ - margin: 0; - overflow: hidden; -} - -#scene-container { - /* tell our scene container to take up the full page */ - position: absolute; - width: 100%; - height: 100%; - - /* - Set the container's background color to the same as the scene's - background to prevent flashing on load - */ - background-color: #282828; -} \ No newline at end of file diff --git a/spaces/vumichien/Lip_movement_reading/app.py b/spaces/vumichien/Lip_movement_reading/app.py deleted file mode 100644 index bc901911be086079c9e63960c5c1e5fc5c93338e..0000000000000000000000000000000000000000 --- a/spaces/vumichien/Lip_movement_reading/app.py +++ /dev/null @@ -1,199 +0,0 @@ -import os -import sys - -os.system('git clone https://github.com/facebookresearch/av_hubert.git') -os.chdir('/home/user/app/av_hubert') -os.system('git submodule init') -os.system('git submodule update') -os.chdir('/home/user/app/av_hubert/fairseq') -os.system('pip install ./') -os.system('pip install scipy') -os.system('pip install sentencepiece') -os.system('pip install python_speech_features') -os.system('pip install scikit-video') 
-os.system('pip install transformers') -os.system('pip install gradio==3.12') -os.system('pip install numpy==1.23.3') - - -# sys.path.append('/home/user/app/av_hubert') -sys.path.append('/home/user/app/av_hubert/avhubert') - -print(sys.path) -print(os.listdir()) -print(sys.argv, type(sys.argv)) -sys.argv.append('dummy') - - - -import dlib, cv2, os -import numpy as np -import skvideo -import skvideo.io -from tqdm import tqdm -from preparation.align_mouth import landmarks_interpolate, crop_patch, write_video_ffmpeg -from base64 import b64encode -import torch -import cv2 -import tempfile -from argparse import Namespace -import fairseq -from fairseq import checkpoint_utils, options, tasks, utils -from fairseq.dataclass.configs import GenerationConfig -from huggingface_hub import hf_hub_download -import gradio as gr -from pytube import YouTube - -# os.chdir('/home/user/app/av_hubert/avhubert') - -user_dir = "/home/user/app/av_hubert/avhubert" -utils.import_user_module(Namespace(user_dir=user_dir)) -data_dir = "/home/user/app/video" - -ckpt_path = hf_hub_download('vumichien/AV-HuBERT', 'model.pt') -face_detector_path = "/home/user/app/mmod_human_face_detector.dat" -face_predictor_path = "/home/user/app/shape_predictor_68_face_landmarks.dat" -mean_face_path = "/home/user/app/20words_mean_face.npy" -mouth_roi_path = "/home/user/app/roi.mp4" -modalities = ["video"] -gen_subset = "test" -gen_cfg = GenerationConfig(beam=20) -models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task([ckpt_path]) -models = [model.eval().cuda() if torch.cuda.is_available() else model.eval() for model in models] -saved_cfg.task.modalities = modalities -saved_cfg.task.data = data_dir -saved_cfg.task.label_dir = data_dir -task = tasks.setup_task(saved_cfg.task) -generator = task.build_generator(models, gen_cfg) - -def get_youtube(video_url): - yt = YouTube(video_url) - abs_video_path = yt.streams.filter(progressive=True, file_extension='mp4').order_by('resolution').desc().first().download() - print("Success download video") - print(abs_video_path) - return abs_video_path - -def detect_landmark(image, detector, predictor): - gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY) - face_locations = detector(gray, 1) - coords = None - for (_, face_location) in enumerate(face_locations): - if torch.cuda.is_available(): - rect = face_location.rect - else: - rect = face_location - shape = predictor(gray, rect) - coords = np.zeros((68, 2), dtype=np.int32) - for i in range(0, 68): - coords[i] = (shape.part(i).x, shape.part(i).y) - return coords - -def preprocess_video(input_video_path): - if torch.cuda.is_available(): - detector = dlib.cnn_face_detection_model_v1(face_detector_path) - else: - detector = dlib.get_frontal_face_detector() - - predictor = dlib.shape_predictor(face_predictor_path) - STD_SIZE = (256, 256) - mean_face_landmarks = np.load(mean_face_path) - stablePntsIDs = [33, 36, 39, 42, 45] - videogen = skvideo.io.vread(input_video_path) - frames = np.array([frame for frame in videogen]) - landmarks = [] - for frame in tqdm(frames): - landmark = detect_landmark(frame, detector, predictor) - landmarks.append(landmark) - preprocessed_landmarks = landmarks_interpolate(landmarks) - rois = crop_patch(input_video_path, preprocessed_landmarks, mean_face_landmarks, stablePntsIDs, STD_SIZE, - window_margin=12, start_idx=48, stop_idx=68, crop_height=96, crop_width=96) - write_video_ffmpeg(rois, mouth_roi_path, "/usr/bin/ffmpeg") - return mouth_roi_path - -def predict(process_video): - num_frames = 
int(cv2.VideoCapture(process_video).get(cv2.CAP_PROP_FRAME_COUNT)) - - tsv_cont = ["/\n", f"test-0\t{process_video}\t{None}\t{num_frames}\t{int(16_000*num_frames/25)}\n"] - label_cont = ["DUMMY\n"] - with open(f"{data_dir}/test.tsv", "w") as fo: - fo.write("".join(tsv_cont)) - with open(f"{data_dir}/test.wrd", "w") as fo: - fo.write("".join(label_cont)) - task.load_dataset(gen_subset, task_cfg=saved_cfg.task) - - def decode_fn(x): - dictionary = task.target_dictionary - symbols_ignore = generator.symbols_to_strip_from_output - symbols_ignore.add(dictionary.pad()) - return task.datasets[gen_subset].label_processors[0].decode(x, symbols_ignore) - - itr = task.get_batch_iterator(dataset=task.dataset(gen_subset)).next_epoch_itr(shuffle=False) - sample = next(itr) - if torch.cuda.is_available(): - sample = utils.move_to_cuda(sample) - hypos = task.inference_step(generator, models, sample) - ref = decode_fn(sample['target'][0].int().cpu()) - hypo = hypos[0][0]['tokens'].int().cpu() - hypo = decode_fn(hypo) - return hypo - - -# ---- Gradio Layout ----- -youtube_url_in = gr.Textbox(label="Youtube url", lines=1, interactive=True) -video_in = gr.Video(label="Input Video", mirror_webcam=False, interactive=True) -video_out = gr.Video(label="Audio Visual Video", mirror_webcam=False, interactive=True) -demo = gr.Blocks() -demo.encrypt = False -text_output = gr.Textbox() - -with demo: - gr.Markdown(''' -
      -

      Speech Recognition from Visual Lip Movement by Audio-Visual Hidden Unit BERT Model (AV-HuBERT)

- This space uses AV-HuBERT models from Meta Research to recognize speech from lip movement 🤗 -
      - Audio-Visual Speech Recognition -
      Speech Recognition from visual lip movement -
      -
      -
- ''') - with gr.Row(): - gr.Markdown(''' - ### Reading lip movement from a YouTube link using AV-HuBERT - ##### Step 1a. Download a video from YouTube (Note: the video should be shorter than 10 seconds, otherwise it will be trimmed, and the face should be stable for best results) - ##### Step 1b. You can also upload a video directly - ##### Step 2. Generate landmarks around the mouth area - ##### Step 3. Read the lip movement. - ''') - with gr.Row(): - gr.Markdown(''' - ### You can test with the following examples: - ''') - examples = gr.Examples(examples= - [ "https://www.youtube.com/watch?v=ZXVDnuepW2s", - "https://www.youtube.com/watch?v=X8_glJn1B8o", - "https://www.youtube.com/watch?v=80yqL2KzBVw"], - label="Examples", inputs=[youtube_url_in]) - with gr.Column(): - youtube_url_in.render() - download_youtube_btn = gr.Button("Download YouTube video") - download_youtube_btn.click(get_youtube, [youtube_url_in], [ - video_in]) - print(video_in) - with gr.Row(): - video_in.render() - video_out.render() - with gr.Row(): - detect_landmark_btn = gr.Button("Detect landmark") - detect_landmark_btn.click(preprocess_video, [video_in], [ - video_out]) - predict_btn = gr.Button("Predict") - predict_btn.click(predict, [video_out], [ - text_output]) - with gr.Row(): - # video_lip = gr.Video(label="Audio Visual Video", mirror_webcam=False) - text_output.render() - - -demo.launch(debug=True) \ No newline at end of file diff --git a/spaces/warwickai/fin-perceiver-demo/app.py deleted file mode 100644 index ff661ad8b1180f3fa28fd150d6f5c5843efef8da..0000000000000000000000000000000000000000 --- a/spaces/warwickai/fin-perceiver-demo/app.py +++ /dev/null @@ -1,100 +0,0 @@ -from time import strftime - -import feedparser -import streamlit as st - -from transformers import AutoTokenizer, pipeline, \ - AutoModelForSequenceClassification - - -@st.cache(allow_output_mutation=True, show_spinner=False) -def load_model(): - return AutoModelForSequenceClassification.from_pretrained("warwickai/fin-perceiver") - - -@st.cache(show_spinner=False) -def load_news(feed): - return feedparser.parse(feed).get('entries') - - -def filter_with_sentiment(articles, sentiments): - return filter( - lambda article: article[1].get('label') in sentiments, - articles - ) - - -tokenizer = AutoTokenizer.from_pretrained("warwickai/fin-perceiver") - -with st.spinner('📈 Loading model...'): - model = load_model() - pipe = pipeline('text-classification', model=model, tokenizer=tokenizer) - - -def classify_articles(articles, target_pipeline): - headlines = [article.title for article in articles] - sentiment = target_pipeline(headlines) - - return list(zip(articles, sentiment)) - - -rss_feeds = { - 'yahoo': 'https://finance.yahoo.com/news/rssindex' -} - -sentiment_distribution = { - 'positive': 0, - 'negative': 0, - 'neutral': 0 -} - -st.title('FINPerceiver') - -target_source = st.sidebar.selectbox( - 'Select a financial news source', - rss_feeds.keys()) - -target_sentiments = st.sidebar.multiselect( - label='Select the target sentiments', - options=sentiment_distribution.keys(), - default=sentiment_distribution.keys()) - -with st.spinner('📰 Loading articles...'): - target_articles = sorted( - load_news( - rss_feeds.get(target_source) - ), - key=lambda article: article.published_parsed, - reverse=True - ) - -with st.spinner('⚙️ Analysing articles...'): - classified_articles = classify_articles(target_articles, pipe) - - total_articles = 0 - - for article, sentiment in classified_articles: - total_articles += 1 
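- # note: the per-label counts tallied below are later divided by total_articles * 0.01, i.e. converted to percentages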
- sentiment_distribution[sentiment.get('label')] += 1 - - for sentiment in sentiment_distribution.keys(): - sentiment_distribution[sentiment] /= total_articles * 0.01 - - st.sidebar.subheader('Summary') - st.sidebar.metric("Positive", f"👍 {sentiment_distribution.get('positive'):.2f}%") - st.sidebar.metric("Neutral", f"😐 {sentiment_distribution.get('neutral'):.2f}%") - st.sidebar.metric("Negative", f"👎 {sentiment_distribution.get('negative'):.2f}%") - -for article, sentiment in filter_with_sentiment(classified_articles, target_sentiments): - if 'media_content' in article: - img_url = article.media_content[0].get('url') - st.image(img_url, width=300) - - st.markdown( - f''' - #### {article.title} - Published on {strftime('%H:%M %d/%m/%Y', article.published_parsed)} - - **Sentiment:** {sentiment.get('label').capitalize()} - ''' - ) diff --git a/spaces/wazhendeshiniya/White-box-Cartoonization/app.py deleted file mode 100644 index c55ced56bd87a85f59d1c8ef84b7eca87422720f..0000000000000000000000000000000000000000 --- a/spaces/wazhendeshiniya/White-box-Cartoonization/app.py +++ /dev/null @@ -1,108 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations -import argparse -import functools -import os -import pathlib -import sys -from typing import Callable -import uuid - -import gradio as gr -import huggingface_hub -import numpy as np -import PIL.Image - -from io import BytesIO -from wbc.cartoonize import Cartoonize - -ORIGINAL_REPO_URL = 'https://github.com/SystemErrorWang/White-box-Cartoonization' -TITLE = 'SystemErrorWang/White-box-Cartoonization' -DESCRIPTION = f"""This is a demo for {ORIGINAL_REPO_URL}. - -""" -ARTICLE = """ - -""" - -SAFEHASH = [x for x in "0123456789-abcdefghijklmnopqrstuvwxyz_ABCDEFGHIJKLMNOPQRSTUVWXYZ"] -def compress_UUID(): - ''' - Following http://www.ietf.org/rfc/rfc1738.txt, encode a UUID over an enlarged character set to generate a shorter string - Character set: [0-9a-zA-Z\-_], 64 characters in total - Length: (32-2)/3*2=20 - Note: everyone on earth could use one for 100 years without a repeat (2^120) - :return:String - ''' - row = str(uuid.uuid4()).replace('-', '') - safe_code = '' - for i in range(10): - enbin = "%012d" % int(bin(int(row[i * 3] + row[i * 3 + 1] + row[i * 3 + 2], 16))[2:], 10) - safe_code += (SAFEHASH[int(enbin[0:6], 2)] + SAFEHASH[int(enbin[6:12], 2)]) - safe_code = safe_code.replace('-', '') - return safe_code - - -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser() - parser.add_argument('--device', type=str, default='cpu') - parser.add_argument('--theme', type=str) - parser.add_argument('--live', action='store_true') - parser.add_argument('--share', action='store_true') - parser.add_argument('--port', type=int) - parser.add_argument('--disable-queue', - dest='enable_queue', - action='store_false') - parser.add_argument('--allow-flagging', type=str, default='never') - parser.add_argument('--allow-screenshot', action='store_true') - return parser.parse_args() - -def run( - image, - cartoonize : Cartoonize -) -> tuple[PIL.Image.Image]: - - out_path = compress_UUID()+'.png' - cartoonize.run_sigle(image.name, out_path) - - return PIL.Image.open(out_path) - - -def main(): - gr.close_all() - - args = parse_args() - - cartoonize = Cartoonize(os.path.join(os.path.dirname(os.path.abspath(__file__)),'wbc/saved_models/')) - - func = functools.partial(run, cartoonize=cartoonize) - func = functools.update_wrapper(func, run) - - gr.Interface( - func, - [ - gr.inputs.Image(type='file', label='Input Image'), - ], - [ - gr.outputs.Image( - type='pil', - label='Result'), - ], - # examples=examples, -
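- # note: gr.inputs / gr.outputs above are legacy Gradio namespaces (deprecated in Gradio 3.x); newer code would pass gr.Image components directly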
theme=args.theme, - title=TITLE, - description=DESCRIPTION, - article=ARTICLE, - allow_screenshot=args.allow_screenshot, - allow_flagging=args.allow_flagging, - live=args.live, - ).launch( - enable_queue=args.enable_queue, - server_port=args.port, - share=args.share, - ) - - -if __name__ == '__main__': - main() diff --git a/spaces/whitphx/gradio-static-test/dist/assets/index-b47dd3f5.js b/spaces/whitphx/gradio-static-test/dist/assets/index-b47dd3f5.js deleted file mode 100644 index c3fe5ddf728c5af1fa01046caf6b991189466946..0000000000000000000000000000000000000000 --- a/spaces/whitphx/gradio-static-test/dist/assets/index-b47dd3f5.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as h,i as p,s as v,b as T,a as k,e as q,m as C,l as S,q as m,t as d,o as w,u as z,W as D,Y as E,Z as W,$ as Y,z as f}from"../lite.js";import{T as Z}from"./TabItem.svelte_svelte_type_style_lang-8592a08c.js";/* empty css */function j(s){let e;const i=s[4].default,t=D(i,s,s[8],null);return{c(){t&&t.c()},m(n,o){t&&t.m(n,o),e=!0},p(n,o){t&&t.p&&(!e||o&256)&&E(t,i,n,n[8],e?Y(i,n[8],o,null):W(n[8]),null)},i(n){e||(m(t,n),e=!0)},o(n){d(t,n),e=!1},d(n){t&&t.d(n)}}}function A(s){let e,i,t;function n(l){s[5](l)}let o={visible:s[1],elem_id:s[2],elem_classes:s[3],$$slots:{default:[j]},$$scope:{ctx:s}};return s[0]!==void 0&&(o.selected=s[0]),e=new Z({props:o}),T.push(()=>k(e,"selected",n)),e.$on("change",s[6]),e.$on("select",s[7]),{c(){q(e.$$.fragment)},m(l,c){C(e,l,c),t=!0},p(l,[c]){const _={};c&2&&(_.visible=l[1]),c&4&&(_.elem_id=l[2]),c&8&&(_.elem_classes=l[3]),c&256&&(_.$$scope={dirty:c,ctx:l}),!i&&c&1&&(i=!0,_.selected=l[0],S(()=>i=!1)),e.$set(_)},i(l){t||(m(e.$$.fragment,l),t=!0)},o(l){d(e.$$.fragment,l),t=!1},d(l){w(e,l)}}}function B(s,e,i){let{$$slots:t={},$$scope:n}=e;const o=z();let{visible:l=!0}=e,{elem_id:c=""}=e,{elem_classes:_=[]}=e,{selected:u}=e;function r(a){u=a,i(0,u)}function b(a){f.call(this,s,a)}function g(a){f.call(this,s,a)}return s.$$set=a=>{"visible"in a&&i(1,l=a.visible),"elem_id"in a&&i(2,c=a.elem_id),"elem_classes"in a&&i(3,_=a.elem_classes),"selected"in a&&i(0,u=a.selected),"$$scope"in a&&i(8,n=a.$$scope)},s.$$.update=()=>{s.$$.dirty&1&&o("prop_change",{selected:u})},[u,l,c,_,t,r,b,g,n]}class F extends h{constructor(e){super(),p(this,e,B,A,v,{visible:1,elem_id:2,elem_classes:3,selected:0})}}const J=F,K=["static"];export{J as Component,K as modes}; -//# sourceMappingURL=index-b47dd3f5.js.map diff --git a/spaces/wwwwwwww2/bingo/src/lib/hooks/use-enter-submit.tsx b/spaces/wwwwwwww2/bingo/src/lib/hooks/use-enter-submit.tsx deleted file mode 100644 index d66b2d3253baff164235d4ca791aae6d84721835..0000000000000000000000000000000000000000 --- a/spaces/wwwwwwww2/bingo/src/lib/hooks/use-enter-submit.tsx +++ /dev/null @@ -1,23 +0,0 @@ -import { useRef, type RefObject } from 'react' - -export function useEnterSubmit(): { - formRef: RefObject - onKeyDown: (event: React.KeyboardEvent) => void -} { - const formRef = useRef(null) - - const handleKeyDown = ( - event: React.KeyboardEvent - ): void => { - if ( - event.key === 'Enter' && - !event.shiftKey && - !event.nativeEvent.isComposing - ) { - formRef.current?.requestSubmit() - event.preventDefault() - } - } - - return { formRef, onKeyDown: handleKeyDown } -} diff --git a/spaces/wwydmanski/esmfold/README.md b/spaces/wwydmanski/esmfold/README.md deleted file mode 100644 index 674101f0b6a46eacb47b89a5f64fb8ec2538622b..0000000000000000000000000000000000000000 --- a/spaces/wwydmanski/esmfold/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Esmfold -emoji: ⚡ -colorFrom: 
gray -colorTo: blue -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/wzq10314/VITS-Umamusume-voice-synthesizer1/commons.py b/spaces/wzq10314/VITS-Umamusume-voice-synthesizer1/commons.py deleted file mode 100644 index 2153153f527d94e2abb641ea00c80b518ff6c5bd..0000000000000000000000000000000000000000 --- a/spaces/wzq10314/VITS-Umamusume-voice-synthesizer1/commons.py +++ /dev/null @@ -1,97 +0,0 @@ -import math -import torch -from torch.nn import functional as F -import torch.jit - - -def script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - -torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path diff --git a/spaces/xdecoder/Instruct-X-Decoder/xdecoder/body/build.py b/spaces/xdecoder/Instruct-X-Decoder/xdecoder/body/build.py deleted file mode 100644 index fb35e4cc266c64418f4b21e9d95c7844417a2a56..0000000000000000000000000000000000000000 --- a/spaces/xdecoder/Instruct-X-Decoder/xdecoder/body/build.py +++ /dev/null @@ -1,13 +0,0 @@ -from .registry import model_entrypoints -from .registry import is_model - -from .xdecoder_head import * - - -def build_xdecoder_head(config, *args, **kwargs): - model_name = 
config['MODEL']['HEAD'] - if not is_model(model_name): - raise ValueError(f'Unknown model: {model_name}') - - body = model_entrypoints(model_name)(config, *args, **kwargs) - return body \ No newline at end of file diff --git a/spaces/xeonm/image-to-audio-story/README.md deleted file mode 100644 index 98a6839b0b63d4c00e7909cda7feedc260ed9cd3..0000000000000000000000000000000000000000 --- a/spaces/xeonm/image-to-audio-story/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Image To Audio Story -emoji: 😻 -colorFrom: gray -colorTo: gray -sdk: streamlit -sdk_version: 1.26.0 -app_file: app.py -pinned: false -license: cc0-1.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/xiaoyun235/White-box-Cartoonization/app.py deleted file mode 100644 index c55ced56bd87a85f59d1c8ef84b7eca87422720f..0000000000000000000000000000000000000000 --- a/spaces/xiaoyun235/White-box-Cartoonization/app.py +++ /dev/null @@ -1,108 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations -import argparse -import functools -import os -import pathlib -import sys -from typing import Callable -import uuid - -import gradio as gr -import huggingface_hub -import numpy as np -import PIL.Image - -from io import BytesIO -from wbc.cartoonize import Cartoonize - -ORIGINAL_REPO_URL = 'https://github.com/SystemErrorWang/White-box-Cartoonization' -TITLE = 'SystemErrorWang/White-box-Cartoonization' -DESCRIPTION = f"""This is a demo for {ORIGINAL_REPO_URL}. - -""" -ARTICLE = """ - -""" - -SAFEHASH = [x for x in "0123456789-abcdefghijklmnopqrstuvwxyz_ABCDEFGHIJKLMNOPQRSTUVWXYZ"] -def compress_UUID(): - ''' - Following http://www.ietf.org/rfc/rfc1738.txt, encode a UUID over an enlarged character set to generate a shorter string - Character set: [0-9a-zA-Z\-_], 64 characters in total - Length: (32-2)/3*2=20 - Note: everyone on earth could use one for 100 years without a repeat (2^120) - :return:String - ''' - row = str(uuid.uuid4()).replace('-', '') - safe_code = '' - for i in range(10): - enbin = "%012d" % int(bin(int(row[i * 3] + row[i * 3 + 1] + row[i * 3 + 2], 16))[2:], 10) - safe_code += (SAFEHASH[int(enbin[0:6], 2)] + SAFEHASH[int(enbin[6:12], 2)]) - safe_code = safe_code.replace('-', '') - return safe_code - - -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser() - parser.add_argument('--device', type=str, default='cpu') - parser.add_argument('--theme', type=str) - parser.add_argument('--live', action='store_true') - parser.add_argument('--share', action='store_true') - parser.add_argument('--port', type=int) - parser.add_argument('--disable-queue', - dest='enable_queue', - action='store_false') - parser.add_argument('--allow-flagging', type=str, default='never') - parser.add_argument('--allow-screenshot', action='store_true') - return parser.parse_args() - -def run( - image, - cartoonize : Cartoonize -) -> tuple[PIL.Image.Image]: - - out_path = compress_UUID()+'.png' - cartoonize.run_sigle(image.name, out_path) - - return PIL.Image.open(out_path) - - -def main(): - gr.close_all() - - args = parse_args() - - cartoonize = Cartoonize(os.path.join(os.path.dirname(os.path.abspath(__file__)),'wbc/saved_models/')) - - func = functools.partial(run, cartoonize=cartoonize) - func = functools.update_wrapper(func, run) - - gr.Interface( - func, - [ - gr.inputs.Image(type='file', label='Input Image'), - ], - [ - gr.outputs.Image( - type='pil', - label='Result'), - ], - # examples=examples, - theme=args.theme, - title=TITLE, - description=DESCRIPTION, - article=ARTICLE, - 
allow_screenshot=args.allow_screenshot, - allow_flagging=args.allow_flagging, - live=args.live, - ).launch( - enable_queue=args.enable_queue, - server_port=args.port, - share=args.share, - ) - - -if __name__ == '__main__': - main() diff --git a/spaces/xl2533/FinDoc/build_index/unit_test/test_faiss.py deleted file mode 100644 index 3b1d79fde3063d85f7ce79e08988768b128a2711..0000000000000000000000000000000000000000 --- a/spaces/xl2533/FinDoc/build_index/unit_test/test_faiss.py +++ /dev/null @@ -1,33 +0,0 @@ -# -*-coding:utf-8 -*- - -from langchain.vectorstores import FAISS -from langchain.embeddings import HuggingFaceEmbeddings, CohereEmbeddings -from key import CoherenceKey - -## MPNET -model = HuggingFaceEmbeddings() -db = FAISS.load_local('./output/半导体', model) - -docs = db.similarity_search('东吴证券观点') -print(docs[0].page_content) - - -docs = db.similarity_search('德邦证券') -print(docs[0].page_content) - - -## Cohere -model = CohereEmbeddings(cohere_api_key=CoherenceKey) -db = FAISS.load_local('./output/半导体', model) - -docs = db.similarity_search('半导体指数行情') -print(docs[0].page_content) - - -docs = db.similarity_search('关于行业光刻胶相关新闻') -print(docs[0].page_content) - - - -docs = db.similarity_search('2023年GDP增速预测') -print(docs[0].page_content) diff --git a/spaces/xxx1/VQA_CAP_GPT/models/VLE/configuration_vle.py deleted file mode 100644 index 8ea906f633defa3280f77aa47dcdb908b2620806..0000000000000000000000000000000000000000 --- a/spaces/xxx1/VQA_CAP_GPT/models/VLE/configuration_vle.py +++ /dev/null @@ -1,143 +0,0 @@ -# coding=utf-8 -# Copyright The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" VLE model configuration""" - -import copy - -from transformers.configuration_utils import PretrainedConfig -from transformers.utils import logging -from transformers.models.auto.configuration_auto import AutoConfig -from transformers.models.clip.configuration_clip import CLIPVisionConfig -from typing import Union, Dict - -logger = logging.get_logger(__name__) - - -class VLEConfig(PretrainedConfig): - r""" - [`VLEConfig`] is the configuration class to store the configuration of a - [`VLEModel`]. It is used to instantiate a [`VLEModel`] according to the - specified arguments, defining the text model and vision model configs. - - Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the - documentation from [`PretrainedConfig`] for more information. - - Args: - text_config (`dict`): - Dictionary of configuration options that defines the text model config. - vision_config (`dict`): - Dictionary of configuration options that defines the vision model config. - #TODO - logit_scale_init_value (`float`, *optional*, defaults to 2.6592): - The initial value of the *logit_scale* parameter. Default is used as per the original CLIP implementation. - kwargs (*optional*): - Dictionary of keyword arguments. 
- - Examples: - - ```python - >>> from transformers import ViTConfig, BertConfig - >>> from configuration_vle import VLEConfig - >>> from modeling_vle import VLEModel - >>> # Initializing a BERT and ViT configuration - >>> config_vision = ViTConfig() - >>> config_text = BertConfig() - - >>> config = VLEConfig.from_vision_text_configs(config_vision, config_text) #TODO - - >>> # Initializing a BERT and ViT model (with random weights) - >>> model = VLEModel(config=config) - - >>> # Accessing the model configuration - >>> config_vision = model.config.vision_config - >>> config_text = model.config.text_config - - >>> # Saving the model, including its configuration - >>> model.save_pretrained("vit-bert") - - >>> # Loading model and config from pretrained folder - >>> vision_text_config = VLEConfig.from_pretrained("vit-bert") - >>> model = VLEModel.from_pretrained("vit-bert", config=vision_text_config) - ```""" - - model_type = "vle" - is_composition = True - - def __init__( - self, - text_config: Union[PretrainedConfig, Dict], - vision_config: Union[PretrainedConfig, Dict], - num_token_types=2, - hidden_size=768, - num_hidden_layers=6, - num_attention_heads=12, - intermediate_size=3072, - hidden_act="gelu", - hidden_dropout_prob=0.1, - attention_probs_dropout_prob=0.1, - initializer_range=0.02, - layer_norm_eps=1e-12, - classifier_dropout=None, - **kwargs): - super().__init__(**kwargs) - - if not isinstance(text_config,PretrainedConfig): - text_model_type = text_config.pop('model_type') - text_config = AutoConfig.for_model(text_model_type, **text_config) - self.text_config = text_config - - if not isinstance(vision_config, PretrainedConfig): - vision_model_type = vision_config.pop('model_type') - if vision_model_type == "clip": - vision_config = AutoConfig.for_model(vision_model_type, **vision_config).vision_config - elif vision_model_type == "clip_vision_model": - vision_config = CLIPVisionConfig(**vision_config) - else: - vision_config = AutoConfig.for_model(vision_model_type, **vision_config) - self.vision_config = vision_config - else: - vision_model_type = vision_config.model_type - if vision_model_type== "clip": - vision_config = vision_config.vision_config - self.vision_config = vision_config - - - - # co-attention - self.num_token_types=num_token_types - self.hidden_size=hidden_size - self.num_hidden_layers=num_hidden_layers - self.num_attention_heads=num_attention_heads - self.intermediate_size=intermediate_size - self.hidden_act=hidden_act - self.hidden_dropout_prob=hidden_dropout_prob - self.attention_probs_dropout_prob=attention_probs_dropout_prob - self.initializer_range=initializer_range - self.layer_norm_eps=layer_norm_eps - self.classifier_dropout=classifier_dropout - - - def to_dict(self): - """ - Serializes this instance to a Python dictionary. Override the default [`~PretrainedConfig.to_dict`]. 
- - Returns: - `Dict[str, any]`: Dictionary of all the attributes that make up this configuration instance, - """ - output = copy.deepcopy(self.__dict__) - output["vision_config"] = self.vision_config.to_dict() - output["text_config"] = self.text_config.to_dict() - output["model_type"] = self.__class__.model_type - return output diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/GroundingDINO/groundingdino/models/GroundingDINO/fuse_modules.py b/spaces/yizhangliu/Grounded-Segment-Anything/GroundingDINO/groundingdino/models/GroundingDINO/fuse_modules.py deleted file mode 100644 index 2753b3ddee43c7a9fe28d1824db5d786e7e1ad59..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/GroundingDINO/groundingdino/models/GroundingDINO/fuse_modules.py +++ /dev/null @@ -1,297 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ - -import torch -import torch.nn as nn -import torch.nn.functional as F -from timm.models.layers import DropPath - - -class FeatureResizer(nn.Module): - """ - This class takes as input a set of embeddings of dimension C1 and outputs a set of - embedding of dimension C2, after a linear transformation, dropout and normalization (LN). - """ - - def __init__(self, input_feat_size, output_feat_size, dropout, do_ln=True): - super().__init__() - self.do_ln = do_ln - # Object feature encoding - self.fc = nn.Linear(input_feat_size, output_feat_size, bias=True) - self.layer_norm = nn.LayerNorm(output_feat_size, eps=1e-12) - self.dropout = nn.Dropout(dropout) - - def forward(self, encoder_features): - x = self.fc(encoder_features) - if self.do_ln: - x = self.layer_norm(x) - output = self.dropout(x) - return output - - -def l1norm(X, dim, eps=1e-8): - """L1-normalize columns of X""" - norm = torch.abs(X).sum(dim=dim, keepdim=True) + eps - X = torch.div(X, norm) - return X - - -def l2norm(X, dim, eps=1e-8): - """L2-normalize columns of X""" - norm = torch.pow(X, 2).sum(dim=dim, keepdim=True).sqrt() + eps - X = torch.div(X, norm) - return X - - -def func_attention(query, context, smooth=1, raw_feature_norm="softmax", eps=1e-8): - """ - query: (n_context, queryL, d) - context: (n_context, sourceL, d) - """ - batch_size_q, queryL = query.size(0), query.size(1) - batch_size, sourceL = context.size(0), context.size(1) - - # Get attention - # --> (batch, d, queryL) - queryT = torch.transpose(query, 1, 2) - - # (batch, sourceL, d)(batch, d, queryL) - # --> (batch, sourceL, queryL) - attn = torch.bmm(context, queryT) - if raw_feature_norm == "softmax": - # --> (batch*sourceL, queryL) - attn = attn.view(batch_size * sourceL, queryL) - attn = nn.Softmax()(attn) - # --> (batch, sourceL, queryL) - attn = attn.view(batch_size, sourceL, queryL) - elif raw_feature_norm == "l2norm": - attn = l2norm(attn, 2) - elif raw_feature_norm == "clipped_l2norm": - attn = nn.LeakyReLU(0.1)(attn) - attn = l2norm(attn, 2) - else: - raise ValueError("unknown first norm type:", raw_feature_norm) - # --> (batch, queryL, sourceL) - attn = torch.transpose(attn, 1, 2).contiguous() - # --> (batch*queryL, sourceL) - attn = attn.view(batch_size * queryL, sourceL) - attn = nn.Softmax()(attn * smooth) - # --> (batch, queryL, sourceL) - attn = attn.view(batch_size, queryL, sourceL) - # --> 
(batch, sourceL, queryL) - attnT = torch.transpose(attn, 1, 2).contiguous() - - # --> (batch, d, sourceL) - contextT = torch.transpose(context, 1, 2) - # (batch x d x sourceL)(batch x sourceL x queryL) - # --> (batch, d, queryL) - weightedContext = torch.bmm(contextT, attnT) - # --> (batch, queryL, d) - weightedContext = torch.transpose(weightedContext, 1, 2) - - return weightedContext, attnT - - -class BiMultiHeadAttention(nn.Module): - def __init__(self, v_dim, l_dim, embed_dim, num_heads, dropout=0.1, cfg=None): - super(BiMultiHeadAttention, self).__init__() - - self.embed_dim = embed_dim - self.num_heads = num_heads - self.head_dim = embed_dim // num_heads - self.v_dim = v_dim - self.l_dim = l_dim - - assert ( - self.head_dim * self.num_heads == self.embed_dim - ), f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim} and `num_heads`: {self.num_heads})." - self.scale = self.head_dim ** (-0.5) - self.dropout = dropout - - self.v_proj = nn.Linear(self.v_dim, self.embed_dim) - self.l_proj = nn.Linear(self.l_dim, self.embed_dim) - self.values_v_proj = nn.Linear(self.v_dim, self.embed_dim) - self.values_l_proj = nn.Linear(self.l_dim, self.embed_dim) - - self.out_v_proj = nn.Linear(self.embed_dim, self.v_dim) - self.out_l_proj = nn.Linear(self.embed_dim, self.l_dim) - - self.stable_softmax_2d = True - self.clamp_min_for_underflow = True - self.clamp_max_for_overflow = True - - self._reset_parameters() - - def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int): - return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous() - - def _reset_parameters(self): - nn.init.xavier_uniform_(self.v_proj.weight) - self.v_proj.bias.data.fill_(0) - nn.init.xavier_uniform_(self.l_proj.weight) - self.l_proj.bias.data.fill_(0) - nn.init.xavier_uniform_(self.values_v_proj.weight) - self.values_v_proj.bias.data.fill_(0) - nn.init.xavier_uniform_(self.values_l_proj.weight) - self.values_l_proj.bias.data.fill_(0) - nn.init.xavier_uniform_(self.out_v_proj.weight) - self.out_v_proj.bias.data.fill_(0) - nn.init.xavier_uniform_(self.out_l_proj.weight) - self.out_l_proj.bias.data.fill_(0) - - def forward(self, v, l, attention_mask_v=None, attention_mask_l=None): - """_summary_ - - Args: - v (_type_): bs, n_img, dim - l (_type_): bs, n_text, dim - attention_mask_v (_type_, optional): _description_. bs, n_img - attention_mask_l (_type_, optional): _description_. 
bs, n_text - - Returns: - _type_: _description_ - """ - # if os.environ.get('IPDB_SHILONG_DEBUG', None) == 'INFO': - # import ipdb; ipdb.set_trace() - bsz, tgt_len, _ = v.size() - - query_states = self.v_proj(v) * self.scale - key_states = self._shape(self.l_proj(l), -1, bsz) - value_v_states = self._shape(self.values_v_proj(v), -1, bsz) - value_l_states = self._shape(self.values_l_proj(l), -1, bsz) - - proj_shape = (bsz * self.num_heads, -1, self.head_dim) - query_states = self._shape(query_states, tgt_len, bsz).view(*proj_shape) - key_states = key_states.view(*proj_shape) - value_v_states = value_v_states.view(*proj_shape) - value_l_states = value_l_states.view(*proj_shape) - - src_len = key_states.size(1) - attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) # bs*nhead, nimg, ntxt - - if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len): - raise ValueError( - f"Attention weights should be of size {(bsz * self.num_heads, tgt_len, src_len)}, but is {attn_weights.size()}" - ) - - if self.stable_softmax_2d: - attn_weights = attn_weights - attn_weights.max() - - if self.clamp_min_for_underflow: - attn_weights = torch.clamp( - attn_weights, min=-50000 - ) # Do not increase -50000, data type half has quite limited range - if self.clamp_max_for_overflow: - attn_weights = torch.clamp( - attn_weights, max=50000 - ) # Do not increase 50000, data type half has quite limited range - - attn_weights_T = attn_weights.transpose(1, 2) - attn_weights_l = attn_weights_T - torch.max(attn_weights_T, dim=-1, keepdim=True)[0] - if self.clamp_min_for_underflow: - attn_weights_l = torch.clamp( - attn_weights_l, min=-50000 - ) # Do not increase -50000, data type half has quite limited range - if self.clamp_max_for_overflow: - attn_weights_l = torch.clamp( - attn_weights_l, max=50000 - ) # Do not increase 50000, data type half has quite limited range - - # mask vison for language - if attention_mask_v is not None: - attention_mask_v = ( - attention_mask_v[:, None, None, :].repeat(1, self.num_heads, 1, 1).flatten(0, 1) - ) - attn_weights_l.masked_fill_(attention_mask_v, float("-inf")) - - attn_weights_l = attn_weights_l.softmax(dim=-1) - - # mask language for vision - if attention_mask_l is not None: - attention_mask_l = ( - attention_mask_l[:, None, None, :].repeat(1, self.num_heads, 1, 1).flatten(0, 1) - ) - attn_weights.masked_fill_(attention_mask_l, float("-inf")) - attn_weights_v = attn_weights.softmax(dim=-1) - - attn_probs_v = F.dropout(attn_weights_v, p=self.dropout, training=self.training) - attn_probs_l = F.dropout(attn_weights_l, p=self.dropout, training=self.training) - - attn_output_v = torch.bmm(attn_probs_v, value_l_states) - attn_output_l = torch.bmm(attn_probs_l, value_v_states) - - if attn_output_v.size() != (bsz * self.num_heads, tgt_len, self.head_dim): - raise ValueError( - f"`attn_output_v` should be of size {(bsz, self.num_heads, tgt_len, self.head_dim)}, but is {attn_output_v.size()}" - ) - - if attn_output_l.size() != (bsz * self.num_heads, src_len, self.head_dim): - raise ValueError( - f"`attn_output_l` should be of size {(bsz, self.num_heads, src_len, self.head_dim)}, but is {attn_output_l.size()}" - ) - - attn_output_v = attn_output_v.view(bsz, self.num_heads, tgt_len, self.head_dim) - attn_output_v = attn_output_v.transpose(1, 2) - attn_output_v = attn_output_v.reshape(bsz, tgt_len, self.embed_dim) - - attn_output_l = attn_output_l.view(bsz, self.num_heads, src_len, self.head_dim) - attn_output_l = attn_output_l.transpose(1, 2) - attn_output_l = 
attn_output_l.reshape(bsz, src_len, self.embed_dim) - - attn_output_v = self.out_v_proj(attn_output_v) - attn_output_l = self.out_l_proj(attn_output_l) - - return attn_output_v, attn_output_l - - -# Bi-Direction MHA (text->image, image->text) -class BiAttentionBlock(nn.Module): - def __init__( - self, - v_dim, - l_dim, - embed_dim, - num_heads, - dropout=0.1, - drop_path=0.0, - init_values=1e-4, - cfg=None, - ): - """ - Inputs: - embed_dim - Dimensionality of input and attention feature vectors - hidden_dim - Dimensionality of hidden layer in feed-forward network - (usually 2-4x larger than embed_dim) - num_heads - Number of heads to use in the Multi-Head Attention block - dropout - Amount of dropout to apply in the feed-forward network - """ - super(BiAttentionBlock, self).__init__() - - # pre layer norm - self.layer_norm_v = nn.LayerNorm(v_dim) - self.layer_norm_l = nn.LayerNorm(l_dim) - self.attn = BiMultiHeadAttention( - v_dim=v_dim, l_dim=l_dim, embed_dim=embed_dim, num_heads=num_heads, dropout=dropout - ) - - # add layer scale for training stability - self.drop_path = DropPath(drop_path) if drop_path > 0.0 else nn.Identity() - self.gamma_v = nn.Parameter(init_values * torch.ones((v_dim)), requires_grad=True) - self.gamma_l = nn.Parameter(init_values * torch.ones((l_dim)), requires_grad=True) - - def forward(self, v, l, attention_mask_v=None, attention_mask_l=None): - v = self.layer_norm_v(v) - l = self.layer_norm_l(l) - delta_v, delta_l = self.attn( - v, l, attention_mask_v=attention_mask_v, attention_mask_l=attention_mask_l - ) - # v, l = v + delta_v, l + delta_l - v = v + self.drop_path(self.gamma_v * delta_v) - l = l + self.drop_path(self.gamma_l * delta_l) - return v, l - - # def forward(self, v:List[torch.Tensor], l, attention_mask_v=None, attention_mask_l=None) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/kernels/yoso/common_cuda_device.h b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/kernels/yoso/common_cuda_device.h deleted file mode 100644 index 6674f93afdc25ab35c5d83881d00028bcf2989fc..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/kernels/yoso/common_cuda_device.h +++ /dev/null @@ -1,79 +0,0 @@ - -#include "common.h" - -template -__device__ int set_insert(T *set, int set_size, T value) { - int slot = value % set_size; - int start_slot = slot; - while (true) { - T prev = atomicCAS(&set[slot], EMPTY_VALUE, value); - if (prev == EMPTY_VALUE || prev == value) { - return slot; - } - slot = (slot + 1) % set_size; - if (slot == start_slot) { - return -1; - } - } - return -1; -} - -template -__device__ int set_lookup(T *set, int set_size, T value) { - int slot = value % set_size; - int start_slot = slot; - while (true) { - if (set[slot] == value) { - return slot; - } - slot = (slot + 1) % set_size; - if (slot == start_slot) { - return -1; - } - } - return -1; -} - -template -__device__ void init_buffer(T init_value, T *buffer, int buffer_size, int num_threads, int thread_id) { - __syncthreads(); - for (int i = 0; i < buffer_size; i = i + num_threads) { - int offset_idx = i + thread_id; - if (offset_idx < buffer_size) { - buffer[offset_idx] = init_value; - } - } - __syncthreads(); -} - -template -__device__ void copy_data(T *src_pt, T *dist_pt, int data_length, int num_threads, int thread_id) { - __syncthreads(); - for (int i = 0; i < data_length; i = i + num_threads) { - int offset_idx = i + thread_id; - if (offset_idx < data_length) { - dist_pt[offset_idx] = 
- -template<typename T> -__device__ void init_buffer_nonblocking(T init_value, T *buffer, int buffer_size, int num_threads, int thread_id) { - for (int i = 0; i < buffer_size; i = i + num_threads) { - int offset_idx = i + thread_id; - if (offset_idx < buffer_size) { - buffer[offset_idx] = init_value; - } - } -} - -template<typename T> -__device__ void copy_data_nonblocking(T *src_pt, T *dist_pt, int data_length, int num_threads, int thread_id) { - for (int i = 0; i < data_length; i = i + num_threads) { - int offset_idx = i + thread_id; - if (offset_idx < data_length) { - dist_pt[offset_idx] = src_pt[offset_idx]; - } - } -} diff --git a/spaces/ypx123/vits-uma-genshin-honkai/attentions.py b/spaces/ypx123/vits-uma-genshin-honkai/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/ypx123/vits-uma-genshin-honkai/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, 
hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." 
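- # Note (added commentary): the proximal bias below is added to the raw logits before - # masking and softmax, softly steering self-attention toward nearby positions.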
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so that it adds up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # pad along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
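- Each entry (i, j) of the returned bias equals -log1p(|i - j|), so attention - to distant positions is penalized logarithmically.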
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/ysharma/text-to-image-to-video/attention.py b/spaces/ysharma/text-to-image-to-video/attention.py deleted file mode 100644 index 2c26bc24b9a7c726473fce93db988425c8dbf756..0000000000000000000000000000000000000000 --- a/spaces/ysharma/text-to-image-to-video/attention.py +++ /dev/null @@ -1,261 +0,0 @@ -from inspect import isfunction -import math -import torch -import torch.nn.functional as F -from torch import nn, einsum -from einops import rearrange, repeat - -from util2 import checkpoint - - -def exists(val): - return val is not None - - -def uniq(arr): - return{el: True for el in arr}.keys() - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -def max_neg_value(t): - return -torch.finfo(t.dtype).max - - -def init_(tensor): - dim = tensor.shape[-1] - std = 1 / math.sqrt(dim) - tensor.uniform_(-std, std) - return tensor - - -# feedforward -class GEGLU(nn.Module): - def __init__(self, dim_in, dim_out): - super().__init__() - self.proj = nn.Linear(dim_in, dim_out * 2) - - def forward(self, x): - x, gate = self.proj(x).chunk(2, dim=-1) - return x * F.gelu(gate) - - -class FeedForward(nn.Module): - def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0.): - super().__init__() - inner_dim = int(dim * mult) - dim_out = default(dim_out, dim) - project_in = nn.Sequential( - nn.Linear(dim, inner_dim), - nn.GELU() - ) if not glu else GEGLU(dim, inner_dim) - - self.net = nn.Sequential( - project_in, - nn.Dropout(dropout), - nn.Linear(inner_dim, dim_out) - ) - - def forward(self, x): - return self.net(x) - - -def zero_module(module): - """ - Zero out the parameters of a module and return it. 
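- Zero-initializing the final projection of a residual branch (as done for proj_out - in SpatialTransformer below) makes the branch an identity at the start of training.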
- """ - for p in module.parameters(): - p.detach().zero_() - return module - - -def Normalize(in_channels): - return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True) - - -class LinearAttention(nn.Module): - def __init__(self, dim, heads=4, dim_head=32): - super().__init__() - self.heads = heads - hidden_dim = dim_head * heads - self.to_qkv = nn.Conv2d(dim, hidden_dim * 3, 1, bias = False) - self.to_out = nn.Conv2d(hidden_dim, dim, 1) - - def forward(self, x): - b, c, h, w = x.shape - qkv = self.to_qkv(x) - q, k, v = rearrange(qkv, 'b (qkv heads c) h w -> qkv b heads c (h w)', heads = self.heads, qkv=3) - k = k.softmax(dim=-1) - context = torch.einsum('bhdn,bhen->bhde', k, v) - out = torch.einsum('bhde,bhdn->bhen', context, q) - out = rearrange(out, 'b heads c (h w) -> b (heads c) h w', heads=self.heads, h=h, w=w) - return self.to_out(out) - - -class SpatialSelfAttention(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = Normalize(in_channels) - self.q = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.k = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.v = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.proj_out = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b,c,h,w = q.shape - q = rearrange(q, 'b c h w -> b (h w) c') - k = rearrange(k, 'b c h w -> b c (h w)') - w_ = torch.einsum('bij,bjk->bik', q, k) - - w_ = w_ * (int(c)**(-0.5)) - w_ = torch.nn.functional.softmax(w_, dim=2) - - # attend to values - v = rearrange(v, 'b c h w -> b c (h w)') - w_ = rearrange(w_, 'b i j -> b j i') - h_ = torch.einsum('bij,bjk->bik', v, w_) - h_ = rearrange(h_, 'b c (h w) -> b c h w', h=h) - h_ = self.proj_out(h_) - - return x+h_ - - -class CrossAttention(nn.Module): - def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.): - super().__init__() - inner_dim = dim_head * heads - context_dim = default(context_dim, query_dim) - - self.scale = dim_head ** -0.5 - self.heads = heads - - self.to_q = nn.Linear(query_dim, inner_dim, bias=False) - self.to_k = nn.Linear(context_dim, inner_dim, bias=False) - self.to_v = nn.Linear(context_dim, inner_dim, bias=False) - - self.to_out = nn.Sequential( - nn.Linear(inner_dim, query_dim), - nn.Dropout(dropout) - ) - - def forward(self, x, context=None, mask=None): - h = self.heads - - q = self.to_q(x) - context = default(context, x) - k = self.to_k(context) - v = self.to_v(context) - - q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v)) - - sim = einsum('b i d, b j d -> b i j', q, k) * self.scale - - if exists(mask): - mask = rearrange(mask, 'b ... 
-> b (...)') - max_neg_value = -torch.finfo(sim.dtype).max - mask = repeat(mask, 'b j -> (b h) () j', h=h) - sim.masked_fill_(~mask, max_neg_value) - - # attention, what we cannot get enough of - attn = sim.softmax(dim=-1) - - out = einsum('b i j, b j d -> b i d', attn, v) - out = rearrange(out, '(b h) n d -> b n (h d)', h=h) - return self.to_out(out) - - -class BasicTransformerBlock(nn.Module): - def __init__(self, dim, n_heads, d_head, dropout=0., context_dim=None, gated_ff=True, checkpoint=True): - super().__init__() - self.attn1 = CrossAttention(query_dim=dim, heads=n_heads, dim_head=d_head, dropout=dropout) # is a self-attention - self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff) - self.attn2 = CrossAttention(query_dim=dim, context_dim=context_dim, - heads=n_heads, dim_head=d_head, dropout=dropout) # is self-attn if context is none - self.norm1 = nn.LayerNorm(dim) - self.norm2 = nn.LayerNorm(dim) - self.norm3 = nn.LayerNorm(dim) - self.checkpoint = checkpoint - - def forward(self, x, context=None): - return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint) - - def _forward(self, x, context=None): - x = self.attn1(self.norm1(x)) + x - x = self.attn2(self.norm2(x), context=context) + x - x = self.ff(self.norm3(x)) + x - return x - - -class SpatialTransformer(nn.Module): - """ - Transformer block for image-like data. - First, project the input (aka embedding) - and reshape to b, t, d. - Then apply standard transformer action. - Finally, reshape to image - """ - def __init__(self, in_channels, n_heads, d_head, - depth=1, dropout=0., context_dim=None): - super().__init__() - self.in_channels = in_channels - inner_dim = n_heads * d_head - self.norm = Normalize(in_channels) - - self.proj_in = nn.Conv2d(in_channels, - inner_dim, - kernel_size=1, - stride=1, - padding=0) - - self.transformer_blocks = nn.ModuleList( - [BasicTransformerBlock(inner_dim, n_heads, d_head, dropout=dropout, context_dim=context_dim) - for d in range(depth)] - ) - - self.proj_out = zero_module(nn.Conv2d(inner_dim, - in_channels, - kernel_size=1, - stride=1, - padding=0)) - - def forward(self, x, context=None): - # note: if no context is given, cross-attention defaults to self-attention - b, c, h, w = x.shape - x_in = x - x = self.norm(x) - x = self.proj_in(x) - x = rearrange(x, 'b c h w -> b (h w) c') - for block in self.transformer_blocks: - x = block(x, context=context) - x = rearrange(x, 'b (h w) c -> b c h w', h=h, w=w) - x = self.proj_out(x) - return x + x_in \ No newline at end of file diff --git a/spaces/ysr/blurryAI/README.md b/spaces/ysr/blurryAI/README.md deleted file mode 100644 index c559fabd9ef18d5513972856d2ca472ea45c6d38..0000000000000000000000000000000000000000 --- a/spaces/ysr/blurryAI/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: BlurryAI -emoji: 👓 -colorFrom: purple -colorTo: pink -sdk: gradio -sdk_version: 3.1.1 -app_file: app.py -pinned: true ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/yueranseo/mygpt/modules/shared.py b/spaces/yueranseo/mygpt/modules/shared.py deleted file mode 100644 index 32e74665b400a56fd1b10bbd4a9566fe332e49bd..0000000000000000000000000000000000000000 --- a/spaces/yueranseo/mygpt/modules/shared.py +++ /dev/null @@ -1,64 +0,0 @@ -from modules.presets import COMPLETION_URL, BALANCE_API_URL, USAGE_API_URL, API_HOST -import os -import queue -import openai - -class State: - interrupted = False - multi_api_key = False - completion_url = COMPLETION_URL - 
balance_api_url = BALANCE_API_URL - usage_api_url = USAGE_API_URL - - def interrupt(self): - self.interrupted = True - - def recover(self): - self.interrupted = False - - def set_api_host(self, api_host: str): - api_host = api_host.rstrip("/") - if not api_host.startswith("http"): - api_host = f"https://{api_host}" - if api_host.endswith("/v1"): - api_host = api_host[:-3] - self.completion_url = f"{api_host}/v1/chat/completions" - self.balance_api_url = f"{api_host}/dashboard/billing/credit_grants" - self.usage_api_url = f"{api_host}/dashboard/billing/usage" - os.environ["OPENAI_API_BASE"] = api_host - - def reset_api_host(self): - self.completion_url = COMPLETION_URL - self.balance_api_url = BALANCE_API_URL - self.usage_api_url = USAGE_API_URL - os.environ["OPENAI_API_BASE"] = f"https://{API_HOST}" - return API_HOST - - def reset_all(self): - self.interrupted = False - self.completion_url = COMPLETION_URL - - def set_api_key_queue(self, api_key_list): - self.multi_api_key = True - self.api_key_queue = queue.Queue() - for api_key in api_key_list: - self.api_key_queue.put(api_key) - - def switching_api_key(self, func): - if not hasattr(self, "api_key_queue"): - return func - - def wrapped(*args, **kwargs): - api_key = self.api_key_queue.get() - args[0].api_key = api_key - ret = func(*args, **kwargs) - self.api_key_queue.put(api_key) - return ret - - return wrapped - - -state = State() - -modules_path = os.path.dirname(os.path.realpath(__file__)) -chuanhu_path = os.path.dirname(modules_path) diff --git "a/spaces/yunfei0710/gpt-academic/crazy_functions/\350\257\242\351\227\256\345\244\232\344\270\252\345\244\247\350\257\255\350\250\200\346\250\241\345\236\213.py" "b/spaces/yunfei0710/gpt-academic/crazy_functions/\350\257\242\351\227\256\345\244\232\344\270\252\345\244\247\350\257\255\350\250\200\346\250\241\345\236\213.py" deleted file mode 100644 index c1e5dadd142de683323463d3df260cbe6eefa6d8..0000000000000000000000000000000000000000 --- "a/spaces/yunfei0710/gpt-academic/crazy_functions/\350\257\242\351\227\256\345\244\232\344\270\252\345\244\247\350\257\255\350\250\200\346\250\241\345\236\213.py" +++ /dev/null @@ -1,60 +0,0 @@ -from toolbox import CatchException, update_ui -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -import datetime -@CatchException -def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - """ - txt Text entered by the user in the input box, e.g. a passage to translate, or a path containing files to be processed - llm_kwargs GPT model parameters such as temperature and top_p, usually passed through unchanged - plugin_kwargs Plugin parameters such as temperature and top_p, usually passed through unchanged - chatbot Handle of the chat display box, used to show output to the user - history Chat history, i.e. the preceding context - system_prompt Silent system prompt for GPT - web_port Port number the application is currently running on - """ - history = [] # Clear the history to avoid overflowing the input - chatbot.append((txt, "Querying gpt-3.5 and gpt-4 at the same time...")) - yield from update_ui(chatbot=chatbot, history=history) # Refresh the UI # The GPT request takes a while, so push a timely UI update first - - # llm_kwargs['llm_model'] = 'chatglm&gpt-3.5-turbo&api2d-gpt-3.5-turbo' # Any number of LLM backends are supported, joined with '&' - llm_kwargs['llm_model'] = 'gpt-3.5-turbo&gpt-4' # Any number of LLM backends are supported, joined with '&' - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=txt, inputs_show_user=txt, - llm_kwargs=llm_kwargs, chatbot=chatbot, history=history, - sys_prompt=system_prompt, - retry_times_at_unknown_error=0 - ) - - history.append(txt) - history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history) # Refresh the UI # display update - -
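-# Added commentary (not in the original file): the '&'-joined model string is how this -# plugin fans a single prompt out to several backends at once; -# request_gpt_model_in_new_thread_with_ui_alive queries every model listed in -# llm_kwargs['llm_model'] and shows the answers side by side in the chat window.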
- -@CatchException -def 同时问询_指定模型(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - """ - txt Text entered by the user in the input box, e.g. a passage to translate, or a path containing files to be processed - llm_kwargs GPT model parameters such as temperature and top_p, usually passed through unchanged - plugin_kwargs Plugin parameters such as temperature and top_p, usually passed through unchanged - chatbot Handle of the chat display box, used to show output to the user - history Chat history, i.e. the preceding context - system_prompt Silent system prompt for GPT - web_port Port number the application is currently running on - """ - history = [] # Clear the history to avoid overflowing the input - chatbot.append((txt, "Querying ChatGPT and ChatGLM at the same time...")) - yield from update_ui(chatbot=chatbot, history=history) # Refresh the UI # The GPT request takes a while, so push a timely UI update first - - if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg") - # llm_kwargs['llm_model'] = 'chatglm&gpt-3.5-turbo&api2d-gpt-3.5-turbo' # Any number of LLM backends are supported, joined with '&' - llm_kwargs['llm_model'] = plugin_kwargs.get("advanced_arg", 'chatglm&gpt-3.5-turbo') # 'chatglm&gpt-3.5-turbo' # Any number of LLM backends are supported, joined with '&' - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=txt, inputs_show_user=txt, - llm_kwargs=llm_kwargs, chatbot=chatbot, history=history, - sys_prompt=system_prompt, - retry_times_at_unknown_error=0 - ) - - history.append(txt) - history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history) # Refresh the UI # display update \ No newline at end of file diff --git a/spaces/zeno-ml/openai-evals/zeno-evals-hub/frontend/build/smui.css b/spaces/zeno-ml/openai-evals/zeno-evals-hub/frontend/build/smui.css deleted file mode 100644 index f7c1c6ac45d339f5a9855b60a7b65016496dfe89..0000000000000000000000000000000000000000 --- a/spaces/zeno-ml/openai-evals/zeno-evals-hub/frontend/build/smui.css +++ /dev/null @@ -1,5 +0,0 @@ -.material-icons{fill:var(--logo)}:root{font-family:Inter,system-ui,Avenir,Helvetica,Arial,sans-serif;line-height:1.5;font-weight:400;color:#213547;font-synthesis:none;text-rendering:optimizeLegibility;-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;-webkit-text-size-adjust:100%;--G1: #333333;--G2: #73726f;--G3: #989895;--G4: #d3d3d3;--G5: #ebebea;--G6: #ffffff;--logo: #6a1b9a;--P1: #b18bd3;--P2: #d2bae9;--P3: #f7f1fb;--P4: #f9f7fb;--Y1: #f2f2ee;--Y2: #fbfbfa;--mdc-theme-primary: var(--G2);--mdc-theme-secondary: var(--G3);--mdc-theme-background: var(--G6);--mdc-theme-surface: var(--G6);--mdc-theme-error: #b71c1c;--mdc-theme-on-primary: var(--G6);--mdc-theme-on-secondary: var(--G6);--mdc-theme-on-surface: var(--G1);--mdc-theme-on-error: var(--G6);--mdc-theme-text-primary-on-background: rgba(0, 0, 0, 0.87);--mdc-theme-text-secondary-on-background: rgba(0, 0, 0, 0.54);--mdc-theme-text-hint-on-background: rgba(0, 0, 0, 0.38);--mdc-theme-text-disabled-on-background: rgba(0, 0, 0, 0.38);--mdc-theme-text-icon-on-background: rgba(0, 0, 0, 0.38);--mdc-theme-text-primary-on-light: rgba(0, 0, 0, 0.87);--mdc-theme-text-secondary-on-light: rgba(0, 0, 0, 0.54);--mdc-theme-text-hint-on-light: rgba(0, 0, 0, 0.38);--mdc-theme-text-disabled-on-light: rgba(0, 0, 0, 0.38);--mdc-theme-text-icon-on-light: rgba(0, 0, 0, 0.38);--mdc-theme-text-primary-on-dark: white;--mdc-theme-text-secondary-on-dark: rgba(255, 255, 255, 0.7);--mdc-theme-text-hint-on-dark: rgba(255, 255, 255, 0.5);--mdc-theme-text-disabled-on-dark: rgba(255, 255, 255, 0.5);--mdc-theme-text-icon-on-dark: rgba(255, 255, 255, 0.5);--mdc-outlined-button-container-height: 33px;--mdc-filled-button-container-color: var(--logo)}a{color:#213547;text-decoration:inherit}a:hover{color:var(--logo)}h1{font-size:3.2em;line-height:1.1}.card{padding:2em}#app{max-width:1280px;margin:0 auto;padding:3px;text-align:center}.mdc-touch-target-wrapper{display:inline}.mdc-elevation-overlay{position:absolute;border-radius:inherit;pointer-events:none;opacity:0;opacity:var(--mdc-elevation-overlay-opacity, 0);transition:opacity 280ms cubic-bezier(0.4, 0, 0.2, 
1);background-color:#fff;background-color:var(--mdc-elevation-overlay-color, #fff)}.mdc-button{position:relative;display:inline-flex;align-items:center;justify-content:center;box-sizing:border-box;min-width:64px;border:none;outline:none;line-height:inherit;user-select:none;-webkit-appearance:none;overflow:visible;vertical-align:middle;background:rgba(0,0,0,0)}.mdc-button .mdc-elevation-overlay{width:100%;height:100%;top:0;left:0}.mdc-button::-moz-focus-inner{padding:0;border:0}.mdc-button:active{outline:none}.mdc-button:hover{cursor:pointer}.mdc-button:disabled{cursor:default;pointer-events:none}.mdc-button .mdc-button__icon{margin-left:0;margin-right:8px;display:inline-block;position:relative;vertical-align:top}[dir=rtl] .mdc-button .mdc-button__icon,.mdc-button .mdc-button__icon[dir=rtl]{margin-left:8px;margin-right:0}.mdc-button .mdc-button__label{position:relative}.mdc-button .mdc-button__focus-ring{display:none}@media screen and (forced-colors: active){.mdc-button.mdc-ripple-upgraded--background-focused .mdc-button__focus-ring,.mdc-button:not(.mdc-ripple-upgraded):focus .mdc-button__focus-ring{pointer-events:none;border:2px solid rgba(0,0,0,0);border-radius:6px;box-sizing:content-box;position:absolute;top:50%;left:50%;transform:translate(-50%, -50%);height:calc( - 100% + 4px - );width:calc( - 100% + 4px - );display:block}}@media screen and (forced-colors: active)and (forced-colors: active){.mdc-button.mdc-ripple-upgraded--background-focused .mdc-button__focus-ring,.mdc-button:not(.mdc-ripple-upgraded):focus .mdc-button__focus-ring{border-color:CanvasText}}@media screen and (forced-colors: active){.mdc-button.mdc-ripple-upgraded--background-focused .mdc-button__focus-ring::after,.mdc-button:not(.mdc-ripple-upgraded):focus .mdc-button__focus-ring::after{content:"";border:2px solid rgba(0,0,0,0);border-radius:8px;display:block;position:absolute;top:50%;left:50%;transform:translate(-50%, -50%);height:calc(100% + 4px);width:calc(100% + 4px)}}@media screen and (forced-colors: active)and (forced-colors: active){.mdc-button.mdc-ripple-upgraded--background-focused .mdc-button__focus-ring::after,.mdc-button:not(.mdc-ripple-upgraded):focus .mdc-button__focus-ring::after{border-color:CanvasText}}.mdc-button .mdc-button__touch{position:absolute;top:50%;height:48px;left:0;right:0;transform:translateY(-50%)}.mdc-button__label+.mdc-button__icon{margin-left:8px;margin-right:0}[dir=rtl] .mdc-button__label+.mdc-button__icon,.mdc-button__label+.mdc-button__icon[dir=rtl]{margin-left:0;margin-right:8px}svg.mdc-button__icon{fill:currentColor}.mdc-button--touch{margin-top:6px;margin-bottom:6px}.mdc-button{-moz-osx-font-smoothing:grayscale;-webkit-font-smoothing:antialiased;font-family:Roboto, sans-serif;font-family:var(--mdc-typography-button-font-family, var(--mdc-typography-font-family, Roboto, sans-serif));text-decoration:none;text-decoration:var(--mdc-typography-button-text-decoration, none)}.mdc-button{padding:0 8px 0 8px}.mdc-button--unelevated{transition:box-shadow 280ms cubic-bezier(0.4, 0, 0.2, 1);padding:0 16px 0 16px}.mdc-button--unelevated.mdc-button--icon-trailing{padding:0 12px 0 16px}.mdc-button--unelevated.mdc-button--icon-leading{padding:0 16px 0 12px}.mdc-button--raised{transition:box-shadow 280ms cubic-bezier(0.4, 0, 0.2, 1);padding:0 16px 0 16px}.mdc-button--raised.mdc-button--icon-trailing{padding:0 12px 0 16px}.mdc-button--raised.mdc-button--icon-leading{padding:0 16px 0 12px}.mdc-button--outlined{border-style:solid;transition:border 280ms cubic-bezier(0.4, 0, 0.2, 
1)}.mdc-button--outlined .mdc-button__ripple{border-style:solid;border-color:rgba(0,0,0,0)}@keyframes mdc-ripple-fg-radius-in{from{animation-timing-function:cubic-bezier(0.4, 0, 0.2, 1);transform:translate(var(--mdc-ripple-fg-translate-start, 0)) scale(1)}to{transform:translate(var(--mdc-ripple-fg-translate-end, 0)) scale(var(--mdc-ripple-fg-scale, 1))}}@keyframes mdc-ripple-fg-opacity-in{from{animation-timing-function:linear;opacity:0}to{opacity:var(--mdc-ripple-fg-opacity, 0)}}@keyframes mdc-ripple-fg-opacity-out{from{animation-timing-function:linear;opacity:var(--mdc-ripple-fg-opacity, 0)}to{opacity:0}}.mdc-button{--mdc-ripple-fg-size: 0;--mdc-ripple-left: 0;--mdc-ripple-top: 0;--mdc-ripple-fg-scale: 1;--mdc-ripple-fg-translate-end: 0;--mdc-ripple-fg-translate-start: 0;-webkit-tap-highlight-color:rgba(0,0,0,0);will-change:transform,opacity}.mdc-button .mdc-button__ripple::before,.mdc-button .mdc-button__ripple::after{position:absolute;border-radius:50%;opacity:0;pointer-events:none;content:""}.mdc-button .mdc-button__ripple::before{transition:opacity 15ms linear,background-color 15ms linear;z-index:1;z-index:var(--mdc-ripple-z-index, 1)}.mdc-button .mdc-button__ripple::after{z-index:0;z-index:var(--mdc-ripple-z-index, 0)}.mdc-button.mdc-ripple-upgraded .mdc-button__ripple::before{transform:scale(var(--mdc-ripple-fg-scale, 1))}.mdc-button.mdc-ripple-upgraded .mdc-button__ripple::after{top:0;left:0;transform:scale(0);transform-origin:center center}.mdc-button.mdc-ripple-upgraded--unbounded .mdc-button__ripple::after{top:var(--mdc-ripple-top, 0);left:var(--mdc-ripple-left, 0)}.mdc-button.mdc-ripple-upgraded--foreground-activation .mdc-button__ripple::after{animation:mdc-ripple-fg-radius-in 225ms forwards,mdc-ripple-fg-opacity-in 75ms forwards}.mdc-button.mdc-ripple-upgraded--foreground-deactivation .mdc-button__ripple::after{animation:mdc-ripple-fg-opacity-out 150ms;transform:translate(var(--mdc-ripple-fg-translate-end, 0)) scale(var(--mdc-ripple-fg-scale, 1))}.mdc-button .mdc-button__ripple::before,.mdc-button .mdc-button__ripple::after{top:calc(50% - 100%);left:calc(50% - 100%);width:200%;height:200%}.mdc-button.mdc-ripple-upgraded .mdc-button__ripple::after{width:var(--mdc-ripple-fg-size, 100%);height:var(--mdc-ripple-fg-size, 100%)}.mdc-button__ripple{position:absolute;box-sizing:content-box;overflow:hidden;z-index:0;top:0;left:0;bottom:0;right:0}.mdc-button{font-family:Roboto, sans-serif;font-family:var(--mdc-text-button-label-text-font, var(--mdc-typography-button-font-family, var(--mdc-typography-font-family, Roboto, sans-serif)));font-size:0.875rem;font-size:var(--mdc-text-button-label-text-size, var(--mdc-typography-button-font-size, 0.875rem));letter-spacing:0.0892857143em;letter-spacing:var(--mdc-text-button-label-text-tracking, var(--mdc-typography-button-letter-spacing, 0.0892857143em));font-weight:500;font-weight:var(--mdc-text-button-label-text-weight, var(--mdc-typography-button-font-weight, 500));text-transform:uppercase;text-transform:var(--mdc-text-button-label-text-transform, var(--mdc-typography-button-text-transform, uppercase));height:36px;height:var(--mdc-text-button-container-height, 36px);border-radius:4px;border-radius:var(--mdc-text-button-container-shape, var(--mdc-shape-small, 4px))}.mdc-button:not(:disabled){color:#6a1b9a;color:var(--mdc-text-button-label-text-color, var(--mdc-theme-primary, #6a1b9a))}.mdc-button:disabled{color:rgba(0, 0, 0, 0.38);color:var(--mdc-text-button-disabled-label-text-color, rgba(0, 0, 0, 0.38))}.mdc-button 
.mdc-button__icon{font-size:1.125rem;font-size:var(--mdc-text-button-with-icon-icon-size, 1.125rem);width:1.125rem;width:var(--mdc-text-button-with-icon-icon-size, 1.125rem);height:1.125rem;height:var(--mdc-text-button-with-icon-icon-size, 1.125rem)}.mdc-button .mdc-button__ripple::before,.mdc-button .mdc-button__ripple::after{background-color:#6a1b9a;background-color:var(--mdc-text-button-hover-state-layer-color, var(--mdc-theme-primary, #6a1b9a))}.mdc-button:hover .mdc-button__ripple::before,.mdc-button.mdc-ripple-surface--hover .mdc-button__ripple::before{opacity:0.04;opacity:var(--mdc-text-button-hover-state-layer-opacity, 0.04)}.mdc-button.mdc-ripple-upgraded--background-focused .mdc-button__ripple::before,.mdc-button:not(.mdc-ripple-upgraded):focus .mdc-button__ripple::before{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-text-button-focus-state-layer-opacity, 0.12)}.mdc-button:not(.mdc-ripple-upgraded) .mdc-button__ripple::after{transition:opacity 150ms linear}.mdc-button:not(.mdc-ripple-upgraded):active .mdc-button__ripple::after{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-text-button-pressed-state-layer-opacity, 0.12)}.mdc-button.mdc-ripple-upgraded{--mdc-ripple-fg-opacity:var(--mdc-text-button-pressed-state-layer-opacity, 0.12)}.mdc-button .mdc-button__ripple{border-radius:4px;border-radius:var(--mdc-text-button-container-shape, var(--mdc-shape-small, 4px))}.mdc-button--unelevated{font-family:Roboto, sans-serif;font-family:var(--mdc-filled-button-label-text-font, var(--mdc-typography-button-font-family, var(--mdc-typography-font-family, Roboto, sans-serif)));font-size:0.875rem;font-size:var(--mdc-filled-button-label-text-size, var(--mdc-typography-button-font-size, 0.875rem));letter-spacing:0.0892857143em;letter-spacing:var(--mdc-filled-button-label-text-tracking, var(--mdc-typography-button-letter-spacing, 0.0892857143em));font-weight:500;font-weight:var(--mdc-filled-button-label-text-weight, var(--mdc-typography-button-font-weight, 500));text-transform:uppercase;text-transform:var(--mdc-filled-button-label-text-transform, var(--mdc-typography-button-text-transform, uppercase));height:36px;height:var(--mdc-filled-button-container-height, 36px);border-radius:4px;border-radius:var(--mdc-filled-button-container-shape, var(--mdc-shape-small, 4px))}.mdc-button--unelevated:not(:disabled){background-color:#6a1b9a;background-color:var(--mdc-filled-button-container-color, var(--mdc-theme-primary, #6a1b9a))}.mdc-button--unelevated:disabled{background-color:rgba(0, 0, 0, 0.12);background-color:var(--mdc-filled-button-disabled-container-color, rgba(0, 0, 0, 0.12))}.mdc-button--unelevated:not(:disabled){color:#fff;color:var(--mdc-filled-button-label-text-color, var(--mdc-theme-on-primary, #fff))}.mdc-button--unelevated:disabled{color:rgba(0, 0, 0, 0.38);color:var(--mdc-filled-button-disabled-label-text-color, rgba(0, 0, 0, 0.38))}.mdc-button--unelevated .mdc-button__icon{font-size:1.125rem;font-size:var(--mdc-filled-button-with-icon-icon-size, 1.125rem);width:1.125rem;width:var(--mdc-filled-button-with-icon-icon-size, 1.125rem);height:1.125rem;height:var(--mdc-filled-button-with-icon-icon-size, 1.125rem)}.mdc-button--unelevated .mdc-button__ripple::before,.mdc-button--unelevated .mdc-button__ripple::after{background-color:#fff;background-color:var(--mdc-filled-button-hover-state-layer-color, var(--mdc-theme-on-primary, #fff))}.mdc-button--unelevated:hover .mdc-button__ripple::before,.mdc-button--unelevated.mdc-ripple-surface--hover 
.mdc-button__ripple::before{opacity:0.08;opacity:var(--mdc-filled-button-hover-state-layer-opacity, 0.08)}.mdc-button--unelevated.mdc-ripple-upgraded--background-focused .mdc-button__ripple::before,.mdc-button--unelevated:not(.mdc-ripple-upgraded):focus .mdc-button__ripple::before{transition-duration:75ms;opacity:0.24;opacity:var(--mdc-filled-button-focus-state-layer-opacity, 0.24)}.mdc-button--unelevated:not(.mdc-ripple-upgraded) .mdc-button__ripple::after{transition:opacity 150ms linear}.mdc-button--unelevated:not(.mdc-ripple-upgraded):active .mdc-button__ripple::after{transition-duration:75ms;opacity:0.24;opacity:var(--mdc-filled-button-pressed-state-layer-opacity, 0.24)}.mdc-button--unelevated.mdc-ripple-upgraded{--mdc-ripple-fg-opacity:var(--mdc-filled-button-pressed-state-layer-opacity, 0.24)}.mdc-button--unelevated .mdc-button__ripple{border-radius:4px;border-radius:var(--mdc-filled-button-container-shape, var(--mdc-shape-small, 4px))}.mdc-button--raised{font-family:Roboto, sans-serif;font-family:var(--mdc-protected-button-label-text-font, var(--mdc-typography-button-font-family, var(--mdc-typography-font-family, Roboto, sans-serif)));font-size:0.875rem;font-size:var(--mdc-protected-button-label-text-size, var(--mdc-typography-button-font-size, 0.875rem));letter-spacing:0.0892857143em;letter-spacing:var(--mdc-protected-button-label-text-tracking, var(--mdc-typography-button-letter-spacing, 0.0892857143em));font-weight:500;font-weight:var(--mdc-protected-button-label-text-weight, var(--mdc-typography-button-font-weight, 500));text-transform:uppercase;text-transform:var(--mdc-protected-button-label-text-transform, var(--mdc-typography-button-text-transform, uppercase));height:36px;height:var(--mdc-protected-button-container-height, 36px);border-radius:4px;border-radius:var(--mdc-protected-button-container-shape, var(--mdc-shape-small, 4px));--mdc-elevation-box-shadow-for-gss:0px 3px 1px -2px rgba(0, 0, 0, 0.2), 0px 2px 2px 0px rgba(0, 0, 0, 0.14), 0px 1px 5px 0px rgba(0, 0, 0, 0.12);box-shadow:0px 3px 1px -2px rgba(0, 0, 0, 0.2), 0px 2px 2px 0px rgba(0, 0, 0, 0.14), 0px 1px 5px 0px rgba(0, 0, 0, 0.12);box-shadow:var(--mdc-protected-button-container-elevation, var(--mdc-elevation-box-shadow-for-gss))}.mdc-button--raised:not(:disabled){background-color:#6a1b9a;background-color:var(--mdc-protected-button-container-color, var(--mdc-theme-primary, #6a1b9a))}.mdc-button--raised:disabled{background-color:rgba(0, 0, 0, 0.12);background-color:var(--mdc-protected-button-disabled-container-color, rgba(0, 0, 0, 0.12))}.mdc-button--raised:not(:disabled){color:#fff;color:var(--mdc-protected-button-label-text-color, var(--mdc-theme-on-primary, #fff))}.mdc-button--raised:disabled{color:rgba(0, 0, 0, 0.38);color:var(--mdc-protected-button-disabled-label-text-color, rgba(0, 0, 0, 0.38))}.mdc-button--raised .mdc-button__icon{font-size:1.125rem;font-size:var(--mdc-protected-button-with-icon-icon-size, 1.125rem);width:1.125rem;width:var(--mdc-protected-button-with-icon-icon-size, 1.125rem);height:1.125rem;height:var(--mdc-protected-button-with-icon-icon-size, 1.125rem)}.mdc-button--raised .mdc-button__ripple::before,.mdc-button--raised .mdc-button__ripple::after{background-color:#fff;background-color:var(--mdc-protected-button-hover-state-layer-color, var(--mdc-theme-on-primary, #fff))}.mdc-button--raised:hover .mdc-button__ripple::before,.mdc-button--raised.mdc-ripple-surface--hover .mdc-button__ripple::before{opacity:0.08;opacity:var(--mdc-protected-button-hover-state-layer-opacity, 
0.08)}.mdc-button--raised.mdc-ripple-upgraded--background-focused .mdc-button__ripple::before,.mdc-button--raised:not(.mdc-ripple-upgraded):focus .mdc-button__ripple::before{transition-duration:75ms;opacity:0.24;opacity:var(--mdc-protected-button-focus-state-layer-opacity, 0.24)}.mdc-button--raised:not(.mdc-ripple-upgraded) .mdc-button__ripple::after{transition:opacity 150ms linear}.mdc-button--raised:not(.mdc-ripple-upgraded):active .mdc-button__ripple::after{transition-duration:75ms;opacity:0.24;opacity:var(--mdc-protected-button-pressed-state-layer-opacity, 0.24)}.mdc-button--raised.mdc-ripple-upgraded{--mdc-ripple-fg-opacity:var(--mdc-protected-button-pressed-state-layer-opacity, 0.24)}.mdc-button--raised .mdc-button__ripple{border-radius:4px;border-radius:var(--mdc-protected-button-container-shape, var(--mdc-shape-small, 4px))}.mdc-button--raised.mdc-ripple-upgraded--background-focused,.mdc-button--raised:not(.mdc-ripple-upgraded):focus{--mdc-elevation-box-shadow-for-gss:0px 2px 4px -1px rgba(0, 0, 0, 0.2), 0px 4px 5px 0px rgba(0, 0, 0, 0.14), 0px 1px 10px 0px rgba(0, 0, 0, 0.12);box-shadow:0px 2px 4px -1px rgba(0, 0, 0, 0.2), 0px 4px 5px 0px rgba(0, 0, 0, 0.14), 0px 1px 10px 0px rgba(0, 0, 0, 0.12);box-shadow:var(--mdc-protected-button-focus-container-elevation, var(--mdc-elevation-box-shadow-for-gss))}.mdc-button--raised:hover{--mdc-elevation-box-shadow-for-gss:0px 2px 4px -1px rgba(0, 0, 0, 0.2), 0px 4px 5px 0px rgba(0, 0, 0, 0.14), 0px 1px 10px 0px rgba(0, 0, 0, 0.12);box-shadow:0px 2px 4px -1px rgba(0, 0, 0, 0.2), 0px 4px 5px 0px rgba(0, 0, 0, 0.14), 0px 1px 10px 0px rgba(0, 0, 0, 0.12);box-shadow:var(--mdc-protected-button-hover-container-elevation, var(--mdc-elevation-box-shadow-for-gss))}.mdc-button--raised:not(:disabled):active{--mdc-elevation-box-shadow-for-gss:0px 5px 5px -3px rgba(0, 0, 0, 0.2), 0px 8px 10px 1px rgba(0, 0, 0, 0.14), 0px 3px 14px 2px rgba(0, 0, 0, 0.12);box-shadow:0px 5px 5px -3px rgba(0, 0, 0, 0.2), 0px 8px 10px 1px rgba(0, 0, 0, 0.14), 0px 3px 14px 2px rgba(0, 0, 0, 0.12);box-shadow:var(--mdc-protected-button-pressed-container-elevation, var(--mdc-elevation-box-shadow-for-gss))}.mdc-button--raised:disabled{--mdc-elevation-box-shadow-for-gss:0px 0px 0px 0px rgba(0, 0, 0, 0.2), 0px 0px 0px 0px rgba(0, 0, 0, 0.14), 0px 0px 0px 0px rgba(0, 0, 0, 0.12);box-shadow:0px 0px 0px 0px rgba(0, 0, 0, 0.2), 0px 0px 0px 0px rgba(0, 0, 0, 0.14), 0px 0px 0px 0px rgba(0, 0, 0, 0.12);box-shadow:var(--mdc-protected-button-disabled-container-elevation, var(--mdc-elevation-box-shadow-for-gss))}.mdc-button--outlined{font-family:Roboto, sans-serif;font-family:var(--mdc-outlined-button-label-text-font, var(--mdc-typography-button-font-family, var(--mdc-typography-font-family, Roboto, sans-serif)));font-size:0.875rem;font-size:var(--mdc-outlined-button-label-text-size, var(--mdc-typography-button-font-size, 0.875rem));letter-spacing:0.0892857143em;letter-spacing:var(--mdc-outlined-button-label-text-tracking, var(--mdc-typography-button-letter-spacing, 0.0892857143em));font-weight:500;font-weight:var(--mdc-outlined-button-label-text-weight, var(--mdc-typography-button-font-weight, 500));text-transform:uppercase;text-transform:var(--mdc-outlined-button-label-text-transform, var(--mdc-typography-button-text-transform, uppercase));height:36px;height:var(--mdc-outlined-button-container-height, 36px);border-radius:4px;border-radius:var(--mdc-outlined-button-container-shape, var(--mdc-shape-small, 4px));padding:0 15px 0 
15px;border-width:1px;border-width:var(--mdc-outlined-button-outline-width, 1px)}.mdc-button--outlined:not(:disabled){color:#6a1b9a;color:var(--mdc-outlined-button-label-text-color, var(--mdc-theme-primary, #6a1b9a))}.mdc-button--outlined:disabled{color:rgba(0, 0, 0, 0.38);color:var(--mdc-outlined-button-disabled-label-text-color, rgba(0, 0, 0, 0.38))}.mdc-button--outlined .mdc-button__icon{font-size:1.125rem;font-size:var(--mdc-outlined-button-with-icon-icon-size, 1.125rem);width:1.125rem;width:var(--mdc-outlined-button-with-icon-icon-size, 1.125rem);height:1.125rem;height:var(--mdc-outlined-button-with-icon-icon-size, 1.125rem)}.mdc-button--outlined .mdc-button__ripple::before,.mdc-button--outlined .mdc-button__ripple::after{background-color:#6a1b9a;background-color:var(--mdc-outlined-button-hover-state-layer-color, var(--mdc-theme-primary, #6a1b9a))}.mdc-button--outlined:hover .mdc-button__ripple::before,.mdc-button--outlined.mdc-ripple-surface--hover .mdc-button__ripple::before{opacity:0.04;opacity:var(--mdc-outlined-button-hover-state-layer-opacity, 0.04)}.mdc-button--outlined.mdc-ripple-upgraded--background-focused .mdc-button__ripple::before,.mdc-button--outlined:not(.mdc-ripple-upgraded):focus .mdc-button__ripple::before{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-outlined-button-focus-state-layer-opacity, 0.12)}.mdc-button--outlined:not(.mdc-ripple-upgraded) .mdc-button__ripple::after{transition:opacity 150ms linear}.mdc-button--outlined:not(.mdc-ripple-upgraded):active .mdc-button__ripple::after{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-outlined-button-pressed-state-layer-opacity, 0.12)}.mdc-button--outlined.mdc-ripple-upgraded{--mdc-ripple-fg-opacity:var(--mdc-outlined-button-pressed-state-layer-opacity, 0.12)}.mdc-button--outlined .mdc-button__ripple{border-radius:4px;border-radius:var(--mdc-outlined-button-container-shape, var(--mdc-shape-small, 4px))}.mdc-button--outlined:not(:disabled){border-color:rgba(0, 0, 0, 0.12);border-color:var(--mdc-outlined-button-outline-color, rgba(0, 0, 0, 0.12))}.mdc-button--outlined:disabled{border-color:rgba(0, 0, 0, 0.12);border-color:var(--mdc-outlined-button-disabled-outline-color, rgba(0, 0, 0, 0.12))}.mdc-button--outlined.mdc-button--icon-trailing{padding:0 11px 0 15px}.mdc-button--outlined.mdc-button--icon-leading{padding:0 15px 0 11px}.mdc-button--outlined .mdc-button__ripple{top:-1px;left:-1px;bottom:-1px;right:-1px;border-width:1px;border-width:var(--mdc-outlined-button-outline-width, 1px)}.mdc-button--outlined .mdc-button__touch{left:calc(-1 * 1px);left:calc(-1 * var(--mdc-outlined-button-outline-width, 1px));width:calc(100% + 2 * 1px);width:calc(100% + 2 * var(--mdc-outlined-button-outline-width, 1px))}.mdc-button--raised .mdc-button__icon,.mdc-button--unelevated .mdc-button__icon,.mdc-button--outlined .mdc-button__icon{margin-left:-4px;margin-right:8px}[dir=rtl] .mdc-button--raised .mdc-button__icon,[dir=rtl] .mdc-button--unelevated .mdc-button__icon,[dir=rtl] .mdc-button--outlined .mdc-button__icon,.mdc-button--raised .mdc-button__icon[dir=rtl],.mdc-button--unelevated .mdc-button__icon[dir=rtl],.mdc-button--outlined .mdc-button__icon[dir=rtl]{margin-left:8px;margin-right:-4px}.mdc-button--raised .mdc-button__label+.mdc-button__icon,.mdc-button--unelevated .mdc-button__label+.mdc-button__icon,.mdc-button--outlined .mdc-button__label+.mdc-button__icon{margin-left:8px;margin-right:-4px}[dir=rtl] .mdc-button--raised .mdc-button__label+.mdc-button__icon,[dir=rtl] .mdc-button--unelevated 
.mdc-button__label+.mdc-button__icon,[dir=rtl] .mdc-button--outlined .mdc-button__label+.mdc-button__icon,.mdc-button--raised .mdc-button__label+.mdc-button__icon[dir=rtl],.mdc-button--unelevated .mdc-button__label+.mdc-button__icon[dir=rtl],.mdc-button--outlined .mdc-button__label+.mdc-button__icon[dir=rtl]{margin-left:-4px;margin-right:8px}.mdc-ripple-surface{--mdc-ripple-fg-size: 0;--mdc-ripple-left: 0;--mdc-ripple-top: 0;--mdc-ripple-fg-scale: 1;--mdc-ripple-fg-translate-end: 0;--mdc-ripple-fg-translate-start: 0;-webkit-tap-highlight-color:rgba(0,0,0,0);will-change:transform,opacity;position:relative;outline:none;overflow:hidden}.mdc-ripple-surface::before,.mdc-ripple-surface::after{position:absolute;border-radius:50%;opacity:0;pointer-events:none;content:""}.mdc-ripple-surface::before{transition:opacity 15ms linear,background-color 15ms linear;z-index:1;z-index:var(--mdc-ripple-z-index, 1)}.mdc-ripple-surface::after{z-index:0;z-index:var(--mdc-ripple-z-index, 0)}.mdc-ripple-surface.mdc-ripple-upgraded::before{transform:scale(var(--mdc-ripple-fg-scale, 1))}.mdc-ripple-surface.mdc-ripple-upgraded::after{top:0;left:0;transform:scale(0);transform-origin:center center}.mdc-ripple-surface.mdc-ripple-upgraded--unbounded::after{top:var(--mdc-ripple-top, 0);left:var(--mdc-ripple-left, 0)}.mdc-ripple-surface.mdc-ripple-upgraded--foreground-activation::after{animation:mdc-ripple-fg-radius-in 225ms forwards,mdc-ripple-fg-opacity-in 75ms forwards}.mdc-ripple-surface.mdc-ripple-upgraded--foreground-deactivation::after{animation:mdc-ripple-fg-opacity-out 150ms;transform:translate(var(--mdc-ripple-fg-translate-end, 0)) scale(var(--mdc-ripple-fg-scale, 1))}.mdc-ripple-surface::before,.mdc-ripple-surface::after{top:calc(50% - 100%);left:calc(50% - 100%);width:200%;height:200%}.mdc-ripple-surface.mdc-ripple-upgraded::after{width:var(--mdc-ripple-fg-size, 100%);height:var(--mdc-ripple-fg-size, 100%)}.mdc-ripple-surface[data-mdc-ripple-is-unbounded],.mdc-ripple-upgraded--unbounded{overflow:visible}.mdc-ripple-surface[data-mdc-ripple-is-unbounded]::before,.mdc-ripple-surface[data-mdc-ripple-is-unbounded]::after,.mdc-ripple-upgraded--unbounded::before,.mdc-ripple-upgraded--unbounded::after{top:calc(50% - 50%);left:calc(50% - 50%);width:100%;height:100%}.mdc-ripple-surface[data-mdc-ripple-is-unbounded].mdc-ripple-upgraded::before,.mdc-ripple-surface[data-mdc-ripple-is-unbounded].mdc-ripple-upgraded::after,.mdc-ripple-upgraded--unbounded.mdc-ripple-upgraded::before,.mdc-ripple-upgraded--unbounded.mdc-ripple-upgraded::after{top:var(--mdc-ripple-top, calc(50% - 50%));left:var(--mdc-ripple-left, calc(50% - 50%));width:var(--mdc-ripple-fg-size, 100%);height:var(--mdc-ripple-fg-size, 100%)}.mdc-ripple-surface[data-mdc-ripple-is-unbounded].mdc-ripple-upgraded::after,.mdc-ripple-upgraded--unbounded.mdc-ripple-upgraded::after{width:var(--mdc-ripple-fg-size, 100%);height:var(--mdc-ripple-fg-size, 100%)}.mdc-ripple-surface::before,.mdc-ripple-surface::after{background-color:#000;background-color:var(--mdc-ripple-color, #000)}.mdc-ripple-surface:hover::before,.mdc-ripple-surface.mdc-ripple-surface--hover::before{opacity:0.04;opacity:var(--mdc-ripple-hover-opacity, 0.04)}.mdc-ripple-surface.mdc-ripple-upgraded--background-focused::before,.mdc-ripple-surface:not(.mdc-ripple-upgraded):focus::before{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-focus-opacity, 0.12)}.mdc-ripple-surface:not(.mdc-ripple-upgraded)::after{transition:opacity 150ms 
linear}.mdc-ripple-surface:not(.mdc-ripple-upgraded):active::after{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-press-opacity, 0.12)}.mdc-ripple-surface.mdc-ripple-upgraded{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity, 0.12)}.smui-ripple-surface--primary::before,.smui-ripple-surface--primary::after{background-color:#6a1b9a;background-color:var(--mdc-ripple-color, var(--mdc-theme-primary, #6a1b9a))}.smui-ripple-surface--primary:hover::before,.smui-ripple-surface--primary.mdc-ripple-surface--hover::before{opacity:0.04;opacity:var(--mdc-ripple-hover-opacity, 0.04)}.smui-ripple-surface--primary.mdc-ripple-upgraded--background-focused::before,.smui-ripple-surface--primary:not(.mdc-ripple-upgraded):focus::before{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-focus-opacity, 0.12)}.smui-ripple-surface--primary:not(.mdc-ripple-upgraded)::after{transition:opacity 150ms linear}.smui-ripple-surface--primary:not(.mdc-ripple-upgraded):active::after{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-press-opacity, 0.12)}.smui-ripple-surface--primary.mdc-ripple-upgraded{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity, 0.12)}.smui-ripple-surface--secondary::before,.smui-ripple-surface--secondary::after{background-color:#989895;background-color:var(--mdc-ripple-color, var(--mdc-theme-secondary, #989895))}.smui-ripple-surface--secondary:hover::before,.smui-ripple-surface--secondary.mdc-ripple-surface--hover::before{opacity:0.08;opacity:var(--mdc-ripple-hover-opacity, 0.08)}.smui-ripple-surface--secondary.mdc-ripple-upgraded--background-focused::before,.smui-ripple-surface--secondary:not(.mdc-ripple-upgraded):focus::before{transition-duration:75ms;opacity:0.24;opacity:var(--mdc-ripple-focus-opacity, 0.24)}.smui-ripple-surface--secondary:not(.mdc-ripple-upgraded)::after{transition:opacity 150ms linear}.smui-ripple-surface--secondary:not(.mdc-ripple-upgraded):active::after{transition-duration:75ms;opacity:0.24;opacity:var(--mdc-ripple-press-opacity, 0.24)}.smui-ripple-surface--secondary.mdc-ripple-upgraded{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity, 0.24)}.smui-button--color-secondary:not(:disabled){color:#989895}.smui-button--color-secondary:not(:disabled) .mdc-button__icon{color:#989895}.smui-button--color-secondary.mdc-button--raised:not(:disabled),.smui-button--color-secondary.mdc-button--unelevated:not(:disabled){background-color:#989895}.smui-button--color-secondary.mdc-button--raised:not(:disabled),.smui-button--color-secondary.mdc-button--unelevated:not(:disabled){color:rgba(0, 0, 0, 0.87);color:var(--mdc-theme-text-primary-on-light, rgba(0, 0, 0, 0.87))}.smui-button--color-secondary.mdc-button--raised .mdc-button__ripple::before,.smui-button--color-secondary.mdc-button--raised .mdc-button__ripple::after,.smui-button--color-secondary.mdc-button--unelevated .mdc-button__ripple::before,.smui-button--color-secondary.mdc-button--unelevated .mdc-button__ripple::after{background-color:rgba(0, 0, 0, 0.87);background-color:var(--mdc-ripple-color, var(--mdc-theme-text-primary-on-light, rgba(0, 0, 0, 0.87)))}.smui-button--color-secondary.mdc-button--raised:hover .mdc-button__ripple::before,.smui-button--color-secondary.mdc-button--raised.mdc-ripple-surface--hover .mdc-button__ripple::before,.smui-button--color-secondary.mdc-button--unelevated:hover .mdc-button__ripple::before,.smui-button--color-secondary.mdc-button--unelevated.mdc-ripple-surface--hover .mdc-button__ripple::before{opacity:0.04;opacity:var(--mdc-ripple-hover-opacity, 
0.04)}.smui-button--color-secondary.mdc-button--raised.mdc-ripple-upgraded--background-focused .mdc-button__ripple::before,.smui-button--color-secondary.mdc-button--raised:not(.mdc-ripple-upgraded):focus .mdc-button__ripple::before,.smui-button--color-secondary.mdc-button--unelevated.mdc-ripple-upgraded--background-focused .mdc-button__ripple::before,.smui-button--color-secondary.mdc-button--unelevated:not(.mdc-ripple-upgraded):focus .mdc-button__ripple::before{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-focus-opacity, 0.12)}.smui-button--color-secondary.mdc-button--raised:not(.mdc-ripple-upgraded) .mdc-button__ripple::after,.smui-button--color-secondary.mdc-button--unelevated:not(.mdc-ripple-upgraded) .mdc-button__ripple::after{transition:opacity 150ms linear}.smui-button--color-secondary.mdc-button--raised:not(.mdc-ripple-upgraded):active .mdc-button__ripple::after,.smui-button--color-secondary.mdc-button--unelevated:not(.mdc-ripple-upgraded):active .mdc-button__ripple::after{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-press-opacity, 0.12)}.smui-button--color-secondary.mdc-button--raised.mdc-ripple-upgraded,.smui-button--color-secondary.mdc-button--unelevated.mdc-ripple-upgraded{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity, 0.12)}.smui-button--color-secondary.mdc-button--raised:not(:disabled),.smui-button--color-secondary.mdc-button--unelevated:not(:disabled){color:#000}.smui-button--color-secondary.mdc-button--raised:not(:disabled) .mdc-button__icon,.smui-button--color-secondary.mdc-button--unelevated:not(:disabled) .mdc-button__icon{color:#000}.smui-button--color-secondary.mdc-button--outlined:not(:disabled){border-color:#989895}.smui-button--color-secondary .mdc-button__ripple::before,.smui-button--color-secondary .mdc-button__ripple::after{background-color:#989895;background-color:var(--mdc-ripple-color, #989895)}.smui-button--color-secondary:hover .mdc-button__ripple::before,.smui-button--color-secondary.mdc-ripple-surface--hover .mdc-button__ripple::before{opacity:0.08;opacity:var(--mdc-ripple-hover-opacity, 0.08)}.smui-button--color-secondary.mdc-ripple-upgraded--background-focused .mdc-button__ripple::before,.smui-button--color-secondary:not(.mdc-ripple-upgraded):focus .mdc-button__ripple::before{transition-duration:75ms;opacity:0.24;opacity:var(--mdc-ripple-focus-opacity, 0.24)}.smui-button--color-secondary:not(.mdc-ripple-upgraded) .mdc-button__ripple::after{transition:opacity 150ms linear}.smui-button--color-secondary:not(.mdc-ripple-upgraded):active .mdc-button__ripple::after{transition-duration:75ms;opacity:0.24;opacity:var(--mdc-ripple-press-opacity, 0.24)}.smui-button--color-secondary.mdc-ripple-upgraded{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity, 
0.24)}.smui-button__group{display:inline-flex}.smui-button__group>.mdc-button,.smui-button__group>.smui-button__group-item>.mdc-button{margin-left:0;margin-right:0}.smui-button__group>.mdc-button:not(:last-child),.smui-button__group>.mdc-button:not(:last-child)>.mdc-button__ripple,.smui-button__group>.smui-button__group-item:not(:last-child)>.mdc-button,.smui-button__group>.smui-button__group-item:not(:last-child)>.mdc-button>.mdc-button__ripple{border-top-right-radius:0;border-bottom-right-radius:0}.smui-button__group>.mdc-button:not(:first-child),.smui-button__group>.mdc-button:not(:first-child)>.mdc-button__ripple,.smui-button__group>.smui-button__group-item:not(:first-child)>.mdc-button,.smui-button__group>.smui-button__group-item:not(:first-child)>.mdc-button>.mdc-button__ripple{border-top-left-radius:0;border-bottom-left-radius:0}.smui-button__group.smui-button__group--raised{border-radius:4px;border-radius:var(--mdc-shape-small, 4px);box-shadow:0px 3px 1px -2px rgba(0, 0, 0, 0.2),0px 2px 2px 0px rgba(0, 0, 0, 0.14),0px 1px 5px 0px rgba(0,0,0,.12)}.smui-button__group>.mdc-button--raised,.smui-button__group>.smui-button__group-item>.mdc-button--raised{border-radius:4px;border-radius:var(--mdc-shape-small, 4px);box-shadow:0px 0px 0px 0px rgba(0, 0, 0, 0.2),0px 0px 0px 0px rgba(0, 0, 0, 0.14),0px 0px 0px 0px rgba(0,0,0,.12)}.smui-button__group>.mdc-button--raised .mdc-button__ripple,.smui-button__group>.smui-button__group-item>.mdc-button--raised .mdc-button__ripple{border-radius:4px;border-radius:var(--mdc-shape-small, 4px)}.smui-button__group>.mdc-button--raised:hover,.smui-button__group>.mdc-button--raised:focus,.smui-button__group>.smui-button__group-item>.mdc-button--raised:hover,.smui-button__group>.smui-button__group-item>.mdc-button--raised:focus{box-shadow:0px 0px 0px 0px rgba(0, 0, 0, 0.2),0px 0px 0px 0px rgba(0, 0, 0, 0.14),0px 0px 0px 0px rgba(0,0,0,.12)}.smui-button__group>.mdc-button--raised:active,.smui-button__group>.smui-button__group-item>.mdc-button--raised:active{box-shadow:0px 0px 0px 0px rgba(0, 0, 0, 0.2),0px 0px 0px 0px rgba(0, 0, 0, 0.14),0px 0px 0px 0px rgba(0,0,0,.12)}.smui-button__group>.mdc-button--raised:disabled,.smui-button__group>.smui-button__group-item>.mdc-button--raised:disabled{box-shadow:0px 0px 0px 0px rgba(0, 0, 0, 0.2),0px 0px 0px 0px rgba(0, 0, 0, 0.14),0px 0px 0px 0px rgba(0,0,0,.12)}.smui-button__group>.mdc-button--outlined:not(:last-child),.smui-button__group>.smui-button__group-item:not(:last-child)>.mdc-button--outlined{border-right-width:0}.mdc-checkbox{padding:calc((40px - 18px) / 2);padding:calc((var(--mdc-checkbox-ripple-size, 40px) - 18px) / 2);margin:calc((40px - 40px) / 2);margin:calc((var(--mdc-checkbox-touch-target-size, 40px) - 40px) / 2)}.mdc-checkbox .mdc-checkbox__ripple::before,.mdc-checkbox .mdc-checkbox__ripple::after{background-color:#000;background-color:var(--mdc-ripple-color, #000)}.mdc-checkbox:hover .mdc-checkbox__ripple::before,.mdc-checkbox.mdc-ripple-surface--hover .mdc-checkbox__ripple::before{opacity:0.04;opacity:var(--mdc-ripple-hover-opacity, 0.04)}.mdc-checkbox.mdc-ripple-upgraded--background-focused .mdc-checkbox__ripple::before,.mdc-checkbox:not(.mdc-ripple-upgraded):focus .mdc-checkbox__ripple::before{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-focus-opacity, 0.12)}.mdc-checkbox:not(.mdc-ripple-upgraded) .mdc-checkbox__ripple::after{transition:opacity 150ms linear}.mdc-checkbox:not(.mdc-ripple-upgraded):active 
.mdc-checkbox__ripple::after{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-press-opacity, 0.12)}.mdc-checkbox.mdc-ripple-upgraded{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity, 0.12)}.mdc-checkbox.mdc-checkbox--selected .mdc-checkbox__ripple::before,.mdc-checkbox.mdc-checkbox--selected .mdc-checkbox__ripple::after{background-color:#989895;background-color:var(--mdc-ripple-color, var(--mdc-theme-secondary, #989895))}.mdc-checkbox.mdc-checkbox--selected:hover .mdc-checkbox__ripple::before,.mdc-checkbox.mdc-checkbox--selected.mdc-ripple-surface--hover .mdc-checkbox__ripple::before{opacity:0.04;opacity:var(--mdc-ripple-hover-opacity, 0.04)}.mdc-checkbox.mdc-checkbox--selected.mdc-ripple-upgraded--background-focused .mdc-checkbox__ripple::before,.mdc-checkbox.mdc-checkbox--selected:not(.mdc-ripple-upgraded):focus .mdc-checkbox__ripple::before{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-focus-opacity, 0.12)}.mdc-checkbox.mdc-checkbox--selected:not(.mdc-ripple-upgraded) .mdc-checkbox__ripple::after{transition:opacity 150ms linear}.mdc-checkbox.mdc-checkbox--selected:not(.mdc-ripple-upgraded):active .mdc-checkbox__ripple::after{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-press-opacity, 0.12)}.mdc-checkbox.mdc-checkbox--selected.mdc-ripple-upgraded{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity, 0.12)}.mdc-checkbox.mdc-ripple-upgraded--background-focused.mdc-checkbox--selected .mdc-checkbox__ripple::before,.mdc-checkbox.mdc-ripple-upgraded--background-focused.mdc-checkbox--selected .mdc-checkbox__ripple::after{background-color:#989895;background-color:var(--mdc-ripple-color, var(--mdc-theme-secondary, #989895))}.mdc-checkbox .mdc-checkbox__background{top:calc((40px - 18px) / 2);top:calc((var(--mdc-checkbox-ripple-size, 40px) - 18px) / 2);left:calc((40px - 18px) / 2);left:calc((var(--mdc-checkbox-ripple-size, 40px) - 18px) / 2)}.mdc-checkbox .mdc-checkbox__native-control{top:calc((40px - 40px) / 2);top:calc((40px - var(--mdc-checkbox-touch-target-size, 40px)) / 2);right:calc((40px - 40px) / 2);right:calc((40px - var(--mdc-checkbox-touch-target-size, 40px)) / 2);left:calc((40px - 40px) / 2);left:calc((40px - var(--mdc-checkbox-touch-target-size, 40px)) / 2);width:40px;width:var(--mdc-checkbox-touch-target-size, 40px);height:40px;height:var(--mdc-checkbox-touch-target-size, 40px)}.mdc-checkbox .mdc-checkbox__native-control:enabled:not(:checked):not(:indeterminate):not([data-indeterminate=true])~.mdc-checkbox__background{border-color:rgba(0, 0, 0, 0.54);border-color:var(--mdc-checkbox-unchecked-color, rgba(0, 0, 0, 0.54));background-color:transparent}.mdc-checkbox .mdc-checkbox__native-control:enabled:checked~.mdc-checkbox__background,.mdc-checkbox .mdc-checkbox__native-control:enabled:indeterminate~.mdc-checkbox__background,.mdc-checkbox .mdc-checkbox__native-control[data-indeterminate=true]:enabled~.mdc-checkbox__background{border-color:#989895;border-color:var(--mdc-checkbox-checked-color, var(--mdc-theme-secondary, #989895));background-color:#989895;background-color:var(--mdc-checkbox-checked-color, var(--mdc-theme-secondary, #989895))}@keyframes mdc-checkbox-fade-in-background-8A000000FF98989500000000FF989895{0%{border-color:rgba(0, 0, 0, 0.54);border-color:var(--mdc-checkbox-unchecked-color, rgba(0, 0, 0, 0.54));background-color:transparent}50%{border-color:#989895;border-color:var(--mdc-checkbox-checked-color, var(--mdc-theme-secondary, #989895));background-color:#989895;background-color:var(--mdc-checkbox-checked-color, 
var(--mdc-theme-secondary, #989895))}}@keyframes mdc-checkbox-fade-out-background-8A000000FF98989500000000FF989895{0%,80%{border-color:#989895;border-color:var(--mdc-checkbox-checked-color, var(--mdc-theme-secondary, #989895));background-color:#989895;background-color:var(--mdc-checkbox-checked-color, var(--mdc-theme-secondary, #989895))}100%{border-color:rgba(0, 0, 0, 0.54);border-color:var(--mdc-checkbox-unchecked-color, rgba(0, 0, 0, 0.54));background-color:transparent}}.mdc-checkbox.mdc-checkbox--anim-unchecked-checked .mdc-checkbox__native-control:enabled~.mdc-checkbox__background,.mdc-checkbox.mdc-checkbox--anim-unchecked-indeterminate .mdc-checkbox__native-control:enabled~.mdc-checkbox__background{animation-name:mdc-checkbox-fade-in-background-8A000000FF98989500000000FF989895}.mdc-checkbox.mdc-checkbox--anim-checked-unchecked .mdc-checkbox__native-control:enabled~.mdc-checkbox__background,.mdc-checkbox.mdc-checkbox--anim-indeterminate-unchecked .mdc-checkbox__native-control:enabled~.mdc-checkbox__background{animation-name:mdc-checkbox-fade-out-background-8A000000FF98989500000000FF989895}.mdc-checkbox .mdc-checkbox__native-control[disabled]:not(:checked):not(:indeterminate):not([data-indeterminate=true])~.mdc-checkbox__background{border-color:rgba(0, 0, 0, 0.38);border-color:var(--mdc-checkbox-disabled-color, rgba(0, 0, 0, 0.38));background-color:transparent}.mdc-checkbox .mdc-checkbox__native-control[disabled]:checked~.mdc-checkbox__background,.mdc-checkbox .mdc-checkbox__native-control[disabled]:indeterminate~.mdc-checkbox__background,.mdc-checkbox .mdc-checkbox__native-control[data-indeterminate=true][disabled]~.mdc-checkbox__background{border-color:transparent;background-color:rgba(0, 0, 0, 0.38);background-color:var(--mdc-checkbox-disabled-color, rgba(0, 0, 0, 0.38))}.mdc-checkbox .mdc-checkbox__native-control:enabled~.mdc-checkbox__background .mdc-checkbox__checkmark{color:#000;color:var(--mdc-checkbox-ink-color, #000)}.mdc-checkbox .mdc-checkbox__native-control:enabled~.mdc-checkbox__background .mdc-checkbox__mixedmark{border-color:#000;border-color:var(--mdc-checkbox-ink-color, #000)}.mdc-checkbox .mdc-checkbox__native-control:disabled~.mdc-checkbox__background .mdc-checkbox__checkmark{color:#000;color:var(--mdc-checkbox-ink-color, #000)}.mdc-checkbox .mdc-checkbox__native-control:disabled~.mdc-checkbox__background .mdc-checkbox__mixedmark{border-color:#000;border-color:var(--mdc-checkbox-ink-color, #000)}@keyframes mdc-checkbox-unchecked-checked-checkmark-path{0%,50%{stroke-dashoffset:29.7833385}50%{animation-timing-function:cubic-bezier(0, 0, 0.2, 1)}100%{stroke-dashoffset:0}}@keyframes mdc-checkbox-unchecked-indeterminate-mixedmark{0%,68.2%{transform:scaleX(0)}68.2%{animation-timing-function:cubic-bezier(0, 0, 0, 1)}100%{transform:scaleX(1)}}@keyframes mdc-checkbox-checked-unchecked-checkmark-path{from{animation-timing-function:cubic-bezier(0.4, 0, 1, 1);opacity:1;stroke-dashoffset:0}to{opacity:0;stroke-dashoffset:-29.7833385}}@keyframes mdc-checkbox-checked-indeterminate-checkmark{from{animation-timing-function:cubic-bezier(0, 0, 0.2, 1);transform:rotate(0deg);opacity:1}to{transform:rotate(45deg);opacity:0}}@keyframes mdc-checkbox-indeterminate-checked-checkmark{from{animation-timing-function:cubic-bezier(0.14, 0, 0, 1);transform:rotate(45deg);opacity:0}to{transform:rotate(360deg);opacity:1}}@keyframes 
mdc-checkbox-checked-indeterminate-mixedmark{from{animation-timing-function:cubic-bezier(0, 0, 0.2, 1);transform:rotate(-45deg);opacity:0}to{transform:rotate(0deg);opacity:1}}@keyframes mdc-checkbox-indeterminate-checked-mixedmark{from{animation-timing-function:cubic-bezier(0.14, 0, 0, 1);transform:rotate(0deg);opacity:1}to{transform:rotate(315deg);opacity:0}}@keyframes mdc-checkbox-indeterminate-unchecked-mixedmark{0%{animation-timing-function:linear;transform:scaleX(1);opacity:1}32.8%,100%{transform:scaleX(0);opacity:0}}.mdc-checkbox{display:inline-block;position:relative;flex:0 0 18px;box-sizing:content-box;width:18px;height:18px;line-height:0;white-space:nowrap;cursor:pointer;vertical-align:bottom}.mdc-checkbox.mdc-ripple-upgraded--background-focused .mdc-checkbox__focus-ring,.mdc-checkbox:not(.mdc-ripple-upgraded):focus .mdc-checkbox__focus-ring{pointer-events:none;border:2px solid rgba(0,0,0,0);border-radius:6px;box-sizing:content-box;position:absolute;top:50%;left:50%;transform:translate(-50%, -50%);height:100%;width:100%}@media screen and (forced-colors: active){.mdc-checkbox.mdc-ripple-upgraded--background-focused .mdc-checkbox__focus-ring,.mdc-checkbox:not(.mdc-ripple-upgraded):focus .mdc-checkbox__focus-ring{border-color:CanvasText}}.mdc-checkbox.mdc-ripple-upgraded--background-focused .mdc-checkbox__focus-ring::after,.mdc-checkbox:not(.mdc-ripple-upgraded):focus .mdc-checkbox__focus-ring::after{content:"";border:2px solid rgba(0,0,0,0);border-radius:8px;display:block;position:absolute;top:50%;left:50%;transform:translate(-50%, -50%);height:calc(100% + 4px);width:calc(100% + 4px)}@media screen and (forced-colors: active){.mdc-checkbox.mdc-ripple-upgraded--background-focused .mdc-checkbox__focus-ring::after,.mdc-checkbox:not(.mdc-ripple-upgraded):focus .mdc-checkbox__focus-ring::after{border-color:CanvasText}}@media all and (-ms-high-contrast: none){.mdc-checkbox .mdc-checkbox__focus-ring{display:none}}@media screen and (forced-colors: active),(-ms-high-contrast: active){.mdc-checkbox__mixedmark{margin:0 1px}}.mdc-checkbox--disabled{cursor:default;pointer-events:none}.mdc-checkbox__background{display:inline-flex;position:absolute;align-items:center;justify-content:center;box-sizing:border-box;width:18px;height:18px;border:2px solid currentColor;border-radius:2px;background-color:rgba(0,0,0,0);pointer-events:none;will-change:background-color,border-color;transition:background-color 90ms 0ms cubic-bezier(0.4, 0, 0.6, 1),border-color 90ms 0ms cubic-bezier(0.4, 0, 0.6, 1)}.mdc-checkbox__checkmark{position:absolute;top:0;right:0;bottom:0;left:0;width:100%;opacity:0;transition:opacity 180ms 0ms cubic-bezier(0.4, 0, 0.6, 1)}.mdc-checkbox--upgraded .mdc-checkbox__checkmark{opacity:1}.mdc-checkbox__checkmark-path{transition:stroke-dashoffset 180ms 0ms cubic-bezier(0.4, 0, 0.6, 1);stroke:currentColor;stroke-width:3.12px;stroke-dashoffset:29.7833385;stroke-dasharray:29.7833385}.mdc-checkbox__mixedmark{width:100%;height:0;transform:scaleX(0) rotate(0deg);border-width:1px;border-style:solid;opacity:0;transition:opacity 90ms 0ms cubic-bezier(0.4, 0, 0.6, 1),transform 90ms 0ms cubic-bezier(0.4, 0, 0.6, 1)}.mdc-checkbox--anim-unchecked-checked .mdc-checkbox__background,.mdc-checkbox--anim-unchecked-indeterminate .mdc-checkbox__background,.mdc-checkbox--anim-checked-unchecked .mdc-checkbox__background,.mdc-checkbox--anim-indeterminate-unchecked .mdc-checkbox__background{animation-duration:180ms;animation-timing-function:linear}.mdc-checkbox--anim-unchecked-checked 
.mdc-checkbox__checkmark-path{animation:mdc-checkbox-unchecked-checked-checkmark-path 180ms linear 0s;transition:none}.mdc-checkbox--anim-unchecked-indeterminate .mdc-checkbox__mixedmark{animation:mdc-checkbox-unchecked-indeterminate-mixedmark 90ms linear 0s;transition:none}.mdc-checkbox--anim-checked-unchecked .mdc-checkbox__checkmark-path{animation:mdc-checkbox-checked-unchecked-checkmark-path 90ms linear 0s;transition:none}.mdc-checkbox--anim-checked-indeterminate .mdc-checkbox__checkmark{animation:mdc-checkbox-checked-indeterminate-checkmark 90ms linear 0s;transition:none}.mdc-checkbox--anim-checked-indeterminate .mdc-checkbox__mixedmark{animation:mdc-checkbox-checked-indeterminate-mixedmark 90ms linear 0s;transition:none}.mdc-checkbox--anim-indeterminate-checked .mdc-checkbox__checkmark{animation:mdc-checkbox-indeterminate-checked-checkmark 500ms linear 0s;transition:none}.mdc-checkbox--anim-indeterminate-checked .mdc-checkbox__mixedmark{animation:mdc-checkbox-indeterminate-checked-mixedmark 500ms linear 0s;transition:none}.mdc-checkbox--anim-indeterminate-unchecked .mdc-checkbox__mixedmark{animation:mdc-checkbox-indeterminate-unchecked-mixedmark 300ms linear 0s;transition:none}.mdc-checkbox__native-control:checked~.mdc-checkbox__background,.mdc-checkbox__native-control:indeterminate~.mdc-checkbox__background,.mdc-checkbox__native-control[data-indeterminate=true]~.mdc-checkbox__background{transition:border-color 90ms 0ms cubic-bezier(0, 0, 0.2, 1),background-color 90ms 0ms cubic-bezier(0, 0, 0.2, 1)}.mdc-checkbox__native-control:checked~.mdc-checkbox__background .mdc-checkbox__checkmark-path,.mdc-checkbox__native-control:indeterminate~.mdc-checkbox__background .mdc-checkbox__checkmark-path,.mdc-checkbox__native-control[data-indeterminate=true]~.mdc-checkbox__background .mdc-checkbox__checkmark-path{stroke-dashoffset:0}.mdc-checkbox__native-control{position:absolute;margin:0;padding:0;opacity:0;cursor:inherit}.mdc-checkbox__native-control:disabled{cursor:default;pointer-events:none}.mdc-checkbox--touch{margin:calc((48px - 40px) / 2);margin:calc((var(--mdc-checkbox-state-layer-size, 48px) - var(--mdc-checkbox-state-layer-size, 40px)) / 2)}.mdc-checkbox--touch .mdc-checkbox__native-control{top:calc((40px - 48px) / 2);top:calc((var(--mdc-checkbox-state-layer-size, 40px) - var(--mdc-checkbox-state-layer-size, 48px)) / 2);right:calc((40px - 48px) / 2);right:calc((var(--mdc-checkbox-state-layer-size, 40px) - var(--mdc-checkbox-state-layer-size, 48px)) / 2);left:calc((40px - 48px) / 2);left:calc((var(--mdc-checkbox-state-layer-size, 40px) - var(--mdc-checkbox-state-layer-size, 48px)) / 2);width:48px;width:var(--mdc-checkbox-state-layer-size, 48px);height:48px;height:var(--mdc-checkbox-state-layer-size, 48px)}.mdc-checkbox__native-control:checked~.mdc-checkbox__background .mdc-checkbox__checkmark{transition:opacity 180ms 0ms cubic-bezier(0, 0, 0.2, 1),transform 180ms 0ms cubic-bezier(0, 0, 0.2, 1);opacity:1}.mdc-checkbox__native-control:checked~.mdc-checkbox__background .mdc-checkbox__mixedmark{transform:scaleX(1) rotate(-45deg)}.mdc-checkbox__native-control:indeterminate~.mdc-checkbox__background .mdc-checkbox__checkmark,.mdc-checkbox__native-control[data-indeterminate=true]~.mdc-checkbox__background .mdc-checkbox__checkmark{transform:rotate(45deg);opacity:0;transition:opacity 90ms 0ms cubic-bezier(0.4, 0, 0.6, 1),transform 90ms 0ms cubic-bezier(0.4, 0, 0.6, 1)}.mdc-checkbox__native-control:indeterminate~.mdc-checkbox__background 
.mdc-checkbox__mixedmark,.mdc-checkbox__native-control[data-indeterminate=true]~.mdc-checkbox__background .mdc-checkbox__mixedmark{transform:scaleX(1) rotate(0deg);opacity:1}.mdc-checkbox.mdc-checkbox--upgraded .mdc-checkbox__background,.mdc-checkbox.mdc-checkbox--upgraded .mdc-checkbox__checkmark,.mdc-checkbox.mdc-checkbox--upgraded .mdc-checkbox__checkmark-path,.mdc-checkbox.mdc-checkbox--upgraded .mdc-checkbox__mixedmark{transition:none}.mdc-checkbox{--mdc-ripple-fg-size: 0;--mdc-ripple-left: 0;--mdc-ripple-top: 0;--mdc-ripple-fg-scale: 1;--mdc-ripple-fg-translate-end: 0;--mdc-ripple-fg-translate-start: 0;-webkit-tap-highlight-color:rgba(0,0,0,0);will-change:transform,opacity}.mdc-checkbox .mdc-checkbox__ripple::before,.mdc-checkbox .mdc-checkbox__ripple::after{position:absolute;border-radius:50%;opacity:0;pointer-events:none;content:""}.mdc-checkbox .mdc-checkbox__ripple::before{transition:opacity 15ms linear,background-color 15ms linear;z-index:1;z-index:var(--mdc-ripple-z-index, 1)}.mdc-checkbox .mdc-checkbox__ripple::after{z-index:0;z-index:var(--mdc-ripple-z-index, 0)}.mdc-checkbox.mdc-ripple-upgraded .mdc-checkbox__ripple::before{transform:scale(var(--mdc-ripple-fg-scale, 1))}.mdc-checkbox.mdc-ripple-upgraded .mdc-checkbox__ripple::after{top:0;left:0;transform:scale(0);transform-origin:center center}.mdc-checkbox.mdc-ripple-upgraded--unbounded .mdc-checkbox__ripple::after{top:var(--mdc-ripple-top, 0);left:var(--mdc-ripple-left, 0)}.mdc-checkbox.mdc-ripple-upgraded--foreground-activation .mdc-checkbox__ripple::after{animation:mdc-ripple-fg-radius-in 225ms forwards,mdc-ripple-fg-opacity-in 75ms forwards}.mdc-checkbox.mdc-ripple-upgraded--foreground-deactivation .mdc-checkbox__ripple::after{animation:mdc-ripple-fg-opacity-out 150ms;transform:translate(var(--mdc-ripple-fg-translate-end, 0)) scale(var(--mdc-ripple-fg-scale, 1))}.mdc-checkbox .mdc-checkbox__ripple::before,.mdc-checkbox .mdc-checkbox__ripple::after{top:calc(50% - 50%);left:calc(50% - 50%);width:100%;height:100%}.mdc-checkbox.mdc-ripple-upgraded .mdc-checkbox__ripple::before,.mdc-checkbox.mdc-ripple-upgraded .mdc-checkbox__ripple::after{top:var(--mdc-ripple-top, calc(50% - 50%));left:var(--mdc-ripple-left, calc(50% - 50%));width:var(--mdc-ripple-fg-size, 100%);height:var(--mdc-ripple-fg-size, 100%)}.mdc-checkbox.mdc-ripple-upgraded .mdc-checkbox__ripple::after{width:var(--mdc-ripple-fg-size, 100%);height:var(--mdc-ripple-fg-size, 100%)}.mdc-checkbox{z-index:0}.mdc-checkbox .mdc-checkbox__ripple::before,.mdc-checkbox .mdc-checkbox__ripple::after{z-index:-1;z-index:var(--mdc-ripple-z-index, -1)}.mdc-checkbox__ripple{position:absolute;top:0;left:0;width:100%;height:100%;pointer-events:none}@media screen and (forced-colors: active),(-ms-high-contrast: active){.mdc-checkbox .mdc-checkbox__native-control[disabled]:not(:checked):not(:indeterminate):not([data-indeterminate=true])~.mdc-checkbox__background{border-color:GrayText;border-color:var(--mdc-checkbox-disabled-unselected-icon-color, GrayText);background-color:transparent}.mdc-checkbox .mdc-checkbox__native-control[disabled]:checked~.mdc-checkbox__background,.mdc-checkbox .mdc-checkbox__native-control[disabled]:indeterminate~.mdc-checkbox__background,.mdc-checkbox .mdc-checkbox__native-control[data-indeterminate=true][disabled]~.mdc-checkbox__background{border-color:GrayText;background-color:GrayText;background-color:var(--mdc-checkbox-disabled-selected-icon-color, GrayText)}.mdc-checkbox .mdc-checkbox__native-control:enabled~.mdc-checkbox__background 
.mdc-checkbox__checkmark{color:ButtonText;color:var(--mdc-checkbox-selected-checkmark-color, ButtonText)}.mdc-checkbox .mdc-checkbox__native-control:enabled~.mdc-checkbox__background .mdc-checkbox__mixedmark{border-color:ButtonText;border-color:var(--mdc-checkbox-selected-checkmark-color, ButtonText)}.mdc-checkbox .mdc-checkbox__native-control:disabled~.mdc-checkbox__background .mdc-checkbox__checkmark{color:ButtonFace;color:var(--mdc-checkbox-disabled-selected-checkmark-color, ButtonFace)}.mdc-checkbox .mdc-checkbox__native-control:disabled~.mdc-checkbox__background .mdc-checkbox__mixedmark{border-color:ButtonFace;border-color:var(--mdc-checkbox-disabled-selected-checkmark-color, ButtonFace)}}.mdc-floating-label{-moz-osx-font-smoothing:grayscale;-webkit-font-smoothing:antialiased;font-family:Roboto, sans-serif;font-family:var(--mdc-typography-subtitle1-font-family, var(--mdc-typography-font-family, Roboto, sans-serif));font-size:1rem;font-size:var(--mdc-typography-subtitle1-font-size, 1rem);font-weight:400;font-weight:var(--mdc-typography-subtitle1-font-weight, 400);letter-spacing:0.009375em;letter-spacing:var(--mdc-typography-subtitle1-letter-spacing, 0.009375em);text-decoration:inherit;text-decoration:var(--mdc-typography-subtitle1-text-decoration, inherit);text-transform:inherit;text-transform:var(--mdc-typography-subtitle1-text-transform, inherit);position:absolute;left:0;-webkit-transform-origin:left top;transform-origin:left top;line-height:1.15rem;text-align:left;text-overflow:ellipsis;white-space:nowrap;cursor:text;overflow:hidden;will-change:transform;transition:transform 150ms cubic-bezier(0.4, 0, 0.2, 1),color 150ms cubic-bezier(0.4, 0, 0.2, 1)}[dir=rtl] .mdc-floating-label,.mdc-floating-label[dir=rtl]{right:0;left:auto;-webkit-transform-origin:right top;transform-origin:right top;text-align:right}.mdc-floating-label--float-above{cursor:auto}.mdc-floating-label--required::after{margin-left:1px;margin-right:0px;content:"*"}[dir=rtl] .mdc-floating-label--required::after,.mdc-floating-label--required[dir=rtl]::after{margin-left:0;margin-right:1px}.mdc-floating-label--float-above{transform:translateY(-106%) scale(0.75)}.mdc-floating-label--shake{animation:mdc-floating-label-shake-float-above-standard 250ms 1}@keyframes mdc-floating-label-shake-float-above-standard{0%{transform:translateX(calc(0 - 0%)) translateY(-106%) scale(0.75)}33%{animation-timing-function:cubic-bezier(0.5, 0, 0.701732, 0.495819);transform:translateX(calc(4% - 0%)) translateY(-106%) scale(0.75)}66%{animation-timing-function:cubic-bezier(0.302435, 0.381352, 0.55, 0.956352);transform:translateX(calc(-4% - 0%)) translateY(-106%) scale(0.75)}100%{transform:translateX(calc(0 - 0%)) translateY(-106%) scale(0.75)}}.mdc-line-ripple::before,.mdc-line-ripple::after{position:absolute;bottom:0;left:0;width:100%;border-bottom-style:solid;content:""}.mdc-line-ripple::before{border-bottom-width:1px}.mdc-line-ripple::before{z-index:1}.mdc-line-ripple::after{transform:scaleX(0);border-bottom-width:2px;opacity:0;z-index:2}.mdc-line-ripple::after{transition:transform 180ms cubic-bezier(0.4, 0, 0.2, 1),opacity 180ms cubic-bezier(0.4, 0, 0.2, 1)}.mdc-line-ripple--active::after{transform:scaleX(1);opacity:1}.mdc-line-ripple--deactivating::after{opacity:0}.mdc-notched-outline{display:flex;position:absolute;top:0;right:0;left:0;box-sizing:border-box;width:100%;max-width:100%;height:100%;text-align:left;pointer-events:none}[dir=rtl] 
.mdc-notched-outline,.mdc-notched-outline[dir=rtl]{text-align:right}.mdc-notched-outline__leading,.mdc-notched-outline__notch,.mdc-notched-outline__trailing{box-sizing:border-box;height:100%;border-top:1px solid;border-bottom:1px solid;pointer-events:none}.mdc-notched-outline__leading{border-left:1px solid;border-right:none;width:12px}[dir=rtl] .mdc-notched-outline__leading,.mdc-notched-outline__leading[dir=rtl]{border-left:none;border-right:1px solid}.mdc-notched-outline__trailing{border-left:none;border-right:1px solid;flex-grow:1}[dir=rtl] .mdc-notched-outline__trailing,.mdc-notched-outline__trailing[dir=rtl]{border-left:1px solid;border-right:none}.mdc-notched-outline__notch{flex:0 0 auto;width:auto;max-width:calc(100% - 12px * 2)}.mdc-notched-outline .mdc-floating-label{display:inline-block;position:relative;max-width:100%}.mdc-notched-outline .mdc-floating-label--float-above{text-overflow:clip}.mdc-notched-outline--upgraded .mdc-floating-label--float-above{max-width:133.3333333333%}.mdc-notched-outline--notched .mdc-notched-outline__notch{padding-left:0;padding-right:8px;border-top:none}[dir=rtl] .mdc-notched-outline--notched .mdc-notched-outline__notch,.mdc-notched-outline--notched .mdc-notched-outline__notch[dir=rtl]{padding-left:8px;padding-right:0}.mdc-notched-outline--no-label .mdc-notched-outline__notch{display:none}.mdc-select{display:inline-flex;position:relative}.mdc-select:not(.mdc-select--disabled) .mdc-select__selected-text{color:rgba(0, 0, 0, 0.87)}.mdc-select.mdc-select--disabled .mdc-select__selected-text{color:rgba(0, 0, 0, 0.38)}.mdc-select:not(.mdc-select--disabled) .mdc-floating-label{color:rgba(0, 0, 0, 0.6)}.mdc-select:not(.mdc-select--disabled).mdc-select--focused .mdc-floating-label{color:rgba(106, 27, 154, 0.87)}.mdc-select.mdc-select--disabled .mdc-floating-label{color:rgba(0, 0, 0, 0.38)}.mdc-select:not(.mdc-select--disabled) .mdc-select__dropdown-icon{fill:rgba(0, 0, 0, 0.54)}.mdc-select:not(.mdc-select--disabled).mdc-select--focused .mdc-select__dropdown-icon{fill:#6a1b9a;fill:var(--mdc-theme-primary, #6a1b9a)}.mdc-select.mdc-select--disabled .mdc-select__dropdown-icon{fill:rgba(0, 0, 0, 0.38)}.mdc-select:not(.mdc-select--disabled)+.mdc-select-helper-text{color:rgba(0, 0, 0, 0.6)}.mdc-select.mdc-select--disabled+.mdc-select-helper-text{color:rgba(0, 0, 0, 0.38)}.mdc-select:not(.mdc-select--disabled) .mdc-select__icon{color:rgba(0, 0, 0, 0.54)}.mdc-select.mdc-select--disabled .mdc-select__icon{color:rgba(0, 0, 0, 0.38)}@media screen and (forced-colors: active),(-ms-high-contrast: active){.mdc-select.mdc-select--disabled .mdc-select__selected-text{color:GrayText}.mdc-select.mdc-select--disabled .mdc-select__dropdown-icon{fill:red}.mdc-select.mdc-select--disabled .mdc-floating-label{color:GrayText}.mdc-select.mdc-select--disabled .mdc-line-ripple::before{border-bottom-color:GrayText}.mdc-select.mdc-select--disabled .mdc-notched-outline__leading,.mdc-select.mdc-select--disabled .mdc-notched-outline__notch,.mdc-select.mdc-select--disabled .mdc-notched-outline__trailing{border-color:GrayText}.mdc-select.mdc-select--disabled .mdc-select__icon{color:GrayText}.mdc-select.mdc-select--disabled+.mdc-select-helper-text{color:GrayText}}.mdc-select .mdc-floating-label{top:50%;transform:translateY(-50%);pointer-events:none}.mdc-select .mdc-select__anchor{padding-left:16px;padding-right:0}[dir=rtl] .mdc-select .mdc-select__anchor,.mdc-select .mdc-select__anchor[dir=rtl]{padding-left:0;padding-right:16px}.mdc-select.mdc-select--with-leading-icon 
.mdc-select__anchor{padding-left:0;padding-right:0}[dir=rtl] .mdc-select.mdc-select--with-leading-icon .mdc-select__anchor,.mdc-select.mdc-select--with-leading-icon .mdc-select__anchor[dir=rtl]{padding-left:0;padding-right:0}.mdc-select .mdc-select__icon{width:24px;height:24px;font-size:24px}.mdc-select .mdc-select__dropdown-icon{width:24px;height:24px}.mdc-select .mdc-select__menu .mdc-deprecated-list-item{padding-left:16px;padding-right:16px}[dir=rtl] .mdc-select .mdc-select__menu .mdc-deprecated-list-item,.mdc-select .mdc-select__menu .mdc-deprecated-list-item[dir=rtl]{padding-left:16px;padding-right:16px}.mdc-select .mdc-select__menu .mdc-deprecated-list-item__graphic{margin-left:0;margin-right:12px}[dir=rtl] .mdc-select .mdc-select__menu .mdc-deprecated-list-item__graphic,.mdc-select .mdc-select__menu .mdc-deprecated-list-item__graphic[dir=rtl]{margin-left:12px;margin-right:0}.mdc-select__dropdown-icon{margin-left:12px;margin-right:12px;display:inline-flex;position:relative;align-self:center;align-items:center;justify-content:center;flex-shrink:0;pointer-events:none}.mdc-select__dropdown-icon .mdc-select__dropdown-icon-active,.mdc-select__dropdown-icon .mdc-select__dropdown-icon-inactive{position:absolute;top:0;left:0}.mdc-select__dropdown-icon .mdc-select__dropdown-icon-graphic{width:41.6666666667%;height:20.8333333333%}.mdc-select__dropdown-icon .mdc-select__dropdown-icon-inactive{opacity:1;transition:opacity 75ms linear 75ms}.mdc-select__dropdown-icon .mdc-select__dropdown-icon-active{opacity:0;transition:opacity 75ms linear}[dir=rtl] .mdc-select__dropdown-icon,.mdc-select__dropdown-icon[dir=rtl]{margin-left:12px;margin-right:12px}.mdc-select--activated .mdc-select__dropdown-icon .mdc-select__dropdown-icon-inactive{opacity:0;transition:opacity 49.5ms linear}.mdc-select--activated .mdc-select__dropdown-icon .mdc-select__dropdown-icon-active{opacity:1;transition:opacity 100.5ms linear 49.5ms}.mdc-select__anchor{width:200px;min-width:0;flex:1 1 auto;position:relative;box-sizing:border-box;overflow:hidden;outline:none;cursor:pointer}.mdc-select__anchor .mdc-floating-label--float-above{transform:translateY(-106%) scale(0.75)}.mdc-select__selected-text-container{display:flex;appearance:none;pointer-events:none;box-sizing:border-box;width:auto;min-width:0;flex-grow:1;height:28px;border:none;outline:none;padding:0;background-color:rgba(0,0,0,0);color:inherit}.mdc-select__selected-text{-moz-osx-font-smoothing:grayscale;-webkit-font-smoothing:antialiased;font-family:Roboto, sans-serif;font-family:var(--mdc-typography-subtitle1-font-family, var(--mdc-typography-font-family, Roboto, sans-serif));font-size:1rem;font-size:var(--mdc-typography-subtitle1-font-size, 1rem);line-height:1.75rem;line-height:var(--mdc-typography-subtitle1-line-height, 1.75rem);font-weight:400;font-weight:var(--mdc-typography-subtitle1-font-weight, 400);letter-spacing:0.009375em;letter-spacing:var(--mdc-typography-subtitle1-letter-spacing, 0.009375em);text-decoration:inherit;text-decoration:var(--mdc-typography-subtitle1-text-decoration, inherit);text-transform:inherit;text-transform:var(--mdc-typography-subtitle1-text-transform, inherit);text-overflow:ellipsis;white-space:nowrap;overflow:hidden;display:block;width:100%;text-align:left}[dir=rtl] .mdc-select__selected-text,.mdc-select__selected-text[dir=rtl]{text-align:right}.mdc-select--invalid:not(.mdc-select--disabled) .mdc-floating-label{color:#b71c1c;color:var(--mdc-theme-error, #b71c1c)}.mdc-select--invalid:not(.mdc-select--disabled).mdc-select--focused 
.mdc-floating-label{color:#b71c1c;color:var(--mdc-theme-error, #b71c1c)}.mdc-select--invalid:not(.mdc-select--disabled).mdc-select--invalid+.mdc-select-helper-text--validation-msg{color:#b71c1c;color:var(--mdc-theme-error, #b71c1c)}.mdc-select--invalid:not(.mdc-select--disabled) .mdc-select__dropdown-icon{fill:#b71c1c;fill:var(--mdc-theme-error, #b71c1c)}.mdc-select--invalid:not(.mdc-select--disabled).mdc-select--focused .mdc-select__dropdown-icon{fill:#b71c1c;fill:var(--mdc-theme-error, #b71c1c)}.mdc-select--disabled{cursor:default;pointer-events:none}.mdc-select--with-leading-icon .mdc-select__menu .mdc-deprecated-list-item{padding-left:12px;padding-right:12px}[dir=rtl] .mdc-select--with-leading-icon .mdc-select__menu .mdc-deprecated-list-item,.mdc-select--with-leading-icon .mdc-select__menu .mdc-deprecated-list-item[dir=rtl]{padding-left:12px;padding-right:12px}@media screen and (forced-colors: active),(-ms-high-contrast: active){.mdc-select__menu::before{position:absolute;box-sizing:border-box;width:100%;height:100%;top:0;left:0;border:1px solid rgba(0,0,0,0);border-radius:inherit;content:"";pointer-events:none}}@media screen and (forced-colors: active)and (forced-colors: active),screen and (-ms-high-contrast: active)and (forced-colors: active){.mdc-select__menu::before{border-color:CanvasText}}.mdc-select__menu .mdc-deprecated-list .mdc-select__icon,.mdc-select__menu .mdc-list .mdc-select__icon{margin-left:0;margin-right:0}[dir=rtl] .mdc-select__menu .mdc-deprecated-list .mdc-select__icon,[dir=rtl] .mdc-select__menu .mdc-list .mdc-select__icon,.mdc-select__menu .mdc-deprecated-list .mdc-select__icon[dir=rtl],.mdc-select__menu .mdc-list .mdc-select__icon[dir=rtl]{margin-left:0;margin-right:0}.mdc-select__menu .mdc-deprecated-list .mdc-deprecated-list-item--selected,.mdc-select__menu .mdc-deprecated-list .mdc-deprecated-list-item--activated,.mdc-select__menu .mdc-list .mdc-deprecated-list-item--selected,.mdc-select__menu .mdc-list .mdc-deprecated-list-item--activated{color:#000;color:var(--mdc-theme-on-surface, #000)}.mdc-select__menu .mdc-deprecated-list .mdc-deprecated-list-item--selected .mdc-deprecated-list-item__graphic,.mdc-select__menu .mdc-deprecated-list .mdc-deprecated-list-item--activated .mdc-deprecated-list-item__graphic,.mdc-select__menu .mdc-list .mdc-deprecated-list-item--selected .mdc-deprecated-list-item__graphic,.mdc-select__menu .mdc-list .mdc-deprecated-list-item--activated .mdc-deprecated-list-item__graphic{color:#000;color:var(--mdc-theme-on-surface, #000)}.mdc-select__menu .mdc-list-item__start{display:inline-flex;align-items:center}.mdc-select__option{padding-left:16px;padding-right:16px}[dir=rtl] .mdc-select__option,.mdc-select__option[dir=rtl]{padding-left:16px;padding-right:16px}.mdc-select__one-line-option.mdc-list-item--with-one-line{height:48px}.mdc-select__two-line-option.mdc-list-item--with-two-lines{height:64px}.mdc-select__two-line-option.mdc-list-item--with-two-lines .mdc-list-item__start{margin-top:20px}.mdc-select__two-line-option.mdc-list-item--with-two-lines .mdc-list-item__primary-text{display:block;margin-top:0;line-height:normal;margin-bottom:-20px}.mdc-select__two-line-option.mdc-list-item--with-two-lines .mdc-list-item__primary-text::before{display:inline-block;width:0;height:28px;content:"";vertical-align:0}.mdc-select__two-line-option.mdc-list-item--with-two-lines 
.mdc-list-item__primary-text::after{display:inline-block;width:0;height:20px;content:"";vertical-align:-20px}.mdc-select__two-line-option.mdc-list-item--with-two-lines.mdc-list-item--with-trailing-meta .mdc-list-item__end{display:block;margin-top:0;line-height:normal}.mdc-select__two-line-option.mdc-list-item--with-two-lines.mdc-list-item--with-trailing-meta .mdc-list-item__end::before{display:inline-block;width:0;height:36px;content:"";vertical-align:0}.mdc-select__option-with-leading-content{padding-left:0;padding-right:12px}.mdc-select__option-with-leading-content.mdc-list-item{padding-left:0;padding-right:auto}[dir=rtl] .mdc-select__option-with-leading-content.mdc-list-item,.mdc-select__option-with-leading-content.mdc-list-item[dir=rtl]{padding-left:auto;padding-right:0}.mdc-select__option-with-leading-content .mdc-list-item__start{margin-left:12px;margin-right:0}[dir=rtl] .mdc-select__option-with-leading-content .mdc-list-item__start,.mdc-select__option-with-leading-content .mdc-list-item__start[dir=rtl]{margin-left:0;margin-right:12px}.mdc-select__option-with-leading-content .mdc-list-item__start{width:36px;height:24px}[dir=rtl] .mdc-select__option-with-leading-content,.mdc-select__option-with-leading-content[dir=rtl]{padding-left:12px;padding-right:0}.mdc-select__option-with-meta.mdc-list-item{padding-left:auto;padding-right:0}[dir=rtl] .mdc-select__option-with-meta.mdc-list-item,.mdc-select__option-with-meta.mdc-list-item[dir=rtl]{padding-left:0;padding-right:auto}.mdc-select__option-with-meta .mdc-list-item__end{margin-left:12px;margin-right:12px}[dir=rtl] .mdc-select__option-with-meta .mdc-list-item__end,.mdc-select__option-with-meta .mdc-list-item__end[dir=rtl]{margin-left:12px;margin-right:12px}.mdc-select--filled .mdc-select__anchor{height:56px;display:flex;align-items:baseline}.mdc-select--filled .mdc-select__anchor::before{display:inline-block;width:0;height:40px;content:"";vertical-align:0}.mdc-select--filled.mdc-select--no-label .mdc-select__anchor .mdc-select__selected-text::before{content:"​"}.mdc-select--filled.mdc-select--no-label .mdc-select__anchor .mdc-select__selected-text-container{height:100%;display:inline-flex;align-items:center}.mdc-select--filled.mdc-select--no-label .mdc-select__anchor::before{display:none}.mdc-select--filled .mdc-select__anchor{border-top-left-radius:4px;border-top-left-radius:var(--mdc-shape-small, 4px);border-top-right-radius:4px;border-top-right-radius:var(--mdc-shape-small, 4px);border-bottom-right-radius:0;border-bottom-left-radius:0}.mdc-select--filled:not(.mdc-select--disabled) .mdc-select__anchor{background-color:whitesmoke}.mdc-select--filled.mdc-select--disabled .mdc-select__anchor{background-color:#fafafa}.mdc-select--filled:not(.mdc-select--disabled) .mdc-line-ripple::before{border-bottom-color:rgba(0, 0, 0, 0.42)}.mdc-select--filled:not(.mdc-select--disabled):hover .mdc-line-ripple::before{border-bottom-color:rgba(0, 0, 0, 0.87)}.mdc-select--filled:not(.mdc-select--disabled) .mdc-line-ripple::after{border-bottom-color:#6a1b9a;border-bottom-color:var(--mdc-theme-primary, #6a1b9a)}.mdc-select--filled.mdc-select--disabled .mdc-line-ripple::before{border-bottom-color:rgba(0, 0, 0, 0.06)}.mdc-select--filled .mdc-floating-label{max-width:calc(100% - 64px)}.mdc-select--filled .mdc-floating-label--float-above{max-width:calc(100% / 0.75 - 64px / 0.75)}.mdc-select--filled 
.mdc-menu-surface--is-open-below{border-top-left-radius:0px;border-top-right-radius:0px}.mdc-select--filled.mdc-select--focused .mdc-line-ripple::after{transform:scale(1, 2);opacity:1}.mdc-select--filled .mdc-floating-label{left:16px;right:initial}[dir=rtl] .mdc-select--filled .mdc-floating-label,.mdc-select--filled .mdc-floating-label[dir=rtl]{left:initial;right:16px}.mdc-select--filled.mdc-select--with-leading-icon .mdc-floating-label{left:48px;right:initial}[dir=rtl] .mdc-select--filled.mdc-select--with-leading-icon .mdc-floating-label,.mdc-select--filled.mdc-select--with-leading-icon .mdc-floating-label[dir=rtl]{left:initial;right:48px}.mdc-select--filled.mdc-select--with-leading-icon .mdc-floating-label{max-width:calc(100% - 96px)}.mdc-select--filled.mdc-select--with-leading-icon .mdc-floating-label--float-above{max-width:calc(100% / 0.75 - 96px / 0.75)}.mdc-select--invalid:not(.mdc-select--disabled) .mdc-line-ripple::before{border-bottom-color:#b71c1c;border-bottom-color:var(--mdc-theme-error, #b71c1c)}.mdc-select--invalid:not(.mdc-select--disabled):hover .mdc-line-ripple::before{border-bottom-color:#b71c1c;border-bottom-color:var(--mdc-theme-error, #b71c1c)}.mdc-select--invalid:not(.mdc-select--disabled) .mdc-line-ripple::after{border-bottom-color:#b71c1c;border-bottom-color:var(--mdc-theme-error, #b71c1c)}.mdc-select--outlined{border:none}.mdc-select--outlined .mdc-select__anchor{height:56px}.mdc-select--outlined .mdc-select__anchor .mdc-floating-label--float-above{transform:translateY(-37.25px) scale(1)}.mdc-select--outlined .mdc-select__anchor .mdc-floating-label--float-above{font-size:.75rem}.mdc-select--outlined .mdc-select__anchor.mdc-notched-outline--upgraded .mdc-floating-label--float-above,.mdc-select--outlined .mdc-select__anchor .mdc-notched-outline--upgraded .mdc-floating-label--float-above{transform:translateY(-34.75px) scale(0.75)}.mdc-select--outlined .mdc-select__anchor.mdc-notched-outline--upgraded .mdc-floating-label--float-above,.mdc-select--outlined .mdc-select__anchor .mdc-notched-outline--upgraded .mdc-floating-label--float-above{font-size:1rem}.mdc-select--outlined .mdc-select__anchor .mdc-floating-label--shake{animation:mdc-floating-label-shake-float-above-select-outlined-56px 250ms 1}@keyframes mdc-floating-label-shake-float-above-select-outlined-56px{0%{transform:translateX(calc(0 - 0%)) translateY(-34.75px) scale(0.75)}33%{animation-timing-function:cubic-bezier(0.5, 0, 0.701732, 0.495819);transform:translateX(calc(4% - 0%)) translateY(-34.75px) scale(0.75)}66%{animation-timing-function:cubic-bezier(0.302435, 0.381352, 0.55, 0.956352);transform:translateX(calc(-4% - 0%)) translateY(-34.75px) scale(0.75)}100%{transform:translateX(calc(0 - 0%)) translateY(-34.75px) scale(0.75)}}.mdc-select--outlined .mdc-notched-outline .mdc-notched-outline__leading{border-top-left-radius:4px;border-top-left-radius:var(--mdc-shape-small, 4px);border-top-right-radius:0;border-bottom-right-radius:0;border-bottom-left-radius:4px;border-bottom-left-radius:var(--mdc-shape-small, 4px)}[dir=rtl] .mdc-select--outlined .mdc-notched-outline .mdc-notched-outline__leading,.mdc-select--outlined .mdc-notched-outline .mdc-notched-outline__leading[dir=rtl]{border-top-left-radius:0;border-top-right-radius:4px;border-top-right-radius:var(--mdc-shape-small, 4px);border-bottom-right-radius:4px;border-bottom-right-radius:var(--mdc-shape-small, 4px);border-bottom-left-radius:0}@supports(top: max(0%)){.mdc-select--outlined .mdc-notched-outline .mdc-notched-outline__leading{width:max(12px, 
var(--mdc-shape-small, 4px))}}@supports(top: max(0%)){.mdc-select--outlined .mdc-notched-outline .mdc-notched-outline__notch{max-width:calc(100% - max(12px, var(--mdc-shape-small, 4px))*2)}}.mdc-select--outlined .mdc-notched-outline .mdc-notched-outline__trailing{border-top-left-radius:0;border-top-right-radius:4px;border-top-right-radius:var(--mdc-shape-small, 4px);border-bottom-right-radius:4px;border-bottom-right-radius:var(--mdc-shape-small, 4px);border-bottom-left-radius:0}[dir=rtl] .mdc-select--outlined .mdc-notched-outline .mdc-notched-outline__trailing,.mdc-select--outlined .mdc-notched-outline .mdc-notched-outline__trailing[dir=rtl]{border-top-left-radius:4px;border-top-left-radius:var(--mdc-shape-small, 4px);border-top-right-radius:0;border-bottom-right-radius:0;border-bottom-left-radius:4px;border-bottom-left-radius:var(--mdc-shape-small, 4px)}@supports(top: max(0%)){.mdc-select--outlined .mdc-select__anchor{padding-left:max(16px, calc(var(--mdc-shape-small, 4px) + 4px))}}[dir=rtl] .mdc-select--outlined .mdc-select__anchor,.mdc-select--outlined .mdc-select__anchor[dir=rtl]{padding-left:0}@supports(top: max(0%)){[dir=rtl] .mdc-select--outlined .mdc-select__anchor,.mdc-select--outlined .mdc-select__anchor[dir=rtl]{padding-right:max(16px, calc(var(--mdc-shape-small, 4px) + 4px))}}@supports(top: max(0%)){.mdc-select--outlined+.mdc-select-helper-text{margin-left:max(16px, calc(var(--mdc-shape-small, 4px) + 4px))}}[dir=rtl] .mdc-select--outlined+.mdc-select-helper-text,.mdc-select--outlined+.mdc-select-helper-text[dir=rtl]{margin-left:0}@supports(top: max(0%)){[dir=rtl] .mdc-select--outlined+.mdc-select-helper-text,.mdc-select--outlined+.mdc-select-helper-text[dir=rtl]{margin-right:max(16px, calc(var(--mdc-shape-small, 4px) + 4px))}}.mdc-select--outlined:not(.mdc-select--disabled) .mdc-select__anchor{background-color:transparent}.mdc-select--outlined.mdc-select--disabled .mdc-select__anchor{background-color:transparent}.mdc-select--outlined:not(.mdc-select--disabled) .mdc-notched-outline__leading,.mdc-select--outlined:not(.mdc-select--disabled) .mdc-notched-outline__notch,.mdc-select--outlined:not(.mdc-select--disabled) .mdc-notched-outline__trailing{border-color:rgba(0, 0, 0, 0.38)}.mdc-select--outlined:not(.mdc-select--disabled):not(.mdc-select--focused) .mdc-select__anchor:hover .mdc-notched-outline .mdc-notched-outline__leading,.mdc-select--outlined:not(.mdc-select--disabled):not(.mdc-select--focused) .mdc-select__anchor:hover .mdc-notched-outline .mdc-notched-outline__notch,.mdc-select--outlined:not(.mdc-select--disabled):not(.mdc-select--focused) .mdc-select__anchor:hover .mdc-notched-outline .mdc-notched-outline__trailing{border-color:rgba(0, 0, 0, 0.87)}.mdc-select--outlined:not(.mdc-select--disabled).mdc-select--focused .mdc-notched-outline .mdc-notched-outline__leading,.mdc-select--outlined:not(.mdc-select--disabled).mdc-select--focused .mdc-notched-outline .mdc-notched-outline__notch,.mdc-select--outlined:not(.mdc-select--disabled).mdc-select--focused .mdc-notched-outline .mdc-notched-outline__trailing{border-width:2px}.mdc-select--outlined:not(.mdc-select--disabled).mdc-select--focused .mdc-notched-outline .mdc-notched-outline__leading,.mdc-select--outlined:not(.mdc-select--disabled).mdc-select--focused .mdc-notched-outline .mdc-notched-outline__notch,.mdc-select--outlined:not(.mdc-select--disabled).mdc-select--focused .mdc-notched-outline .mdc-notched-outline__trailing{border-color:#6a1b9a;border-color:var(--mdc-theme-primary, 
#6a1b9a)}.mdc-select--outlined.mdc-select--disabled .mdc-notched-outline__leading,.mdc-select--outlined.mdc-select--disabled .mdc-notched-outline__notch,.mdc-select--outlined.mdc-select--disabled .mdc-notched-outline__trailing{border-color:rgba(0, 0, 0, 0.06)}.mdc-select--outlined .mdc-select__anchor :not(.mdc-notched-outline--notched) .mdc-notched-outline__notch{max-width:calc(100% - 60px)}.mdc-select--outlined .mdc-select__anchor{display:flex;align-items:baseline;overflow:visible}.mdc-select--outlined .mdc-select__anchor .mdc-floating-label--shake{animation:mdc-floating-label-shake-float-above-select-outlined 250ms 1}.mdc-select--outlined .mdc-select__anchor .mdc-floating-label--float-above{transform:translateY(-37.25px) scale(1)}.mdc-select--outlined .mdc-select__anchor .mdc-floating-label--float-above{font-size:.75rem}.mdc-select--outlined .mdc-select__anchor.mdc-notched-outline--upgraded .mdc-floating-label--float-above,.mdc-select--outlined .mdc-select__anchor .mdc-notched-outline--upgraded .mdc-floating-label--float-above{transform:translateY(-34.75px) scale(0.75)}.mdc-select--outlined .mdc-select__anchor.mdc-notched-outline--upgraded .mdc-floating-label--float-above,.mdc-select--outlined .mdc-select__anchor .mdc-notched-outline--upgraded .mdc-floating-label--float-above{font-size:1rem}.mdc-select--outlined .mdc-select__anchor .mdc-notched-outline--notched .mdc-notched-outline__notch{padding-top:1px}.mdc-select--outlined .mdc-select__anchor .mdc-select__selected-text::before{content:"​"}.mdc-select--outlined .mdc-select__anchor .mdc-select__selected-text-container{height:100%;display:inline-flex;align-items:center}.mdc-select--outlined .mdc-select__anchor::before{display:none}.mdc-select--outlined .mdc-select__selected-text-container{display:flex;border:none;z-index:1;background-color:rgba(0,0,0,0)}.mdc-select--outlined .mdc-select__icon{z-index:2}.mdc-select--outlined .mdc-floating-label{line-height:1.15rem;left:4px;right:initial}[dir=rtl] .mdc-select--outlined .mdc-floating-label,.mdc-select--outlined .mdc-floating-label[dir=rtl]{left:initial;right:4px}.mdc-select--outlined.mdc-select--focused .mdc-notched-outline--notched .mdc-notched-outline__notch{padding-top:2px}.mdc-select--outlined.mdc-select--invalid:not(.mdc-select--disabled) .mdc-notched-outline__leading,.mdc-select--outlined.mdc-select--invalid:not(.mdc-select--disabled) .mdc-notched-outline__notch,.mdc-select--outlined.mdc-select--invalid:not(.mdc-select--disabled) .mdc-notched-outline__trailing{border-color:#b71c1c;border-color:var(--mdc-theme-error, #b71c1c)}.mdc-select--outlined.mdc-select--invalid:not(.mdc-select--disabled):not(.mdc-select--focused) .mdc-select__anchor:hover .mdc-notched-outline .mdc-notched-outline__leading,.mdc-select--outlined.mdc-select--invalid:not(.mdc-select--disabled):not(.mdc-select--focused) .mdc-select__anchor:hover .mdc-notched-outline .mdc-notched-outline__notch,.mdc-select--outlined.mdc-select--invalid:not(.mdc-select--disabled):not(.mdc-select--focused) .mdc-select__anchor:hover .mdc-notched-outline .mdc-notched-outline__trailing{border-color:#b71c1c;border-color:var(--mdc-theme-error, #b71c1c)}.mdc-select--outlined.mdc-select--invalid:not(.mdc-select--disabled).mdc-select--focused .mdc-notched-outline .mdc-notched-outline__leading,.mdc-select--outlined.mdc-select--invalid:not(.mdc-select--disabled).mdc-select--focused .mdc-notched-outline .mdc-notched-outline__notch,.mdc-select--outlined.mdc-select--invalid:not(.mdc-select--disabled).mdc-select--focused .mdc-notched-outline 
.mdc-notched-outline__trailing{border-width:2px}.mdc-select--outlined.mdc-select--invalid:not(.mdc-select--disabled).mdc-select--focused .mdc-notched-outline .mdc-notched-outline__leading,.mdc-select--outlined.mdc-select--invalid:not(.mdc-select--disabled).mdc-select--focused .mdc-notched-outline .mdc-notched-outline__notch,.mdc-select--outlined.mdc-select--invalid:not(.mdc-select--disabled).mdc-select--focused .mdc-notched-outline .mdc-notched-outline__trailing{border-color:#b71c1c;border-color:var(--mdc-theme-error, #b71c1c)}.mdc-select--outlined.mdc-select--with-leading-icon .mdc-floating-label{left:36px;right:initial}[dir=rtl] .mdc-select--outlined.mdc-select--with-leading-icon .mdc-floating-label,.mdc-select--outlined.mdc-select--with-leading-icon .mdc-floating-label[dir=rtl]{left:initial;right:36px}.mdc-select--outlined.mdc-select--with-leading-icon .mdc-floating-label--float-above{transform:translateY(-37.25px) translateX(-32px) scale(1)}[dir=rtl] .mdc-select--outlined.mdc-select--with-leading-icon .mdc-floating-label--float-above,.mdc-select--outlined.mdc-select--with-leading-icon .mdc-floating-label--float-above[dir=rtl]{transform:translateY(-37.25px) translateX(32px) scale(1)}.mdc-select--outlined.mdc-select--with-leading-icon .mdc-floating-label--float-above{font-size:.75rem}.mdc-select--outlined.mdc-select--with-leading-icon.mdc-notched-outline--upgraded .mdc-floating-label--float-above,.mdc-select--outlined.mdc-select--with-leading-icon .mdc-notched-outline--upgraded .mdc-floating-label--float-above{transform:translateY(-34.75px) translateX(-32px) scale(0.75)}[dir=rtl] .mdc-select--outlined.mdc-select--with-leading-icon.mdc-notched-outline--upgraded .mdc-floating-label--float-above,[dir=rtl] .mdc-select--outlined.mdc-select--with-leading-icon .mdc-notched-outline--upgraded .mdc-floating-label--float-above,.mdc-select--outlined.mdc-select--with-leading-icon.mdc-notched-outline--upgraded .mdc-floating-label--float-above[dir=rtl],.mdc-select--outlined.mdc-select--with-leading-icon .mdc-notched-outline--upgraded .mdc-floating-label--float-above[dir=rtl]{transform:translateY(-34.75px) translateX(32px) scale(0.75)}.mdc-select--outlined.mdc-select--with-leading-icon.mdc-notched-outline--upgraded .mdc-floating-label--float-above,.mdc-select--outlined.mdc-select--with-leading-icon .mdc-notched-outline--upgraded .mdc-floating-label--float-above{font-size:1rem}.mdc-select--outlined.mdc-select--with-leading-icon .mdc-floating-label--shake{animation:mdc-floating-label-shake-float-above-select-outlined-leading-icon-56px 250ms 1}@keyframes mdc-floating-label-shake-float-above-select-outlined-leading-icon-56px{0%{transform:translateX(calc(0 - 32px)) translateY(-34.75px) scale(0.75)}33%{animation-timing-function:cubic-bezier(0.5, 0, 0.701732, 0.495819);transform:translateX(calc(4% - 32px)) translateY(-34.75px) scale(0.75)}66%{animation-timing-function:cubic-bezier(0.302435, 0.381352, 0.55, 0.956352);transform:translateX(calc(-4% - 32px)) translateY(-34.75px) scale(0.75)}100%{transform:translateX(calc(0 - 32px)) translateY(-34.75px) scale(0.75)}}[dir=rtl] .mdc-select--outlined.mdc-select--with-leading-icon .mdc-floating-label--shake,.mdc-select--outlined.mdc-select--with-leading-icon[dir=rtl] .mdc-floating-label--shake{animation:mdc-floating-label-shake-float-above-select-outlined-leading-icon-56px-rtl 250ms 1}@keyframes mdc-floating-label-shake-float-above-select-outlined-leading-icon-56px-rtl{0%{transform:translateX(calc(0 - -32px)) translateY(-34.75px) 
scale(0.75)}33%{animation-timing-function:cubic-bezier(0.5, 0, 0.701732, 0.495819);transform:translateX(calc(4% - -32px)) translateY(-34.75px) scale(0.75)}66%{animation-timing-function:cubic-bezier(0.302435, 0.381352, 0.55, 0.956352);transform:translateX(calc(-4% - -32px)) translateY(-34.75px) scale(0.75)}100%{transform:translateX(calc(0 - -32px)) translateY(-34.75px) scale(0.75)}}.mdc-select--outlined.mdc-select--with-leading-icon .mdc-select__anchor :not(.mdc-notched-outline--notched) .mdc-notched-outline__notch{max-width:calc(100% - 96px)}.mdc-select--outlined .mdc-menu-surface{margin-bottom:8px}.mdc-select--outlined.mdc-select--no-label .mdc-menu-surface,.mdc-select--outlined .mdc-menu-surface--is-open-below{margin-bottom:0}.mdc-select__anchor{--mdc-ripple-fg-size: 0;--mdc-ripple-left: 0;--mdc-ripple-top: 0;--mdc-ripple-fg-scale: 1;--mdc-ripple-fg-translate-end: 0;--mdc-ripple-fg-translate-start: 0;-webkit-tap-highlight-color:rgba(0,0,0,0);will-change:transform,opacity}.mdc-select__anchor .mdc-select__ripple::before,.mdc-select__anchor .mdc-select__ripple::after{position:absolute;border-radius:50%;opacity:0;pointer-events:none;content:""}.mdc-select__anchor .mdc-select__ripple::before{transition:opacity 15ms linear,background-color 15ms linear;z-index:1;z-index:var(--mdc-ripple-z-index, 1)}.mdc-select__anchor .mdc-select__ripple::after{z-index:0;z-index:var(--mdc-ripple-z-index, 0)}.mdc-select__anchor.mdc-ripple-upgraded .mdc-select__ripple::before{transform:scale(var(--mdc-ripple-fg-scale, 1))}.mdc-select__anchor.mdc-ripple-upgraded .mdc-select__ripple::after{top:0;left:0;transform:scale(0);transform-origin:center center}.mdc-select__anchor.mdc-ripple-upgraded--unbounded .mdc-select__ripple::after{top:var(--mdc-ripple-top, 0);left:var(--mdc-ripple-left, 0)}.mdc-select__anchor.mdc-ripple-upgraded--foreground-activation .mdc-select__ripple::after{animation:mdc-ripple-fg-radius-in 225ms forwards,mdc-ripple-fg-opacity-in 75ms forwards}.mdc-select__anchor.mdc-ripple-upgraded--foreground-deactivation .mdc-select__ripple::after{animation:mdc-ripple-fg-opacity-out 150ms;transform:translate(var(--mdc-ripple-fg-translate-end, 0)) scale(var(--mdc-ripple-fg-scale, 1))}.mdc-select__anchor .mdc-select__ripple::before,.mdc-select__anchor .mdc-select__ripple::after{top:calc(50% - 100%);left:calc(50% - 100%);width:200%;height:200%}.mdc-select__anchor.mdc-ripple-upgraded .mdc-select__ripple::after{width:var(--mdc-ripple-fg-size, 100%);height:var(--mdc-ripple-fg-size, 100%)}.mdc-select__anchor .mdc-select__ripple::before,.mdc-select__anchor .mdc-select__ripple::after{background-color:rgba(0, 0, 0, 0.87);background-color:var(--mdc-ripple-color, rgba(0, 0, 0, 0.87))}.mdc-select__anchor:hover .mdc-select__ripple::before,.mdc-select__anchor.mdc-ripple-surface--hover .mdc-select__ripple::before{opacity:0.04;opacity:var(--mdc-ripple-hover-opacity, 0.04)}.mdc-select__anchor.mdc-ripple-upgraded--background-focused .mdc-select__ripple::before,.mdc-select__anchor:not(.mdc-ripple-upgraded):focus .mdc-select__ripple::before{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-focus-opacity, 0.12)}.mdc-select__anchor .mdc-select__ripple{position:absolute;top:0;left:0;width:100%;height:100%;pointer-events:none}.mdc-select__menu .mdc-deprecated-list .mdc-deprecated-list-item--selected .mdc-deprecated-list-item__ripple::before,.mdc-select__menu .mdc-deprecated-list .mdc-deprecated-list-item--selected .mdc-deprecated-list-item__ripple::after{background-color:#000;background-color:var(--mdc-ripple-color, 
var(--mdc-theme-on-surface, #000))}.mdc-select__menu .mdc-deprecated-list .mdc-deprecated-list-item--selected:hover .mdc-deprecated-list-item__ripple::before,.mdc-select__menu .mdc-deprecated-list .mdc-deprecated-list-item--selected.mdc-ripple-surface--hover .mdc-deprecated-list-item__ripple::before{opacity:0.04;opacity:var(--mdc-ripple-hover-opacity, 0.04)}.mdc-select__menu .mdc-deprecated-list .mdc-deprecated-list-item--selected.mdc-ripple-upgraded--background-focused .mdc-deprecated-list-item__ripple::before,.mdc-select__menu .mdc-deprecated-list .mdc-deprecated-list-item--selected:not(.mdc-ripple-upgraded):focus .mdc-deprecated-list-item__ripple::before{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-focus-opacity, 0.12)}.mdc-select__menu .mdc-deprecated-list .mdc-deprecated-list-item--selected:not(.mdc-ripple-upgraded) .mdc-deprecated-list-item__ripple::after{transition:opacity 150ms linear}.mdc-select__menu .mdc-deprecated-list .mdc-deprecated-list-item--selected:not(.mdc-ripple-upgraded):active .mdc-deprecated-list-item__ripple::after{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-press-opacity, 0.12)}.mdc-select__menu .mdc-deprecated-list .mdc-deprecated-list-item--selected.mdc-ripple-upgraded{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity, 0.12)}.mdc-select__menu .mdc-deprecated-list .mdc-deprecated-list-item--selected .mdc-list-item__ripple::before,.mdc-select__menu .mdc-deprecated-list .mdc-deprecated-list-item--selected .mdc-list-item__ripple::after{background-color:#000;background-color:var(--mdc-ripple-color, var(--mdc-theme-on-surface, #000))}.mdc-select__menu .mdc-deprecated-list .mdc-deprecated-list-item--selected:hover .mdc-list-item__ripple::before,.mdc-select__menu .mdc-deprecated-list .mdc-deprecated-list-item--selected.mdc-ripple-surface--hover .mdc-list-item__ripple::before{opacity:0.04;opacity:var(--mdc-ripple-hover-opacity, 0.04)}.mdc-select__menu .mdc-deprecated-list .mdc-deprecated-list-item--selected.mdc-ripple-upgraded--background-focused .mdc-list-item__ripple::before,.mdc-select__menu .mdc-deprecated-list .mdc-deprecated-list-item--selected:not(.mdc-ripple-upgraded):focus .mdc-list-item__ripple::before{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-focus-opacity, 0.12)}.mdc-select__menu .mdc-deprecated-list .mdc-deprecated-list-item--selected:not(.mdc-ripple-upgraded) .mdc-list-item__ripple::after{transition:opacity 150ms linear}.mdc-select__menu .mdc-deprecated-list .mdc-deprecated-list-item--selected:not(.mdc-ripple-upgraded):active .mdc-list-item__ripple::after{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-press-opacity, 0.12)}.mdc-select__menu .mdc-deprecated-list .mdc-deprecated-list-item--selected.mdc-ripple-upgraded{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity, 0.12)}.mdc-select-helper-text{margin:0;margin-left:16px;margin-right:16px;-moz-osx-font-smoothing:grayscale;-webkit-font-smoothing:antialiased;font-family:Roboto, sans-serif;font-family:var(--mdc-typography-caption-font-family, var(--mdc-typography-font-family, Roboto, sans-serif));font-size:0.75rem;font-size:var(--mdc-typography-caption-font-size, 0.75rem);line-height:1.25rem;line-height:var(--mdc-typography-caption-line-height, 1.25rem);font-weight:400;font-weight:var(--mdc-typography-caption-font-weight, 400);letter-spacing:0.0333333333em;letter-spacing:var(--mdc-typography-caption-letter-spacing, 0.0333333333em);text-decoration:inherit;text-decoration:var(--mdc-typography-caption-text-decoration, 
inherit);text-transform:inherit;text-transform:var(--mdc-typography-caption-text-transform, inherit);display:block;margin-top:0;line-height:normal}[dir=rtl] .mdc-select-helper-text,.mdc-select-helper-text[dir=rtl]{margin-left:16px;margin-right:16px}.mdc-select-helper-text::before{display:inline-block;width:0;height:16px;content:"";vertical-align:0}.mdc-select-helper-text--validation-msg{opacity:0;transition:opacity 180ms cubic-bezier(0.4, 0, 0.2, 1)}.mdc-select--invalid+.mdc-select-helper-text--validation-msg,.mdc-select-helper-text--validation-msg-persistent{opacity:1}.mdc-select--with-leading-icon .mdc-select__icon{display:inline-block;box-sizing:border-box;border:none;text-decoration:none;cursor:pointer;user-select:none;flex-shrink:0;align-self:center;background-color:rgba(0,0,0,0);fill:currentColor}.mdc-select--with-leading-icon .mdc-select__icon{margin-left:12px;margin-right:12px}[dir=rtl] .mdc-select--with-leading-icon .mdc-select__icon,.mdc-select--with-leading-icon .mdc-select__icon[dir=rtl]{margin-left:12px;margin-right:12px}.mdc-select__icon:not([tabindex]),.mdc-select__icon[tabindex="-1"]{cursor:default;pointer-events:none}.smui-floating-label--remove-transition{transition:unset !important}.smui-floating-label--force-size{position:absolute !important;transform:unset !important}.mdc-deprecated-list{-moz-osx-font-smoothing:grayscale;-webkit-font-smoothing:antialiased;font-family:Roboto, sans-serif;font-family:var(--mdc-typography-subtitle1-font-family, var(--mdc-typography-font-family, Roboto, sans-serif));font-size:1rem;font-size:var(--mdc-typography-subtitle1-font-size, 1rem);line-height:1.75rem;line-height:var(--mdc-typography-subtitle1-line-height, 1.75rem);font-weight:400;font-weight:var(--mdc-typography-subtitle1-font-weight, 400);letter-spacing:0.009375em;letter-spacing:var(--mdc-typography-subtitle1-letter-spacing, 0.009375em);text-decoration:inherit;text-decoration:var(--mdc-typography-subtitle1-text-decoration, inherit);text-transform:inherit;text-transform:var(--mdc-typography-subtitle1-text-transform, inherit);line-height:1.5rem;margin:0;padding:8px 0;list-style-type:none;color:rgba(0, 0, 0, 0.87);color:var(--mdc-theme-text-primary-on-background, rgba(0, 0, 0, 0.87))}.mdc-deprecated-list:focus{outline:none}.mdc-deprecated-list-item{height:48px}.mdc-deprecated-list-item__secondary-text{color:rgba(0, 0, 0, 0.54);color:var(--mdc-theme-text-secondary-on-background, rgba(0, 0, 0, 0.54))}.mdc-deprecated-list-item__graphic{background-color:transparent}.mdc-deprecated-list-item__graphic{color:rgba(0, 0, 0, 0.38);color:var(--mdc-theme-text-icon-on-background, rgba(0, 0, 0, 0.38))}.mdc-deprecated-list-item__meta{color:rgba(0, 0, 0, 0.38);color:var(--mdc-theme-text-hint-on-background, rgba(0, 0, 0, 0.38))}.mdc-deprecated-list-group__subheader{color:rgba(0, 0, 0, 0.87);color:var(--mdc-theme-text-primary-on-background, rgba(0, 0, 0, 0.87))}.mdc-deprecated-list-item--disabled .mdc-deprecated-list-item__text{opacity:.38}.mdc-deprecated-list-item--disabled .mdc-deprecated-list-item__text,.mdc-deprecated-list-item--disabled .mdc-deprecated-list-item__primary-text,.mdc-deprecated-list-item--disabled .mdc-deprecated-list-item__secondary-text{color:#000;color:var(--mdc-theme-on-surface, #000)}.mdc-deprecated-list-item--selected,.mdc-deprecated-list-item--activated{color:#6a1b9a;color:var(--mdc-theme-primary, #6a1b9a)}.mdc-deprecated-list-item--selected .mdc-deprecated-list-item__graphic,.mdc-deprecated-list-item--activated 
.mdc-deprecated-list-item__graphic{color:#6a1b9a;color:var(--mdc-theme-primary, #6a1b9a)}.mdc-deprecated-list--dense{padding-top:4px;padding-bottom:4px;font-size:.812rem}.mdc-deprecated-list-item__wrapper{display:block}.mdc-deprecated-list-item{display:flex;position:relative;align-items:center;justify-content:flex-start;overflow:hidden;padding:0;padding-left:16px;padding-right:16px;height:48px}.mdc-deprecated-list-item:focus{outline:none}.mdc-deprecated-list-item:not(.mdc-deprecated-list-item--selected):focus::before,.mdc-deprecated-list-item.mdc-ripple-upgraded--background-focused::before{position:absolute;box-sizing:border-box;width:100%;height:100%;top:0;left:0;border:1px solid rgba(0,0,0,0);border-radius:inherit;content:"";pointer-events:none}@media screen and (forced-colors: active){.mdc-deprecated-list-item:not(.mdc-deprecated-list-item--selected):focus::before,.mdc-deprecated-list-item.mdc-ripple-upgraded--background-focused::before{border-color:CanvasText}}.mdc-deprecated-list-item.mdc-deprecated-list-item--selected::before{position:absolute;box-sizing:border-box;width:100%;height:100%;top:0;left:0;border:3px double rgba(0,0,0,0);border-radius:inherit;content:"";pointer-events:none}@media screen and (forced-colors: active){.mdc-deprecated-list-item.mdc-deprecated-list-item--selected::before{border-color:CanvasText}}[dir=rtl] .mdc-deprecated-list-item,.mdc-deprecated-list-item[dir=rtl]{padding-left:16px;padding-right:16px}.mdc-deprecated-list--icon-list .mdc-deprecated-list-item{padding-left:16px;padding-right:16px;height:56px}[dir=rtl] .mdc-deprecated-list--icon-list .mdc-deprecated-list-item,.mdc-deprecated-list--icon-list .mdc-deprecated-list-item[dir=rtl]{padding-left:16px;padding-right:16px}.mdc-deprecated-list--avatar-list .mdc-deprecated-list-item{padding-left:16px;padding-right:16px;height:56px}[dir=rtl] .mdc-deprecated-list--avatar-list .mdc-deprecated-list-item,.mdc-deprecated-list--avatar-list .mdc-deprecated-list-item[dir=rtl]{padding-left:16px;padding-right:16px}.mdc-deprecated-list--thumbnail-list .mdc-deprecated-list-item{padding-left:16px;padding-right:16px;height:56px}[dir=rtl] .mdc-deprecated-list--thumbnail-list .mdc-deprecated-list-item,.mdc-deprecated-list--thumbnail-list .mdc-deprecated-list-item[dir=rtl]{padding-left:16px;padding-right:16px}.mdc-deprecated-list--image-list .mdc-deprecated-list-item{padding-left:16px;padding-right:16px;height:72px}[dir=rtl] .mdc-deprecated-list--image-list .mdc-deprecated-list-item,.mdc-deprecated-list--image-list .mdc-deprecated-list-item[dir=rtl]{padding-left:16px;padding-right:16px}.mdc-deprecated-list--video-list .mdc-deprecated-list-item{padding-left:0px;padding-right:16px;height:72px}[dir=rtl] .mdc-deprecated-list--video-list .mdc-deprecated-list-item,.mdc-deprecated-list--video-list .mdc-deprecated-list-item[dir=rtl]{padding-left:16px;padding-right:0px}.mdc-deprecated-list--dense .mdc-deprecated-list-item__graphic{margin-left:0;margin-right:16px;width:20px;height:20px}[dir=rtl] .mdc-deprecated-list--dense .mdc-deprecated-list-item__graphic,.mdc-deprecated-list--dense .mdc-deprecated-list-item__graphic[dir=rtl]{margin-left:16px;margin-right:0}.mdc-deprecated-list-item__graphic{flex-shrink:0;align-items:center;justify-content:center;fill:currentColor;object-fit:cover;margin-left:0;margin-right:32px;width:24px;height:24px}[dir=rtl] .mdc-deprecated-list-item__graphic,.mdc-deprecated-list-item__graphic[dir=rtl]{margin-left:32px;margin-right:0}.mdc-deprecated-list--icon-list 
.mdc-deprecated-list-item__graphic{margin-left:0;margin-right:32px;width:24px;height:24px}[dir=rtl] .mdc-deprecated-list--icon-list .mdc-deprecated-list-item__graphic,.mdc-deprecated-list--icon-list .mdc-deprecated-list-item__graphic[dir=rtl]{margin-left:32px;margin-right:0}.mdc-deprecated-list--avatar-list .mdc-deprecated-list-item__graphic{margin-left:0;margin-right:16px;width:40px;height:40px;border-radius:50%}[dir=rtl] .mdc-deprecated-list--avatar-list .mdc-deprecated-list-item__graphic,.mdc-deprecated-list--avatar-list .mdc-deprecated-list-item__graphic[dir=rtl]{margin-left:16px;margin-right:0}.mdc-deprecated-list--thumbnail-list .mdc-deprecated-list-item__graphic{margin-left:0;margin-right:16px;width:40px;height:40px}[dir=rtl] .mdc-deprecated-list--thumbnail-list .mdc-deprecated-list-item__graphic,.mdc-deprecated-list--thumbnail-list .mdc-deprecated-list-item__graphic[dir=rtl]{margin-left:16px;margin-right:0}.mdc-deprecated-list--image-list .mdc-deprecated-list-item__graphic{margin-left:0;margin-right:16px;width:56px;height:56px}[dir=rtl] .mdc-deprecated-list--image-list .mdc-deprecated-list-item__graphic,.mdc-deprecated-list--image-list .mdc-deprecated-list-item__graphic[dir=rtl]{margin-left:16px;margin-right:0}.mdc-deprecated-list--video-list .mdc-deprecated-list-item__graphic{margin-left:0;margin-right:16px;width:100px;height:56px}[dir=rtl] .mdc-deprecated-list--video-list .mdc-deprecated-list-item__graphic,.mdc-deprecated-list--video-list .mdc-deprecated-list-item__graphic[dir=rtl]{margin-left:16px;margin-right:0}.mdc-deprecated-list .mdc-deprecated-list-item__graphic{display:inline-flex}.mdc-deprecated-list-item__meta{margin-left:auto;margin-right:0}.mdc-deprecated-list-item__meta:not(.material-icons){-moz-osx-font-smoothing:grayscale;-webkit-font-smoothing:antialiased;font-family:Roboto, sans-serif;font-family:var(--mdc-typography-caption-font-family, var(--mdc-typography-font-family, Roboto, sans-serif));font-size:0.75rem;font-size:var(--mdc-typography-caption-font-size, 0.75rem);line-height:1.25rem;line-height:var(--mdc-typography-caption-line-height, 1.25rem);font-weight:400;font-weight:var(--mdc-typography-caption-font-weight, 400);letter-spacing:0.0333333333em;letter-spacing:var(--mdc-typography-caption-letter-spacing, 0.0333333333em);text-decoration:inherit;text-decoration:var(--mdc-typography-caption-text-decoration, inherit);text-transform:inherit;text-transform:var(--mdc-typography-caption-text-transform, inherit)}.mdc-deprecated-list-item[dir=rtl] .mdc-deprecated-list-item__meta,[dir=rtl] .mdc-deprecated-list-item .mdc-deprecated-list-item__meta{margin-left:0;margin-right:auto}.mdc-deprecated-list-item__text{text-overflow:ellipsis;white-space:nowrap;overflow:hidden}.mdc-deprecated-list-item__text[for]{pointer-events:none}.mdc-deprecated-list-item__primary-text{text-overflow:ellipsis;white-space:nowrap;overflow:hidden;display:block;margin-top:0;line-height:normal;margin-bottom:-20px}.mdc-deprecated-list-item__primary-text::before{display:inline-block;width:0;height:28px;content:"";vertical-align:0}.mdc-deprecated-list-item__primary-text::after{display:inline-block;width:0;height:20px;content:"";vertical-align:-20px}.mdc-deprecated-list--video-list .mdc-deprecated-list-item__primary-text,.mdc-deprecated-list--image-list .mdc-deprecated-list-item__primary-text,.mdc-deprecated-list--thumbnail-list .mdc-deprecated-list-item__primary-text,.mdc-deprecated-list--avatar-list .mdc-deprecated-list-item__primary-text,.mdc-deprecated-list--icon-list 
.mdc-deprecated-list-item__primary-text{display:block;margin-top:0;line-height:normal;margin-bottom:-20px}.mdc-deprecated-list--video-list .mdc-deprecated-list-item__primary-text::before,.mdc-deprecated-list--image-list .mdc-deprecated-list-item__primary-text::before,.mdc-deprecated-list--thumbnail-list .mdc-deprecated-list-item__primary-text::before,.mdc-deprecated-list--avatar-list .mdc-deprecated-list-item__primary-text::before,.mdc-deprecated-list--icon-list .mdc-deprecated-list-item__primary-text::before{display:inline-block;width:0;height:32px;content:"";vertical-align:0}.mdc-deprecated-list--video-list .mdc-deprecated-list-item__primary-text::after,.mdc-deprecated-list--image-list .mdc-deprecated-list-item__primary-text::after,.mdc-deprecated-list--thumbnail-list .mdc-deprecated-list-item__primary-text::after,.mdc-deprecated-list--avatar-list .mdc-deprecated-list-item__primary-text::after,.mdc-deprecated-list--icon-list .mdc-deprecated-list-item__primary-text::after{display:inline-block;width:0;height:20px;content:"";vertical-align:-20px}.mdc-deprecated-list--dense .mdc-deprecated-list-item__primary-text{display:block;margin-top:0;line-height:normal;margin-bottom:-20px}.mdc-deprecated-list--dense .mdc-deprecated-list-item__primary-text::before{display:inline-block;width:0;height:24px;content:"";vertical-align:0}.mdc-deprecated-list--dense .mdc-deprecated-list-item__primary-text::after{display:inline-block;width:0;height:20px;content:"";vertical-align:-20px}.mdc-deprecated-list-item__secondary-text{-moz-osx-font-smoothing:grayscale;-webkit-font-smoothing:antialiased;font-family:Roboto, sans-serif;font-family:var(--mdc-typography-body2-font-family, var(--mdc-typography-font-family, Roboto, sans-serif));font-size:0.875rem;font-size:var(--mdc-typography-body2-font-size, 0.875rem);line-height:1.25rem;line-height:var(--mdc-typography-body2-line-height, 1.25rem);font-weight:400;font-weight:var(--mdc-typography-body2-font-weight, 400);letter-spacing:0.0178571429em;letter-spacing:var(--mdc-typography-body2-letter-spacing, 0.0178571429em);text-decoration:inherit;text-decoration:var(--mdc-typography-body2-text-decoration, inherit);text-transform:inherit;text-transform:var(--mdc-typography-body2-text-transform, inherit);text-overflow:ellipsis;white-space:nowrap;overflow:hidden;display:block;margin-top:0;line-height:normal}.mdc-deprecated-list-item__secondary-text::before{display:inline-block;width:0;height:20px;content:"";vertical-align:0}.mdc-deprecated-list--dense .mdc-deprecated-list-item__secondary-text{font-size:inherit}.mdc-deprecated-list--dense .mdc-deprecated-list-item{height:40px}.mdc-deprecated-list--two-line .mdc-deprecated-list-item__text{align-self:flex-start}.mdc-deprecated-list--two-line .mdc-deprecated-list-item{height:64px}.mdc-deprecated-list--two-line.mdc-deprecated-list--video-list .mdc-deprecated-list-item,.mdc-deprecated-list--two-line.mdc-deprecated-list--image-list .mdc-deprecated-list-item,.mdc-deprecated-list--two-line.mdc-deprecated-list--thumbnail-list .mdc-deprecated-list-item,.mdc-deprecated-list--two-line.mdc-deprecated-list--avatar-list .mdc-deprecated-list-item,.mdc-deprecated-list--two-line.mdc-deprecated-list--icon-list .mdc-deprecated-list-item{height:72px}.mdc-deprecated-list--two-line.mdc-deprecated-list--icon-list .mdc-deprecated-list-item__graphic{align-self:flex-start;margin-top:16px}.mdc-deprecated-list--two-line.mdc-deprecated-list--dense .mdc-deprecated-list-item,.mdc-deprecated-list--avatar-list.mdc-deprecated-list--dense 
.mdc-deprecated-list-item{height:60px}.mdc-deprecated-list--avatar-list.mdc-deprecated-list--dense .mdc-deprecated-list-item__graphic{margin-left:0;margin-right:16px;width:36px;height:36px}[dir=rtl] .mdc-deprecated-list--avatar-list.mdc-deprecated-list--dense .mdc-deprecated-list-item__graphic,.mdc-deprecated-list--avatar-list.mdc-deprecated-list--dense .mdc-deprecated-list-item__graphic[dir=rtl]{margin-left:16px;margin-right:0}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item{cursor:pointer}a.mdc-deprecated-list-item{color:inherit;text-decoration:none}.mdc-deprecated-list-divider{height:0;margin:0;border:none;border-bottom-width:1px;border-bottom-style:solid}.mdc-deprecated-list-divider{border-bottom-color:rgba(0, 0, 0, 0.12)}.mdc-deprecated-list-divider--padded{margin-left:16px;margin-right:0;width:calc(100% - 32px)}[dir=rtl] .mdc-deprecated-list-divider--padded,.mdc-deprecated-list-divider--padded[dir=rtl]{margin-left:0;margin-right:16px}.mdc-deprecated-list-divider--inset{margin-left:72px;margin-right:0;width:calc(100% - 72px)}[dir=rtl] .mdc-deprecated-list-divider--inset,.mdc-deprecated-list-divider--inset[dir=rtl]{margin-left:0;margin-right:72px}.mdc-deprecated-list-divider--inset.mdc-deprecated-list-divider--padded{margin-left:72px;margin-right:0;width:calc(100% - 88px)}[dir=rtl] .mdc-deprecated-list-divider--inset.mdc-deprecated-list-divider--padded,.mdc-deprecated-list-divider--inset.mdc-deprecated-list-divider--padded[dir=rtl]{margin-left:0;margin-right:72px}.mdc-deprecated-list .mdc-deprecated-list-divider--inset-leading{margin-left:16px;margin-right:0;width:calc(100% - 16px)}[dir=rtl] .mdc-deprecated-list .mdc-deprecated-list-divider--inset-leading,.mdc-deprecated-list .mdc-deprecated-list-divider--inset-leading[dir=rtl]{margin-left:0;margin-right:16px}.mdc-deprecated-list .mdc-deprecated-list-divider--inset-trailing{width:calc(100% - 16px)}.mdc-deprecated-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing{margin-left:16px;margin-right:0;width:calc(100% - 32px)}[dir=rtl] .mdc-deprecated-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing,.mdc-deprecated-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing[dir=rtl]{margin-left:0;margin-right:16px}.mdc-deprecated-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--padding{margin-left:16px;margin-right:0;width:calc(100% - 16px)}[dir=rtl] .mdc-deprecated-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--padding,.mdc-deprecated-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--padding[dir=rtl]{margin-left:0;margin-right:16px}.mdc-deprecated-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing.mdc-deprecated-list-divider--inset-padding{margin-left:16px;margin-right:0;width:calc(100% - 32px)}[dir=rtl] .mdc-deprecated-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing.mdc-deprecated-list-divider--inset-padding,.mdc-deprecated-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing.mdc-deprecated-list-divider--inset-padding[dir=rtl]{margin-left:0;margin-right:16px}.mdc-deprecated-list--icon-list .mdc-deprecated-list-divider--inset-leading{margin-left:72px;margin-right:0;width:calc(100% - 72px)}[dir=rtl] .mdc-deprecated-list--icon-list 
.mdc-deprecated-list-divider--inset-leading,.mdc-deprecated-list--icon-list .mdc-deprecated-list-divider--inset-leading[dir=rtl]{margin-left:0;margin-right:72px}.mdc-deprecated-list--icon-list .mdc-deprecated-list-divider--inset-trailing{width:calc(100% - 16px)}.mdc-deprecated-list--icon-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing{margin-left:72px;margin-right:0;width:calc(100% - 88px)}[dir=rtl] .mdc-deprecated-list--icon-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing,.mdc-deprecated-list--icon-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing[dir=rtl]{margin-left:0;margin-right:72px}.mdc-deprecated-list--icon-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--padding{margin-left:16px;margin-right:0;width:calc(100% - 16px)}[dir=rtl] .mdc-deprecated-list--icon-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--padding,.mdc-deprecated-list--icon-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--padding[dir=rtl]{margin-left:0;margin-right:16px}.mdc-deprecated-list--icon-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing.mdc-deprecated-list-divider--inset-padding{margin-left:16px;margin-right:0;width:calc(100% - 32px)}[dir=rtl] .mdc-deprecated-list--icon-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing.mdc-deprecated-list-divider--inset-padding,.mdc-deprecated-list--icon-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing.mdc-deprecated-list-divider--inset-padding[dir=rtl]{margin-left:0;margin-right:16px}.mdc-deprecated-list--avatar-list .mdc-deprecated-list-divider--inset-leading{margin-left:72px;margin-right:0;width:calc(100% - 72px)}[dir=rtl] .mdc-deprecated-list--avatar-list .mdc-deprecated-list-divider--inset-leading,.mdc-deprecated-list--avatar-list .mdc-deprecated-list-divider--inset-leading[dir=rtl]{margin-left:0;margin-right:72px}.mdc-deprecated-list--avatar-list .mdc-deprecated-list-divider--inset-trailing{width:calc(100% - 16px)}.mdc-deprecated-list--avatar-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing{margin-left:72px;margin-right:0;width:calc(100% - 88px)}[dir=rtl] .mdc-deprecated-list--avatar-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing,.mdc-deprecated-list--avatar-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing[dir=rtl]{margin-left:0;margin-right:72px}.mdc-deprecated-list--avatar-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--padding{margin-left:16px;margin-right:0;width:calc(100% - 16px)}[dir=rtl] .mdc-deprecated-list--avatar-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--padding,.mdc-deprecated-list--avatar-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--padding[dir=rtl]{margin-left:0;margin-right:16px}.mdc-deprecated-list--avatar-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing.mdc-deprecated-list-divider--inset-padding{margin-left:16px;margin-right:0;width:calc(100% - 32px)}[dir=rtl] .mdc-deprecated-list--avatar-list 
.mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing.mdc-deprecated-list-divider--inset-padding,.mdc-deprecated-list--avatar-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing.mdc-deprecated-list-divider--inset-padding[dir=rtl]{margin-left:0;margin-right:16px}.mdc-deprecated-list--thumbnail-list .mdc-deprecated-list-divider--inset-leading{margin-left:72px;margin-right:0;width:calc(100% - 72px)}[dir=rtl] .mdc-deprecated-list--thumbnail-list .mdc-deprecated-list-divider--inset-leading,.mdc-deprecated-list--thumbnail-list .mdc-deprecated-list-divider--inset-leading[dir=rtl]{margin-left:0;margin-right:72px}.mdc-deprecated-list--thumbnail-list .mdc-deprecated-list-divider--inset-trailing{width:calc(100% - 16px)}.mdc-deprecated-list--thumbnail-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing{margin-left:72px;margin-right:0;width:calc(100% - 88px)}[dir=rtl] .mdc-deprecated-list--thumbnail-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing,.mdc-deprecated-list--thumbnail-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing[dir=rtl]{margin-left:0;margin-right:72px}.mdc-deprecated-list--thumbnail-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--padding{margin-left:16px;margin-right:0;width:calc(100% - 16px)}[dir=rtl] .mdc-deprecated-list--thumbnail-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--padding,.mdc-deprecated-list--thumbnail-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--padding[dir=rtl]{margin-left:0;margin-right:16px}.mdc-deprecated-list--thumbnail-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing.mdc-deprecated-list-divider--inset-padding{margin-left:16px;margin-right:0;width:calc(100% - 32px)}[dir=rtl] .mdc-deprecated-list--thumbnail-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing.mdc-deprecated-list-divider--inset-padding,.mdc-deprecated-list--thumbnail-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing.mdc-deprecated-list-divider--inset-padding[dir=rtl]{margin-left:0;margin-right:16px}.mdc-deprecated-list--image-list .mdc-deprecated-list-divider--inset-leading{margin-left:88px;margin-right:0;width:calc(100% - 88px)}[dir=rtl] .mdc-deprecated-list--image-list .mdc-deprecated-list-divider--inset-leading,.mdc-deprecated-list--image-list .mdc-deprecated-list-divider--inset-leading[dir=rtl]{margin-left:0;margin-right:88px}.mdc-deprecated-list--image-list .mdc-deprecated-list-divider--inset-trailing{width:calc(100% - 16px)}.mdc-deprecated-list--image-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing{margin-left:88px;margin-right:0;width:calc(100% - 104px)}[dir=rtl] .mdc-deprecated-list--image-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing,.mdc-deprecated-list--image-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing[dir=rtl]{margin-left:0;margin-right:88px}.mdc-deprecated-list--image-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--padding{margin-left:16px;margin-right:0;width:calc(100% - 16px)}[dir=rtl] .mdc-deprecated-list--image-list 
.mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--padding,.mdc-deprecated-list--image-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--padding[dir=rtl]{margin-left:0;margin-right:16px}.mdc-deprecated-list--image-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing.mdc-deprecated-list-divider--inset-padding{margin-left:16px;margin-right:0;width:calc(100% - 32px)}[dir=rtl] .mdc-deprecated-list--image-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing.mdc-deprecated-list-divider--inset-padding,.mdc-deprecated-list--image-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing.mdc-deprecated-list-divider--inset-padding[dir=rtl]{margin-left:0;margin-right:16px}.mdc-deprecated-list--video-list .mdc-deprecated-list-divider--inset-leading{margin-left:116px;margin-right:0;width:calc(100% - 116px)}[dir=rtl] .mdc-deprecated-list--video-list .mdc-deprecated-list-divider--inset-leading,.mdc-deprecated-list--video-list .mdc-deprecated-list-divider--inset-leading[dir=rtl]{margin-left:0;margin-right:116px}.mdc-deprecated-list--video-list .mdc-deprecated-list-divider--inset-trailing{width:calc(100% - 16px)}.mdc-deprecated-list--video-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing{margin-left:116px;margin-right:0;width:calc(100% - 132px)}[dir=rtl] .mdc-deprecated-list--video-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing,.mdc-deprecated-list--video-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing[dir=rtl]{margin-left:0;margin-right:116px}.mdc-deprecated-list--video-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--padding{margin-left:0px;margin-right:0;width:calc(100% - 0px)}[dir=rtl] .mdc-deprecated-list--video-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--padding,.mdc-deprecated-list--video-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--padding[dir=rtl]{margin-left:0;margin-right:0px}.mdc-deprecated-list--video-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing.mdc-deprecated-list-divider--inset-padding{margin-left:0px;margin-right:0;width:calc(100% - 16px)}[dir=rtl] .mdc-deprecated-list--video-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing.mdc-deprecated-list-divider--inset-padding,.mdc-deprecated-list--video-list .mdc-deprecated-list-divider--inset-leading.mdc-deprecated-list-divider--inset-trailing.mdc-deprecated-list-divider--inset-padding[dir=rtl]{margin-left:0;margin-right:0px}.mdc-deprecated-list-group .mdc-deprecated-list{padding:0}.mdc-deprecated-list-group__subheader{-moz-osx-font-smoothing:grayscale;-webkit-font-smoothing:antialiased;font-family:Roboto, sans-serif;font-family:var(--mdc-typography-subtitle1-font-family, var(--mdc-typography-font-family, Roboto, sans-serif));font-size:1rem;font-size:var(--mdc-typography-subtitle1-font-size, 1rem);line-height:1.75rem;line-height:var(--mdc-typography-subtitle1-line-height, 1.75rem);font-weight:400;font-weight:var(--mdc-typography-subtitle1-font-weight, 400);letter-spacing:0.009375em;letter-spacing:var(--mdc-typography-subtitle1-letter-spacing, 0.009375em);text-decoration:inherit;text-decoration:var(--mdc-typography-subtitle1-text-decoration, 
inherit);text-transform:inherit;text-transform:var(--mdc-typography-subtitle1-text-transform, inherit);margin:calc((3rem - 1.5rem)/2) 16px}.mdc-list-item__primary-text{color:rgba(0, 0, 0, 0.87);color:var(--mdc-theme-text-primary-on-background, rgba(0, 0, 0, 0.87))}.mdc-list-item__secondary-text{color:rgba(0, 0, 0, 0.54);color:var(--mdc-theme-text-secondary-on-background, rgba(0, 0, 0, 0.54))}.mdc-list-item__overline-text{color:rgba(0, 0, 0, 0.38);color:var(--mdc-theme-text-hint-on-background, rgba(0, 0, 0, 0.38))}.mdc-list-item--with-leading-icon .mdc-list-item__start,.mdc-list-item--with-trailing-icon .mdc-list-item__end{background-color:transparent}.mdc-list-item--with-leading-icon .mdc-list-item__start,.mdc-list-item--with-trailing-icon .mdc-list-item__end{color:rgba(0, 0, 0, 0.38);color:var(--mdc-theme-text-icon-on-background, rgba(0, 0, 0, 0.38))}.mdc-list-item__end{color:rgba(0, 0, 0, 0.38);color:var(--mdc-theme-text-hint-on-background, rgba(0, 0, 0, 0.38))}.mdc-list-item--disabled .mdc-list-item__start,.mdc-list-item--disabled .mdc-list-item__content,.mdc-list-item--disabled .mdc-list-item__end{opacity:.38}.mdc-list-item--disabled .mdc-list-item__primary-text{color:#000;color:var(--mdc-theme-on-surface, #000)}.mdc-list-item--disabled .mdc-list-item__secondary-text{color:#000;color:var(--mdc-theme-on-surface, #000)}.mdc-list-item--disabled .mdc-list-item__overline-text{color:#000;color:var(--mdc-theme-on-surface, #000)}.mdc-list-item--disabled.mdc-list-item--with-leading-icon .mdc-list-item__start{color:#000;color:var(--mdc-theme-on-surface, #000)}.mdc-list-item--disabled.mdc-list-item--with-trailing-icon .mdc-list-item__end{color:#000;color:var(--mdc-theme-on-surface, #000)}.mdc-list-item--disabled.mdc-list-item--with-trailing-meta .mdc-list-item__end{color:#000;color:var(--mdc-theme-on-surface, #000)}.mdc-list-item--selected .mdc-list-item__primary-text,.mdc-list-item--activated .mdc-list-item__primary-text{color:#6a1b9a;color:var(--mdc-theme-primary, #6a1b9a)}.mdc-list-item--selected.mdc-list-item--with-leading-icon .mdc-list-item__start,.mdc-list-item--activated.mdc-list-item--with-leading-icon .mdc-list-item__start{color:#6a1b9a;color:var(--mdc-theme-primary, #6a1b9a)}.mdc-deprecated-list-group__subheader{color:rgba(0, 0, 0, 0.87);color:var(--mdc-theme-text-primary-on-background, rgba(0, 0, 0, 0.87))}@media screen and (forced-colors: active),(-ms-high-contrast: active){.mdc-list-divider::after{content:"";display:block;border-bottom-width:1px;border-bottom-style:solid;border-bottom-color:white}}.mdc-list{-moz-osx-font-smoothing:grayscale;-webkit-font-smoothing:antialiased;font-family:Roboto, sans-serif;font-family:var(--mdc-typography-subtitle1-font-family, var(--mdc-typography-font-family, Roboto, sans-serif));font-size:1rem;font-size:var(--mdc-typography-subtitle1-font-size, 1rem);line-height:1.75rem;line-height:var(--mdc-typography-subtitle1-line-height, 1.75rem);font-weight:400;font-weight:var(--mdc-typography-subtitle1-font-weight, 400);letter-spacing:0.009375em;letter-spacing:var(--mdc-typography-subtitle1-letter-spacing, 0.009375em);text-decoration:inherit;text-decoration:var(--mdc-typography-subtitle1-text-decoration, inherit);text-transform:inherit;text-transform:var(--mdc-typography-subtitle1-text-transform, inherit);line-height:1.5rem;margin:0;padding:8px 
0;list-style-type:none}.mdc-list:focus{outline:none}.mdc-list-item__wrapper{display:block}.mdc-list-item{display:flex;position:relative;align-items:center;justify-content:flex-start;overflow:hidden;padding:0;align-items:stretch;cursor:pointer}.mdc-list-item:focus{outline:none}.mdc-list-item.mdc-list-item--with-one-line{height:48px}.mdc-list-item.mdc-list-item--with-two-lines{height:64px}.mdc-list-item.mdc-list-item--with-three-lines{height:88px}.mdc-list-item.mdc-list-item--with-one-line .mdc-list-item__start{align-self:center;margin-top:0}.mdc-list-item.mdc-list-item--with-two-lines .mdc-list-item__start{align-self:flex-start;margin-top:16px}.mdc-list-item.mdc-list-item--with-three-lines .mdc-list-item__start{align-self:flex-start;margin-top:16px}.mdc-list-item.mdc-list-item--with-one-line .mdc-list-item__end{align-self:center;margin-top:0}.mdc-list-item.mdc-list-item--with-two-lines .mdc-list-item__end{align-self:center;margin-top:0}.mdc-list-item.mdc-list-item--with-three-lines .mdc-list-item__end{align-self:flex-start;margin-top:16px}.mdc-list-item.mdc-list-item--disabled,.mdc-list-item.mdc-list-item--non-interactive{cursor:auto}.mdc-list-item:not(.mdc-list-item--selected):focus::before,.mdc-list-item.mdc-ripple-upgraded--background-focused::before{position:absolute;box-sizing:border-box;width:100%;height:100%;top:0;left:0;border:1px solid rgba(0,0,0,0);border-radius:inherit;content:"";pointer-events:none}@media screen and (forced-colors: active){.mdc-list-item:not(.mdc-list-item--selected):focus::before,.mdc-list-item.mdc-ripple-upgraded--background-focused::before{border-color:CanvasText}}.mdc-list-item.mdc-list-item--selected::before{position:absolute;box-sizing:border-box;width:100%;height:100%;top:0;left:0;border:3px double rgba(0,0,0,0);border-radius:inherit;content:"";pointer-events:none}@media screen and (forced-colors: active){.mdc-list-item.mdc-list-item--selected::before{border-color:CanvasText}}.mdc-list-item.mdc-list-item--selected:focus::before{position:absolute;box-sizing:border-box;width:100%;height:100%;top:0;left:0;border:3px solid rgba(0,0,0,0);border-radius:inherit;content:"";pointer-events:none}@media screen and (forced-colors: active){.mdc-list-item.mdc-list-item--selected:focus::before{border-color:CanvasText}}a.mdc-list-item{color:inherit;text-decoration:none}.mdc-list-item__start{fill:currentColor;flex-shrink:0;pointer-events:none}.mdc-list-item__end{flex-shrink:0;pointer-events:none}.mdc-list-item__content{text-overflow:ellipsis;white-space:nowrap;overflow:hidden;align-self:center;flex:1;pointer-events:none}.mdc-list-item--with-two-lines .mdc-list-item__content,.mdc-list-item--with-three-lines .mdc-list-item__content{align-self:stretch}.mdc-list-item__content[for]{pointer-events:none}.mdc-list-item__primary-text{-moz-osx-font-smoothing:grayscale;-webkit-font-smoothing:antialiased;font-family:Roboto, sans-serif;font-family:var(--mdc-typography-subtitle1-font-family, var(--mdc-typography-font-family, Roboto, sans-serif));font-size:1rem;font-size:var(--mdc-typography-subtitle1-font-size, 1rem);line-height:1.75rem;line-height:var(--mdc-typography-subtitle1-line-height, 1.75rem);font-weight:400;font-weight:var(--mdc-typography-subtitle1-font-weight, 400);letter-spacing:0.009375em;letter-spacing:var(--mdc-typography-subtitle1-letter-spacing, 0.009375em);text-decoration:inherit;text-decoration:var(--mdc-typography-subtitle1-text-decoration, inherit);text-transform:inherit;text-transform:var(--mdc-typography-subtitle1-text-transform, 
inherit);text-overflow:ellipsis;white-space:nowrap;overflow:hidden}.mdc-list-item--with-two-lines .mdc-list-item__primary-text,.mdc-list-item--with-three-lines .mdc-list-item__primary-text{display:block;margin-top:0;line-height:normal;margin-bottom:-20px}.mdc-list-item--with-two-lines .mdc-list-item__primary-text::before,.mdc-list-item--with-three-lines .mdc-list-item__primary-text::before{display:inline-block;width:0;height:28px;content:"";vertical-align:0}.mdc-list-item--with-two-lines .mdc-list-item__primary-text::after,.mdc-list-item--with-three-lines .mdc-list-item__primary-text::after{display:inline-block;width:0;height:20px;content:"";vertical-align:-20px}.mdc-list-item__secondary-text{-moz-osx-font-smoothing:grayscale;-webkit-font-smoothing:antialiased;font-family:Roboto, sans-serif;font-family:var(--mdc-typography-body2-font-family, var(--mdc-typography-font-family, Roboto, sans-serif));font-size:0.875rem;font-size:var(--mdc-typography-body2-font-size, 0.875rem);line-height:1.25rem;line-height:var(--mdc-typography-body2-line-height, 1.25rem);font-weight:400;font-weight:var(--mdc-typography-body2-font-weight, 400);letter-spacing:0.0178571429em;letter-spacing:var(--mdc-typography-body2-letter-spacing, 0.0178571429em);text-decoration:inherit;text-decoration:var(--mdc-typography-body2-text-decoration, inherit);text-transform:inherit;text-transform:var(--mdc-typography-body2-text-transform, inherit);text-overflow:ellipsis;white-space:nowrap;overflow:hidden;display:block;margin-top:0;line-height:normal}.mdc-list-item__secondary-text::before{display:inline-block;width:0;height:20px;content:"";vertical-align:0}.mdc-list-item--with-three-lines .mdc-list-item__secondary-text{white-space:normal;line-height:20px}.mdc-list-item--with-overline .mdc-list-item__secondary-text{white-space:nowrap;line-height:auto}.mdc-list-item__overline-text{-moz-osx-font-smoothing:grayscale;-webkit-font-smoothing:antialiased;font-family:Roboto, sans-serif;font-family:var(--mdc-typography-overline-font-family, var(--mdc-typography-font-family, Roboto, sans-serif));font-size:0.75rem;font-size:var(--mdc-typography-overline-font-size, 0.75rem);line-height:2rem;line-height:var(--mdc-typography-overline-line-height, 2rem);font-weight:500;font-weight:var(--mdc-typography-overline-font-weight, 500);letter-spacing:0.1666666667em;letter-spacing:var(--mdc-typography-overline-letter-spacing, 0.1666666667em);text-decoration:none;text-decoration:var(--mdc-typography-overline-text-decoration, none);text-transform:uppercase;text-transform:var(--mdc-typography-overline-text-transform, uppercase);text-overflow:ellipsis;white-space:nowrap;overflow:hidden}.mdc-list-item--with-two-lines .mdc-list-item__overline-text{display:block;margin-top:0;line-height:normal;margin-bottom:-20px}.mdc-list-item--with-two-lines .mdc-list-item__overline-text::before{display:inline-block;width:0;height:24px;content:"";vertical-align:0}.mdc-list-item--with-two-lines .mdc-list-item__overline-text::after{display:inline-block;width:0;height:20px;content:"";vertical-align:-20px}.mdc-list-item--with-three-lines .mdc-list-item__overline-text{display:block;margin-top:0;line-height:normal;margin-bottom:-20px}.mdc-list-item--with-three-lines .mdc-list-item__overline-text::before{display:inline-block;width:0;height:28px;content:"";vertical-align:0}.mdc-list-item--with-three-lines 
.mdc-list-item__overline-text::after{display:inline-block;width:0;height:20px;content:"";vertical-align:-20px}.mdc-list-item--with-leading-avatar.mdc-list-item{padding-left:0;padding-right:auto}[dir=rtl] .mdc-list-item--with-leading-avatar.mdc-list-item,.mdc-list-item--with-leading-avatar.mdc-list-item[dir=rtl]{padding-left:auto;padding-right:0}.mdc-list-item--with-leading-avatar .mdc-list-item__start{margin-left:16px;margin-right:16px}[dir=rtl] .mdc-list-item--with-leading-avatar .mdc-list-item__start,.mdc-list-item--with-leading-avatar .mdc-list-item__start[dir=rtl]{margin-left:16px;margin-right:16px}.mdc-list-item--with-leading-avatar .mdc-list-item__start{width:40px;height:40px}.mdc-list-item--with-leading-avatar.mdc-list-item--with-two-lines .mdc-list-item__primary-text{display:block;margin-top:0;line-height:normal;margin-bottom:-20px}.mdc-list-item--with-leading-avatar.mdc-list-item--with-two-lines .mdc-list-item__primary-text::before{display:inline-block;width:0;height:32px;content:"";vertical-align:0}.mdc-list-item--with-leading-avatar.mdc-list-item--with-two-lines .mdc-list-item__primary-text::after{display:inline-block;width:0;height:20px;content:"";vertical-align:-20px}.mdc-list-item--with-leading-avatar.mdc-list-item--with-two-lines .mdc-list-item__overline-text{display:block;margin-top:0;line-height:normal;margin-bottom:-20px}.mdc-list-item--with-leading-avatar.mdc-list-item--with-two-lines .mdc-list-item__overline-text::before{display:inline-block;width:0;height:28px;content:"";vertical-align:0}.mdc-list-item--with-leading-avatar.mdc-list-item--with-two-lines .mdc-list-item__overline-text::after{display:inline-block;width:0;height:20px;content:"";vertical-align:-20px}.mdc-list-item--with-leading-avatar.mdc-list-item--with-two-lines.mdc-list-item--with-trailing-meta .mdc-list-item__end{display:block;margin-top:0;line-height:normal}.mdc-list-item--with-leading-avatar.mdc-list-item--with-two-lines.mdc-list-item--with-trailing-meta .mdc-list-item__end::before{display:inline-block;width:0;height:32px;content:"";vertical-align:0}.mdc-list-item--with-leading-avatar.mdc-list-item--with-one-line{height:56px}.mdc-list-item--with-leading-avatar.mdc-list-item--with-two-lines{height:72px}.mdc-list-item--with-leading-avatar .mdc-list-item__start{border-radius:50%}.mdc-list-item--with-leading-icon .mdc-list-item__start{width:24px;height:24px}.mdc-list-item--with-leading-icon.mdc-list-item{padding-left:0;padding-right:auto}[dir=rtl] .mdc-list-item--with-leading-icon.mdc-list-item,.mdc-list-item--with-leading-icon.mdc-list-item[dir=rtl]{padding-left:auto;padding-right:0}.mdc-list-item--with-leading-icon .mdc-list-item__start{margin-left:16px;margin-right:32px}[dir=rtl] .mdc-list-item--with-leading-icon .mdc-list-item__start,.mdc-list-item--with-leading-icon .mdc-list-item__start[dir=rtl]{margin-left:32px;margin-right:16px}.mdc-list-item--with-leading-icon.mdc-list-item--with-two-lines .mdc-list-item__primary-text{display:block;margin-top:0;line-height:normal;margin-bottom:-20px}.mdc-list-item--with-leading-icon.mdc-list-item--with-two-lines .mdc-list-item__primary-text::before{display:inline-block;width:0;height:32px;content:"";vertical-align:0}.mdc-list-item--with-leading-icon.mdc-list-item--with-two-lines .mdc-list-item__primary-text::after{display:inline-block;width:0;height:20px;content:"";vertical-align:-20px}.mdc-list-item--with-leading-icon.mdc-list-item--with-two-lines 
.mdc-list-item__overline-text{display:block;margin-top:0;line-height:normal;margin-bottom:-20px}.mdc-list-item--with-leading-icon.mdc-list-item--with-two-lines .mdc-list-item__overline-text::before{display:inline-block;width:0;height:28px;content:"";vertical-align:0}.mdc-list-item--with-leading-icon.mdc-list-item--with-two-lines .mdc-list-item__overline-text::after{display:inline-block;width:0;height:20px;content:"";vertical-align:-20px}.mdc-list-item--with-leading-icon.mdc-list-item--with-two-lines.mdc-list-item--with-trailing-meta .mdc-list-item__end{display:block;margin-top:0;line-height:normal}.mdc-list-item--with-leading-icon.mdc-list-item--with-two-lines.mdc-list-item--with-trailing-meta .mdc-list-item__end::before{display:inline-block;width:0;height:32px;content:"";vertical-align:0}.mdc-list-item--with-leading-icon.mdc-list-item--with-one-line{height:56px}.mdc-list-item--with-leading-icon.mdc-list-item--with-two-lines{height:72px}.mdc-list-item--with-leading-thumbnail.mdc-list-item{padding-left:0;padding-right:auto}[dir=rtl] .mdc-list-item--with-leading-thumbnail.mdc-list-item,.mdc-list-item--with-leading-thumbnail.mdc-list-item[dir=rtl]{padding-left:auto;padding-right:0}.mdc-list-item--with-leading-thumbnail .mdc-list-item__start{margin-left:16px;margin-right:16px}[dir=rtl] .mdc-list-item--with-leading-thumbnail .mdc-list-item__start,.mdc-list-item--with-leading-thumbnail .mdc-list-item__start[dir=rtl]{margin-left:16px;margin-right:16px}.mdc-list-item--with-leading-thumbnail .mdc-list-item__start{width:40px;height:40px}.mdc-list-item--with-leading-thumbnail.mdc-list-item--with-two-lines .mdc-list-item__primary-text{display:block;margin-top:0;line-height:normal;margin-bottom:-20px}.mdc-list-item--with-leading-thumbnail.mdc-list-item--with-two-lines .mdc-list-item__primary-text::before{display:inline-block;width:0;height:32px;content:"";vertical-align:0}.mdc-list-item--with-leading-thumbnail.mdc-list-item--with-two-lines .mdc-list-item__primary-text::after{display:inline-block;width:0;height:20px;content:"";vertical-align:-20px}.mdc-list-item--with-leading-thumbnail.mdc-list-item--with-two-lines .mdc-list-item__overline-text{display:block;margin-top:0;line-height:normal;margin-bottom:-20px}.mdc-list-item--with-leading-thumbnail.mdc-list-item--with-two-lines .mdc-list-item__overline-text::before{display:inline-block;width:0;height:28px;content:"";vertical-align:0}.mdc-list-item--with-leading-thumbnail.mdc-list-item--with-two-lines .mdc-list-item__overline-text::after{display:inline-block;width:0;height:20px;content:"";vertical-align:-20px}.mdc-list-item--with-leading-thumbnail.mdc-list-item--with-two-lines.mdc-list-item--with-trailing-meta .mdc-list-item__end{display:block;margin-top:0;line-height:normal}.mdc-list-item--with-leading-thumbnail.mdc-list-item--with-two-lines.mdc-list-item--with-trailing-meta .mdc-list-item__end::before{display:inline-block;width:0;height:32px;content:"";vertical-align:0}.mdc-list-item--with-leading-thumbnail.mdc-list-item--with-one-line{height:56px}.mdc-list-item--with-leading-thumbnail.mdc-list-item--with-two-lines{height:72px}.mdc-list-item--with-leading-image.mdc-list-item{padding-left:0;padding-right:auto}[dir=rtl] .mdc-list-item--with-leading-image.mdc-list-item,.mdc-list-item--with-leading-image.mdc-list-item[dir=rtl]{padding-left:auto;padding-right:0}.mdc-list-item--with-leading-image .mdc-list-item__start{margin-left:16px;margin-right:16px}[dir=rtl] .mdc-list-item--with-leading-image .mdc-list-item__start,.mdc-list-item--with-leading-image 
.mdc-list-item__start[dir=rtl]{margin-left:16px;margin-right:16px}.mdc-list-item--with-leading-image .mdc-list-item__start{width:56px;height:56px}.mdc-list-item--with-leading-image.mdc-list-item--with-two-lines .mdc-list-item__primary-text{display:block;margin-top:0;line-height:normal;margin-bottom:-20px}.mdc-list-item--with-leading-image.mdc-list-item--with-two-lines .mdc-list-item__primary-text::before{display:inline-block;width:0;height:32px;content:"";vertical-align:0}.mdc-list-item--with-leading-image.mdc-list-item--with-two-lines .mdc-list-item__primary-text::after{display:inline-block;width:0;height:20px;content:"";vertical-align:-20px}.mdc-list-item--with-leading-image.mdc-list-item--with-two-lines .mdc-list-item__overline-text{display:block;margin-top:0;line-height:normal;margin-bottom:-20px}.mdc-list-item--with-leading-image.mdc-list-item--with-two-lines .mdc-list-item__overline-text::before{display:inline-block;width:0;height:28px;content:"";vertical-align:0}.mdc-list-item--with-leading-image.mdc-list-item--with-two-lines .mdc-list-item__overline-text::after{display:inline-block;width:0;height:20px;content:"";vertical-align:-20px}.mdc-list-item--with-leading-image.mdc-list-item--with-two-lines.mdc-list-item--with-trailing-meta .mdc-list-item__end{display:block;margin-top:0;line-height:normal}.mdc-list-item--with-leading-image.mdc-list-item--with-two-lines.mdc-list-item--with-trailing-meta .mdc-list-item__end::before{display:inline-block;width:0;height:32px;content:"";vertical-align:0}.mdc-list-item--with-leading-image.mdc-list-item--with-one-line{height:72px}.mdc-list-item--with-leading-image.mdc-list-item--with-two-lines{height:72px}.mdc-list-item--with-leading-video.mdc-list-item--with-two-lines .mdc-list-item__start{align-self:flex-start;margin-top:8px}.mdc-list-item--with-leading-video.mdc-list-item{padding-left:0;padding-right:auto}[dir=rtl] .mdc-list-item--with-leading-video.mdc-list-item,.mdc-list-item--with-leading-video.mdc-list-item[dir=rtl]{padding-left:auto;padding-right:0}.mdc-list-item--with-leading-video .mdc-list-item__start{margin-left:0;margin-right:16px}[dir=rtl] .mdc-list-item--with-leading-video .mdc-list-item__start,.mdc-list-item--with-leading-video .mdc-list-item__start[dir=rtl]{margin-left:16px;margin-right:0}.mdc-list-item--with-leading-video .mdc-list-item__start{width:100px;height:56px}.mdc-list-item--with-leading-video.mdc-list-item--with-two-lines .mdc-list-item__primary-text{display:block;margin-top:0;line-height:normal;margin-bottom:-20px}.mdc-list-item--with-leading-video.mdc-list-item--with-two-lines .mdc-list-item__primary-text::before{display:inline-block;width:0;height:32px;content:"";vertical-align:0}.mdc-list-item--with-leading-video.mdc-list-item--with-two-lines .mdc-list-item__primary-text::after{display:inline-block;width:0;height:20px;content:"";vertical-align:-20px}.mdc-list-item--with-leading-video.mdc-list-item--with-two-lines .mdc-list-item__overline-text{display:block;margin-top:0;line-height:normal;margin-bottom:-20px}.mdc-list-item--with-leading-video.mdc-list-item--with-two-lines .mdc-list-item__overline-text::before{display:inline-block;width:0;height:28px;content:"";vertical-align:0}.mdc-list-item--with-leading-video.mdc-list-item--with-two-lines .mdc-list-item__overline-text::after{display:inline-block;width:0;height:20px;content:"";vertical-align:-20px}.mdc-list-item--with-leading-video.mdc-list-item--with-two-lines.mdc-list-item--with-trailing-meta 
.mdc-list-item__end{display:block;margin-top:0;line-height:normal}.mdc-list-item--with-leading-video.mdc-list-item--with-two-lines.mdc-list-item--with-trailing-meta .mdc-list-item__end::before{display:inline-block;width:0;height:32px;content:"";vertical-align:0}.mdc-list-item--with-leading-video.mdc-list-item--with-one-line{height:72px}.mdc-list-item--with-leading-video.mdc-list-item--with-two-lines{height:72px}.mdc-list-item--with-leading-checkbox.mdc-list-item{padding-left:0;padding-right:auto}[dir=rtl] .mdc-list-item--with-leading-checkbox.mdc-list-item,.mdc-list-item--with-leading-checkbox.mdc-list-item[dir=rtl]{padding-left:auto;padding-right:0}.mdc-list-item--with-leading-checkbox .mdc-list-item__start{margin-left:8px;margin-right:24px}[dir=rtl] .mdc-list-item--with-leading-checkbox .mdc-list-item__start,.mdc-list-item--with-leading-checkbox .mdc-list-item__start[dir=rtl]{margin-left:24px;margin-right:8px}.mdc-list-item--with-leading-checkbox .mdc-list-item__start{width:40px;height:40px}.mdc-list-item--with-leading-checkbox.mdc-list-item--with-two-lines .mdc-list-item__start{align-self:flex-start;margin-top:8px}.mdc-list-item--with-leading-checkbox.mdc-list-item--with-two-lines .mdc-list-item__primary-text{display:block;margin-top:0;line-height:normal;margin-bottom:-20px}.mdc-list-item--with-leading-checkbox.mdc-list-item--with-two-lines .mdc-list-item__primary-text::before{display:inline-block;width:0;height:32px;content:"";vertical-align:0}.mdc-list-item--with-leading-checkbox.mdc-list-item--with-two-lines .mdc-list-item__primary-text::after{display:inline-block;width:0;height:20px;content:"";vertical-align:-20px}.mdc-list-item--with-leading-checkbox.mdc-list-item--with-two-lines .mdc-list-item__overline-text{display:block;margin-top:0;line-height:normal;margin-bottom:-20px}.mdc-list-item--with-leading-checkbox.mdc-list-item--with-two-lines .mdc-list-item__overline-text::before{display:inline-block;width:0;height:28px;content:"";vertical-align:0}.mdc-list-item--with-leading-checkbox.mdc-list-item--with-two-lines .mdc-list-item__overline-text::after{display:inline-block;width:0;height:20px;content:"";vertical-align:-20px}.mdc-list-item--with-leading-checkbox.mdc-list-item--with-two-lines.mdc-list-item--with-trailing-meta .mdc-list-item__end{display:block;margin-top:0;line-height:normal}.mdc-list-item--with-leading-checkbox.mdc-list-item--with-two-lines.mdc-list-item--with-trailing-meta .mdc-list-item__end::before{display:inline-block;width:0;height:32px;content:"";vertical-align:0}.mdc-list-item--with-leading-checkbox.mdc-list-item--with-one-line{height:56px}.mdc-list-item--with-leading-checkbox.mdc-list-item--with-two-lines{height:72px}.mdc-list-item--with-leading-radio.mdc-list-item{padding-left:0;padding-right:auto}[dir=rtl] .mdc-list-item--with-leading-radio.mdc-list-item,.mdc-list-item--with-leading-radio.mdc-list-item[dir=rtl]{padding-left:auto;padding-right:0}.mdc-list-item--with-leading-radio .mdc-list-item__start{margin-left:8px;margin-right:24px}[dir=rtl] .mdc-list-item--with-leading-radio .mdc-list-item__start,.mdc-list-item--with-leading-radio .mdc-list-item__start[dir=rtl]{margin-left:24px;margin-right:8px}.mdc-list-item--with-leading-radio .mdc-list-item__start{width:40px;height:40px}.mdc-list-item--with-leading-radio.mdc-list-item--with-two-lines .mdc-list-item__start{align-self:flex-start;margin-top:8px}.mdc-list-item--with-leading-radio.mdc-list-item--with-two-lines 
.mdc-list-item__primary-text{display:block;margin-top:0;line-height:normal;margin-bottom:-20px}.mdc-list-item--with-leading-radio.mdc-list-item--with-two-lines .mdc-list-item__primary-text::before{display:inline-block;width:0;height:32px;content:"";vertical-align:0}.mdc-list-item--with-leading-radio.mdc-list-item--with-two-lines .mdc-list-item__primary-text::after{display:inline-block;width:0;height:20px;content:"";vertical-align:-20px}.mdc-list-item--with-leading-radio.mdc-list-item--with-two-lines .mdc-list-item__overline-text{display:block;margin-top:0;line-height:normal;margin-bottom:-20px}.mdc-list-item--with-leading-radio.mdc-list-item--with-two-lines .mdc-list-item__overline-text::before{display:inline-block;width:0;height:28px;content:"";vertical-align:0}.mdc-list-item--with-leading-radio.mdc-list-item--with-two-lines .mdc-list-item__overline-text::after{display:inline-block;width:0;height:20px;content:"";vertical-align:-20px}.mdc-list-item--with-leading-radio.mdc-list-item--with-two-lines.mdc-list-item--with-trailing-meta .mdc-list-item__end{display:block;margin-top:0;line-height:normal}.mdc-list-item--with-leading-radio.mdc-list-item--with-two-lines.mdc-list-item--with-trailing-meta .mdc-list-item__end::before{display:inline-block;width:0;height:32px;content:"";vertical-align:0}.mdc-list-item--with-leading-radio.mdc-list-item--with-one-line{height:56px}.mdc-list-item--with-leading-radio.mdc-list-item--with-two-lines{height:72px}.mdc-list-item--with-leading-switch.mdc-list-item{padding-left:0;padding-right:auto}[dir=rtl] .mdc-list-item--with-leading-switch.mdc-list-item,.mdc-list-item--with-leading-switch.mdc-list-item[dir=rtl]{padding-left:auto;padding-right:0}.mdc-list-item--with-leading-switch .mdc-list-item__start{margin-left:16px;margin-right:16px}[dir=rtl] .mdc-list-item--with-leading-switch .mdc-list-item__start,.mdc-list-item--with-leading-switch .mdc-list-item__start[dir=rtl]{margin-left:16px;margin-right:16px}.mdc-list-item--with-leading-switch .mdc-list-item__start{width:36px;height:20px}.mdc-list-item--with-leading-switch.mdc-list-item--with-two-lines .mdc-list-item__start{align-self:flex-start;margin-top:16px}.mdc-list-item--with-leading-switch.mdc-list-item--with-two-lines .mdc-list-item__primary-text{display:block;margin-top:0;line-height:normal;margin-bottom:-20px}.mdc-list-item--with-leading-switch.mdc-list-item--with-two-lines .mdc-list-item__primary-text::before{display:inline-block;width:0;height:32px;content:"";vertical-align:0}.mdc-list-item--with-leading-switch.mdc-list-item--with-two-lines .mdc-list-item__primary-text::after{display:inline-block;width:0;height:20px;content:"";vertical-align:-20px}.mdc-list-item--with-leading-switch.mdc-list-item--with-two-lines .mdc-list-item__overline-text{display:block;margin-top:0;line-height:normal;margin-bottom:-20px}.mdc-list-item--with-leading-switch.mdc-list-item--with-two-lines .mdc-list-item__overline-text::before{display:inline-block;width:0;height:28px;content:"";vertical-align:0}.mdc-list-item--with-leading-switch.mdc-list-item--with-two-lines .mdc-list-item__overline-text::after{display:inline-block;width:0;height:20px;content:"";vertical-align:-20px}.mdc-list-item--with-leading-switch.mdc-list-item--with-two-lines.mdc-list-item--with-trailing-meta .mdc-list-item__end{display:block;margin-top:0;line-height:normal}.mdc-list-item--with-leading-switch.mdc-list-item--with-two-lines.mdc-list-item--with-trailing-meta 
.mdc-list-item__end::before{display:inline-block;width:0;height:32px;content:"";vertical-align:0}.mdc-list-item--with-leading-switch.mdc-list-item--with-one-line{height:56px}.mdc-list-item--with-leading-switch.mdc-list-item--with-two-lines{height:72px}.mdc-list-item--with-trailing-icon.mdc-list-item{padding-left:auto;padding-right:0}[dir=rtl] .mdc-list-item--with-trailing-icon.mdc-list-item,.mdc-list-item--with-trailing-icon.mdc-list-item[dir=rtl]{padding-left:0;padding-right:auto}.mdc-list-item--with-trailing-icon .mdc-list-item__end{margin-left:16px;margin-right:16px}[dir=rtl] .mdc-list-item--with-trailing-icon .mdc-list-item__end,.mdc-list-item--with-trailing-icon .mdc-list-item__end[dir=rtl]{margin-left:16px;margin-right:16px}.mdc-list-item--with-trailing-icon .mdc-list-item__end{width:24px;height:24px}.mdc-list-item--with-trailing-meta.mdc-list-item--with-two-lines .mdc-list-item__end{align-self:flex-start;margin-top:0}.mdc-list-item--with-trailing-meta.mdc-list-item--with-three-lines .mdc-list-item__end{align-self:flex-start;margin-top:0}.mdc-list-item--with-trailing-meta.mdc-list-item{padding-left:auto;padding-right:0}[dir=rtl] .mdc-list-item--with-trailing-meta.mdc-list-item,.mdc-list-item--with-trailing-meta.mdc-list-item[dir=rtl]{padding-left:0;padding-right:auto}.mdc-list-item--with-trailing-meta .mdc-list-item__end{margin-left:28px;margin-right:16px}[dir=rtl] .mdc-list-item--with-trailing-meta .mdc-list-item__end,.mdc-list-item--with-trailing-meta .mdc-list-item__end[dir=rtl]{margin-left:16px;margin-right:28px}.mdc-list-item--with-trailing-meta.mdc-list-item--with-two-lines .mdc-list-item__end{display:block;margin-top:0;line-height:normal}.mdc-list-item--with-trailing-meta.mdc-list-item--with-two-lines .mdc-list-item__end::before{display:inline-block;width:0;height:28px;content:"";vertical-align:0}.mdc-list-item--with-trailing-meta.mdc-list-item--with-three-lines .mdc-list-item__end{display:block;margin-top:0;line-height:normal}.mdc-list-item--with-trailing-meta.mdc-list-item--with-three-lines .mdc-list-item__end::before{display:inline-block;width:0;height:28px;content:"";vertical-align:0}.mdc-list-item--with-trailing-meta .mdc-list-item__end{-moz-osx-font-smoothing:grayscale;-webkit-font-smoothing:antialiased;font-family:Roboto, sans-serif;font-family:var(--mdc-typography-caption-font-family, var(--mdc-typography-font-family, Roboto, sans-serif));font-size:0.75rem;font-size:var(--mdc-typography-caption-font-size, 0.75rem);line-height:1.25rem;line-height:var(--mdc-typography-caption-line-height, 1.25rem);font-weight:400;font-weight:var(--mdc-typography-caption-font-weight, 400);letter-spacing:0.0333333333em;letter-spacing:var(--mdc-typography-caption-letter-spacing, 0.0333333333em);text-decoration:inherit;text-decoration:var(--mdc-typography-caption-text-decoration, inherit);text-transform:inherit;text-transform:var(--mdc-typography-caption-text-transform, inherit)}.mdc-list-item--with-trailing-checkbox.mdc-list-item{padding-left:auto;padding-right:0}[dir=rtl] .mdc-list-item--with-trailing-checkbox.mdc-list-item,.mdc-list-item--with-trailing-checkbox.mdc-list-item[dir=rtl]{padding-left:0;padding-right:auto}.mdc-list-item--with-trailing-checkbox .mdc-list-item__end{margin-left:24px;margin-right:8px}[dir=rtl] .mdc-list-item--with-trailing-checkbox .mdc-list-item__end,.mdc-list-item--with-trailing-checkbox .mdc-list-item__end[dir=rtl]{margin-left:8px;margin-right:24px}.mdc-list-item--with-trailing-checkbox 
.mdc-list-item__end{width:40px;height:40px}.mdc-list-item--with-trailing-checkbox.mdc-list-item--with-three-lines .mdc-list-item__end{align-self:flex-start;margin-top:8px}.mdc-list-item--with-trailing-radio.mdc-list-item{padding-left:auto;padding-right:0}[dir=rtl] .mdc-list-item--with-trailing-radio.mdc-list-item,.mdc-list-item--with-trailing-radio.mdc-list-item[dir=rtl]{padding-left:0;padding-right:auto}.mdc-list-item--with-trailing-radio .mdc-list-item__end{margin-left:24px;margin-right:8px}[dir=rtl] .mdc-list-item--with-trailing-radio .mdc-list-item__end,.mdc-list-item--with-trailing-radio .mdc-list-item__end[dir=rtl]{margin-left:8px;margin-right:24px}.mdc-list-item--with-trailing-radio .mdc-list-item__end{width:40px;height:40px}.mdc-list-item--with-trailing-radio.mdc-list-item--with-three-lines .mdc-list-item__end{align-self:flex-start;margin-top:8px}.mdc-list-item--with-trailing-switch.mdc-list-item{padding-left:auto;padding-right:0}[dir=rtl] .mdc-list-item--with-trailing-switch.mdc-list-item,.mdc-list-item--with-trailing-switch.mdc-list-item[dir=rtl]{padding-left:0;padding-right:auto}.mdc-list-item--with-trailing-switch .mdc-list-item__end{margin-left:16px;margin-right:16px}[dir=rtl] .mdc-list-item--with-trailing-switch .mdc-list-item__end,.mdc-list-item--with-trailing-switch .mdc-list-item__end[dir=rtl]{margin-left:16px;margin-right:16px}.mdc-list-item--with-trailing-switch .mdc-list-item__end{width:36px;height:20px}.mdc-list-item--with-trailing-switch.mdc-list-item--with-three-lines .mdc-list-item__end{align-self:flex-start;margin-top:16px}.mdc-list-item--with-overline.mdc-list-item--with-two-lines .mdc-list-item__primary-text{display:block;margin-top:0;line-height:normal}.mdc-list-item--with-overline.mdc-list-item--with-two-lines .mdc-list-item__primary-text::before{display:inline-block;width:0;height:20px;content:"";vertical-align:0}.mdc-list-item--with-overline.mdc-list-item--with-three-lines .mdc-list-item__primary-text{display:block;margin-top:0;line-height:normal}.mdc-list-item--with-overline.mdc-list-item--with-three-lines .mdc-list-item__primary-text::before{display:inline-block;width:0;height:20px;content:"";vertical-align:0}.mdc-list-item{padding-left:16px;padding-right:16px}[dir=rtl] .mdc-list-item,.mdc-list-item[dir=rtl]{padding-left:16px;padding-right:16px}.mdc-list-group .mdc-deprecated-list{padding:0}.mdc-list-group__subheader{-moz-osx-font-smoothing:grayscale;-webkit-font-smoothing:antialiased;font-family:Roboto, sans-serif;font-family:var(--mdc-typography-subtitle1-font-family, var(--mdc-typography-font-family, Roboto, sans-serif));font-size:1rem;font-size:var(--mdc-typography-subtitle1-font-size, 1rem);line-height:1.75rem;line-height:var(--mdc-typography-subtitle1-line-height, 1.75rem);font-weight:400;font-weight:var(--mdc-typography-subtitle1-font-weight, 400);letter-spacing:0.009375em;letter-spacing:var(--mdc-typography-subtitle1-letter-spacing, 0.009375em);text-decoration:inherit;text-decoration:var(--mdc-typography-subtitle1-text-decoration, inherit);text-transform:inherit;text-transform:var(--mdc-typography-subtitle1-text-transform, inherit);margin:calc((3rem - 1.5rem)/2) 16px}.mdc-list-divider{background-color:rgba(0, 0, 0, 
0.12)}.mdc-list-divider{height:1px}.mdc-list-divider{padding:0;background-clip:content-box}.mdc-list-divider.mdc-list-divider--with-leading-inset,.mdc-list-divider--with-leading-text.mdc-list-divider--with-leading-inset,.mdc-list-divider--with-leading-icon.mdc-list-divider--with-leading-inset,.mdc-list-divider--with-leading-image.mdc-list-divider--with-leading-inset,.mdc-list-divider--with-leading-thumbnail.mdc-list-divider--with-leading-inset,.mdc-list-divider--with-leading-avatar.mdc-list-divider--with-leading-inset,.mdc-list-divider--with-leading-checkbox.mdc-list-divider--with-leading-inset,.mdc-list-divider--with-leading-switch.mdc-list-divider--with-leading-inset,.mdc-list-divider--with-leading-radio.mdc-list-divider--with-leading-inset{padding-left:16px;padding-right:auto}[dir=rtl] .mdc-list-divider.mdc-list-divider--with-leading-inset,[dir=rtl] .mdc-list-divider--with-leading-text.mdc-list-divider--with-leading-inset,[dir=rtl] .mdc-list-divider--with-leading-icon.mdc-list-divider--with-leading-inset,[dir=rtl] .mdc-list-divider--with-leading-image.mdc-list-divider--with-leading-inset,[dir=rtl] .mdc-list-divider--with-leading-thumbnail.mdc-list-divider--with-leading-inset,[dir=rtl] .mdc-list-divider--with-leading-avatar.mdc-list-divider--with-leading-inset,[dir=rtl] .mdc-list-divider--with-leading-checkbox.mdc-list-divider--with-leading-inset,[dir=rtl] .mdc-list-divider--with-leading-switch.mdc-list-divider--with-leading-inset,[dir=rtl] .mdc-list-divider--with-leading-radio.mdc-list-divider--with-leading-inset,.mdc-list-divider.mdc-list-divider--with-leading-inset[dir=rtl],.mdc-list-divider--with-leading-text.mdc-list-divider--with-leading-inset[dir=rtl],.mdc-list-divider--with-leading-icon.mdc-list-divider--with-leading-inset[dir=rtl],.mdc-list-divider--with-leading-image.mdc-list-divider--with-leading-inset[dir=rtl],.mdc-list-divider--with-leading-thumbnail.mdc-list-divider--with-leading-inset[dir=rtl],.mdc-list-divider--with-leading-avatar.mdc-list-divider--with-leading-inset[dir=rtl],.mdc-list-divider--with-leading-checkbox.mdc-list-divider--with-leading-inset[dir=rtl],.mdc-list-divider--with-leading-switch.mdc-list-divider--with-leading-inset[dir=rtl],.mdc-list-divider--with-leading-radio.mdc-list-divider--with-leading-inset[dir=rtl]{padding-left:auto;padding-right:16px}.mdc-list-divider.mdc-list-divider--with-trailing-inset,.mdc-list-divider--with-leading-text.mdc-list-divider--with-trailing-inset,.mdc-list-divider--with-leading-icon.mdc-list-divider--with-trailing-inset,.mdc-list-divider--with-leading-image.mdc-list-divider--with-trailing-inset,.mdc-list-divider--with-leading-thumbnail.mdc-list-divider--with-trailing-inset,.mdc-list-divider--with-leading-avatar.mdc-list-divider--with-trailing-inset,.mdc-list-divider--with-leading-checkbox.mdc-list-divider--with-trailing-inset,.mdc-list-divider--with-leading-switch.mdc-list-divider--with-trailing-inset,.mdc-list-divider--with-leading-radio.mdc-list-divider--with-trailing-inset{padding-left:auto;padding-right:16px}[dir=rtl] .mdc-list-divider.mdc-list-divider--with-trailing-inset,[dir=rtl] .mdc-list-divider--with-leading-text.mdc-list-divider--with-trailing-inset,[dir=rtl] .mdc-list-divider--with-leading-icon.mdc-list-divider--with-trailing-inset,[dir=rtl] .mdc-list-divider--with-leading-image.mdc-list-divider--with-trailing-inset,[dir=rtl] .mdc-list-divider--with-leading-thumbnail.mdc-list-divider--with-trailing-inset,[dir=rtl] .mdc-list-divider--with-leading-avatar.mdc-list-divider--with-trailing-inset,[dir=rtl] 
.mdc-list-divider--with-leading-checkbox.mdc-list-divider--with-trailing-inset,[dir=rtl] .mdc-list-divider--with-leading-switch.mdc-list-divider--with-trailing-inset,[dir=rtl] .mdc-list-divider--with-leading-radio.mdc-list-divider--with-trailing-inset,.mdc-list-divider.mdc-list-divider--with-trailing-inset[dir=rtl],.mdc-list-divider--with-leading-text.mdc-list-divider--with-trailing-inset[dir=rtl],.mdc-list-divider--with-leading-icon.mdc-list-divider--with-trailing-inset[dir=rtl],.mdc-list-divider--with-leading-image.mdc-list-divider--with-trailing-inset[dir=rtl],.mdc-list-divider--with-leading-thumbnail.mdc-list-divider--with-trailing-inset[dir=rtl],.mdc-list-divider--with-leading-avatar.mdc-list-divider--with-trailing-inset[dir=rtl],.mdc-list-divider--with-leading-checkbox.mdc-list-divider--with-trailing-inset[dir=rtl],.mdc-list-divider--with-leading-switch.mdc-list-divider--with-trailing-inset[dir=rtl],.mdc-list-divider--with-leading-radio.mdc-list-divider--with-trailing-inset[dir=rtl]{padding-left:16px;padding-right:auto}.mdc-list-divider--with-leading-video.mdc-list-divider--with-leading-inset{padding-left:0px;padding-right:auto}[dir=rtl] .mdc-list-divider--with-leading-video.mdc-list-divider--with-leading-inset,.mdc-list-divider--with-leading-video.mdc-list-divider--with-leading-inset[dir=rtl]{padding-left:auto;padding-right:0px}[dir=rtl] .mdc-list-divider,.mdc-list-divider[dir=rtl]{padding:0}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item{--mdc-ripple-fg-size: 0;--mdc-ripple-left: 0;--mdc-ripple-top: 0;--mdc-ripple-fg-scale: 1;--mdc-ripple-fg-translate-end: 0;--mdc-ripple-fg-translate-start: 0;-webkit-tap-highlight-color:rgba(0,0,0,0);will-change:transform,opacity;--mdc-ripple-fg-size: 0;--mdc-ripple-left: 0;--mdc-ripple-top: 0;--mdc-ripple-fg-scale: 1;--mdc-ripple-fg-translate-end: 0;--mdc-ripple-fg-translate-start: 0;-webkit-tap-highlight-color:rgba(0,0,0,0);will-change:transform,opacity}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item .mdc-deprecated-list-item__ripple::before,:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item .mdc-deprecated-list-item__ripple::after{position:absolute;border-radius:50%;opacity:0;pointer-events:none;content:""}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item .mdc-deprecated-list-item__ripple::before{transition:opacity 15ms linear,background-color 15ms linear;z-index:1;z-index:var(--mdc-ripple-z-index, 1)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item .mdc-deprecated-list-item__ripple::after{z-index:0;z-index:var(--mdc-ripple-z-index, 0)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item.mdc-ripple-upgraded .mdc-deprecated-list-item__ripple::before{transform:scale(var(--mdc-ripple-fg-scale, 1))}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item.mdc-ripple-upgraded .mdc-deprecated-list-item__ripple::after{top:0;left:0;transform:scale(0);transform-origin:center center}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item.mdc-ripple-upgraded--unbounded .mdc-deprecated-list-item__ripple::after{top:var(--mdc-ripple-top, 0);left:var(--mdc-ripple-left, 0)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item.mdc-ripple-upgraded--foreground-activation .mdc-deprecated-list-item__ripple::after{animation:mdc-ripple-fg-radius-in 225ms forwards,mdc-ripple-fg-opacity-in 75ms forwards}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item.mdc-ripple-upgraded--foreground-deactivation 
.mdc-deprecated-list-item__ripple::after{animation:mdc-ripple-fg-opacity-out 150ms;transform:translate(var(--mdc-ripple-fg-translate-end, 0)) scale(var(--mdc-ripple-fg-scale, 1))}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item .mdc-list-item__ripple::before,:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item .mdc-list-item__ripple::after{position:absolute;border-radius:50%;opacity:0;pointer-events:none;content:""}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item .mdc-list-item__ripple::before{transition:opacity 15ms linear,background-color 15ms linear;z-index:1;z-index:var(--mdc-ripple-z-index, 1)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item .mdc-list-item__ripple::after{z-index:0;z-index:var(--mdc-ripple-z-index, 0)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item.mdc-ripple-upgraded .mdc-list-item__ripple::before{transform:scale(var(--mdc-ripple-fg-scale, 1))}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item.mdc-ripple-upgraded .mdc-list-item__ripple::after{top:0;left:0;transform:scale(0);transform-origin:center center}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item.mdc-ripple-upgraded--unbounded .mdc-list-item__ripple::after{top:var(--mdc-ripple-top, 0);left:var(--mdc-ripple-left, 0)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item.mdc-ripple-upgraded--foreground-activation .mdc-list-item__ripple::after{animation:mdc-ripple-fg-radius-in 225ms forwards,mdc-ripple-fg-opacity-in 75ms forwards}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item.mdc-ripple-upgraded--foreground-deactivation .mdc-list-item__ripple::after{animation:mdc-ripple-fg-opacity-out 150ms;transform:translate(var(--mdc-ripple-fg-translate-end, 0)) scale(var(--mdc-ripple-fg-scale, 1))}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item .mdc-deprecated-list-item__ripple::before,:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item .mdc-deprecated-list-item__ripple::after{top:calc(50% - 100%);left:calc(50% - 100%);width:200%;height:200%}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item.mdc-ripple-upgraded .mdc-deprecated-list-item__ripple::after{width:var(--mdc-ripple-fg-size, 100%);height:var(--mdc-ripple-fg-size, 100%)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item .mdc-list-item__ripple::before,:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item .mdc-list-item__ripple::after{top:calc(50% - 100%);left:calc(50% - 100%);width:200%;height:200%}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item.mdc-ripple-upgraded .mdc-list-item__ripple::after{width:var(--mdc-ripple-fg-size, 100%);height:var(--mdc-ripple-fg-size, 100%)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item .mdc-deprecated-list-item__ripple::before,:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item .mdc-deprecated-list-item__ripple::after{background-color:#000;background-color:var(--mdc-ripple-color, #000)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item:hover .mdc-deprecated-list-item__ripple::before,:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item.mdc-ripple-surface--hover .mdc-deprecated-list-item__ripple::before{opacity:0.04;opacity:var(--mdc-ripple-hover-opacity, 0.04)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item.mdc-ripple-upgraded--background-focused 
.mdc-deprecated-list-item__ripple::before,:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item:not(.mdc-ripple-upgraded):focus .mdc-deprecated-list-item__ripple::before{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-focus-opacity, 0.12)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item:not(.mdc-ripple-upgraded) .mdc-deprecated-list-item__ripple::after{transition:opacity 150ms linear}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item:not(.mdc-ripple-upgraded):active .mdc-deprecated-list-item__ripple::after{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-press-opacity, 0.12)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item.mdc-ripple-upgraded{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity, 0.12)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item .mdc-list-item__ripple::before,:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item .mdc-list-item__ripple::after{background-color:#000;background-color:var(--mdc-ripple-color, #000)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item:hover .mdc-list-item__ripple::before,:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item.mdc-ripple-surface--hover .mdc-list-item__ripple::before{opacity:0.04;opacity:var(--mdc-ripple-hover-opacity, 0.04)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item.mdc-ripple-upgraded--background-focused .mdc-list-item__ripple::before,:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item:not(.mdc-ripple-upgraded):focus .mdc-list-item__ripple::before{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-focus-opacity, 0.12)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item:not(.mdc-ripple-upgraded) .mdc-list-item__ripple::after{transition:opacity 150ms linear}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item:not(.mdc-ripple-upgraded):active .mdc-list-item__ripple::after{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-press-opacity, 0.12)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item.mdc-ripple-upgraded{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity, 0.12)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--activated .mdc-deprecated-list-item__ripple::before{opacity:0.12;opacity:var(--mdc-ripple-activated-opacity, 0.12)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--activated .mdc-deprecated-list-item__ripple::before,:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--activated .mdc-deprecated-list-item__ripple::after{background-color:#6a1b9a;background-color:var(--mdc-ripple-color, var(--mdc-theme-primary, #6a1b9a))}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--activated:hover .mdc-deprecated-list-item__ripple::before,:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--activated.mdc-ripple-surface--hover .mdc-deprecated-list-item__ripple::before{opacity:0.16;opacity:var(--mdc-ripple-hover-opacity, 0.16)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--activated.mdc-ripple-upgraded--background-focused .mdc-deprecated-list-item__ripple::before,:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--activated:not(.mdc-ripple-upgraded):focus .mdc-deprecated-list-item__ripple::before{transition-duration:75ms;opacity:0.24;opacity:var(--mdc-ripple-focus-opacity, 
0.24)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--activated:not(.mdc-ripple-upgraded) .mdc-deprecated-list-item__ripple::after{transition:opacity 150ms linear}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--activated:not(.mdc-ripple-upgraded):active .mdc-deprecated-list-item__ripple::after{transition-duration:75ms;opacity:0.24;opacity:var(--mdc-ripple-press-opacity, 0.24)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--activated.mdc-ripple-upgraded{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity, 0.24)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--activated .mdc-list-item__ripple::before{opacity:0.12;opacity:var(--mdc-ripple-activated-opacity, 0.12)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--activated .mdc-list-item__ripple::before,:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--activated .mdc-list-item__ripple::after{background-color:#6a1b9a;background-color:var(--mdc-ripple-color, var(--mdc-theme-primary, #6a1b9a))}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--activated:hover .mdc-list-item__ripple::before,:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--activated.mdc-ripple-surface--hover .mdc-list-item__ripple::before{opacity:0.16;opacity:var(--mdc-ripple-hover-opacity, 0.16)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--activated.mdc-ripple-upgraded--background-focused .mdc-list-item__ripple::before,:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--activated:not(.mdc-ripple-upgraded):focus .mdc-list-item__ripple::before{transition-duration:75ms;opacity:0.24;opacity:var(--mdc-ripple-focus-opacity, 0.24)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--activated:not(.mdc-ripple-upgraded) .mdc-list-item__ripple::after{transition:opacity 150ms linear}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--activated:not(.mdc-ripple-upgraded):active .mdc-list-item__ripple::after{transition-duration:75ms;opacity:0.24;opacity:var(--mdc-ripple-press-opacity, 0.24)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--activated.mdc-ripple-upgraded{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity, 0.24)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--selected .mdc-deprecated-list-item__ripple::before{opacity:0.08;opacity:var(--mdc-ripple-selected-opacity, 0.08)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--selected .mdc-deprecated-list-item__ripple::before,:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--selected .mdc-deprecated-list-item__ripple::after{background-color:#6a1b9a;background-color:var(--mdc-ripple-color, var(--mdc-theme-primary, #6a1b9a))}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--selected:hover .mdc-deprecated-list-item__ripple::before,:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--selected.mdc-ripple-surface--hover .mdc-deprecated-list-item__ripple::before{opacity:0.12;opacity:var(--mdc-ripple-hover-opacity, 0.12)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--selected.mdc-ripple-upgraded--background-focused .mdc-deprecated-list-item__ripple::before,:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--selected:not(.mdc-ripple-upgraded):focus .mdc-deprecated-list-item__ripple::before{transition-duration:75ms;opacity:0.2;opacity:var(--mdc-ripple-focus-opacity, 
0.2)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--selected:not(.mdc-ripple-upgraded) .mdc-deprecated-list-item__ripple::after{transition:opacity 150ms linear}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--selected:not(.mdc-ripple-upgraded):active .mdc-deprecated-list-item__ripple::after{transition-duration:75ms;opacity:0.2;opacity:var(--mdc-ripple-press-opacity, 0.2)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--selected.mdc-ripple-upgraded{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity, 0.2)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--selected .mdc-list-item__ripple::before{opacity:0.08;opacity:var(--mdc-ripple-selected-opacity, 0.08)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--selected .mdc-list-item__ripple::before,:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--selected .mdc-list-item__ripple::after{background-color:#6a1b9a;background-color:var(--mdc-ripple-color, var(--mdc-theme-primary, #6a1b9a))}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--selected:hover .mdc-list-item__ripple::before,:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--selected.mdc-ripple-surface--hover .mdc-list-item__ripple::before{opacity:0.12;opacity:var(--mdc-ripple-hover-opacity, 0.12)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--selected.mdc-ripple-upgraded--background-focused .mdc-list-item__ripple::before,:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--selected:not(.mdc-ripple-upgraded):focus .mdc-list-item__ripple::before{transition-duration:75ms;opacity:0.2;opacity:var(--mdc-ripple-focus-opacity, 0.2)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--selected:not(.mdc-ripple-upgraded) .mdc-list-item__ripple::after{transition:opacity 150ms linear}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--selected:not(.mdc-ripple-upgraded):active .mdc-list-item__ripple::after{transition-duration:75ms;opacity:0.2;opacity:var(--mdc-ripple-press-opacity, 0.2)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item--selected.mdc-ripple-upgraded{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity, 0.2)}:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item .mdc-deprecated-list-item__ripple,:not(.mdc-deprecated-list-item--disabled).mdc-deprecated-list-item .mdc-list-item__ripple{position:absolute;top:0;left:0;width:100%;height:100%;pointer-events:none}.mdc-deprecated-list-item--disabled{--mdc-ripple-fg-size: 0;--mdc-ripple-left: 0;--mdc-ripple-top: 0;--mdc-ripple-fg-scale: 1;--mdc-ripple-fg-translate-end: 0;--mdc-ripple-fg-translate-start: 0;-webkit-tap-highlight-color:rgba(0,0,0,0);will-change:transform,opacity;--mdc-ripple-fg-size: 0;--mdc-ripple-left: 0;--mdc-ripple-top: 0;--mdc-ripple-fg-scale: 1;--mdc-ripple-fg-translate-end: 0;--mdc-ripple-fg-translate-start: 0;-webkit-tap-highlight-color:rgba(0,0,0,0);will-change:transform,opacity}.mdc-deprecated-list-item--disabled .mdc-deprecated-list-item__ripple::before,.mdc-deprecated-list-item--disabled .mdc-deprecated-list-item__ripple::after{position:absolute;border-radius:50%;opacity:0;pointer-events:none;content:""}.mdc-deprecated-list-item--disabled .mdc-deprecated-list-item__ripple::before{transition:opacity 15ms linear,background-color 15ms linear;z-index:1;z-index:var(--mdc-ripple-z-index, 1)}.mdc-deprecated-list-item--disabled 
.mdc-deprecated-list-item__ripple::after{z-index:0;z-index:var(--mdc-ripple-z-index, 0)}.mdc-deprecated-list-item--disabled.mdc-ripple-upgraded .mdc-deprecated-list-item__ripple::before{transform:scale(var(--mdc-ripple-fg-scale, 1))}.mdc-deprecated-list-item--disabled.mdc-ripple-upgraded .mdc-deprecated-list-item__ripple::after{top:0;left:0;transform:scale(0);transform-origin:center center}.mdc-deprecated-list-item--disabled.mdc-ripple-upgraded--unbounded .mdc-deprecated-list-item__ripple::after{top:var(--mdc-ripple-top, 0);left:var(--mdc-ripple-left, 0)}.mdc-deprecated-list-item--disabled.mdc-ripple-upgraded--foreground-activation .mdc-deprecated-list-item__ripple::after{animation:mdc-ripple-fg-radius-in 225ms forwards,mdc-ripple-fg-opacity-in 75ms forwards}.mdc-deprecated-list-item--disabled.mdc-ripple-upgraded--foreground-deactivation .mdc-deprecated-list-item__ripple::after{animation:mdc-ripple-fg-opacity-out 150ms;transform:translate(var(--mdc-ripple-fg-translate-end, 0)) scale(var(--mdc-ripple-fg-scale, 1))}.mdc-deprecated-list-item--disabled .mdc-list-item__ripple::before,.mdc-deprecated-list-item--disabled .mdc-list-item__ripple::after{position:absolute;border-radius:50%;opacity:0;pointer-events:none;content:""}.mdc-deprecated-list-item--disabled .mdc-list-item__ripple::before{transition:opacity 15ms linear,background-color 15ms linear;z-index:1;z-index:var(--mdc-ripple-z-index, 1)}.mdc-deprecated-list-item--disabled .mdc-list-item__ripple::after{z-index:0;z-index:var(--mdc-ripple-z-index, 0)}.mdc-deprecated-list-item--disabled.mdc-ripple-upgraded .mdc-list-item__ripple::before{transform:scale(var(--mdc-ripple-fg-scale, 1))}.mdc-deprecated-list-item--disabled.mdc-ripple-upgraded .mdc-list-item__ripple::after{top:0;left:0;transform:scale(0);transform-origin:center center}.mdc-deprecated-list-item--disabled.mdc-ripple-upgraded--unbounded .mdc-list-item__ripple::after{top:var(--mdc-ripple-top, 0);left:var(--mdc-ripple-left, 0)}.mdc-deprecated-list-item--disabled.mdc-ripple-upgraded--foreground-activation .mdc-list-item__ripple::after{animation:mdc-ripple-fg-radius-in 225ms forwards,mdc-ripple-fg-opacity-in 75ms forwards}.mdc-deprecated-list-item--disabled.mdc-ripple-upgraded--foreground-deactivation .mdc-list-item__ripple::after{animation:mdc-ripple-fg-opacity-out 150ms;transform:translate(var(--mdc-ripple-fg-translate-end, 0)) scale(var(--mdc-ripple-fg-scale, 1))}.mdc-deprecated-list-item--disabled .mdc-deprecated-list-item__ripple::before,.mdc-deprecated-list-item--disabled .mdc-deprecated-list-item__ripple::after{top:calc(50% - 100%);left:calc(50% - 100%);width:200%;height:200%}.mdc-deprecated-list-item--disabled.mdc-ripple-upgraded .mdc-deprecated-list-item__ripple::after{width:var(--mdc-ripple-fg-size, 100%);height:var(--mdc-ripple-fg-size, 100%)}.mdc-deprecated-list-item--disabled .mdc-list-item__ripple::before,.mdc-deprecated-list-item--disabled .mdc-list-item__ripple::after{top:calc(50% - 100%);left:calc(50% - 100%);width:200%;height:200%}.mdc-deprecated-list-item--disabled.mdc-ripple-upgraded .mdc-list-item__ripple::after{width:var(--mdc-ripple-fg-size, 100%);height:var(--mdc-ripple-fg-size, 100%)}.mdc-deprecated-list-item--disabled .mdc-deprecated-list-item__ripple::before,.mdc-deprecated-list-item--disabled .mdc-deprecated-list-item__ripple::after{background-color:#000;background-color:var(--mdc-ripple-color, #000)}.mdc-deprecated-list-item--disabled .mdc-list-item__ripple::before,.mdc-deprecated-list-item--disabled 
.mdc-list-item__ripple::after{background-color:#000;background-color:var(--mdc-ripple-color, #000)}.mdc-deprecated-list-item--disabled.mdc-ripple-upgraded--background-focused .mdc-deprecated-list-item__ripple::before,.mdc-deprecated-list-item--disabled:not(.mdc-ripple-upgraded):focus .mdc-deprecated-list-item__ripple::before{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-focus-opacity, 0.12)}.mdc-deprecated-list-item--disabled.mdc-ripple-upgraded--background-focused .mdc-list-item__ripple::before,.mdc-deprecated-list-item--disabled:not(.mdc-ripple-upgraded):focus .mdc-list-item__ripple::before{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-focus-opacity, 0.12)}.mdc-deprecated-list-item--disabled .mdc-deprecated-list-item__ripple,.mdc-deprecated-list-item--disabled .mdc-list-item__ripple{position:absolute;top:0;left:0;width:100%;height:100%;pointer-events:none}:not(.mdc-list-item--disabled).mdc-list-item{--mdc-ripple-fg-size: 0;--mdc-ripple-left: 0;--mdc-ripple-top: 0;--mdc-ripple-fg-scale: 1;--mdc-ripple-fg-translate-end: 0;--mdc-ripple-fg-translate-start: 0;-webkit-tap-highlight-color:rgba(0,0,0,0);will-change:transform,opacity}:not(.mdc-list-item--disabled).mdc-list-item .mdc-list-item__ripple::before,:not(.mdc-list-item--disabled).mdc-list-item .mdc-list-item__ripple::after{position:absolute;border-radius:50%;opacity:0;pointer-events:none;content:""}:not(.mdc-list-item--disabled).mdc-list-item .mdc-list-item__ripple::before{transition:opacity 15ms linear,background-color 15ms linear;z-index:1;z-index:var(--mdc-ripple-z-index, 1)}:not(.mdc-list-item--disabled).mdc-list-item .mdc-list-item__ripple::after{z-index:0;z-index:var(--mdc-ripple-z-index, 0)}:not(.mdc-list-item--disabled).mdc-list-item.mdc-ripple-upgraded .mdc-list-item__ripple::before{transform:scale(var(--mdc-ripple-fg-scale, 1))}:not(.mdc-list-item--disabled).mdc-list-item.mdc-ripple-upgraded .mdc-list-item__ripple::after{top:0;left:0;transform:scale(0);transform-origin:center center}:not(.mdc-list-item--disabled).mdc-list-item.mdc-ripple-upgraded--unbounded .mdc-list-item__ripple::after{top:var(--mdc-ripple-top, 0);left:var(--mdc-ripple-left, 0)}:not(.mdc-list-item--disabled).mdc-list-item.mdc-ripple-upgraded--foreground-activation .mdc-list-item__ripple::after{animation:mdc-ripple-fg-radius-in 225ms forwards,mdc-ripple-fg-opacity-in 75ms forwards}:not(.mdc-list-item--disabled).mdc-list-item.mdc-ripple-upgraded--foreground-deactivation .mdc-list-item__ripple::after{animation:mdc-ripple-fg-opacity-out 150ms;transform:translate(var(--mdc-ripple-fg-translate-end, 0)) scale(var(--mdc-ripple-fg-scale, 1))}:not(.mdc-list-item--disabled).mdc-list-item .mdc-list-item__ripple::before,:not(.mdc-list-item--disabled).mdc-list-item .mdc-list-item__ripple::after{top:calc(50% - 100%);left:calc(50% - 100%);width:200%;height:200%}:not(.mdc-list-item--disabled).mdc-list-item.mdc-ripple-upgraded .mdc-list-item__ripple::after{width:var(--mdc-ripple-fg-size, 100%);height:var(--mdc-ripple-fg-size, 100%)}:not(.mdc-list-item--disabled).mdc-list-item .mdc-list-item__ripple::before,:not(.mdc-list-item--disabled).mdc-list-item .mdc-list-item__ripple::after{background-color:#000;background-color:var(--mdc-ripple-color, #000)}:not(.mdc-list-item--disabled).mdc-list-item:hover .mdc-list-item__ripple::before,:not(.mdc-list-item--disabled).mdc-list-item.mdc-ripple-surface--hover .mdc-list-item__ripple::before{opacity:0.04;opacity:var(--mdc-ripple-hover-opacity, 
0.04)}:not(.mdc-list-item--disabled).mdc-list-item.mdc-ripple-upgraded--background-focused .mdc-list-item__ripple::before,:not(.mdc-list-item--disabled).mdc-list-item:not(.mdc-ripple-upgraded):focus .mdc-list-item__ripple::before{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-focus-opacity, 0.12)}:not(.mdc-list-item--disabled).mdc-list-item:not(.mdc-ripple-upgraded) .mdc-list-item__ripple::after{transition:opacity 150ms linear}:not(.mdc-list-item--disabled).mdc-list-item:not(.mdc-ripple-upgraded):active .mdc-list-item__ripple::after{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-press-opacity, 0.12)}:not(.mdc-list-item--disabled).mdc-list-item.mdc-ripple-upgraded{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity, 0.12)}:not(.mdc-list-item--disabled).mdc-list-item--activated .mdc-list-item__ripple::before{opacity:0.12;opacity:var(--mdc-ripple-activated-opacity, 0.12)}:not(.mdc-list-item--disabled).mdc-list-item--activated .mdc-list-item__ripple::before,:not(.mdc-list-item--disabled).mdc-list-item--activated .mdc-list-item__ripple::after{background-color:#6a1b9a;background-color:var(--mdc-ripple-color, var(--mdc-theme-primary, #6a1b9a))}:not(.mdc-list-item--disabled).mdc-list-item--activated:hover .mdc-list-item__ripple::before,:not(.mdc-list-item--disabled).mdc-list-item--activated.mdc-ripple-surface--hover .mdc-list-item__ripple::before{opacity:0.16;opacity:var(--mdc-ripple-hover-opacity, 0.16)}:not(.mdc-list-item--disabled).mdc-list-item--activated.mdc-ripple-upgraded--background-focused .mdc-list-item__ripple::before,:not(.mdc-list-item--disabled).mdc-list-item--activated:not(.mdc-ripple-upgraded):focus .mdc-list-item__ripple::before{transition-duration:75ms;opacity:0.24;opacity:var(--mdc-ripple-focus-opacity, 0.24)}:not(.mdc-list-item--disabled).mdc-list-item--activated:not(.mdc-ripple-upgraded) .mdc-list-item__ripple::after{transition:opacity 150ms linear}:not(.mdc-list-item--disabled).mdc-list-item--activated:not(.mdc-ripple-upgraded):active .mdc-list-item__ripple::after{transition-duration:75ms;opacity:0.24;opacity:var(--mdc-ripple-press-opacity, 0.24)}:not(.mdc-list-item--disabled).mdc-list-item--activated.mdc-ripple-upgraded{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity, 0.24)}:not(.mdc-list-item--disabled).mdc-list-item--selected .mdc-list-item__ripple::before{opacity:0.08;opacity:var(--mdc-ripple-selected-opacity, 0.08)}:not(.mdc-list-item--disabled).mdc-list-item--selected .mdc-list-item__ripple::before,:not(.mdc-list-item--disabled).mdc-list-item--selected .mdc-list-item__ripple::after{background-color:#6a1b9a;background-color:var(--mdc-ripple-color, var(--mdc-theme-primary, #6a1b9a))}:not(.mdc-list-item--disabled).mdc-list-item--selected:hover .mdc-list-item__ripple::before,:not(.mdc-list-item--disabled).mdc-list-item--selected.mdc-ripple-surface--hover .mdc-list-item__ripple::before{opacity:0.12;opacity:var(--mdc-ripple-hover-opacity, 0.12)}:not(.mdc-list-item--disabled).mdc-list-item--selected.mdc-ripple-upgraded--background-focused .mdc-list-item__ripple::before,:not(.mdc-list-item--disabled).mdc-list-item--selected:not(.mdc-ripple-upgraded):focus .mdc-list-item__ripple::before{transition-duration:75ms;opacity:0.2;opacity:var(--mdc-ripple-focus-opacity, 0.2)}:not(.mdc-list-item--disabled).mdc-list-item--selected:not(.mdc-ripple-upgraded) .mdc-list-item__ripple::after{transition:opacity 150ms linear}:not(.mdc-list-item--disabled).mdc-list-item--selected:not(.mdc-ripple-upgraded):active 
.mdc-list-item__ripple::after{transition-duration:75ms;opacity:0.2;opacity:var(--mdc-ripple-press-opacity, 0.2)}:not(.mdc-list-item--disabled).mdc-list-item--selected.mdc-ripple-upgraded{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity, 0.2)}:not(.mdc-list-item--disabled).mdc-list-item .mdc-list-item__ripple{position:absolute;top:0;left:0;width:100%;height:100%;pointer-events:none}.mdc-list-item--disabled{--mdc-ripple-fg-size: 0;--mdc-ripple-left: 0;--mdc-ripple-top: 0;--mdc-ripple-fg-scale: 1;--mdc-ripple-fg-translate-end: 0;--mdc-ripple-fg-translate-start: 0;-webkit-tap-highlight-color:rgba(0,0,0,0);will-change:transform,opacity}.mdc-list-item--disabled .mdc-list-item__ripple::before,.mdc-list-item--disabled .mdc-list-item__ripple::after{position:absolute;border-radius:50%;opacity:0;pointer-events:none;content:""}.mdc-list-item--disabled .mdc-list-item__ripple::before{transition:opacity 15ms linear,background-color 15ms linear;z-index:1;z-index:var(--mdc-ripple-z-index, 1)}.mdc-list-item--disabled .mdc-list-item__ripple::after{z-index:0;z-index:var(--mdc-ripple-z-index, 0)}.mdc-list-item--disabled.mdc-ripple-upgraded .mdc-list-item__ripple::before{transform:scale(var(--mdc-ripple-fg-scale, 1))}.mdc-list-item--disabled.mdc-ripple-upgraded .mdc-list-item__ripple::after{top:0;left:0;transform:scale(0);transform-origin:center center}.mdc-list-item--disabled.mdc-ripple-upgraded--unbounded .mdc-list-item__ripple::after{top:var(--mdc-ripple-top, 0);left:var(--mdc-ripple-left, 0)}.mdc-list-item--disabled.mdc-ripple-upgraded--foreground-activation .mdc-list-item__ripple::after{animation:mdc-ripple-fg-radius-in 225ms forwards,mdc-ripple-fg-opacity-in 75ms forwards}.mdc-list-item--disabled.mdc-ripple-upgraded--foreground-deactivation .mdc-list-item__ripple::after{animation:mdc-ripple-fg-opacity-out 150ms;transform:translate(var(--mdc-ripple-fg-translate-end, 0)) scale(var(--mdc-ripple-fg-scale, 1))}.mdc-list-item--disabled .mdc-list-item__ripple::before,.mdc-list-item--disabled .mdc-list-item__ripple::after{top:calc(50% - 100%);left:calc(50% - 100%);width:200%;height:200%}.mdc-list-item--disabled.mdc-ripple-upgraded .mdc-list-item__ripple::after{width:var(--mdc-ripple-fg-size, 100%);height:var(--mdc-ripple-fg-size, 100%)}.mdc-list-item--disabled .mdc-list-item__ripple::before,.mdc-list-item--disabled .mdc-list-item__ripple::after{background-color:#000;background-color:var(--mdc-ripple-color, #000)}.mdc-list-item--disabled.mdc-ripple-upgraded--background-focused .mdc-list-item__ripple::before,.mdc-list-item--disabled:not(.mdc-ripple-upgraded):focus .mdc-list-item__ripple::before{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-focus-opacity, 0.12)}.mdc-list-item--disabled .mdc-list-item__ripple{position:absolute;top:0;left:0;width:100%;height:100%;pointer-events:none}.smui-list--three-line .mdc-deprecated-list-item__text{align-self:flex-start}.smui-list--three-line .mdc-deprecated-list-item{height:88px}.smui-list--three-line.mdc-deprecated-list--dense .mdc-deprecated-list-item{height:76px}.mdc-deprecated-list-item.smui-menu-item--non-interactive{cursor:auto}.mdc-menu-surface{display:none;position:absolute;box-sizing:border-box;max-width:calc(100vw - 32px);max-width:var(--mdc-menu-max-width, calc(100vw - 32px));max-height:calc(100vh - 32px);max-height:var(--mdc-menu-max-height, calc(100vh - 32px));margin:0;padding:0;transform:scale(1);transform-origin:top left;opacity:0;overflow:auto;will-change:transform,opacity;z-index:8;transition:opacity .03s linear,transform .12s 
cubic-bezier(0, 0, 0.2, 1),height 250ms cubic-bezier(0, 0, 0.2, 1);box-shadow:0px 5px 5px -3px rgba(0, 0, 0, 0.2),0px 8px 10px 1px rgba(0, 0, 0, 0.14),0px 3px 14px 2px rgba(0,0,0,.12);background-color:#ffffff;background-color:var(--mdc-theme-surface, #ffffff);color:#000;color:var(--mdc-theme-on-surface, #000);border-radius:4px;border-radius:var(--mdc-shape-medium, 4px);transform-origin-left:top left;transform-origin-right:top right}.mdc-menu-surface:focus{outline:none}.mdc-menu-surface--animating-open{display:inline-block;transform:scale(0.8);opacity:0}.mdc-menu-surface--open{display:inline-block;transform:scale(1);opacity:1}.mdc-menu-surface--animating-closed{display:inline-block;opacity:0;transition:opacity .075s linear}[dir=rtl] .mdc-menu-surface,.mdc-menu-surface[dir=rtl]{transform-origin-left:top right;transform-origin-right:top left}.mdc-menu-surface--anchor{position:relative;overflow:visible}.mdc-menu-surface--fixed{position:fixed}.mdc-menu-surface--fullwidth{width:100%}.smui-menu-surface--static{position:static;z-index:0;display:inline-block;transform:scale(1);opacity:1}.mdc-menu{min-width:112px;min-width:var(--mdc-menu-min-width, 112px)}.mdc-menu .mdc-deprecated-list-item__meta{color:rgba(0, 0, 0, 0.87)}.mdc-menu .mdc-deprecated-list-item__graphic{color:rgba(0, 0, 0, 0.87)}.mdc-menu .mdc-menu-item--submenu-open .mdc-deprecated-list-item__ripple::before{opacity:.04}.mdc-menu .mdc-menu-item--submenu-open .mdc-list-item__ripple::before{opacity:.04}.mdc-menu .mdc-deprecated-list{color:rgba(0, 0, 0, 0.87)}.mdc-menu .mdc-deprecated-list,.mdc-menu .mdc-list{position:relative}.mdc-menu .mdc-deprecated-list .mdc-elevation-overlay,.mdc-menu .mdc-list .mdc-elevation-overlay{width:100%;height:100%;top:0;left:0}.mdc-menu .mdc-deprecated-list-divider{margin:8px 0}.mdc-menu .mdc-deprecated-list-item{user-select:none}.mdc-menu .mdc-deprecated-list-item--disabled{cursor:auto}.mdc-menu a.mdc-deprecated-list-item .mdc-deprecated-list-item__text,.mdc-menu a.mdc-deprecated-list-item .mdc-deprecated-list-item__graphic{pointer-events:none}.mdc-menu__selection-group{padding:0;fill:currentColor}.mdc-menu__selection-group .mdc-deprecated-list-item{padding-left:56px;padding-right:16px}[dir=rtl] .mdc-menu__selection-group .mdc-deprecated-list-item,.mdc-menu__selection-group .mdc-deprecated-list-item[dir=rtl]{padding-left:16px;padding-right:56px}.mdc-menu__selection-group .mdc-menu__selection-group-icon{left:16px;right:initial;display:none;position:absolute;top:50%;transform:translateY(-50%)}[dir=rtl] .mdc-menu__selection-group .mdc-menu__selection-group-icon,.mdc-menu__selection-group .mdc-menu__selection-group-icon[dir=rtl]{left:initial;right:16px}.mdc-menu-item--selected .mdc-menu__selection-group-icon{display:inline}.mdc-menu__selection-group .mdc-list-item__graphic.mdc-menu__selection-group-icon{display:none}.mdc-menu-item--selected .mdc-list-item__graphic.mdc-menu__selection-group-icon{display:inline}.mdc-select--activated{z-index:8}.smui-select--standard .mdc-select__dropdown-icon{margin-left:26px;margin-right:0}[dir=rtl] .smui-select--standard .mdc-select__dropdown-icon,.smui-select--standard .mdc-select__dropdown-icon[dir=rtl]{margin-left:0;margin-right:26px}.smui-select--standard.mdc-select--with-leading-icon .mdc-select__icon{position:relative;margin:0;margin-left:0;margin-right:-24px;left:-36px;right:initial}[dir=rtl] .smui-select--standard.mdc-select--with-leading-icon .mdc-select__icon,.smui-select--standard.mdc-select--with-leading-icon 
.mdc-select__icon[dir=rtl]{margin-left:-24px;margin-right:0}[dir=rtl] .smui-select--standard.mdc-select--with-leading-icon .mdc-select__icon,.smui-select--standard.mdc-select--with-leading-icon .mdc-select__icon[dir=rtl]{left:initial;right:-36px}.smui-select--standard .mdc-select__anchor{padding-left:0;padding-right:0}[dir=rtl] .smui-select--standard .mdc-select__anchor,.smui-select--standard .mdc-select__anchor[dir=rtl]{padding-left:0;padding-right:0}.smui-select--standard.mdc-select--with-leading-icon .mdc-select__anchor{padding-left:36px;padding-right:0}[dir=rtl] .smui-select--standard.mdc-select--with-leading-icon .mdc-select__anchor,.smui-select--standard.mdc-select--with-leading-icon .mdc-select__anchor[dir=rtl]{padding-left:0;padding-right:36px}.smui-select--standard .mdc-select__anchor .mdc-select__ripple::before,.smui-select--standard .mdc-select__anchor .mdc-select__ripple::after{background-color:transparent;background-color:var(--mdc-ripple-color, transparent)}.smui-select--standard .mdc-select__anchor:hover .mdc-select__ripple::before,.smui-select--standard .mdc-select__anchor.mdc-ripple-surface--hover .mdc-select__ripple::before{opacity:0.04;opacity:var(--mdc-ripple-hover-opacity, 0.04)}.smui-select--standard .mdc-select__anchor.mdc-ripple-upgraded--background-focused .mdc-select__ripple::before,.smui-select--standard .mdc-select__anchor:not(.mdc-ripple-upgraded):focus .mdc-select__ripple::before{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-focus-opacity, 0.12)}.smui-select--standard .mdc-select__anchor{height:56px;display:flex;align-items:baseline}.smui-select--standard .mdc-select__anchor::before{display:inline-block;width:0;height:40px;content:"";vertical-align:0}.smui-select--standard.mdc-select--no-label .mdc-select__anchor .mdc-select__selected-text::before{content:"​"}.smui-select--standard.mdc-select--no-label .mdc-select__anchor .mdc-select__selected-text-container{height:100%;display:inline-flex;align-items:center}.smui-select--standard.mdc-select--no-label .mdc-select__anchor::before{display:none}.smui-select--standard:not(.mdc-select--disabled) .mdc-select__anchor{background-color:transparent}.smui-select--standard.mdc-select--disabled .mdc-select__anchor{background-color:transparent}.smui-select--standard:not(.mdc-select--disabled) .mdc-line-ripple::before{border-bottom-color:rgba(0, 0, 0, 0.42)}.smui-select--standard:not(.mdc-select--disabled):hover .mdc-line-ripple::before{border-bottom-color:rgba(0, 0, 0, 0.87)}.smui-select--standard:not(.mdc-select--disabled) .mdc-line-ripple::after{border-bottom-color:#6a1b9a;border-bottom-color:var(--mdc-theme-primary, #6a1b9a)}.smui-select--standard.mdc-select--disabled .mdc-line-ripple::before{border-bottom-color:rgba(0, 0, 0, 0.06)}.smui-select--standard .mdc-floating-label{max-width:calc(100% - 52px)}.smui-select--standard .mdc-floating-label--float-above{max-width:calc(100% / 0.75 - 52px / 0.75)}.smui-select--standard .mdc-menu-surface--is-open-below{border-top-left-radius:0px;border-top-right-radius:0px}.smui-select--standard.mdc-select--focused.mdc-line-ripple::after{transform:scale(1, 2);opacity:1}.smui-select--standard .mdc-floating-label{left:0;right:initial}[dir=rtl] .smui-select--standard .mdc-floating-label,.smui-select--standard .mdc-floating-label[dir=rtl]{left:initial;right:0}.smui-select--standard.mdc-select--with-leading-icon .mdc-floating-label{left:36px;right:initial}[dir=rtl] .smui-select--standard.mdc-select--with-leading-icon 
.mdc-floating-label,.smui-select--standard.mdc-select--with-leading-icon .mdc-floating-label[dir=rtl]{left:initial;right:36px}.smui-select--standard.mdc-select--with-leading-icon .mdc-floating-label{max-width:calc(100% - 72px)}.smui-select--standard.mdc-select--with-leading-icon .mdc-floating-label--float-above{max-width:calc(100% / 0.75 - 72px / 0.75)}.smui-select--standard+.mdc-select-helper-text{margin-left:0;margin-right:0}.mdc-data-table__content{-moz-osx-font-smoothing:grayscale;-webkit-font-smoothing:antialiased;font-family:Roboto, sans-serif;font-family:var(--mdc-typography-body2-font-family, var(--mdc-typography-font-family, Roboto, sans-serif));font-size:0.875rem;font-size:var(--mdc-typography-body2-font-size, 0.875rem);line-height:1.25rem;line-height:var(--mdc-typography-body2-line-height, 1.25rem);font-weight:400;font-weight:var(--mdc-typography-body2-font-weight, 400);letter-spacing:0.0178571429em;letter-spacing:var(--mdc-typography-body2-letter-spacing, 0.0178571429em);text-decoration:inherit;text-decoration:var(--mdc-typography-body2-text-decoration, inherit);text-transform:inherit;text-transform:var(--mdc-typography-body2-text-transform, inherit)}.mdc-data-table{background-color:#ffffff;background-color:var(--mdc-theme-surface, #ffffff);border-radius:4px;border-radius:var(--mdc-shape-medium, 4px);border-width:1px;border-style:solid;border-color:rgba(0,0,0,.12);-webkit-overflow-scrolling:touch;display:inline-flex;flex-direction:column;box-sizing:border-box;position:relative}.mdc-data-table .mdc-data-table__header-cell:first-child{border-top-left-radius:4px;border-top-left-radius:var(--mdc-shape-medium, 4px)}[dir=rtl] .mdc-data-table .mdc-data-table__header-cell:first-child,.mdc-data-table .mdc-data-table__header-cell:first-child[dir=rtl]{border-top-right-radius:4px;border-top-right-radius:var(--mdc-shape-medium, 4px);border-top-left-radius:0}.mdc-data-table .mdc-data-table__header-cell:last-child{border-top-right-radius:4px;border-top-right-radius:var(--mdc-shape-medium, 4px)}[dir=rtl] .mdc-data-table .mdc-data-table__header-cell:last-child,.mdc-data-table .mdc-data-table__header-cell:last-child[dir=rtl]{border-top-left-radius:4px;border-top-left-radius:var(--mdc-shape-medium, 4px);border-top-right-radius:0}.mdc-data-table.mdc-data-table--without-footer .mdc-data-table__row:last-child .mdc-data-table__cell:first-child{border-bottom-left-radius:4px;border-bottom-left-radius:var(--mdc-shape-medium, 4px)}[dir=rtl] .mdc-data-table.mdc-data-table--without-footer .mdc-data-table__row:last-child .mdc-data-table__cell:first-child,.mdc-data-table.mdc-data-table--without-footer .mdc-data-table__row:last-child .mdc-data-table__cell:first-child[dir=rtl]{border-bottom-right-radius:4px;border-bottom-right-radius:var(--mdc-shape-medium, 4px);border-bottom-left-radius:0}.mdc-data-table.mdc-data-table--without-footer .mdc-data-table__row:last-child .mdc-data-table__cell:last-child{border-bottom-right-radius:4px;border-bottom-right-radius:var(--mdc-shape-medium, 4px)}[dir=rtl] .mdc-data-table.mdc-data-table--without-footer .mdc-data-table__row:last-child .mdc-data-table__cell:last-child,.mdc-data-table.mdc-data-table--without-footer .mdc-data-table__row:last-child .mdc-data-table__cell:last-child[dir=rtl]{border-bottom-left-radius:4px;border-bottom-left-radius:var(--mdc-shape-medium, 4px);border-bottom-right-radius:0}.mdc-data-table__row{background-color:inherit}.mdc-data-table__header-cell{background-color:#ffffff;background-color:var(--mdc-theme-surface, 
#ffffff)}.mdc-data-table__row--selected{background-color:rgba(106, 27, 154, 0.04)}.mdc-data-table__pagination-rows-per-page-select:not(.mdc-select--disabled) .mdc-notched-outline__leading,.mdc-data-table__pagination-rows-per-page-select:not(.mdc-select--disabled) .mdc-notched-outline__notch,.mdc-data-table__pagination-rows-per-page-select:not(.mdc-select--disabled) .mdc-notched-outline__trailing{border-color:rgba(0, 0, 0, 0.12)}.mdc-data-table__cell,.mdc-data-table__header-cell{border-bottom-color:rgba(0,0,0,.12)}.mdc-data-table__pagination{border-top-color:rgba(0,0,0,.12)}.mdc-data-table__cell,.mdc-data-table__header-cell{border-bottom-width:1px;border-bottom-style:solid}.mdc-data-table__pagination{border-top-width:1px;border-top-style:solid}.mdc-data-table__row:last-child .mdc-data-table__cell{border-bottom:none}.mdc-data-table__row:not(.mdc-data-table__row--selected):hover{background-color:rgba(0, 0, 0, 0.04)}.mdc-data-table__header-cell{color:rgba(0, 0, 0, 0.87)}.mdc-data-table__pagination-total,.mdc-data-table__pagination-rows-per-page-label,.mdc-data-table__cell{color:rgba(0, 0, 0, 0.87)}.mdc-data-table__row{height:52px}.mdc-data-table__pagination{min-height:52px}.mdc-data-table__header-row{height:56px}.mdc-data-table__cell,.mdc-data-table__header-cell{padding:0 16px 0 16px}.mdc-data-table__header-cell--checkbox,.mdc-data-table__cell--checkbox{padding-left:4px;padding-right:0}[dir=rtl] .mdc-data-table__header-cell--checkbox,[dir=rtl] .mdc-data-table__cell--checkbox,.mdc-data-table__header-cell--checkbox[dir=rtl],.mdc-data-table__cell--checkbox[dir=rtl]{padding-left:0;padding-right:4px}.mdc-data-table__table-container{-webkit-overflow-scrolling:touch;overflow-x:auto;width:100%}.mdc-data-table__table{min-width:100%;border:0;white-space:nowrap;border-spacing:0;table-layout:fixed}.mdc-data-table__cell{-moz-osx-font-smoothing:grayscale;-webkit-font-smoothing:antialiased;font-family:Roboto, sans-serif;font-family:var(--mdc-typography-body2-font-family, var(--mdc-typography-font-family, Roboto, sans-serif));font-size:0.875rem;font-size:var(--mdc-typography-body2-font-size, 0.875rem);line-height:1.25rem;line-height:var(--mdc-typography-body2-line-height, 1.25rem);font-weight:400;font-weight:var(--mdc-typography-body2-font-weight, 400);letter-spacing:0.0178571429em;letter-spacing:var(--mdc-typography-body2-letter-spacing, 0.0178571429em);text-decoration:inherit;text-decoration:var(--mdc-typography-body2-text-decoration, inherit);text-transform:inherit;text-transform:var(--mdc-typography-body2-text-transform, inherit);box-sizing:border-box;overflow:hidden;text-align:left;text-overflow:ellipsis}[dir=rtl] .mdc-data-table__cell,.mdc-data-table__cell[dir=rtl]{text-align:right}.mdc-data-table__cell--numeric{text-align:right}[dir=rtl] .mdc-data-table__cell--numeric,.mdc-data-table__cell--numeric[dir=rtl]{text-align:left}.mdc-data-table__cell--checkbox{width:1px}.mdc-data-table__header-cell{-moz-osx-font-smoothing:grayscale;-webkit-font-smoothing:antialiased;font-family:Roboto, sans-serif;font-family:var(--mdc-typography-subtitle2-font-family, var(--mdc-typography-font-family, Roboto, sans-serif));font-size:0.875rem;font-size:var(--mdc-typography-subtitle2-font-size, 0.875rem);line-height:1.375rem;line-height:var(--mdc-typography-subtitle2-line-height, 1.375rem);font-weight:500;font-weight:var(--mdc-typography-subtitle2-font-weight, 500);letter-spacing:0.0071428571em;letter-spacing:var(--mdc-typography-subtitle2-letter-spacing, 
0.0071428571em);text-decoration:inherit;text-decoration:var(--mdc-typography-subtitle2-text-decoration, inherit);text-transform:inherit;text-transform:var(--mdc-typography-subtitle2-text-transform, inherit);box-sizing:border-box;text-overflow:ellipsis;overflow:hidden;outline:none;text-align:left}[dir=rtl] .mdc-data-table__header-cell,.mdc-data-table__header-cell[dir=rtl]{text-align:right}.mdc-data-table__header-cell--checkbox{width:1px}.mdc-data-table__header-cell--numeric{text-align:right}[dir=rtl] .mdc-data-table__header-cell--numeric,.mdc-data-table__header-cell--numeric[dir=rtl]{text-align:left}.mdc-data-table__sort-icon-button{width:28px;height:28px;padding:2px;transform:rotate(0.0001deg);margin-left:4px;margin-right:0;transition:transform 150ms 0ms cubic-bezier(0.4, 0, 0.2, 1);opacity:0}.mdc-data-table__sort-icon-button .mdc-icon-button__focus-ring{display:none}.mdc-data-table__sort-icon-button.mdc-ripple-upgraded--background-focused .mdc-icon-button__focus-ring,.mdc-data-table__sort-icon-button:not(.mdc-ripple-upgraded):focus .mdc-icon-button__focus-ring{display:block;max-height:28px;max-width:28px}@media screen and (forced-colors: active){.mdc-data-table__sort-icon-button.mdc-ripple-upgraded--background-focused .mdc-icon-button__focus-ring,.mdc-data-table__sort-icon-button:not(.mdc-ripple-upgraded):focus .mdc-icon-button__focus-ring{pointer-events:none;border:2px solid rgba(0,0,0,0);border-radius:6px;box-sizing:content-box;position:absolute;top:50%;left:50%;transform:translate(-50%, -50%);height:100%;width:100%}}@media screen and (forced-colors: active)and (forced-colors: active){.mdc-data-table__sort-icon-button.mdc-ripple-upgraded--background-focused .mdc-icon-button__focus-ring,.mdc-data-table__sort-icon-button:not(.mdc-ripple-upgraded):focus .mdc-icon-button__focus-ring{border-color:CanvasText}}@media screen and (forced-colors: active){.mdc-data-table__sort-icon-button.mdc-ripple-upgraded--background-focused .mdc-icon-button__focus-ring::after,.mdc-data-table__sort-icon-button:not(.mdc-ripple-upgraded):focus .mdc-icon-button__focus-ring::after{content:"";border:2px solid rgba(0,0,0,0);border-radius:8px;display:block;position:absolute;top:50%;left:50%;transform:translate(-50%, -50%);height:calc(100% + 4px);width:calc(100% + 4px)}}@media screen and (forced-colors: active)and (forced-colors: active){.mdc-data-table__sort-icon-button.mdc-ripple-upgraded--background-focused .mdc-icon-button__focus-ring::after,.mdc-data-table__sort-icon-button:not(.mdc-ripple-upgraded):focus .mdc-icon-button__focus-ring::after{border-color:CanvasText}}.mdc-data-table__sort-icon-button.mdc-icon-button--reduced-size .mdc-icon-button__ripple{width:28px;height:28px;margin-top:0px;margin-bottom:0px;margin-right:0px;margin-left:0px}.mdc-data-table__sort-icon-button.mdc-icon-button--reduced-size.mdc-ripple-upgraded--background-focused .mdc-icon-button__focus-ring,.mdc-data-table__sort-icon-button.mdc-icon-button--reduced-size:not(.mdc-ripple-upgraded):focus .mdc-icon-button__focus-ring{max-height:28px;max-width:28px}.mdc-data-table__sort-icon-button .mdc-icon-button__touch{position:absolute;top:50%;height:28px;left:50%;width:28px;transform:translate(-50%, -50%)}[dir=rtl] .mdc-data-table__sort-icon-button,.mdc-data-table__sort-icon-button[dir=rtl]{margin-left:0;margin-right:4px}.mdc-data-table__header-cell--numeric .mdc-data-table__sort-icon-button{margin-left:0;margin-right:4px}[dir=rtl] .mdc-data-table__header-cell--numeric .mdc-data-table__sort-icon-button,.mdc-data-table__header-cell--numeric 
.mdc-data-table__sort-icon-button[dir=rtl]{margin-left:4px;margin-right:0}.mdc-data-table__header-cell--sorted-descending .mdc-data-table__sort-icon-button{transform:rotate(-180deg)}.mdc-data-table__sort-icon-button:focus,.mdc-data-table__header-cell:hover .mdc-data-table__sort-icon-button,.mdc-data-table__header-cell--sorted .mdc-data-table__sort-icon-button{opacity:1}.mdc-data-table__header-cell-wrapper{align-items:center;display:inline-flex;vertical-align:middle}.mdc-data-table__header-cell--with-sort{cursor:pointer}.mdc-data-table__sort-status-label{clip:rect(1px, 1px, 1px, 1px);height:1px;overflow:hidden;position:absolute;white-space:nowrap;width:1px}.mdc-data-table--sticky-header .mdc-data-table__header-cell{position:sticky;top:0;z-index:1}.mdc-data-table__sort-icon-button{color:rgba(0, 0, 0, 0.6)}.mdc-data-table__sort-icon-button .mdc-icon-button__ripple::before,.mdc-data-table__sort-icon-button .mdc-icon-button__ripple::after{background-color:rgba(0, 0, 0, 0.6);background-color:var(--mdc-ripple-color, rgba(0, 0, 0, 0.6))}.mdc-data-table__sort-icon-button:hover .mdc-icon-button__ripple::before,.mdc-data-table__sort-icon-button.mdc-ripple-surface--hover .mdc-icon-button__ripple::before{opacity:0.04;opacity:var(--mdc-ripple-hover-opacity, 0.04)}.mdc-data-table__sort-icon-button.mdc-ripple-upgraded--background-focused .mdc-icon-button__ripple::before,.mdc-data-table__sort-icon-button:not(.mdc-ripple-upgraded):focus .mdc-icon-button__ripple::before{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-focus-opacity, 0.12)}.mdc-data-table__sort-icon-button:not(.mdc-ripple-upgraded) .mdc-icon-button__ripple::after{transition:opacity 150ms linear}.mdc-data-table__sort-icon-button:not(.mdc-ripple-upgraded):active .mdc-icon-button__ripple::after{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-press-opacity, 0.12)}.mdc-data-table__sort-icon-button.mdc-ripple-upgraded{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity, 0.12)}.mdc-data-table__header-cell--sorted .mdc-data-table__sort-icon-button{color:rgba(0, 0, 0, 0.87)}.mdc-data-table__header-cell--sorted .mdc-data-table__sort-icon-button .mdc-icon-button__ripple::before,.mdc-data-table__header-cell--sorted .mdc-data-table__sort-icon-button .mdc-icon-button__ripple::after{background-color:rgba(0, 0, 0, 0.87);background-color:var(--mdc-ripple-color, rgba(0, 0, 0, 0.87))}.mdc-data-table__header-cell--sorted .mdc-data-table__sort-icon-button:hover .mdc-icon-button__ripple::before,.mdc-data-table__header-cell--sorted .mdc-data-table__sort-icon-button.mdc-ripple-surface--hover .mdc-icon-button__ripple::before{opacity:0.04;opacity:var(--mdc-ripple-hover-opacity, 0.04)}.mdc-data-table__header-cell--sorted .mdc-data-table__sort-icon-button.mdc-ripple-upgraded--background-focused .mdc-icon-button__ripple::before,.mdc-data-table__header-cell--sorted .mdc-data-table__sort-icon-button:not(.mdc-ripple-upgraded):focus .mdc-icon-button__ripple::before{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-focus-opacity, 0.12)}.mdc-data-table__header-cell--sorted .mdc-data-table__sort-icon-button:not(.mdc-ripple-upgraded) .mdc-icon-button__ripple::after{transition:opacity 150ms linear}.mdc-data-table__header-cell--sorted .mdc-data-table__sort-icon-button:not(.mdc-ripple-upgraded):active .mdc-icon-button__ripple::after{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-press-opacity, 0.12)}.mdc-data-table__header-cell--sorted 
.mdc-data-table__sort-icon-button.mdc-ripple-upgraded{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity, 0.12)}.mdc-data-table__progress-indicator{display:none;position:absolute;width:100%}.mdc-data-table--in-progress .mdc-data-table__progress-indicator{display:block}.mdc-data-table__scrim{background-color:#ffffff;background-color:var(--mdc-theme-surface, #ffffff);height:100%;opacity:.32;position:absolute;top:0;width:100%}.mdc-data-table__pagination{-moz-osx-font-smoothing:grayscale;-webkit-font-smoothing:antialiased;font-family:Roboto, sans-serif;font-family:var(--mdc-typography-body2-font-family, var(--mdc-typography-font-family, Roboto, sans-serif));font-size:0.875rem;font-size:var(--mdc-typography-body2-font-size, 0.875rem);line-height:1.25rem;line-height:var(--mdc-typography-body2-line-height, 1.25rem);font-weight:400;font-weight:var(--mdc-typography-body2-font-weight, 400);letter-spacing:0.0178571429em;letter-spacing:var(--mdc-typography-body2-letter-spacing, 0.0178571429em);text-decoration:inherit;text-decoration:var(--mdc-typography-body2-text-decoration, inherit);text-transform:inherit;text-transform:var(--mdc-typography-body2-text-transform, inherit);box-sizing:border-box;display:flex;justify-content:flex-end}.mdc-data-table__pagination-trailing{margin-left:4px;margin-right:0;align-items:center;display:flex;flex-wrap:wrap;justify-content:flex-end}[dir=rtl] .mdc-data-table__pagination-trailing,.mdc-data-table__pagination-trailing[dir=rtl]{margin-left:0;margin-right:4px}.mdc-data-table__pagination-navigation{align-items:center;display:flex}.mdc-data-table__pagination-button{margin-left:0;margin-right:4px}[dir=rtl] .mdc-data-table__pagination-button .mdc-button__icon,.mdc-data-table__pagination-button .mdc-button__icon[dir=rtl]{transform:rotate(180deg)}[dir=rtl] .mdc-data-table__pagination-button,.mdc-data-table__pagination-button[dir=rtl]{margin-left:4px;margin-right:0}.mdc-data-table__pagination-total{margin-left:14px;margin-right:36px;white-space:nowrap}[dir=rtl] .mdc-data-table__pagination-total,.mdc-data-table__pagination-total[dir=rtl]{margin-left:36px;margin-right:14px}.mdc-data-table__pagination-rows-per-page{margin-left:0;margin-right:22px;align-items:center;display:inline-flex}[dir=rtl] .mdc-data-table__pagination-rows-per-page,.mdc-data-table__pagination-rows-per-page[dir=rtl]{margin-left:22px;margin-right:0}.mdc-data-table__pagination-rows-per-page-label{margin-left:0;margin-right:12px;white-space:nowrap}[dir=rtl] .mdc-data-table__pagination-rows-per-page-label,.mdc-data-table__pagination-rows-per-page-label[dir=rtl]{margin-left:12px;margin-right:0}.mdc-data-table__pagination-rows-per-page-select{min-width:80px;min-width:var(--mdc-menu-min-width, 80px);margin:8px 0}.mdc-data-table__pagination-rows-per-page-select .mdc-select__anchor{width:100%;min-width:80px}.mdc-data-table__pagination-rows-per-page-select .mdc-select__anchor{height:36px}.mdc-data-table__pagination-rows-per-page-select .mdc-select__anchor .mdc-floating-label--float-above{transform:translateY(-27.25px) scale(1)}.mdc-data-table__pagination-rows-per-page-select .mdc-select__anchor .mdc-floating-label--float-above{font-size:.75rem}.mdc-data-table__pagination-rows-per-page-select .mdc-select__anchor.mdc-notched-outline--upgraded .mdc-floating-label--float-above,.mdc-data-table__pagination-rows-per-page-select .mdc-select__anchor .mdc-notched-outline--upgraded .mdc-floating-label--float-above{transform:translateY(-24.75px) scale(0.75)}.mdc-data-table__pagination-rows-per-page-select 
.mdc-select__anchor.mdc-notched-outline--upgraded .mdc-floating-label--float-above,.mdc-data-table__pagination-rows-per-page-select .mdc-select__anchor .mdc-notched-outline--upgraded .mdc-floating-label--float-above{font-size:1rem}.mdc-data-table__pagination-rows-per-page-select .mdc-select__anchor .mdc-floating-label--shake{animation:mdc-floating-label-shake-float-above-select-outlined-36px 250ms 1}@keyframes mdc-floating-label-shake-float-above-select-outlined-36px{0%{transform:translateX(calc(0 - 0%)) translateY(-24.75px) scale(0.75)}33%{animation-timing-function:cubic-bezier(0.5, 0, 0.701732, 0.495819);transform:translateX(calc(4% - 0%)) translateY(-24.75px) scale(0.75)}66%{animation-timing-function:cubic-bezier(0.302435, 0.381352, 0.55, 0.956352);transform:translateX(calc(-4% - 0%)) translateY(-24.75px) scale(0.75)}100%{transform:translateX(calc(0 - 0%)) translateY(-24.75px) scale(0.75)}}.mdc-data-table__pagination-rows-per-page-select .mdc-select__dropdown-icon{width:20px;height:20px}.mdc-data-table__pagination-rows-per-page-select.mdc-select--outlined .mdc-select__anchor :not(.mdc-notched-outline--notched) .mdc-notched-outline__notch{max-width:calc(100% - 56px)}.mdc-data-table__pagination-rows-per-page-select .mdc-list-item.mdc-list-item--with-one-line{height:36px}.mdc-data-table__header-row-checkbox.mdc-checkbox--selected .mdc-checkbox__ripple::before,.mdc-data-table__header-row-checkbox.mdc-checkbox--selected .mdc-checkbox__ripple::after,.mdc-data-table__row-checkbox.mdc-checkbox--selected .mdc-checkbox__ripple::before,.mdc-data-table__row-checkbox.mdc-checkbox--selected .mdc-checkbox__ripple::after{background-color:#6a1b9a;background-color:var(--mdc-ripple-color, var(--mdc-theme-primary, #6a1b9a))}.mdc-data-table__header-row-checkbox.mdc-checkbox--selected:hover .mdc-checkbox__ripple::before,.mdc-data-table__header-row-checkbox.mdc-checkbox--selected.mdc-ripple-surface--hover .mdc-checkbox__ripple::before,.mdc-data-table__row-checkbox.mdc-checkbox--selected:hover .mdc-checkbox__ripple::before,.mdc-data-table__row-checkbox.mdc-checkbox--selected.mdc-ripple-surface--hover .mdc-checkbox__ripple::before{opacity:0.04;opacity:var(--mdc-ripple-hover-opacity, 0.04)}.mdc-data-table__header-row-checkbox.mdc-checkbox--selected.mdc-ripple-upgraded--background-focused .mdc-checkbox__ripple::before,.mdc-data-table__header-row-checkbox.mdc-checkbox--selected:not(.mdc-ripple-upgraded):focus .mdc-checkbox__ripple::before,.mdc-data-table__row-checkbox.mdc-checkbox--selected.mdc-ripple-upgraded--background-focused .mdc-checkbox__ripple::before,.mdc-data-table__row-checkbox.mdc-checkbox--selected:not(.mdc-ripple-upgraded):focus .mdc-checkbox__ripple::before{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-focus-opacity, 0.12)}.mdc-data-table__header-row-checkbox.mdc-checkbox--selected:not(.mdc-ripple-upgraded) .mdc-checkbox__ripple::after,.mdc-data-table__row-checkbox.mdc-checkbox--selected:not(.mdc-ripple-upgraded) .mdc-checkbox__ripple::after{transition:opacity 150ms linear}.mdc-data-table__header-row-checkbox.mdc-checkbox--selected:not(.mdc-ripple-upgraded):active .mdc-checkbox__ripple::after,.mdc-data-table__row-checkbox.mdc-checkbox--selected:not(.mdc-ripple-upgraded):active .mdc-checkbox__ripple::after{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-press-opacity, 
0.12)}.mdc-data-table__header-row-checkbox.mdc-checkbox--selected.mdc-ripple-upgraded,.mdc-data-table__row-checkbox.mdc-checkbox--selected.mdc-ripple-upgraded{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity, 0.12)}.mdc-data-table__header-row-checkbox.mdc-ripple-upgraded--background-focused.mdc-checkbox--selected .mdc-checkbox__ripple::before,.mdc-data-table__header-row-checkbox.mdc-ripple-upgraded--background-focused.mdc-checkbox--selected .mdc-checkbox__ripple::after,.mdc-data-table__row-checkbox.mdc-ripple-upgraded--background-focused.mdc-checkbox--selected .mdc-checkbox__ripple::before,.mdc-data-table__row-checkbox.mdc-ripple-upgraded--background-focused.mdc-checkbox--selected .mdc-checkbox__ripple::after{background-color:#6a1b9a;background-color:var(--mdc-ripple-color, var(--mdc-theme-primary, #6a1b9a))}.mdc-data-table__header-row-checkbox .mdc-checkbox__native-control:enabled:not(:checked):not(:indeterminate):not([data-indeterminate=true])~.mdc-checkbox__background,.mdc-data-table__row-checkbox .mdc-checkbox__native-control:enabled:not(:checked):not(:indeterminate):not([data-indeterminate=true])~.mdc-checkbox__background{border-color:rgba(0, 0, 0, 0.54);border-color:var(--mdc-checkbox-unchecked-color, rgba(0, 0, 0, 0.54));background-color:transparent}.mdc-data-table__header-row-checkbox .mdc-checkbox__native-control:enabled:checked~.mdc-checkbox__background,.mdc-data-table__header-row-checkbox .mdc-checkbox__native-control:enabled:indeterminate~.mdc-checkbox__background,.mdc-data-table__header-row-checkbox .mdc-checkbox__native-control[data-indeterminate=true]:enabled~.mdc-checkbox__background,.mdc-data-table__row-checkbox .mdc-checkbox__native-control:enabled:checked~.mdc-checkbox__background,.mdc-data-table__row-checkbox .mdc-checkbox__native-control:enabled:indeterminate~.mdc-checkbox__background,.mdc-data-table__row-checkbox .mdc-checkbox__native-control[data-indeterminate=true]:enabled~.mdc-checkbox__background{border-color:#6a1b9a;border-color:var(--mdc-checkbox-checked-color, var(--mdc-theme-secondary, #6a1b9a));background-color:#6a1b9a;background-color:var(--mdc-checkbox-checked-color, var(--mdc-theme-secondary, #6a1b9a))}@keyframes mdc-checkbox-fade-in-background-8A000000FF6A1B9A00000000FF6A1B9A{0%{border-color:rgba(0, 0, 0, 0.54);border-color:var(--mdc-checkbox-unchecked-color, rgba(0, 0, 0, 0.54));background-color:transparent}50%{border-color:#6a1b9a;border-color:var(--mdc-checkbox-checked-color, var(--mdc-theme-secondary, #6a1b9a));background-color:#6a1b9a;background-color:var(--mdc-checkbox-checked-color, var(--mdc-theme-secondary, #6a1b9a))}}@keyframes mdc-checkbox-fade-out-background-8A000000FF6A1B9A00000000FF6A1B9A{0%,80%{border-color:#6a1b9a;border-color:var(--mdc-checkbox-checked-color, var(--mdc-theme-secondary, #6a1b9a));background-color:#6a1b9a;background-color:var(--mdc-checkbox-checked-color, var(--mdc-theme-secondary, #6a1b9a))}100%{border-color:rgba(0, 0, 0, 0.54);border-color:var(--mdc-checkbox-unchecked-color, rgba(0, 0, 0, 0.54));background-color:transparent}}.mdc-data-table__header-row-checkbox.mdc-checkbox--anim-unchecked-checked .mdc-checkbox__native-control:enabled~.mdc-checkbox__background,.mdc-data-table__header-row-checkbox.mdc-checkbox--anim-unchecked-indeterminate .mdc-checkbox__native-control:enabled~.mdc-checkbox__background,.mdc-data-table__row-checkbox.mdc-checkbox--anim-unchecked-checked .mdc-checkbox__native-control:enabled~.mdc-checkbox__background,.mdc-data-table__row-checkbox.mdc-checkbox--anim-unchecked-indeterminate 
.mdc-checkbox__native-control:enabled~.mdc-checkbox__background{animation-name:mdc-checkbox-fade-in-background-8A000000FF6A1B9A00000000FF6A1B9A}.mdc-data-table__header-row-checkbox.mdc-checkbox--anim-checked-unchecked .mdc-checkbox__native-control:enabled~.mdc-checkbox__background,.mdc-data-table__header-row-checkbox.mdc-checkbox--anim-indeterminate-unchecked .mdc-checkbox__native-control:enabled~.mdc-checkbox__background,.mdc-data-table__row-checkbox.mdc-checkbox--anim-checked-unchecked .mdc-checkbox__native-control:enabled~.mdc-checkbox__background,.mdc-data-table__row-checkbox.mdc-checkbox--anim-indeterminate-unchecked .mdc-checkbox__native-control:enabled~.mdc-checkbox__background{animation-name:mdc-checkbox-fade-out-background-8A000000FF6A1B9A00000000FF6A1B9A}.mdc-icon-button{font-size:24px;width:48px;height:48px;padding:12px}.mdc-icon-button .mdc-icon-button__focus-ring{display:none}.mdc-icon-button.mdc-ripple-upgraded--background-focused .mdc-icon-button__focus-ring,.mdc-icon-button:not(.mdc-ripple-upgraded):focus .mdc-icon-button__focus-ring{display:block;max-height:48px;max-width:48px}@media screen and (forced-colors: active){.mdc-icon-button.mdc-ripple-upgraded--background-focused .mdc-icon-button__focus-ring,.mdc-icon-button:not(.mdc-ripple-upgraded):focus .mdc-icon-button__focus-ring{pointer-events:none;border:2px solid rgba(0,0,0,0);border-radius:6px;box-sizing:content-box;position:absolute;top:50%;left:50%;transform:translate(-50%, -50%);height:100%;width:100%}}@media screen and (forced-colors: active)and (forced-colors: active){.mdc-icon-button.mdc-ripple-upgraded--background-focused .mdc-icon-button__focus-ring,.mdc-icon-button:not(.mdc-ripple-upgraded):focus .mdc-icon-button__focus-ring{border-color:CanvasText}}@media screen and (forced-colors: active){.mdc-icon-button.mdc-ripple-upgraded--background-focused .mdc-icon-button__focus-ring::after,.mdc-icon-button:not(.mdc-ripple-upgraded):focus .mdc-icon-button__focus-ring::after{content:"";border:2px solid rgba(0,0,0,0);border-radius:8px;display:block;position:absolute;top:50%;left:50%;transform:translate(-50%, -50%);height:calc(100% + 4px);width:calc(100% + 4px)}}@media screen and (forced-colors: active)and (forced-colors: active){.mdc-icon-button.mdc-ripple-upgraded--background-focused .mdc-icon-button__focus-ring::after,.mdc-icon-button:not(.mdc-ripple-upgraded):focus .mdc-icon-button__focus-ring::after{border-color:CanvasText}}.mdc-icon-button.mdc-icon-button--reduced-size .mdc-icon-button__ripple{width:40px;height:40px;margin-top:4px;margin-bottom:4px;margin-right:4px;margin-left:4px}.mdc-icon-button.mdc-icon-button--reduced-size.mdc-ripple-upgraded--background-focused .mdc-icon-button__focus-ring,.mdc-icon-button.mdc-icon-button--reduced-size:not(.mdc-ripple-upgraded):focus .mdc-icon-button__focus-ring{max-height:40px;max-width:40px}.mdc-icon-button .mdc-icon-button__touch{position:absolute;top:50%;height:48px;left:50%;width:48px;transform:translate(-50%, -50%)}.mdc-icon-button:disabled{color:rgba(0, 0, 0, 0.38);color:var(--mdc-theme-text-disabled-on-light, rgba(0, 0, 0, 0.38))}.mdc-icon-button svg,.mdc-icon-button img{width:24px;height:24px}.mdc-icon-button{display:inline-block;position:relative;box-sizing:border-box;border:none;outline:none;background-color:rgba(0,0,0,0);fill:currentColor;color:inherit;text-decoration:none;cursor:pointer;user-select:none;z-index:0;overflow:visible}.mdc-icon-button .mdc-icon-button__touch{position:absolute;top:50%;height:48px;left:50%;width:48px;transform:translate(-50%, 
-50%)}.mdc-icon-button:disabled{cursor:default;pointer-events:none}.mdc-icon-button--display-flex{align-items:center;display:inline-flex;justify-content:center}.mdc-icon-button__icon{display:inline-block}.mdc-icon-button__icon.mdc-icon-button__icon--on{display:none}.mdc-icon-button--on .mdc-icon-button__icon{display:none}.mdc-icon-button--on .mdc-icon-button__icon.mdc-icon-button__icon--on{display:inline-block}.mdc-icon-button__link{height:100%;left:0;outline:none;position:absolute;top:0;width:100%}.mdc-icon-button{--mdc-ripple-fg-size: 0;--mdc-ripple-left: 0;--mdc-ripple-top: 0;--mdc-ripple-fg-scale: 1;--mdc-ripple-fg-translate-end: 0;--mdc-ripple-fg-translate-start: 0;-webkit-tap-highlight-color:rgba(0,0,0,0);will-change:transform,opacity}.mdc-icon-button .mdc-icon-button__ripple::before,.mdc-icon-button .mdc-icon-button__ripple::after{position:absolute;border-radius:50%;opacity:0;pointer-events:none;content:""}.mdc-icon-button .mdc-icon-button__ripple::before{transition:opacity 15ms linear,background-color 15ms linear;z-index:1;z-index:var(--mdc-ripple-z-index, 1)}.mdc-icon-button .mdc-icon-button__ripple::after{z-index:0;z-index:var(--mdc-ripple-z-index, 0)}.mdc-icon-button.mdc-ripple-upgraded .mdc-icon-button__ripple::before{transform:scale(var(--mdc-ripple-fg-scale, 1))}.mdc-icon-button.mdc-ripple-upgraded .mdc-icon-button__ripple::after{top:0;left:0;transform:scale(0);transform-origin:center center}.mdc-icon-button.mdc-ripple-upgraded--unbounded .mdc-icon-button__ripple::after{top:var(--mdc-ripple-top, 0);left:var(--mdc-ripple-left, 0)}.mdc-icon-button.mdc-ripple-upgraded--foreground-activation .mdc-icon-button__ripple::after{animation:mdc-ripple-fg-radius-in 225ms forwards,mdc-ripple-fg-opacity-in 75ms forwards}.mdc-icon-button.mdc-ripple-upgraded--foreground-deactivation .mdc-icon-button__ripple::after{animation:mdc-ripple-fg-opacity-out 150ms;transform:translate(var(--mdc-ripple-fg-translate-end, 0)) scale(var(--mdc-ripple-fg-scale, 1))}.mdc-icon-button .mdc-icon-button__ripple::before,.mdc-icon-button .mdc-icon-button__ripple::after{top:calc(50% - 50%);left:calc(50% - 50%);width:100%;height:100%}.mdc-icon-button.mdc-ripple-upgraded .mdc-icon-button__ripple::before,.mdc-icon-button.mdc-ripple-upgraded .mdc-icon-button__ripple::after{top:var(--mdc-ripple-top, calc(50% - 50%));left:var(--mdc-ripple-left, calc(50% - 50%));width:var(--mdc-ripple-fg-size, 100%);height:var(--mdc-ripple-fg-size, 100%)}.mdc-icon-button.mdc-ripple-upgraded .mdc-icon-button__ripple::after{width:var(--mdc-ripple-fg-size, 100%);height:var(--mdc-ripple-fg-size, 100%)}.mdc-icon-button .mdc-icon-button__ripple::before,.mdc-icon-button .mdc-icon-button__ripple::after{background-color:#000;background-color:var(--mdc-ripple-color, #000)}.mdc-icon-button:hover .mdc-icon-button__ripple::before,.mdc-icon-button.mdc-ripple-surface--hover .mdc-icon-button__ripple::before{opacity:0.04;opacity:var(--mdc-ripple-hover-opacity, 0.04)}.mdc-icon-button.mdc-ripple-upgraded--background-focused .mdc-icon-button__ripple::before,.mdc-icon-button:not(.mdc-ripple-upgraded):focus .mdc-icon-button__ripple::before{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-focus-opacity, 0.12)}.mdc-icon-button:not(.mdc-ripple-upgraded) .mdc-icon-button__ripple::after{transition:opacity 150ms linear}.mdc-icon-button:not(.mdc-ripple-upgraded):active .mdc-icon-button__ripple::after{transition-duration:75ms;opacity:0.12;opacity:var(--mdc-ripple-press-opacity, 
0.12)}.mdc-icon-button.mdc-ripple-upgraded{--mdc-ripple-fg-opacity:var(--mdc-ripple-press-opacity, 0.12)}.mdc-icon-button .mdc-icon-button__ripple{height:100%;left:0px;pointer-events:none;position:absolute;top:0px;width:100%;z-index:-1}.mdc-icon-button.smui-icon-button--size-mini{width:48px;height:48px;padding:12px;font-size:24px;width:40px;height:40px;padding:8px}.mdc-icon-button.smui-icon-button--size-mini svg,.mdc-icon-button.smui-icon-button--size-mini img{width:24px;height:24px}.mdc-icon-button.smui-icon-button--size-mini .mdc-icon-button__focus-ring{display:none}.mdc-icon-button.smui-icon-button--size-mini.mdc-ripple-upgraded--background-focused .mdc-icon-button__focus-ring,.mdc-icon-button.smui-icon-button--size-mini:not(.mdc-ripple-upgraded):focus .mdc-icon-button__focus-ring{display:block;max-height:40px;max-width:40px}@media screen and (forced-colors: active){.mdc-icon-button.smui-icon-button--size-mini.mdc-ripple-upgraded--background-focused .mdc-icon-button__focus-ring,.mdc-icon-button.smui-icon-button--size-mini:not(.mdc-ripple-upgraded):focus .mdc-icon-button__focus-ring{pointer-events:none;border:2px solid rgba(0,0,0,0);border-radius:6px;box-sizing:content-box;position:absolute;top:50%;left:50%;transform:translate(-50%, -50%);height:100%;width:100%}}@media screen and (forced-colors: active)and (forced-colors: active){.mdc-icon-button.smui-icon-button--size-mini.mdc-ripple-upgraded--background-focused .mdc-icon-button__focus-ring,.mdc-icon-button.smui-icon-button--size-mini:not(.mdc-ripple-upgraded):focus .mdc-icon-button__focus-ring{border-color:CanvasText}}@media screen and (forced-colors: active){.mdc-icon-button.smui-icon-button--size-mini.mdc-ripple-upgraded--background-focused .mdc-icon-button__focus-ring::after,.mdc-icon-button.smui-icon-button--size-mini:not(.mdc-ripple-upgraded):focus .mdc-icon-button__focus-ring::after{content:"";border:2px solid rgba(0,0,0,0);border-radius:8px;display:block;position:absolute;top:50%;left:50%;transform:translate(-50%, -50%);height:calc(100% + 4px);width:calc(100% + 4px)}}@media screen and (forced-colors: active)and (forced-colors: active){.mdc-icon-button.smui-icon-button--size-mini.mdc-ripple-upgraded--background-focused .mdc-icon-button__focus-ring::after,.mdc-icon-button.smui-icon-button--size-mini:not(.mdc-ripple-upgraded):focus .mdc-icon-button__focus-ring::after{border-color:CanvasText}}.mdc-icon-button.smui-icon-button--size-mini.mdc-icon-button--reduced-size .mdc-icon-button__ripple{width:40px;height:40px;margin-top:0px;margin-bottom:0px;margin-right:0px;margin-left:0px}.mdc-icon-button.smui-icon-button--size-mini.mdc-icon-button--reduced-size.mdc-ripple-upgraded--background-focused .mdc-icon-button__focus-ring,.mdc-icon-button.smui-icon-button--size-mini.mdc-icon-button--reduced-size:not(.mdc-ripple-upgraded):focus .mdc-icon-button__focus-ring{max-height:40px;max-width:40px}.mdc-icon-button.smui-icon-button--size-mini .mdc-icon-button__touch{position:absolute;top:50%;height:40px;left:50%;width:40px;transform:translate(-50%, -50%)}.mdc-icon-button.smui-icon-button--size-button{width:36px;height:36px;padding:9px;font-size:18px;width:36px;height:36px;padding:6px}.mdc-icon-button.smui-icon-button--size-button svg,.mdc-icon-button.smui-icon-button--size-button img{width:18px;height:18px}.mdc-icon-button.smui-icon-button--size-button .mdc-icon-button__focus-ring{display:none}.mdc-icon-button.smui-icon-button--size-button.mdc-ripple-upgraded--background-focused 
.mdc-icon-button__focus-ring,.mdc-icon-button.smui-icon-button--size-button:not(.mdc-ripple-upgraded):focus .mdc-icon-button__focus-ring{display:block;max-height:36px;max-width:36px}@media screen and (forced-colors: active){.mdc-icon-button.smui-icon-button--size-button.mdc-ripple-upgraded--background-focused .mdc-icon-button__focus-ring,.mdc-icon-button.smui-icon-button--size-button:not(.mdc-ripple-upgraded):focus .mdc-icon-button__focus-ring{pointer-events:none;border:2px solid rgba(0,0,0,0);border-radius:6px;box-sizing:content-box;position:absolute;top:50%;left:50%;transform:translate(-50%, -50%);height:100%;width:100%}}@media screen and (forced-colors: active)and (forced-colors: active){.mdc-icon-button.smui-icon-button--size-button.mdc-ripple-upgraded--background-focused .mdc-icon-button__focus-ring,.mdc-icon-button.smui-icon-button--size-button:not(.mdc-ripple-upgraded):focus .mdc-icon-button__focus-ring{border-color:CanvasText}}@media screen and (forced-colors: active){.mdc-icon-button.smui-icon-button--size-button.mdc-ripple-upgraded--background-focused .mdc-icon-button__focus-ring::after,.mdc-icon-button.smui-icon-button--size-button:not(.mdc-ripple-upgraded):focus .mdc-icon-button__focus-ring::after{content:"";border:2px solid rgba(0,0,0,0);border-radius:8px;display:block;position:absolute;top:50%;left:50%;transform:translate(-50%, -50%);height:calc(100% + 4px);width:calc(100% + 4px)}}@media screen and (forced-colors: active)and (forced-colors: active){.mdc-icon-button.smui-icon-button--size-button.mdc-ripple-upgraded--background-focused .mdc-icon-button__focus-ring::after,.mdc-icon-button.smui-icon-button--size-button:not(.mdc-ripple-upgraded):focus .mdc-icon-button__focus-ring::after{border-color:CanvasText}}.mdc-icon-button.smui-icon-button--size-button.mdc-icon-button--reduced-size .mdc-icon-button__ripple{width:36px;height:36px;margin-top:0px;margin-bottom:0px;margin-right:0px;margin-left:0px}.mdc-icon-button.smui-icon-button--size-button.mdc-icon-button--reduced-size.mdc-ripple-upgraded--background-focused .mdc-icon-button__focus-ring,.mdc-icon-button.smui-icon-button--size-button.mdc-icon-button--reduced-size:not(.mdc-ripple-upgraded):focus .mdc-icon-button__focus-ring{max-height:36px;max-width:36px}.mdc-icon-button.smui-icon-button--size-button .mdc-icon-button__touch{position:absolute;top:50%;height:36px;left:50%;width:36px;transform:translate(-50%, -50%)}.mdc-icon-button svg{pointer-events:none}.mdc-data-table--sticky-header>.mdc-data-table__table-container{overflow-x:unset} \ No newline at end of file diff --git a/spaces/zhang-wei-jian/docker/node_modules/koa/lib/request.js b/spaces/zhang-wei-jian/docker/node_modules/koa/lib/request.js deleted file mode 100644 index e62afd606b0d4b917f7b436e1d318868031bfbd4..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/koa/lib/request.js +++ /dev/null @@ -1,726 +0,0 @@ - -'use strict'; - -/** - * Module dependencies. - */ - -const URL = require('url').URL; -const net = require('net'); -const accepts = require('accepts'); -const contentType = require('content-type'); -const stringify = require('url').format; -const parse = require('parseurl'); -const qs = require('querystring'); -const typeis = require('type-is'); -const fresh = require('fresh'); -const only = require('only'); -const util = require('util'); - -const IP = Symbol('context#ip'); - -/** - * Prototype. - */ - -module.exports = { - - /** - * Return request header. 
- * - * @return {Object} - * @api public - */ - - get header() { - return this.req.headers; - }, - - /** - * Set request header. - * - * @api public - */ - - set header(val) { - this.req.headers = val; - }, - - /** - * Return request header, alias as request.header - * - * @return {Object} - * @api public - */ - - get headers() { - return this.req.headers; - }, - - /** - * Set request header, alias as request.header - * - * @api public - */ - - set headers(val) { - this.req.headers = val; - }, - - /** - * Get request URL. - * - * @return {String} - * @api public - */ - - get url() { - return this.req.url; - }, - - /** - * Set request URL. - * - * @api public - */ - - set url(val) { - this.req.url = val; - }, - - /** - * Get origin of URL. - * - * @return {String} - * @api public - */ - - get origin() { - return `${this.protocol}://${this.host}`; - }, - - /** - * Get full request URL. - * - * @return {String} - * @api public - */ - - get href() { - // support: `GET http://example.com/foo` - if (/^https?:\/\//i.test(this.originalUrl)) return this.originalUrl; - return this.origin + this.originalUrl; - }, - - /** - * Get request method. - * - * @return {String} - * @api public - */ - - get method() { - return this.req.method; - }, - - /** - * Set request method. - * - * @param {String} val - * @api public - */ - - set method(val) { - this.req.method = val; - }, - - /** - * Get request pathname. - * - * @return {String} - * @api public - */ - - get path() { - return parse(this.req).pathname; - }, - - /** - * Set pathname, retaining the query string when present. - * - * @param {String} path - * @api public - */ - - set path(path) { - const url = parse(this.req); - if (url.pathname === path) return; - - url.pathname = path; - url.path = null; - - this.url = stringify(url); - }, - - /** - * Get parsed query string. - * - * @return {Object} - * @api public - */ - - get query() { - const str = this.querystring; - const c = this._querycache = this._querycache || {}; - return c[str] || (c[str] = qs.parse(str)); - }, - - /** - * Set query string as an object. - * - * @param {Object} obj - * @api public - */ - - set query(obj) { - this.querystring = qs.stringify(obj); - }, - - /** - * Get query string. - * - * @return {String} - * @api public - */ - - get querystring() { - if (!this.req) return ''; - return parse(this.req).query || ''; - }, - - /** - * Set query string. - * - * @param {String} str - * @api public - */ - - set querystring(str) { - const url = parse(this.req); - if (url.search === `?${str}`) return; - - url.search = str; - url.path = null; - - this.url = stringify(url); - }, - - /** - * Get the search string. Same as the query string - * except it includes the leading ?. - * - * @return {String} - * @api public - */ - - get search() { - if (!this.querystring) return ''; - return `?${this.querystring}`; - }, - - /** - * Set the search string. Same as - * request.querystring= but included for ubiquity. - * - * @param {String} str - * @api public - */ - - set search(str) { - this.querystring = str; - }, - - /** - * Parse the "Host" header field host - * and support X-Forwarded-Host when a - * proxy is enabled. 
- * - * @return {String} hostname:port - * @api public - */ - - get host() { - const proxy = this.app.proxy; - let host = proxy && this.get('X-Forwarded-Host'); - if (!host) { - if (this.req.httpVersionMajor >= 2) host = this.get(':authority'); - if (!host) host = this.get('Host'); - } - if (!host) return ''; - return host.split(/\s*,\s*/, 1)[0]; - }, - - /** - * Parse the "Host" header field hostname - * and support X-Forwarded-Host when a - * proxy is enabled. - * - * @return {String} hostname - * @api public - */ - - get hostname() { - const host = this.host; - if (!host) return ''; - if ('[' === host[0]) return this.URL.hostname || ''; // IPv6 - return host.split(':', 1)[0]; - }, - - /** - * Get WHATWG parsed URL. - * Lazily memoized. - * - * @return {URL|Object} - * @api public - */ - - get URL() { - /* istanbul ignore else */ - if (!this.memoizedURL) { - const originalUrl = this.originalUrl || ''; // avoid undefined in template string - try { - this.memoizedURL = new URL(`${this.origin}${originalUrl}`); - } catch (err) { - this.memoizedURL = Object.create(null); - } - } - return this.memoizedURL; - }, - - /** - * Check if the request is fresh, aka - * Last-Modified and/or the ETag - * still match. - * - * @return {Boolean} - * @api public - */ - - get fresh() { - const method = this.method; - const s = this.ctx.status; - - // GET or HEAD for weak freshness validation only - if ('GET' !== method && 'HEAD' !== method) return false; - - // 2xx or 304 as per rfc2616 14.26 - if ((s >= 200 && s < 300) || 304 === s) { - return fresh(this.header, this.response.header); - } - - return false; - }, - - /** - * Check if the request is stale, aka - * "Last-Modified" and / or the "ETag" for the - * resource has changed. - * - * @return {Boolean} - * @api public - */ - - get stale() { - return !this.fresh; - }, - - /** - * Check if the request is idempotent. - * - * @return {Boolean} - * @api public - */ - - get idempotent() { - const methods = ['GET', 'HEAD', 'PUT', 'DELETE', 'OPTIONS', 'TRACE']; - return !!~methods.indexOf(this.method); - }, - - /** - * Return the request socket. - * - * @return {Connection} - * @api public - */ - - get socket() { - return this.req.socket; - }, - - /** - * Get the charset when present or undefined. - * - * @return {String} - * @api public - */ - - get charset() { - try { - const { parameters } = contentType.parse(this.req); - return parameters.charset || ''; - } catch (e) { - return ''; - } - }, - - /** - * Return parsed Content-Length when present. - * - * @return {Number} - * @api public - */ - - get length() { - const len = this.get('Content-Length'); - if (len === '') return; - return ~~len; - }, - - /** - * Return the protocol string "http" or "https" - * when requested with TLS. When the proxy setting - * is enabled the "X-Forwarded-Proto" header - * field will be trusted. If you're running behind - * a reverse proxy that supplies https for you this - * may be enabled. - * - * @return {String} - * @api public - */ - - get protocol() { - if (this.socket.encrypted) return 'https'; - if (!this.app.proxy) return 'http'; - const proto = this.get('X-Forwarded-Proto'); - return proto ? proto.split(/\s*,\s*/, 1)[0] : 'http'; - }, - - /** - * Shorthand for: - * - * this.protocol == 'https' - * - * @return {Boolean} - * @api public - */ - - get secure() { - return 'https' === this.protocol; - }, - - /** - * When `app.proxy` is `true`, parse - * the "X-Forwarded-For" ip address list. 
- * - * For example if the value was "client, proxy1, proxy2" - * you would receive the array `["client", "proxy1", "proxy2"]` - * where "proxy2" is the furthest down-stream. - * - * @return {Array} - * @api public - */ - - get ips() { - const proxy = this.app.proxy; - const val = this.get(this.app.proxyIpHeader); - let ips = proxy && val - ? val.split(/\s*,\s*/) - : []; - if (this.app.maxIpsCount > 0) { - ips = ips.slice(-this.app.maxIpsCount); - } - return ips; - }, - - /** - * Return request's remote address - * When `app.proxy` is `true`, parse - * the "X-Forwarded-For" ip address list and return the first one - * - * @return {String} - * @api public - */ - - get ip() { - if (!this[IP]) { - this[IP] = this.ips[0] || this.socket.remoteAddress || ''; - } - return this[IP]; - }, - - set ip(_ip) { - this[IP] = _ip; - }, - - /** - * Return subdomains as an array. - * - * Subdomains are the dot-separated parts of the host before the main domain - * of the app. By default, the domain of the app is assumed to be the last two - * parts of the host. This can be changed by setting `app.subdomainOffset`. - * - * For example, if the domain is "tobi.ferrets.example.com": - * If `app.subdomainOffset` is not set, this.subdomains is - * `["ferrets", "tobi"]`. - * If `app.subdomainOffset` is 3, this.subdomains is `["tobi"]`. - * - * @return {Array} - * @api public - */ - - get subdomains() { - const offset = this.app.subdomainOffset; - const hostname = this.hostname; - if (net.isIP(hostname)) return []; - return hostname - .split('.') - .reverse() - .slice(offset); - }, - - /** - * Get accept object. - * Lazily memoized. - * - * @return {Object} - * @api private - */ - - get accept() { - return this._accept || (this._accept = accepts(this.req)); - }, - - /** - * Set accept object. - * - * @param {Object} - * @api private - */ - - set accept(obj) { - this._accept = obj; - }, - - /** - * Check if the given `type(s)` is acceptable, returning - * the best match when true, otherwise `false`, in which - * case you should respond with 406 "Not Acceptable". - * - * The `type` value may be a single mime type string - * such as "application/json", the extension name - * such as "json" or an array `["json", "html", "text/plain"]`. When a list - * or array is given the _best_ match, if any is returned. - * - * Examples: - * - * // Accept: text/html - * this.accepts('html'); - * // => "html" - * - * // Accept: text/*, application/json - * this.accepts('html'); - * // => "html" - * this.accepts('text/html'); - * // => "text/html" - * this.accepts('json', 'text'); - * // => "json" - * this.accepts('application/json'); - * // => "application/json" - * - * // Accept: text/*, application/json - * this.accepts('image/png'); - * this.accepts('png'); - * // => false - * - * // Accept: text/*;q=.5, application/json - * this.accepts(['html', 'json']); - * this.accepts('html', 'json'); - * // => "json" - * - * @param {String|Array} type(s)... - * @return {String|Array|false} - * @api public - */ - - accepts(...args) { - return this.accept.types(...args); - }, - - /** - * Return accepted encodings or best fit based on `encodings`. - * - * Given `Accept-Encoding: gzip, deflate` - * an array sorted by quality is returned: - * - * ['gzip', 'deflate'] - * - * @param {String|Array} encoding(s)... - * @return {String|Array} - * @api public - */ - - acceptsEncodings(...args) { - return this.accept.encodings(...args); - }, - - /** - * Return accepted charsets or best fit based on `charsets`. 
- * - * Given `Accept-Charset: utf-8, iso-8859-1;q=0.2, utf-7;q=0.5` - * an array sorted by quality is returned: - * - * ['utf-8', 'utf-7', 'iso-8859-1'] - * - * @param {String|Array} charset(s)... - * @return {String|Array} - * @api public - */ - - acceptsCharsets(...args) { - return this.accept.charsets(...args); - }, - - /** - * Return accepted languages or best fit based on `langs`. - * - * Given `Accept-Language: en;q=0.8, es, pt` - * an array sorted by quality is returned: - * - * ['es', 'pt', 'en'] - * - * @param {String|Array} lang(s)... - * @return {Array|String} - * @api public - */ - - acceptsLanguages(...args) { - return this.accept.languages(...args); - }, - - /** - * Check if the incoming request contains the "Content-Type" - * header field and if it contains any of the given mime `type`s. - * If there is no request body, `null` is returned. - * If there is no content type, `false` is returned. - * Otherwise, it returns the first `type` that matches. - * - * Examples: - * - * // With Content-Type: text/html; charset=utf-8 - * this.is('html'); // => 'html' - * this.is('text/html'); // => 'text/html' - * this.is('text/*', 'application/json'); // => 'text/html' - * - * // When Content-Type is application/json - * this.is('json', 'urlencoded'); // => 'json' - * this.is('application/json'); // => 'application/json' - * this.is('html', 'application/*'); // => 'application/json' - * - * this.is('html'); // => false - * - * @param {String|String[]} [type] - * @param {String[]} [types] - * @return {String|false|null} - * @api public - */ - - is(type, ...types) { - return typeis(this.req, type, ...types); - }, - - /** - * Return the request mime type void of - * parameters such as "charset". - * - * @return {String} - * @api public - */ - - get type() { - const type = this.get('Content-Type'); - if (!type) return ''; - return type.split(';')[0]; - }, - - /** - * Return request header. - * - * The `Referrer` header field is special-cased, - * both `Referrer` and `Referer` are interchangeable. - * - * Examples: - * - * this.get('Content-Type'); - * // => "text/plain" - * - * this.get('content-type'); - * // => "text/plain" - * - * this.get('Something'); - * // => '' - * - * @param {String} field - * @return {String} - * @api public - */ - - get(field) { - const req = this.req; - switch (field = field.toLowerCase()) { - case 'referer': - case 'referrer': - return req.headers.referrer || req.headers.referer || ''; - default: - return req.headers[field] || ''; - } - }, - - /** - * Inspect implementation. - * - * @return {Object} - * @api public - */ - - inspect() { - if (!this.req) return; - return this.toJSON(); - }, - - /** - * Return JSON representation. - * - * @return {Object} - * @api public - */ - - toJSON() { - return only(this, [ - 'method', - 'url', - 'header' - ]); - } -}; - -/** - * Custom inspection implementation for newer Node.js versions. 
- * - * @return {Object} - * @api public - */ - -/* istanbul ignore else */ -if (util.inspect.custom) { - module.exports[util.inspect.custom] = module.exports.inspect; -} diff --git a/spaces/zhangliwei7758/vits-uma-genshin-honkai/mel_processing.py b/spaces/zhangliwei7758/vits-uma-genshin-honkai/mel_processing.py deleted file mode 100644 index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000 --- a/spaces/zhangliwei7758/vits-uma-genshin-honkai/mel_processing.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), 
mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/zhoujiaxin/zhoujiaxinchatgpt/src/components/chat.tsx b/spaces/zhoujiaxin/zhoujiaxinchatgpt/src/components/chat.tsx deleted file mode 100644 index a37ab1cc96ca2e6bfd9acbe313a8d946bfd5c3d4..0000000000000000000000000000000000000000 --- a/spaces/zhoujiaxin/zhoujiaxinchatgpt/src/components/chat.tsx +++ /dev/null @@ -1,93 +0,0 @@ -'use client' - -import { useCallback, useEffect, useMemo, useState } from 'react' -import { useAtom } from 'jotai' -import Image from 'next/image' -import { cn } from '@/lib/utils' -import { ChatList } from '@/components/chat-list' -import { ChatPanel } from '@/components/chat-panel' -import { WelcomeScreen } from '@/components/welcome-screen' -import { ChatScrollAnchor } from '@/components/chat-scroll-anchor' -import { ToneSelector } from './tone-selector' -import { ChatHeader } from './chat-header' -import { ChatSuggestions } from './chat-suggestions' -import { bingConversationStyleAtom } from '@/state' -import { ButtonScrollToBottom } from '@/components/button-scroll-to-bottom' -import StopIcon from '@/assets/images/stop.svg' -import { useBing } from '@/lib/hooks/use-bing' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { ChatNotification } from './chat-notification' -import { Settings } from './settings' -import { ChatHistory } from './chat-history' - -export type ChatProps = React.ComponentProps<'div'> & { initialMessages?: ChatMessageModel[] } - -export default function Chat({ className }: ChatProps) { - - const [bingStyle, setBingStyle] = useAtom(bingConversationStyleAtom) - const { - messages, - sendMessage, - resetConversation, - stopGenerating, - setInput, - bot, - input, - generating, - isSpeaking, - uploadImage, - attachmentList, - setAttachmentList, - } = useBing() - - useEffect(() => { - window.scrollTo({ - top: document.body.offsetHeight, - behavior: 'smooth' - }) - }, []) - - return (
- <div className={cn('flex flex-1 flex-col', className)}> - <Settings /> - <ChatHeader /> - <WelcomeScreen setInput={setInput} /> - <ToneSelector type={bingStyle} onChange={setBingStyle} /> - {messages.length ? ( - <> - <ChatList messages={messages} /> - <ChatScrollAnchor trackVisibility={generating} /> - <ChatNotification bot={bot} /> - <ChatSuggestions setInput={setInput} /> - {generating ? ( - <div> - <button onClick={stopGenerating}> - <Image alt="stop" src={StopIcon} width={24} /> - <span>Stop generating</span> - </button> - </div> - ) : null} - </> - ) : null} - <ChatPanel generating={generating} isSpeaking={isSpeaking} sendMessage={sendMessage} input={input} setInput={setInput} resetConversation={resetConversation} uploadImage={uploadImage} attachmentList={attachmentList} setAttachmentList={setAttachmentList} /> - <ChatHistory /> - <ButtonScrollToBottom /> - </div>
      - ) -} diff --git a/spaces/zhuce/vits/commons.py b/spaces/zhuce/vits/commons.py deleted file mode 100644 index 40fcc05364d4815971f5c6f9dbb8dcef8e3ec1e9..0000000000000000000000000000000000000000 --- a/spaces/zhuce/vits/commons.py +++ /dev/null @@ -1,172 +0,0 @@ -import math -import torch -from torch.nn import functional as F -import torch.jit - - -def script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - -torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + 
input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm
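The alignment helpers at the end of commons.py (sequence_mask and generate_path) are the least obvious part of the file, so here is a small standalone sketch of what they compute. This snippet is illustrative only and not part of the deleted file; it replicates sequence_mask locally so it runs on its own with just PyTorch installed:

import torch
import torch.nn.functional as F

def sequence_mask(length, max_length=None):
    # Same logic as commons.sequence_mask: mask[i, j] = (j < length[i])
    if max_length is None:
        max_length = length.max()
    x = torch.arange(max_length, dtype=length.dtype, device=length.device)
    return x.unsqueeze(0) < length.unsqueeze(1)

# Token durations [1, 2, 1] expand 3 input tokens into 4 output frames.
duration = torch.tensor([[[1, 2, 1]]])            # [b=1, 1, t_x=3]
cum = torch.cumsum(duration, -1).view(3)          # cumulative end frame per token
path = sequence_mask(cum, 4).float().view(1, 3, 4)
path = path - F.pad(path, [0, 0, 1, 0])[:, :-1]   # row i minus row i-1 -> one-hot spans
print(path)
# tensor([[[1., 0., 0., 0.],     frame 0    -> token 0
#          [0., 1., 1., 0.],     frames 1-2 -> token 1
#          [0., 0., 0., 1.]]])   frame 3    -> token 2

This is the monotonic alignment trick used throughout VITS: generate_path builds exactly this difference of cumulative-duration masks (plus masking and a transpose to [b, 1, t_y, t_x]) to map each output frame to the single input token whose duration covers it.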