diff --git a/spaces/1phancelerku/anime-remove-background/Crossword Puzzle APK The Best App for Relaxing and Unwinding with Crosswords.md b/spaces/1phancelerku/anime-remove-background/Crossword Puzzle APK The Best App for Relaxing and Unwinding with Crosswords.md deleted file mode 100644 index 1ae5b84b3211d646d81efab40c33ee6a525537a1..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Crossword Puzzle APK The Best App for Relaxing and Unwinding with Crosswords.md +++ /dev/null @@ -1,80 +0,0 @@ - -

Crossword Puzzle APK: A Fun and Challenging Game for Your Brain

-

Do you love word games? Do you enjoy solving puzzles and learning new things? If you answered yes, then you should try a crossword puzzle apk, an app that lets you play crossword puzzles on your Android device. Whether you are a beginner or an expert, you will find crossword puzzles that suit your level and interests. In this article, we will explain what a crossword puzzle apk is, why you should play it, how to play it, and what some of its features are.

-

cross word puzzle apk


Download File: https://jinyurl.com/2uNLFK



-

What is a crossword puzzle apk?

-

An apk file is a format for installing applications on Android devices

-

An apk file is short for Android Package Kit. It is a file format that contains all the elements needed to install an application on an Android device. You can download apk files from various sources, such as websites, app stores, or file-sharing platforms. However, you should be careful about the source of the apk file, as some may contain malware or viruses that can harm your device. You should only download apk files from trusted and reputable sources.
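
As a concrete illustration of that advice, the short Python sketch below shows one way a downloaded apk could be verified against a checksum published by a trusted source before it is installed. The file name and expected hash are placeholders, not values from any real app.

```python
# Hypothetical example: verify a downloaded apk against a published SHA-256
# checksum before installing it. File name and expected hash are placeholders.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "0f3a..."  # checksum published by the download source (placeholder)
actual = sha256_of("crossword-puzzle.apk")
print("Safe to install" if actual == expected else "Checksum mismatch, do not install")
```

If the digests do not match, the file was corrupted or tampered with and should not be installed.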

-

A crossword puzzle is a word game that involves filling in a grid with words that match the clues

-

A crossword puzzle is one of the most popular word games in the world. It consists of a grid of white and black squares, in which runs of white squares form horizontal or vertical entries. Each entry is determined by a clue given at the side or bottom of the grid, usually in the form of a definition, synonym, antonym, or piece of wordplay. The goal of the game is to fill in all the white squares with letters that form valid words.
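
To make that structure concrete, here is a minimal Python sketch (not taken from any app) of a grid represented as rows of characters, with '#' marking black squares; the helper pulls out the horizontal entries so they can be matched against their clues. The sample grid and words are invented.

```python
# Minimal sketch of a crossword grid: each row is a string, '#' is a black
# square, letters fill the white squares. The sample grid below is invented.
def extract_across(grid):
    """Return the horizontal entries (runs of 2+ white squares) in the grid."""
    words = []
    for row in grid:
        for run in row.split("#"):
            if len(run) >= 2:  # a single square is not an across entry
                words.append(run)
    return words

grid = [
    "CAT#",
    "A#GO",
    "#BEE",
]
print(extract_across(grid))  # ['CAT', 'GO', 'BEE']
```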

-

A crossword puzzle apk is an app that lets you play crossword puzzles on your phone or tablet

-

A crossword puzzle apk is an application that you can install on your Android device using an apk file. It allows you to play crossword puzzles on your phone or tablet anytime and anywhere. You can choose from hundreds of crossword puzzles in different categories and difficulties, or you can create your own puzzles using the app's editor. You can also customize the app's settings and themes to suit your preferences.

-

Why should you play crossword puzzle apk?

-

Crossword puzzles are good for your brain health and cognitive skills

-

Playing crossword puzzles can benefit your brain in many ways. According to research, crossword puzzles can improve your memory, vocabulary, spelling, logic, reasoning, problem-solving, and general knowledge. By keeping your brain active and stimulated, they may also help protect against cognitive decline and dementia. Crossword puzzles can reduce stress and improve your mood as well, by providing a sense of accomplishment and satisfaction.

-

Crossword puzzles are fun and entertaining for all ages and levels

-

Playing crossword puzzles can also be a lot of fun and entertainment for players of any age or skill level. If you get stuck, you can check your answers with the check button in the top right corner of the screen, and the app will highlight any incorrect or incomplete words in red. You can also reveal the solution by tapping the reveal button in the top left corner, and the app will show you the complete and correct grid. However, revealing the solution ends your game, and you will not earn any points or achievements.
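
For the curious, the sketch below shows roughly what such a check could do behind the scenes: compare the player's grid to the solution and flag squares that are still empty or wrong. It is only an illustration under assumed conventions (' ' for an unfilled white square, '#' for a black square), not the app's actual code.

```python
# Illustrative only: compare a player's grid to the solution grid and report
# incomplete or incorrect squares. ' ' = unfilled white square, '#' = black.
def check(player, solution):
    issues = []
    for r, (p_row, s_row) in enumerate(zip(player, solution)):
        for c, (p, s) in enumerate(zip(p_row, s_row)):
            if s == "#":
                continue  # black squares are never checked
            if p == " ":
                issues.append((r, c, "incomplete"))
            elif p != s:
                issues.append((r, c, "incorrect"))
    return issues

print(check(["CAT#", "A GO"], ["CAT#", "AMGO"]))  # [(1, 1, 'incomplete')]
```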

-

What are some features of crossword puzzle apk?

-

Hundreds of crossword puzzles in different categories and difficulties

-

One of the main features of crossword puzzle apk is that it has hundreds of crossword puzzles in different categories and difficulties. You can choose from various themes and topics, such as sports, movies, history, science, and more. You can also select the level of difficulty that suits your skill and interest, such as easy, medium, hard, or expert. You will never run out of crossword puzzles to play with this app.

-

Customizable settings and themes to suit your preferences

-

Another feature of crossword puzzle apk is that it has customizable settings and themes to suit your preferences. You can change the font size, color, and style of the clues and words. You can also change the background color and image of the grid. You can choose from various themes, such as classic, modern, wood, paper, or dark. You can also adjust the sound effects and music volume of the app.

-

Offline mode and cloud sync to play anytime and anywhere

-

A third feature of crossword puzzle apk is that it has offline mode and cloud sync to play anytime and anywhere. You can play crossword puzzles without an internet connection by downloading them to your device. You can also sync your progress and achievements to the cloud by signing in with your Google account. This way, you can access your crossword puzzles on any device and resume your game from where you left off.

-


-

Leaderboards and achievements to track your progress and compete with others

-

A fourth feature of crossword puzzle apk is that it has leaderboards and achievements to track your progress and compete with others. You can earn points and stars for completing crossword puzzles and unlocking new levels. You can also earn achievements for reaching certain milestones or completing certain challenges. You can view your rank and score on the global or local leaderboards, and compare them with other players around the world or in your area.

-

Conclusion

-

A crossword puzzle apk is a fun and challenging game for your brain. It lets you play crossword puzzles on your Android device using an apk file. It benefits your brain health and cognitive skills while also being genuinely fun and entertaining. It also offers many features that make it convenient and accessible, such as hundreds of crossword puzzles in different categories and difficulties, customizable settings and themes, offline mode and cloud sync, and leaderboards and achievements. If you love word games and puzzles, you should definitely give a crossword puzzle apk a try. You will not regret it!

-

FAQs

-

What is the best source to download crossword puzzle apk?

-

There are many sources to download crossword puzzle apk, but not all of them are safe and reliable. You should only download apk files from trusted and reputable sources, such as official websites, app stores, or file-sharing platforms. You should also check the reviews and ratings of the apk files before downloading them.

-

How can I create my own crossword puzzle using the app?

-

You can create your own crossword puzzle using the app's editor. You can choose the size of the grid, the theme of the puzzle, and the clues and words that you want to use. You can also save your puzzle and share it with others. To access the editor, you need to tap on the menu button on the top left corner of the screen, and then tap on "Create Puzzle".

-

How can I change the theme of the app?

-

You can change the theme of the app by tapping on the settings button on the top right corner of the screen, and then tapping on "Theme". You can choose from various themes, such as classic, modern, wood, paper, or dark. You can also change the background color and image of the grid.

-

How can I play offline or sync my progress to the cloud?

-

You can play offline by downloading the puzzles to your device. You can also sync your progress and achievements to the cloud by signing in with your Google account. To do either of these, you need to tap on the menu button on the top left corner of the screen, and then tap on "Offline Mode" or "Cloud Sync".

-

How can I compete with other players online?

-

You can compete with other players online by viewing your rank and score on the global or local leaderboards. You can also earn achievements for reaching certain milestones or completing certain challenges. To access these features, you need to tap on the menu button on the top left corner of the screen, and then tap on "Leaderboards" or "Achievements".

-
-
\ No newline at end of file diff --git a/spaces/1toTree/lora_test/ppdiffusers/pipelines/vq_diffusion/pipeline_vq_diffusion.py b/spaces/1toTree/lora_test/ppdiffusers/pipelines/vq_diffusion/pipeline_vq_diffusion.py deleted file mode 100644 index 98b179141855d1aced0177024022563d4df7995b..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/pipelines/vq_diffusion/pipeline_vq_diffusion.py +++ /dev/null @@ -1,346 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 Microsoft and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from typing import Callable, List, Optional, Tuple, Union - -import paddle -import paddle.nn as nn - -from paddlenlp.transformers import CLIPTextModel, CLIPTokenizer - -from ...configuration_utils import ConfigMixin, register_to_config -from ...modeling_utils import ModelMixin -from ...models import Transformer2DModel, VQModel -from ...pipeline_utils import DiffusionPipeline, ImagePipelineOutput -from ...schedulers import VQDiffusionScheduler -from ...utils import logging - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -INF = 1e9 - - -# paddle logsumexp may has bug -def logsumexp(x, axis=None, keepdim=False): - return paddle.log(x.exp().sum(axis=axis, keepdim=keepdim)) - - -class LearnedClassifierFreeSamplingEmbeddings(ModelMixin, ConfigMixin): - """ - Utility class for storing learned text embeddings for classifier free sampling - """ - - @register_to_config - def __init__(self, learnable: bool, hidden_size: Optional[int] = None, length: Optional[int] = None): - super().__init__() - - self.learnable = learnable - - if self.learnable: - assert hidden_size is not None, "learnable=True requires `hidden_size` to be set" - assert length is not None, "learnable=True requires `length` to be set" - - embeddings = paddle.zeros([length, hidden_size]) - self.embeddings = self.create_parameter( - embeddings.shape, default_initializer=nn.initializer.Assign(embeddings) - ) - else: - self.embeddings = None - - -class VQDiffusionPipeline(DiffusionPipeline): - r""" - Pipeline for text-to-image generation using VQ Diffusion - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular xxxx, etc.) - - Args: - vqvae ([`VQModel`]): - Vector Quantized Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent - representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. VQ Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). 
- transformer ([`Transformer2DModel`]): - Conditional transformer to denoise the encoded image latents. - scheduler ([`VQDiffusionScheduler`]): - A scheduler to be used in combination with `transformer` to denoise the encoded image latents. - """ - - vqvae: VQModel - text_encoder: CLIPTextModel - tokenizer: CLIPTokenizer - transformer: Transformer2DModel - learned_classifier_free_sampling_embeddings: LearnedClassifierFreeSamplingEmbeddings - scheduler: VQDiffusionScheduler - - def __init__( - self, - vqvae: VQModel, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - transformer: Transformer2DModel, - scheduler: VQDiffusionScheduler, - learned_classifier_free_sampling_embeddings: LearnedClassifierFreeSamplingEmbeddings, - ): - super().__init__() - - self.register_modules( - vqvae=vqvae, - transformer=transformer, - text_encoder=text_encoder, - tokenizer=tokenizer, - scheduler=scheduler, - learned_classifier_free_sampling_embeddings=learned_classifier_free_sampling_embeddings, - ) - - def _encode_prompt(self, prompt, num_images_per_prompt, do_classifier_free_guidance): - batch_size = len(prompt) if isinstance(prompt, list) else 1 - - # get prompt text embeddings - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - return_tensors="pd", - ) - text_input_ids = text_inputs.input_ids - - if text_input_ids.shape[-1] > self.tokenizer.model_max_length: - removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length] - text_embeddings = self.text_encoder(text_input_ids)[0] - - # NOTE: This additional step of normalizing the text embeddings is from VQ-Diffusion. - # While CLIP does normalize the pooled output of the text transformer when combining - # the image and text embeddings, CLIP does not directly normalize the last hidden state. - # - # CLIP normalizing the pooled output. 
- # https://github.com/huggingface/transformers/blob/d92e22d1f28324f513f3080e5c47c071a3916721/src/transformers/models/clip/modeling_clip.py#L1052-L1053 - text_embeddings = text_embeddings / text_embeddings.norm(axis=-1, keepdim=True) - - # duplicate text embeddings for each generation per prompt, using mps friendly method - bs_embed, seq_len, _ = text_embeddings.shape - text_embeddings = text_embeddings.tile([1, num_images_per_prompt, 1]) - text_embeddings = text_embeddings.reshape([bs_embed * num_images_per_prompt, seq_len, -1]) - - if do_classifier_free_guidance: - if self.learned_classifier_free_sampling_embeddings.learnable: - uncond_embeddings = self.learned_classifier_free_sampling_embeddings.embeddings - uncond_embeddings = uncond_embeddings.unsqueeze(0).tile([batch_size, 1, 1]) - else: - uncond_tokens = [""] * batch_size - - max_length = text_input_ids.shape[-1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pd", - ) - uncond_embeddings = self.text_encoder(uncond_input.input_ids)[0] - # See comment for normalizing text embeddings - uncond_embeddings = uncond_embeddings / uncond_embeddings.norm(axis=-1, keepdim=True) - - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = uncond_embeddings.shape[1] - uncond_embeddings = uncond_embeddings.tile([1, num_images_per_prompt, 1]) - uncond_embeddings = uncond_embeddings.reshape([batch_size * num_images_per_prompt, seq_len, -1]) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings = paddle.concat([uncond_embeddings, text_embeddings]) - - return text_embeddings - - @paddle.no_grad() - def __call__( - self, - prompt: Union[str, List[str]], - num_inference_steps: int = 100, - guidance_scale: float = 5.0, - truncation_rate: float = 1.0, - num_images_per_prompt: int = 1, - generator: Optional[Union[paddle.Generator, List[paddle.Generator]]] = None, - latents: Optional[paddle.Tensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, paddle.Tensor], None]] = None, - callback_steps: Optional[int] = 1, - ) -> Union[ImagePipelineOutput, Tuple]: - """ - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - num_inference_steps (`int`, *optional*, defaults to 100): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - truncation_rate (`float`, *optional*, defaults to 1.0 (equivalent to no truncation)): - Used to "truncate" the predicted classes for x_0 such that the cumulative probability for a pixel is at - most `truncation_rate`. The lowest probabilities that would increase the cumulative probability above - `truncation_rate` are set to zero. 
- num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - generator (`paddle.Generator`, *optional*): - One or a list of paddle generator(s) to make generation deterministic. - latents (`paddle.Tensor` of shape (batch), *optional*): - Pre-generated noisy latents to be used as inputs for image generation. Must be valid embedding indices. - Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will - be generated of completely masked latent pixels. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generated image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipeline_utils.ImagePipelineOutput`] instead of a plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: paddle.Tensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - - Returns: - [`~pipeline_utils.ImagePipelineOutput`] or `tuple`: [`~ pipeline_utils.ImagePipelineOutput `] if - `return_dict` is True, otherwise a `tuple. When returning a tuple, the first element is a list with the - generated images. - """ - if isinstance(prompt, str): - batch_size = 1 - elif isinstance(prompt, list): - batch_size = len(prompt) - else: - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - batch_size = batch_size * num_images_per_prompt - - do_classifier_free_guidance = guidance_scale > 1.0 - - text_embeddings = self._encode_prompt(prompt, num_images_per_prompt, do_classifier_free_guidance) - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - # get the initial completely masked latents unless the user supplied it - - latents_shape = [batch_size, self.transformer.num_latent_pixels] - if latents is None: - mask_class = self.transformer.num_vector_embeds - 1 - latents = paddle.full(latents_shape, mask_class, dtype="int64") - else: - if latents.shape != latents_shape: - raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}") - if (latents < 0).any() or (latents >= self.transformer.num_vector_embeds).any(): - raise ValueError( - "Unexpected latents value(s). All latents be valid embedding indices i.e. in the range 0," - f" {self.transformer.num_vector_embeds - 1} (inclusive)." 
- ) - - # set timesteps - self.scheduler.set_timesteps(num_inference_steps) - - timesteps_tensor = self.scheduler.timesteps - - sample = latents - - for i, t in enumerate(self.progress_bar(timesteps_tensor)): - # expand the sample if we are doing classifier free guidance - latent_model_input = paddle.concat([sample] * 2) if do_classifier_free_guidance else sample - - # predict the un-noised image - # model_output == `log_p_x_0` - model_output = self.transformer( - latent_model_input, encoder_hidden_states=text_embeddings, timestep=t - ).sample - - if do_classifier_free_guidance: - model_output_uncond, model_output_text = model_output.chunk(2) - model_output = model_output_uncond + guidance_scale * (model_output_text - model_output_uncond) - model_output -= logsumexp(model_output, axis=1, keepdim=True) - - model_output = self.truncate(model_output, truncation_rate) - - # remove `log(0)`'s (`-inf`s) - model_output = model_output.clip(-70) - - # compute the previous noisy sample x_t -> x_t-1 - sample = self.scheduler.step(model_output, timestep=t, sample=sample, generator=generator).prev_sample - - # call the callback, if provided - if callback is not None and i % callback_steps == 0: - callback(i, t, sample) - - embedding_channels = self.vqvae.config.vq_embed_dim - embeddings_shape = (batch_size, self.transformer.height, self.transformer.width, embedding_channels) - embeddings = self.vqvae.quantize.get_codebook_entry(sample, shape=embeddings_shape) - image = self.vqvae.decode(embeddings, force_not_quantize=True).sample - - image = (image / 2 + 0.5).clip(0, 1) - image = image.transpose([0, 2, 3, 1]).cast("float32").numpy() - - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image,) - - return ImagePipelineOutput(images=image) - - def truncate(self, log_p_x_0: paddle.Tensor, truncation_rate: float) -> paddle.Tensor: - """ - Truncates log_p_x_0 such that for each column vector, the total cumulative probability is `truncation_rate` The - lowest probabilities that would increase the cumulative probability above `truncation_rate` are set to zero. 
- """ - sorted_log_p_x_0, indices = paddle.topk(log_p_x_0, k=log_p_x_0.shape[1], axis=1) - sorted_p_x_0 = paddle.exp(sorted_log_p_x_0) - keep_mask = (sorted_p_x_0.cumsum(axis=1) < truncation_rate).cast("int64") - - # Ensure that at least the largest probability is not zeroed out - all_true = paddle.full_like(keep_mask[:, 0:1, :], 1) - keep_mask = paddle.concat((all_true, keep_mask), axis=1) - keep_mask = keep_mask[:, :-1, :] - - keep_mask = paddle.take_along_axis(keep_mask, indices.argsort(1), axis=1).cast( - "bool" - ) # keep_mask.gather(indices.argsort(1), axis=1) - - rv = log_p_x_0.clone() - # rv[~keep_mask] = -INF # -inf = log(0) - rv = paddle.where(keep_mask, rv, paddle.to_tensor(-INF, dtype="float32")) - - return rv diff --git a/spaces/44ov41za8i/FreeVC/app.py b/spaces/44ov41za8i/FreeVC/app.py deleted file mode 100644 index 390198b05bcdad1a96cb2f5e3795c620a5856cfd..0000000000000000000000000000000000000000 --- a/spaces/44ov41za8i/FreeVC/app.py +++ /dev/null @@ -1,103 +0,0 @@ -import os -import torch -import librosa -import gradio as gr -from scipy.io.wavfile import write -from transformers import WavLMModel - -import utils -from models import SynthesizerTrn -from mel_processing import mel_spectrogram_torch -from speaker_encoder.voice_encoder import SpeakerEncoder - -''' -def get_wavlm(): - os.system('gdown https://drive.google.com/uc?id=12-cB34qCTvByWT-QtOcZaqwwO21FLSqU') - shutil.move('WavLM-Large.pt', 'wavlm') -''' - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -print("Loading FreeVC...") -hps = utils.get_hparams_from_file("configs/freevc.json") -freevc = SynthesizerTrn( - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model).to(device) -_ = freevc.eval() -_ = utils.load_checkpoint("checkpoints/freevc.pth", freevc, None) -smodel = SpeakerEncoder('speaker_encoder/ckpt/pretrained_bak_5805000.pt') - -print("Loading FreeVC(24k)...") -hps = utils.get_hparams_from_file("configs/freevc-24.json") -freevc_24 = SynthesizerTrn( - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model).to(device) -_ = freevc_24.eval() -_ = utils.load_checkpoint("checkpoints/freevc-24.pth", freevc_24, None) - -print("Loading FreeVC-s...") -hps = utils.get_hparams_from_file("configs/freevc-s.json") -freevc_s = SynthesizerTrn( - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model).to(device) -_ = freevc_s.eval() -_ = utils.load_checkpoint("checkpoints/freevc-s.pth", freevc_s, None) - -print("Loading WavLM for content...") -cmodel = WavLMModel.from_pretrained("microsoft/wavlm-large").to(device) - -def convert(model, src, tgt): - with torch.no_grad(): - # tgt - wav_tgt, _ = librosa.load(tgt, sr=hps.data.sampling_rate) - wav_tgt, _ = librosa.effects.trim(wav_tgt, top_db=20) - if model == "FreeVC" or model == "FreeVC (24kHz)": - g_tgt = smodel.embed_utterance(wav_tgt) - g_tgt = torch.from_numpy(g_tgt).unsqueeze(0).to(device) - else: - wav_tgt = torch.from_numpy(wav_tgt).unsqueeze(0).to(device) - mel_tgt = mel_spectrogram_torch( - wav_tgt, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - # src - wav_src, _ = librosa.load(src, sr=hps.data.sampling_rate) - wav_src = torch.from_numpy(wav_src).unsqueeze(0).to(device) - c = cmodel(wav_src).last_hidden_state.transpose(1, 2).to(device) - # infer - if model == "FreeVC": - audio = freevc.infer(c, 
g=g_tgt) - elif model == "FreeVC-s": - audio = freevc_s.infer(c, mel=mel_tgt) - else: - audio = freevc_24.infer(c, g=g_tgt) - audio = audio[0][0].data.cpu().float().numpy() - if model == "FreeVC" or model == "FreeVC-s": - write("out.wav", hps.data.sampling_rate, audio) - else: - write("out.wav", 24000, audio) - out = "out.wav" - return out - -model = gr.Dropdown(choices=["FreeVC", "FreeVC-s", "FreeVC (24kHz)"], value="FreeVC",type="value", label="Model") -audio1 = gr.inputs.Audio(label="Source Audio", type='filepath') -audio2 = gr.inputs.Audio(label="Reference Audio", type='filepath') -inputs = [model, audio1, audio2] -outputs = gr.outputs.Audio(label="Output Audio", type='filepath') - -title = "FreeVC" -description = "Gradio Demo for FreeVC: Towards High-Quality Text-Free One-Shot Voice Conversion. To use it, simply upload your audio, or click the example to load. Read more at the links below. Note: It seems that the WavLM checkpoint in HuggingFace is a little different from the one used to train FreeVC, which may degrade the performance a bit. In addition, speaker similarity can be largely affected if there are too much silence in the reference audio, so please trim it before submitting." -article = "

Paper | Github Repo

" - -examples=[["FreeVC", 'p225_001.wav', 'p226_002.wav'], ["FreeVC-s", 'p226_002.wav', 'p225_001.wav'], ["FreeVC (24kHz)", 'p225_001.wav', 'p226_002.wav']] - -gr.Interface(convert, inputs, outputs, title=title, description=description, article=article, examples=examples, enable_queue=True).launch() diff --git a/spaces/801artistry/RVC801/lib/uvr5_pack/lib_v5/model_param_init.py b/spaces/801artistry/RVC801/lib/uvr5_pack/lib_v5/model_param_init.py deleted file mode 100644 index b995c0bfb1194746187692e2ab1c2a6dbaaaec6c..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/lib/uvr5_pack/lib_v5/model_param_init.py +++ /dev/null @@ -1,69 +0,0 @@ -import json -import os -import pathlib - -default_param = {} -default_param["bins"] = 768 -default_param["unstable_bins"] = 9 # training only -default_param["reduction_bins"] = 762 # training only -default_param["sr"] = 44100 -default_param["pre_filter_start"] = 757 -default_param["pre_filter_stop"] = 768 -default_param["band"] = {} - - -default_param["band"][1] = { - "sr": 11025, - "hl": 128, - "n_fft": 960, - "crop_start": 0, - "crop_stop": 245, - "lpf_start": 61, # inference only - "res_type": "polyphase", -} - -default_param["band"][2] = { - "sr": 44100, - "hl": 512, - "n_fft": 1536, - "crop_start": 24, - "crop_stop": 547, - "hpf_start": 81, # inference only - "res_type": "sinc_best", -} - - -def int_keys(d): - r = {} - for k, v in d: - if k.isdigit(): - k = int(k) - r[k] = v - return r - - -class ModelParameters(object): - def __init__(self, config_path=""): - if ".pth" == pathlib.Path(config_path).suffix: - import zipfile - - with zipfile.ZipFile(config_path, "r") as zip: - self.param = json.loads( - zip.read("param.json"), object_pairs_hook=int_keys - ) - elif ".json" == pathlib.Path(config_path).suffix: - with open(config_path, "r") as f: - self.param = json.loads(f.read(), object_pairs_hook=int_keys) - else: - self.param = default_param - - for k in [ - "mid_side", - "mid_side_b", - "mid_side_b2", - "stereo_w", - "stereo_n", - "reverse", - ]: - if not k in self.param: - self.param[k] = False diff --git a/spaces/AI-Dashboards/CP.Matplotlib.NetworkX.Streamlit.PyVis.Graphviz/app.py b/spaces/AI-Dashboards/CP.Matplotlib.NetworkX.Streamlit.PyVis.Graphviz/app.py deleted file mode 100644 index 61d4dba037b2029a86299480b8cf37618d859bc5..0000000000000000000000000000000000000000 --- a/spaces/AI-Dashboards/CP.Matplotlib.NetworkX.Streamlit.PyVis.Graphviz/app.py +++ /dev/null @@ -1,267 +0,0 @@ -import streamlit as st -import streamlit.components.v1 as components -import networkx as nx -import matplotlib.pyplot as plt -from pyvis.network import Network -import got -import numpy as np -import pandas as pd -import time -import re -import graphviz as graphviz -import pydeck as pdk - -from st_click_detector import click_detector - -st.graphviz_chart(''' - digraph { -Income -> AbleToBuyOnlyNecessities -Income -> DifficultyBuyingNecessities -Income -> DifficultyWithMoneyManagement -Income -> LowNoIncome -Income -> UninsuredMedicalExpenses - } -''') - -st.graphviz_chart(''' - digraph { -Income -> Continuityof -> Care -Income -> Durable -> Medical -> Equipment -Income -> Finances -Income -> LegalSystem -Income -> Medical -> Dental -> Care -Income -> Medication -> Coordination -> Ordering -Income -> Other -> Community -> Resources -Income -> SocialWork -> Counseling -> Care -Income -> Supplies - } -''') - -st.graphviz_chart(''' - digraph { -MentalHealth -> Apprehension -> Undefined -> Fear -> Anxious -MentalHealth -> Attempts -> Suicide -> Homicide 
-MentalHealth -> Difficulty -> Managing -> Anger -MentalHealth -> Difficulty -> Managing -> Stress -MentalHealth -> Expresses -> Suicidal -> Homicidal -> Thoughts -MentalHealth -> False -> Beliefs -> Delusions -MentalHealth -> False -> Perceptions -> Hallucinations -> Illusions -MentalHealth -> FlatAffect -> LackofEmotion -MentalHealth -> Irritable -> Agitated -> Aggressive -MentalHealth -> LossofInterest -> Involvementin -> ActivitiesSelfCare -MentalHealth -> MoodSwings -MentalHealth -> Narrowedto -> Scattered -> Attention -> Focus -MentalHealth -> Purposeless -> Compulsive -> RepetitiveActivity -MentalHealth -> Sadness -> Hopelessness -> Decreased -> SelfEsteem -MentalHealth -> Somatic -> Complaints -> Fatigue - } -''') - -st.graphviz_chart(''' - digraph { -MentalHealth -> Anger -> Management -MentalHealth -> Behavioral -> Health -> Care -MentalHealth -> Communication -MentalHealth -> Continuityof -> Care -MentalHealth -> Coping -> Skills -MentalHealth -> Dietary -> Management -MentalHealth -> Discipline -MentalHealth -> EndofLife -> Care -MentalHealth -> Interaction -MentalHealth -> LegalSystem -MentalHealth -> Medical -> Dental -> Care -MentalHealth -> Medication -> ActionSideEffects -MentalHealth -> Medication -> Administration -MentalHealth -> Medication -> CoordinationOrdering -MentalHealth -> Nursing -> Care -MentalHealth -> Nutritionist -> Care -MentalHealth -> Other -> Community -> Resources -MentalHealth -> Relaxation -> Breathing -> Techniques -MentalHealth -> Rest -> Sleep -MentalHealth -> Safety -MentalHealth -> Screening -> Procedures -MentalHealth -> SignsSymptoms -> MentalEmotional -MentalHealth -> SignsSymptoms -> Physical -MentalHealth -> SocialWork -> Counseling -> Care -MentalHealth -> Stress -> Management -MentalHealth -> Support -> Group -MentalHealth -> Support -> System -MentalHealth -> Wellness - } -''') - - -st.graphviz_chart(''' - digraph { -Respiration -> Abnormal -> BreathSoundsCrackles -Respiration -> Abnormal -> IrregularBreathPatterns -Respiration -> Abnormal -> RespiratoryLaboratoryResults -Respiration -> Abnormal -> Sputum -Respiration -> Cough -Respiration -> Noisy -> RespirationswheezingRalesRhonchi -Respiration -> Rhinorrhea -> NasalCongestion -Respiration -> UnabletoBreathe -> Independently - } -''') - -st.graphviz_chart(''' - digraph { -Respiration -> Anatomy -> Physiology -Respiration -> Continuityof -> Care -Respiration -> Coping -> Skills -Respiration -> Dietary -> Management -Respiration -> Durable -> Medical -> Equipment -Respiration -> Education -Respiration -> EndofLife -> Care -Respiration -> Environment -Respiration -> Exercises -Respiration -> Infection -> Precautions -Respiration -> Laboratory -> Findings -Respiration -> Medical -> Dental -> Care -Respiration -> Medication -> Action -> SideEffects -Respiration -> Medication -> Administration -Respiration -> Medication -> Prescription -Respiration -> Medication -> SetUp -Respiration -> Mobility -> Transfers -Respiration -> Nursing -> Care -Respiration -> Positioning -Respiration -> Relaxation -> Breathing -> Techniques -Respiration -> Respiratory -> Care -Respiration -> Respiratory -> Therapy -> Care -Respiration -> Safety -Respiration -> Screening -> Procedures -Respiration -> SignsSymptoms -> MentalEmotional -Respiration -> SignsSymptoms -> Physical -Respiration -> Specimen -> Collection -Respiration -> Supplies -Respiration -> Support -> Group -Respiration -> Support -> System -Respiration -> Wellness - } -''') - - -st.graphviz_chart(''' - digraph { -Circulation -> Abnormal -> 
BloodPressureReading -Circulation -> Abnormal -> CardiacLaboratoryResults -Circulation -> Abnormal -> Clotting -Circulation -> Abnormal -> HeartSoundsMurmurs -Circulation -> Anginal -> Pain -Circulation -> Cramping -> Pain -> ofExtremities -Circulation -> Decreased -> Pulses -Circulation -> Discoloration -> ofSkinCyanosis -Circulation -> EdemaSwelling -> inlegsarmsfeet -Circulation -> ExcessivelyRapid -> HeartRate -Circulation -> IrregularHeartRate -Circulation -> SyncopalEpisodes -> Fainting -> Dizziness -Circulation -> TemperatureChange -> inAffectedArea -Circulation -> Varicosities - } -''') - -st.graphviz_chart(''' - digraph { -Circulation -> Anatomy -> Physiology -Circulation -> Cardiac -> Care -Circulation -> Continuityof -> Care -Circulation -> Coping -> Skills -Circulation -> Dietary -> Management -Circulation -> Durable -> Medical -> Equipment -Circulation -> Exercises -Circulation -> Finances -Circulation -> Infection -> Precautions -Circulation -> Laboratory -> Findings -Circulation -> Medical -> Dental -> Care -Circulation -> Medication -> Action -> SideEffects -Circulation -> Medication -> Administration -Circulation -> Medication -> SetUp -Circulation -> Mobility -> Transfers -Circulation -> Nursing -> Care -Circulation -> Personal -> Hygiene -Circulation -> Relaxation -> Breathing -> Techniques -Circulation -> Safety -Circulation -> Screening -> Procedures -Circulation -> SignsSymptoms -> MentalEmotional -Circulation -> SignsSymptoms -> Physical -Circulation -> Support -> Group -Circulation -> Support -> System -Circulation -> Wellness - } -''') - -df = pd.read_csv("testfile.csv") -@st.cache -def convert_df(df): - return df.to_csv().encode('utf-8') -csv = convert_df(df) -st.download_button( - "Press to Download", - csv, - "testfile.csv", - "text/csv", - key='download-csv' -) - - -st.title('Streamlit Visualization') -dataframe = pd.DataFrame(np.random.randn(10, 20), - columns = ('col %d' % i - for i in range(20))) -st.write(dataframe) - -dataframe = pd.DataFrame(np.random.randn(10, 5), - columns = ('col %d' % i - for i in range(5))) -dataframe -st.write('This is a line_chart.') -st.line_chart(dataframe) - -st.write('This is a area_chart.') -st.area_chart(dataframe) - -st.write('This is a bar_chart.') -st.bar_chart(dataframe) - -st.write('Map data') -data_of_map = pd.DataFrame( - np.random.randn(1000, 2) / [60, 60] + [36.66, -121.6], - columns = ['latitude', 'longitude']) -st.map(data_of_map) - - -st.title('Pyvis VisJS DOTlang Legend') - -Network(notebook=True) -# make Network show itself with repr_html - -def net_repr_html(self): - nodes, edges, height, width, options = self.get_network_data() - html = self.template.render(height=height, width=width, nodes=nodes, edges=edges, options=options) - return html - -Network._repr_html_ = net_repr_html - -st.sidebar.title('Choose your favorite Graph') -option=st.sidebar.selectbox('select graph',('Simple','Karate', 'GOT')) -physics=st.sidebar.checkbox('add physics interactivity?') -got.simple_func(physics) - -if option=='Simple': - HtmlFile = open("test.html", 'r', encoding='utf-8') - source_code = HtmlFile.read() - components.html(source_code, height = 900,width=900) - -got.got_func(physics) - -if option=='GOT': - HtmlFile = open("gameofthrones.html", 'r', encoding='utf-8') - source_code = HtmlFile.read() - components.html(source_code, height = 1200,width=1000) - -got.karate_func(physics) - -if option=='Karate': - HtmlFile = open("karate.html", 'r', encoding='utf-8') - source_code = HtmlFile.read() - components.html(source_code, 
height = 1200,width=1000) \ No newline at end of file diff --git a/spaces/AIARTCHAN/openpose_editor/style.css b/spaces/AIARTCHAN/openpose_editor/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/AIARTCHAN/openpose_editor/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/AIConsultant/MusicGen/audiocraft/metrics/kld.py b/spaces/AIConsultant/MusicGen/audiocraft/metrics/kld.py deleted file mode 100644 index 18260bf974bf47d8381223ac39be0c47c031bf8a..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/metrics/kld.py +++ /dev/null @@ -1,218 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import contextlib -from functools import partial -import logging -import os -import typing as tp - -import torch -import torchmetrics - -from ..data.audio_utils import convert_audio - - -logger = logging.getLogger(__name__) - - -class _patch_passt_stft: - """Decorator to patch torch.stft in PaSST.""" - def __init__(self): - self.old_stft = torch.stft - - def __enter__(self): - # return_complex is a mandatory parameter in latest torch versions - # torch is throwing RuntimeErrors when not set - torch.stft = partial(torch.stft, return_complex=False) - - def __exit__(self, *exc): - torch.stft = self.old_stft - - -def kl_divergence(pred_probs: torch.Tensor, target_probs: torch.Tensor, epsilon: float = 1e-6) -> torch.Tensor: - """Computes the elementwise KL-Divergence loss between probability distributions - from generated samples and target samples. - - Args: - pred_probs (torch.Tensor): Probabilities for each label obtained - from a classifier on generated audio. Expected shape is [B, num_classes]. - target_probs (torch.Tensor): Probabilities for each label obtained - from a classifier on target audio. Expected shape is [B, num_classes]. - epsilon (float): Epsilon value. - Returns: - kld (torch.Tensor): KLD loss between each generated sample and target pair. - """ - kl_div = torch.nn.functional.kl_div((pred_probs + epsilon).log(), target_probs, reduction="none") - return kl_div.sum(-1) - - -class KLDivergenceMetric(torchmetrics.Metric): - """Base implementation for KL Divergence metric. - - The KL divergence is measured between probability distributions - of class predictions returned by a pre-trained audio classification model. - When the KL-divergence is low, the generated audio is expected to - have similar acoustic characteristics as the reference audio, - according to the classifier. 
- """ - def __init__(self): - super().__init__() - self.add_state("kld_pq_sum", default=torch.tensor(0.), dist_reduce_fx="sum") - self.add_state("kld_qp_sum", default=torch.tensor(0.), dist_reduce_fx="sum") - self.add_state("kld_all_sum", default=torch.tensor(0.), dist_reduce_fx="sum") - self.add_state("weight", default=torch.tensor(0), dist_reduce_fx="sum") - - def _get_label_distribution(self, x: torch.Tensor, sizes: torch.Tensor, - sample_rates: torch.Tensor) -> tp.Optional[torch.Tensor]: - """Get model output given provided input tensor. - - Args: - x (torch.Tensor): Input audio tensor of shape [B, C, T]. - sizes (torch.Tensor): Actual audio sample length, of shape [B]. - sample_rates (torch.Tensor): Actual audio sample rate, of shape [B]. - Returns: - probs (torch.Tensor): Probabilities over labels, of shape [B, num_classes]. - """ - raise NotImplementedError("implement method to extract label distributions from the model.") - - def update(self, preds: torch.Tensor, targets: torch.Tensor, - sizes: torch.Tensor, sample_rates: torch.Tensor) -> None: - """Calculates running KL-Divergence loss between batches of audio - preds (generated) and target (ground-truth) - Args: - preds (torch.Tensor): Audio samples to evaluate, of shape [B, C, T]. - targets (torch.Tensor): Target samples to compare against, of shape [B, C, T]. - sizes (torch.Tensor): Actual audio sample length, of shape [B]. - sample_rates (torch.Tensor): Actual audio sample rate, of shape [B]. - """ - assert preds.shape == targets.shape - assert preds.size(0) > 0, "Cannot update the loss with empty tensors" - preds_probs = self._get_label_distribution(preds, sizes, sample_rates) - targets_probs = self._get_label_distribution(targets, sizes, sample_rates) - if preds_probs is not None and targets_probs is not None: - assert preds_probs.shape == targets_probs.shape - kld_scores = kl_divergence(preds_probs, targets_probs) - assert not torch.isnan(kld_scores).any(), "kld_scores contains NaN value(s)!" - self.kld_pq_sum += torch.sum(kld_scores) - kld_qp_scores = kl_divergence(targets_probs, preds_probs) - self.kld_qp_sum += torch.sum(kld_qp_scores) - self.weight += torch.tensor(kld_scores.size(0)) - - def compute(self) -> dict: - """Computes KL-Divergence across all evaluated pred/target pairs.""" - weight: float = float(self.weight.item()) # type: ignore - assert weight > 0, "Unable to compute with total number of comparisons <= 0" - logger.info(f"Computing KL divergence on a total of {weight} samples") - kld_pq = self.kld_pq_sum.item() / weight # type: ignore - kld_qp = self.kld_qp_sum.item() / weight # type: ignore - kld_both = kld_pq + kld_qp - return {'kld': kld_pq, 'kld_pq': kld_pq, 'kld_qp': kld_qp, 'kld_both': kld_both} - - -class PasstKLDivergenceMetric(KLDivergenceMetric): - """KL-Divergence metric based on pre-trained PASST classifier on AudioSet. - - From: PaSST: Efficient Training of Audio Transformers with Patchout - Paper: https://arxiv.org/abs/2110.05069 - Implementation: https://github.com/kkoutini/PaSST - - Follow instructions from the github repo: - ``` - pip install 'git+https://github.com/kkoutini/passt_hear21@0.0.19#egg=hear21passt' - ``` - - Args: - pretrained_length (float, optional): Audio duration used for the pretrained model. 
- """ - def __init__(self, pretrained_length: tp.Optional[float] = None): - super().__init__() - self._initialize_model(pretrained_length) - - def _initialize_model(self, pretrained_length: tp.Optional[float] = None): - """Initialize underlying PaSST audio classifier.""" - model, sr, max_frames, min_frames = self._load_base_model(pretrained_length) - self.min_input_frames = min_frames - self.max_input_frames = max_frames - self.model_sample_rate = sr - self.model = model - self.model.eval() - self.model.to(self.device) - - def _load_base_model(self, pretrained_length: tp.Optional[float]): - """Load pretrained model from PaSST.""" - try: - if pretrained_length == 30: - from hear21passt.base30sec import get_basic_model # type: ignore - max_duration = 30 - elif pretrained_length == 20: - from hear21passt.base20sec import get_basic_model # type: ignore - max_duration = 20 - else: - from hear21passt.base import get_basic_model # type: ignore - # Original PASST was trained on AudioSet with 10s-long audio samples - max_duration = 10 - min_duration = 0.15 - min_duration = 0.15 - except ModuleNotFoundError: - raise ModuleNotFoundError( - "Please install hear21passt to compute KL divergence: ", - "pip install 'git+https://github.com/kkoutini/passt_hear21@0.0.19#egg=hear21passt'" - ) - model_sample_rate = 32_000 - max_input_frames = int(max_duration * model_sample_rate) - min_input_frames = int(min_duration * model_sample_rate) - with open(os.devnull, 'w') as f, contextlib.redirect_stdout(f): - model = get_basic_model(mode='logits') - return model, model_sample_rate, max_input_frames, min_input_frames - - def _process_audio(self, wav: torch.Tensor, sample_rate: int, wav_len: int) -> tp.Optional[torch.Tensor]: - wav = wav.unsqueeze(0) - wav = wav[..., :wav_len] - wav = convert_audio(wav, from_rate=sample_rate, to_rate=self.model_sample_rate, to_channels=1) - wav = wav.squeeze(0) - # create chunks of audio to match the classifier processing length - segments = torch.split(wav, self.max_input_frames, dim=-1) - valid_segments = [] - for s in segments: - if s.size(-1) > self.min_input_frames: - s = torch.nn.functional.pad(s, (0, self.max_input_frames - s.shape[-1])) - valid_segments.append(s) - if len(valid_segments) > 0: - return torch.stack(valid_segments, dim=0) - else: - return None - - def _get_label_distribution(self, x: torch.Tensor, sizes: torch.Tensor, - sample_rates: torch.Tensor) -> tp.Optional[torch.Tensor]: - """Get model output given provided input tensor. - - Args: - x (torch.Tensor): Input audio tensor of shape [B, C, T]. - sizes (torch.Tensor): Actual audio sample length, of shape [B]. - sample_rates (torch.Tensor): Actual audio sample rate, of shape [B]. - Returns: - probs (torch.Tensor, optional): Probabilities over labels, of shape [B, num_classes]. 
- """ - all_probs: tp.List[torch.Tensor] = [] - for i, wav in enumerate(x): - sample_rate = int(sample_rates[i].item()) - wav_len = int(sizes[i].item()) - wav = self._process_audio(wav, sample_rate, wav_len) - if wav is not None: - assert wav.dim() == 3, f"Unexpected number of dims for preprocessed wav: {wav.shape}" - wav = wav.mean(dim=1) - # PaSST is printing a lot of infos that we are not interested in - with open(os.devnull, 'w') as f, contextlib.redirect_stdout(f): - with torch.no_grad(), _patch_passt_stft(): - logits = self.model(wav.to(self.device)) - probs = torch.softmax(logits, dim=-1) - probs = probs.mean(dim=0) - all_probs.append(probs) - if len(all_probs) > 0: - return torch.stack(all_probs, dim=0) - else: - return None diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov5/yolov5_s-p6-v62_syncbn_fast_8xb16-300e_coco.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov5/yolov5_s-p6-v62_syncbn_fast_8xb16-300e_coco.py deleted file mode 100644 index 0af1fcb84e89ca915ab7d4920d81ae34b3589098..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov5/yolov5_s-p6-v62_syncbn_fast_8xb16-300e_coco.py +++ /dev/null @@ -1,138 +0,0 @@ -_base_ = 'yolov5_s-v61_syncbn_fast_8xb16-300e_coco.py' - -# ========================modified parameters====================== -img_scale = (1280, 1280) # width, height -num_classes = 80 # Number of classes for classification -# Config of batch shapes. Only on val. -# It means not used if batch_shapes_cfg is None. -batch_shapes_cfg = dict( - img_size=img_scale[0], - # The image scale of padding should be divided by pad_size_divisor - size_divisor=64) -# Basic size of multi-scale prior box -anchors = [ - [(19, 27), (44, 40), (38, 94)], # P3/8 - [(96, 68), (86, 152), (180, 137)], # P4/16 - [(140, 301), (303, 264), (238, 542)], # P5/32 - [(436, 615), (739, 380), (925, 792)] # P6/64 -] -# Strides of multi-scale prior box -strides = [8, 16, 32, 64] -num_det_layers = 4 # The number of model output scales -loss_cls_weight = 0.5 -loss_bbox_weight = 0.05 -loss_obj_weight = 1.0 -# The obj loss weights of the three output layers -obj_level_weights = [4.0, 1.0, 0.25, 0.06] -affine_scale = 0.5 # YOLOv5RandomAffine scaling ratio - -tta_img_scales = [(1280, 1280), (1024, 1024), (1536, 1536)] -# =======================Unmodified in most cases================== -model = dict( - backbone=dict(arch='P6', out_indices=(2, 3, 4, 5)), - neck=dict( - in_channels=[256, 512, 768, 1024], out_channels=[256, 512, 768, 1024]), - bbox_head=dict( - head_module=dict( - in_channels=[256, 512, 768, 1024], featmap_strides=strides), - prior_generator=dict(base_sizes=anchors, strides=strides), - # scaled based on number of detection layers - loss_cls=dict(loss_weight=loss_cls_weight * - (num_classes / 80 * 3 / num_det_layers)), - loss_bbox=dict(loss_weight=loss_bbox_weight * (3 / num_det_layers)), - loss_obj=dict(loss_weight=loss_obj_weight * - ((img_scale[0] / 640)**2 * 3 / num_det_layers)), - obj_level_weights=obj_level_weights)) - -pre_transform = _base_.pre_transform -albu_train_transforms = _base_.albu_train_transforms - -train_pipeline = [ - *pre_transform, - dict( - type='Mosaic', - img_scale=img_scale, - pad_val=114.0, - pre_transform=pre_transform), - dict( - type='YOLOv5RandomAffine', - max_rotate_degree=0.0, - max_shear_degree=0.0, - scaling_ratio_range=(1 - affine_scale, 1 + affine_scale), - # img_scale is 
(width, height) - border=(-img_scale[0] // 2, -img_scale[1] // 2), - border_val=(114, 114, 114)), - dict( - type='mmdet.Albu', - transforms=albu_train_transforms, - bbox_params=dict( - type='BboxParams', - format='pascal_voc', - label_fields=['gt_bboxes_labels', 'gt_ignore_flags']), - keymap={ - 'img': 'image', - 'gt_bboxes': 'bboxes' - }), - dict(type='YOLOv5HSVRandomAug'), - dict(type='mmdet.RandomFlip', prob=0.5), - dict( - type='mmdet.PackDetInputs', - meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', 'flip', - 'flip_direction')) -] - -train_dataloader = dict(dataset=dict(pipeline=train_pipeline)) - -test_pipeline = [ - dict(type='LoadImageFromFile', file_client_args=_base_.file_client_args), - dict(type='YOLOv5KeepRatioResize', scale=img_scale), - dict( - type='LetterResize', - scale=img_scale, - allow_scale_up=False, - pad_val=dict(img=114)), - dict(type='LoadAnnotations', with_bbox=True, _scope_='mmdet'), - dict( - type='mmdet.PackDetInputs', - meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', - 'scale_factor', 'pad_param')) -] - -val_dataloader = dict( - dataset=dict(pipeline=test_pipeline, batch_shapes_cfg=batch_shapes_cfg)) - -test_dataloader = val_dataloader - -# Config for Test Time Augmentation. (TTA) -_multiscale_resize_transforms = [ - dict( - type='Compose', - transforms=[ - dict(type='YOLOv5KeepRatioResize', scale=s), - dict( - type='LetterResize', - scale=s, - allow_scale_up=False, - pad_val=dict(img=114)) - ]) for s in tta_img_scales -] - -tta_pipeline = [ - dict(type='LoadImageFromFile', file_client_args=_base_.file_client_args), - dict( - type='TestTimeAug', - transforms=[ - _multiscale_resize_transforms, - [ - dict(type='mmdet.RandomFlip', prob=1.), - dict(type='mmdet.RandomFlip', prob=0.) - ], [dict(type='mmdet.LoadAnnotations', with_bbox=True)], - [ - dict( - type='mmdet.PackDetInputs', - meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', - 'scale_factor', 'pad_param', 'flip', - 'flip_direction')) - ] - ]) -] diff --git a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/util/learning_rates.py b/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/util/learning_rates.py deleted file mode 100644 index dd3325b4ed746f2d65e00750e40156aef6b6d851..0000000000000000000000000000000000000000 --- a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/util/learning_rates.py +++ /dev/null @@ -1,70 +0,0 @@ -import numpy as np -import math - - -class LearningRateDecay: - def __init__(self, lr=0.002, warmup_steps=4000.0) -> None: - self.lr = lr - self.warmup_steps = warmup_steps - - def __call__(self, global_step) -> float: - step = global_step + 1.0 - lr = ( - self.lr - * self.warmup_steps ** 0.5 - * np.minimum(step * self.warmup_steps ** -1.5, step ** -0.5) - ) - - return lr - -class SquareRootScheduler: - def __init__(self, lr=0.1): - self.lr = lr - - def __call__(self, global_step): - global_step = global_step // 1000 - return self.lr * pow(global_step + 1.0, -0.5) - - -class CosineScheduler: - def __init__( - self, max_update, base_lr=0.02, final_lr=0, warmup_steps=0, warmup_begin_lr=0 - ): - self.base_lr_orig = base_lr - self.max_update = max_update - self.final_lr = final_lr - self.warmup_steps = warmup_steps - self.warmup_begin_lr = warmup_begin_lr - self.max_steps = self.max_update - self.warmup_steps - - def get_warmup_lr(self, global_step): - increase = ( - (self.base_lr_orig - self.warmup_begin_lr) - * float(global_step) - / float(self.warmup_steps) - ) - return self.warmup_begin_lr + increase - - def __call__(self, global_step): - if 
global_step < self.warmup_steps: - return self.get_warmup_lr(global_step) - if global_step <= self.max_update: - self.base_lr = ( - self.final_lr - + (self.base_lr_orig - self.final_lr) - * ( - 1 - + math.cos( - math.pi * (global_step - self.warmup_steps) / self.max_steps - ) - ) - / 2 - ) - return self.base_lr - -def adjust_learning_rate(optimizer, global_step): - lr = LearningRateDecay()(global_step=global_step) - for param_group in optimizer.param_groups: - param_group["lr"] = lr - return lr - diff --git a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/selector/base.py b/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/selector/base.py deleted file mode 100644 index cc283870f3a5254f946f487d4db2711397443380..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/selector/base.py +++ /dev/null @@ -1,30 +0,0 @@ -from __future__ import annotations - -from typing import TYPE_CHECKING, List - -from pydantic import BaseModel - -from agentverse.message import Message - -from . import selector_registry as SelectorRegistry -from abc import abstractmethod - -if TYPE_CHECKING: - from agentverse.environments import BaseEnvironment - - -@SelectorRegistry.register("base") -class BaseSelector(BaseModel): - """ - Base class for all selecters - """ - - @abstractmethod - def select_message( - self, environment: BaseEnvironment, messages: List[Message] - ) -> List[Message]: - """Selects a set of valid messages from all messages""" - pass - - def reset(self) -> None: - pass diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/circularprogresscanvas/CircularProgressCanvas.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/circularprogresscanvas/CircularProgressCanvas.js deleted file mode 100644 index a28f984413a285e8cace9ec5cbe0d2fa615b8cc6..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/circularprogresscanvas/CircularProgressCanvas.js +++ /dev/null @@ -1,2 +0,0 @@ -import CircularProgressCanvas from '../../../plugins/circularprogresscanvas.js'; -export default CircularProgressCanvas; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/container/Container.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/container/Container.d.ts deleted file mode 100644 index 68dbf258f8f9d05190a114c6c8bb10933b842bf0..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/container/Container.d.ts +++ /dev/null @@ -1,2 +0,0 @@ -import Container from '../../../plugins/containerlite'; -export default Container; \ No newline at end of file diff --git a/spaces/AkitoP/umamusume_bert_vits2/attentions.py b/spaces/AkitoP/umamusume_bert_vits2/attentions.py deleted file mode 100644 index 3ba2407267ecd425d2095a6428015b5b4ebc0bda..0000000000000000000000000000000000000000 --- a/spaces/AkitoP/umamusume_bert_vits2/attentions.py +++ /dev/null @@ -1,464 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import logging - -logger = logging.getLogger(__name__) - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def 
forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=4, - isflow=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - # if isflow: - # cond_layer = torch.nn.Conv1d(256, 2*hidden_channels*n_layers, 1) - # self.cond_pre = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, 1) - # self.cond_layer = weight_norm(cond_layer, name='weight') - # self.gin_channels = 256 - self.cond_layer_idx = self.n_layers - if "gin_channels" in kwargs: - self.gin_channels = kwargs["gin_channels"] - if self.gin_channels != 0: - self.spk_emb_linear = nn.Linear(self.gin_channels, self.hidden_channels) - # vits2 says 3rd block, so idx is 2 by default - self.cond_layer_idx = ( - kwargs["cond_layer_idx"] if "cond_layer_idx" in kwargs else 2 - ) - logging.debug(self.gin_channels, self.cond_layer_idx) - assert ( - self.cond_layer_idx < self.n_layers - ), "cond_layer_idx should be less than n_layers" - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, g=None): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - if i == self.cond_layer_idx and g is not None: - g = self.spk_emb_linear(g.transpose(1, 2)) - g = g.transpose(1, 2) - x = x + g - x = x * x_mask - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = 
nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == 
t_t - ), "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." - block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. 
- x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # pad along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/AkitoP/umamusume_bert_vits2/losses.py b/spaces/AkitoP/umamusume_bert_vits2/losses.py deleted file mode 100644 index b1b263e4c205e78ffe970f622ab6ff68f36d3b17..0000000000000000000000000000000000000000 --- a/spaces/AkitoP/umamusume_bert_vits2/losses.py +++ /dev/null @@ -1,58 +0,0 @@ -import torch - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1 - dr) ** 2) - g_loss = torch.mean(dg**2) - loss += r_loss + g_loss - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: 
- dg = dg.float() - l = torch.mean((1 - dg) ** 2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p) ** 2) * torch.exp(-2.0 * logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/Akmyradov/TurkmenTTSweSTT/README.md b/spaces/Akmyradov/TurkmenTTSweSTT/README.md deleted file mode 100644 index 7905e99235f0254c82aba497f0c9e1728f5fa4a6..0000000000000000000000000000000000000000 --- a/spaces/Akmyradov/TurkmenTTSweSTT/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: MMS -emoji: ⚡ -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false -license: cc-by-nc-4.0 -duplicated_from: facebook/MMS ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AlexZou/Deploy_Restoration/net/SGFMT.py b/spaces/AlexZou/Deploy_Restoration/net/SGFMT.py deleted file mode 100644 index 73a6aeb8f021d6850809b588d2777c8604efca04..0000000000000000000000000000000000000000 --- a/spaces/AlexZou/Deploy_Restoration/net/SGFMT.py +++ /dev/null @@ -1,126 +0,0 @@ -# -*- coding: utf-8 -*- -# @Author : Lintao Peng -# @File : SGFMT.py -# coding=utf-8 -# Design based on the Vit - -import torch.nn as nn -from net.IntmdSequential import IntermediateSequential - - -#实现了自注意力机制,相当于unet的bottleneck层 -class SelfAttention(nn.Module): - def __init__( - self, dim, heads=8, qkv_bias=False, qk_scale=None, dropout_rate=0.0 - ): - super().__init__() - self.num_heads = heads - head_dim = dim // heads - self.scale = qk_scale or head_dim ** -0.5 - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(dropout_rate) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(dropout_rate) - - def forward(self, x): - B, N, C = x.shape - qkv = ( - self.qkv(x) - .reshape(B, N, 3, self.num_heads, C // self.num_heads) - .permute(2, 0, 3, 1, 4) - ) - q, k, v = ( - qkv[0], - qkv[1], - qkv[2], - ) # make torchscript happy (cannot use tensor as tuple) - - attn = (q @ k.transpose(-2, -1)) * self.scale - attn = attn.softmax(dim=-1) - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class Residual(nn.Module): - def __init__(self, fn): - super().__init__() - self.fn = fn - - def forward(self, x): - return self.fn(x) + x - - -class PreNorm(nn.Module): - def __init__(self, dim, fn): - super().__init__() - self.norm = nn.LayerNorm(dim) - self.fn = fn - - def forward(self, x): - return self.fn(self.norm(x)) - - -class PreNormDrop(nn.Module): - def __init__(self, dim, dropout_rate, fn): - super().__init__() - self.norm = nn.LayerNorm(dim) - self.dropout = nn.Dropout(p=dropout_rate) - self.fn = fn - - def forward(self, x): - return self.dropout(self.fn(self.norm(x))) - - -class FeedForward(nn.Module): - def __init__(self, dim, hidden_dim, dropout_rate): - super().__init__() - self.net = nn.Sequential( - nn.Linear(dim, hidden_dim), - nn.GELU(), - nn.Dropout(p=dropout_rate), - nn.Linear(hidden_dim, dim), - nn.Dropout(p=dropout_rate), - ) - - def forward(self, x): - return self.net(x) - - -class TransformerModel(nn.Module): - def __init__( - self, - dim, #512 - depth, #4 - heads, #8 - mlp_dim, #4096 - 
dropout_rate=0.1, - attn_dropout_rate=0.1, - ): - super().__init__() - layers = [] - for _ in range(depth): - layers.extend( - [ - Residual( - PreNormDrop( - dim, - dropout_rate, - SelfAttention(dim, heads=heads, dropout_rate=attn_dropout_rate), - ) - ), - Residual( - PreNorm(dim, FeedForward(dim, mlp_dim, dropout_rate)) - ), - ] - ) - # dim = dim / 2 - self.net = IntermediateSequential(*layers) - - - def forward(self, x): - return self.net(x) diff --git a/spaces/Amon1/ChatGPTForAcadamic/theme.py b/spaces/Amon1/ChatGPTForAcadamic/theme.py deleted file mode 100644 index 1a186aacabf5d982cbe9426a198f2a0b4bdef9d1..0000000000000000000000000000000000000000 --- a/spaces/Amon1/ChatGPTForAcadamic/theme.py +++ /dev/null @@ -1,152 +0,0 @@ -import gradio as gr - -# gradio可用颜色列表 -# gr.themes.utils.colors.slate (石板色) -# gr.themes.utils.colors.gray (灰色) -# gr.themes.utils.colors.zinc (锌色) -# gr.themes.utils.colors.neutral (中性色) -# gr.themes.utils.colors.stone (石头色) -# gr.themes.utils.colors.red (红色) -# gr.themes.utils.colors.orange (橙色) -# gr.themes.utils.colors.amber (琥珀色) -# gr.themes.utils.colors.yellow (黄色) -# gr.themes.utils.colors.lime (酸橙色) -# gr.themes.utils.colors.green (绿色) -# gr.themes.utils.colors.emerald (祖母绿) -# gr.themes.utils.colors.teal (青蓝色) -# gr.themes.utils.colors.cyan (青色) -# gr.themes.utils.colors.sky (天蓝色) -# gr.themes.utils.colors.blue (蓝色) -# gr.themes.utils.colors.indigo (靛蓝色) -# gr.themes.utils.colors.violet (紫罗兰色) -# gr.themes.utils.colors.purple (紫色) -# gr.themes.utils.colors.fuchsia (洋红色) -# gr.themes.utils.colors.pink (粉红色) -# gr.themes.utils.colors.rose (玫瑰色) - -def adjust_theme(): - try: - color_er = gr.themes.utils.colors.pink - set_theme = gr.themes.Default( - primary_hue=gr.themes.utils.colors.orange, - neutral_hue=gr.themes.utils.colors.gray, - font=["sans-serif", "Microsoft YaHei", "ui-sans-serif", "system-ui", "sans-serif", gr.themes.utils.fonts.GoogleFont("Source Sans Pro")], - font_mono=["ui-monospace", "Consolas", "monospace", gr.themes.utils.fonts.GoogleFont("IBM Plex Mono")]) - set_theme.set( - # Colors - input_background_fill_dark="*neutral_800", - # Transition - button_transition="none", - # Shadows - button_shadow="*shadow_drop", - button_shadow_hover="*shadow_drop_lg", - button_shadow_active="*shadow_inset", - input_shadow="0 0 0 *shadow_spread transparent, *shadow_inset", - input_shadow_focus="0 0 0 *shadow_spread *secondary_50, *shadow_inset", - input_shadow_focus_dark="0 0 0 *shadow_spread *neutral_700, *shadow_inset", - checkbox_label_shadow="*shadow_drop", - block_shadow="*shadow_drop", - form_gap_width="1px", - # Button borders - input_border_width="1px", - input_background_fill="white", - # Gradients - stat_background_fill="linear-gradient(to right, *primary_400, *primary_200)", - stat_background_fill_dark="linear-gradient(to right, *primary_400, *primary_600)", - error_background_fill=f"linear-gradient(to right, {color_er.c100}, *background_fill_secondary)", - error_background_fill_dark="*background_fill_primary", - checkbox_label_background_fill="linear-gradient(to top, *neutral_50, white)", - checkbox_label_background_fill_dark="linear-gradient(to top, *neutral_900, *neutral_800)", - checkbox_label_background_fill_hover="linear-gradient(to top, *neutral_100, white)", - checkbox_label_background_fill_hover_dark="linear-gradient(to top, *neutral_900, *neutral_800)", - button_primary_background_fill="linear-gradient(to bottom right, *primary_100, *primary_300)", - button_primary_background_fill_dark="linear-gradient(to bottom right, *primary_500, 
*primary_600)", - button_primary_background_fill_hover="linear-gradient(to bottom right, *primary_100, *primary_200)", - button_primary_background_fill_hover_dark="linear-gradient(to bottom right, *primary_500, *primary_500)", - button_primary_border_color_dark="*primary_500", - button_secondary_background_fill="linear-gradient(to bottom right, *neutral_100, *neutral_200)", - button_secondary_background_fill_dark="linear-gradient(to bottom right, *neutral_600, *neutral_700)", - button_secondary_background_fill_hover="linear-gradient(to bottom right, *neutral_100, *neutral_100)", - button_secondary_background_fill_hover_dark="linear-gradient(to bottom right, *neutral_600, *neutral_600)", - button_cancel_background_fill=f"linear-gradient(to bottom right, {color_er.c100}, {color_er.c200})", - button_cancel_background_fill_dark=f"linear-gradient(to bottom right, {color_er.c600}, {color_er.c700})", - button_cancel_background_fill_hover=f"linear-gradient(to bottom right, {color_er.c100}, {color_er.c100})", - button_cancel_background_fill_hover_dark=f"linear-gradient(to bottom right, {color_er.c600}, {color_er.c600})", - button_cancel_border_color=color_er.c200, - button_cancel_border_color_dark=color_er.c600, - button_cancel_text_color=color_er.c600, - button_cancel_text_color_dark="white", - ) - except: - set_theme = None; print('gradio版本较旧, 不能自定义字体和颜色') - return set_theme - -advanced_css = """ -/* 设置表格的外边距为1em,内部单元格之间边框合并,空单元格显示. */ -.markdown-body table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} - -/* 设置表格单元格的内边距为5px,边框粗细为1.2px,颜色为--border-color-primary. */ -.markdown-body th, .markdown-body td { - border: 1.2px solid var(--border-color-primary); - padding: 5px; -} - -/* 设置表头背景颜色为rgba(175,184,193,0.2),透明度为0.2. */ -.markdown-body thead { - background-color: rgba(175,184,193,0.2); -} - -/* 设置表头单元格的内边距为0.5em和0.2em. */ -.markdown-body thead th { - padding: .5em .2em; -} - -/* 去掉列表前缀的默认间距,使其与文本线对齐. */ -.markdown-body ol, .markdown-body ul { - padding-inline-start: 2em !important; -} - -/* 设定聊天气泡的样式,包括圆角、最大宽度和阴影等. */ -[class *= "message"] { - border-radius: var(--radius-xl) !important; - /* padding: var(--spacing-xl) !important; */ - /* font-size: var(--text-md) !important; */ - /* line-height: var(--line-md) !important; */ - /* min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); */ - /* min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); */ -} -[data-testid = "bot"] { - max-width: 95%; - /* width: auto !important; */ - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 100%; - /* width: auto !important; */ - border-bottom-right-radius: 0 !important; -} - -/* 行内代码的背景设为淡灰色,设定圆角和间距. 
*/ -.markdown-body code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* 设定代码块的样式,包括背景颜色、内、外边距、圆角。 */ -.markdown-body pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: rgba(175,184,193,0.2); - border-radius: 10px; - padding: 1em; - margin: 1em 2em 1em 0.5em; -} -""" \ No newline at end of file diff --git a/spaces/Amrrs/DragGan-Inversion/torch_utils/ops/__init__.py b/spaces/Amrrs/DragGan-Inversion/torch_utils/ops/__init__.py deleted file mode 100644 index 939e7c6c8f94c4ea1141885c3c3295fe083b06aa..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/torch_utils/ops/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -# empty diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion_img2img.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion_img2img.py deleted file mode 100644 index 1c6aa3d74465dde7b8e1695ab3e381c228b5f9af..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion_img2img.py +++ /dev/null @@ -1,747 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -import warnings -from typing import Any, Callable, Dict, List, Optional, Union - -import numpy as np -import PIL -import torch -from packaging import version -from transformers import CLIPImageProcessor, XLMRobertaTokenizer - -from diffusers.utils import is_accelerate_available, is_accelerate_version - -from ...configuration_utils import FrozenDict -from ...image_processor import VaeImageProcessor -from ...loaders import FromSingleFileMixin, LoraLoaderMixin, TextualInversionLoaderMixin -from ...models import AutoencoderKL, UNet2DConditionModel -from ...schedulers import KarrasDiffusionSchedulers -from ...utils import PIL_INTERPOLATION, deprecate, logging, randn_tensor, replace_example_docstring -from ..pipeline_utils import DiffusionPipeline -from ..stable_diffusion.safety_checker import StableDiffusionSafetyChecker -from . 
import AltDiffusionPipelineOutput, RobertaSeriesModelWithTransformation - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> import requests - >>> import torch - >>> from PIL import Image - >>> from io import BytesIO - - >>> from diffusers import AltDiffusionImg2ImgPipeline - - >>> device = "cuda" - >>> model_id_or_path = "BAAI/AltDiffusion-m9" - >>> pipe = AltDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) - >>> pipe = pipe.to(device) - - >>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" - - >>> response = requests.get(url) - >>> init_image = Image.open(BytesIO(response.content)).convert("RGB") - >>> init_image = init_image.resize((768, 512)) - - >>> # "A fantasy landscape, trending on artstation" - >>> prompt = "幻想风景, artstation" - - >>> images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images - >>> images[0].save("幻想风景.png") - ``` -""" - - -# Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.preprocess -def preprocess(image): - warnings.warn( - "The preprocess method is deprecated and will be removed in a future version. Please" - " use VaeImageProcessor.preprocess instead", - FutureWarning, - ) - if isinstance(image, torch.Tensor): - return image - elif isinstance(image, PIL.Image.Image): - image = [image] - - if isinstance(image[0], PIL.Image.Image): - w, h = image[0].size - w, h = (x - x % 8 for x in (w, h)) # resize to integer multiple of 8 - - image = [np.array(i.resize((w, h), resample=PIL_INTERPOLATION["lanczos"]))[None, :] for i in image] - image = np.concatenate(image, axis=0) - image = np.array(image).astype(np.float32) / 255.0 - image = image.transpose(0, 3, 1, 2) - image = 2.0 * image - 1.0 - image = torch.from_numpy(image) - elif isinstance(image[0], torch.Tensor): - image = torch.cat(image, dim=0) - return image - - -# Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.StableDiffusionImg2ImgPipeline with Stable->Alt, CLIPTextModel->RobertaSeriesModelWithTransformation, CLIPTokenizer->XLMRobertaTokenizer, AltDiffusionSafetyChecker->StableDiffusionSafetyChecker -class AltDiffusionImg2ImgPipeline( - DiffusionPipeline, TextualInversionLoaderMixin, LoraLoaderMixin, FromSingleFileMixin -): - r""" - Pipeline for text-guided image-to-image generation using Alt Diffusion. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods - implemented for all pipelines (downloading, saving, running on a particular device, etc.). - - The pipeline also inherits the following loading methods: - - [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`] for loading textual inversion embeddings - - [`~loaders.LoraLoaderMixin.load_lora_weights`] for loading LoRA weights - - [`~loaders.LoraLoaderMixin.save_lora_weights`] for saving LoRA weights - - [`~loaders.FromSingleFileMixin.from_single_file`] for loading `.ckpt` files - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. - text_encoder ([`~transformers.RobertaSeriesModelWithTransformation`]): - Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)). - tokenizer ([`~transformers.XLMRobertaTokenizer`]): - A `XLMRobertaTokenizer` to tokenize text. 
- unet ([`UNet2DConditionModel`]): - A `UNet2DConditionModel` to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more details - about a model's potential harms. - feature_extractor ([`~transformers.CLIPImageProcessor`]): - A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`. - """ - _optional_components = ["safety_checker", "feature_extractor"] - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: RobertaSeriesModelWithTransformation, - tokenizer: XLMRobertaTokenizer, - unet: UNet2DConditionModel, - scheduler: KarrasDiffusionSchedulers, - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPImageProcessor, - requires_safety_checker: bool = True, - ): - super().__init__() - - if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`" - f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure " - "to update the config accordingly as leaving `steps_offset` might led to incorrect results" - " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub," - " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`" - " file" - ) - deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["steps_offset"] = 1 - scheduler._internal_dict = FrozenDict(new_config) - - if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`." - " `clip_sample` should be set to False in the configuration file. Please make sure to update the" - " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in" - " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very" - " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file" - ) - deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["clip_sample"] = False - scheduler._internal_dict = FrozenDict(new_config) - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Alt Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." 
- ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." - ) - - is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse( - version.parse(unet.config._diffusers_version).base_version - ) < version.parse("0.9.0.dev0") - is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64 - if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64: - deprecation_message = ( - "The configuration file of the unet has set the default `sample_size` to smaller than" - " 64 which seems highly unlikely. If your checkpoint is a fine-tuned version of any of the" - " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-" - " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5" - " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the" - " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`" - " in the config might lead to incorrect results in future versions. If you have downloaded this" - " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for" - " the `unet/config.json` file" - ) - deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(unet.config) - new_config["sample_size"] = 64 - unet._internal_dict = FrozenDict(new_config) - - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - def enable_model_cpu_offload(self, gpu_id=0): - r""" - Offload all models to CPU to reduce memory usage with a low impact on performance. Moves one whole model at a - time to the GPU when its `forward` method is called, and the model remains in GPU until the next model runs. - Memory savings are lower than using `enable_sequential_cpu_offload`, but performance is much better due to the - iterative execution of the `unet`. - """ - if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"): - from accelerate import cpu_offload_with_hook - else: - raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.") - - device = torch.device(f"cuda:{gpu_id}") - - if self.device.type != "cpu": - self.to("cpu", silence_dtype_warnings=True) - torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist) - - hook = None - for cpu_offloaded_model in [self.text_encoder, self.unet, self.vae]: - _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook) - - if self.safety_checker is not None: - _, hook = cpu_offload_with_hook(self.safety_checker, device, prev_module_hook=hook) - - # We'll offload the last model manually. 
- self.final_offload_hook = hook - - def _encode_prompt( - self, - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt=None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - lora_scale: Optional[float] = None, - ): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `List[str]`, *optional*): - prompt to be encoded - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is - less than `1`). - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - lora_scale (`float`, *optional*): - A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. - """ - # set lora scale so that monkey patched LoRA - # function of text encoder can correctly access it - if lora_scale is not None and isinstance(self, LoraLoaderMixin): - self._lora_scale = lora_scale - - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - if prompt_embeds is None: - # textual inversion: procecss multi-vector tokens if necessary - if isinstance(self, TextualInversionLoaderMixin): - prompt = self.maybe_convert_prompt(prompt, self.tokenizer) - - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( - text_input_ids, untruncated_ids - ): - removed_text = self.tokenizer.batch_decode( - untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1] - ) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = text_inputs.attention_mask.to(device) - else: - attention_mask = None - - prompt_embeds = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - prompt_embeds = prompt_embeds[0] - - prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) - - bs_embed, seq_len, _ = prompt_embeds.shape - # duplicate text embeddings for each generation per prompt, using mps friendly method - prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) - 
prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance and negative_prompt_embeds is None: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif prompt is not None and type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." - ) - else: - uncond_tokens = negative_prompt - - # textual inversion: procecss multi-vector tokens if necessary - if isinstance(self, TextualInversionLoaderMixin): - uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer) - - max_length = prompt_embeds.shape[1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = uncond_input.attention_mask.to(device) - else: - attention_mask = None - - negative_prompt_embeds = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - negative_prompt_embeds = negative_prompt_embeds[0] - - if do_classifier_free_guidance: - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = negative_prompt_embeds.shape[1] - - negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) - - negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) - negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) - - return prompt_embeds - - def run_safety_checker(self, image, device, dtype): - if self.safety_checker is None: - has_nsfw_concept = None - else: - if torch.is_tensor(image): - feature_extractor_input = self.image_processor.postprocess(image, output_type="pil") - else: - feature_extractor_input = self.image_processor.numpy_to_pil(image) - safety_checker_input = self.feature_extractor(feature_extractor_input, return_tensors="pt").to(device) - image, has_nsfw_concept = self.safety_checker( - images=image, clip_input=safety_checker_input.pixel_values.to(dtype) - ) - return image, has_nsfw_concept - - def decode_latents(self, latents): - warnings.warn( - ( - "The decode_latents method is deprecated and will be removed in a future version. 
Please" - " use VaeImageProcessor instead" - ), - FutureWarning, - ) - latents = 1 / self.vae.config.scaling_factor * latents - image = self.vae.decode(latents, return_dict=False)[0] - image = (image / 2 + 0.5).clamp(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - return image - - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - def check_inputs( - self, prompt, strength, callback_steps, negative_prompt=None, prompt_embeds=None, negative_prompt_embeds=None - ): - if strength < 0 or strength > 1: - raise ValueError(f"The value of strength should in [0.0, 1.0] but is {strength}") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - if prompt is not None and prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" - " only forward one of the two." - ) - elif prompt is None and prompt_embeds is None: - raise ValueError( - "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." - ) - elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if negative_prompt is not None and negative_prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" - f" {negative_prompt_embeds}. Please make sure to only forward one of the two." - ) - - if prompt_embeds is not None and negative_prompt_embeds is not None: - if prompt_embeds.shape != negative_prompt_embeds.shape: - raise ValueError( - "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" - f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" - f" {negative_prompt_embeds.shape}." 
- ) - - def get_timesteps(self, num_inference_steps, strength, device): - # get the original timestep using init_timestep - init_timestep = min(int(num_inference_steps * strength), num_inference_steps) - - t_start = max(num_inference_steps - init_timestep, 0) - timesteps = self.scheduler.timesteps[t_start * self.scheduler.order :] - - return timesteps, num_inference_steps - t_start - - def prepare_latents(self, image, timestep, batch_size, num_images_per_prompt, dtype, device, generator=None): - if not isinstance(image, (torch.Tensor, PIL.Image.Image, list)): - raise ValueError( - f"`image` has to be of type `torch.Tensor`, `PIL.Image.Image` or list but is {type(image)}" - ) - - image = image.to(device=device, dtype=dtype) - - batch_size = batch_size * num_images_per_prompt - - if image.shape[1] == 4: - init_latents = image - - else: - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective" - f" batch size of {batch_size}. Make sure the batch size matches the length of the generators." - ) - - elif isinstance(generator, list): - init_latents = [ - self.vae.encode(image[i : i + 1]).latent_dist.sample(generator[i]) for i in range(batch_size) - ] - init_latents = torch.cat(init_latents, dim=0) - else: - init_latents = self.vae.encode(image).latent_dist.sample(generator) - - init_latents = self.vae.config.scaling_factor * init_latents - - if batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] == 0: - # expand init_latents for batch_size - deprecation_message = ( - f"You have passed {batch_size} text prompts (`prompt`), but only {init_latents.shape[0]} initial" - " images (`image`). Initial images are now duplicating to match the number of text prompts. Note" - " that this behavior is deprecated and will be removed in a version 1.0.0. Please make sure to update" - " your script to pass as many initial images as text prompts to suppress this warning." - ) - deprecate("len(prompt) != len(image)", "1.0.0", deprecation_message, standard_warn=False) - additional_image_per_prompt = batch_size // init_latents.shape[0] - init_latents = torch.cat([init_latents] * additional_image_per_prompt, dim=0) - elif batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] != 0: - raise ValueError( - f"Cannot duplicate `image` of batch size {init_latents.shape[0]} to {batch_size} text prompts." 
- ) - else: - init_latents = torch.cat([init_latents], dim=0) - - shape = init_latents.shape - noise = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - - # get latents - init_latents = self.scheduler.add_noise(init_latents, noise, timestep) - latents = init_latents - - return latents - - @torch.no_grad() - @replace_example_docstring(EXAMPLE_DOC_STRING) - def __call__( - self, - prompt: Union[str, List[str]] = None, - image: Union[ - torch.FloatTensor, - PIL.Image.Image, - np.ndarray, - List[torch.FloatTensor], - List[PIL.Image.Image], - List[np.ndarray], - ] = None, - strength: float = 0.8, - num_inference_steps: Optional[int] = 50, - guidance_scale: Optional[float] = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: Optional[float] = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - ): - r""" - The call function to the pipeline for generation. - - Args: - prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`. - image (`torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.FloatTensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`): - `Image` or tensor representing an image batch to be used as the starting point. Can also accept image - latents as `image`, but if passing latents directly it is not encoded again. - strength (`float`, *optional*, defaults to 0.8): - Indicates extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a - starting point and more noise is added the higher the `strength`. The number of denoising steps depends - on the amount of noise initially added. When `strength` is 1, added noise is maximum and the denoising - process runs for the full number of iterations specified in `num_inference_steps`. A value of 1 - essentially ignores `image`. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. This parameter is modulated by `strength`. - guidance_scale (`float`, *optional*, defaults to 7.5): - A higher guidance scale value encourages the model to generate images closely linked to the text - `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide what to not include in image generation. If not defined, you need to - pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies - to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers. 
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make - generation deterministic. - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not - provided, text embeddings are generated from the `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If - not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generated image. Choose between `PIL.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.AltDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that calls every `callback_steps` steps during inference. The function is called with the - following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function is called. If not specified, the callback is called at - every step. - cross_attention_kwargs (`dict`, *optional*): - A kwargs dictionary that if specified is passed along to the [`AttentionProcessor`] as defined in - [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py). - - Examples: - - Returns: - [`~pipelines.stable_diffusion.AltDiffusionPipelineOutput`] or `tuple`: - If `return_dict` is `True`, [`~pipelines.stable_diffusion.AltDiffusionPipelineOutput`] is returned, - otherwise a `tuple` is returned where the first element is a list with the generated images and the - second element is a list of `bool`s indicating whether the corresponding generated image contains - "not-safe-for-work" (nsfw) content. - """ - # 1. Check inputs. Raise error if not correct - self.check_inputs(prompt, strength, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds) - - # 2. Define call parameters - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - text_encoder_lora_scale = ( - cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None - ) - prompt_embeds = self._encode_prompt( - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt, - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - lora_scale=text_encoder_lora_scale, - ) - - # 4. Preprocess image - image = self.image_processor.preprocess(image) - - # 5. 
set timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength, device) - latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt) - - # 6. Prepare latent variables - latents = self.prepare_latents( - image, latent_timestep, batch_size, num_images_per_prompt, prompt_embeds.dtype, device, generator - ) - - # 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 8. Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.unet( - latent_model_input, - t, - encoder_hidden_states=prompt_embeds, - cross_attention_kwargs=cross_attention_kwargs, - return_dict=False, - )[0] - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0] - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - if not output_type == "latent": - image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0] - image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype) - else: - image = latents - has_nsfw_concept = None - - if has_nsfw_concept is None: - do_denormalize = [True] * image.shape[0] - else: - do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept] - - image = self.image_processor.postprocess(image, output_type=output_type, do_denormalize=do_denormalize) - - # Offload last model to CPU - if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: - self.final_offload_hook.offload() - - if not return_dict: - return (image, has_nsfw_concept) - - return AltDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/res2net/cascade_rcnn_r2_101_fpn_20e_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/res2net/cascade_rcnn_r2_101_fpn_20e_coco.py deleted file mode 100644 index 1cac759ab66323cf034f21a9afff770f79c10035..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/res2net/cascade_rcnn_r2_101_fpn_20e_coco.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = '../cascade_rcnn/cascade_rcnn_r50_fpn_20e_coco.py' -model = dict( - pretrained='open-mmlab://res2net101_v1d_26w_4s', - backbone=dict(type='Res2Net', depth=101, scales=4, base_width=26)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/gcnet/gcnet_r50-d8_769x769_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/gcnet/gcnet_r50-d8_769x769_40k_cityscapes.py deleted file mode 100644 index 
332495d3d7f7d7c7c0e0aca4e379cd54e2ed07de..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/gcnet/gcnet_r50-d8_769x769_40k_cityscapes.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = [ - '../_base_/models/gcnet_r50-d8.py', - '../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py', - '../_base_/schedules/schedule_40k.py' -] -model = dict( - decode_head=dict(align_corners=True), - auxiliary_head=dict(align_corners=True), - test_cfg=dict(mode='slide', crop_size=(769, 769), stride=(513, 513))) diff --git a/spaces/Andy1621/uniformer_video_demo/app.py b/spaces/Andy1621/uniformer_video_demo/app.py deleted file mode 100644 index 205b7273836161e80d87bad3f804fef0a7503e54..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_video_demo/app.py +++ /dev/null @@ -1,127 +0,0 @@ -import os - -import torch -import numpy as np -import torch.nn.functional as F -import torchvision.transforms as T -from PIL import Image -from decord import VideoReader -from decord import cpu -from uniformer import uniformer_small -from kinetics_class_index import kinetics_classnames -from transforms import ( - GroupNormalize, GroupScale, GroupCenterCrop, - Stack, ToTorchFormatTensor -) - -import gradio as gr -from huggingface_hub import hf_hub_download - -# Device on which to run the model -# Set to cuda to load on GPU -device = "cpu" -# os.system("wget https://cdn-lfs.huggingface.co/Andy1621/uniformer/d5fd7b0c49ee6a5422ef5d0c884d962c742003bfbd900747485eb99fa269d0db") -model_path = hf_hub_download(repo_id="Andy1621/uniformer", filename="uniformer_small_k400_16x8.pth") -# Pick a pretrained model -model = uniformer_small() -# state_dict = torch.load('d5fd7b0c49ee6a5422ef5d0c884d962c742003bfbd900747485eb99fa269d0db', map_location='cpu') -state_dict = torch.load(model_path, map_location='cpu') -model.load_state_dict(state_dict) - -# Set to eval mode and move to desired device -model = model.to(device) -model = model.eval() - -# Create an id to label name mapping -kinetics_id_to_classname = {} -for k, v in kinetics_classnames.items(): - kinetics_id_to_classname[k] = v - - -def get_index(num_frames, num_segments=16, dense_sample_rate=8): - sample_range = num_segments * dense_sample_rate - sample_pos = max(1, 1 + num_frames - sample_range) - t_stride = dense_sample_rate - start_idx = 0 if sample_pos == 1 else sample_pos // 2 - offsets = np.array([ - (idx * t_stride + start_idx) % - num_frames for idx in range(num_segments) - ]) - return offsets + 1 - - -def load_video(video_path): - vr = VideoReader(video_path, ctx=cpu(0)) - num_frames = len(vr) - frame_indices = get_index(num_frames, 16, 16) - - # transform - crop_size = 224 - scale_size = 256 - input_mean = [0.485, 0.456, 0.406] - input_std = [0.229, 0.224, 0.225] - - transform = T.Compose([ - GroupScale(int(scale_size)), - GroupCenterCrop(crop_size), - Stack(), - ToTorchFormatTensor(), - GroupNormalize(input_mean, input_std) - ]) - - images_group = list() - for frame_index in frame_indices: - img = Image.fromarray(vr[frame_index].asnumpy()) - images_group.append(img) - torch_imgs = transform(images_group) - return torch_imgs - - -def inference(video): - vid = load_video(video) - - # The model expects inputs of shape: B x C x H x W - TC, H, W = vid.shape - inputs = vid.reshape(1, TC//3, 3, H, W).permute(0, 2, 1, 3, 4) - - prediction = model(inputs) - prediction = F.softmax(prediction, dim=1).flatten() - - return {kinetics_id_to_classname[str(i)]: float(prediction[i]) for i in range(400)} - - -def 
set_example_video(example: list) -> dict: - return gr.Video.update(value=example[0]) - - -demo = gr.Blocks() -with demo: - gr.Markdown( - """ - # UniFormer-S - Gradio demo for UniFormer: To use it, simply upload your video, or click one of the examples to load them. Read more at the links below. - """ - ) - - with gr.Box(): - with gr.Row(): - with gr.Column(): - with gr.Row(): - input_video = gr.Video(label='Input Video') - with gr.Row(): - submit_button = gr.Button('Submit') - with gr.Column(): - label = gr.Label(num_top_classes=5) - with gr.Row(): - example_videos = gr.Dataset(components=[input_video], samples=[['hitting_baseball.mp4'], ['hoverboarding.mp4'], ['yoga.mp4']]) - - gr.Markdown( - """ -

[ICLR2022] UniFormer: Unified Transformer for Efficient Spatiotemporal Representation Learning | Github Repo

- """ - ) - - submit_button.click(fn=inference, inputs=input_video, outputs=label) - example_videos.click(fn=set_example_video, inputs=example_videos, outputs=example_videos.components) - -demo.launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/monkey_patch_gptq_lora.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/monkey_patch_gptq_lora.py deleted file mode 100644 index 3166bd33ceba449cb542861b0238818f68c7b02e..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/monkey_patch_gptq_lora.py +++ /dev/null @@ -1,39 +0,0 @@ -# Copied from https://github.com/johnsmith0031/alpaca_lora_4bit - -from pathlib import Path - -import alpaca_lora_4bit.autograd_4bit as autograd_4bit -from alpaca_lora_4bit.amp_wrapper import AMPWrapper -from alpaca_lora_4bit.autograd_4bit import ( - Autograd4bitQuantLinear, - load_llama_model_4bit_low_ram -) -from alpaca_lora_4bit.models import Linear4bitLt -from alpaca_lora_4bit.monkeypatch.peft_tuners_lora_monkey_patch import ( - replace_peft_model_with_int4_lora_model -) - -from modules import shared -from modules.GPTQ_loader import find_quantized_model_file - -replace_peft_model_with_int4_lora_model() - - -def load_model_llama(model_name): - config_path = str(Path(f'{shared.args.model_dir}/{model_name}')) - model_path = str(find_quantized_model_file(model_name)) - model, tokenizer = load_llama_model_4bit_low_ram(config_path, model_path, groupsize=shared.args.groupsize, is_v1_model=False) - for _, m in model.named_modules(): - if isinstance(m, Autograd4bitQuantLinear) or isinstance(m, Linear4bitLt): - if m.is_v1_model: - m.zeros = m.zeros.half() - m.scales = m.scales.half() - m.bias = m.bias.half() - - autograd_4bit.auto_switch = True - - model.half() - wrapper = AMPWrapper(model) - wrapper.apply_generate() - - return model, tokenizer diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/midas/utils.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/midas/utils.py deleted file mode 100644 index 9a9d3b5b66370fa98da9e067ba53ead848ea9a59..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/midas/utils.py +++ /dev/null @@ -1,189 +0,0 @@ -"""Utils for monoDepth.""" -import sys -import re -import numpy as np -import cv2 -import torch - - -def read_pfm(path): - """Read pfm file. - - Args: - path (str): path to file - - Returns: - tuple: (data, scale) - """ - with open(path, "rb") as file: - - color = None - width = None - height = None - scale = None - endian = None - - header = file.readline().rstrip() - if header.decode("ascii") == "PF": - color = True - elif header.decode("ascii") == "Pf": - color = False - else: - raise Exception("Not a PFM file: " + path) - - dim_match = re.match(r"^(\d+)\s(\d+)\s$", file.readline().decode("ascii")) - if dim_match: - width, height = list(map(int, dim_match.groups())) - else: - raise Exception("Malformed PFM header.") - - scale = float(file.readline().decode("ascii").rstrip()) - if scale < 0: - # little-endian - endian = "<" - scale = -scale - else: - # big-endian - endian = ">" - - data = np.fromfile(file, endian + "f") - shape = (height, width, 3) if color else (height, width) - - data = np.reshape(data, shape) - data = np.flipud(data) - - return data, scale - - -def write_pfm(path, image, scale=1): - """Write pfm file. - - Args: - path (str): pathto file - image (array): data - scale (int, optional): Scale. Defaults to 1. 
- """ - - with open(path, "wb") as file: - color = None - - if image.dtype.name != "float32": - raise Exception("Image dtype must be float32.") - - image = np.flipud(image) - - if len(image.shape) == 3 and image.shape[2] == 3: # color image - color = True - elif ( - len(image.shape) == 2 or len(image.shape) == 3 and image.shape[2] == 1 - ): # greyscale - color = False - else: - raise Exception("Image must have H x W x 3, H x W x 1 or H x W dimensions.") - - file.write("PF\n" if color else "Pf\n".encode()) - file.write("%d %d\n".encode() % (image.shape[1], image.shape[0])) - - endian = image.dtype.byteorder - - if endian == "<" or endian == "=" and sys.byteorder == "little": - scale = -scale - - file.write("%f\n".encode() % scale) - - image.tofile(file) - - -def read_image(path): - """Read image and output RGB image (0-1). - - Args: - path (str): path to file - - Returns: - array: RGB image (0-1) - """ - img = cv2.imread(path) - - if img.ndim == 2: - img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) - - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) / 255.0 - - return img - - -def resize_image(img): - """Resize image and make it fit for network. - - Args: - img (array): image - - Returns: - tensor: data ready for network - """ - height_orig = img.shape[0] - width_orig = img.shape[1] - - if width_orig > height_orig: - scale = width_orig / 384 - else: - scale = height_orig / 384 - - height = (np.ceil(height_orig / scale / 32) * 32).astype(int) - width = (np.ceil(width_orig / scale / 32) * 32).astype(int) - - img_resized = cv2.resize(img, (width, height), interpolation=cv2.INTER_AREA) - - img_resized = ( - torch.from_numpy(np.transpose(img_resized, (2, 0, 1))).contiguous().float() - ) - img_resized = img_resized.unsqueeze(0) - - return img_resized - - -def resize_depth(depth, width, height): - """Resize depth map and bring to CPU (numpy). - - Args: - depth (tensor): depth - width (int): image width - height (int): image height - - Returns: - array: processed depth - """ - depth = torch.squeeze(depth[0, :, :, :]).to("cpu") - - depth_resized = cv2.resize( - depth.numpy(), (width, height), interpolation=cv2.INTER_CUBIC - ) - - return depth_resized - -def write_depth(path, depth, bits=1): - """Write depth map to pfm and png file. - - Args: - path (str): filepath without extension - depth (array): depth - """ - write_pfm(path + ".pfm", depth.astype(np.float32)) - - depth_min = depth.min() - depth_max = depth.max() - - max_val = (2**(8*bits))-1 - - if depth_max - depth_min > np.finfo("float").eps: - out = max_val * (depth - depth_min) / (depth_max - depth_min) - else: - out = np.zeros(depth.shape, dtype=depth.type) - - if bits == 1: - cv2.imwrite(path + ".png", out.astype("uint8")) - elif bits == 2: - cv2.imwrite(path + ".png", out.astype("uint16")) - - return diff --git a/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/README.md b/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/README.md deleted file mode 100644 index 73e8838f3a2e0eee8df1bd273a6fa08a9cdf3cb8..0000000000000000000000000000000000000000 --- a/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/README.md +++ /dev/null @@ -1,174 +0,0 @@ ---- -license: mit -sdk: gradio -emoji: 😻 -colorTo: green -pinned: true ---- -
- -
- -# :sauropod: Grounding DINO - -[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/grounding-dino-marrying-dino-with-grounded/zero-shot-object-detection-on-mscoco)](https://paperswithcode.com/sota/zero-shot-object-detection-on-mscoco?p=grounding-dino-marrying-dino-with-grounded) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/grounding-dino-marrying-dino-with-grounded/zero-shot-object-detection-on-odinw)](https://paperswithcode.com/sota/zero-shot-object-detection-on-odinw?p=grounding-dino-marrying-dino-with-grounded) \ -[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/grounding-dino-marrying-dino-with-grounded/object-detection-on-coco-minival)](https://paperswithcode.com/sota/object-detection-on-coco-minival?p=grounding-dino-marrying-dino-with-grounded) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/grounding-dino-marrying-dino-with-grounded/object-detection-on-coco)](https://paperswithcode.com/sota/object-detection-on-coco?p=grounding-dino-marrying-dino-with-grounded) - - -**[IDEA-CVR, IDEA-Research](https://github.com/IDEA-Research)** - -[Shilong Liu](http://www.lsl.zone/), [Zhaoyang Zeng](https://scholar.google.com/citations?user=U_cvvUwAAAAJ&hl=zh-CN&oi=ao), [Tianhe Ren](https://rentainhe.github.io/), [Feng Li](https://scholar.google.com/citations?user=ybRe9GcAAAAJ&hl=zh-CN), [Hao Zhang](https://scholar.google.com/citations?user=B8hPxMQAAAAJ&hl=zh-CN), [Jie Yang](https://github.com/yangjie-cv), [Chunyuan Li](https://scholar.google.com/citations?user=Zd7WmXUAAAAJ&hl=zh-CN&oi=ao), [Jianwei Yang](https://jwyang.github.io/), [Hang Su](https://scholar.google.com/citations?hl=en&user=dxN1_X0AAAAJ&view_op=list_works&sortby=pubdate), [Jun Zhu](https://scholar.google.com/citations?hl=en&user=axsP38wAAAAJ), [Lei Zhang](https://www.leizhang.org/):email:. - - -[[`Paper`](https://arxiv.org/abs/2303.05499)] [[`Demo`](https://huggingface.co/spaces/ShilongLiu/Grounding_DINO_demo)] [[`BibTex`](#black_nib-citation)] - - -PyTorch implementation and pretrained models for Grounding DINO. For details, see the paper **[Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection](https://arxiv.org/abs/2303.05499)**. 
- -## :sun_with_face: Helpful Tutorial - -- :grapes: [[Read our arXiv Paper](https://arxiv.org/abs/2303.05499)] -- :apple: [[Watch our simple introduction video on YouTube](https://youtu.be/wxWDt5UiwY8)] -- :blossom:  [[Try the Colab Demo](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/zero-shot-object-detection-with-grounding-dino.ipynb)] -- :sunflower: [[Try our Official Huggingface Demo](https://huggingface.co/spaces/ShilongLiu/Grounding_DINO_demo)] -- :maple_leaf: [[Watch the Step by Step Tutorial about GroundingDINO by Roboflow AI](https://youtu.be/cMa77r3YrDk)] -- :mushroom: [[GroundingDINO: Automated Dataset Annotation and Evaluation by Roboflow AI](https://youtu.be/C4NqaRBz_Kw)] -- :hibiscus: [[Accelerate Image Annotation with SAM and GroundingDINO by Roboflow AI](https://youtu.be/oEQYStnF2l8)] -- :white_flower: [[Autodistill: Train YOLOv8 with ZERO Annotations based on Grounding-DINO and Grounded-SAM by Roboflow AI](https://github.com/autodistill/autodistill)] - - - - - - -## :sparkles: Highlight Projects - -- [Semantic-SAM: a universal image segmentation model to enable segment and recognize anything at any desired granularity.](https://github.com/UX-Decoder/Semantic-SAM), -- [DetGPT: Detect What You Need via Reasoning](https://github.com/OptimalScale/DetGPT) -- [Grounded-SAM: Marrying Grounding DINO with Segment Anything](https://github.com/IDEA-Research/Grounded-Segment-Anything) -- [Grounding DINO with Stable Diffusion](demo/image_editing_with_groundingdino_stablediffusion.ipynb) -- [Grounding DINO with GLIGEN for Controllable Image Editing](demo/image_editing_with_groundingdino_gligen.ipynb) -- [OpenSeeD: A Simple and Strong Openset Segmentation Model](https://github.com/IDEA-Research/OpenSeeD) -- [SEEM: Segment Everything Everywhere All at Once](https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once) -- [X-GPT: Conversational Visual Agent supported by X-Decoder](https://github.com/microsoft/X-Decoder/tree/xgpt) -- [GLIGEN: Open-Set Grounded Text-to-Image Generation](https://github.com/gligen/GLIGEN) -- [LLaVA: Large Language and Vision Assistant](https://github.com/haotian-liu/LLaVA) - - - - - - - - -## :bulb: Highlight - -- **Open-Set Detection.** Detect **everything** with language! -- **High Performancce.** COCO zero-shot **52.5 AP** (training without COCO data!). COCO fine-tune **63.0 AP**. -- **Flexible.** Collaboration with Stable Diffusion for Image Editting. - - - - -## :fire: News -- **`2023/07/18`**: We release [Semantic-SAM](https://github.com/UX-Decoder/Semantic-SAM), a universal image segmentation model to enable segment and recognize anything at any desired granularity. **Code** and **checkpoint** are available! -- **`2023/06/17`**: We provide an example to evaluate Grounding DINO on COCO zero-shot performance. -- **`2023/04/15`**: Refer to [CV in the Wild Readings](https://github.com/Computer-Vision-in-the-Wild/CVinW_Readings) for those who are interested in open-set recognition! -- **`2023/04/08`**: We release [demos](demo/image_editing_with_groundingdino_gligen.ipynb) to combine [Grounding DINO](https://arxiv.org/abs/2303.05499) with [GLIGEN](https://github.com/gligen/GLIGEN) for more controllable image editings. -- **`2023/04/08`**: We release [demos](demo/image_editing_with_groundingdino_stablediffusion.ipynb) to combine [Grounding DINO](https://arxiv.org/abs/2303.05499) with [Stable Diffusion](https://github.com/Stability-AI/StableDiffusion) for image editings. 
-- **`2023/04/06`**: We build a new demo by marrying GroundingDINO with [Segment-Anything](https://github.com/facebookresearch/segment-anything) named **[Grounded-Segment-Anything](https://github.com/IDEA-Research/Grounded-Segment-Anything)** aims to support segmentation in GroundingDINO. -- **`2023/03/28`**: A YouTube [video](https://youtu.be/cMa77r3YrDk) about Grounding DINO and basic object detection prompt engineering. [[SkalskiP](https://github.com/SkalskiP)] -- **`2023/03/28`**: Add a [demo](https://huggingface.co/spaces/ShilongLiu/Grounding_DINO_demo) on Hugging Face Space! -- **`2023/03/27`**: Support CPU-only mode. Now the model can run on machines without GPUs. -- **`2023/03/25`**: A [demo](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/zero-shot-object-detection-with-grounding-dino.ipynb) for Grounding DINO is available at Colab. [[SkalskiP](https://github.com/SkalskiP)] -- **`2023/03/22`**: Code is available Now! - -
- -Description - - Paper introduction. -ODinW -Marrying Grounding DINO and GLIGEN -gd_gligen -
- -## :star: Explanations/Tips for Grounding DINO Inputs and Outputs -- Grounding DINO accepts an `(image, text)` pair as inputs. -- It outputs `900` (by default) object boxes. Each box has similarity scores across all input words. (as shown in Figures below.) -- We defaultly choose the boxes whose highest similarities are higher than a `box_threshold`. -- We extract the words whose similarities are higher than the `text_threshold` as predicted labels. -- If you want to obtain objects of specific phrases, like the `dogs` in the sentence `two dogs with a stick.`, you can select the boxes with highest text similarities with `dogs` as final outputs. -- Note that each word can be split to **more than one** tokens with different tokenlizers. The number of words in a sentence may not equal to the number of text tokens. -- We suggest separating different category names with `.` for Grounding DINO. -![model_explain1](.asset/model_explan1.PNG) -![model_explain2](.asset/model_explan2.PNG) - - -## :medal_military: Results - -
- -COCO Object Detection Results - -COCO -
- -
- -ODinW Object Detection Results - -ODinW -
- -
- -Marrying Grounding DINO with Stable Diffusion for Image Editing - -See our example notebook for more details. -GD_SD -
- - -
- -Marrying Grounding DINO with GLIGEN for more Detailed Image Editing. - -See our example notebook for more details. -GD_GLIGEN -
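The input/output conventions described in the Explanations/Tips section above can be illustrated with a minimal inference sketch. The snippet below is a hypothetical example assuming the `load_model`, `load_image`, and `predict` helpers exposed in `groundingdino.util.inference` (as used in the project's demo scripts); the config path, checkpoint path, image path, prompt, and threshold values are placeholders rather than required settings.

```python
# Minimal sketch of zero-shot detection with Grounding DINO.
# Paths, prompt, and thresholds below are placeholders (assumptions), not fixed values.
from groundingdino.util.inference import load_model, load_image, predict

model = load_model(
    "groundingdino/config/GroundingDINO_SwinT_OGC.py",  # model config (assumed path)
    "weights/groundingdino_swint_ogc.pth",              # checkpoint (assumed path)
)

# Separate different category names with "." as suggested above.
TEXT_PROMPT = "dog . stick ."
BOX_THRESHOLD = 0.35   # keep boxes whose highest word similarity exceeds this
TEXT_THRESHOLD = 0.25  # keep words whose similarity exceeds this as predicted labels

image_source, image = load_image("assets/demo.jpg")  # placeholder image path

boxes, logits, phrases = predict(
    model=model,
    image=image,
    caption=TEXT_PROMPT,
    box_threshold=BOX_THRESHOLD,
    text_threshold=TEXT_THRESHOLD,
)  # device defaults may need adjusting on CPU-only machines

# boxes: surviving boxes; logits: their best similarity scores;
# phrases: the words extracted above the text threshold.
print(list(zip(phrases, logits.tolist())))
```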
- -## :sauropod: Model: Grounding DINO - -Includes: a text backbone, an image backbone, a feature enhancer, a language-guided query selection, and a cross-modality decoder. - -![arch](.asset/arch.png) - - -## :hearts: Acknowledgement - -Our model is related to [DINO](https://github.com/IDEA-Research/DINO) and [GLIP](https://github.com/microsoft/GLIP). Thanks for their great work! - -We also thank great previous work including DETR, Deformable DETR, SMCA, Conditional DETR, Anchor DETR, Dynamic DETR, DAB-DETR, DN-DETR, etc. More related work are available at [Awesome Detection Transformer](https://github.com/IDEACVR/awesome-detection-transformer). A new toolbox [detrex](https://github.com/IDEA-Research/detrex) is available as well. - -Thanks [Stable Diffusion](https://github.com/Stability-AI/StableDiffusion) and [GLIGEN](https://github.com/gligen/GLIGEN) for their awesome models. - - -## :black_nib: Citation - -If you find our work helpful for your research, please consider citing the following BibTeX entry. - -```bibtex -@article{liu2023grounding, - title={Grounding dino: Marrying dino with grounded pre-training for open-set object detection}, - author={Liu, Shilong and Zeng, Zhaoyang and Ren, Tianhe and Li, Feng and Zhang, Hao and Yang, Jie and Li, Chunyuan and Yang, Jianwei and Su, Hang and Zhu, Jun and others}, - journal={arXiv preprint arXiv:2303.05499}, - year={2023} -} -``` \ No newline at end of file diff --git a/spaces/Ash2219/AIchatbot/app.py b/spaces/Ash2219/AIchatbot/app.py deleted file mode 100644 index ca8b6d40b4ab898c70da92f4a4298de2baf703dc..0000000000000000000000000000000000000000 --- a/spaces/Ash2219/AIchatbot/app.py +++ /dev/null @@ -1,164 +0,0 @@ -import os -import re -import requests -import json -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') -PLAY_HT_API_KEY=os.getenv('PLAY_HT_API_KEY') -PLAY_HT_USER_ID=os.getenv('PLAY_HT_USER_ID') - -PLAY_HT_VOICE_ID=os.getenv('PLAY_HT_VOICE_ID') -play_ht_api_get_audio_url = "https://play.ht/api/v2/tts" - - -template = """You are a helpful assistant to answer user queries. 
-{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -headers = { - "accept": "text/event-stream", - "content-type": "application/json", - "AUTHORIZATION": "Bearer "+ PLAY_HT_API_KEY, - "X-USER-ID": PLAY_HT_USER_ID -} - - -def get_payload(text): - return { - "text": text, - "voice": PLAY_HT_VOICE_ID, - "quality": "medium", - "output_format": "mp3", - "speed": 1, - "sample_rate": 24000, - "seed": None, - "temperature": None - } - -def get_generated_audio(text): - payload = get_payload(text) - generated_response = {} - try: - response = requests.post(play_ht_api_get_audio_url, json=payload, headers=headers) - response.raise_for_status() - generated_response["type"]= 'SUCCESS' - generated_response["response"] = response.text - except requests.exceptions.RequestException as e: - generated_response["type"]= 'ERROR' - try: - response_text = json.loads(response.text) - if response_text['error_message']: - generated_response["response"] = response_text['error_message'] - else: - generated_response["response"] = response.text - except Exception as e: - generated_response["response"] = response.text - except Exception as e: - generated_response["type"]= 'ERROR' - generated_response["response"] = response.text - return generated_response - -def extract_urls(text): - # Define the regex pattern for URLs - url_pattern = r'https?://(?:[-\w.]|(?:%[\da-fA-F]{2}))+[/\w\.-]*' - - # Find all occurrences of URLs in the text - urls = re.findall(url_pattern, text) - - return urls - -def get_audio_reply_for_question(text): - generated_audio_event = get_generated_audio(text) - #From get_generated_audio, you will get events in a string format, from that we need to extract the url - final_response = { - "audio_url": '', - "message": '' - } - if generated_audio_event["type"] == 'SUCCESS': - audio_urls = extract_urls(generated_audio_event["response"]) - if len(audio_urls) == 0: - final_response['message'] = "No audio file link found in generated event" - else: - final_response['audio_url'] = audio_urls[-1] - else: - final_response['message'] = generated_audio_event['response'] - return final_response - -def download_url(url): - try: - # Send a GET request to the URL to fetch the content - final_response = { - 'content':'', - 'error':'' - } - response = requests.get(url) - # Check if the request was successful (status code 200) - if response.status_code == 200: - final_response['content'] = response.content - else: - final_response['error'] = f"Failed to download the URL. Status code: {response.status_code}" - except Exception as e: - final_response['error'] = f"Failed to download the URL. 
Error: {e}" - return final_response - -def get_filename_from_url(url): - # Use os.path.basename() to extract the file name from the URL - file_name = os.path.basename(url) - return file_name - -def get_text_response(user_message): - response = llm_chain.predict(user_message = user_message) - return response - -def get_text_response_and_audio_response(user_message): - response = get_text_response(user_message) # Getting the reply from Open AI - audio_reply_for_question_response = get_audio_reply_for_question(response) - final_response = { - 'output_file_path': '', - 'message':'' - } - audio_url = audio_reply_for_question_response['audio_url'] - if audio_url: - output_file_path=get_filename_from_url(audio_url) - download_url_response = download_url(audio_url) - audio_content = download_url_response['content'] - if audio_content: - with open(output_file_path, "wb") as audio_file: - audio_file.write(audio_content) - final_response['output_file_path'] = output_file_path - else: - final_response['message'] = download_url_response['error'] - else: - final_response['message'] = audio_reply_for_question_response['message'] - return final_response - -def chat_bot_response(message, history): - text_and_audio_response = get_text_response_and_audio_response(message) - output_file_path = text_and_audio_response['output_file_path'] - if output_file_path: - return (text_and_audio_response['output_file_path'],) - else: - return text_and_audio_response['message'] - -demo = gr.ChatInterface(chat_bot_response,examples=["How are you doing?","What are your interests?","Which places do you like to visit?"]) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/control.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/control.py deleted file mode 100644 index 88fcb9295164f4e18827ef61fff6723e94ef7381..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/control.py +++ /dev/null @@ -1,225 +0,0 @@ -import sys -import time -from typing import TYPE_CHECKING, Callable, Dict, Iterable, List, Union - -if sys.version_info >= (3, 8): - from typing import Final -else: - from pip._vendor.typing_extensions import Final # pragma: no cover - -from .segment import ControlCode, ControlType, Segment - -if TYPE_CHECKING: - from .console import Console, ConsoleOptions, RenderResult - -STRIP_CONTROL_CODES: Final = [ - 7, # Bell - 8, # Backspace - 11, # Vertical tab - 12, # Form feed - 13, # Carriage return -] -_CONTROL_STRIP_TRANSLATE: Final = { - _codepoint: None for _codepoint in STRIP_CONTROL_CODES -} - -CONTROL_ESCAPE: Final = { - 7: "\\a", - 8: "\\b", - 11: "\\v", - 12: "\\f", - 13: "\\r", -} - -CONTROL_CODES_FORMAT: Dict[int, Callable[..., str]] = { - ControlType.BELL: lambda: "\x07", - ControlType.CARRIAGE_RETURN: lambda: "\r", - ControlType.HOME: lambda: "\x1b[H", - ControlType.CLEAR: lambda: "\x1b[2J", - ControlType.ENABLE_ALT_SCREEN: lambda: "\x1b[?1049h", - ControlType.DISABLE_ALT_SCREEN: lambda: "\x1b[?1049l", - ControlType.SHOW_CURSOR: lambda: "\x1b[?25h", - ControlType.HIDE_CURSOR: lambda: "\x1b[?25l", - ControlType.CURSOR_UP: lambda param: f"\x1b[{param}A", - ControlType.CURSOR_DOWN: lambda param: f"\x1b[{param}B", - ControlType.CURSOR_FORWARD: lambda param: f"\x1b[{param}C", - 
ControlType.CURSOR_BACKWARD: lambda param: f"\x1b[{param}D", - ControlType.CURSOR_MOVE_TO_COLUMN: lambda param: f"\x1b[{param+1}G", - ControlType.ERASE_IN_LINE: lambda param: f"\x1b[{param}K", - ControlType.CURSOR_MOVE_TO: lambda x, y: f"\x1b[{y+1};{x+1}H", - ControlType.SET_WINDOW_TITLE: lambda title: f"\x1b]0;{title}\x07", -} - - -class Control: - """A renderable that inserts a control code (non printable but may move cursor). - - Args: - *codes (str): Positional arguments are either a :class:`~rich.segment.ControlType` enum or a - tuple of ControlType and an integer parameter - """ - - __slots__ = ["segment"] - - def __init__(self, *codes: Union[ControlType, ControlCode]) -> None: - control_codes: List[ControlCode] = [ - (code,) if isinstance(code, ControlType) else code for code in codes - ] - _format_map = CONTROL_CODES_FORMAT - rendered_codes = "".join( - _format_map[code](*parameters) for code, *parameters in control_codes - ) - self.segment = Segment(rendered_codes, None, control_codes) - - @classmethod - def bell(cls) -> "Control": - """Ring the 'bell'.""" - return cls(ControlType.BELL) - - @classmethod - def home(cls) -> "Control": - """Move cursor to 'home' position.""" - return cls(ControlType.HOME) - - @classmethod - def move(cls, x: int = 0, y: int = 0) -> "Control": - """Move cursor relative to current position. - - Args: - x (int): X offset. - y (int): Y offset. - - Returns: - ~Control: Control object. - - """ - - def get_codes() -> Iterable[ControlCode]: - control = ControlType - if x: - yield ( - control.CURSOR_FORWARD if x > 0 else control.CURSOR_BACKWARD, - abs(x), - ) - if y: - yield ( - control.CURSOR_DOWN if y > 0 else control.CURSOR_UP, - abs(y), - ) - - control = cls(*get_codes()) - return control - - @classmethod - def move_to_column(cls, x: int, y: int = 0) -> "Control": - """Move to the given column, optionally add offset to row. - - Returns: - x (int): absolute x (column) - y (int): optional y offset (row) - - Returns: - ~Control: Control object. - """ - - return ( - cls( - (ControlType.CURSOR_MOVE_TO_COLUMN, x), - ( - ControlType.CURSOR_DOWN if y > 0 else ControlType.CURSOR_UP, - abs(y), - ), - ) - if y - else cls((ControlType.CURSOR_MOVE_TO_COLUMN, x)) - ) - - @classmethod - def move_to(cls, x: int, y: int) -> "Control": - """Move cursor to absolute position. - - Args: - x (int): x offset (column) - y (int): y offset (row) - - Returns: - ~Control: Control object. 
- """ - return cls((ControlType.CURSOR_MOVE_TO, x, y)) - - @classmethod - def clear(cls) -> "Control": - """Clear the screen.""" - return cls(ControlType.CLEAR) - - @classmethod - def show_cursor(cls, show: bool) -> "Control": - """Show or hide the cursor.""" - return cls(ControlType.SHOW_CURSOR if show else ControlType.HIDE_CURSOR) - - @classmethod - def alt_screen(cls, enable: bool) -> "Control": - """Enable or disable alt screen.""" - if enable: - return cls(ControlType.ENABLE_ALT_SCREEN, ControlType.HOME) - else: - return cls(ControlType.DISABLE_ALT_SCREEN) - - @classmethod - def title(cls, title: str) -> "Control": - """Set the terminal window title - - Args: - title (str): The new terminal window title - """ - return cls((ControlType.SET_WINDOW_TITLE, title)) - - def __str__(self) -> str: - return self.segment.text - - def __rich_console__( - self, console: "Console", options: "ConsoleOptions" - ) -> "RenderResult": - if self.segment.text: - yield self.segment - - -def strip_control_codes( - text: str, _translate_table: Dict[int, None] = _CONTROL_STRIP_TRANSLATE -) -> str: - """Remove control codes from text. - - Args: - text (str): A string possibly contain control codes. - - Returns: - str: String with control codes removed. - """ - return text.translate(_translate_table) - - -def escape_control_codes( - text: str, - _translate_table: Dict[int, str] = CONTROL_ESCAPE, -) -> str: - """Replace control codes with their "escaped" equivalent in the given text. - (e.g. "\b" becomes "\\b") - - Args: - text (str): A string possibly containing control codes. - - Returns: - str: String with control codes replaced with their escaped version. - """ - return text.translate(_translate_table) - - -if __name__ == "__main__": # pragma: no cover - from pip._vendor.rich.console import Console - - console = Console() - console.print("Look at the title of your terminal window ^") - # console.print(Control((ControlType.SET_WINDOW_TITLE, "Hello, world!"))) - for i in range(10): - console.set_window_title("🚀 Loading" + "." * i) - time.sleep(0.5) diff --git a/spaces/Bart92/RVC_HF/Dockerfile b/spaces/Bart92/RVC_HF/Dockerfile deleted file mode 100644 index b81f131c79cc585012b28002f4916491e85f3a33..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/Dockerfile +++ /dev/null @@ -1,29 +0,0 @@ -# syntax=docker/dockerfile:1 - -FROM python:3.10-bullseye - -EXPOSE 7865 - -WORKDIR /app - -COPY . . 
- -RUN apt update && apt install -y -qq ffmpeg aria2 && apt clean - -RUN pip3 install --no-cache-dir -r requirements.txt - -RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D40k.pth -d assets/pretrained_v2/ -o D40k.pth -RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G40k.pth -d assets/pretrained_v2/ -o G40k.pth -RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D40k.pth -d assets/pretrained_v2/ -o f0D40k.pth -RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G40k.pth -d assets/pretrained_v2/ -o f0G40k.pth - -RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP2-人声vocals+非人声instrumentals.pth -d assets/uvr5_weights/ -o HP2-人声vocals+非人声instrumentals.pth -RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP5-主旋律人声vocals+其他instrumentals.pth -d assets/uvr5_weights/ -o HP5-主旋律人声vocals+其他instrumentals.pth - -RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt -d assets/hubert -o hubert_base.pt - -RUN aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/rmvpe.pt -d assets/hubert -o rmvpe.pt - -VOLUME [ "/app/weights", "/app/opt" ] - -CMD ["python3", "infer-web.py"] \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Choque De Clanes Nulos.md b/spaces/Benson/text-generation/Examples/Choque De Clanes Nulos.md deleted file mode 100644 index ab86c56ab170bb8ade0564100c35eaca3ae79bce..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Choque De Clanes Nulos.md +++ /dev/null @@ -1,115 +0,0 @@ - -

Clash of Clans Nulls: What They Are and How to Use Them

-

Clash of Clans is one of the most popular mobile games in the world, with millions of players competing for resources, trophies, and glory. But did you know that there are some players who do not belong to any clan? They are called nulls, and they have their own advantages and disadvantages. In this article, we explain what nulls are, why people use them, and how you can use them to your advantage.

-

Introduction

-

What is Clash of Clans?

-

Clash of Clans is a strategy game in which you build your own village, train your army, and attack other players' bases. You can also join or create a clan, a group of players who can chat, donate troops, and take part in clan wars. Clan wars are special events in which two clans face off in a series of attacks, and the clan that earns the most stars wins. Stars are earned by destroying a certain percentage of the enemy base.

-

clash of clans nulls


Download Zip –––––>>> https://bltlly.com/2v6MFt



-

What are nulls in Clash of Clans?

-

Nulls are players who do not belong to any clan. They have no clan name, no clan badge, and no clan chat. They can still attack other players' bases, but they cannot take part in clan wars or receive troop donations from other players. There are three types of nulls: inactive, banned, and abused.

-

Why do people use nulls in Clash of Clans?

-

People use nulls in Clash of Clans for different reasons: some use them for fun, some use them to farm resources, some use them to test strategies, and others use them to cheat. Here are some examples:

- -

Types of nulls in Clash of Clans

-

Inactive nulls

-

Inactive nulls are accounts that have been abandoned by their owners. They have not logged in for a long time, and their bases are usually outdated and poorly defended. They are easy targets for other players who want to loot their resources.

-

Pros and cons of inactive nulls

-

The pros of inactive nulls are:

-