diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Call Of Duty Black Ops II [UPD] Crack Only-SKIDROW Torrent.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Call Of Duty Black Ops II [UPD] Crack Only-SKIDROW Torrent.md
deleted file mode 100644
index 8f0ac7842f0a5ba0865c75e048efc90713fe3036..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Call Of Duty Black Ops II [UPD] Crack Only-SKIDROW Torrent.md
+++ /dev/null
@@ -1,25 +0,0 @@
-
-
-How to Download and Install Call of Duty Black Ops II Crack Only-SKIDROW Torrent
-Call of Duty Black Ops II is a first-person shooter video game developed by Treyarch and published by Activision. It is the ninth game in the Call of Duty series and a sequel to the 2010 game Call of Duty: Black Ops. The game was released worldwide on November 13, 2012 for Microsoft Windows, PlayStation 3, Xbox 360, and Wii U.
-If you want to play the game without buying it, you can download and install a crack file that bypasses the game's protection and allows you to run it without a valid license. One of the most popular crack files for Call of Duty Black Ops II is the one released by SKIDROW, a group of hackers who specialize in cracking video games. In this article, we will show you how to download and install Call of Duty Black Ops II Crack Only-SKIDROW torrent using a torrent client.
-Call of Duty Black Ops II Crack Only-SKIDROW torrent
-Download File > https://byltly.com/2uKzea
-Step 1: Download a torrent client
-A torrent client is software that enables you to download files from other users who are sharing them on a peer-to-peer network. There are many torrent clients available online, such as uTorrent, BitTorrent, qBittorrent, etc. Choose whichever one suits your preferences and system requirements, then download and install it on your computer.
-Step 2: Download Call of Duty Black Ops II Crack Only-SKIDROW torrent
-Once you have installed the torrent client, you need to find and download the Call of Duty Black Ops II Crack Only-SKIDROW torrent file. A torrent file is a small file that contains information about the files you want to download, such as their names, sizes, locations, etc. You can find the Call of Duty Black Ops II Crack Only-SKIDROW torrent file on various websites that host torrents, such as LimeTorrents.to, MegaGames.com, Archive.org, etc. You can also use a search engine like Google or Bing to look for the torrent file.
-Once you have found the torrent file, click on it to open it with your torrent client. The torrent client will start downloading the crack file from other users who are sharing it. The download speed may vary depending on your internet connection and the number of seeders (users who have the complete file) and leechers (users who are downloading the file) available. Wait until the download is complete.
-Step 3: Install Call of Duty Black Ops II Crack Only-SKIDROW
-After downloading the crack file, you need to install it on your computer. The crack file is usually compressed in a ZIP or RAR archive that you need to extract first using software such as WinRAR or 7-Zip. After extracting the archive, you will find a folder named SKIDROW that contains several files, such as Call.of.Duty.Black.Ops.II.Update.1.and.2.exe, SKIDROW.ini, steam_api.dll, etc.
-To install the crack file, follow these steps:
-
-- Run Call.of.Duty.Black.Ops.II.Update.1.and.2.exe and follow the instructions to update your game to the latest version.
-- Copy all the files from the SKIDROW folder to the main installation folder of Call of Duty Black Ops II and overwrite any existing files.
-- Block the game in your firewall and mark the cracked files as secure/trusted in your antivirus program to prevent them from being deleted or blocked.
-- Play the game by launching it from its executable file or from a shortcut on your desktop.
-- Support the developers by buying the game if you enjoy it!
-
-Conclusion
-In this article, we have shown you how to download and install Call of Duty Black Ops II Crack Only-SKIDROW torrent using a torrent client. This method allows you to play the game without purchasing it, but it may also expose you to some risks, such as viruses, malware, legal issues, etc. Therefore, we do not condone or encourage piracy, and we advise you to use this method at your own risk and to buy the game if you enjoy it.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (welcome 2007 hindi movie 720p torren) - Enjoy the best quality of the Indian blockbuster Welcome.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (welcome 2007 hindi movie 720p torren) - Enjoy the best quality of the Indian blockbuster Welcome.md
deleted file mode 100644
index 84ab8d3c0f25bb524c8bfeaba48c4bc4edcc3067..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (welcome 2007 hindi movie 720p torren) - Enjoy the best quality of the Indian blockbuster Welcome.md
+++ /dev/null
@@ -1,119 +0,0 @@
-
-HD Online Player (Welcome 2007 Hindi Movie 720p Torrent)
- If you are a fan of Bollywood comedy movies, you might have heard of Welcome, a 2007 film starring Akshay Kumar, Katrina Kaif, Anil Kapoor, Nana Patekar, Paresh Rawal and others. The movie is about a series of hilarious events that happen when a good-hearted gangster tries to find a suitable groom for his sister, who falls in love with a naive and innocent man. The movie was a huge hit at the box office and received positive reviews from critics and audiences alike.
-HD Online Player (welcome 2007 hindi movie 720p torren)
-Download File ○○○ https://byltly.com/2uKzwW
- But what if you want to watch Welcome online in high definition (HD) quality? You might be wondering how to find a reliable and safe source to stream or download the movie in 720p resolution. Well, look no further, because we have got you covered. In this article, we will tell you how to use an HD online player to watch Welcome 2007 Hindi movie 720p torrent without any hassle.
- What is an HD online player?
- An HD online player is a desktop or web application that allows you to play video files from various sources, such as torrents, direct links, cloud storage, etc. An HD online player can also convert the video format and resolution to match your device and internet speed. Some of the benefits of using an HD online player are:
-
-- You can watch videos in HD quality without downloading them to your device.
-- You can save your storage space and bandwidth by streaming videos online.
-- You can access a large collection of movies and shows from different genres and languages.
-- You can enjoy a smooth and uninterrupted viewing experience with fast buffering and loading.
-- You can adjust the playback speed, volume, subtitles, etc. according to your preference.
-
- How to use an HD online player to watch Welcome 2007 Hindi movie 720p torrent?
- There are many HD online players available on the internet, but not all of them are trustworthy and secure. Some of them may contain malware, viruses, ads, or spyware that can harm your device or compromise your privacy. Therefore, you need to be careful while choosing an HD online player to watch Welcome 2007 Hindi movie 720p torrent.
- One of the best and most popular HD online players is Yify. Yify is a website that provides high-quality torrents of movies and shows in various resolutions, such as 720p, 1080p, and 4K. Yify also has an online player that lets you stream the torrents directly on your browser without downloading them. Yify is known for its fast speed, user-friendly interface, and minimal ads.
- To use Yify's HD online player to watch Welcome 2007 Hindi movie 720p torrent, follow these simple steps:
-
-- Go to Yify's official website: https://yts.mx/
-- Search for Welcome 2007 Hindi movie in the search bar or browse through the categories.
-- Select the movie from the results and click on it.
-- Choose the 720p torrent option and click on the play button next to it.
-- A new tab will open with the Yify online player. Wait for a few seconds for the video to load and buffer.
-- Enjoy watching Welcome 2007 Hindi movie 720p torrent in HD quality on your device.
-
- What are some alternatives to Yify's HD online player?
- If you are not satisfied with Yify's HD online player or want to try some other options, here are some alternatives that you can use to watch Welcome 2007 Hindi movie 720p torrent:
-
-- SoundCloud: SoundCloud is a music streaming platform that also hosts some video files uploaded by users. You can find Welcome 2007 Hindi movie 720p torrent on SoundCloud by searching for it or following this link: https://soundcloud.com/eskitwirsont/welcome-2007-hindi-movie-720p-torrent
-- Boatripz: Boatripz is a website that offers free online video conversion and downloading services. You can use Boatripz to convert Welcome 2007 Hindi movie 720p torrent into mp4 format and download it to your device or watch it online. To use Boatripz, go to this link: https://boatripz.com/wp-content/uploads/2022/12/noehen.pdf
-- Eecoeats: Eecoeats is another website that provides free online video conversion and downloading services. You can also use Eecoeats to convert Welcome 2007 Hindi movie 720p torrent into mp4 format and download it to your device or watch it online. To use Eecoeats, go to this link: https://www.eecoeats.com/wp-content/uploads/2022/07/HD_Online_Player_welcome_2007_hindi_movie_720p_torren.pdf
-
- Conclusion
- Welcome 2007 Hindi movie is a comedy masterpiece that you should not miss if you love Bollywood movies. You can watch it online in HD quality using an HD online player like Yify or any of its alternatives. We hope this article helped you find the best way to watch Welcome 2007 Hindi movie 720p torrent on your device.
- FAQs
- Here are some frequently asked questions about watching Welcome 2007 Hindi movie 720p torrent using an HD online player:
- Q: Is it legal to watch Welcome 2007 Hindi movie 720p torrent using an HD online player?
- A: It depends on your country's laws and regulations regarding piracy and copyright infringement. Some countries may allow you to watch Welcome 2007 Hindi movie 720p torrent using an HD online player for personal use only, while others may prohibit it completely. Therefore, you should check your local laws before using an HD online player to watch Welcome 2007 Hindi movie 720p torrent.
- Q: Is it safe to watch Welcome 2007 Hindi movie 720p torrent using an HD online player?
- A: Some HD online players may be reliable and secure, while others may contain malware, viruses, ads, or spyware that can harm your device or compromise your privacy. Therefore, you should always use a trusted and reputable HD online player like Yify or any of its alternatives to watch Welcome 2007 Hindi movie 720p torrent.
- Q: What are the benefits of watching Welcome 2007 Hindi movie 720p torrent using an HD online player?
- A: Some of the benefits of watching Welcome 2007 Hindi movie 720p torrent using an HD online player are:
-
-- You can watch videos in HD quality without downloading them to your device.
-- You can save your storage space and bandwidth by streaming videos online.
-- You can access a large collection of movies and shows from different genres and languages.
-- You can enjoy a smooth and uninterrupted viewing experience with fast buffering and loading.
-- You can adjust the playback speed, volume, subtitles, etc. according to your preference.
-
- Q: What are the drawbacks of watching Welcome 2007 Hindi movie 720p torrent using an HD online player?
- A: Some of the drawbacks of watching Welcome 2007 Hindi movie 720p torrent using an HD online player are:
-
-- You may encounter some ads or pop-ups that may interrupt your viewing experience or redirect you to unwanted websites.
-- You may face some issues with the video quality, audio sync, subtitles, etc. depending on your internet connection and device compatibility.
-- You may violate some laws or regulations regarding piracy and copyright infringement depending on your country's laws.
-
- Q: How can I improve my viewing experience while watching Welcome 2007 Hindi movie 720p torrent using an HD online player?
- A: Here are some tips that can help you improve your viewing experience while watching Welcome 2007 Hindi movie 720p torrent using an HD online player:
-
-- Use a stable and fast internet connection with enough bandwidth for streaming videos in HD quality.
-- Use a compatible device with a good screen resolution and sound system for watching videos in HD quality.
-- Use headphones or earphones for better audio quality and immersion.
-- Use ad blockers or VPNs to avoid ads or pop-ups that may interrupt your viewing experience or redirect you to unwanted websites.
-- Use browser extensions or plugins that can enhance your video playback options such as speed control, volume control, subtitle control, etc.
-
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Cherish Model 11.md b/spaces/1gistliPinn/ChatGPT4/Examples/Cherish Model 11.md
deleted file mode 100644
index 428fc82e9f32c068700289122f95d56d1887f877..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Cherish Model 11.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-The goal of the CHERISH consortium is to support and advance HIV-related research by identifying, recruiting, and providing funding for young investigators with a strong commitment to the study of HIV/AIDS. The consortium is a collaboration of academic, government, and community partners. It has developed a CHERISH national steering committee and has been awarded funding to support the CHERISH pilot study. Over the next two years, the consortium will evaluate and refine the CHERISH protocol, conduct a pilot study to test the efficacy of the CHERISH intervention on HIV-related clinical outcomes, and assess the feasibility and acceptability of the intervention. Consortium members are: Susan Cohan, M.D., assistant professor of medicine, division of infectious diseases, department of medicine, Massachusetts General Hospital; Megan Curtis, M., clinical fellow, infectious diseases division, Boston Medical Center; J.T. Kapsimalis, M., infectious diseases division, Brigham and Women's Hospital; Steve Kowdley, M., associate professor of medicine, division of infectious diseases, department of medicine, Massachusetts General Hospital; Douglas O'Malley, M., professor of medicine, division of infectious diseases, department of medicine, Harvard Medical School; Michael Perzanowski, M., assistant professor of medicine, division of infectious diseases, department of medicine, Harvard Medical School; and Dennis Shusterman, M., assistant professor of medicine, division of infectious diseases, department of medicine, Massachusetts General Hospital.
-Cherish Model 11
-Download Zip ✵ https://imgfil.com/2uxXeX
-CHERISH is equipped with 4 different laboratories: a molecular lab, a molecular analysis lab, a molecular data management lab, and a molecular analysis lab. These 4 labs run molecular assays for the following target viruses: hepatitis C, hepatitis B, HIV, and HSV. CHERISH I had a similar set of facilities.
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/FIFA-14-crack [NEW]-V6-FINAL-3DM-exe.md b/spaces/1gistliPinn/ChatGPT4/Examples/FIFA-14-crack [NEW]-V6-FINAL-3DM-exe.md
deleted file mode 100644
index 7bbdac499900327cc8008824631d674637dfd052..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/FIFA-14-crack [NEW]-V6-FINAL-3DM-exe.md
+++ /dev/null
@@ -1,9 +0,0 @@
-FIFA-14-CRACK-V6-FINAL-3DM-exe
-Download Zip 🌟 https://imgfil.com/2uxWXZ
-
-Cracked by 3DM. This is not a torrent, and it works fine on PC.net Filehoster: ... Estimated reading time: 2 minutes. Nov 21, 2013, Issue 16: FIFA 14 not working in ... You can download the FIFA 14 game via torrent for free from our site at high speed.
-Download FIFA 14 (2013) PC - torrent.
-FIFA 14 (2013) PC Release year: 2013 Genre: Simulator, Sport (Soccer) Developer: EA Canada Publisher: Electronic Arts Publication type: License [Steam-Rip] Game version: 1.0.3 Platform: PC Interface language: Russian, English Voice language: Russian, English Medicine: Present (RELOADED)
-Download FIFA 14 (2013) torrent for free here.
-
-
-
diff --git a/spaces/1toTree/lora_test/ppdiffusers/pipelines/audio_diffusion/pipeline_audio_diffusion.py b/spaces/1toTree/lora_test/ppdiffusers/pipelines/audio_diffusion/pipeline_audio_diffusion.py
deleted file mode 100644
index 6159ae89f5251a647afdd42d99132914a33e891f..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/pipelines/audio_diffusion/pipeline_audio_diffusion.py
+++ /dev/null
@@ -1,253 +0,0 @@
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-from math import acos, sin
-from typing import List, Tuple, Union
-
-import numpy as np
-import paddle
-from PIL import Image
-
-from ...models import AutoencoderKL, UNet2DConditionModel
-from ...pipeline_utils import (
- AudioPipelineOutput,
- BaseOutput,
- DiffusionPipeline,
- ImagePipelineOutput,
-)
-from ...schedulers import DDIMScheduler, DDPMScheduler
-from .mel import Mel
-
-
-class AudioDiffusionPipeline(DiffusionPipeline):
- """
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
- Parameters:
- vqvae ([`AutoencoderKL`]): Variational AutoEncoder for Latent Audio Diffusion or None
- unet ([`UNet2DConditionModel`]): UNET model
- mel ([`Mel`]): transform audio <-> spectrogram
- scheduler ([`DDIMScheduler` or `DDPMScheduler`]): de-noising scheduler
- """
-
- _optional_components = ["vqvae"]
-
- def __init__(
- self,
- vqvae: AutoencoderKL,
- unet: UNet2DConditionModel,
- mel: Mel,
- scheduler: Union[DDIMScheduler, DDPMScheduler],
- ):
- super().__init__()
- self.register_modules(unet=unet, scheduler=scheduler, mel=mel, vqvae=vqvae)
-
- def get_input_dims(self) -> Tuple:
- """Returns dimension of input image
-
- Returns:
- `Tuple`: (height, width)
- """
- input_module = self.vqvae if self.vqvae is not None else self.unet
- # For backwards compatibility
- sample_size = (
- (input_module.sample_size, input_module.sample_size)
- if type(input_module.sample_size) == int
- else input_module.sample_size
- )
- return sample_size
-
- def get_default_steps(self) -> int:
- """Returns default number of steps recommended for inference
-
- Returns:
- `int`: number of steps
- """
- return 50 if isinstance(self.scheduler, DDIMScheduler) else 1000
-
- @paddle.no_grad()
- def __call__(
- self,
- batch_size: int = 1,
- audio_file: str = None,
- raw_audio: np.ndarray = None,
- slice: int = 0,
- start_step: int = 0,
- steps: int = None,
- generator: paddle.Generator = None,
- mask_start_secs: float = 0,
- mask_end_secs: float = 0,
- step_generator: paddle.Generator = None,
- eta: float = 0,
- noise: paddle.Tensor = None,
- return_dict=True,
- ) -> Union[
- Union[AudioPipelineOutput, ImagePipelineOutput], Tuple[List[Image.Image], Tuple[int, List[np.ndarray]]]
- ]:
- """Generate random mel spectrogram from audio input and convert to audio.
-
- Args:
- batch_size (`int`): number of samples to generate
- audio_file (`str`): must be a file on disk due to Librosa limitation or
- raw_audio (`np.ndarray`): audio as numpy array
- slice (`int`): slice number of audio to convert
- start_step (int): step to start from
- steps (`int`): number of de-noising steps (defaults to 50 for DDIM, 1000 for DDPM)
- generator (`paddle.Generator`): random number generator or None
- mask_start_secs (`float`): number of seconds of audio to mask (not generate) at start
- mask_end_secs (`float`): number of seconds of audio to mask (not generate) at end
- step_generator (`paddle.Generator`): random number generator used to de-noise or None
- eta (`float`): parameter between 0 and 1 used with DDIM scheduler
- noise (`paddle.Tensor`): noise tensor of shape (batch_size, 1, height, width) or None
- return_dict (`bool`): if True return AudioPipelineOutput, ImagePipelineOutput else Tuple
-
- Returns:
- `List[PIL Image]`: mel spectrograms (`float`, `List[np.ndarray]`): sample rate and raw audios
- """
-
- steps = steps or self.get_default_steps()
- self.scheduler.set_timesteps(steps)
- step_generator = step_generator or generator
- # For backwards compatibility
- if type(self.unet.sample_size) == int:
- self.unet.sample_size = (self.unet.sample_size, self.unet.sample_size)
- input_dims = self.get_input_dims()
- self.mel.set_resolution(x_res=input_dims[1], y_res=input_dims[0])
- if noise is None:
- noise = paddle.randn(
- (batch_size, self.unet.in_channels, self.unet.sample_size[0], self.unet.sample_size[1]),
- generator=generator,
- )
- images = noise
- mask = None
-
- if audio_file is not None or raw_audio is not None:
- self.mel.load_audio(audio_file, raw_audio)
- input_image = self.mel.audio_slice_to_image(slice)
- input_image = np.frombuffer(input_image.tobytes(), dtype="uint8").reshape(
- (input_image.height, input_image.width)
- )
- input_image = (input_image / 255) * 2 - 1
- input_images = paddle.to_tensor(input_image[np.newaxis, :, :], dtype=paddle.float32)
-
- if self.vqvae is not None:
- input_images = self.vqvae.encode(paddle.unsqueeze(input_images, 0)).latent_dist.sample(
- generator=generator
- )[0]
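- # 0.18215: scaling factor used during training to give the latents unit
- # variance; the inverse (1 / 0.18215) is applied again before decoding below.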
- input_images = 0.18215 * input_images
-
- if start_step > 0:
- images[0, 0] = self.scheduler.add_noise(input_images, noise, self.scheduler.timesteps[start_step - 1])
-
- pixels_per_second = (
- self.unet.sample_size[1] * self.mel.get_sample_rate() / self.mel.x_res / self.mel.hop_length
- )
- mask_start = int(mask_start_secs * pixels_per_second)
- mask_end = int(mask_end_secs * pixels_per_second)
- mask = self.scheduler.add_noise(
- input_images, noise, paddle.to_tensor(self.scheduler.timesteps[start_step:])
- )
-
- for step, t in enumerate(self.progress_bar(self.scheduler.timesteps[start_step:])):
- model_output = self.unet(images, t)["sample"]
-
- if isinstance(self.scheduler, DDIMScheduler):
- images = self.scheduler.step(
- model_output=model_output, timestep=t, sample=images, eta=eta, generator=step_generator
- )["prev_sample"]
- else:
- images = self.scheduler.step(
- model_output=model_output, timestep=t, sample=images, generator=step_generator
- )["prev_sample"]
-
- if mask is not None:
- if mask_start > 0:
- images[:, :, :, :mask_start] = mask[:, step, :, :mask_start]
- if mask_end > 0:
- images[:, :, :, -mask_end:] = mask[:, step, :, -mask_end:]
-
- if self.vqvae is not None:
- # 0.18215 was scaling factor used in training to ensure unit variance
- images = 1 / 0.18215 * images
- images = self.vqvae.decode(images)["sample"]
-
- images = (images / 2 + 0.5).clip(0, 1)
- images = images.transpose([0, 2, 3, 1]).cast("float32").numpy()
- images = (images * 255).round().astype("uint8")
- images = list(
- map(lambda _: Image.fromarray(_[:, :, 0]), images)
- if images.shape[3] == 1
- else map(lambda _: Image.fromarray(_, mode="RGB").convert("L"), images)
- )
-
- audios = list(map(lambda _: self.mel.image_to_audio(_), images))
- if not return_dict:
- return images, (self.mel.get_sample_rate(), audios)
-
- return BaseOutput(**AudioPipelineOutput(np.array(audios)[:, np.newaxis, :]), **ImagePipelineOutput(images))
-
- @paddle.no_grad()
- def encode(self, images: List[Image.Image], steps: int = 50) -> np.ndarray:
- """Reverse step process: recover noisy image from generated image.
-
- Args:
- images (`List[PIL Image]`): list of images to encode
- steps (`int`): number of encoding steps to perform (defaults to 50)
-
- Returns:
- `np.ndarray`: noise tensor of shape (batch_size, 1, height, width)
- """
-
- # Only works with DDIM as this method is deterministic
- assert isinstance(self.scheduler, DDIMScheduler)
- self.scheduler.set_timesteps(steps)
- sample = np.array(
- [np.frombuffer(image.tobytes(), dtype="uint8").reshape((1, image.height, image.width)) for image in images]
- )
- sample = (sample / 255) * 2 - 1
- sample = paddle.to_tensor(sample)
-
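- # Walk the DDIM update in reverse: at each timestep, remove the predicted
- # noise direction, rescale, and re-noise toward the higher-noise timestep,
- # recovering the latent that would have generated the input image.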
- for t in self.progress_bar(paddle.flip(self.scheduler.timesteps, (0,))):
- prev_timestep = t - self.scheduler.num_train_timesteps // self.scheduler.num_inference_steps
- alpha_prod_t = self.scheduler.alphas_cumprod[t]
- alpha_prod_t_prev = (
- self.scheduler.alphas_cumprod[prev_timestep]
- if prev_timestep >= 0
- else self.scheduler.final_alpha_cumprod
- )
- beta_prod_t = 1 - alpha_prod_t
- model_output = self.unet(sample, t)["sample"]
- pred_sample_direction = (1 - alpha_prod_t_prev) ** (0.5) * model_output
- sample = (sample - pred_sample_direction) * alpha_prod_t_prev ** (-0.5)
- sample = sample * alpha_prod_t ** (0.5) + beta_prod_t ** (0.5) * model_output
-
- return sample
-
- @staticmethod
- def slerp(x0: paddle.Tensor, x1: paddle.Tensor, alpha: float) -> paddle.Tensor:
- """Spherical Linear intERPolation
-
- Args:
- x0 (`paddle.Tensor`): first tensor to interpolate between
- x1 (`paddle.Tensor`): second tensor to interpolate between
- alpha (`float`): interpolation between 0 and 1
-
- Returns:
- `paddle.Tensor`: interpolated tensor
- """
-
- theta = acos(paddle.dot(paddle.flatten(x0), paddle.flatten(x1)) / paddle.norm(x0) / paddle.norm(x1))
- return sin((1 - alpha) * theta) * x0 / sin(theta) + sin(alpha * theta) * x1 / sin(theta)
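-
-
-# A minimal, hypothetical usage sketch (not part of the original file):
-# spherically interpolating two noise tensors with `slerp` before generation.
-# `pipe` and the tensor shape are illustrative assumptions only.
-#
-#   noise0 = paddle.randn((1, 1, 256, 256))
-#   noise1 = paddle.randn((1, 1, 256, 256))
-#   blended = AudioDiffusionPipeline.slerp(noise0, noise1, 0.5)
-#   images, (sample_rate, audios) = pipe(noise=blended, return_dict=False)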
diff --git a/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/configs/ms1mv3_r18.py b/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/configs/ms1mv3_r18.py
deleted file mode 100644
index eb4e0d31f1aedf4590628d394e1606920fefb5c9..0000000000000000000000000000000000000000
--- a/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/configs/ms1mv3_r18.py
+++ /dev/null
@@ -1,26 +0,0 @@
-from easydict import EasyDict as edict
-
-# make training faster
-# our RAM is 256G
-# mount -t tmpfs -o size=140G tmpfs /train_tmp
-
-config = edict()
-config.loss = "arcface"
-config.network = "r18"
-config.resume = False
-config.output = None
-config.embedding_size = 512
-config.sample_rate = 1.0
-config.fp16 = True
-config.momentum = 0.9
-config.weight_decay = 5e-4
-config.batch_size = 128
-config.lr = 0.1  # tuned for a total batch size of 512 (config.batch_size above is per GPU)
-
-config.rec = "/train_tmp/ms1m-retinaface-t1"
-config.num_classes = 93431
-config.num_image = 5179510
-config.num_epoch = 25
-config.warmup_epoch = -1
-config.decay_epoch = [10, 16, 22]
-config.val_targets = ["lfw", "cfp_fp", "agedb_30"]
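-
-# A hypothetical loading sketch (the module path is an assumption, not part of this file):
-#
-#   import importlib
-#   cfg = importlib.import_module("configs.ms1mv3_r18").config
-#   assert cfg.network == "r18" and cfg.embedding_size == 512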
diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/gui.py b/spaces/AI-Hobbyist/Hoyo-RVC/gui.py
deleted file mode 100644
index 1e5e5d90b87e88929a308d51274855db99d2c376..0000000000000000000000000000000000000000
--- a/spaces/AI-Hobbyist/Hoyo-RVC/gui.py
+++ /dev/null
@@ -1,698 +0,0 @@
-"""
-0416后的更新:
- 引入config中half
- 重建npy而不用填写
- v2支持
- 无f0模型支持
- 修复
-
- int16:
- 增加无索引支持
- f0算法改harvest(怎么看就只有这个会影响CPU占用),但是不这么改效果不好
-"""
-import os, sys, traceback, re
-
-import json
-
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-from config import Config
-
-Config = Config()
-import PySimpleGUI as sg
-import sounddevice as sd
-import noisereduce as nr
-import numpy as np
-from fairseq import checkpoint_utils
-import librosa, torch, pyworld, faiss, time, threading
-import torch.nn.functional as F
-import torchaudio.transforms as tat
-import scipy.signal as signal
-
-
-# import matplotlib.pyplot as plt
-from infer_pack.models import (
- SynthesizerTrnMs256NSFsid,
- SynthesizerTrnMs256NSFsid_nono,
- SynthesizerTrnMs768NSFsid,
- SynthesizerTrnMs768NSFsid_nono,
-)
-from i18n import I18nAuto
-
-i18n = I18nAuto()
-device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-current_dir = os.getcwd()
-
-
-class RVC:
- def __init__(
- self, key, hubert_path, pth_path, index_path, npy_path, index_rate
- ) -> None:
- """
- Initialization.
- """
- try:
- self.f0_up_key = key
- self.time_step = 160 / 16000 * 1000
- self.f0_min = 50
- self.f0_max = 1100
- self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700)
- self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700)
- self.sr = 16000
- self.window = 160
- if index_rate != 0:
- self.index = faiss.read_index(index_path)
- # self.big_npy = np.load(npy_path)
- self.big_npy = self.index.reconstruct_n(0, self.index.ntotal)
- print("index search enabled")
- self.index_rate = index_rate
- model_path = hubert_path
- print("load model(s) from {}".format(model_path))
- models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task(
- [model_path],
- suffix="",
- )
- self.model = models[0]
- self.model = self.model.to(device)
- if Config.is_half:
- self.model = self.model.half()
- else:
- self.model = self.model.float()
- self.model.eval()
- cpt = torch.load(pth_path, map_location="cpu")
- self.tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
- self.if_f0 = cpt.get("f0", 1)
- self.version = cpt.get("version", "v1")
- if self.version == "v1":
- if self.if_f0 == 1:
- self.net_g = SynthesizerTrnMs256NSFsid(
- *cpt["config"], is_half=Config.is_half
- )
- else:
- self.net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- elif self.version == "v2":
- if self.if_f0 == 1:
- self.net_g = SynthesizerTrnMs768NSFsid(
- *cpt["config"], is_half=Config.is_half
- )
- else:
- self.net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- del self.net_g.enc_q
- print(self.net_g.load_state_dict(cpt["weight"], strict=False))
- self.net_g.eval().to(device)
- if Config.is_half:
- self.net_g = self.net_g.half()
- else:
- self.net_g = self.net_g.float()
- except:
- print(traceback.format_exc())
-
- def get_f0(self, x, f0_up_key, inp_f0=None):
- x_pad = 1
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
- f0, t = pyworld.harvest(
- x.astype(np.double),
- fs=self.sr,
- f0_ceil=f0_max,
- f0_floor=f0_min,
- frame_period=10,
- )
- f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr)
- f0 = signal.medfilt(f0, 3)
- f0 *= pow(2, f0_up_key / 12)
- # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
- tf0 = self.sr // self.window  # number of f0 points per second
- if inp_f0 is not None:
- delta_t = np.round(
- (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1
- ).astype("int16")
- replace_f0 = np.interp(
- list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1]
- )
- shape = f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)].shape[0]
- f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)] = replace_f0[:shape]
- # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
- f0bak = f0.copy()
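- # Quantize f0 to 255 coarse mel-scale bins (1 = unvoiced) for the pitch
- # embedding lookup; the raw Hz values are kept in f0bak.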
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
- f0_coarse = np.rint(f0_mel).astype(np.int64)  # np.int was removed in NumPy 1.24+
- return f0_coarse, f0bak # 1-0
-
- def infer(self, feats: torch.Tensor) -> np.ndarray:
- """
- Inference function.
- """
- audio = feats.clone().cpu().numpy()
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- padding_mask = torch.BoolTensor(feats.shape).fill_(False)
- if Config.is_half:
- feats = feats.half()
- else:
- feats = feats.float()
- inputs = {
- "source": feats.to(device),
- "padding_mask": padding_mask.to(device),
- "output_layer": 9 if self.version == "v1" else 12,
- }
- torch.cuda.synchronize()
- with torch.no_grad():
- logits = self.model.extract_features(**inputs)
- feats = (
- self.model.final_proj(logits[0]) if self.version == "v1" else logits[0]
- )
-
- #### Index-based retrieval
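- # Blend each HuBERT frame with a distance-weighted average of its nearest
- # neighbours from the faiss index built over the training-set features;
- # index_rate controls how much of the retrieved content is mixed in.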
- try:
- if (
- hasattr(self, "index")
- and hasattr(self, "big_npy")
- and self.index_rate != 0
- ):
- npy = feats[0].cpu().numpy().astype("float32")
- score, ix = self.index.search(npy, k=8)
- weight = np.square(1 / score)
- weight /= weight.sum(axis=1, keepdims=True)
- npy = np.sum(self.big_npy[ix] * np.expand_dims(weight, axis=2), axis=1)
- if Config.is_half:
- npy = npy.astype("float16")
- feats = (
- torch.from_numpy(npy).unsqueeze(0).to(device) * self.index_rate
- + (1 - self.index_rate) * feats
- )
- else:
- print("index search FAIL or disabled")
- except:
- traceback.print_exc()
- print("index search FAIL")
- feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
- torch.cuda.synchronize()
- print(feats.shape)
- if self.if_f0 == 1:
- pitch, pitchf = self.get_f0(audio, self.f0_up_key)
- p_len = min(feats.shape[1], 13000, pitch.shape[0])  # cap the length to avoid GPU OOM
- else:
- pitch, pitchf = None, None
- p_len = min(feats.shape[1], 13000)  # cap the length to avoid GPU OOM
- torch.cuda.synchronize()
- # print(feats.shape,pitch.shape)
- feats = feats[:, :p_len, :]
- if self.if_f0 == 1:
- pitch = pitch[:p_len]
- pitchf = pitchf[:p_len]
- pitch = torch.LongTensor(pitch).unsqueeze(0).to(device)
- pitchf = torch.FloatTensor(pitchf).unsqueeze(0).to(device)
- p_len = torch.LongTensor([p_len]).to(device)
- ii = 0 # sid
- sid = torch.LongTensor([ii]).to(device)
- with torch.no_grad():
- if self.if_f0 == 1:
- infered_audio = (
- self.net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]
- .data.cpu()
- .float()
- )
- else:
- infered_audio = (
- self.net_g.infer(feats, p_len, sid)[0][0, 0].data.cpu().float()
- )
- torch.cuda.synchronize()
- return infered_audio
-
-
-class GUIConfig:
- def __init__(self) -> None:
- self.hubert_path: str = ""
- self.pth_path: str = ""
- self.index_path: str = ""
- self.npy_path: str = ""
- self.pitch: int = 12
- self.samplerate: int = 44100
- self.block_time: float = 1.0 # s
- self.buffer_num: int = 1
- self.threhold: int = -30
- self.crossfade_time: float = 0.08
- self.extra_time: float = 0.04
- self.I_noise_reduce = False
- self.O_noise_reduce = False
- self.index_rate = 0.3
-
-
-class GUI:
- def __init__(self) -> None:
- self.config = GUIConfig()
- self.flag_vc = False
-
- self.launcher()
-
- def load(self):
- input_devices, output_devices, _, _ = self.get_devices()
- try:
- with open("values1.json", "r") as j:
- data = json.load(j)
- except:
- with open("values1.json", "w") as j:
- data = {
- "pth_path": " ",
- "index_path": " ",
- "sg_input_device": input_devices[sd.default.device[0]],
- "sg_output_device": output_devices[sd.default.device[1]],
- "threhold": "-45",
- "pitch": "0",
- "index_rate": "0",
- "block_time": "1",
- "crossfade_length": "0.04",
- "extra_time": "1",
- }
- json.dump(data, j)
- return data
-
- def launcher(self):
- data = self.load()
- sg.theme("LightBlue3")
- input_devices, output_devices, _, _ = self.get_devices()
- layout = [
- [
- sg.Frame(
- title=i18n("加载模型"),
- layout=[
- [
- sg.Input(
- default_text="hubert_base.pt",
- key="hubert_path",
- disabled=True,
- ),
- sg.FileBrowse(
- i18n("Hubert模型"),
- initial_folder=os.path.join(os.getcwd()),
- file_types=((". pt"),),
- ),
- ],
- [
- sg.Input(
- default_text=data.get("pth_path", ""),
- key="pth_path",
- ),
- sg.FileBrowse(
- i18n("选择.pth文件"),
- initial_folder=os.path.join(os.getcwd(), "weights"),
- file_types=((". pth"),),
- ),
- ],
- [
- sg.Input(
- default_text=data.get("index_path", ""),
- key="index_path",
- ),
- sg.FileBrowse(
- i18n("选择.index文件"),
- initial_folder=os.path.join(os.getcwd(), "logs"),
- file_types=((". index"),),
- ),
- ],
- [
- sg.Input(
- default_text="你不需要填写这个You don't need write this.",
- key="npy_path",
- disabled=True,
- ),
- sg.FileBrowse(
- i18n("选择.npy文件"),
- initial_folder=os.path.join(os.getcwd(), "logs"),
- file_types=((". npy"),),
- ),
- ],
- ],
- )
- ],
- [
- sg.Frame(
- layout=[
- [
- sg.Text(i18n("输入设备")),
- sg.Combo(
- input_devices,
- key="sg_input_device",
- default_value=data.get("sg_input_device", ""),
- ),
- ],
- [
- sg.Text(i18n("输出设备")),
- sg.Combo(
- output_devices,
- key="sg_output_device",
- default_value=data.get("sg_output_device", ""),
- ),
- ],
- ],
- title=i18n("音频设备(请使用同种类驱动)"),
- )
- ],
- [
- sg.Frame(
- layout=[
- [
- sg.Text(i18n("响应阈值")),
- sg.Slider(
- range=(-60, 0),
- key="threhold",
- resolution=1,
- orientation="h",
- default_value=data.get("threhold", ""),
- ),
- ],
- [
- sg.Text(i18n("音调设置")),
- sg.Slider(
- range=(-24, 24),
- key="pitch",
- resolution=1,
- orientation="h",
- default_value=data.get("pitch", ""),
- ),
- ],
- [
- sg.Text(i18n("Index Rate")),
- sg.Slider(
- range=(0.0, 1.0),
- key="index_rate",
- resolution=0.01,
- orientation="h",
- default_value=data.get("index_rate", ""),
- ),
- ],
- ],
- title=i18n("常规设置"),
- ),
- sg.Frame(
- layout=[
- [
- sg.Text(i18n("采样长度")),
- sg.Slider(
- range=(0.1, 3.0),
- key="block_time",
- resolution=0.1,
- orientation="h",
- default_value=data.get("block_time", ""),
- ),
- ],
- [
- sg.Text(i18n("淡入淡出长度")),
- sg.Slider(
- range=(0.01, 0.15),
- key="crossfade_length",
- resolution=0.01,
- orientation="h",
- default_value=data.get("crossfade_length", ""),
- ),
- ],
- [
- sg.Text(i18n("额外推理时长")),
- sg.Slider(
- range=(0.05, 3.00),
- key="extra_time",
- resolution=0.01,
- orientation="h",
- default_value=data.get("extra_time", ""),
- ),
- ],
- [
- sg.Checkbox(i18n("输入降噪"), key="I_noise_reduce"),
- sg.Checkbox(i18n("输出降噪"), key="O_noise_reduce"),
- ],
- ],
- title=i18n("性能设置"),
- ),
- ],
- [
- sg.Button(i18n("开始音频转换"), key="start_vc"),
- sg.Button(i18n("停止音频转换"), key="stop_vc"),
- sg.Text(i18n("推理时间(ms):")),
- sg.Text("0", key="infer_time"),
- ],
- ]
- self.window = sg.Window("RVC - GUI", layout=layout)
- self.event_handler()
-
- def event_handler(self):
- while True:
- event, values = self.window.read()
- if event == sg.WINDOW_CLOSED:
- self.flag_vc = False
- exit()
- if event == "start_vc" and self.flag_vc == False:
- if self.set_values(values) == True:
- print("using_cuda:" + str(torch.cuda.is_available()))
- self.start_vc()
- settings = {
- "pth_path": values["pth_path"],
- "index_path": values["index_path"],
- "sg_input_device": values["sg_input_device"],
- "sg_output_device": values["sg_output_device"],
- "threhold": values["threhold"],
- "pitch": values["pitch"],
- "index_rate": values["index_rate"],
- "block_time": values["block_time"],
- "crossfade_length": values["crossfade_length"],
- "extra_time": values["extra_time"],
- }
- with open("values1.json", "w") as j:
- json.dump(settings, j)
- if event == "stop_vc" and self.flag_vc == True:
- self.flag_vc = False
-
- def set_values(self, values):
- if len(values["pth_path"].strip()) == 0:
- sg.popup(i18n("请选择pth文件"))
- return False
- if len(values["index_path"].strip()) == 0:
- sg.popup(i18n("请选择index文件"))
- return False
- pattern = re.compile("[^\x00-\x7F]+")
- if pattern.findall(values["hubert_path"]):
- sg.popup(i18n("hubert模型路径不可包含中文"))
- return False
- if pattern.findall(values["pth_path"]):
- sg.popup(i18n("pth文件路径不可包含中文"))
- return False
- if pattern.findall(values["index_path"]):
- sg.popup(i18n("index文件路径不可包含中文"))
- return False
- self.set_devices(values["sg_input_device"], values["sg_output_device"])
- self.config.hubert_path = os.path.join(current_dir, "hubert_base.pt")
- self.config.pth_path = values["pth_path"]
- self.config.index_path = values["index_path"]
- self.config.npy_path = values["npy_path"]
- self.config.threhold = values["threhold"]
- self.config.pitch = values["pitch"]
- self.config.block_time = values["block_time"]
- self.config.crossfade_time = values["crossfade_length"]
- self.config.extra_time = values["extra_time"]
- self.config.I_noise_reduce = values["I_noise_reduce"]
- self.config.O_noise_reduce = values["O_noise_reduce"]
- self.config.index_rate = values["index_rate"]
- return True
-
- def start_vc(self):
- torch.cuda.empty_cache()
- self.flag_vc = True
- self.block_frame = int(self.config.block_time * self.config.samplerate)
- self.crossfade_frame = int(self.config.crossfade_time * self.config.samplerate)
- self.sola_search_frame = int(0.012 * self.config.samplerate)
- self.delay_frame = int(0.01 * self.config.samplerate)  # reserve a short lead-in
- self.extra_frame = int(self.config.extra_time * self.config.samplerate)
- self.rvc = None
- self.rvc = RVC(
- self.config.pitch,
- self.config.hubert_path,
- self.config.pth_path,
- self.config.index_path,
- self.config.npy_path,
- self.config.index_rate,
- )
- self.input_wav: np.ndarray = np.zeros(
- self.extra_frame
- + self.crossfade_frame
- + self.sola_search_frame
- + self.block_frame,
- dtype="float32",
- )
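- # Rolling input buffer layout:
- # [ extra context | crossfade | SOLA search | current block ]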
- self.output_wav: torch.Tensor = torch.zeros(
- self.block_frame, device=device, dtype=torch.float32
- )
- self.sola_buffer: torch.Tensor = torch.zeros(
- self.crossfade_frame, device=device, dtype=torch.float32
- )
- self.fade_in_window: torch.Tensor = torch.linspace(
- 0.0, 1.0, steps=self.crossfade_frame, device=device, dtype=torch.float32
- )
- self.fade_out_window: torch.Tensor = 1 - self.fade_in_window
- self.resampler1 = tat.Resample(
- orig_freq=self.config.samplerate, new_freq=16000, dtype=torch.float32
- )
- self.resampler2 = tat.Resample(
- orig_freq=self.rvc.tgt_sr,
- new_freq=self.config.samplerate,
- dtype=torch.float32,
- )
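- # resampler1: device sample rate -> 16 kHz for HuBERT input;
- # resampler2: model output rate -> device playback rate.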
- thread_vc = threading.Thread(target=self.soundinput)
- thread_vc.start()
-
- def soundinput(self):
- """
- Accept audio input.
- """
- with sd.Stream(
- callback=self.audio_callback,
- blocksize=self.block_frame,
- samplerate=self.config.samplerate,
- dtype="float32",
- ):
- while self.flag_vc:
- time.sleep(self.config.block_time)
- print("Audio block passed.")
- print("ENDing VC")
-
- def audio_callback(
- self, indata: np.ndarray, outdata: np.ndarray, frames, times, status
- ):
- """
- Audio processing callback.
- """
- start_time = time.perf_counter()
- indata = librosa.to_mono(indata.T)
- if self.config.I_noise_reduce:
- indata[:] = nr.reduce_noise(y=indata, sr=self.config.samplerate)
-
- """noise gate"""
- frame_length = 2048
- hop_length = 1024
- rms = librosa.feature.rms(
- y=indata, frame_length=frame_length, hop_length=hop_length
- )
- db_threhold = librosa.amplitude_to_db(rms, ref=1.0)[0] < self.config.threhold
- # print(rms.shape,db.shape,db)
- for i in range(db_threhold.shape[0]):
- if db_threhold[i]:
- indata[i * hop_length : (i + 1) * hop_length] = 0
- self.input_wav[:] = np.append(self.input_wav[self.block_frame :], indata)
-
- # infer
- print("input_wav:" + str(self.input_wav.shape))
- # print('infered_wav:'+str(infer_wav.shape))
- infer_wav: torch.Tensor = self.resampler2(
- self.rvc.infer(self.resampler1(torch.from_numpy(self.input_wav)))
- )[-self.crossfade_frame - self.sola_search_frame - self.block_frame :].to(
- device
- )
- print("infer_wav:" + str(infer_wav.shape))
-
- # SOLA algorithm from https://github.com/yxlllc/DDSP-SVC
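- # Find the offset inside the search window where the newly inferred audio
- # best matches the tail of the previous block (normalized cross-correlation
- # via conv1d), so the crossfade below is phase-aligned and click-free.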
- cor_nom = F.conv1d(
- infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame],
- self.sola_buffer[None, None, :],
- )
- cor_den = torch.sqrt(
- F.conv1d(
- infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame]
- ** 2,
- torch.ones(1, 1, self.crossfade_frame, device=device),
- )
- + 1e-8
- )
- sola_offset = torch.argmax(cor_nom[0, 0] / cor_den[0, 0])
- print("sola offset: " + str(int(sola_offset)))
-
- # crossfade
- self.output_wav[:] = infer_wav[sola_offset : sola_offset + self.block_frame]
- self.output_wav[: self.crossfade_frame] *= self.fade_in_window
- self.output_wav[: self.crossfade_frame] += self.sola_buffer[:]
- if sola_offset < self.sola_search_frame:
- self.sola_buffer[:] = (
- infer_wav[
- -self.sola_search_frame
- - self.crossfade_frame
- + sola_offset : -self.sola_search_frame
- + sola_offset
- ]
- * self.fade_out_window
- )
- else:
- self.sola_buffer[:] = (
- infer_wav[-self.crossfade_frame :] * self.fade_out_window
- )
-
- if self.config.O_noise_reduce:
- outdata[:] = np.tile(
- nr.reduce_noise(
- y=self.output_wav[:].cpu().numpy(), sr=self.config.samplerate
- ),
- (2, 1),
- ).T
- else:
- outdata[:] = self.output_wav[:].repeat(2, 1).t().cpu().numpy()
- total_time = time.perf_counter() - start_time
- self.window["infer_time"].update(int(total_time * 1000))
- print("infer time:" + str(total_time))
-
- def get_devices(self, update: bool = True):
- """获取设备列表"""
- if update:
- sd._terminate()
- sd._initialize()
- devices = sd.query_devices()
- hostapis = sd.query_hostapis()
- for hostapi in hostapis:
- for device_idx in hostapi["devices"]:
- devices[device_idx]["hostapi_name"] = hostapi["name"]
- input_devices = [
- f"{d['name']} ({d['hostapi_name']})"
- for d in devices
- if d["max_input_channels"] > 0
- ]
- output_devices = [
- f"{d['name']} ({d['hostapi_name']})"
- for d in devices
- if d["max_output_channels"] > 0
- ]
- input_devices_indices = [
- d["index"] if "index" in d else d["name"]
- for d in devices
- if d["max_input_channels"] > 0
- ]
- output_devices_indices = [
- d["index"] if "index" in d else d["name"]
- for d in devices
- if d["max_output_channels"] > 0
- ]
- return (
- input_devices,
- output_devices,
- input_devices_indices,
- output_devices_indices,
- )
-
- def set_devices(self, input_device, output_device):
- """设置输出设备"""
- (
- input_devices,
- output_devices,
- input_device_indices,
- output_device_indices,
- ) = self.get_devices()
- sd.default.device[0] = input_device_indices[input_devices.index(input_device)]
- sd.default.device[1] = output_device_indices[
- output_devices.index(output_device)
- ]
- print("input device:" + str(sd.default.device[0]) + ":" + str(input_device))
- print("output device:" + str(sd.default.device[1]) + ":" + str(output_device))
-
-
-gui = GUI()
diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/slicer2.py b/spaces/AI-Hobbyist/Hoyo-RVC/slicer2.py
deleted file mode 100644
index 5b29ee262aa54045e807be2cffeb41687499ba58..0000000000000000000000000000000000000000
--- a/spaces/AI-Hobbyist/Hoyo-RVC/slicer2.py
+++ /dev/null
@@ -1,260 +0,0 @@
-import numpy as np
-
-
-# This function is obtained from librosa.
-def get_rms(
- y,
- frame_length=2048,
- hop_length=512,
- pad_mode="constant",
-):
- padding = (int(frame_length // 2), int(frame_length // 2))
- y = np.pad(y, padding, mode=pad_mode)
-
- axis = -1
- # put our new within-frame axis at the end for now
- out_strides = y.strides + tuple([y.strides[axis]])
- # Reduce the shape on the framing axis
- x_shape_trimmed = list(y.shape)
- x_shape_trimmed[axis] -= frame_length - 1
- out_shape = tuple(x_shape_trimmed) + tuple([frame_length])
- xw = np.lib.stride_tricks.as_strided(y, shape=out_shape, strides=out_strides)
- if axis < 0:
- target_axis = axis - 1
- else:
- target_axis = axis + 1
- xw = np.moveaxis(xw, -1, target_axis)
- # Downsample along the target axis
- slices = [slice(None)] * xw.ndim
- slices[axis] = slice(0, None, hop_length)
- x = xw[tuple(slices)]
-
- # Calculate power
- power = np.mean(np.abs(x) ** 2, axis=-2, keepdims=True)
-
- return np.sqrt(power)
-
-
-class Slicer:
- def __init__(
- self,
- sr: int,
- threshold: float = -40.0,
- min_length: int = 5000,
- min_interval: int = 300,
- hop_size: int = 20,
- max_sil_kept: int = 5000,
- ):
- if not min_length >= min_interval >= hop_size:
- raise ValueError(
- "The following condition must be satisfied: min_length >= min_interval >= hop_size"
- )
- if not max_sil_kept >= hop_size:
- raise ValueError(
- "The following condition must be satisfied: max_sil_kept >= hop_size"
- )
- min_interval = sr * min_interval / 1000
- self.threshold = 10 ** (threshold / 20.0)
- self.hop_size = round(sr * hop_size / 1000)
- self.win_size = min(round(min_interval), 4 * self.hop_size)
- self.min_length = round(sr * min_length / 1000 / self.hop_size)
- self.min_interval = round(min_interval / self.hop_size)
- self.max_sil_kept = round(sr * max_sil_kept / 1000 / self.hop_size)
-
- def _apply_slice(self, waveform, begin, end):
- if len(waveform.shape) > 1:
- return waveform[
- :, begin * self.hop_size : min(waveform.shape[1], end * self.hop_size)
- ]
- else:
- return waveform[
- begin * self.hop_size : min(waveform.shape[0], end * self.hop_size)
- ]
-
- # @timeit
- def slice(self, waveform):
- if len(waveform.shape) > 1:
- samples = waveform.mean(axis=0)
- else:
- samples = waveform
- if samples.shape[0] <= self.min_length:
- return [waveform]
- rms_list = get_rms(
- y=samples, frame_length=self.win_size, hop_length=self.hop_size
- ).squeeze(0)
- sil_tags = []
- silence_start = None
- clip_start = 0
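- # Scan frame-level RMS values while tracking runs of silence; each
- # sufficiently long run yields a (start, end) tag of frames to cut below.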
- for i, rms in enumerate(rms_list):
- # Keep looping while frame is silent.
- if rms < self.threshold:
- # Record start of silent frames.
- if silence_start is None:
- silence_start = i
- continue
- # Keep looping while frame is not silent and silence start has not been recorded.
- if silence_start is None:
- continue
- # Clear recorded silence start if interval is not enough or clip is too short
- is_leading_silence = silence_start == 0 and i > self.max_sil_kept
- need_slice_middle = (
- i - silence_start >= self.min_interval
- and i - clip_start >= self.min_length
- )
- if not is_leading_silence and not need_slice_middle:
- silence_start = None
- continue
- # Need slicing. Record the range of silent frames to be removed.
- if i - silence_start <= self.max_sil_kept:
- pos = rms_list[silence_start : i + 1].argmin() + silence_start
- if silence_start == 0:
- sil_tags.append((0, pos))
- else:
- sil_tags.append((pos, pos))
- clip_start = pos
- elif i - silence_start <= self.max_sil_kept * 2:
- pos = rms_list[
- i - self.max_sil_kept : silence_start + self.max_sil_kept + 1
- ].argmin()
- pos += i - self.max_sil_kept
- pos_l = (
- rms_list[
- silence_start : silence_start + self.max_sil_kept + 1
- ].argmin()
- + silence_start
- )
- pos_r = (
- rms_list[i - self.max_sil_kept : i + 1].argmin()
- + i
- - self.max_sil_kept
- )
- if silence_start == 0:
- sil_tags.append((0, pos_r))
- clip_start = pos_r
- else:
- sil_tags.append((min(pos_l, pos), max(pos_r, pos)))
- clip_start = max(pos_r, pos)
- else:
- pos_l = (
- rms_list[
- silence_start : silence_start + self.max_sil_kept + 1
- ].argmin()
- + silence_start
- )
- pos_r = (
- rms_list[i - self.max_sil_kept : i + 1].argmin()
- + i
- - self.max_sil_kept
- )
- if silence_start == 0:
- sil_tags.append((0, pos_r))
- else:
- sil_tags.append((pos_l, pos_r))
- clip_start = pos_r
- silence_start = None
- # Deal with trailing silence.
- total_frames = rms_list.shape[0]
- if (
- silence_start is not None
- and total_frames - silence_start >= self.min_interval
- ):
- silence_end = min(total_frames, silence_start + self.max_sil_kept)
- pos = rms_list[silence_start : silence_end + 1].argmin() + silence_start
- sil_tags.append((pos, total_frames + 1))
- # Apply and return slices.
- if len(sil_tags) == 0:
- return [waveform]
- else:
- chunks = []
- if sil_tags[0][0] > 0:
- chunks.append(self._apply_slice(waveform, 0, sil_tags[0][0]))
- for i in range(len(sil_tags) - 1):
- chunks.append(
- self._apply_slice(waveform, sil_tags[i][1], sil_tags[i + 1][0])
- )
- if sil_tags[-1][1] < total_frames:
- chunks.append(
- self._apply_slice(waveform, sil_tags[-1][1], total_frames)
- )
- return chunks
-
-
-def main():
- import os.path
- from argparse import ArgumentParser
-
- import librosa
- import soundfile
-
- parser = ArgumentParser()
- parser.add_argument("audio", type=str, help="The audio to be sliced")
- parser.add_argument(
- "--out", type=str, help="Output directory of the sliced audio clips"
- )
- parser.add_argument(
- "--db_thresh",
- type=float,
- required=False,
- default=-40,
- help="The dB threshold for silence detection",
- )
- parser.add_argument(
- "--min_length",
- type=int,
- required=False,
- default=5000,
- help="The minimum milliseconds required for each sliced audio clip",
- )
- parser.add_argument(
- "--min_interval",
- type=int,
- required=False,
- default=300,
- help="The minimum milliseconds for a silence part to be sliced",
- )
- parser.add_argument(
- "--hop_size",
- type=int,
- required=False,
- default=10,
- help="Frame length in milliseconds",
- )
- parser.add_argument(
- "--max_sil_kept",
- type=int,
- required=False,
- default=500,
- help="The maximum silence length kept around the sliced clip, presented in milliseconds",
- )
- args = parser.parse_args()
- out = args.out
- if out is None:
- out = os.path.dirname(os.path.abspath(args.audio))
- audio, sr = librosa.load(args.audio, sr=None, mono=False)
- slicer = Slicer(
- sr=sr,
- threshold=args.db_thresh,
- min_length=args.min_length,
- min_interval=args.min_interval,
- hop_size=args.hop_size,
- max_sil_kept=args.max_sil_kept,
- )
- chunks = slicer.slice(audio)
- if not os.path.exists(out):
- os.makedirs(out)
- for i, chunk in enumerate(chunks):
- if len(chunk.shape) > 1:
- chunk = chunk.T
- soundfile.write(
- os.path.join(
- out,
- f"%s_%d.wav"
- % (os.path.basename(args.audio).rsplit(".", maxsplit=1)[0], i),
- ),
- chunk,
- sr,
- )
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/data_gen/tts/data_gen_utils.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/data_gen/tts/data_gen_utils.py
deleted file mode 100644
index 57ccb7c0200de5124908db2cba0347baf2663adc..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/data_gen/tts/data_gen_utils.py
+++ /dev/null
@@ -1,357 +0,0 @@
-import warnings
-
-warnings.filterwarnings("ignore")
-
-import parselmouth
-import os
-import torch
-from skimage.transform import resize
-from utils.text_encoder import TokenTextEncoder
-from utils.pitch_utils import f0_to_coarse
-import struct
-import webrtcvad
-from scipy.ndimage import binary_dilation  # scipy.ndimage.morphology is deprecated
-import librosa
-import numpy as np
-from utils import audio
-import pyloudnorm as pyln
-import re
-import json
-from collections import OrderedDict
-
-PUNCS = '!,.?;:'
-
-int16_max = (2 ** 15) - 1
-
-
-def trim_long_silences(path, sr=None, return_raw_wav=False, norm=True, vad_max_silence_length=12):
- """
- Ensures that segments without voice in the waveform remain no longer than a
- threshold determined by the VAD parameters below.
- :param path: path of the audio file to load
- :param vad_max_silence_length: maximum number of consecutive silent frames a segment can have
- :return: (waveform, audio_mask, sr); the waveform has silences trimmed away
- unless return_raw_wav is True, in which case the raw waveform is returned
- """
-
- ## Voice Activation Detection
- # Window size of the VAD. Must be either 10, 20 or 30 milliseconds.
- # This sets the granularity of the VAD. Should not need to be changed.
- sampling_rate = 16000
- wav_raw, sr = librosa.core.load(path, sr=sr)
-
- if norm:
- meter = pyln.Meter(sr) # create BS.1770 meter
- loudness = meter.integrated_loudness(wav_raw)
- wav_raw = pyln.normalize.loudness(wav_raw, loudness, -20.0)
- if np.abs(wav_raw).max() > 1.0:
- wav_raw = wav_raw / np.abs(wav_raw).max()
-
-    wav = librosa.resample(wav_raw, orig_sr=sr, target_sr=sampling_rate, res_type='kaiser_best')
-
- vad_window_length = 30 # In milliseconds
- # Number of frames to average together when performing the moving average smoothing.
- # The larger this value, the larger the VAD variations must be to not get smoothed out.
- vad_moving_average_width = 8
-
- # Compute the voice detection window size
- samples_per_window = (vad_window_length * sampling_rate) // 1000
-
- # Trim the end of the audio to have a multiple of the window size
- wav = wav[:len(wav) - (len(wav) % samples_per_window)]
-
- # Convert the float waveform to 16-bit mono PCM
- pcm_wave = struct.pack("%dh" % len(wav), *(np.round(wav * int16_max)).astype(np.int16))
-
- # Perform voice activation detection
- voice_flags = []
- vad = webrtcvad.Vad(mode=3)
- for window_start in range(0, len(wav), samples_per_window):
- window_end = window_start + samples_per_window
- voice_flags.append(vad.is_speech(pcm_wave[window_start * 2:window_end * 2],
- sample_rate=sampling_rate))
- voice_flags = np.array(voice_flags)
-
- # Smooth the voice detection with a moving average
- def moving_average(array, width):
- array_padded = np.concatenate((np.zeros((width - 1) // 2), array, np.zeros(width // 2)))
- ret = np.cumsum(array_padded, dtype=float)
- ret[width:] = ret[width:] - ret[:-width]
- return ret[width - 1:] / width
-
- audio_mask = moving_average(voice_flags, vad_moving_average_width)
-    audio_mask = np.round(audio_mask).astype(bool)
-
- # Dilate the voiced regions
- audio_mask = binary_dilation(audio_mask, np.ones(vad_max_silence_length + 1))
- audio_mask = np.repeat(audio_mask, samples_per_window)
- audio_mask = resize(audio_mask, (len(wav_raw),)) > 0
- if return_raw_wav:
- return wav_raw, audio_mask, sr
- return wav_raw[audio_mask], audio_mask, sr
-
-
-def process_utterance(wav_path,
- fft_size=1024,
- hop_size=256,
- win_length=1024,
- window="hann",
- num_mels=80,
- fmin=80,
- fmax=7600,
- eps=1e-6,
- sample_rate=22050,
- loud_norm=False,
- min_level_db=-100,
- return_linear=False,
- trim_long_sil=False, vocoder='pwg'):
- if isinstance(wav_path, str):
- if trim_long_sil:
- wav, _, _ = trim_long_silences(wav_path, sample_rate)
- else:
- wav, _ = librosa.core.load(wav_path, sr=sample_rate)
- else:
- wav = wav_path
-
- if loud_norm:
- meter = pyln.Meter(sample_rate) # create BS.1770 meter
- loudness = meter.integrated_loudness(wav)
- wav = pyln.normalize.loudness(wav, loudness, -22.0)
- if np.abs(wav).max() > 1:
- wav = wav / np.abs(wav).max()
-
- # get amplitude spectrogram
- x_stft = librosa.stft(wav, n_fft=fft_size, hop_length=hop_size,
- win_length=win_length, window=window, pad_mode="constant")
- spc = np.abs(x_stft) # (n_bins, T)
-
- # get mel basis
- fmin = 0 if fmin == -1 else fmin
- fmax = sample_rate / 2 if fmax == -1 else fmax
-    mel_basis = librosa.filters.mel(sr=sample_rate, n_fft=fft_size, n_mels=num_mels, fmin=fmin, fmax=fmax)
- mel = mel_basis @ spc
-
- if vocoder == 'pwg':
- mel = np.log10(np.maximum(eps, mel)) # (n_mel_bins, T)
- else:
- assert False, f'"{vocoder}" is not in ["pwg"].'
-
- l_pad, r_pad = audio.librosa_pad_lr(wav, fft_size, hop_size, 1)
- wav = np.pad(wav, (l_pad, r_pad), mode='constant', constant_values=0.0)
- wav = wav[:mel.shape[1] * hop_size]
-
- if not return_linear:
- return wav, mel
- else:
- spc = audio.amp_to_db(spc)
- spc = audio.normalize(spc, {'min_level_db': min_level_db})
- return wav, mel, spc
-
-
-def get_pitch(wav_data, mel, hparams):
- """
-
- :param wav_data: [T]
- :param mel: [T, 80]
- :param hparams:
- :return:
- """
- time_step = hparams['hop_size'] / hparams['audio_sample_rate'] * 1000
- f0_min = 80
- f0_max = 750
-
- if hparams['hop_size'] == 128:
- pad_size = 4
- elif hparams['hop_size'] == 256:
- pad_size = 2
- else:
- assert False
-
- f0 = parselmouth.Sound(wav_data, hparams['audio_sample_rate']).to_pitch_ac(
- time_step=time_step / 1000, voicing_threshold=0.6,
- pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency']
- lpad = pad_size * 2
- rpad = len(mel) - len(f0) - lpad
- f0 = np.pad(f0, [[lpad, rpad]], mode='constant')
-    # mel and f0 are extracted by two different libraries, so we force them to the same length.
-    # Attention: newer versions of some libraries can cause ``rpad'' to be a negative value...
-    # Just to be sure, we recommend setting up the same environment as in requirements_auto.txt (via Anaconda).
- delta_l = len(mel) - len(f0)
- assert np.abs(delta_l) <= 8
- if delta_l > 0:
- f0 = np.concatenate([f0, [f0[-1]] * delta_l], 0)
- f0 = f0[:len(mel)]
- pitch_coarse = f0_to_coarse(f0)
- return f0, pitch_coarse
-
-
-def remove_empty_lines(text):
- """remove empty lines"""
- assert (len(text) > 0)
- assert (isinstance(text, list))
- text = [t.strip() for t in text]
- if "" in text:
- text.remove("")
- return text
-
-
-class TextGrid(object):
- def __init__(self, text):
- text = remove_empty_lines(text)
- self.text = text
- self.line_count = 0
- self._get_type()
- self._get_time_intval()
- self._get_size()
- self.tier_list = []
- self._get_item_list()
-
- def _extract_pattern(self, pattern, inc):
- """
- Parameters
- ----------
-        pattern : regex whose first capture group is extracted
- inc : increment of line count after extraction
- Returns
- -------
- group : extracted info
- """
- try:
- group = re.match(pattern, self.text[self.line_count]).group(1)
- self.line_count += inc
- except AttributeError:
- raise ValueError("File format error at line %d:%s" % (self.line_count, self.text[self.line_count]))
- return group
-
- def _get_type(self):
- self.file_type = self._extract_pattern(r"File type = \"(.*)\"", 2)
-
- def _get_time_intval(self):
- self.xmin = self._extract_pattern(r"xmin = (.*)", 1)
- self.xmax = self._extract_pattern(r"xmax = (.*)", 2)
-
- def _get_size(self):
- self.size = int(self._extract_pattern(r"size = (.*)", 2))
-
- def _get_item_list(self):
- """Only supports IntervalTier currently"""
- for itemIdx in range(1, self.size + 1):
- tier = OrderedDict()
- item_list = []
- tier_idx = self._extract_pattern(r"item \[(.*)\]:", 1)
- tier_class = self._extract_pattern(r"class = \"(.*)\"", 1)
- if tier_class != "IntervalTier":
- raise NotImplementedError("Only IntervalTier class is supported currently")
- tier_name = self._extract_pattern(r"name = \"(.*)\"", 1)
- tier_xmin = self._extract_pattern(r"xmin = (.*)", 1)
- tier_xmax = self._extract_pattern(r"xmax = (.*)", 1)
- tier_size = self._extract_pattern(r"intervals: size = (.*)", 1)
- for i in range(int(tier_size)):
- item = OrderedDict()
- item["idx"] = self._extract_pattern(r"intervals \[(.*)\]", 1)
- item["xmin"] = self._extract_pattern(r"xmin = (.*)", 1)
- item["xmax"] = self._extract_pattern(r"xmax = (.*)", 1)
- item["text"] = self._extract_pattern(r"text = \"(.*)\"", 1)
- item_list.append(item)
- tier["idx"] = tier_idx
- tier["class"] = tier_class
- tier["name"] = tier_name
- tier["xmin"] = tier_xmin
- tier["xmax"] = tier_xmax
- tier["size"] = tier_size
- tier["items"] = item_list
- self.tier_list.append(tier)
-
- def toJson(self):
- _json = OrderedDict()
- _json["file_type"] = self.file_type
- _json["xmin"] = self.xmin
- _json["xmax"] = self.xmax
- _json["size"] = self.size
- _json["tiers"] = self.tier_list
- return json.dumps(_json, ensure_ascii=False, indent=2)
-
-
-def get_mel2ph(tg_fn, ph, mel, hparams):
- ph_list = ph.split(" ")
- with open(tg_fn, "r") as f:
- tg = f.readlines()
- tg = remove_empty_lines(tg)
- tg = TextGrid(tg)
- tg = json.loads(tg.toJson())
-    split = np.ones(len(ph_list) + 1, float) * -1
- tg_idx = 0
- ph_idx = 0
- tg_align = [x for x in tg['tiers'][-1]['items']]
- tg_align_ = []
- for x in tg_align:
- x['xmin'] = float(x['xmin'])
- x['xmax'] = float(x['xmax'])
- if x['text'] in ['sil', 'sp', '', 'SIL', 'PUNC']:
- x['text'] = ''
- if len(tg_align_) > 0 and tg_align_[-1]['text'] == '':
- tg_align_[-1]['xmax'] = x['xmax']
- continue
- tg_align_.append(x)
- tg_align = tg_align_
- tg_len = len([x for x in tg_align if x['text'] != ''])
- ph_len = len([x for x in ph_list if not is_sil_phoneme(x)])
- assert tg_len == ph_len, (tg_len, ph_len, tg_align, ph_list, tg_fn)
- while tg_idx < len(tg_align) or ph_idx < len(ph_list):
- if tg_idx == len(tg_align) and is_sil_phoneme(ph_list[ph_idx]):
- split[ph_idx] = 1e8
- ph_idx += 1
- continue
- x = tg_align[tg_idx]
- if x['text'] == '' and ph_idx == len(ph_list):
- tg_idx += 1
- continue
- assert ph_idx < len(ph_list), (tg_len, ph_len, tg_align, ph_list, tg_fn)
- ph = ph_list[ph_idx]
- if x['text'] == '' and not is_sil_phoneme(ph):
- assert False, (ph_list, tg_align)
- if x['text'] != '' and is_sil_phoneme(ph):
- ph_idx += 1
- else:
- assert (x['text'] == '' and is_sil_phoneme(ph)) \
- or x['text'].lower() == ph.lower() \
- or x['text'].lower() == 'sil', (x['text'], ph)
- split[ph_idx] = x['xmin']
- if ph_idx > 0 and split[ph_idx - 1] == -1 and is_sil_phoneme(ph_list[ph_idx - 1]):
- split[ph_idx - 1] = split[ph_idx]
- ph_idx += 1
- tg_idx += 1
- assert tg_idx == len(tg_align), (tg_idx, [x['text'] for x in tg_align])
- assert ph_idx >= len(ph_list) - 1, (ph_idx, ph_list, len(ph_list), [x['text'] for x in tg_align], tg_fn)
-    mel2ph = np.zeros([mel.shape[0]], int)
- split[0] = 0
- split[-1] = 1e8
- for i in range(len(split) - 1):
- assert split[i] != -1 and split[i] <= split[i + 1], (split[:-1],)
- split = [int(s * hparams['audio_sample_rate'] / hparams['hop_size'] + 0.5) for s in split]
- for ph_idx in range(len(ph_list)):
- mel2ph[split[ph_idx]:split[ph_idx + 1]] = ph_idx + 1
- mel2ph_torch = torch.from_numpy(mel2ph)
- T_t = len(ph_list)
- dur = mel2ph_torch.new_zeros([T_t + 1]).scatter_add(0, mel2ph_torch, torch.ones_like(mel2ph_torch))
- dur = dur[1:].numpy()
- return mel2ph, dur
-
-
-def build_phone_encoder(data_dir):
- phone_list_file = os.path.join(data_dir, 'phone_set.json')
- phone_list = json.load(open(phone_list_file))
- return TokenTextEncoder(None, vocab_list=phone_list, replace_oov=',')
-
-
-def build_word_encoder(data_dir):
- word_list_file = os.path.join(data_dir, 'word_set.json')
- word_list = json.load(open(word_list_file))
- return TokenTextEncoder(None, vocab_list=word_list, replace_oov=',')
-
-def is_sil_phoneme(p):
- return not p[0].isalpha()
-
-
-def build_token_encoder(token_list_file):
- token_list = json.load(open(token_list_file))
- return TokenTextEncoder(None, vocab_list=token_list, replace_oov='')
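
As a usage note, `get_mel2ph` above reads a Praat TextGrid by piping its lines through `remove_empty_lines`, `TextGrid`, and `toJson`; here is a minimal standalone sketch of the same pattern (the module name `data_gen_utils` and the file path are assumptions):

```python
import json

from data_gen_utils import TextGrid, remove_empty_lines  # hypothetical module name

with open("utt_0001.TextGrid") as f:
    lines = f.readlines()

tg = TextGrid(remove_empty_lines(lines))
info = json.loads(tg.toJson())

# The last tier holds the phone intervals that get_mel2ph aligns against.
for item in info["tiers"][-1]["items"]:
    print(float(item["xmin"]), float(item["xmax"]), item["text"])
```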
diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/data_gen/tts/emotion/params_model.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/data_gen/tts/emotion/params_model.py
deleted file mode 100644
index 48f8e564e772649e3207c7a90bff1bee9e6b3a47..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/data_gen/tts/emotion/params_model.py
+++ /dev/null
@@ -1,11 +0,0 @@
-
-## Model parameters
-model_hidden_size = 256
-model_embedding_size = 256
-model_num_layers = 3
-
-
-## Training parameters
-learning_rate_init = 1e-4
-speakers_per_batch = 6
-utterances_per_speaker = 20
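
These hyperparameters follow the usual GE2E-style encoder setup; a hedged sketch of how such values are typically consumed (the LSTM-plus-projection shape and the 40-channel mel input are assumptions, not taken from this repo):

```python
import torch
import torch.nn as nn

from params_model import (model_embedding_size, model_hidden_size,
                          model_num_layers, speakers_per_batch,
                          utterances_per_speaker)

# Assumed 40-channel mel input, as is common for GE2E-style encoders.
lstm = nn.LSTM(input_size=40, hidden_size=model_hidden_size,
               num_layers=model_num_layers, batch_first=True)
proj = nn.Linear(model_hidden_size, model_embedding_size)

batch = speakers_per_batch * utterances_per_speaker  # 6 x 20 = 120 utterances
x = torch.randn(batch, 160, 40)  # (batch, frames, mel channels)
_, (hidden, _) = lstm(x)
embeds = proj(hidden[-1])        # (batch, model_embedding_size)
```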
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/custom_dataset/.ipynb_checkpoints/yolov6_s_fast-checkpoint.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/custom_dataset/.ipynb_checkpoints/yolov6_s_fast-checkpoint.py
deleted file mode 100644
index 5e04123bb59ed5b29bbea891f3456a81a5ed4a9f..0000000000000000000000000000000000000000
--- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/custom_dataset/.ipynb_checkpoints/yolov6_s_fast-checkpoint.py
+++ /dev/null
@@ -1,124 +0,0 @@
-_base_ = '../yolov6/yolov6_s_syncbn_fast_8xb32-400e_coco.py'
-
-max_epochs = 100  # maximum number of training epochs
-data_root = './data-df2/'  # absolute path to the dataset directory
-
-# Path where results are saved; may be omitted, in which case outputs go to a folder under work_dirs named after the config
-# If a config only tweaks a few parameters, changing this variable lets the new training run be saved somewhere else
-work_dir = './work_dirs/yolov6_s_df2'
-
-# Adjust the batch size to your GPU setup; YOLOv5-s defaults to 8 GPUs x 16 batch size
-train_batch_size_per_gpu = 32
-train_num_workers = 4  # recommended: train_num_workers = nGPU x 4
-
-save_epoch_intervals = 2  # save the weights once every `interval` epochs
-
-# Adjust base_lr to your GPU setup, scaling it as base_lr_default * (your_bs / default_bs)
-base_lr = _base_.base_lr / 4
-
-class_name = ('short_sleeved_shirt',
- 'long_sleeved_shirt',
- 'short_sleeved_outwear',
- 'long_sleeved_outwear',
- 'vest',
- 'sling',
- 'shorts',
- 'trousers',
- 'skirt',
- 'short_sleeved_dress',
- 'long_sleeved_dress',
- 'vest_dress',
-              'sling_dress')  # set class_name according to the class info in class_with_id.txt
-
-num_classes = len(class_name)
-metainfo = dict(
- classes=class_name,
- palette=[(255, 0, 0),
- (255, 128, 0),
- (255, 255, 0),
- (128, 255, 0),
- (0, 255, 0),
- (0, 255, 128),
- (0, 255, 255),
- (0, 128, 255),
- (0, 0, 255),
- (127, 0, 255),
- (255, 0, 255),
- (255, 0, 127),
-             (128, 128, 128)]  # colors used when plotting; arbitrary values are fine
-)
-
-train_cfg = dict(
- max_epochs=max_epochs,
-    val_begin=20,  # epoch after which validation starts; the first 20 epochs have low accuracy, so evaluating them adds little
-    val_interval=save_epoch_intervals,  # run an evaluation every val_interval epochs
- dynamic_intervals=[(max_epochs-_base_.num_last_epochs, 1)]
-)
-
-model = dict(
- bbox_head=dict(
- head_module=dict(num_classes=num_classes)),
- train_cfg=dict(
- initial_assigner=dict(num_classes=num_classes),
- assigner=dict(num_classes=num_classes)
- )
-)
-
-train_dataloader = dict(
- batch_size=train_batch_size_per_gpu,
- num_workers=train_num_workers,
- dataset=dict(
- _delete_=True,
- type='RepeatDataset',
-        # with little data, RepeatDataset can repeat the dataset n times per epoch; times=2 repeats it twice here
- times=2,
- dataset=dict(
- type=_base_.dataset_type,
- data_root=data_root,
- metainfo=metainfo,
- ann_file='annotations/trainval.json',
- data_prefix=dict(img='smaller-dataset/'),
- filter_cfg=dict(filter_empty_gt=False, min_size=32),
- pipeline=_base_.train_pipeline)))
-
-val_dataloader = dict(
- dataset=dict(
- metainfo=metainfo,
- data_root=data_root,
- ann_file='annotations/trainval.json',
- data_prefix=dict(img='smaller-dataset/')))
-
-test_dataloader = val_dataloader
-
-val_evaluator = dict(ann_file=data_root + 'annotations/trainval.json')
-test_evaluator = val_evaluator
-
-optim_wrapper = dict(optimizer=dict(lr=base_lr))
-
-default_hooks = dict(
-    # how many epochs between checkpoint saves and how many checkpoints to keep at most; `save_best` additionally saves the best model (recommended)
- checkpoint=dict(
- type='CheckpointHook',
- interval=save_epoch_intervals,
- max_keep_ckpts=5,
- save_best='auto'),
- param_scheduler=dict(max_epochs=max_epochs),
-    # interval between logger outputs
- logger=dict(type='LoggerHook', interval=10))
-
-custom_hooks = [
- dict(
- type="EMAHook",
- ema_type="ExpMomentumEMA",
- momentum=0.0001,
- update_buffers=True,
- strict_load=False,
- priority=49),
- dict(
- type="mmdet.PipelineSwitchHook",
-        switch_epoch=max_epochs - _base_.num_last_epochs,
- switch_pipeline=_base_.train_pipeline_stage2
- )
-]
-
-visualizer = dict(vis_backends=[dict(type='LocalVisBackend'), dict(type='WandbVisBackend')])
\ No newline at end of file
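
A quick way to sanity-check a config like this is to load it with MMEngine and inspect the resolved fields; a minimal sketch, assuming the config file sits at the path below and its `_base_` files from mmyolo are on disk:

```python
from mmengine.config import Config

cfg = Config.fromfile('configs/custom_dataset/yolov6_s_fast.py')  # assumed path

print(cfg.work_dir)                    # ./work_dirs/yolov6_s_df2
print(cfg.train_cfg.max_epochs)        # 100
print(len(cfg.metainfo['classes']))    # 13
print(cfg.optim_wrapper.optimizer.lr)  # resolved base_lr / 4
```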
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Better.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Better.py
deleted file mode 100644
index 07d6a04f1e092073365ce016debb2a170d95e891..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Better.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import os
-import json
-import requests
-from typing import Dict, get_type_hints
-
-url = 'https://openai-proxy-api.vercel.app/v1/'
-model = [
- 'gpt-3.5-turbo',
- 'gpt-3.5-turbo-0613',
- 'gpt-3.5-turbo-16k',
- 'gpt-3.5-turbo-16k-0613',
- 'gpt-4',
-]
-
-supports_stream = True
-needs_auth = False
-
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
- headers = {
- 'Content-Type': 'application/json',
- 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36 Edg/114.0.1823.58',
- 'Referer': 'https://chat.ylokh.xyz/',
- 'Origin': 'https://chat.ylokh.xyz',
- 'Connection': 'keep-alive',
- }
-
- json_data = {
- 'messages': messages,
- 'temperature': 1.0,
- 'model': model,
- 'stream': stream,
- }
-
-    response = requests.post(
-        url + 'chat/completions', headers=headers, json=json_data, stream=True
-    )
-
- for token in response.iter_lines():
- decoded = token.decode('utf-8')
- if decoded.startswith('data: '):
-            data_str = decoded.replace('data: ', '', 1)
-            if data_str.strip() == '[DONE]':  # end-of-stream sentinel
-                break
-            data = json.loads(data_str)
-            if 'choices' in data and 'delta' in data['choices'][0]:
-                delta = data['choices'][0]['delta']
-                content = delta.get('content', '')
-                # finish_reason is reported on the choice object, not inside the delta
-                finish_reason = data['choices'][0].get('finish_reason', '')
-
- if finish_reason == 'stop':
- break
- if content:
- yield content
-
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + '(%s)' % ', '.join(
- [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
-
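
For context, g4f providers such as this one expose `_create_completion` as a streaming generator; a minimal sketch of consuming it directly (the import path is an assumption, and the upstream proxy must be reachable):

```python
from g4f.Provider.Providers import Better  # assumed import path

messages = [{"role": "user", "content": "Say hello in one sentence."}]

# Tokens are yielded as they arrive from the proxied SSE stream.
for token in Better._create_completion(
        model="gpt-3.5-turbo", messages=messages, stream=True):
    print(token, end="", flush=True)
print()
```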
diff --git a/spaces/Alcedo/yunmedia/resources/chatgpt-plugin/live2d/live2dcubismcore.min.js b/spaces/Alcedo/yunmedia/resources/chatgpt-plugin/live2d/live2dcubismcore.min.js
deleted file mode 100644
index 6ff22a9ef5a05e2c81afaaa216d74149a4b3d2f8..0000000000000000000000000000000000000000
--- a/spaces/Alcedo/yunmedia/resources/chatgpt-plugin/live2d/live2dcubismcore.min.js
+++ /dev/null
@@ -1,9 +0,0 @@
-/**
- * Live2D Cubism Core
- * (C) 2019 Live2D Inc. All rights reserved.
- *
- * This file is licensed pursuant to the license agreement below.
- * This file corresponds to the "Redistributable Code" in the agreement.
- * https://www.live2d.com/eula/live2d-proprietary-software-license-agreement_en.html
- */
-var Live2DCubismCore;!function(Live2DCubismCore){var _scriptDir,_csm=function(){function _csm(){}return _csm.getVersion=function(){return _em.ccall("csmGetVersion","number",[],[])},_csm.getLatestMocVersion=function(){return _em.ccall("csmGetLatestMocVersion","number",[],[])},_csm.getMocVersion=function(moc,mocSize){return _em.ccall("csmGetMocVersion","number",["number","number"],[moc,mocSize])},_csm.getSizeofModel=function(moc){return _em.ccall("csmGetSizeofModel","number",["number"],[moc])},_csm.reviveMocInPlace=function(memory,mocSize){return _em.ccall("csmReviveMocInPlace","number",["number","number"],[memory,mocSize])},_csm.initializeModelInPlace=function(moc,memory,modelSize){return _em.ccall("csmInitializeModelInPlace","number",["number","number","number"],[moc,memory,modelSize])},_csm.hasMocConsistency=function(memory,mocSize){return _em.ccall("csmHasMocConsistency","number",["number","number"],[memory,mocSize])},_csm.getParameterCount=function(model){return _em.ccall("csmGetParameterCount","number",["number"],[model])},_csm.getParameterIds=function(model){return _em.ccall("csmGetParameterIds","number",["number"],[model])},_csm.getParameterMinimumValues=function(model){return _em.ccall("csmGetParameterMinimumValues","number",["number"],[model])},_csm.getParameterTypes=function(model){return _em.ccall("csmGetParameterTypes","number",["number"],[model])},_csm.getParameterMaximumValues=function(model){return _em.ccall("csmGetParameterMaximumValues","number",["number"],[model])},_csm.getParameterDefaultValues=function(model){return _em.ccall("csmGetParameterDefaultValues","number",["number"],[model])},_csm.getParameterValues=function(model){return _em.ccall("csmGetParameterValues","number",["number"],[model])},_csm.getParameterKeyCounts=function(model){return _em.ccall("csmGetParameterKeyCounts","number",["number"],[model])},_csm.getParameterKeyValues=function(model){return _em.ccall("csmGetParameterKeyValues","number",["number"],[model])},_csm.getPartCount=function(model){return _em.ccall("csmGetPartCount","number",["number"],[model])},_csm.getPartIds=function(model){return _em.ccall("csmGetPartIds","number",["number"],[model])},_csm.getPartOpacities=function(model){return _em.ccall("csmGetPartOpacities","number",["number"],[model])},_csm.getPartParentPartIndices=function(model){return _em.ccall("csmGetPartParentPartIndices","number",["number"],[model])},_csm.getDrawableCount=function(model){return _em.ccall("csmGetDrawableCount","number",["number"],[model])},_csm.getDrawableIds=function(model){return _em.ccall("csmGetDrawableIds","number",["number"],[model])},_csm.getDrawableConstantFlags=function(model){return _em.ccall("csmGetDrawableConstantFlags","number",["number"],[model])},_csm.getDrawableDynamicFlags=function(model){return _em.ccall("csmGetDrawableDynamicFlags","number",["number"],[model])},_csm.getDrawableTextureIndices=function(model){return _em.ccall("csmGetDrawableTextureIndices","number",["number"],[model])},_csm.getDrawableDrawOrders=function(model){return _em.ccall("csmGetDrawableDrawOrders","number",["number"],[model])},_csm.getDrawableRenderOrders=function(model){return _em.ccall("csmGetDrawableRenderOrders","number",["number"],[model])},_csm.getDrawableOpacities=function(model){return _em.ccall("csmGetDrawableOpacities","number",["number"],[model])},_csm.getDrawableMaskCounts=function(model){return _em.ccall("csmGetDrawableMaskCounts","number",["number"],[model])},_csm.getDrawableMasks=function(model){return 
_em.ccall("csmGetDrawableMasks","number",["number"],[model])},_csm.getDrawableVertexCounts=function(model){return _em.ccall("csmGetDrawableVertexCounts","number",["number"],[model])},_csm.getDrawableVertexPositions=function(model){return _em.ccall("csmGetDrawableVertexPositions","number",["number"],[model])},_csm.getDrawableVertexUvs=function(model){return _em.ccall("csmGetDrawableVertexUvs","number",["number"],[model])},_csm.getDrawableIndexCounts=function(model){return _em.ccall("csmGetDrawableIndexCounts","number",["number"],[model])},_csm.getDrawableIndices=function(model){return _em.ccall("csmGetDrawableIndices","number",["number"],[model])},_csm.getDrawableMultiplyColors=function(model){return _em.ccall("csmGetDrawableMultiplyColors","number",["number"],[model])},_csm.getDrawableScreenColors=function(model){return _em.ccall("csmGetDrawableScreenColors","number",["number"],[model])},_csm.getDrawableParentPartIndices=function(model){return _em.ccall("csmGetDrawableParentPartIndices","number",["number"],[model])},_csm.mallocMoc=function(mocSize){return _em.ccall("csmMallocMoc","number",["number"],[mocSize])},_csm.mallocModelAndInitialize=function(moc){return _em.ccall("csmMallocModelAndInitialize","number",["number"],[moc])},_csm.malloc=function(size){return _em.ccall("csmMalloc","number",["number"],[size])},_csm.setLogFunction=function(handler){_em.ccall("csmSetLogFunction",null,["number"],[handler])},_csm.updateModel=function(model){_em.ccall("csmUpdateModel",null,["number"],[model])},_csm.readCanvasInfo=function(model,outSizeInPixels,outOriginInPixels,outPixelsPerUnit){_em.ccall("csmReadCanvasInfo",null,["number","number","number","number"],[model,outSizeInPixels,outOriginInPixels,outPixelsPerUnit])},_csm.resetDrawableDynamicFlags=function(model){_em.ccall("csmResetDrawableDynamicFlags",null,["number"],[model])},_csm.free=function(memory){_em.ccall("csmFree",null,["number"],[memory])},_csm.initializeAmountOfMemory=function(size){_em.ccall("csmInitializeAmountOfMemory",null,["number"],[size])},_csm}(),Version=(Live2DCubismCore.AlignofMoc=64,Live2DCubismCore.AlignofModel=16,Live2DCubismCore.MocVersion_Unknown=0,Live2DCubismCore.MocVersion_30=1,Live2DCubismCore.MocVersion_33=2,Live2DCubismCore.MocVersion_40=3,Live2DCubismCore.MocVersion_42=4,Live2DCubismCore.ParameterType_Normal=0,Live2DCubismCore.ParameterType_BlendShape=1,function(){function Version(){}return Version.csmGetVersion=function(){return _csm.getVersion()},Version.csmGetLatestMocVersion=function(){return _csm.getLatestMocVersion()},Version.csmGetMocVersion=function(moc,mocBytes){return _csm.getMocVersion(moc._ptr,mocBytes.byteLength)},Version}()),Version=(Live2DCubismCore.Version=Version,function(){function Logging(){}return Logging.csmSetLogFunction=function(handler){Logging.logFunction=handler;handler=_em.addFunction(Logging.wrapLogFunction,"vi");_csm.setLogFunction(handler)},Logging.csmGetLogFunction=function(){return Logging.logFunction},Logging.wrapLogFunction=function(messagePtr){messagePtr=_em.UTF8ToString(messagePtr);Logging.logFunction(messagePtr)},Logging}()),Version=(Live2DCubismCore.Logging=Version,function(){function Moc(mocBytes){var memory=_csm.mallocMoc(mocBytes.byteLength);memory&&(new Uint8Array(_em.HEAPU8.buffer,memory,mocBytes.byteLength).set(new Uint8Array(mocBytes)),this._ptr=_csm.reviveMocInPlace(memory,mocBytes.byteLength),this._ptr||_csm.free(memory))}return Moc.prototype.hasMocConsistency=function(mocBytes){var memory=_csm.mallocMoc(mocBytes.byteLength);if(memory)return new 
Uint8Array(_em.HEAPU8.buffer,memory,mocBytes.byteLength).set(new Uint8Array(mocBytes)),mocBytes=_csm.hasMocConsistency(memory,mocBytes.byteLength),_csm.free(memory),mocBytes},Moc.fromArrayBuffer=function(buffer){if(!buffer)return null;buffer=new Moc(buffer);return buffer._ptr?buffer:null},Moc.prototype._release=function(){_csm.free(this._ptr),this._ptr=0},Moc}()),Version=(Live2DCubismCore.Moc=Version,function(){function Model(moc){this._ptr=_csm.mallocModelAndInitialize(moc._ptr),this._ptr&&(this.parameters=new Parameters(this._ptr),this.parts=new Parts(this._ptr),this.drawables=new Drawables(this._ptr),this.canvasinfo=new CanvasInfo(this._ptr))}return Model.fromMoc=function(moc){moc=new Model(moc);return moc._ptr?moc:null},Model.prototype.update=function(){_csm.updateModel(this._ptr)},Model.prototype.release=function(){_csm.free(this._ptr),this._ptr=0},Model}()),CanvasInfo=(Live2DCubismCore.Model=Version,function(modelPtr){var _canvasSize_data,_canvasSize_dataPtr,_canvasSize_nDataBytes,_canvasOrigin_dataPtr,_canvasOrigin_nDataBytes,_canvasPPU_nDataBytes,_canvasPPU_dataPtr;modelPtr&&(_canvasSize_nDataBytes=(_canvasSize_data=new Float32Array(2)).length*_canvasSize_data.BYTES_PER_ELEMENT,_canvasSize_dataPtr=_csm.malloc(_canvasSize_nDataBytes),(_canvasSize_dataPtr=new Uint8Array(_em.HEAPU8.buffer,_canvasSize_dataPtr,_canvasSize_nDataBytes)).set(new Uint8Array(_canvasSize_data.buffer)),_canvasOrigin_nDataBytes=(_canvasSize_nDataBytes=new Float32Array(2)).length*_canvasSize_nDataBytes.BYTES_PER_ELEMENT,_canvasOrigin_dataPtr=_csm.malloc(_canvasOrigin_nDataBytes),(_canvasOrigin_dataPtr=new Uint8Array(_em.HEAPU8.buffer,_canvasOrigin_dataPtr,_canvasOrigin_nDataBytes)).set(new Uint8Array(_canvasSize_nDataBytes.buffer)),_canvasPPU_nDataBytes=(_canvasOrigin_nDataBytes=new Float32Array(1)).length*_canvasOrigin_nDataBytes.BYTES_PER_ELEMENT,_canvasPPU_dataPtr=_csm.malloc(_canvasPPU_nDataBytes),(_canvasPPU_dataPtr=new Uint8Array(_em.HEAPU8.buffer,_canvasPPU_dataPtr,_canvasPPU_nDataBytes)).set(new Uint8Array(_canvasOrigin_nDataBytes.buffer)),_csm.readCanvasInfo(modelPtr,_canvasSize_dataPtr.byteOffset,_canvasOrigin_dataPtr.byteOffset,_canvasPPU_dataPtr.byteOffset),_canvasSize_data=new Float32Array(_canvasSize_dataPtr.buffer,_canvasSize_dataPtr.byteOffset,_canvasSize_dataPtr.length),_canvasSize_nDataBytes=new Float32Array(_canvasOrigin_dataPtr.buffer,_canvasOrigin_dataPtr.byteOffset,_canvasOrigin_dataPtr.length),_canvasOrigin_nDataBytes=new Float32Array(_canvasPPU_dataPtr.buffer,_canvasPPU_dataPtr.byteOffset,_canvasPPU_dataPtr.length),this.CanvasWidth=_canvasSize_data[0],this.CanvasHeight=_canvasSize_data[1],this.CanvasOriginX=_canvasSize_nDataBytes[0],this.CanvasOriginY=_canvasSize_nDataBytes[1],this.PixelsPerUnit=_canvasOrigin_nDataBytes[0],_csm.free(_canvasSize_dataPtr.byteOffset),_csm.free(_canvasOrigin_dataPtr.byteOffset),_csm.free(_canvasPPU_dataPtr.byteOffset))}),Parameters=(Live2DCubismCore.CanvasInfo=CanvasInfo,function(modelPtr){this.count=_csm.getParameterCount(modelPtr),length=_csm.getParameterCount(modelPtr),this.ids=new Array(length);for(var length,length2,_ids=new Uint32Array(_em.HEAPU32.buffer,_csm.getParameterIds(modelPtr),length),i=0;i<_ids.length;i++)this.ids[i]=_em.UTF8ToString(_ids[i]);length=_csm.getParameterCount(modelPtr),this.minimumValues=new Float32Array(_em.HEAPF32.buffer,_csm.getParameterMinimumValues(modelPtr),length),length=_csm.getParameterCount(modelPtr),this.types=new 
Int32Array(_em.HEAP32.buffer,_csm.getParameterTypes(modelPtr),length),length=_csm.getParameterCount(modelPtr),this.maximumValues=new Float32Array(_em.HEAPF32.buffer,_csm.getParameterMaximumValues(modelPtr),length),length=_csm.getParameterCount(modelPtr),this.defaultValues=new Float32Array(_em.HEAPF32.buffer,_csm.getParameterDefaultValues(modelPtr),length),length=_csm.getParameterCount(modelPtr),this.values=new Float32Array(_em.HEAPF32.buffer,_csm.getParameterValues(modelPtr),length),length=_csm.getParameterCount(modelPtr),this.keyCounts=new Int32Array(_em.HEAP32.buffer,_csm.getParameterKeyCounts(modelPtr),length),length=_csm.getParameterCount(modelPtr),length2=new Int32Array(_em.HEAP32.buffer,_csm.getParameterKeyCounts(modelPtr),length),this.keyValues=new Array(length);for(var _keyValues=new Uint32Array(_em.HEAPU32.buffer,_csm.getParameterKeyValues(modelPtr),length),i=0;i<_keyValues.length;i++)this.keyValues[i]=new Float32Array(_em.HEAPF32.buffer,_keyValues[i],length2[i])}),Parts=(Live2DCubismCore.Parameters=Parameters,function(modelPtr){this.count=_csm.getPartCount(modelPtr),length=_csm.getPartCount(modelPtr),this.ids=new Array(length);for(var length,_ids=new Uint32Array(_em.HEAPU32.buffer,_csm.getPartIds(modelPtr),length),i=0;i<_ids.length;i++)this.ids[i]=_em.UTF8ToString(_ids[i]);length=_csm.getPartCount(modelPtr),this.opacities=new Float32Array(_em.HEAPF32.buffer,_csm.getPartOpacities(modelPtr),length),length=_csm.getPartCount(modelPtr),this.parentIndices=new Int32Array(_em.HEAP32.buffer,_csm.getPartParentPartIndices(modelPtr),length)}),Drawables=(Live2DCubismCore.Parts=Parts,function(){function Drawables(modelPtr){this._modelPtr=modelPtr;for(var length,length2=null,_ids=(this.count=_csm.getDrawableCount(modelPtr),length=_csm.getDrawableCount(modelPtr),this.ids=new Array(length),new Uint32Array(_em.HEAPU32.buffer,_csm.getDrawableIds(modelPtr),length)),i=0;i<_ids.length;i++)this.ids[i]=_em.UTF8ToString(_ids[i]);length=_csm.getDrawableCount(modelPtr),this.constantFlags=new Uint8Array(_em.HEAPU8.buffer,_csm.getDrawableConstantFlags(modelPtr),length),length=_csm.getDrawableCount(modelPtr),this.dynamicFlags=new Uint8Array(_em.HEAPU8.buffer,_csm.getDrawableDynamicFlags(modelPtr),length),length=_csm.getDrawableCount(modelPtr),this.textureIndices=new Int32Array(_em.HEAP32.buffer,_csm.getDrawableTextureIndices(modelPtr),length),length=_csm.getDrawableCount(modelPtr),this.drawOrders=new Int32Array(_em.HEAP32.buffer,_csm.getDrawableDrawOrders(modelPtr),length),length=_csm.getDrawableCount(modelPtr),this.renderOrders=new Int32Array(_em.HEAP32.buffer,_csm.getDrawableRenderOrders(modelPtr),length),length=_csm.getDrawableCount(modelPtr),this.opacities=new Float32Array(_em.HEAPF32.buffer,_csm.getDrawableOpacities(modelPtr),length),length=_csm.getDrawableCount(modelPtr),this.maskCounts=new Int32Array(_em.HEAP32.buffer,_csm.getDrawableMaskCounts(modelPtr),length),length=_csm.getDrawableCount(modelPtr),this.vertexCounts=new Int32Array(_em.HEAP32.buffer,_csm.getDrawableVertexCounts(modelPtr),length),length=_csm.getDrawableCount(modelPtr),this.indexCounts=new Int32Array(_em.HEAP32.buffer,_csm.getDrawableIndexCounts(modelPtr),length),length=_csm.getDrawableCount(modelPtr),this.multiplyColors=new Float32Array(_em.HEAPF32.buffer,_csm.getDrawableMultiplyColors(modelPtr),4*length),length=_csm.getDrawableCount(modelPtr),this.screenColors=new Float32Array(_em.HEAPF32.buffer,_csm.getDrawableScreenColors(modelPtr),4*length),length=_csm.getDrawableCount(modelPtr),this.parentPartIndices=new 
Int32Array(_em.HEAP32.buffer,_csm.getDrawableParentPartIndices(modelPtr),length),length=_csm.getDrawableCount(modelPtr),length2=new Int32Array(_em.HEAP32.buffer,_csm.getDrawableMaskCounts(modelPtr),length),this.masks=new Array(length);for(var _masks=new Uint32Array(_em.HEAPU32.buffer,_csm.getDrawableMasks(modelPtr),length),i=0;i<_masks.length;i++)this.masks[i]=new Int32Array(_em.HEAP32.buffer,_masks[i],length2[i]);length=_csm.getDrawableCount(modelPtr),length2=new Int32Array(_em.HEAP32.buffer,_csm.getDrawableVertexCounts(modelPtr),length),this.vertexPositions=new Array(length);for(var _vertexPositions=new Uint32Array(_em.HEAPU32.buffer,_csm.getDrawableVertexPositions(modelPtr),length),i=0;i<_vertexPositions.length;i++)this.vertexPositions[i]=new Float32Array(_em.HEAPF32.buffer,_vertexPositions[i],2*length2[i]);length=_csm.getDrawableCount(modelPtr),length2=new Int32Array(_em.HEAP32.buffer,_csm.getDrawableVertexCounts(modelPtr),length),this.vertexUvs=new Array(length);for(var _vertexUvs=new Uint32Array(_em.HEAPU32.buffer,_csm.getDrawableVertexUvs(modelPtr),length),i=0;i<_vertexUvs.length;i++)this.vertexUvs[i]=new Float32Array(_em.HEAPF32.buffer,_vertexUvs[i],2*length2[i]);length=_csm.getDrawableCount(modelPtr),length2=new Int32Array(_em.HEAP32.buffer,_csm.getDrawableIndexCounts(modelPtr),length),this.indices=new Array(length);for(var _indices=new Uint32Array(_em.HEAPU32.buffer,_csm.getDrawableIndices(modelPtr),length),i=0;i<_indices.length;i++)this.indices[i]=new Uint16Array(_em.HEAPU16.buffer,_indices[i],length2[i])}return Drawables.prototype.resetDynamicFlags=function(){_csm.resetDrawableDynamicFlags(this._modelPtr)},Drawables}()),Version=(Live2DCubismCore.Drawables=Drawables,function(){function Utils(){}return Utils.hasBlendAdditiveBit=function(bitfield){return 1==(1&bitfield)},Utils.hasBlendMultiplicativeBit=function(bitfield){return 2==(2&bitfield)},Utils.hasIsDoubleSidedBit=function(bitfield){return 4==(4&bitfield)},Utils.hasIsInvertedMaskBit=function(bitfield){return 8==(8&bitfield)},Utils.hasIsVisibleBit=function(bitfield){return 1==(1&bitfield)},Utils.hasVisibilityDidChangeBit=function(bitfield){return 2==(2&bitfield)},Utils.hasOpacityDidChangeBit=function(bitfield){return 4==(4&bitfield)},Utils.hasDrawOrderDidChangeBit=function(bitfield){return 8==(8&bitfield)},Utils.hasRenderOrderDidChangeBit=function(bitfield){return 16==(16&bitfield)},Utils.hasVertexPositionsDidChangeBit=function(bitfield){return 32==(32&bitfield)},Utils.hasBlendColorDidChangeBit=function(bitfield){return 64==(64&bitfield)},Utils}()),Version=(Live2DCubismCore.Utils=Version,function(){function Memory(){}return Memory.initializeAmountOfMemory=function(size){16777216>10,56320|1023&g)))):f+=String.fromCharCode(g)}return f}function da(a,c){return a?ca(M,a,c):""}function ea(a){return 0>>16)*f+d*(c>>>16)<<16)|0}),Math.clz32||(Math.clz32=function(a){var c=32,d=a>>16;return d&&(c-=16,a=d),(d=a>>8)&&(c-=8,a=d),(d=a>>4)&&(c-=4,a=d),(d=a>>2)&&(c-=2,a=d),a>>1?c-2:c-a}),Math.trunc||(Math.trunc=function(a){return a<0?Math.ceil(a):Math.floor(a)}),0),S=null,T=null;function C(a){throw b.onAbort&&b.onAbort(a),G(a),H(a),K=!0,"abort("+a+"). 
Build with -s ASSERTIONS=1 for more info."}b.preloadedImages={},b.preloadedAudios={};var E=null,U="data:application/octet-stream;base64,";function na(a){return a.replace(/\b__Z[\w\d_]+/g,function(a){return a==a?a:a+" ["+a+"]"})}function oa(){var a=Error();if(!a.stack){try{throw Error(0)}catch(c){a=c}if(!a.stack)return"(no stack trace available)"}return a.stack.toString()}var V=[null,[],[]];function W(a,c){var d=V[a];0===c||10===c?((1===a?G:H)(ca(d,0)),d.length=0):d.push(c)}function pa(a,c,d,f){try{for(var g=0,h=0;h>2],k=O[c+(8*h+4)>>2],y=0;y>2]=g,0}catch(R){return"undefined"!=typeof FS&&R instanceof FS.A||C(R),R.B}}function qa(){return N.length}function ra(a){try{var c=new ArrayBuffer(a);if(c.byteLength==a)return new Int8Array(c).set(N),sa(c),fa(c),1}catch(d){}}var ta=!(E="data:application/octet-stream;base64,AAAAAAAAAAARAAoAERERAAAAAAUAAAAAAAAJAAAAAAsAAAAAAAAAABEADwoREREDCgcAARMJCwsAAAkGCwAACwAGEQAAABEREQAAAAAAAAAAAAAAAAAAAAALAAAAAAAAAAARAAoKERERAAoAAAIACQsAAAAJAAsAAAsAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADAAAAAAAAAAAAAAADAAAAAAMAAAAAAkMAAAAAAAMAAAMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4AAAAAAAAAAAAAAA0AAAAEDQAAAAAJDgAAAAAADgAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAPAAAAAA8AAAAACRAAAAAAABAAABAAABIAAAASEhIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEgAAABISEgAAAAAAAAkAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAsAAAAAAAAAAAAAAAoAAAAACgAAAAAJCwAAAAAACwAACwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMAAAAAAAAAAAAAAAMAAAAAAwAAAAACQwAAAAAAAwAAAwAADAxMjM0NTY3ODlBQkNERUYFAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACAAAAAgAAALgLAAAABAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAK/////wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAD//////wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBbiBlcnJvciBvY2N1cnJlZCBpbiB0aGUgaW50ZXJwb2xhdGlvbiBmb3IgYmxlbmQgc2hhcGVzLiBDb21iaW5hdGlvbkNvdW50IGlzICVkLgBbQ1NNXSBbRV1XYXJwRGVmb3JtZXI6OlRyYW5zZm9ybVRhcmdldCgpIGVycm9yLiBbJWRdIHAwMT0oJS40ZiAsICUuNGYpCgBbQ1NNXSBbRV1Jbml0aWFsaXplRGVmb3JtZXJzKCk6IFVua25vd24gRGVmb3JtZXIgVHlwZS4KAFtDU01dIFtFXWNzbUhhc01vY0NvbnNpc3RlbmN5OiBUaGlzIG1vYzMgc2l6ZSBpcyBpbnZhbGlkLgoATU9DMwBbQ1NNXSBbRV1jc21IYXNNb2NDb25zaXN0ZW5jeTogRmlsZVR5cGUgaXMgaW52YWxpZC4KAFtDU01dIFtFXWNzbUhhc01vY0NvbnNpc3RlbmN5OiBUaGlzIG1vYzMgdmVyIGlzIGludmFsaWQgW3ZlcjolZF0uCgBbQ1NNXSBbRV1jc21IYXNNb2NDb25zaXN0ZW5jeTogVGhlIENvcmUgdW5zdXBwb3J0IGxhdGVyIHRoYW4gbW9jMyB2ZXI6WyVkXS4gVGhpcyBtb2MzIHZlciBpcyBbJWRdLgoAW0NTTV0gW0VdY3NtSGFzTW9jQ29uc2lzdGVuY3k6IEhlYWRlciBzZWN0aW9uIGlzIGludmFsaWQuCgBbQ1NNXSBbRV1jc21IYXNNb2NDb25zaXN0ZW5jeTogRGF0YSBzZWN0aW9uIGlzIGludmFsaWQuCgBMaXZlMkQgQ3ViaXNtIFNESyBDb3JlIFZlcnNpb24gJWQuJWQuJWQAW0NTTV0gW0VdY3NtUmV2aXZlTW9jSW5QbGFjZSBpcyBmYWlsZWQuIENvcnJ1cHRlZCAgbW9jMyBmaWxlLgoAW0NTTV0gW0VdY3NtUmV2aXZlTW9jSW5QbGFjZSBpcyBmYWlsZWQuIFRoZSBDb3JlIHVuc3VwcG9ydCBsYXRlciB0aGFuIG1vYzMgdmVyOlslZF0uIFRoaXMgbW9jMyB2ZXIgaXMgWyVkXS4KAFtDU01dIFtFXWNzbUdldE1vY1ZlcnNpb24gaXMgZmFpbGVkLiBDb3JydXB0ZWQgbW9jMyBmaWxlLgoAW0NTTV0gW0VdJXM6ICVzCgBjc21HZXRNb2NWZXJzaW9uACJhZGRyZXNzIiBpcyBudWxsLgBjc21IYXNNb2NDb25zaXN0ZW5jeQAiYWRkcmVzcyIgYWxpZ25tZW50IGlzIGludmFsaWQuACJzaXplIiBpcyBpbnZhbGlkLgBjc21SZXZpdmVNb2NJblBsYWNlAGNzbVJlYWRDYW52YXNJbmZvACJtb2RlbCIgaXMgaW52YWxpZC4AIm91dFNpemVJblBpeGVscyIgaXMgbnVsbC4AIm91dE9yaWdpbkluUGl4ZWxzIiBpcyBudWxsLgAib3V0UGl4ZWxzUGVyVW5pdCIgaXMgbnVsbC4AY3NtR2V0U2l6ZW9mTW9kZWwAIm1vYyIgaXMgaW52YWxpZC4AY3NtSW5pdGlhbGl6ZU1vZGVsSW5QbGFjZQAic2l6ZSIgaXMgaW52YWxpZABjc21VcGRhdGVNb2RlbABjc21HZXRQYXJhbWV0ZXJDb3VudABjc21HZXRQYXJhbW
V0ZXJJZHMAY3NtR2V0UGFyYW1ldGVyVHlwZXMAY3NtR2V0UGFyYW1ldGVyTWluaW11bVZhbHVlcwBjc21HZXRQYXJhbWV0ZXJNYXhpbXVtVmFsdWVzAGNzbUdldFBhcmFtZXRlckRlZmF1bHRWYWx1ZXMAY3NtR2V0UGFyYW1ldGVyVmFsdWVzAGNzbUdldFBhcnRDb3VudABjc21HZXRQYXJ0SWRzAGNzbUdldFBhcnRPcGFjaXRpZXMAY3NtR2V0UGFydFBhcmVudFBhcnRJbmRpY2VzAGNzbUdldERyYXdhYmxlQ291bnQAY3NtR2V0RHJhd2FibGVJZHMAY3NtR2V0RHJhd2FibGVDb25zdGFudEZsYWdzAGNzbUdldERyYXdhYmxlRHluYW1pY0ZsYWdzAGNzbUdldERyYXdhYmxlVGV4dHVyZUluZGljZXMAY3NtR2V0RHJhd2FibGVEcmF3T3JkZXJzAGNzbUdldERyYXdhYmxlUmVuZGVyT3JkZXJzAGNzbUdldERyYXdhYmxlT3BhY2l0aWVzAGNzbUdldERyYXdhYmxlTWFza0NvdW50cwBjc21HZXREcmF3YWJsZU1hc2tzAGNzbUdldERyYXdhYmxlVmVydGV4Q291bnRzAGNzbUdldERyYXdhYmxlVmVydGV4UG9zaXRpb25zAGNzbUdldERyYXdhYmxlVmVydGV4VXZzAGNzbUdldERyYXdhYmxlSW5kZXhDb3VudHMAY3NtR2V0RHJhd2FibGVJbmRpY2VzAGNzbUdldERyYXdhYmxlTXVsdGlwbHlDb2xvcnMAY3NtR2V0RHJhd2FibGVTY3JlZW5Db2xvcnMAY3NtR2V0RHJhd2FibGVQYXJlbnRQYXJ0SW5kaWNlcwBjc21SZXNldERyYXdhYmxlRHluYW1pY0ZsYWdzAGNzbUdldFBhcmFtZXRlcktleUNvdW50cwBjc21HZXRQYXJhbWV0ZXJLZXlWYWx1ZXMAW0NTTV0gW1ddUm90YXRpb25EZWZvcm1lcjogTm90IGZvdW5kIHRyYW5zZm9ybWVkIERpcmVjdGlvbi4KAFtDU01dIFtFXVVwZGF0ZURlZm9ybWVySGllcmFyY2h5KCk6IFVua25vd24gRGVmb3JtZXIgVHlwZS4KACVzCgAtKyAgIDBYMHgAKG51bGwpAC0wWCswWCAwWC0weCsweCAweABpbmYASU5GAG5hbgBOQU4ALg==");function D(a){for(var c=[],d=0;d>4,g=(15&g)<<4|h>>2,k=(3&h)<<6|p}while(c+=String.fromCharCode(f),64!==h&&(c+=String.fromCharCode(g)),64!==p&&(c+=String.fromCharCode(k)),d>>0<1280)return ia(0,993,f+576|0),S=Na,(Ma=0)|Ma;if(0|yc(b))return ia(0,1057,f+584|0),S=Na,(Ma=0)|Ma;if(g=255&(f=0|a[(C=b+4|0)>>0]),!(f<<24>>24))return c[h>>2]=g,ia(0,1110,h),S=Na,(Ma=0)|Ma;if(4<(255&f))return c[i>>2]=4,c[i+4>>2]=g,ia(0,1177,i),S=Na,(Ma=0)|Ma;(y=0!=(0|a[(x=b+5|0)>>0]))&&(sb(C,1),tb(b+64|0,4,160)),$c(0|Ka,0,576),pa(b,Ka),F=0|a[C>>0],w=b+d|0,f=128+(z=0|c[Ka>>2])|0;a:do{if(z>>>0>>0|w>>>0>>0||f>>>0>>0|w>>>0>>0||(o=(m=0|c[Ka+4>>2])+64|0,m>>>0>>0|w>>>0>>0)||m>>>0>>0|o>>>0>>0|w>>>0>>0||!(-1<(0|($=0|c[z>>2])))||(p=(n=0|c[Ka+8>>2])+(u=$<<2)|0,n>>>0>>0|w>>>0>>0)||n>>>0>>0|p>>>0>>0|w>>>0>>0||(q=(aa=0|c[(ba=Ka+12|0)>>2])+($<<6)|0,aa>>>0>>0|w>>>0>>0)||aa>>>0>>0|q>>>0>>0|w>>>0>>0||(r=(j=0|c[(ua=Ka+16|0)>>2])+u|0,j>>>0>>0|w>>>0>>0)||j>>>0>>0|r>>>0>>0|w>>>0>>0||(s=(k=0|c[(Ca=Ka+20|0)>>2])+u|0,k>>>0>>0|w>>>0>>0)||k>>>0>>0|s>>>0>>0|w>>>0>>0||(t=(l=0|c[(Ea=Ka+24|0)>>2])+u|0,l>>>0>>0|w>>>0>>0))Ma=319;else{if(l>>>0>>0|t>>>0>>0|w>>>0>>0){Ma=319;break}if(g=(f=0|c[(Y=Ka+28|0)>>2])+u|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|g>>>0>>0|w>>>0>>0){Ma=319;break}if(h=(f=0|c[(o=Ka+32|0)>>2])+u|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|h>>>0>>0|w>>>0>>0){Ma=319;break}if(i=(f=0|c[(Z=Ka+36|0)>>2])+u|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|i>>>0>>0|w>>>0>>0){Ma=319;break}if((0|(g=0|c[z+4>>2]))<=-1){Ma=319;break}if(h=(f=0|c[Ka+40>>2])+(d=g<<2)|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|h>>>0>>0|w>>>0>>0){Ma=319;break}if(g=(f=0|c[(t=Ka+44|0)>>2])+(g<<6)|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|g>>>0>>0|w>>>0>>0){Ma=319;break}if(h=(f=0|c[(n=Ka+48|0)>>2])+d|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|h>>>0>>0|w>>>0>>0){Ma=319;break}if(g=(f=0|c[(A=Ka+52|0)>>2])+d|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|g>>>0>>0|w>>>0>>0){Ma=319;break}if(h=(f=0|c[(u=Ka+56|0)>>2])+d|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|h>>>0>>0|w>>>0>>0){Ma=319;break}if(g=(f=0|c[(D=Ka+60|0)>>2])+d|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|g>>>0>>0|w>>>0>>0){Ma=319;break}if(h=(f=0|c[(B=Ka+64|0)>>2])+d|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|h>>>0>>0|w>>>0>>0){Ma=319;break}if(g=(f=0|c[(p=Ka+68|0)>>2])+d|0,f>>>0>
>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|g>>>0>>0|w>>>0>>0){Ma=319;break}if(h=(f=0|c[(E=Ka+72|0)>>2])+d|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|h>>>0>>0|w>>>0>>0){Ma=319;break}if((0|(f=0|c[z+8>>2]))<=-1){Ma=319;break}if(i=(g=0|c[Ka+76>>2])+(m=f<<2)|0,g>>>0>>0|w>>>0>>0){Ma=319;break}if(g>>>0>>0|i>>>0>>0|w>>>0>>0){Ma=319;break}if(g=(f=0|c[(ga=Ka+80|0)>>2])+m|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|g>>>0>>0|w>>>0>>0){Ma=319;break}if(h=(f=0|c[(Ja=Ka+84|0)>>2])+m|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|h>>>0>>0|w>>>0>>0){Ma=319;break}if(g=(f=0|c[(Ga=Ka+92|0)>>2])+m|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|g>>>0>>0|w>>>0>>0){Ma=319;break}if(h=(f=0|c[(sa=Ka+96|0)>>2])+m|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|h>>>0>>0|w>>>0>>0){Ma=319;break}if(i=(f=0|c[(ea=Ka+100|0)>>2])+m|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|i>>>0>>0|w>>>0>>0){Ma=319;break}if((0|(f=0|c[z+12>>2]))<=-1){Ma=319;break}if(h=(g=0|c[Ka+108>>2])+(l=f<<2)|0,g>>>0>>0|w>>>0>>0){Ma=319;break}if(g>>>0>>0|h>>>0>>0|w>>>0>>0){Ma=319;break}if(g=(f=0|c[(ya=Ka+112|0)>>2])+l|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|g>>>0>>0|w>>>0>>0){Ma=319;break}if(h=(f=0|c[(Ia=Ka+116|0)>>2])+l|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|h>>>0>>0|w>>>0>>0){Ma=319;break}if(g=(f=0|c[Ka+124>>2])+l|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|g>>>0>>0|w>>>0>>0){Ma=319;break}if((0|(i=0|c[z+16>>2]))<=-1){Ma=319;break}if(h=(f=0|c[Ka+128>>2])+(k=i<<2)|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|h>>>0>>0|w>>>0>>0){Ma=319;break}if(g=(f=0|c[Ka+132>>2])+k|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|g>>>0>>0|w>>>0>>0){Ma=319;break}if(h=(f=0|c[Ka+136>>2])+k|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|h>>>0>>0|w>>>0>>0){Ma=319;break}if(g=(f=0|c[Ka+140>>2])+k|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|g>>>0>>0|w>>>0>>0){Ma=319;break}if(h=(f=0|c[(ca=Ka+144|0)>>2])+(i<<6)|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|h>>>0>>0|w>>>0>>0){Ma=319;break}if(g=(f=0|c[(fa=Ka+148|0)>>2])+k|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|g>>>0>>0|w>>>0>>0){Ma=319;break}if(h=(f=0|c[(ha=Ka+152|0)>>2])+k|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|h>>>0>>0|w>>>0>>0){Ma=319;break}if(g=(f=0|c[(Ha=Ka+156|0)>>2])+k|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|g>>>0>>0|w>>>0>>0){Ma=319;break}if(h=(f=0|c[(xa=Ka+164|0)>>2])+k|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|h>>>0>>0|w>>>0>>0){Ma=319;break}if(g=(f=0|c[(wa=Ka+168|0)>>2])+k|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|g>>>0>>0|w>>>0>>0){Ma=319;break}if(h=(f=0|c[(Aa=Ka+172|0)>>2])+k|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|h>>>0>>0|w>>>0>>0){Ma=319;break}if(g=(f=0|c[(za=Ka+176|0)>>2])+k|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|g>>>0>>0|w>>>0>>0){Ma=319;break}if(h=(f=0|c[(Ba=Ka+180|0)>>2])+k|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|h>>>0>>0|w>>>0>>0){Ma=319;break}if(g=(f=0|c[Ka+184>>2])+i|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|g>>>0>>0|w>>>0>>0){Ma=319;break}if(h=(f=0|c[(Fa=Ka+188|0)>>2])+k|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|h>>>0>>0|w>>>0>>0){Ma=319;break}if(g=(f=0|c[(la=Ka+192|0)>>2])+k|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|g>>>0>>0|w>>>0>>0){Ma=319;break}if(h=(f=0|c[(qa=Ka+196|0)>>2])+k|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|h>>>0>>0|w>>>0>>0){Ma=319;break}if(g=(f=0|c[(ka=Ka+200|0)>>2])+k|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|g>>>0>>0|w>>>0>>0){Ma=319;break}if(h=(f=0|c[(va=Ka+204|0)>>2])+k|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|h>>>0>>0|w>>>0>>0){Ma=319;break}if(i=(f=0|c[(ta=Ka+208|0)>>2])+k|0,f>>>0>>0|w>>>0>>0){Ma=319;break}
if(f>>>0>>0|i>>>0>>0|w>>>0>>0){Ma=319;break}if((0|(g=0|c[z+20>>2]))<=-1){Ma=319;break}if(h=(f=0|c[Ka+212>>2])+(j=g<<2)|0,f>>>0>>0|w>>>0>>0){Ma=319;break}if(f>>>0>>0|h>>>0